
Bechtel Technology Journal

An Independent Analysis of Current Technology Issues

December 2008


Volume 1, No. 1

Bechtel Global Business Units

Bechtel Systems & Infrastructure, Inc. (BSII)


BSII (US Government Services) engages in a wide range of government and civil infrastructure development, planning, program management, integration, design, procurement, construction, and operations work in defense, demilitarization, energy management, telecommunications, and environmental restoration and remediation.

Civil
Civil is a global leader in developing, managing, and constructing a wide range of infrastructure projects, from airport, rail, and highway systems to regional development programs, and from ports, bridges, and office buildings to theme parks and resorts.

Communications
Communications integrates mobilization speed and a variety of disciplines, experience, and scalable resources to quickly and efficiently deliver end-to-end deployment services for wireless, wireline, and other telecommunications facilities around the world.

Mining & Metals (M&M)


Mining & Metals excels at completing logistically challenging projects, often in remote areas, involving ferrous, nonferrous, precious, and light metals, as well as industrial minerals, on time and within budget.

Oil, Gas & Chemicals (OG&C)


Oil, Gas & Chemicals has the experience with a broad range of technologies and optimized plant designs that sets Bechtel apart as a worldwide leader in designing and constructing oil, gas, petrochemical, LNG, pipeline, and industrial facilities.

Power
Power is helping the world to meet, in ways no other company can match, an ever-greater energy demand by providing services for existing plants, by designing and constructing fossil- and nuclear-fueled electric generation facilities incorporating the latest technologies, and by taking the initiative in implementing new and emerging energy technologies.

TECHNOLOGY PAPERS

Bechtel Technology Journal


December 2008

Volume 1, No. 1

Contents

Foreword
Editorial

BECHTEL SYSTEMS & INFRASTRUCTURE, INC. (BSII)

Seismic Soil Pressure for Building Walls – An Updated Approach
Farhang Ostadan, PhD

Integrated Seismic Analysis and Design of Shear Wall Structures
Thomas D. Kohli; Orhan Gürbüz, PhD; and Farhang Ostadan, PhD

Structural Innovation at the Hanford Waste Treatment Plant
John Power, Mark Braccia, and Farhang Ostadan, PhD

CIVIL

Systems Engineering – The Reliable Method of Rail System Delivery
Samuel Daw

Simulation-Aided Airport Terminal Design
Michel A. Thomet, PhD, and Farzam Mostoufi

Safe Passage of Extreme Floods – A Hydrologic Perspective
Samuel L. Hui; André Lejeune, PhD, Université de Liège, Belgium; and Vefa Yucel, National Security Technologies, LLC

COMMUNICATIONS

FMC: Fixed-Mobile Convergence
Jake MacLeod and S. Rasoul Safavian, PhD

The Use of Broadband Wireless on Large Industrial Project Sites
Nathan Youell

Desktop Virtualization and Thin Client Options
Brian Coombe

MINING & METALS (M&M)

Computational Fluid Dynamics Modeling of the Fjarðaál Smelter Potroom Ventilation
Jon Berkoe, Philip Diwakar, Lucy Martin, Bob Baxter, and C. Mark Read; Patrick Grover, Alcoa; and Don Ziegler, PhD, Alcoa Primary Metals

Long-Distance Transport of Bauxite Slurry by Pipeline
Terry Cunningham

OIL, GAS & CHEMICALS (OG&C)

World's First Application of Aeroderivative Gas Turbine Drivers for the ConocoPhillips Optimized Cascade LNG Process
Cyrus B. Meher-Homji, Tim Hattenbach, and Dave Messersmith; Hans P. Weyermann, Karl Masani, and Satish Gandhi, PhD, ConocoPhillips Company

Innovation, Safety, and Risk Mitigation via Simulation Technologies
Ramachandra Tekumalla and Jaleel Valappil, PhD

Optimum Design of Turbo-Expander Ethane Recovery Process
Wei Yan, PhD; Lily Bai, PhD; Jame Yao, PhD; Roger Chen, PhD; and Doug Elliot, PhD, IPSI LLC; and Stanley Huang, PhD, Chevron Energy Technology Company

POWER

Controlling Chemistry During Startup and Commissioning of Once-Through Supercritical Boilers
Kathi Kirschenheiter, Michael Chuk, Colleen Layman, and Kumar Sinha

CO2 Capture and Sequestration Options – Impact on Turbomachinery Design
Justin Zachary, PhD, and Sara Titus

Recent Industry and Regulatory Developments in Seismic Design of New Nuclear Power Plants
Sanj Malushte, PhD; Orhan Gürbüz, PhD; Joe Litehiser, PhD; and Farhang Ostadan, PhD
The BTJ is also available on the Web at www.bechtel.com/. (Enter BTJ in the search field to find the link.)

© 2008 Bechtel Corporation. All rights reserved.


Bechtel Corporation welcomes inquiries concerning the BTJ. For further information or for permission to reproduce any paper included in this publication in whole or in part, please email us at btj_edit@bechtel.com. Although reasonable efforts have been made to check the papers included in the BTJ, this publication should not be interpreted as a representation or warranty by Bechtel Corporation of the accuracy of the information contained in any paper, and readers should not rely on any paper for any particular application of any technology without professional consultation as to the circumstances of that application. Similarly, the authors and Bechtel Corporation disclaim any intent to endorse or disparage any particular vendors of any technology.


Bechtel Technology Journal


Volume 1, Number 1
ADVISORY BOARD
Thomas Patterson, Principal Vice President and Corporate Manager of Engineering
Ram Narula, Chief Technology Officer, Bechtel Power; Chair, Bechtel Fellows
Jake MacLeod, Principal Vice President, Bechtel Corporation; Chief Technology Officer, Bechtel Communications; Bechtel Fellow
S. Rasoul Safavian, PhD, Vice President, Technology and Network Planning, Bechtel Communications

EDITORIAL BOARD
S. Rasoul Safavian, PhD, Editor-in-Chief
Farhang Ostadan, PhD, BSII GBU Editor
Siv Bhamra, PhD, Civil GBU Editor
S. Rasoul Safavian, PhD, Communications GBU Editor
Bill Imrie, M&M GBU Editor
Cyrus B. Meher-Homji, OG&C GBU Editor
Sanj Malushte, PhD, Power GBU Editor

EDITORIAL TEAM
Pam Grimes, Production Coordinator
Barbara Oldroyd, Coordinating Technical Editor
Richard Peters, Senior Technical Editor
Teresa Baines, Senior Technical Editor
Technical Editors: Peggy Dufour, Brenda Thompson, Drake Ogilvie, Zelda Laskowsky, Jan Davis, Ruthanne Evans, Brenda Goldstein, JoAnn Ugolini, Ann Miller

GRAPHICS/DESIGN TEAM
Graphic Design: Keith Schools, Andy Johnson, John Cangemi, David Williams, Joe Kelly, Allison Levenson, Matt Twain, Matthew Long, Michael Wescott, Janelle Cataldo, Luke Williams, Kim Catterton

TRADEMARK ACKNOWLEDGMENTS
All brand, product, service, and feature names and trademarks mentioned in this Bechtel Technology Journal are the property of their respective owners. Specifically: 3GPP is a trademark of the European Telecommunications Standards Institute (ETSI) in France and other jurisdictions. Alcatel-Lucent is a trademark of Alcatel-Lucent, Alcatel, and Lucent Technologies. Alvarion is a registered trademark of Alvarion Ltd. AMD is a trademark of Advanced Micro Devices, Inc. ANSYS and FLUENT are registered trademarks of ANSYS, Inc., or its subsidiaries in the United States or other countries. [ICEM CFD is a trademark used by ANSYS, Inc. under license.] Aspen HYSYS is a registered trademark of Aspen Technology, Inc. cdma2000 is a registered trademark and certification mark of the Telecommunications Industry Association (TIA-USA). Cisco Systems and Aironet are registered trademarks of Cisco Systems, Inc., and/or its affiliates in the United States and certain other countries. ConocoPhillips Optimized Cascade is a registered trademark of ConocoPhillips. Econamine FG is a service mark of Fluor Corporation. Enhanced NGL Recovery Process is a service mark of IPSI LLC (Delaware Corporation). Glenium is a registered trademark of Construction Research & Technology GMBH. Huawei is a registered trademark of Huawei Corporation or its subsidiaries in the People's Republic of China and other countries (regions). IBM is a registered trademark of International Business Machines Corporation in the United States. Intel is a registered trademark of Intel Corporation in the US and other countries. Java is a trademark of Sun Microsystems, Inc., in the United States and other countries. Microsoft and Excel are registered trademarks of the Microsoft group of companies. Motorola is registered in the U.S. Patent and Trademark Office by Motorola, Inc. Nortel is a trademark of Nortel Networks. Rectisol is a registered trademark of Linde AG. Selexol is a trademark owned by UOP LLC, a Honeywell Company. SmartPlant is a registered trademark of Intergraph Corporation. Tecore Networks is a registered trademark with the U.S. Patent and Trademark Office. UNIX is a registered trademark of The Open Group. Wi-Fi is a registered trademark and certification mark of the Wi-Fi Alliance. WiMAX and WiMAX Forum are trademarks of the WiMAX Forum; WiMAX is also the Forum's certification mark.


Foreword
It is our great pleasure to present you with this inaugural issue of the Bechtel Technology Journal (BTJ). The BTJ's compilation of technical papers addresses current issues and technology advancements from each of Bechtel's six Global Business Units (GBUs): Bechtel Systems & Infrastructure, Inc.; Civil; Communications; Mining & Metals; Oil, Gas & Chemicals; and Power.

The objective of the BTJ is to provide our customers and colleagues a fresh look at technical and operational advances that are of prime interest to the various industries we serve. This publication is a logical extension of our attempts, over the years, to look over the horizon to identify relevant issues that pose unique challenges and require new and innovative technical solutions. Given the complexity of the arenas in which we do business, it is not difficult to identify numerous issues; the real challenge is to prioritize them relative to their impact on performance and return on investment. Therefore, we challenge our technical experts to collaborate with their counterparts around the world to define the critical issues insofar as possible. We then task them to address these issues from a technical perspective. Judging by the response to their past papers and presentations, we have experienced a high degree of success in many areas.

The papers in the BTJ are grouped by business unit, although many apply to more than one. The primary authors are from their indicated Bechtel business unit or an affiliate; some co-authors are from our customer organizations. We want to take this opportunity to thank the authors and co-authors for helping make the BTJ a reality, and we certainly hope that you find their contributions of value. If you have an idea for a future paper, please feel free to contact me at rnarula@bechtel.com. Your suggestions are always welcome.

Sincerely,

Ram Narula
Chief Technology Officer, Bechtel Power
Chair, Bechtel Fellows


Editorial
I am happy to bring to you our very first issue of the Bechtel Technology Journal (BTJ). Some of you may already be familiar with the Bechtel Communications Technical Journal (BCTJ), which has been produced since 2002. The BCTJ focuses on our Communications business and related areas, whereas the BTJ has been developed with a much broader focus, encompassing all six of Bechtel's Global Business Units (GBUs): Bechtel Systems & Infrastructure, Inc.; Civil; Communications; Mining & Metals; Oil, Gas & Chemicals; and Power.

Following in the footsteps of the BCTJ, we have modeled the BTJ to address leading technical, operational, business, and regulatory issues relevant to the different Bechtel GBUs. The papers selected address issues of current and future interest to their corresponding industries in general and our valued customers in particular. New technologies and trends are highlighted in papers such as "Structural Innovation at the Hanford Waste Treatment Plant" by Power et al. New regulatory concerns are discussed in other papers, such as "Recent Industry and Regulatory Developments in Seismic Design of New Nuclear Power Plants" by Malushte et al.

I hope you find this first edition of the BTJ informative and useful. As always, I look forward to your comments and contributions. I would also like to take this opportunity to wish everyone a very happy, prosperous, and safe new year!

Happy Reading!

Dr. S. Rasoul Safavian
Vice President, Technology and Network Planning, Bechtel Communications
Editor-in-Chief


BSII
Technology Papers

Seismic Soil Pressure for Building Walls – An Updated Approach


Farhang Ostadan, PhD


Integrated Seismic Analysis and Design of Shear Wall Structures


Thomas D. Kohli; Orhan Gürbüz, PhD; Farhang Ostadan, PhD


Structural Innovation at the Hanford Waste Treatment Plant


John Power; Mark Braccia; Farhang Ostadan, PhD

BSII Waste Treatment Plant


Bechtel is providing engineering, procurement, and construction of this first-of-a-kind nuclear and hazardous chemical processing facility for the US Department of Energy in Richland, Washington.

SEISMIC SOIL PRESSURE FOR BUILDING WALLS – AN UPDATED APPROACH


Originally Issued: August 2005; Updated: December 2008

Abstract: The Mononobe-Okabe (M-O) method of predicting dynamic earth pressure, developed in the 1920s in Japan, continues to be widely used despite many criticisms and its limitations. The method was developed for gravity walls retaining cohesionless backfill materials. In design applications, however, the M-O method and its derivatives are commonly used for below-ground building walls. In this regard, the M-O method is one of the most abused methods in geotechnical practice. In recognition of the M-O method's limitations, a simplified method was recently developed to predict lateral seismic soil pressure for building walls. The method is focused on the building walls rather than retaining walls and specifically considers the dynamic soil properties and frequency content of the design motion in its formulation.

Keywords: Mononobe-Okabe, SASSI2000, seismic, soil pressure, SSI
INTRODUCTION

The Mononobe-Okabe (M-O) method of predicting dynamic earth pressure was developed in the 1920s. [1, 2] Since then, a great deal of research has been performed to evaluate its adequacy and develop improvements. This research includes the work by Seed and Whitman [3], Whitman et al. [4, 5, 6], Richards and Elms [7], and Matsuzawa et al. [8]. A good summary of the various methods and their application is reported in [9].

Most developments cited above are based on the original M-O method. The M-O method is, strictly speaking, applicable to soil retaining walls, which, upon experiencing seismic loading, undergo relatively large movement to initiate the sliding wedge behind the wall and to relieve the pressure to its active state. Unfortunately, the method has been and continues to be used extensively for embedded walls of buildings as well. Recent field observations and experimental data, along with enhancements in analytical techniques, have shown that hardly any of the assumptions used in the development of the M-O method are applicable to building walls. The data and the subsequent detailed analysis have clearly shown that the seismic soil pressure is a result of the interaction between the soil and the building during the seismic excitation and as such is a function of all parameters that affect soil-structure interaction (SSI) responses. Some of the more recent observations and experimental data, including an expanded discussion of the method presented herein, are reported in [10].

The major developments that consider the soil-wall interaction under dynamic loading are those by Wood [11] and Veletsos et al. [12, 13]. The solution by Wood, commonly used for critical facilities [14], is, in fact, based on static 1 g loading of the soil-wall system and does not include the wave propagation and amplification of motion. The recent solution by Veletsos et al. is a much more rigorous solution and considers the effects of wave propagation in the soil mass. The solution, however, is complex and lacks simple computational steps for design application. The effect of soil nonlinearity and incorporation of harmonic solution for application to transient design motion require additional development with an associated computer program not currently available. At this time, while elaborate finite element techniques are available to obtain the soil pressure for design, no simple method has been proposed for quick prediction of the maximum soil pressure, thus hindering the designer's ability to use an appropriate method in practice.

To remedy this problem, the current research was conducted to develop a simple method that incorporates the main parameters affecting the seismic soil pressure for buildings. This paper presents the development of the simplified method and a brief summary of its extensive verification. Its application for a typical wall is demonstrated by a set of simple numerical steps. The results are compared with the commonly used methods such as the M-O method and the solution by Wood. The proposed method has been adopted and recommended by the National Earthquake Hazard Reduction Program (NEHRP). [15]

Farhang Ostadan, PhD (fostadan@bechtel.com)


ABBREVIATIONS, ACRONYMS, AND TERMS

ATC – Applied Technology Council
M-O – Mononobe-Okabe
NEHRP – National Earthquake Hazard Reduction Program
NRC – Nuclear Regulatory Commission
SASSI – System for Analysis of Soil-Structure Interaction
SDOF – single degree of freedom
SSI – soil-structure interaction
TF – transfer function

Significance of Seismic Soil Pressure in Design: Recent Observations

Seed and Whitman [3] summarized damage to wall structures during earthquakes. Damage to retaining walls with saturated backfills is typically more dramatic and is frequently reported in the literature. However, reports of damage to walls above the water table are not uncommon. A number of soil-retaining structures were damaged in the San Fernando earthquake of 1971. Wood [11] reports that the walls of a large reinforced concrete underground reservoir at the Balboa Water Treatment Plant failed as a result of increased soil pressure during the earthquake. The walls were approximately 20 ft high and were restrained by top and bottom slabs. Damage has been reported for a number of underground reinforced concrete box-type flood control channels. Richards and Elms [7] report damage to bridge abutments after the 1968 earthquake in Inangahua, New Zealand. Out of the 39 bridges inspected, 24 showed measurable movements and 15 suffered abutment damage. In the Madang earthquake of 1970 in New Guinea, the damage patterns were similar. Of the 29 bridges repaired, some experienced abutment lateral movements of as much as 20 in. Reports on failed or damaged bridge abutments indicate mainly settlement of the backfill and pounding of the bridge superstructure against the abutment in longitudinal and transverse directions. Nazarian and Hadjian [16] also summarized damage to soil-retaining structures during past earthquakes. Damage to bridges has also been reported from various earthquakes, including 1960 Chile, 1964 Alaska, 1964 Niigata, 1971 San Fernando, and 1974 Lima. Most of the reported damage can be attributed to the increased lateral pressure during earthquakes.

Numerous reports are also available from recent earthquakes that report damage to the embedded walls of buildings. However, it is not possible to quantify the contribution of seismic soil pressure to the damage because the embedded walls often carry the inertia load of the superstructure, which is combined with seismic soil pressure load contributing to the damage. On the other hand, simple structures, such as underground box-type structures, retaining walls, and bridge abutments, have suffered damage due to the increased soil pressure. All of these reports and others not mentioned highlight the significance of using appropriate seismic soil pressure in design.

In recent years, the understanding of the attributes of seismic soil pressure has improved significantly. This is mainly due to extensive field and laboratory experiments and data collected from instrumented structures, as well as to the improvement in computational methods in handling the SSI problems. Recent experiments and analyses of the recorded response of structures and seismic soil pressure have been reported in numerous publications. [17–24] These observations confirm that seismic soil pressure is caused by the interaction of the soil and structure and is influenced by the dynamic soil properties, the structural properties, and the characteristics of the seismic motion. The new insight prompted the US Nuclear Regulatory Commission (NRC) to reject the M-O and M-O-based methods for application to critical structures.

SIMPLIFIED METHODOLOGY

The simplified methodology presented in this paper focuses on the building walls rather than soil-retaining walls and specifically considers the following factors:

• Deformation of the walls is limited due to the presence of the floor diaphragms and the internal cross walls, and the walls are considered rigid and non-yielding.
• The effect of wave propagation in the soil mass and interaction of the soil and wall are considered.
• The frequency content of the design motion is fully considered. Use of a single parameter, such as the peak ground acceleration, as a measure of design motion may misrepresent the energy content of the motion at frequencies important for soil pressure.
• Applicable dynamic soil properties, in terms of soil shear wave velocity and damping, are included in the analysis.
• The method is flexible to allow for consideration of soil nonlinear effect where soil nonlinearity is expected to be significant.

It is recognized that the seismic soil pressure is affected not only by the kinematic interaction of the foundation, but also by the inertia effect of the building. The mass properties of buildings vary significantly from one to another. The proposed solution is limited to prediction of seismic soil pressure as affected by the kinematic interaction effects of the building, consistent with the inherent assumption used in the current methods. Experience from numerous rigorous SSI analyses of buildings confirms that using the proposed solution can adequately predict the amplitude of the seismic soil pressure for many buildings even when the inertia effect is included. Some local variation of soil pressure may be present, depending on the layout of the interconnecting slabs and the interior cross walls to the exterior walls, and the relative stiffness of the walls and the soil.

To investigate the characteristics of the lateral seismic soil pressure, a series of seismic soil-structure interaction analyses was performed using the Computer Program SASSI2000. [25]

A typical SASSI model of a building basement is shown in Figure 1. The embedment depth is designated by H, and the soil layer is identified by shear wave velocity Vs, Poisson's ratio, total mass density, and soil material damping. The basemat is resting on rock or a firm soil layer. A column of soil elements next to the wall is explicitly modeled to retrieve the pressure responses from the solution. The infinite extent of the soil layers on the sides of the building, as well as the half-space condition below the building, are internally modeled in SASSI in computing the impedance functions for the structural nodes in contact with soil. The assumption of a firm soil or a rock layer under the basemat eliminates the rocking motion of the foundation. For deep soil sites, and depending on the aspect ratio of the foundation, the rocking motion can influence the magnitude and distribution of soil pressure. Due to space limitation, the extension of the method for deep soil sites is not presented in this paper. A detailed discussion is reported in [10].

For the SASSI analysis, the acceleration time history of the input motion was specified at the top of the rock layer corresponding to the basemat elevation in the free-field. To characterize the dynamic behavior of the soil pressure, the most commonly used wave field, consisting of vertically propagating shear waves, was specified as input motion. The frequency characteristics of the pressure response were examined using harmonic shear waves for a wide range of frequencies.


Figure 1. Typical SASSI Model of the Foundation (infinite lateral soil on the sides; rigid boundary at the base)


For each harmonic wave, the amplitude of the normal soil pressure acting on the building wall at several locations along the wall was monitored. To evaluate the frequency contents of the pressure response, the pressure transfer function (TF) amplitude was obtained. This consists of the ratio of the amplitude of the seismic soil pressure to the amplitude of the input motion (1 g harmonic acceleration in the free-field) for each harmonic frequency. The analyses were performed for a building with embedment of 15.2 m (50 ft) and soil shear wave velocities of 152, 305, 457, and 610 m/sec (500, 1,000, 1,500, and 2,000 ft/sec), all with the Poisson's ratio of 1/3. The material damping in the soil was specified to be 5 percent. The transfer function results for a soil element near the top of the wall are shown in Figure 2. As shown in this figure, the amplification of the pressure amplitude takes place at distinct frequencies. These frequencies increase as the soil shear wave velocity increases. To evaluate the frequency characteristics of each transfer function, the frequency axis was also normalized using the soil column frequency f, which was obtained from the following relationship:

f = Vs / (4H)     (1)

In the above equation, Vs is the soil shear wave velocity and H is the embedment depth of the building. The amplitude of soil pressure at low frequency was used to normalize the amplitude of the pressure transfer functions for all frequencies. The normalized transfer functions are shown in Figure 3. As can be seen, the amplification of the pressure and its frequency characteristics are about the same for the range of the shear wave velocities considered. In all cases, the maximum amplification takes place at the frequency corresponding to the soil column natural frequency. The same dynamic behavior was also observed for all soil elements along the height of the walls.

Examining the dynamic characteristics of the normalized pressure amplitudes (such as those shown in Figure 3), it is readily evident that such characteristics are those of a single-degree-of-freedom (SDOF) system. Each response begins with a normalized value of one, increases to a peak value at a distinct frequency, and subsequently reduces to a small value at high frequency. Dynamic behavior of an SDOF system is completely defined by the mass, stiffness, and associated damping constant. It is generally recognized that response of an SDOF system is controlled by stiffness at low frequency, by damping at resonant frequency, and by inertia at high frequencies.
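As a quick illustration of how these transfer functions are normalized, the short routine below sketches the procedure in Python. It is illustrative only and not part of the original study; the function and argument names are assumed, and it presumes the transfer function has already been computed, with its first entry at the lowest analysis frequency.

```python
import numpy as np

def normalize_pressure_tf(freq_hz, tf_amplitude, vs, height):
    """Normalize a soil-pressure transfer function.

    freq_hz      -- analysis frequencies (Hz), lowest frequency first
    tf_amplitude -- pressure TF amplitude at each frequency (pressure per 1 g input)
    vs           -- soil shear wave velocity (m/s)
    height       -- embedment depth H of the wall (m)
    """
    freq_hz = np.asarray(freq_hz, dtype=float)
    tf_amplitude = np.asarray(tf_amplitude, dtype=float)
    f_soil = vs / (4.0 * height)                # soil column frequency, Equation 1
    norm_freq = freq_hz / f_soil                # frequency axis normalized by f
    norm_amp = tf_amplitude / tf_amplitude[0]   # scaled by the low-frequency amplitude
    return norm_freq, norm_amp

# For Vs = 457 m/s and H = 15.2 m, f_soil is about 7.5 Hz, so the peak of the
# normalized curve is expected near a normalized frequency of 1.0.
```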

Figure 2. Typical Transfer Functions for Soil Pressure Amplitude (amplitude of pressure transfer function versus frequency, Hz, for Vs = 152, 305, 457, and 610 m/sec)


Figure 3. Normalized Transfer Functions (normalized amplitude versus normalized frequency for Vs = 152, 305, 457, and 610 m/sec)

Following the analogy for an SDOF system and to characterize the stiffness component, the pressure amplitudes at low frequencies for all soil elements next to the wall were obtained. The pressure amplitudes at low frequency are almost identical for the wide range of the soil shear wave velocity profiles considered, due to the long wave length of the scattered waves at such low frequencies. The shape of the normalized pressure was used as a basis for determining seismic soil pressure along the height of the building wall. A similar series of parametric studies was also performed by specifying the input motion at the ground surface level [10]. The results of these studies also showed that the seismic soil pressure, in normalized form, could be represented by an SDOF system. For all cases considered, the low frequency pressure profiles depict the same distribution of the pressure along the height of the wall. This observation is consistent with the results of the analytical model developed by Veletsos et al. [12, 13] Since the SSI analyses were performed for the Poisson's ratio of 1/3, the pressure distribution was adjusted for the soil's Poisson's ratio using the factor recommended by Veletsos et al. The factor is defined by:

ψν = 2 / √[(1 − ν)(2 − ν)]     (2)

where ν is the Poisson's ratio of the soil.

For the Poisson's ratio of 1/3, ψν is 1.897. Use of ψν in the formulation allows correction of the soil pressure amplitude for various Poisson's ratios. The adjusted soil pressure profile is compared with the normalized solution by Wood and the M-O method in Figure 4. In the proposed method, the maximum soil pressure is at the top of the wall. This is due to amplification of the motion in the soil, with the highest amplification at the ground surface level. This effect was not considered in the Wood solution.

Figure 4. Comparison of Normalized Pressure Profiles (normalized pressure versus normalized wall height Y/H for the M-O method, the Wood solution with ν = 1/3, and the proposed method)

Figure 5. Motions Used in the Study (5 percent damped spectral acceleration, g, versus frequency, Hz, for the EUS local, EUS distant, ATC S1, WUS, RG 1.60, and Loma Prieta motions)

Using the adjusted pressure distribution, a polynomial relationship was developed to fit the normalized pressure curve. The relationship in terms of normalized height, y = Y/H (Y is measured from the bottom of the wall and varies from 0 to H), is as follows:

p(y) = −0.0015 + 5.05y − 15.84y² + 28.25y³ − 24.59y⁴ + 8.14y⁵     (3)

The area under the curve can be obtained by integrating the pressure distribution over the height of the wall. The total area is 0.744 H for a wall with a height of H. Having obtained the normalized shape of the pressure distribution, the amplitudes of the seismic pressure can also be obtained from the concept of an SDOF. The response of an SDOF system subjected to earthquake loading is readily obtained from the acceleration response spectrum of the input motion at the damping value and the frequency corresponding to the SDOF. The total load is subsequently obtained from the product of the total mass times the acceleration spectral value at the respective frequency of the system.

To investigate the effective damping associated with the seismic soil pressure amplification and the total mass associated with the SDOF system, the system in Figure 1 with wall height of 15 m (50 ft) and soil shear wave velocity of 457 m/sec (1,500 ft/sec) was subjected to six different input motions in successive analyses. The motions were specified at the ground surface level in the free-field. The acceleration response spectra of the input motions at 5 percent damping are shown in Figure 5. The motions are typical design motions used for analyses of critical structures. From the set of six motions shown in Figure 5, two motions labeled EUS local and distant are the design motions for a site in the Eastern United States with locations close and far away from a major fault. The Applied Technology Council (ATC) S1 motion is the ATC recommended motion for S1 soil conditions. The WUS motion is the design motion for a site close to a major fault in the Western United States. The RG 1.60 motion is the standard site-independent motion used for nuclear plant structures. Finally, the Loma Prieta motion is the recorded motion from the Loma Prieta earthquake. All motions are scaled to 0.30 g and limited to a frequency cut-off of 20 Hz for use in the analysis. This cut-off frequency reduces the peak ground acceleration of the EUS local motion to less than 0.30 g due to the high frequency content of this motion.

From the SASSI analysis results, the maximum seismic soil pressure from each element along the wall height was obtained for each of the input motions.

The amplitude of the pressure changes from one motion to the other, with larger values associated with use of the RG 1.60 motion. Using the computed pressure profiles, the lateral force acting on the wall for each input motion was computed. The lateral force represents the total inertia force of an SDOF for which the system frequency is known. The system frequency for the case under consideration is the soil column frequency, which is 7.5 Hz based on Equation 1. The total force divided by the spectral acceleration of the system at 7.5 Hz at the appropriate damping ratio amounts to the mass of the SDOF. To identify the applicable damping ratio, the acceleration response spectrum of the free-field response motions at a depth of 15 m (50 ft) was computed for a wide range of damping ratios. Knowing the total force of the SDOF, the frequency of the system, and the input motion to the SDOF system, the relationship in the form proposed by Veletsos et al. [12] was used to compute the total mass and the damping of the SDOF system. For the total mass, the relationship is

m = 0.50 ψν ρ H²     (4)

where ρ is the mass density of the soil (total weight density divided by the acceleration of gravity), H is the height of the wall, and ψν is the factor to account for the Poisson's ratio as defined in Equation 2. In the analytical model developed by Veletsos et al., a constant coefficient of 0.543 was used in the formulation of the total mass.

Study of the soil pressure transfer functions and the free-field response motions at a depth of 15 m (50 ft) showed that spectral values at the soil column frequency at 30 percent damping have the best correlation with the forces computed directly from the SSI analysis. The high value of 30 percent damping is due to the radiation damping associated with soil-wall interaction. However, the spectral values of the motions at the depth corresponding to the base of the wall in the free-field are insensitive to the spectral damping ratios at the soil column frequency due to the dip in the response motion that appears in the acceleration response spectra at the soil column frequency (soil column missing frequency). The various motions, however, have significantly different spectral values at the soil column frequency, depending on the energy content of the design motion at the soil column frequency. This observation leads to the conclusion that while the frequency of the input motion, particularly at the soil column frequency, is an important component for the magnitude of the seismic soil pressure, the spectral damping ratio selected is a much less sensitive parameter. In practice, it is often warranted to consider the variation of soil properties, typically using the best estimate and the lower and upper bound range of soil velocity profile. This, in effect, shifts the soil column frequency to a wider range.

Computational Steps

To predict the lateral seismic soil pressure for below-ground building walls resting on a firm foundation and assuming rigid walls (no significant deformation), the following steps should be taken:

1. Perform free-field soil column analysis and obtain the response motion at the depth corresponding to the base of the wall in the free-field. The response motion in terms of acceleration response spectrum at 30 percent damping should be obtained. The free-field soil column analysis may be performed using the Computer Program SHAKE [26] with input motion specified either at the ground surface or at the depth of the foundation basemat. The choice for location of control motion is an important decision that needs to be made consistent with the development of the design motion. The location of input motion may significantly affect the dynamic response of the building and the seismic soil pressure amplitudes.

2. Use Equations 4 and 2 to compute the total mass for a representative SDOF system using the Poisson's ratio and mass density of the soil.

3. Obtain the lateral seismic force from the product of the total mass obtained in Step 2 and the acceleration spectral value of the free-field response at the soil column frequency obtained at the depth of the bottom of the wall (Step 1).

4. Obtain the maximum lateral seismic soil pressure at the ground surface level by dividing the lateral force obtained in Step 3 by the area under the normalized seismic soil pressure, 0.744 H.

5. Obtain the pressure profile by multiplying the peak pressure from Step 4 by the pressure distribution relationship shown in Equation 3.

One of the attractive aspects of the simplified method is its ability to consider soil nonlinear effects. Soil nonlinearity is commonly considered by use of the equivalent linear method and the strain-dependent soil properties. Depending on the intensity of the design motion and the soil properties, the effect of soil nonlinearity can be important in changing the soil column frequency and, therefore, the amplitude of the spectral response at the soil column frequency.
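For readers who want to automate the hand calculation, the following sketch strings the five steps together. It is a minimal illustration, not software from the paper; the function name and inputs are assumed, and the 30-percent-damped spectral acceleration at the soil column frequency must first be obtained from a free-field soil column analysis (for example, with SHAKE), per Step 1.

```python
import numpy as np

def seismic_soil_pressure_profile(height, rho, poisson, sa_g, n_points=11):
    """Simplified seismic soil pressure following the five steps above.

    height  -- embedment depth H of the wall (m)
    rho     -- total mass density of the soil (kg/m^3)
    poisson -- Poisson's ratio of the soil
    sa_g    -- 30%-damped spectral acceleration (in g) of the free-field motion at
               the base of the wall, read at the soil column frequency (Steps 1 and 3)
    Returns the normalized heights y = Y/H and the corresponding pressure profile in Pa.
    """
    g = 9.81
    psi = 2.0 / np.sqrt((1.0 - poisson) * (2.0 - poisson))   # Equation 2 (1.897 for nu = 1/3)
    m = 0.50 * psi * rho * height**2                          # Equation 4, Step 2 (mass per unit wall length)
    force = m * sa_g * g                                      # Step 3: lateral force per unit wall length
    p_max = force / (0.744 * height)                          # Step 4: peak pressure at the top of the wall
    y = np.linspace(0.0, 1.0, n_points)                       # Y measured up from the base of the wall
    shape = (-0.0015 + 5.05 * y - 15.84 * y**2 + 28.25 * y**3
             - 24.59 * y**4 + 8.14 * y**5)                    # Equation 3
    return y, p_max * shape                                   # Step 5: pressure profile

# Illustrative call (input values assumed, not taken from the paper):
# y, p = seismic_soil_pressure_profile(height=9.2, rho=2000.0, poisson=1/3, sa_g=0.45)
```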
Wall Height, m

Maximum Soil Pressure, kPa

Predicted Computed

The simplified method was verified extensively for a wide range of soil properties, wall heights, and design motions.

Wall Height, m

Maximum Soil Pressure, kPa

Predicted Computed

Wall Height, m

Wall Height, ft

Maximum Seismic Soil Pressure, psf

Figure 8. Predicted and Directly Computed Seismic Soil Pressure, 15.2 m (50 ft) Wall, Vs = 457 m/sec (1,500 ft/sec), EUS Local Motion

Maximum Seismic Soil Pressure, psf

Figure 6. Predicted and Directly Computed Seismic Soil Pressure, 4.6 m (15 ft) Wall, Vs = 152 m/sec (500 ft/sec), ATC Motion
10

Comparison with Other Commonly Used Methods The seismic soil pressure results obtained for a building wall 9.2 m (30 ft) high embedded in a soil layer with shear wave velocity of 305 m/sec (1,000 ft/sec) using the M-O, Wood, and proposed simplified methods are compared in Figure 7. For the simplified method, the input motions defined in Figure 5, all scaled to 0.30 g peak ground acceleration, were used.
Bechtel Technology Journal

Wall Height, ft

Accuracy of the Simplified Method The simplified method outlined above was tested for building walls with the embedment depths of 4.6, 9, and 15.2 m (15, 30, and 50 ft) using up to six different time histories as input motion. The results computed directly with SASSI are compared with the results obtained from the simplified solution. To depict the level of accuracy, typical comparisons for a 4.6 m (15 ft) wall with shear wave velocity of 152 m/sec (500 ft/sec) using the ATC motion, a 9.2 m (30 ft) wall with soil shear wave velocity of 305 m/sec (1,000 ft/sec) using the Loma Prieta motion, and a 15.2 m (50 ft) wall with soil shear wave velocity of 457 m/sec (1,500 ft/sec) using the EUS local motion are shown in Figures 6, 7, and 8, respectively. In all cases, the soil material damping of 5 percent and Poissons ratio of 1/3 were used. These comparisons show a relatively conservative profile of seismic soil pressure predicted by the simple method as compared with a more rigorous solution. A comprehensive validation of the proposed method is presented in [10].

Maximum Seismic Soil Pressure, psf

Figure 7. Predicted and Directly Computed Seismic Soil Pressure, 9.2 m (30 ft) Wall, Vs = 305 m/sec (1,000 ft/sec), Loma Prieta Motion
Maximum Soil Pressure, kPa

Predicted Computed

Wall Height, ft

The same soil shear wave velocity was used for all motions to compare the effects of frequency content of each motion on the pressure amplitude. In real application, the average strain-compatible soil velocity obtained from the companion freefield analysis would be used. The M-O method and the Wood solution require only the peak ground acceleration as input, and each yields one pressure profile for all motions. For the M-O method, it is commonly assumed (although specified by neither Mononobe nor Okabe) that the seismic soil pressure has an inverted triangular distribution behind the wall. As shown in Figure 9, the M-O method results in lower pressure. This is understood, since this method relies on the wall movement to relieve the pressure behind the wall. The Wood solution generally results in the maximum soil pressure and is independent of the input motion as long as the peak acceleration is 0.30 g. The proposed method results in a wide range of pressure profiles, depending on the frequency contents of the input motion, particularly at the soil column frequency. For those motions for which the ground response motions at the soil column frequency are about the same as the peak ground acceleration of the input motion, e.g., RG 1.60 motion, the results of the proposed method are close to those of the Wood solution. There is a similar trend in the results of the various methods in terms of the magnitude of the total lateral force and the overturning moment.
Maximum Soil Pressure, kPa

CONCLUSIONS

Using the concept of an SDOF system, a simplified method was developed to predict maximum seismic soil pressures for buildings resting on firm foundation materials. The method incorporates the dynamic soil properties and the frequency content of the design motion in its formulation. It was found that the controlling frequency that determines the maximum soil pressure is the one corresponding to the soil column adjacent to the embedded wall of the building. The proposed method requires the use of conventionally used, simple, one-dimensional soil column analysis to obtain the relevant soil response at the base of the wall. More importantly, this approach allows soil nonlinear effects to be considered in the process. The effect of soil nonlinearity can be important for some applications depending on the intensity of the design motion and the soil properties. Following one-dimensional soil column analysis, the proposed method involves a number of simple hand calculations to arrive at the distribution of the seismic soil pressure for design. The accuracy of the method relative to the more elaborate finite element analysis was verified for a wide range of soil properties, earthquake motions, and wall heights. The simplified method has been adopted by design codes and standards such as the NEHRP standards and the ASCE standards for nuclear structures.


ACKNOWLEDGMENT

The Bechtel technical grant for development of the method is acknowledged.

REFERENCES
[1] N. Mononobe and H. Matuo, "On the Determination of Earth Pressures During Earthquakes," Proceedings of the World Engineering Congress, Tokyo, Japan, Vol. 9, Paper 388, 1929.
[2] S. Okabe, "General Theory of Earth Pressures and Seismic Stability of Retaining Wall and Dam," Journal of the Japan Society of Civil Engineers, Vol. 12, No. 1, 1924.
[3] H.B. Seed and R.V. Whitman, "Design of Earth Retaining Structures for Seismic Loads," ASCE Specialty Conference on Lateral Stresses in the Ground and Design of Earth Retaining Structures, Cornell University, Ithaca, New York, June 22–24, 1970.
[4] R.V. Whitman, "Seismic Design and Behavior of Gravity Retaining Walls," Proceedings of the ASCE 1990 Specialty Conference on Design and Performance of Earth Retaining Structures, Cornell University, Ithaca, New York, June 18–21, 1990, pp. 817–842, access via <http://cedb.asce.org/cgi/WWWdisplay.cgi?9002388>.
[5] R.V. Whitman and J.T. Christian, "Seismic Response of Retaining Structures," POLA Seismic Workshop, San Pedro, California, 1990.
[6] R.V. Whitman, "Seismic Design of Earth Retaining Structures," Proceedings of the 2nd International Conference on Recent Advances in Geotechnical Earthquake Engineering and Soil Dynamics, St. Louis, Missouri, March 11–15, 1991, pp. 1767–1778.
[7] R. Richards, Jr., and D.G. Elms, "Seismic Behavior of Gravity Retaining Walls," ASCE, Journal of Geotechnical Engineering, Vol. 105, No. GT4, April 1979, pp. 449–464.
[8] H. Matsuzawa, I. Ishibashi, and M. Kawamura, "Dynamic Soil and Water Pressures of Submerged Soils," ASCE, Journal of Geotechnical Engineering, Vol. 111, No. 10, September 1984.
[9] R.M. Ebeling and E.E. Morrison, Jr., "The Seismic Design of Waterfront Retaining Structures," US Army Corps of Engineers, Technical Report ITL-92-11, 1992.
[10] F. Ostadan and W.H. White, "Lateral Seismic Soil Pressure – An Updated Approach," Bechtel Technical Grant report, 1997.
[11] J.H. Wood, "Earthquake Induced Soil Pressures on Structures," Doctoral Dissertation, EERL 73-05, California Institute of Technology, Pasadena, California, 1973.
[12] A. Veletsos and A.H. Younan, "Dynamic Soil Pressure on Rigid Vertical Walls," Earthquake Engineering and Soil Dynamics, Vol. 23, 1994, pp. 275–301.
[13] A. Veletsos and A.H. Younan, "Dynamic Modeling and Response of Soil-Wall Systems," ASCE, Journal of Geotechnical Engineering, Vol. 120, No. 12, December 1994.
[14] ASCE 4-98, Seismic Analysis of Safety-Related Nuclear Structures and Commentary, American Society of Civil Engineers, 1998.
[15] NEHRP Recommended Provisions for Seismic Regulations for New Buildings and Other Structures, 2000 Edition, FEMA 369, March 2001.
[16] H. Nazarian and A.H. Hadjian, "Earthquake Induced Lateral Soil Pressure on Structures," ASCE, Journal of Geotechnical Engineering, Vol. 105, No. GT9, September 1979.
[17] Electric Power Research Institute, Proceedings: EPRI/NRC/TPC Workshop on Seismic Soil-Structure Interaction Analysis Techniques Using Data From Lotung, Taiwan, EPRI Publication No. NP-6154, Two Volumes, March 1989.
[18] Electric Power Research Institute, Post-Earthquake Analysis and Data Correlation for the 1/4-Scale Containment Model of the Lotung Experiment, EPRI Publication No. NP-7305SL, October 1991.
[19] M. Hirota, M. Sugimoto, and S. Onimaru, "Study on Dynamic Earth Pressure Through Observation," Proceedings of the 10th World Conference on Earthquake Engineering, Madrid, Spain, July 1992.
[20] H. Matsumoto, K. Arizumi, K. Yamanoucho, H. Kuniyoshi, O. Chiba, and M. Watakabe, "Earthquake Observation of Deeply Embedded Building Structure," Proceedings of the 6th Canadian Conference on Earthquake Engineering, Toronto, Canada, June 1991.
[21] M. Watakabe, H. Matsumoto, Y. Fukahori, Y. Shikama, K. Yamanouchi, and H. Kuniyoshi, "Earthquake Observation of Deeply Embedded Building Structure," Proceedings of the 10th World Conference on Earthquake Engineering, Madrid, Spain, July 1992.
[22] T. Itoh and T. Nogami, "Effects of Surrounding Soils on Seismic Response of Building Basements," Proceedings of the 4th US National Conference on Earthquake Engineering, Palm Springs, California, May 20–24, 1990.
[23] K. Koyama, O. Watanabe, and N. Kusano, "Seismic Behavior of In-Ground LNG Storage Tanks During Semi-Long Period Ground Motion," Proceedings of the 9th World Conference on Earthquake Engineering, Tokyo-Kyoto, Japan, August 2–9, 1988.
[24] K. Koyama, N. Kusano, H. Ueno, and T. Kondoh, "Dynamic Earth Pressure Acting on LNG In-Ground Storage Tank During Earthquakes," Proceedings of the 10th World Conference on Earthquake Engineering, Madrid, Spain, July 1992.
[25] J. Lysmer, F. Ostadan, and C.C. Chen, SASSI2000 – A System for Analysis of Soil-Structure Interaction, Department of Civil Engineering, University of California, Berkeley, 1999.
[26] P.B. Schnabel, J. Lysmer, and H.B. Seed, SHAKE – A Computer Program for Earthquake Response Analysis of Horizontally Layered Sites, Earthquake Engineering Research Center, University of California, Berkeley, Report No. EERC 72-12, December 1972.

The original version of this paper was published in the Journal of Soil Dynamics and Earthquake Engineering, Vol. 25, Issues 7–10, August–October 2005, pp. 785–793.

BIOGRAPHY
Farhang Ostadan, a Bechtel Fellow, has more than 25 years of experience in geotechnical and geotechnical earthquake engineering and foundation design. As chief soils engineer for Bechtel, he has overall responsibility for this discipline and manages the efforts of a large and diverse group of geotechnical specialists in locations across the US and around the globe. His project oversight responsibilities range from major transportation projects to petrochemical, nuclear, and power- and energy-related projects. Dr. Ostadan has published more than 30 technical papers on topics related to geotechnical earthquake engineering. He co-developed a method for dynamic soil-structure interaction analysis currently in use by the industry worldwide. Dr. Ostadan is a frequent lecturer at universities and research organizations. Dr. Ostadan is currently a member of the American Society of Civil Engineers (ASCE), Geotechnical Division; the Earthquake Engineering Research Institute (EERI); and the National Earthquake Hazard Reduction Program (NEHRP) Foundation Committee, and is a past member of California's Seismic Safety Commission. Dr. Ostadan received a PhD in Civil Engineering from the University of California, Berkeley; an MS in Civil Engineering from the University of Michigan, Ann Arbor; and a BS in Civil Engineering from the University of Tehran, Iran.


INTEGRATED SEISMIC ANALYSIS AND DESIGN OF SHEAR WALL STRUCTURES


Originally Issued: April 2006; Updated: December 2008

Abstract: This paper summarizes a new approach for the design of concrete shear wall structures for nuclear facilities. Static and dynamic analyses are carried out with the same finite element model, using the SAP2000 and SASSI2000 computer programs, respectively. The method imports the dynamic solution from SASSI2000 into the optimum concrete (OPTCON) design computer code in order to compute the stresses in the concrete members for every time step of analysis. Static stresses are imported from the static solution for applicable static loads, and total stresses are computed for concrete design. The design process allows both element-based and cut-section design methods. This approach has the advantage of considering the stress time history in the design of concrete members, avoiding the conventional approach of combining maximum seismic stresses for all elements simultaneously. Significant savings in concrete design (both time and material) were obtained in a test problem, simulating a typical shear wall structure for nuclear facilities.

Keywords: computer code, cut-section design, element-based design, impedance matrix, integrated design, mass matrix, optimum design, reinforcement, seismic design, shear wall structure, shell element, static and dynamic analysis, stiffness matrix
INTRODUCTION

The current approach to the design of safety-related shear wall structures generally involves using the SASSI2000 [1] computer code for the seismic soil-structure interaction (SSI) analysis. Acceleration profiles obtained from the SASSI analysis are applied to a detailed finite element model as equivalent static loads to determine the seismic forces. The SASSI models may be coarser than the static models. The design may be carried out using a concrete design program, with appropriate combination of applicable static loads. In the design process, maximum seismic forces in each of the three orthogonal directions are combined. This step assumes that all maximum seismic loads are acting at the same time, thus resulting in a very conservative design.

The conventional two-step design procedure described above is tedious and requires two separate analyses to develop design loads, and the compatibility of the static and the dynamic models needs to be demonstrated for each application. However, it has the advantage that a detailed static model considering major openings and composite slabs can be analyzed for design. The proposed approach requires the same model to be used for static and dynamic analysis and thus offers a more robust approach for concrete design, if the same static model can be used in the dynamic analysis. For dynamic analysis, the new version of the SASSI2000 code is used. This version has the state-of-the-art thin/thick shell element with five stress output points, allowing computation of out-of-plane shear forces. To avoid transfer of large sets of stress time histories from SASSI2000 to the optimum concrete (OPTCON) design computer code, the transfer function solutions, which comprise a much smaller set of data, are imported to OPTCON, and the STRESS module in SASSI2000 is implemented in OPTCON to compute element stresses. OPTCON combines the shell stresses into one 3-D record for design while preserving the maximum responses. OPTCON imports static loads, such as dead load, from SAP2000 [2] models. The OPTCON module is a Windows program using a project database. An earlier version of the OPTCON reinforced concrete design engine was developed 30 years ago and used extensively in the design of nuclear power plants. [3]
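The conservatism of combining directional maxima can be seen with a small numerical check. The sketch below is purely illustrative; the decaying sinusoids stand in for the seismic force time histories of a single shell element and are not data from either analysis program.

```python
import numpy as np

# Hypothetical force histories (kN) for one shell element; decaying sinusoids
# stand in for the three orthogonal seismic excitation components.
t = np.linspace(0.0, 20.0, 2001)
ex = 120.0 * np.sin(2 * np.pi * 1.3 * t) * np.exp(-0.1 * t)
ey = 90.0 * np.sin(2 * np.pi * 2.1 * t + 0.7) * np.exp(-0.1 * t)
ez = 40.0 * np.sin(2 * np.pi * 3.4 * t + 1.9) * np.exp(-0.1 * t)
static = 75.0

# Conventional two-step approach: assume all directional maxima act at once.
conventional = static + np.abs(ex).max() + np.abs(ey).max() + np.abs(ez).max()

# Integrated approach: combine at every time step, then take the envelope.
integrated = np.abs(static + ex + ey + ez).max()

print(conventional, integrated)   # the time-step combination is usually smaller
```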

Thomas D. Kohli (tdko@comcast.net)
Orhan Gürbüz, PhD (ogurbuz@bechtel.com)
Farhang Ostadan, PhD (fostadan@bechtel.com)



ABBREVIATIONS, ACRONYMS, AND TERMS


ACI      American Concrete Institute
CG       center of gravity
DLL      Dynamic Link Library
DOF      degree(s) of freedom
FFT      fast Fourier transform
OPTCON   optimum concrete
SSI      soil-structure interaction

METHODOLOGY

The integrated design methodology involves (1) seismic analysis of the overall structure using SASSI2000, (2) static analysis under the non-seismic loads using the same model, and (3) design of the structural members using the OPTCON module. Each of these steps is described below.

Seismic Analysis with SASSI2000

The SASSI2000 computer program is widely used in the nuclear industry for seismic SSI analysis of structures and development of seismic responses for structural design and equipment design. In SASSI, the equation of motion is formulated in the frequency domain, and fast Fourier transform (FFT) techniques are used to convert the frequency domain solution to the time domain solution. For each selected frequency of analysis, SASSI solves for the following equation of motion, where [C] is a complex frequency-dependent dynamic stiffness matrix:
$$
\begin{bmatrix}
C_{ii}^{III} + C_{ii}^{II} + X_{ii} & C_{iw}^{II} & C_{is}^{III} \\
C_{wi}^{II} & C_{ww}^{II} & 0 \\
C_{si}^{III} & 0 & C_{ss}^{III}
\end{bmatrix}
\begin{Bmatrix} U_i \\ U_w \\ U_s \end{Bmatrix}
=
\begin{Bmatrix} X_{ii}\,U_i' \\ 0 \\ 0 \end{Bmatrix}
\qquad (1)
$$


and [K] and [M] are the global complex stiffness and mass matrices, respectively:

$$[C] = [K] - \omega^2 [M] \qquad (2)$$

The following subscripts refer to degrees of freedom (DOF) associated with different nodes (see Figure 1):

Subscript   Nodes
b           the boundary of the total system
i           at the boundary between the soil and the structure
w           within the excavated soil volume
g           at the remaining part of the free-field site
s           at the remaining part of the structure
f           combination of i and w nodes

Formulation of the dynamic stiffness and mass matrices is very similar to that in other finite element codes. To include the SSI effects, SASSI2000 requires computation of the free-field motion $U_i'$ in Equation 1 and the impedance matrix $X_{ii}$ for all foundation interaction nodes. The impedance matrix is, in effect, a complex frequency-dependent spring and dashpot that interconnects all interaction nodes and is obtained from the point load solution for each interaction node.


Figure 1. Substructuring Method in SASSI2000 Analysis: (a) Total System; (b) Substructure I, Free-Field Site; (c) Substructure II, Excavated Soil Volume; (d) Substructure III, Structure

Figure 1 depicts the substructuring method used in SASSI2000 analyses. Details of SASSI2000 substructuring methods and the internal modeling can be obtained from the theoretical manual of the program (see [1]). The solution of Equation 1 in terms of U is the transfer function solution for each degree of freedom in the model. The transfer function solution is convolved with the Fourier components of the input motion and converted to the time domain to obtain the response time history for the respective degree of freedom in the model. The stress time histories for each element are computed from the response time histories of the nodes forming that element using the stress-strain relationship of the respective element.
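The convolution step just described is simple to sketch. The following minimal Python/NumPy fragment is illustrative only (it is not the SASSI2000 or OPTCON source); the function name and the assumption that the transfer function has already been interpolated to the FFT frequencies of the input record are assumptions made for this sketch.

```python
import numpy as np

def response_time_history(transfer_fn, input_accel):
    """Convolve a complex transfer function with the Fourier components
    of the input motion and return the response time history.

    transfer_fn : complex array, one value per rfft frequency of the input
                  record (assumed already interpolated to those frequencies)
    input_accel : real array, acceleration time history of the input motion
    """
    n = len(input_accel)
    fourier_input = np.fft.rfft(input_accel)        # Fourier components of input
    fourier_response = transfer_fn * fourier_input  # frequency-domain response
    return np.fft.irfft(fourier_response, n=n)      # convert back to time domain
```

Repeating this operation for every degree of freedom gives the nodal response time histories from which the element stress time histories are then recovered.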

Static Analysis for Non-Seismic Loads

The finite element model of the structure used in SASSI2000 for seismic analysis should be used for the non-seismic loads in accordance with the project criteria. Any general purpose finite element analysis program can be used for this purpose. In this study, SAP2000 is used. Since the design will be carried out for the total structure, it is necessary to capture the internal forces and moments for all members. Consequently, the finite element model must include the soil stiffness to capture the basemat response. Also, if the structure is embedded, the lateral soil pressures must be included in the static analysis.


The analysis results are saved in the project database that will be used during the design process.

Definition of Design Forces

Design of shear wall structures has been carried out in the past using both element stresses and cut-section forces. In the integrated approach, a combination of both is used, as described below. In the OPTCON program module, rows of elements in both walls and diaphragms are designated as element-groups. For each element-group, in-plane membrane forces and moments are calculated. For a wall, these forces are: membrane force in the vertical direction Pu, in-plane overturning moment Mu, and horizontal shear force Vu. For a diaphragm, these forces and moments would be calculated in both directions. The out-of-plane design forces are calculated on an element basis, including the out-of-plane bending moments Mx and My and out-of-plane shears Vxz and Vyz. All analysis is done on a time-step basis for the entire time-history duration using the 3-D combined shell stresses. At each time step, the ACI 349-01 code criteria are considered so that the envelope of the controlling design step is assured.

Integrated Design Using the OPTCON Module

The shell stresses computed in the SASSI2000 STRESS DLL in OPTCON use the transfer functions of the DOF defined by the connectivity of the 3-D shell finite element model. To do this, the SHL17 thin/thick shell element stress recovery routines were rewritten using complex arithmetic so that their inputs were the frequency domain nodal transfer functions rather than the original nodal displacement time histories. The stress recovery was therefore performed in the frequency domain and converted to the time domain using the SASSI2000 FFT routines. The time histories of shell element stresses are used for design. The SHL17 quadrilateral shell element has five output points where the shell stresses are computed; one is at the center of gravity (CG) of the element and the other four are located about 80 percent of the way from the CG to the corner nodes. SASSI2000 performs the analysis one direction at a time (three runs for X-, Y-, and Z-excitation) and uses the three components of time histories,


one for each respective direction. Therefore, the process used by OPTCON is for the user to isolate all the shell elements to be considered in a design (i.e., a shear wall or a floor diaphragm); the internal STRESS DLL is then executed three times, once for each of the global directions, and the results of these analyses are saved to disk. Each of the three directional analyses is then post-processed so that the result is a combined 3-D time history record for each shell element in the analysis. During the 3-D combination, the three directions of excitation are permutated in terms of plus/minus sign for all eight possible combinations so that the maximum resulting shell stress components are captured. Thus, the final 3-D combined shell stresses contain all the time-history stress components Sxx, Syy, Sxy, Mxx, Myy, Mxy, Vxz, and Vyz for each shell at all five stress output points on the SHL17 element. OPTCON then uses the imported static shell stresses for dead load and live load, along with user-specified loading-combination scale factors, and for each time step combines them into the final shell element stresses to be used for design.

For element-based design (i.e., with no element-grouping), the final shell element stresses are used directly. In this approach, Sxx and Mxx are typically considered as P and M pairs at each time step to design the horizontal reinforcement in the shear wall, and Syy and Myy to design the vertical reinforcement. The twisting moment Mxy is added to amplify both Mxx and Myy so that the design at each time step is conservative. OPTCON performs element-based design using the single shell element stresses in an iteration process where both As and As′ are initially set to their minimums and then swept through all the steps so that all P and M pairs best fit within a numerical reinforced concrete interaction diagram.

For element-groups, OPTCON automatically assembles the individual shell elements to be considered in the design into internal logical groups. This is accomplished considering openings in the design, such as doors and windows in shear walls or openings in floor diaphragms. Such openings form piers at each of the element levels that, in certain cases, require special ACI 349-01 code considerations. Thus, OPTCON, with element-group design, always considers all possible cut-sections of groups of elements in the design at each element level. Figure 2 shows the element-grouping on a simple shear wall with one door opening.
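The sign-permutation step can be illustrated as follows. This is a hedged sketch of the idea only, not the OPTCON implementation; it treats a single stress component of a single element and assumes the X-, Y-, and Z-run results are already aligned in time.

```python
import itertools
import numpy as np

def combine_3d_component(sx, sy, sz, static_value=0.0, load_factor=1.0):
    """Envelope the eight +/- sign permutations of the three directional
    excitation results for one stress component, adding the factored
    static stress at every time step (illustrative sketch only).

    sx, sy, sz   : arrays (n_steps,) from the X-, Y-, and Z-excitation runs
    static_value : static stress for the same component (e.g., dead load)
    load_factor  : loading-combination scale factor applied to the static part
    """
    best = None
    for s1, s2, s3 in itertools.product((1.0, -1.0), repeat=3):
        combined = s1 * sx + s2 * sy + s3 * sz + load_factor * static_value
        if best is None or np.abs(combined).max() > np.abs(best).max():
            best = combined                  # keep the controlling permutation
    return best, np.abs(best).max()          # combined record and its peak
```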


Figure 2. Elevation of Shear Wall with One Opening, Showing Horizontal Groups (shell elements arranged in element columns 1 through 8 and element levels 1 through 4; Pier No. 1 and Pier No. 2 on either side of the opening)


Figure 3. Integration of CG Shell Stresses to Determine Design Forces and/or Moments, Five-Element Group. Element forces are obtained from the CG shell stresses as Fxi = (Sxx)(hi), Fyi = (Syy)(wi), and Fvi = (Sxy)(wi); the resultants are Pu (+ = tension, kips), Mu (ft-kips, about the centerline of the wall section of length lw), and Vu (kips).

In Figure 2, vectors 1, 2, 4, and 5 represent cut-sections on the wall piers to be used for design. Vectors 3 and 6 are cut-sections on multiple piers, and vectors 7 and 8 are cut-sections on the wall where no piers exist, across the whole wall. When using element-grouping, the shell stresses at the five output points on the element are averaged and assumed to be located at the CG of the element. Then, the individual CG

shell stresses are integrated at each time step to determine the design forces and/or moments. Figure 3 shows how these integrations are performed using an example of a five-element group: Applied Axial Load on Wall (tension is positive):

$$P_u = \sum_{i=1}^{n} F_{yi} \qquad (3)$$


Applied Moment about Center Line of Wall or Wall Segment (counter-clockwise is positive):
$$M_u = \sum_{i=1}^{n} F_{yi}\left(\frac{l_w}{2} - x_i\right) \qquad (4)$$

If openings such as doors or windows exist in the cut-section shown in Figure 4, OPTCON iterates on the area of reinforcement considering a concrete stress block with discontinuities, and the reinforcing steel is not considered where the openings exist.

Applied Shear Load on Wall:

$$V_u = \sum_{i=1}^{n} F_{vi} \qquad (5)$$
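For an element-group, Equations 3 through 5 amount to simple summations over the elements in the cut-section. The sketch below is illustrative only (not the OPTCON code); it uses the Figure 3 definitions Fyi = (Syy)(wi) and Fvi = (Sxy)(wi) and assumes xi is the element CG position measured from one end of the wall.

```python
import numpy as np

def group_design_forces(Syy, Sxy, w, x, lw):
    """Integrate CG shell stresses over an element-group per Eqs. 3-5.

    Syy, Sxy : arrays (n_elem,) of CG membrane stresses (force per unit width)
    w        : element widths wi
    x        : element CG positions xi, measured from one end of the wall
    lw       : length of the wall section
    """
    Fy = np.asarray(Syy) * np.asarray(w)            # Fyi = (Syy)(wi), Figure 3
    Fv = np.asarray(Sxy) * np.asarray(w)            # Fvi = (Sxy)(wi), Figure 3
    Pu = Fy.sum()                                   # Eq. 3: axial load, tension positive
    Mu = (Fy * (lw / 2.0 - np.asarray(x))).sum()    # Eq. 4: moment about the centerline
    Vu = Fv.sum()                                   # Eq. 5: in-plane shear
    return Pu, Mu, Vu
```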


Concrete Stress Block

When designing reinforcing steel with element-groups while considering membrane forces and their overturning moments, OPTCON considers equally distributed rebar located along the section to be designed. Figure 4 shows the design of a typical cut-section where no openings exist.

Strain in Rebar Set, εs, as a Function of the Strain of the Concrete, εc, at the Edge of the Concrete Section:

$$\varepsilon_s = \varepsilon_c \left(\frac{x_i - L_{na}}{L_{na}}\right) \qquad (6)$$

(strain, i.e., inches per inch)

Stress in Bar Set:

$$f_s = \varepsilon_s E_s \le 0.9 f_y \qquad (7)$$

(force per unit of area)

Force in Bar Set:

$$F_s = f_s \times A_s \qquad (8)$$

(units of force)

NOTE: Forces on bar sets in the compression zone are reduced to consider the force taken by the concrete.

The concrete compressive stress follows the parabolic Hognestad concrete stress block,

$$f_c = 0.85 f_c' \left[\, 2\left(\frac{\varepsilon}{\varepsilon_0}\right) - \left(\frac{\varepsilon}{\varepsilon_0}\right)^2 \right], \qquad \varepsilon_0 = 0.002$$

with a maximum concrete edge strain εc(max) = 0.002 and neutral axis depth Lna (see Figure 4).

Figure 4. Design of a Typical Cut-Section with No Openings
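The rebar-set relations in Equations 6 through 8 and the parabolic concrete stress can be coded in a few lines. The sketch below is illustrative only: the units (ksi, in., sq in.), the symmetric handling of the 0.9fy cap, and the omission of the compression-zone force reduction noted above are all simplifying assumptions for this example.

```python
def bar_set_forces(x, As, L_na, ec_max=0.002, Es=29000.0, fy=60.0):
    """Force in each rebar set for a trial neutral axis depth (Eqs. 6-8).

    x    : distances xi of each bar set from the compression edge (in.)
    As   : area of each bar set (sq in.)
    L_na : neutral axis depth from the compression edge (in.)
    """
    forces = []
    for xi, area in zip(x, As):
        es = ec_max * (xi - L_na) / L_na              # Eq. 6: bar-set strain
        fs = max(-0.9 * fy, min(es * Es, 0.9 * fy))   # Eq. 7: stress, limited to 0.9*fy
        forces.append(fs * area)                      # Eq. 8: bar-set force
    return forces

def hognestad_stress(e, fc_prime, e0=0.002):
    """Parabolic Hognestad concrete stress at compressive strain e."""
    r = e / e0
    return 0.85 * fc_prime * (2.0 * r - r * r)
```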
DESIGN EXAMPLE

The integrated approach was applied to a shear wall structure in a high seismic zone. The two-story building is approximately 150 ft x 250 ft and 64 ft high. As can be observed from this model, numerous openings exist in the walls and slabs, and therefore the design forces must be calculated considering these discontinuities. The finite element model for this building is shown in Figure 5. The model used a 5 ft x 5 ft mesh size, resulting in 9,000 nodes and 8,000 elements. The design of the lower exterior shear wall illustrated in Figure 5 is shown as an example below. The shear wall runs from the basemat at elev. 0.0 to the first floor at elev. 27.27 and is 206 ft long, extending across the entire building. Figure 6 illustrates the shell element mesh in the wall segment used in the example. The shear wall has three shear panels, numbered 4, 5, and 6, formed by intersecting interior walls, as shown in Figure 6. Figure 7 is a contour plot of the Syy shell forces in units of kips per foot of shell width. These are the absolute value of the maximum 3-D combined seismic plus static vertical shell forces in the wall. This is an illustration of a plot of the eight shell stress components that may be viewed at the option of the engineer. These plots are for information only and are used

to view the stress concentrations that tend to dominate the design of the reinforcing steel.

In-Plane Reinforcement Requirements

The entire wall was checked using OPTCON with element-grouping for limiting shear

strength per Section 21.6.5.6 of ACI 349-01. The largest demand-capacity ratio for individual piers was 0.45 and the largest for piers sharing a common lateral force was 0.39. Horizontal shear reinforcement was designed with OPTCON, using element-grouping to meet


Figure 5. Example of Shear Wall Structure

Figure 6. Lower Exterior Shear Wall

Figure 7. Contour Plot of Syy Shell Membrane Forces


the provisions of ACI 349-01, equations 21-7, 11-31, and 11-32. The controlling reinforcement designed was 0.86 sq in. per foot of shell width per face. Shear friction was checked at the bottom of the wall (basemat intersection) using element-grouping per paragraph 11.7 of ACI 349-01 but did not control.

Reinforcement Required Resulting from Out-of-Plane Loads on the Wall

OPTCON was used with element-based design using only the shell moments Mxx and Myy, amplified by Mxy, with the membrane forces Sxx and Syy set to zero (since they had already been considered in the in-plane design above). This resulted in the added reinforcement needed to resist the out-of-plane loadings on the wall.

REFERENCES
[1] J. Lysmer, F. Ostadan, and C. Chin, "SASSI2000 – A System for Analysis of Soil-Structure Interaction – Theoretical Manual," Geotechnical Engineering Division, Civil Engineering Department, University of California, Berkeley, 1999, access via <http://203.96.131.85/SASSI2000/index_html>.
[2] SAP2000 User's Manual, Computers and Structures, Inc., Berkeley, CA, 2005, access via <http://orders.csiberkeley.com/ProductDetails.asp?ProductCode=SAP2KDOC-3>.
[3] T. Kohli and O. Gürbüz, "Optimum Design of Reinforced Concrete for Nuclear Containments, Including Thermal Effects," Proceedings of the Second ASCE Specialty Conference on Structural Design of Nuclear Plant Facilities, New Orleans, Louisiana, December 1975, pp. 1292-1319, access via <http://cedb.asce.org/cgi/WWWdisplay.cgi?7670213> and <http://cedb.asce.org/cgi/WWWdisplaybn.cgi?0872621723>.


RESULTS

The original version of this paper was published in the Proceedings of the 8th U.S. National Conference on Earthquake Engineering (Paper No. 996), held April 18-22, 2006, in San Francisco, California, USA.

The total reinforcement was obtained by combining the reinforcement required for in-plane loadings with the added reinforcement required to resist out-of-plane loadings and considering minimum code requirements. The proposed approach resulted in reinforcement requirements that were 77% to 93% of the reinforcement determined using the two-step approach. No out-of-plane reinforcement (stirrups) was required in the wall during the design.

BIOGRAPHIES
Thomas D. Kohli is a consulting engineer, retired from Bechtel after 25 years of service. He has more than 40 years of experience in the analysis and design of nuclear-related structures. Thomas served on the Senior Structural Staff in the Bechtel Los Angeles office and managed the Containment Specialty Group that performed front-end design for nuclear containment design in the 1970s. His specialty is modern Windows software engineering as applied to finite element analysis and optimized reinforced concrete design meeting the requirements of ACI 349 and ACI 359 codes, directly related to time-history design due to combined static and seismic loading. Thomas holds a BS in Civil and Structural Engineering from the University of Southern California, Los Angeles.

Orhan Gürbüz is a Bechtel Fellow and senior principal engineer with over 35 years of experience in structural and earthquake engineering. As a Fellow, he is an advisor to senior management on technology issues, and represents Bechtel in technical societies and at industry associations.

CONCLUSIONS

Adequate tools are important for the design of complex structures for both commercial nuclear power plants and US Department of Energy facilities. The integrated design approach presented in this paper takes advantage of the time history phase relationship of the seismic forces and also optimizes the design to provide a balanced design. This design tool will accelerate the design process and, at the same time, will minimize the peer review process that has become a large part of such projects. The example design shown above was accomplished in less than 3.5 hours using a high-end PC running the OPTCON Windows program. Because the design process meets the ACI code requirements, it can be readily applied to complex projects.


As a senior principal engineer, he provides support to various projects and performs design reviews and independent peer reviews. The scope of work includes development of design criteria, seismic evaluations, structural evaluations and investigations, technical review and approval of design, serving as independent peer reviewer for special projects, investigation and resolution of design and construction issues, and supervision of special analyses. Dr. Gürbüz is a member of the American Society of Civil Engineers Dynamic Analysis of Nuclear Structures Committee and the American Concrete Institute 349 Committee. These committees develop and update standards and codes used for nuclear safety-related structures, systems, and components. Dr. Gürbüz received a PhD and an MS in Structural Engineering, and a BS in Civil Engineering, all from Iowa State University, Ames, Iowa.

Farhang Ostadan, a Bechtel Fellow, has more than 25 years of experience in geotechnical and geotechnical earthquake engineering and foundation design. As chief soils engineer for Bechtel, he has overall responsibility for this discipline and manages the efforts of a large and diverse group of geotechnical specialists in locations across the US and around the globe. His project oversight responsibilities range from major transportation projects to petrochemical, nuclear, and power- and energy-related projects. Dr. Ostadan has published more than 30 technical papers on topics relating to geotechnical earthquake engineering. He co-developed a method for dynamic soil-structure interaction analysis currently in use by the industry worldwide. Dr. Ostadan is a frequent lecturer at universities and research organizations. Dr. Ostadan is currently a member of the American Society of Civil Engineers (ASCE), Geotechnical Division; the Earthquake Engineering Research Institute (EERI); and the National Earthquake Hazard Reduction Program (NEHRP) Foundation Committee, and is a past member of California's Seismic Safety Commission. Dr. Ostadan received a PhD in Civil Engineering from the University of California, Berkeley; an MS in Civil Engineering from the University of Michigan, Ann Arbor; and a BS in Civil Engineering from the University of Tehran, Iran.


STRUCTURAL INNOVATION AT THE HANFORD WASTE TREATMENT PLANT


Issue Date: December 2008
Abstract: Three innovative techniques have been used in meeting the structural challenges of designing and constructing the mammoth facilities constituting the first-of-a-kind Hanford Waste Treatment Plant. Addressing the areas of design production, nuclear safety assurance, and constructability, the first technique is a systematic approach to developing bounding loads considering dynamic response; the second is an advanced technique in soil-structure interaction (SSI) analysis, which provides confidence in the adequacy of the design; and the third approach is a construction tool that ensures the quality of concrete placement in highly congested areas.

Keywords: acceleration, bounding load, congestion, Hanford, high-level waste, high-slump concrete, incoherency, low-activity waste, nuclear waste, radioactive waste, response acceleration, seismic criteria, soil-structure interaction (SSI), structural innovation, structural steel
INTRODUCTION

Situated on the banks of the Columbia River in the shrub-steppe desert area of southeastern Washington state, the US Department of Energy's (DOE's) Hanford Site was where plutonium was produced for the Manhattan Project atomic bomb program that began in the early 1940s. Throughout the four-decade Cold War, the facility continued its national security mission to produce materials for nuclear weapons. The end of production at Hanford brought the task of cleanup and finding a long-term storage solution for the site's legacy: 53 million gallons of highly radioactive waste stored on site in underground steel tanks.

In December 2000, the DOE Office of River Protection awarded Bechtel National, Inc., the Hanford Waste Treatment Plant (WTP) project to build the world's largest such facility to solve the long-term storage problem. The $12 billion facility will separate the waste into high-level and low-level streams, which when mixed with silica, heated to melting, and deposited in stainless-steel canisters will cool and harden into a stable glass safe for long-term storage. The low-activity waste will be stored on site over the long term while the high-level waste is planned for shipment to Yucca Mountain in the Nevada desert.

As shown in Figure 1, WTP consists of three mammoth nuclear waste processing facilities (Pretreatment, Low-Activity Waste Vitrification, and High-Level Waste [HLW] Vitrification), each with reinforced concrete core structures surrounded by structural steel frames. A fourth facility, the large Analytical Laboratory, will contain a concrete hot cell where radioactive materials will be tested and categorized. The concrete cores will house process equipment, with support services located in the steel portion of the structures.

Figure 1. Hanford Waste Treatment Plant Facilities, Washington State

John Power (jrpower@bechtel.com)
Mark Braccia (mrbracci@bechtel.com)
Farhang Ostadan, PhD (fostadan@bechtel.com)


ABBREVIATIONS, ACRONYMS, AND TERMS
ACI     American Concrete Institute
ASCE    American Society of Civil Engineers
DOE     US Department of Energy
HLW     high-level waste
ISRS    in-structure response spectra
NRC     US Nuclear Regulatory Commission
psi     pounds per square inch
PTF     Pretreatment Facility
SASSI   System for Analysis of Soil-Structure Interaction
SSI     soil-structure interaction
WTP     Waste Treatment Plant
ZPA     zero period acceleration

Determining adequate bounding loads is critical when design is close-coupled with construction.

The four facilities are designed to meet a combination of commercial and nuclear codes and regulations, and two of them, the Pretreatment Facility (PTF) and the HLW facility, are designed to the same rigor as nuclear power generating facilities, including full dynamic analyses using site-specific seismic criteria. The sheer size and scope of these facilities present first-of-a-kind challenges. Due to schedule constraints, WTP is a close-coupled project in which the design of the upper floors was under way during construction of the foundation. This approach posed an enormous challenge in developing reasonably conservative design loads in anticipation of structural design changes. This paper describes three of the several innovative techniques used in designing and

constructing the facilities, addressing the areas of design production, nuclear safety assurance, and constructability. The first technique is a systematic approach to developing bounding loads considering dynamic response; the second is an advanced technique in soil-structure interaction (SSI) analysis, which provides confidence in the adequacy of the design; and the third approach is a construction tool that ensures the quality of concrete placement in highly congested areas.

DETERMINING BOUNDING LOADS [1]

The first of these innovative techniques is the two-step method for determining design bounding loads. Current codes and DOE regulations require consideration of

Figure 2. SAP2000 Static Model for Steel Comparison (steel structure mesh over concrete shell element mesh)


Figure 3. Plan View of Roof Critical Bracing Members (critical horizontal brace; critical vertical braces, E-W and N-S)

SSI and dynamic response effects in the analysis of major nuclear facilities, such as the PTF and HLW facilities. However, current SSI software does not efficiently combine seismic responses with other loads required for design. Compounding this problem is the enormity of the structures and the analytical models that depict them (e.g., the finite element model for these structures exceeds 100,000 elements). In addressing the challenge, WTP is one of the major nuclear projects using a detailed finite-element model for seismic SSI analysis. The state-of-the-art methodology and computer program SASSI (System for Analysis of Soil-Structure Interaction) was chosen for the SSI analysis. SASSI uses a complex frequency-response technique to simulate the time-history seismic analysis. Three separate SSI analyses are performed in which the seismic input motion, in terms of acceleration time history at grade, is applied in the X, Y, and Z directions, respectively. A range of soil cases is considered in the analysis to account for the variability of soil properties, and the results are enveloped. Maximum acceleration profiles and in-structure response spectra (ISRS) are calculated taking into account the co-directional responses. By reviewing the transfer functions obtained from the SASSI analyses, the response of the concrete structure and steel roof is adequately captured. Each steel member's maximum forces and moments are computed for validation purposes. These steel forces are spatially combined using the component factor method
Figure 4. Section Cut A-A (critical vertical brace [N-S]; concrete slabs and walls down to El. 0-0; elevations 97.5, 108.25, and 119 shown)

(100/40/40), which assumes that when the maximum response from one component occurs, the responses from the other two components are 40% of the maximum. [2] Also for validation purposes, shear stresses are calculated in the concrete shear walls. In the second step, acceleration profiles derived from the first step are used, with an adjustment scale factor greater than or equal to 1.0, for the static analysis of the detailed finite-element model using the SAP2000 computer code. [3] Figure 2 shows the finite-element model of the building studied in the steel member comparison. The Y-axis is in the N-S direction and the Z-axis is vertical upward. Figures 3 and 4 show the steel roof frame and identify the critical bracing members for which seismic force comparisons were made. The structure is supported by


The two-step method provides reasonable bounding loads to allow design and construction to progress.

Figure 5. Roof Bubble Plot: X Acceleration (g) due to X-Direction Seismic Input

appropriate linear elastic soil springs. All steel member forces are extracted for design, and forces are combined using the component factor method. The key to the two-step method is the use of conservative acceleration profiles derived from the SASSI analysis results, adjusted to ensure that calculated seismic stresses are conservative, even at local high-stress concentration areas. Establishing conservative acceleration profiles requires a good understanding of the dynamic responses of the structure from the first-step SSI analysis. One of the tools used to show the dynamic behavior of the structure is the acceleration bubble plot. Available in Microsoft Excel, a bubble plot is simply a two-dimensional coordinate plan view of one elevation of the structure with a bubble plotted at every node within that elevation. The size of the bubble is proportional to the magnitude of the maximum acceleration at that node. Nine bubble plots are normally generated at each elevation, representing the response accelerations in the X, Y, and Z directions due to seismic input. In Figure 5, the bubble plot shows the maximum nodal response accelerations in the X direction due to seismic input in the X direction at the elevation of the steel roof. Based on the nine bubble plots, a mass-weighted average maximum acceleration is determined for each story; then, if required, the story is divided into regions with similar accelerations and a conservative acceleration is assigned to

each region. (These adjusted acceleration profiles are applied to the static model in the second step of analysis.) The acceleration plots provide a clear picture of the dynamic response of the structure, floor by floor and wall by wall, so that timely decisions on design changes can be made to improve the seismic response. Such an iterative process would not have been possible following the commonly used practice of developing seismic shear and moment diagrams, and would not have enabled examination of the dynamic response of individual structural members.
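The bubble-plot post-processing described above reduces, in essence, to a mass-weighted average. The following sketch is illustrative only; the array names and the grouping of nodes into regions are assumptions, and the final conservative value assigned to each region remains an engineering judgment rather than the output of this function.

```python
import numpy as np

def story_accelerations(node_mass, node_accel, region_id, scale=1.0):
    """Mass-weighted average maximum acceleration for a story and for each
    user-defined region of that story (illustrative sketch).

    node_mass  : array (n_nodes,) of tributary nodal masses
    node_accel : array (n_nodes,) of maximum nodal accelerations (g)
    region_id  : array (n_nodes,) labeling the region each node belongs to
    scale      : adjustment scale factor (>= 1.0) applied before the static run
    """
    story_avg = np.average(node_accel, weights=node_mass)
    region_avg = {}
    for rid in np.unique(region_id):
        sel = region_id == rid
        region_avg[rid] = scale * np.average(node_accel[sel], weights=node_mass[sel])
    return scale * story_avg, region_avg
```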

ADVANCED SOIL-STRUCTURE INTERACTION ANALYSIS USING INCOHERENCY EFFECT [4]

The second innovation entails consideration of incoherency when determining seismic accelerations in the design of major nuclear facilities. The WTP facilities will contain and process highly radioactive materials, which if released would pose a serious threat. The facilities are just miles from the Columbia River, the largest North American river to flow into the Pacific Ocean. The public and regulators demand that appropriately conservative assumptions be made to bound the natural phenomena hazards the facility could experience. Earthquake is one of the primary design inputs, and although prescriptive techniques are available to define the potential seismic events, they require making assumptions that would directly influence the final spectra. With this situation in mind, the WTP design team took


on the task of showing that conventionally developed (fully coherent) spectra are conservative because they do not consider that not all seismic waves impact the structure in unison. When determining the SSI, a fully coherent model causes the ground motion to transfer directly to the structure across the full range of frequencies. Introducing incoherency into the model reduces the structural response at high frequencies, thereby reducing the seismic forces used to qualify equipment. This approach can result in significant project savings by lowering the cost of major equipment. The basic concept of incoherency is that high-frequency waves imposed on a large structure will not translate into the full building motion that would result in smaller structures. The best way to visualize this concept is to consider two vessels in an ocean: a 30-foot sailboat and an aircraft carrier. Four-foot swells would cause noticeable rocking and bouncing for the sailboat's passengers, but passengers on the aircraft carrier would barely perceive the motion. The same concept applies to the process equipment in the primary facilities of a major nuclear facility. While the details of the incoherency model are beyond the scope of this paper, Figure 6 demonstrates the resulting reduction in accelerations at high frequencies as compared to the results using a fully coherent model. The peak acceleration is approximately 11% lower and the zero period acceleration (ZPA) has been reduced by 25%. These reductions will translate into significant savings when

seismically qualifying equipment in the high-frequency range. Following this important development and its documentation, the US Nuclear Regulatory Commission (NRC) approved the incoherency model and its implementation in the SASSI program for SSI analysis. [5]
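The qualitative effect of incoherency on the high-frequency transfer of motion can be pictured with a toy reduction function. The decay law, the footprint dimension, and the coefficient below are purely schematic illustrations chosen so that low frequencies pass essentially unchanged while high frequencies are damped; they are not the incoherency model developed for WTP or implemented in SASSI.

```python
import numpy as np

def schematic_incoherency_reduction(freq_hz, coherent_tf, footprint_ft=250.0, kappa=3.0e-8):
    """Apply a made-up, monotonically decaying coherency factor to a coherent
    transfer function (schematic only, for illustrating the trend in Figure 6)."""
    reduction = np.exp(-kappa * (freq_hz * footprint_ft) ** 2)  # ~1 at low f, small at high f
    return coherent_tf * reduction
```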

DEVELOPMENT OF CONCRETE MIX FOR HIGHLY CONGESTED AREAS

The third innovation is the use of high-slump concrete in congested concrete placements. The plant process areas will be located in multistory reinforced concrete structures. In addition to the heavy reinforcement, surface plates with embedded studs used throughout to support equipment and commodity attachments result in highly congested forms (see Figures 7 and 8). The danger in this situation is voids, which could require expensive repairs. The proposed solution was to use a high-slump concrete mix that could work through the congestion without leaving voids. It was specified as a 5,000-psi (pounds per square inch) mix with a maximum water-cement ratio of 0.45. Additionally, maximum aggregate size was limited to 3/4 inch to prevent bridging or binding during placement. The construction team also sought a mix with 11-inch slump, which results in a 1-inch final slump cone height, just slightly higher than the aggregate size. Finally, as the mix had to meet the criteria established in ACI 349, Bechtel worked with the project's concrete supplier to develop a mix that met the criteria.
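The acceptance limits quoted above lend themselves to a simple check. The sketch below is illustrative only; the function and argument names are made up, and the numeric limits are simply those stated in the text (5,000 psi design strength, water-cement ratio of 0.45 or less, 3/4-inch maximum aggregate, 11-inch target slump).

```python
def check_high_slump_mix(strength_psi, water_cement_ratio, max_aggregate_in, slump_in):
    """Return a list of criteria the trial mix fails (empty list = acceptable)."""
    failures = []
    if strength_psi < 5000:
        failures.append("compressive strength below 5,000 psi")
    if water_cement_ratio > 0.45:
        failures.append("water-cement ratio exceeds 0.45")
    if max_aggregate_in > 0.75:
        failures.append("maximum aggregate size exceeds 3/4 inch")
    if slump_in < 11:
        failures.append("slump below the 11-inch target")
    return failures
```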

The NRC has approved the use of incoherency in design of nuclear facilities.

Figure 6. Foundation Motion in the Horizontal Direction (5% Damping, Rigid Massless Square Foundation): spectral acceleration (g) versus frequency (Hz) for coherent and incoherent motion


To prove that the mix was viable, testing demonstrated not only compressive strength but also that the mix would not segregate. Four-inch-thick wall mock-ups were placed and allowed to cure for 24 hours. Once the forms were stripped, the concrete was broken and cross sections examined. Even with 11-inch slump, the aggregate remained evenly distributed throughout the matrix. Strength tests were also positive.

CONCLUSIONS

Use of high-slump concrete mixes allows placement of concrete in congested forms without costly repairs due to voids.

Using the high-slump mix in highly congested areas, WTP has avoided and will continue to avoid costly repairs associated with voids, saving DOE and taxpayers significant sums of money.

TRADEMARKS Glenium is a registered trademark of Construction Research & Technology GMBH. Microsoft and Excel are registered trademarks of the Microsoft group of companies.

Figure 7. Rebar at Shield Window (Note Multiple Layers of Reinforcing)

REFERENCES
[1] D. Watkins, O. Gurbuz, F. Ostadan, and T. Ma, "Two-Step Method of Seismic Analysis," First European Conference on Earthquake Engineering and Seismology, Geneva, Switzerland, September 4-6, 2006, Paper Number 1230.
[2] ASCE 4-98, Seismic Analysis of Safety-Related Nuclear Structures, American Society of Civil Engineers, 2000, access via <https://www.asce.org/bookstore/book.cfm?book=3929>.
[3] SAP2000 User's Manual, Computers and Structures, Inc., Berkeley, California, 2005, access via <http://orders.csiberkeley.com/ProductDetails.asp?ProductCode=SAP2KDOC-3>.
[4] F. Ostadan, N. Deng, and R. Kennedy, "Soil-Structure Interaction Analysis Including Ground Motion Incoherency Effects," 18th International Conference on Structural Mechanics in Reactor Technology (SMiRT 18), Beijing, China, August 7-12, 2005, Paper Number SMiRT18-K04-7, <http://www.iasmirt.org/iasmirt-2/SMiRT18/K04_7.pdf>.
[5] Interim US NRC Staff Guidance (ISG) supplement to NUREG-0800, Standard Review Plan for the Review of Safety Analysis Reports for Nuclear Power Plants, Section 3.7.1, "Seismic Design Parameters," regarding the review of seismic design information submitted to support design certification (DC) and combined license (COL) applications, May 2008.

Figure 8. Reinforcing Spacing

The concrete supplier designed two high-slump concrete mixtures. The F-6 high-slump mixture uses a 3/8 inch aggregate and the F-7 high-slump mixture uses both a 3/4 inch and a 3/8 inch aggregate. The high slump is achieved using Master Builders 1 Glenium 3000NS high-range water-reducing admixture. As a precaution to prevent segregation and excessive bleed water, a viscosity-modifying admixture, Master Builders VMA 358, was introduced into the mix.

1 Master Builders is the brand name of a line of products manufactured by the Admixture Systems business of BASF Construction Chemicals, Cleveland, Ohio, a division of BASF Aktiengesellschaft, which is headquartered in Ludwigshafen, Germany.


BIOGRAPHIES
John Power, deputy discipline production manager, CSA, on the Waste Treatment Project in Richland, Washington, has 22 years of experience in structural and civil engineering. He assists in managing a staff of 100 structural and civil engineers, and architects. John has a Bachelor of Civil Engineering from the Georgia Institute of Technology, Atlanta, Georgia, and is a certified Six Sigma Black Belt.

Mark Braccia, discipline production manager, CSA, on the Waste Treatment Project in Richland, Washington, has 30 years of experience in structural and civil engineering. Along with managing a staff of 100 structural, civil, and architectural engineers, he has been serving as the structural technical interface with the Defense Nuclear Facilities Safety Board and the Department of Energy for the project. Mark has a BS in Civil Engineering from the University of California, Berkeley.

Farhang Ostadan, a Bechtel Fellow, has more than 25 years of experience in geotechnical and geotechnical earthquake engineering and foundation design. As chief soils engineer for Bechtel, he has overall responsibility for this discipline and manages the efforts of a large and diverse group of geotechnical specialists in locations across the US and around the globe. His project oversight responsibilities range from major transportation projects to petrochemical, nuclear, and power- and energy-related projects. Dr. Ostadan has published more than 30 technical papers on topics relating to geotechnical earthquake engineering. He co-developed a method for dynamic soil-structure interaction analysis currently in use by the industry worldwide. Dr. Ostadan is a frequent lecturer at universities and research organizations. Dr. Ostadan is currently a member of the American Society of Civil Engineers (ASCE), Geotechnical Division; the Earthquake Engineering Research Institute (EERI); and the National Earthquake Hazard Reduction Program (NEHRP) Foundation Committee, and is a past member of California's Seismic Safety Commission. Dr. Ostadan received a PhD in Civil Engineering from the University of California, Berkeley; an MS in Civil Engineering from the University of Michigan, Ann Arbor; and a BS in Civil Engineering from the University of Tehran, Iran.

December 2008 Volume 1, Number 1

29

30

Bechtel Technology Journal

Bechtel Civil
Technology Papers

33   Systems Engineering – The Reliable Method of Rail System Delivery
     Samuel Daw

43   Simulation-Aided Airport Terminal Design
     Michel A. Thomet, PhD, and Farzam Mostoufi

49   Safe Passage of Extreme Floods – A Hydrologic Perspective
     Samuel L. Hui; André Lejeune, PhD; and Vefa Yucel

Civil Albanian Motorway


This 38-mile (61-km) four-lane highway in Albania, the central leg of a 106-mile (171-km) motorway traversing the country, is being constructed by a Bechtel-ENKA joint venture.

SYSTEMS ENGINEERING THE RELIABLE METHOD OF RAIL SYSTEM DELIVERY


Issue Date: December 2008

Abstract: This paper discusses the common issues and problems many rail system delivery projects face and provides insight into the way in which a systems engineering approach to rail system delivery would address them.

Keywords: collaboration, integration, rail system, requirements, systems engineering
INTRODUCTION

Many rail system delivery projects suffer from similar, if not the same, issues and problems, which usually result in increased cost and/or schedule delays. Within a highly commercial and competitive environment, rail projects are rarely able to deliver required long-term operational performance benefits while satisfying short-term project delivery objectives. In some cases, requirements related to a railway's long-term operational performance are compromised to fulfill short-term project delivery objectives, and overall performance is adversely affected. When the impact of such a compromise is understood, significant efforts are made to ensure that the project delivers what is required, but these efforts increase the project's cost and/or delay its schedule, to the detriment of the business case.

After a brief background discussion, this paper defines 10 key areas of a rail system delivery project in which issues and problems commonly occur. Then, in the next section, it describes how a systems engineering approach, especially when applied from the outset, provides a project team with a reliable method of managing a complex rail system delivery project effectively in a commercial and competitive environment.

Samuel Daw (srdaw@bechtel.com)

BACKGROUND

Objectives of Rail System Delivery Projects

The primary objective for most, if not all, rail system delivery projects is to deliver the defined system within the financial targets and time constraints agreed upon between the customer and supplier, with minimal disruption to existing operations. The rail system's definition should accurately reflect the needs of its users, customers, and operations and maintenance personnel, and specify requisite features and functions, operational safety and performance requirements, and other requirements related to support over its life cycle.

Market Forces

The financial authority to undertake a rail system delivery project is usually based on the strength of the related business case, one which demonstrates that the perceived benefits to be delivered by the project will exceed anticipated costs and that a reasonable return will be made over a reasonable period. Once a case has been made and the financial authority has been granted, the next step is to select a suitable supplier (prime contractor), usually through a competitive bidding process, the rigor of which will be aligned to the scale and complexity of the project. The procurement agent will be seeking a supplier that can fully satisfy the technical and commercial requirements, usually at lowest cost. The successful supplier will most likely be the one that develops confidence in its ability to deliver what is required within the financial targets and time constraints set by the business case. In some cases, the procurement agent will work with the preferred supplier to further reduce the cost and schedule associated with the project as a part of contract negotiation. Due to an environment where market forces prevail and similarly capable suppliers compete for the same work, cost and schedule can be driven down to unrealistic levels. It is not unusual for suppliers to accept very challenging and potentially


ABBREVIATIONS, ACRONYMS, AND TERMS


EN50126:1999   European Standard entitled "Railway Applications: The Specification and Demonstration of Reliability, Availability, Maintainability, and Safety (RAMS)"
EU             European Union
QA/QC          quality assurance/quality control
RIA            Regulatory Impact Analysis
SDS            system design specification
SRS            system requirements specification
SSDS           subsystem design specification
SSRS           subsystem requirements specification

When the business case is cost and/or schedule sensitive, the project needs to be managed particularly carefully to ensure that a financial return is actually delivered. In this respect, the sponsor will carefully monitor the progress of the project as a means of safeguarding the investment.

Economies of Scale

Product sales volumes in the railway industry are significantly lower than product sales volumes in many other industries and markets. In addition, many railway administrations operate their railways differently and apply variants of products in different applications. Hence, the market size for rail products can be relatively small. Unfortunately, this can lead suppliers to focus on their home markets and develop specific products for specific railway administrations, reducing the size of the overall market through which development costs can be recovered, in effect increasing project costs. A number of global suppliers have begun developing generic products that can be used by multiple railway administrations, which allows development costs to be recovered over a wider market. However, these generic products usually require some form of adaptation before they can be used by a particular administration. In common with product development risk, product adaptation can also represent risk to project delivery. Many rail projects seek to reduce technology risks by applying only proven technologies. However, unless the equipment has been previously applied, operated, and maintained in a very similar specific application environment, each new specific application represents some application risk.

Progress Measurement

Normally, project progress is measured by the achievement of planned deliverables. In other words, the supplier must deliver documentation, construction materials, equipment, infrastructure, and other requirements on an agreed-upon schedule. In many cases, measurements that quantify deliverables but do not take into account their quality (that is, whether or not they truly fulfill project requirements) create a false sense of progress. This can result in the project moving from one phase in its life cycle to the next prematurely, and, like building on a foundation of sand, it is almost certain to introduce difficulties during later phases of the project life cycle.



unrealistic financial targets and schedules to secure competitive contracts to supply railway systems and subsystems. Unfortunately, overly zealous attempts to drive down cost and shorten schedules to safeguard the business case can actually end up threatening it. The need for a supplier to make a reasonable return for its efforts may be overlooked. The competitive bidding approach sometimes aims at short-term cost reductions with little or no understanding of the challenges and cost buildup of supply, making long-term results very costly. In contrast, a supplier development process would seek to better understand the cost buildup of supply, encourage openness and innovation, and identify improved and leaner ways of working as a means of reducing costs over the longer term.
"It's unwise to pay too much, but it is worse to pay too little. When you pay too much, you lose a little money, that is all. When you pay too little, you sometimes lose everything, because the thing you bought was incapable of doing the thing it was bought to do. The common law of business balance prohibits paying a little and getting a lot; it can't be done. If you deal with the lowest bidder, it is well to add something for the risk you run. And if you do that, you will have enough to pay for something better."
John Ruskin, English philosopher (1819-1900)


DEFINITION

Collaboration

It is worth noting that complex projects are delivered through the collaboration of people and organizations. Most project organizations, however, are based on vertical structures and organized into the engineering disciplines and functions that are required to deliver the project. Unfortunately, this type of structure does not encourage collaboration among various disciplines and functions, or even among individuals within these groups, in pursuit of a common goal. To the contrary, verticality actually encourages the engineering disciplines and functions to work in isolation from one another toward their own objectives. This problem can be further exacerbated on major projects where these groups are geographically separated. Organizational problems can lead to rework, schedule delays, and increased cost.

Operational Concept

For many rail system projects, insufficient consideration is given up front to the definition of the operational concept. The operational concept defines how a system is intended to operate within the application environment, taking into account how it interacts with and influences adjacent systems, and the roles of its operators and maintainers. The operational concept should also define how operations are to be recovered in the event of a failure or disturbance, and the provisions that are required to facilitate preventative and reactive maintenance activities. In most cases, operational principles having to do with safety are clearly set out, and rightly so. But operational principles related to performance and availability may be neglected. Without a well understood and clearly defined operational concept, it is difficult to develop an accurate and complete set of system requirements and to convey those requirements between customer and supplier. Crucially, the system's inability to support an effective operational concept is only discovered during system validation, or worse, later.

Standards

In many cases, projects are required to demonstrate compliance not only with customer and/or contract requirements, but also with a range of related standards, such as legislative and industry requirements and local custom and practice.

The process of identifying relevant standards and eliminating those that are not relevant can be time consuming, as is the subsequent capture, apportionment, and tracking of requirements. When the hierarchy of industry and legislative standards has been aligned, compliance with high-level requirements can be proven simply by demonstrating compliance with the low-level requirements, as they are derived from the high-level requirements. However, in some cases the hierarchy of industry and legislative standards may not be aligned. Furthermore, many standards are based on the solutions already in use on the railway, and while it may be possible to extrapolate the underlying requirements for the existing solutions, it can be difficult to achieve agreement on these underlying requirements. For the supplier of modern products and systems, this can significantly increase overheads associated with the management of standards and noncompliances, and, in some cases, it can lead to delays in the acceptance of new products when the underlying requirements are unclear.

Definition and Apportionment of Requirements

For many projects, system and subsystem requirements are poorly defined, if they are defined at all. While it is an objective of many projects to make use of existing products and systems in new applications, and rightly so, the purpose of tracking requirements is to ensure that customer, contract, legislative, and industry requirements are all satisfied. One of the main reasons for defining, apportioning, and tracking the requirements through design and verification is to demonstrate this compliance. As such, it is an important means of determining to what degree requirements can be fulfilled using standard products and systems, allowing potential gaps to be identified so that appropriate action can be taken. In some cases, customer requirements are used as the basis of apportionment to the subsystems, leading to difficulties in subsystem delivery and systems integration. Customer requirements usually contain:

•  Actual customer requirements
•  System requirements
•  Useful information
•  Constraints

System requirements should define what is required of the system in order to fulfill the customer requirements and to deliver the



operational concept. Quite often, the definition of system requirements is missed altogether, and the project concentrates on the definition and fulfillment of subsystem requirements, in some cases defined around product specifications, rather than through the definition and top-down apportionment of system requirements. All in all, the definition of what the project is required to deliver can be quite poor. While the system may demonstrably fulfill the requirements that have been defined, missing features and functions are normally identified at a very late stage of the project, usually during validation, with significant impacts on project delivery.

Project Life Cycle

Another common feature of rail projects appears to be the desire to "get on with the job" and move into detailed design and construction as soon as possible. Enthusiasm for progress sometimes causes a project to move from one phase to the next before it is ready to do so. Even when deliverables and content from a previous phase are found to be missing, and attempts to complete them are prioritized during the current phase, enthusiasm for progress may lead the project to advance prematurely yet again to the next phase. But in such cases, it is usually only a matter of time before the project arrives at a point where incorrect assumptions have been made and rework becomes the order of the day.

Project Processes and Procedures

A related project deficiency is the lack of suitable project processes, which should be rolled out to those that are required to implement them from the outset. A manual of project processes and procedures should be developed that documents the way in which members of the project team have agreed to collaborate and work together. In some cases, the task of defining project processes and procedures is allocated to the quality assurance/quality control (QA/QC) function. While members of the QA/QC function are capable of writing project processes and procedures, these documents may not necessarily reflect the way in which the project team members intend to work together. In other cases, there is a belief that the project team knows what it is doing because it has delivered similar projects before. While this may be true, it is almost certain that the project team has not delivered a rail system with the identical group of people, or with the identical conditions of this particular application.

Ironically, with experienced and competent individuals, project processes and procedures should be easily agreed on and defined; but it is not the experienced and competent individuals who will need to refer to and rely on them most, it is those who are less experienced and competent. On many projects, processes and procedures are treated in isolation from one another and are not integrated, with the catch-all statement "to be read in conjunction with …." If the way in which processes and procedures relate to one another is left open to interpretation, there is room for error. This has an impact on a project's ability to deliver the system effectively. It can lead to an alarming audit trail and further burden the project with corrective actions, some of which may be valid, but many of which are not, distracting the project's attention from the delivery of the system.

Project Schedule
Another common weakness in many projects is the way in which the project schedule is developed and managed. In many cases, each project discipline and function defines its own sub-schedules, and attempts are then made to join the sub-schedules together at the master schedule level. Because of the complexity of integrating the sub-schedules, as a compromise the sub-schedules are sometimes rolled up a level and integrated into the master schedule at a higher level only. Obviously, this means that any necessary integration at the detailed level can be missed, resulting in inputs that are not always made available when required, and information that is not exchanged as needed.

Systems Integration
In many projects, systems integration is perceived as the stage in the project where the "black boxes" are connected together and where validated subsystems are integrated with one another to form the system. Systems integration actually takes place when the scope of the subsystems is defined, i.e., when system requirements are apportioned to the subsystems. At this stage, the system has essentially been disintegrated in such a way that parts of the system can be delivered in a manageable way and effectively integrated with one another at a later stage. Hence, the system is designed for integration.




Due to this misconception, systems integration is not always properly considered during early project phases. Also, a systematic approach to systems integration is sometimes lacking, and projects fail to identify the integration risks they face at an early enough stage. As a result, they fail to take positive action to eliminate integration risks early in the project.

Competence Management
One issue that is faced on almost every project is the effective management of competence. Staff are sometimes appointed to roles based upon their capability rather than their competence. A capable person is someone who can recognize the competencies required to undertake a role, develop those competencies, and then undertake the role. Hence, if we appoint capable individuals, they will get the job done, but much of their time in the early stages will be spent learning rather than doing. If this learning period is not recognized and carefully managed, perhaps through mentoring, it can lead to inappropriate decisions and inappropriate direction for the project during early stages, decisions and direction that must then be maintained to save face.

Role of the System Authority
In many systems projects, the need to establish a system authority from the outset is not identified. In complex, multidisciplinary projects, it is not usually possible to identify a single person who understands all of the technical issues and challenges faced by the project and who is able to take all of the significant decisions and set the project direction in the interests of the project. Normally, a system authority, usually someone with diverse expertise, would be constituted to provide strong direction, make good decisions, and manage and coordinate the activities of the subsystem delivery teams. Without the system authority, it is possible that a suboptimal solution will be delivered that will adversely affect initial operations, at least until problems and shortcomings are resolved.

SYSTEMS ENGINEERING APPROACH

Collaboration
A systems engineering approach clearly recognizes that projects are delivered through the collaboration of people and organizations. For example, the approach toward the definition and integration of project processes and procedures, as described in later sections of this paper, encourages collaboration from the outset, with project staff working together to agree on the way in which they will deliver the project.


The organization of the project, taking into account the various constituent engineering disciplines and functions and their geographical locations, is a key element of the project design as the vehicle to deliver the system. While a vertical organizational structure is valuable from the outset, allowing like-minded individuals to work together as they develop and refine their thinking, it can constrain the overall collaboration. Groups in such organizations often end up at cross purposes when issues, challenges, and problems arise. In some instances, changing from a vertical organizational structure (based on the various engineering disciplines and functions) to a horizontal organizational structure with multidisciplinary and multifunctional teams (based on tasks to be undertaken and deliverables to be produced) can greatly improve collaboration. However, the timing of the change is important and, as with any organizational change, requires sensitive management.

Operational Concept
The systems engineering approach embraces defining the operational concept from the outset, and uses modeling to check and confirm understanding with all affected stakeholders, and to demonstrate the operational benefits to them, when possible, to secure their buy-in. By using modeling, as appropriate, the operational concept can be validated from the outset, providing a graphical definition of the way in which the system is to operate, including the various modes of operation and human–system collaboration. This approach also fosters a common understanding between customer and supplier at an early stage.

Baselines of Standards and Stakeholders
A systems engineering approach establishes clear baselines of relevant standards and key stakeholders from the outset, including a record of decisions and justifications relating to the selection of standards and stakeholder input. The baselines are subject to rigorous change control, such that the implications of any changes in inputs arising through changes in either related standards or related stakeholder input can be readily identified and considered prior to acceptance or implementation.



Definition and Apportionment of Requirements
Systems engineering takes a systematic approach to the development and definition of customer requirements, and to defining acceptance criteria for each requirement, i.e., what the project is required to do to demonstrate that customer requirements have been fully satisfied. Similarly, a systematic approach is taken toward defining system requirements and the associated verification criteria, i.e., what the project is required to do to verify that system requirements have been fully satisfied. System requirements and the design outline are then refined through analyses and assessments from different perspectives, such as operability, safety, performance, human-factors integration, constructability, maintainability, etc., in an iterative manner. Also, system requirements are apportioned to the subsystems according to the outline system design. Importantly, the systems engineering approach aims to determine the minimal set of system and subsystem requirements necessary for full coverage. Getting that balance right from the beginning is essential, because establishing too many requirements will overburden the project, while establishing too few can lead to deficiencies in the system's design and operability.
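To make the traceability idea concrete, the sketch below shows one way a project might record requirements together with their acceptance or verification criteria and their apportionment to subsystems, so that gaps can be detected automatically. It is an illustrative structure only; the class, field names, and sample entries are assumptions, not part of any particular project's toolset.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Requirement:
    """A requirement with its acceptance/verification criteria and apportionment."""
    req_id: str
    text: str
    criteria: str                                 # acceptance or verification criteria
    parent: Optional[str] = None                  # higher-level requirement it derives from
    apportioned_to: List[str] = field(default_factory=list)   # subsystems it is allocated to
    verified: bool = False

def unapportioned(reqs):
    """System-level requirements not yet allocated to any subsystem."""
    return [r.req_id for r in reqs if r.parent is None and not r.apportioned_to]

def unverified(reqs):
    """Requirements still lacking verification evidence."""
    return [r.req_id for r in reqs if not r.verified]

# Hypothetical entries, for illustration only.
reqs = [
    Requirement("SYS-001", "Trains shall operate at 90-second headways in the peak.",
                "Demonstrated by timetable simulation and trial running.",
                apportioned_to=["SIGNALLING", "ROLLING STOCK"]),
    Requirement("SS-SIG-004", "The signalling subsystem shall support moving-block operation.",
                "Verified by factory and site acceptance tests.",
                parent="SYS-001", apportioned_to=["SIGNALLING"]),
    Requirement("SYS-002", "The system shall provide step-free access at every station.",
                "Verified by design review and site inspection."),
]

print("Not yet apportioned:", unapportioned(reqs))
print("Not yet verified:   ", unverified(reqs))
```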

Figures 1 and 2 provide illustrations of the iterative approach to the refinement of system requirements and system design, to the apportionment of system requirements to subsystem requirements, and to the refinement of subsystem requirements and design. They also show how the system design is updated to reflect each decision made during the detailed design of the subsystems.

Project Life Cycle
A systems engineering approach defines an appropriate project life cycle model at the outset, one that takes into account the profile of technical and commercial risks over the project life cycle. The life cycle may be based on an industry standard life cycle, e.g., EN50126:1999, but it is tailored to ensure its applicability to the project, taking into account the nature and context of the system to be delivered, its constituent subsystems, and their relationships to the system and to one another. By this means, the system management task is clearly defined. Objectives, inputs, requirements, outputs, and phase verification criteria are clearly defined for each phase of the project life cycle, and it is only possible to move from one phase to the next when all requirements of the current phase have been demonstrably fulfilled. Figure 3 provides an illustration of a project life cycle for a typical railway system delivery project, based on EN50126:1999 and organized as a "V" representation.

Project Processes and Procedures
A systems engineering approach seeks to define and harmonize the processes and procedures to be implemented by the project through a collaboration of the personnel who are required to implement them, effectively defining how they intend to work together to deliver the project, and encouraging a collaborative approach from the outset. Inputs, tasks, and outputs are clearly identified for each process, and processes are integrated with one another to ensure that all inputs can be satisfied and that owners of shared information take into account the needs of users. The processes may be integrated through the use of a matrix, providing visibility of owners and users of shared information. This approach allows interactions between the various processes to be clearly identified, and aims to establish a dialogue between owners and users as to the format and content of information to


Figure 1. Iterative Refinement of Requirements and Design
(Flowchart blocks: System Requirements; System Design; Analyses and Assessments; Verify System Design; Apportionment of System Requirements; Design Optimization and Verification; Subsystem Requirements; Subsystem Design; Analyses and Assessments; Verify Subsystem Design)



Figure 2. Apportionment of System Requirements
(Diagram: contractual, legal, and other requirements, classified as technical (what) and process (how), are identified, analyzed, and classified with input from stated standards, EU legislation and instruments, national standards, stakeholders and users, regulatory impact analysis (RIA), and engineering functional and domain expertise; the resulting system requirements specification (SRS) and system design specification (SDS) are apportioned and traced to subsystem requirements specifications (SSRSs) and subsystem design specifications (SSDSs), and their fulfillment is tracked.)

Figure 3. Typical Project Life Cycle
("V" representation based on EN50126:1999, with phases 1. Concept; 2. System Definition and Application Conditions; 3. Risk Analysis; 4. System Requirements; 5. Apportionment of System Requirements; 6.1 Subsystem Outline Design; 6.2 Subsystem Detailed Design; 6.3 Subsystem Installation Design; 7. Manufacturing; 8. Installation; 9. System Validation; 10. System Acceptance; 11. Operation and Maintenance. Verification applies between phases; program management, the system authority, contractual mechanisms, and design and implementation span the life cycle.)



be shared and exchanged. Hence, it encourages collaboration among the various engineering disciplines and functions within the project.

Project Schedule
Following a systems engineering approach, the project schedule is based on the project life cycle and processes, and includes detailed information relating to task ownership and task durations, etc. By this means, the schedule reinforces project processes and procedures and vice versa.


The schedule is carefully checked to ensure that all inputs will be made available as they are needed, and that all outputs required of the project will be delivered. If a required input is not clearly identified in the schedule, it is doubtful that it will somehow materialize on time. Therefore, it is crucial that all inputs and outputs are included in the schedule.

Systems Integration
A systems engineering approach will aim to ensure that the system is specifically designed to enable integration, recognizing that systems integration actually takes place during the apportionment of system requirements to subsystem requirements. Integration risks are clearly identified and ranked at an early stage of the project life cycle, and specific activities are defined to mitigate potential risks at the earliest opportunity, making use of modeling, simulation, and testing as appropriate. As with all other project-related activities, it is essential that systems integration risk identification and mitigation activities be included in the project schedule.

Competence Management
A systems engineering approach seeks to identify the competencies that are required for each of the roles to be undertaken within the project. The aim is to employ both competent individuals (with significant relevant experience) and capable individuals (with the ability to become competent with experience), and ensure that early decisions are made by those who are competent to make them while capable individuals are being developed and mentored in their project roles. The mix of personal characteristics required to contribute to project definition, development, and delivery requires careful consideration. Most projects attempt to retain technically competent staff from start to finish, mainly for consistency and familiarity reasons. However, this may not be in the best interests of the staff or project

for a number of reasons. During the project's initial phases, personnel who are "shapers" are needed to conceive and establish the project structure as the vehicle for delivering the system, although they must be kept within the bounds of reality by well-grounded individuals. As the project progresses, "completer-finishers" are needed to focus on delivery. Getting the balance of personality characteristics wrong in the personnel who are assigned to each phase can adversely affect a supplier's project delivery performance.

Role of the System Authority
Depending on the nature and complexity of the system to be delivered, a systems engineering approach will implement a properly constituted system authority, whose role and responsibilities will be clearly defined, with decision-making authority to provide:

• Guidance and direction, based on highly relevant, broad experience
• System management, including oversight and coordination of subsystem delivery projects, interfaces, systems integration, and change management
• Long-term thinking

CONCLUSIONS

Systems engineering is an effective means of addressing many of the problems that we predictably experience in rail systems delivery projects. It ensures that customer requirements are actually delivered by the supplier in the most effective and efficient manner. The systems engineering approach was deliberately conceived to ensure that the long-term objectives of a project are fulfilled in an ever-changing external environment by the most efficient route possible, focusing the minds of customers and their suppliers on a common goal based on a common understanding. Ideally, systems engineering starts at the outset of project definition and continues as the facilitator for effective rail system delivery throughout the project life cycle. Although systems engineering is unable to prevent false expectations from being agreed upon at the outset, it represents the most reliable method available for successful rail system delivery. Unfortunately, it seems that systems engineering is not well understood in the railway industry, and its half-hearted implementation on many



projects has resulted in easily preventable project delivery difficulties. Managers of rail system delivery projects often don't recognize the need to adopt a systems engineering approach until their projects experience difficulties. Fortunately, even when systems engineering is not applied until the middle stage of a project, it can be used to minimize the impact of difficulties on the outcome, although in some cases the approach may be applied too late to fully recover and satisfy all of the project objectives. To reap its maximum benefits, a systems engineering approach should always be implemented at the start of a rail systems delivery project and applied throughout its life cycle, until successful system completion and turnover is accomplished.

BIOGRAPHY
Samuel Daw's 24 years of experience in the railway industry includes 2 years with Lloyd's Register Rail Limited as head of systems integration, 4 years with Siemens Transportation Systems as principal engineer for products and systems, and 3 years with Bechtel Civil as a rail systems engineer. Sam began his career as an electronic technician apprentice with ABB Signal Limited, in Plymouth, England, where he advanced to the position of electronic design engineer before joining Adtranz Limited (ABB Daimler-Benz Transportation/Signal), in Plymouth, as product manager. Sam's extensive technical experience in rail systems covers systems integration and systems integration management, including operational concept definition, modeling, and validation; requirements engineering; project life cycle definition and integrated process definition and implementation; and system architecture and interface control. Sam is a chartered engineer and registered European engineer. He is also a member of the Institution of Engineering and Technology (IET), the Institution of Railway Signalling Engineers (IRSE), the Institute of Electrical and Electronics Engineers (IEEE), and the International Council on Systems Engineering (INCOSE). Sam earned a Diploma in Management Studies with distinction and a Certificate in Management from the Plymouth Business School, University of Plymouth, Drake Circus, Plymouth, England. He also holds a BE in Electrical and Electronic Engineering with honors from Polytechnic South West, Drake Circus, Plymouth, England.





SIMULATION-AIDED AIRPORT TERMINAL DESIGN


Issue Date: December 2008

Abstract: This paper presents the application of simulation techniques to the design of a new passenger terminal at Curaçao International Airport. The purpose of the simulation was to confirm that the design would meet or exceed International Air Transport Association (IATA) level of service C (LOS C) planning standards during peak activity periods of the design day. The simulation model is a dynamic, object-oriented passenger movement analysis tool. The model is driven by a realistic flight schedule developed for a 24-hour design day, thereby providing passenger volumes and flows that reflect the arrival and departure of aircraft and passengers over the course of an entire day.

Keywords: airport terminal, design, level of service, passengers, planning, simulation

INTRODUCTION

The new terminal at Curaçao International Airport (see Figure 1) began operating in July 2006. When it opened, the terminal was capable of handling 1.6 million passengers annually, although that traffic level is not expected to occur before July 2011. Its ultimate capacity will be 2.5 million passengers per year. The airport is a terminal for Caribbean Basin traffic serving mainly European (primarily Dutch) and US tourists (via Miami), and a small business segment. There is also a very small number of transfer passengers to and from other islands in the Netherlands Antilles region, including Aruba, Bonaire, and St. Maarten. When the new terminal was being designed, 18 airlines were expected to serve the airport, including three U.S.-based companies, American, Continental, and Delta. One airline, Dutch Caribbean Express, was expected to carry almost half of all passengers and connect Curaçao to the main Caribbean islands of Jamaica, Haiti, Santo Domingo, Trinidad and Tobago, and the other islands in the Netherlands Antilles region,

including Aruba, Bonaire, and St. Maarten; and to cities in nearby Venezuela, including Caracas, Valencia, and Maracaibo. The airline would also offer long-range flights to Miami and Amsterdam. Other airlines would serve several South American countries, including Venezuela, Colombia, Surinam, and Peru, and Central American countries, including Costa Rica. Flights to Cuba and Puerto Rico would also be available from Curaçao. The projected activity meant that Curaçao International Airport was poised to become a flexible and convenient hub for the Caribbean Basin.

Michel A. Thomet, PhD


mthomet@bechtel.com

Farzam Mostoufi
fmostouf@bechtel.com

Figure 1. Architectural Rendering of the New Curaçao International Airport Terminal and Concourses at Opening

© 2008 Bechtel Corporation. All rights reserved.


ABBREVIATIONS, ACRONYMS, AND TERMS


AOCI = Airports Operations Council International
ASCE = American Society of Civil Engineers
ASME = American Society of Mechanical Engineers
CAD = computer-aided design
CADD = computer-aided design and drafting
FAA = Federal Aviation Administration
IATA = International Air Transport Association
IEEE = Institute of Electrical and Electronics Engineers
LOS = level of service
LOS C = [IATA] level of service C [standards]
SCS = Society for Modeling and Simulation International
TD&I = Transportation and Development Institute [of ASCE]

Equipment used to serve this market varies from long-range, E-size aircraft with 400 seats, such as the B-747, to small, twin-turboprop aircraft with 48 seats, such as the de Havilland Dash 8. Each day, three B-747 flights from Europe will arrive in Curaçao from London, Madrid, and Amsterdam. To serve these flights, the apron was designed with 12 positions, 5 of which have access to the terminal via passenger loading bridges. There are three positions for B-747s, three for B-767s, and six for Dash 8s. The airport has a curfew at night from approximately 11:00 p.m. to 6:00 a.m. When flights resume, activity builds to a midday peak, when nine aircraft arrive during one hour, representing 15 percent of daily aircraft arrivals. A second peak is reached at a later hour, when nine aircraft depart. During these two hours, passenger peaks are also reached, with 920 passenger arrivals and 940 departures (more than 18 percent of daily passengers). The most challenging conditions in the terminal occur during a peak hour when two B-747s arrive at the same time. The terminal is designed with sufficient facilities and public space so that the level of service (LOS) during this peak will not drop below IATA LOS C standards.

Because of the high level of concentrated activity at such a compact airport, it was more difficult to apply the planning methodologies advocated by IATA and the FAA. Therefore, it was decided that, in addition to the IATA and FAA methodologies, a passenger simulation model would be used for the design. The simulation model makes it possible to quantify the LOS in each area of the terminal and for each type of airport patron, given a specific terminal size and layout and a specific scenario of arriving and departing flights during a 24-hour day (the design day).



PASSENGER SIMULATION MODEL – TERMSIM

TERMSIM, Bechtel's proprietary simulation package, enables the airport planner to quantify the level of service experienced by passengers going through a specific terminal layout, for a traffic level based on a forecasted design day flight schedule. The simulation model is driven by the design day flight schedule, in which each flight has a time of arrival or departure, consists of a specific aircraft type, is assigned to a specific gate or remote stand, and belongs to a specific airline with a unique flight number. In addition, the number of originating or terminating passengers and the number of transferring passengers in each flight are based on projected load factors. These passengers are further divided into first, business, and economy classes.

For each departing flight, profiles of originating passengers are generated in the model at a time determined by their scheduled departure time and an earliness-of-arrival distribution. Once created by this process, each originating passenger has the following attributes:

• Travel class (first, business, or economy)
• Time and location of arrival at the airport
• Ground transportation mode used
• Airline and flight number
• Departure boarding lounge

In addition, each originating passenger is assigned the following attributes:

• Number of people travelling together in a party (party size distribution)
• Number of checked bags per passenger based on a bag distribution range, as well as a probability distribution that a percentage of these bags are oversized or require special handling (e.g., animals)




Figure 2. Curaçao Airport Terminal Passenger Flows
(Floor plan schematic of the second and first levels showing flows for arriving, departing, and escorted disabled passengers among Gates 1–5, hold rooms, transfer security, passport control, security processing, immigration, baggage claim, customs, check-in, document check, remote-stand stairs and bus pick-up/drop-off points, the sterile corridor, and the arrival and departure curbsides.)

• Number of carry-on bags per passenger sampled from a distribution range
• Number of well-wishers per party sampled from a well-wisher distribution range
• Special needs passenger assumptions to account for wheelchairs or electric carts

A similar process is used to generate terminating and transferring passenger profiles in arriving flights at their specific gate or hardstand, with appropriate attributes assigned as the passengers exit the aircraft and move into the terminal or onto a bus. The passengers thus generated move within the terminal and concourses from one area to another according to their attributes. Each area or processing station has a location in the model specified by x, y, and z coordinates tied to a scaled CADD drawing of the terminal, such as the one in Figure 2. The distance between two areas is calculated as the most direct distance along a travel route. Or, when a straight line between two areas is not physically possible, intermediate points are defined through which the passengers must pass.
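The passenger-generation step described above can be illustrated with a short sketch. The snippet below creates originating-passenger parties for one departing flight from a design-day schedule entry, sampling an earliness-of-arrival time and a few of the attributes listed. The distributions, field names, and numerical values are illustrative assumptions only; they are not TERMSIM's actual data structures or calibrated inputs.

```python
import random

def generate_parties(flight, rng=random):
    """Create originating-passenger parties for one departing flight (illustrative only)."""
    parties = []
    remaining = flight["originating_pax"]
    while remaining > 0:
        size = min(remaining, rng.choices([1, 2, 3, 4], weights=[55, 25, 12, 8])[0])
        earliness_min = rng.triangular(45, 240, 110)   # assumed earliness-of-arrival distribution
        parties.append({
            "flight": flight["flight_no"],
            "party_size": size,
            "travel_class": rng.choices(["first", "business", "economy"],
                                        weights=[2, 8, 90])[0],
            "arrival_time_min": flight["departure_min"] - earliness_min,
            "ground_mode": rng.choices(["car", "taxi", "bus"], weights=[50, 40, 10])[0],
            "checked_bags": sum(rng.randint(0, 2) for _ in range(size)),
            "carry_on_bags": size,                     # assumed: one carry-on per passenger
            "well_wishers": rng.randint(0, 2),
        })
        remaining -= size
    return parties

# Hypothetical design-day entry: departure at minute 600 (10:00) with 120 local passengers.
flight = {"flight_no": "DC123", "departure_min": 600, "originating_pax": 120}
parties = generate_parties(flight)
print(len(parties), "parties;", sum(p["party_size"] for p in parties), "passengers")
```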

Walking time between areas is computed by giving each party a walking speed, sampled from a distribution between a minimum and a maximum. Two such speed distributions are used: one for passengers with special needs and one for all other passengers. These speeds are reduced when the occupancy of the area that the passengers traverse rises above a given threshold (crowding effect). Randomization of walking speeds is used to reflect the reality of people moving in a terminal. The model simulates real-time behaviors. For example, while moving through the terminal, passengers have the option of walking, using a moving walkway, or boarding the automated people-mover. When changing levels, they can use escalators, elevators, or stairs. On the escalators and moving walkways, some passengers will stand while some will walk, adding their speed to that of the escalator or walkway. When passengers arrive at a processing area, they join a queue. Queues can be universal (a single queue serving several identical processes or checkpoints) or individual (one queue per


process or checkpoint). When passengers reach the head of the queue, they are processed. The processing time is a value sampled from a distribution range specific to each facility and passenger attribute. As passengers move through successive processing checkpoints or areas in the terminal and concourse, their movement is followed by the model and statistics are generated. The occupancy of each area (circulation or queuing) is tracked during the 24-hour simulation period. At each processing point, the flow of passengers and the queue length are tracked during the day, and these data points are, in turn, used to assess the different LOS metrics. The model collects all of these output variables on a minute-by-minute basis. This enables the planners and architects to design facilities, public spaces, and corridor widths for the peak traffic activity of the design day.
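As a rough illustration of the queuing logic just described, the sketch below processes a universal queue feeding several identical counters, samples a processing time for each party, and records queue length minute by minute. It also applies a simple rule of the kind discussed later under the simulation outputs: open another counter when the 90th-percentile wait in a window grows too long. All parameters, names, and thresholds are assumptions for illustration, not the TERMSIM implementation.

```python
import random
from statistics import quantiles

def simulate_universal_queue(arrival_times, start_counters=3,
                             service_range=(1.0, 3.0), wait_limit_90pct=10.0,
                             rng=random):
    """Universal queue feeding identical counters, with sampled processing times,
    minute-by-minute queue-length tracking, and a simple counter-opening rule."""
    free_at = [0.0] * start_counters        # time at which each counter is next free
    waits, queue_len_by_min = [], []
    pending = sorted(arrival_times)
    horizon = int(max(pending)) + 60
    served = 0
    window_start = 0                        # index into waits for the current 10-minute window

    for minute in range(horizon):
        # serve, in order of arrival, every party that can start within this minute
        while served < len(pending):
            t_arr = pending[served]
            if t_arr > minute + 1:
                break
            k = min(range(len(free_at)), key=free_at.__getitem__)   # earliest-free counter
            start = max(t_arr, free_at[k])
            if start > minute + 1:
                break
            waits.append(start - t_arr)
            free_at[k] = start + rng.uniform(*service_range)        # sampled processing time
            served += 1
        queue_len_by_min.append(sum(1 for t in pending[served:] if t <= minute + 1))
        # every 10 minutes, open another counter if the 90th-percentile wait exceeded the limit
        if minute % 10 == 9:
            recent = waits[window_start:]
            window_start = len(waits)
            if len(recent) >= 10 and quantiles(recent, n=10)[-1] > wait_limit_90pct:
                free_at.append(float(minute))
    return waits, queue_len_by_min, len(free_at)

rng = random.Random(1)
arrivals = sorted(rng.uniform(0, 180) for _ in range(400))   # hypothetical 3-hour arrival bank
waits, qlen, counters = simulate_universal_queue(arrivals, rng=rng)
print(f"counters in use: {counters}, longest queue: {max(qlen)}, "
      f"mean wait: {sum(waits) / len(waits):.1f} min")
```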

INPUT ASSUMPTIONS

Input assumptions fall into four categories:

• Description of terminal spaces and processing facilities. This is summarized in the CAD drawings of each floor plan of the terminal building and of each concourse. On each drawing, the flow paths of originating, terminating, and transferring passengers are shown, as well as the paths of the electric carts.
• Flowchart of passenger circulation and processing. For each category of passengers (originating, terminating, and transferring), a flowchart describes all the facilities the passengers have to visit, and the order in which the facilities are traversed, from the time passengers arrive at the airport until they leave the airport.
• Functional description of each processing facility and subfacility. For each category of passenger, a detailed table summarizes the facility parameters, such as the percentage of passengers using it, together with the processing time distribution (maximum, minimum, and average). Some facilities have no associated processing time, but passengers wait a specific length of time (e.g., the well-wisher leaving point in the departure hall) or wait for a specific event (e.g., the boarding call in the departure lounges).
• Minimum acceptable LOS in each facility and subfacility. In addition to the IATA levels of service, which are based on areas available per passenger, a maximum dwell time not to be exceeded is specified (queuing time plus processing time). Likewise, a 90th-percentile dwell time is also specified. This means that 90 percent of the passengers processed at that facility should have a dwell time shorter than or at most equal to that criterion.

SIMULATION OUTPUTS

The results of the simulation are summarized in five categories:

• Determination of facilities requirements. While the simulation run is progressing, the model automatically adds facilities to ensure that the desired LOS continues to be met as the demand for a facility keeps growing. For instance, at the economy check-in, when the waiting time for 90 percent of the users exceeds 10 minutes, a new counter is opened.
• Performance of each processing station. The number of passengers processed during the design day is summarized in a comprehensive table for each processing station, together with the percentage of passengers who did not have to wait in a queue, the mean wait time and maximum wait time for all passengers, and the maximum queue length.
• Queuing areas. Groups of processing stations are generally fed from a single queuing space (universal queue). For instance, multiple economy check-in counters are fed from such a single, universal queue. The LOS in the queuing area is determined by the number of passengers and the size of the queuing area, using the IATA criteria. For each queuing area there is a graph showing the number of passengers in the queue every 10 minutes, together with lines showing the boundaries between LOS bands, as shown in Figure 3.
• Clearance times. At the end of the simulation, each passenger from each category remembers the time spent in the airport, between the time of arrival or departure at a gate and the time of entrance or exit from a ground access mode. These clearance times are then ranked from the shortest to the longest and displayed in separate histograms for terminating passengers, originating passengers, and transferring passengers.
• Space occupancy. For boarding gate lounges and public lobbies where people are waiting in a given space, an occupancy graph is



Figure 3. Check-In Counters Queuing Area Passenger Density Distribution in 2031
(Graph: number of passengers in the queuing area, plotted every 10 minutes over the 24-hour design day, 0:00 to 23:00, with lines marking the boundaries of IATA LOS A, B, and C.)

Figure 4. Boarding Area (Boarding Lounge Gate C001) IATA LOS Distribution in 2031
(Graph: number of passengers occupying the boarding lounge, plotted over the 24-hour design day, 0:00 to 23:00, with lines marking the boundaries of IATA LOS A, B, and C.)

given, as well as a graph of the corresponding LOS, based on IATA criteria, as shown in Figure 4. For corridors, where people are walking, similar graphs are given, based on people per minute walking past a cross-section. The LOS is calculated by cordoning off a section of the corridor with virtual doors and counting how many people are in this section every 10 minutes. The number of people in that section is found by adding one

every time a person passes the virtual entry door and subtracting one whenever a person passes the virtual exit door.
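The virtual-door counting technique lends itself to a very small sketch. Given time-stamped entry and exit events for a cordoned corridor section, the code below tallies occupancy every 10 minutes and assigns a service-level band from the area available per person. The section area and the LOS thresholds used here are placeholders chosen for illustration; they are not the IATA standard values or the project's actual figures.

```python
def occupancy_every_10min(entry_times, exit_times, horizon_min=1440, step=10):
    """Occupancy of a cordoned section sampled every `step` minutes:
    +1 for each entry-door passage, -1 for each exit-door passage."""
    samples = []
    for t in range(0, horizon_min + 1, step):
        inside = sum(1 for e in entry_times if e <= t) - sum(1 for x in exit_times if x <= t)
        samples.append((t, inside))
    return samples

def los_band(occupancy, section_area_m2, thresholds=((2.3, "A"), (1.9, "B"), (1.5, "C"))):
    """Assign an LOS band from area per person (threshold values are assumed, not IATA's)."""
    if occupancy == 0:
        return "A"
    area_per_person = section_area_m2 / occupancy
    for min_area, band in thresholds:
        if area_per_person >= min_area:
            return band
    return "below C"

# Hypothetical door-passage events, in minutes from midnight.
entries = [600, 601, 601, 602, 605, 607, 612]
exits = [603, 606, 608, 611]
for t, n in occupancy_every_10min(entries, exits, horizon_min=620):
    if n:
        print(f"{t:>4} min: {n} people, LOS {los_band(n, section_area_m2=40.0)}")
```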

CONCLUSIONS

Curaçao is a small, compact airport in a very dynamic environment. In the contract, one of the performance specification items was to demonstrate that even during peak hours of the





ultimate forecast, the LOS should be at IATA LOS C standards or higher. TERMSIM, Bechtel's proprietary simulation package, made it possible to quickly investigate the performance of different terminal layouts and translate them into the design changes necessary to accommodate the traffic projections at the desired design standards. Because TERMSIM can be used at different levels of detail, its use is practical and effective, even for small airports like Curaçao.

BIOGRAPHIES

Michel A. Thomet is the manager of facility planning and simulation for the Aviation Services Group in San Francisco, California. He has been involved in the master planning of transportation infrastructure megaprojects around the world, including airports, rail systems, transit systems, ports, mines, and industrial cities. On these projects, Dr. Thomet has been responsible for simulation studies (capacity, level of service), traffic forecast studies, economic feasibility studies, and noise and air quality impact studies. He currently supports the New Doha International Airport project in Qatar. Previously, Dr. Thomet was the planning director at Suiselectra in Basel, Switzerland, where he coordinated a team of experts in various fields related to transportation, and traveled widely in Europe and North America to gain first-hand knowledge of state-of-the-art urban transportation systems. Earlier, as a senior electrical engineer at the Westinghouse research and development laboratories in Pittsburgh, Pennsylvania, he conducted research on solid state power conversion systems. Dr. Thomet is a member of the Institute of Electrical and Electronics Engineers (IEEE), the Society for Computer Simulation (SCS), and the Transportation & Development Institute (T&DI) of the American Society of Civil Engineers (ASCE). As a member of the executive committee of the Vehicular Technology Society of IEEE, he has been involved in preparing and supporting the annual American Society of Mechanical Engineers (ASME)/IEEE Joint Railroad Conference. Dr. Thomet has authored and published 12 technical papers (4 on electrical engineering and 8 on transportation), several of which have been presented at the Winter Simulation Conference (WSC) and at conferences sponsored by the IEEE, ASME, and Airports Operations Council International (AOCI). Dr. Thomet received an MBA in Management and Economics from the University of California, Berkeley; has a PhD in Systems Engineering and an MS in Electrical Engineering, both from Carnegie Mellon University, Pittsburgh, Pennsylvania; and received a Diploma in Electrical Engineering from the Federal Institute of Technology, Zurich, Switzerland.

Farzam Mostoufi is a senior planning and simulation specialist with Bechtel Civil, with 20 years of experience at Bechtel in the planning and design of transportation and material handling facilities, including international airport terminals, railroads, transit systems, bulk and container ports, and mining and metals production complexes. He is highly experienced in conducting technical simulation studies and economic analysis, and in the design, development, and use of specialized transportation and logistics models. Farzam has developed economic models and participated in feasibility studies to test the impact of projected operations and designed facilities on revenues, capital expenditures, and maintenance costs. He is currently supporting the New Doha International Airport project in Qatar, being designed to meet Qatar's aviation needs for decades to come. When the airport opens in 2011, as many as 8,000 passengers will be able to use the 590,000+ m² passenger terminal complex in a single hour, and the 4,850 m eastern runway will be among the longest commercial runways in the world, allowing for unrestricted operations by Airbus A380 aircraft even under extreme meteorological conditions. Farzam received an MBA in Finance from Golden Gate University, San Francisco, California; has a BS in Economics and Insurance from Tehran College, Tehran, Iran; and has completed course requirements in the Doctor of Business Administration (DBA) degree program at Golden Gate University. As a lecturer at Golden Gate University, he taught graduate and undergraduate level courses in computer modeling, simulation, and database systems. Farzam also holds a Certificate in Airport Systems Planning from Massachusetts Institute of Technology, Cambridge, Massachusetts.



SAFE PASSAGE OF EXTREME FLOODS – A HYDROLOGIC PERSPECTIVE


Issue Date: December 2008

Abstract: This paper takes a fresh look at uncertainty in estimates of the inflow design floods (IDFs) used for spillway design for safe passage of extreme floods through dams, particularly dams with a height of 30 m or less. Development of IDFs currently involves statistical analysis; thus, IDFs incorporate uncertainties. The paper defines the extreme flood and suggests a means by which it can be estimated in order to incorporate uncertainty in the IDF. A clear understanding of the physical site conditions and the physical processes in question, as well as engineering judgment, are paramount in developing a safe design.

Keywords: ARI, bootstrap CI, dam, ELV, EMA, flood, hydraulics, ICOLD, IDF, PMF, PMP, spillway, WRC

INTRODUCTION

Many developed countries use the probable maximum flood (PMF) as the design basis for establishing the inflow design flood (IDF) for dams that are classified as high hazard because of their high dam heights and large storage volumes. The failure of such dams would cause loss of life and result in major adverse economic consequences due to damage to properties downstream. The PMF is defined as the flood that may be expected from the most severe combination of critical meteorological and hydrologic conditions that are reasonably possible for the drainage basin in question. In general, the PMF is considered to be statistically indeterminate. When the PMF is not used as the basis of the IDF, it is at least used as a check flood to ensure that a dam will not fail catastrophically if overtopped. However, in some developed and many developing countries, the peaks of the IDF are estimated from probabilistic approaches, regardless of the hazards these dams may pose to downstream inhabitants. This is particularly true for small dams with heights of 30 m or less. In many cases, the available flood-flow data is barely sufficient for a probabilistic analysis; therefore, estimates of design-flood peak discharges that use probabilistic approaches are highly uncertain. The International Commission on Large Dams (ICOLD) has charged the Technical Committee of

Hydraulics for Dams with developing a bulletin, entitled "Safe Passage of Extreme Floods," to provide insight and approaches for determining design-flood peak discharges when probabilistic approaches are used. The bulletin was also developed to provide a better design of the outlet works that could safely pass extreme floods. The purpose of this paper is to capture the essence of Chapter 2 of that bulletin, "Confidence Level Assessment of Design Flood Estimates," which suggests using the upper bound of the confidence limits to provide a margin of safety in defining IDFs for dams. The precision of confidence-level determinations may also be improved by using recently developed algorithms in determining quantile estimators for some distributions that are commonly used in flood-flow frequency analysis. This paper provides additional discussion, not presented in the ICOLD bulletin, regarding the estimate of confidence levels in determining extreme floods for dam design.

Samuel L. Hui
shui@bechtel.com

André Lejeune, PhD


Université de Liège, Belgium

agh.lejeune@ulg.ac.be

CURRENT PRACTICE AND UNCERTAINTY OF IDF ESTIMATES

Vefa Yucel
National Security Technologies, LLC

yucelv@nv.doe.gov

The current practice in the design of dams is to first select the IDF appropriate for the hazard potential of a dam and reservoir and then to determine its peak flow rate and/or its entire hydrograph. Then the spillway and outlet works can be designed, or adequate storage can be allocated in the reservoir, to safely

© 2008 Bechtel Corporation. All rights reserved.

ABBREVIATIONS, ACRONYMS, AND TERMS


ARI = average recurrence interval
CI = confidence interval
ELV = estimated limiting value
EMA = Expected Moments Algorithm
ICOLD = International Commission on Large Dams
IDF = inflow design flood
LP3 = Log Pearson Type III
PMF = probable maximum flood
PMP = probable maximum precipitation
WRC = Water Resources Council

accommodate the design flood without endangering the integrity of the dam and its appurtenant structures. In many developed countries, dams are classified by their hazard potential, with regulations governing the selection of the IDF. For high-hazard dams, the PMF, or the flood that may be expected from the most severe combination of critical meteorological and hydrologic conditions that are reasonably possible, is commonly adopted as the IDF. When the PMF, which is considered to be statistically indeterminate, is not used as the basis of the IDF, it is used as a check flood to ensure that the dam does not fail catastrophically if overtopped. The IDF is derived either deterministically with a precipitation-runoff model, using a design rainfall sequence and other basin hydrologic parameters appropriate for the design hydrometeorological conditions, or by means of a statistical analysis, using historical flood peaks observed at or near the proposed dam site. In the former case, the design precipitation values are generally determined using historical data. For example, in deriving the probable maximum precipitation (PMP) used in the development of the PMF, the 100-year or other frequency precipitation values form the basis of the PMP estimates. Therefore, regardless of the approach taken, determining the design-flood peak discharge involves statistical analysis of data. In many cases, the available data is insufficient (in terms of years of data collected), resulting in uncertainty in the estimate of the IDF. Dam design professionals generally recognize that good engineering demands realistic or justified design, and that dams should be designed to accommodate the maximum flood computed based on approved hydrologic design criteria (i.e., ranging from a 100- to a 10,000-year IDF). Therefore, an IDF estimate that exceeds the design-flood peak discharges (i.e., an extreme flood), as advocated by the ICOLD bulletin, should account for any uncertainty, usually defined by the confidence limits.

FLOOD FREQUENCY ANALYSIS




The IDF chosen from a flood frequency analysis is generally located in the upper tail of the cumulative distribution of the observed phenomena. This would correspond to an average recurrence interval (ARI) of 100 years to more than 10,000 years, depending on the potential hazard a dam poses to the downstream inhabitants and properties. Unfortunately, available sample data may include, at best, 100 years of observations, and often less than 50 or even 20 years. Without some assumptions about the population distribution of the data, we would theoretically need 3,000 years of observation to roughly define the 1,000-year event. In this case, the interval bounded by the highest and the 7th highest of the sample data would have approximately a 90 percent chance of containing the 1,000-year value. [1] Therefore, when there is a limited amount of available historical data at a site, the use of regionalization techniques may be required to increase the database in deriving a reliable at-site frequency distribution. In any case, extrapolation beyond the database is required in estimating the design frequency event, usually in the range of a return period of 1,000 to 10,000 years. The extrapolation to derive extremely rare peak flows could lead to uncertain estimates, with the resulting flood quantile estimates highly dependent on the choice of theoretical distribution. Regardless of the appropriateness of the probability distribution used, the extrapolation of data to the 1,000- and 10,000-year range from even 100 years of available data is a stretch. However, sound design-flood estimates can be achieved by explicitly accounting for uncertainty in the estimates by means of confidence intervals (CI), relying on a clear understanding of the hydrometeorological characteristics of the watershed, and using professional judgment. In addition, the evaluation of the safe passage of a flood requires the routing of the design flood hydrograph through the reservoir. It also requires a determination of the corresponding flood volumes and distributions.
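The 90 percent figure quoted from [1] can be checked with a few lines of arithmetic. If the true 1,000-year discharge is exceeded in any year with probability 1/1,000, then in 3,000 years of record the number of exceedances is binomially distributed, and the true value lies between the largest and the 7th-largest observation whenever there are between one and six exceedances. The short calculation below makes that check; it is only a verification of the cited probability, not part of the paper's method.

```python
from math import comb

def prob_between_largest_and_7th(n_years=3000, p_exceed=1 / 1000):
    """P(1 <= number of exceedances of the true 1,000-year value <= 6) in n_years of record."""
    pmf = lambda k: comb(n_years, k) * p_exceed**k * (1 - p_exceed) ** (n_years - k)
    return sum(pmf(k) for k in range(1, 7))

print(f"{prob_between_largest_and_7th():.3f}")   # about 0.92, i.e., roughly a 90 percent chance
```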



CONFIDENCE INTERVALS

The precision of a design event estimate, in terms of a return period in years (described as T-years; i.e., T = 100 years or T = 1,000 years), that is derived from a probability distribution fitted to the sample data can be quantified by computing a CI of a certain confidence level, e.g., 95 percent, for the T-year event. The CI is a range of estimated values within which the true value of the T-year event is expected to lie. If different CIs are derived using various methods, the CI giving the smaller range should be chosen. The statistical distribution of the T-year event is usually unknown; therefore, it is not possible to derive an exact CI for the T-year event. However, analytical expressions (i.e., first-order approximations) have been developed that are acceptable for large sample sizes. Because hydrologic samples are typically small, these approximate CIs may lack accuracy. Methods for computing CIs are further summarized below.

IMPACT OF RECORD LENGTH ON CONFIDENCE INTERVALS

The accuracy of estimates of the T-year events and the associated CIs are functions of the number of years of records available for the analysis, the assumed probability relationships (frequency distribution), and the way the sample statistics are estimated. As the length of the record increases, the reliability of the estimate also increases. Approximate values of reliability (percent chance) can be calculated for different return periods. The approximate values for infrequent events are shown in Table 1, giving the approximate reliabilities as a function of confidence limit, ARI, and

record length. [2] For example, it is almost a certainty that, with a 25-year historical database, the estimate for the 2-year ARI will fall within plus or minus 50 percent of the estimated value; but the chance of having the estimated value fall within plus or minus 10 percent of the estimate is only about 68 percent. [3] The table also depicts the risk of having a flood with an ARI greater than T-years during the life of a project. It is important to note, however, that the lifetime of a dam, generally defined in economic terms, is different from the real lifetime of the structure, which is usually greater.
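The lifetime-risk figures in the last two columns of Table 1 follow from the standard relationship risk = 1 − (1 − 1/T)^N, where T is the ARI and N is the project lifetime in years. The short script below reproduces those columns to within rounding; it is included only to make the calculation explicit and uses no data beyond T and N.

```python
def lifetime_risk(ari_years, lifetime_years):
    """Probability of at least one flood with ARI >= T during an N-year lifetime."""
    return 1.0 - (1.0 - 1.0 / ari_years) ** lifetime_years

for T in (2, 10, 50, 100):
    risks = [f"N = {N}: {lifetime_risk(T, N):.1%}" for N in (30, 50)]
    print(f"ARI {T:>3} years -> " + ", ".join(risks))
# Compare with Table 1 (100/100, 95/99, 45/63, 26/39); values agree to within 1 percentage point.
```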

ACCOMMODATING UNCERTAINTY IN THE IDF

On the basis of the method suggested in the U.S. Geological Survey's Guidelines for Determining Flood Flow Frequency, Bulletin 17B [4], for a project involving a 1,000-year flood of 4,478 m³/s derived from a 39-year systematic record using the Log Pearson Type III (LP3) distribution, the corresponding upper 95 percent confidence limit is estimated to be 7,632 m³/s. The extreme flood used for the design of such a project would include all floods up to 7,632 m³/s, an increase of about 70 percent over the expected value that is normally used as the IDF. In this example, the dam design professionals would have to find a way to accommodate the extra 3,154 m³/s, other than relying on the planned spillway, if they wish to have a 95 percent confidence that the dam could safely pass the IDF based on data accuracy alone. Using the same database and the same LP3 distribution, the 10,000-year-flood peak discharge and the corresponding upper 95 percent confidence limit are estimated to be 11,768 m³/s and 19,039 m³/s, respectively.


Table 1. Approximate Reliabilities as a Function of Confidence Limit


ARI,     Record Length,     Reliability (% chance) within         Risk of Flood (ARI = T-years)
years    years              ±10%      ±25%      ±50%              Within Lifetime (N-years)
                                                                   N = 30        N = 50
  2        10                47        88        99                100%          100%
           25                68        99       100
          100                96       100       100
 10        10                46        77        97                 95%           99%
           25                50        93        99
          100                85       100       100
 50        10                37        70        91                 45%           63%
           25                46        91        97
          100                73        99       100
100        10                35        66        90                 26%           39%
           25                45        89        98
          100                64        99       100

December 2008 Volume 1, Number 1

51

The dam designers would have to find ways to accommodate the additional 7,271 m³/s in order to pass the extreme flood safely through the dam if the 10,000-year flood is adopted as the IDF. It is possible that the extreme flood defined by the upper limit of the CI could exceed the PMF or the estimated limiting value (ELV) flood. The dam designers would need to perform sufficient analyses to ensure that this would not happen and to minimize any unnecessary over-design.

METHODS TO DERIVE CONFIDENCE INTERVALS

CIs based on asymptotic theory, along with CIs constructed using the non-central t-distribution, are commonly used in practice. Stedinger [5] discussed these methods in computing CIs for quantile estimates. He concluded that the use of the non-central t-distribution and the asymptotic distribution normally works well with observations and their logarithms if the data is normally distributed. For the LP3 distribution with a known skew coefficient, a combination of the non-central t-distribution with an adjustment based on the asymptotic variance of the quantile estimator also generally performs satisfactorily. However, the approach suggested by Bulletin 17B did not perform as well as the other methods because a possible error in the specified population skewness coefficient was ignored.

Recent literature on CIs includes attempts to remedy some of the issues identified with the current commonly used CI estimating methods, particularly issues associated with procedures suggested in Bulletin 17B. These can be summarized as analytical methods and bootstrap methods. One of the recent analytical methods is the Expected Moments Algorithm (EMA) developed by Cohn et al. [6] The EMA is an attempt to remedy the shortcomings of the Bulletin 17B procedures, in which the parameters used to describe the distribution are derived independently of the distribution, without modifying or abandoning the use of the method of moments, a basic statistical structure in Bulletin 17B. EMA is an iterative method-of-moments procedure that computes the parameters of the LP3 distribution using systematic flood peak data as well as historic flood peaks, with analytical expressions for the asymptotic variance of EMA flood-quantile estimators and CIs for flood quantile estimates. Using the parametric bootstrap method (also known as Monte Carlo simulations), Cohn et al. demonstrate that their expressions provide useful estimates of the CIs even though they are not exact.


Bootstrap methods are among the many modern tools used by statisticians. There are nonparametric and parametric bootstrap methods. The idea behind the nonparametric bootstrap method is to use the sample data at hand to generate many artificial samples of the same size using random sampling with replacement. For each artificial sample, a quantile of interest can be computed based on the sample distribution. If N samples of data are generated, then there will be N estimates of the quantile of interest. If a 95 percent CI is sought, then the 2.5 and 97.5 percentiles of the N sample quantiles provide the needed lower and upper bounds of the 95 percent CI. The major advantage of this method is that it can be applied to any estimating problem without the need to make assumptions about the uncertainty distribution around the estimate of the statistic of interest, which is often the problem with the analytical confidence expressions.

The parametric bootstrap method has four main steps: (1) use the observed data to compute the parameters of a certain assumed parametric distribution, (2) generate a large number of samples from the assumed parametric distribution, (3) calculate the statistic of interest for each sample, and (4) sort these values and use the appropriate quantiles to define the CI. The parametric bootstrap method of computing confidence levels is also found in the regional flood frequency analysis using L-moments developed by Hosking and Wallis. [7] Hosking and Wallis show that the method performs well even with the issues of heterogeneity and dependency among the gauging stations used in the regional analysis; the method also avoids the issue of making assumptions about the uncertainty distributions around the quantiles of interest.
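A minimal sketch of the nonparametric bootstrap just described is given below for a T-year flood quantile. For simplicity it fits a two-parameter lognormal distribution to each resample in place of the LP3 distribution discussed in the paper, and the flood record itself is synthetic; the sample values, the choice of distribution, and the number of resamples are all assumptions made purely for illustration.

```python
import math
import random
from statistics import mean, stdev, NormalDist

def t_year_quantile_lognormal(peaks, T):
    """Quantile with annual exceedance probability 1/T from a lognormal fit (stand-in for LP3)."""
    logs = [math.log(q) for q in peaks]
    z = NormalDist().inv_cdf(1.0 - 1.0 / T)
    return math.exp(mean(logs) + z * stdev(logs))

def bootstrap_ci(peaks, T=1000, n_resamples=5000, level=0.95, rng=random):
    """Nonparametric bootstrap CI: resample the record with replacement,
    re-estimate the T-year quantile each time, and take the tail percentiles."""
    estimates = sorted(
        t_year_quantile_lognormal(rng.choices(peaks, k=len(peaks)), T)
        for _ in range(n_resamples)
    )
    lo = estimates[int((1 - level) / 2 * n_resamples)]
    hi = estimates[int((1 + level) / 2 * n_resamples) - 1]
    return lo, hi

# Synthetic 39-year record of annual peak discharges (m³/s), for illustration only.
rng = random.Random(42)
record = [math.exp(rng.gauss(6.5, 0.6)) for _ in range(39)]

best = t_year_quantile_lognormal(record, 1000)
lo, hi = bootstrap_ci(record, T=1000, rng=rng)
print(f"1,000-year estimate: {best:,.0f} m³/s; 95% bootstrap CI: {lo:,.0f} to {hi:,.0f} m³/s")
```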

CONCLUSIONS

We have defined the extreme flood in the context of the bulletin "Safe Passage of Extreme Floods" for the design of a dam. We have also suggested means by which the extreme flood can be estimated, while accounting for uncertainty. As in flood hydrology and in any predictive analysis that deals with nature, a clear understanding of the physical site conditions and the physical processes in

52

Bechtel Technology Journal

question, as well as engineering judgment, are paramount in the development of a safe design. The question of how confident we are in our estimates of the confidence levels remains the real issue. The papers referenced here point out the inexact nature of the approaches used in deriving CIs, as well as the associated shortcomings of these procedures. Some methods perform better than others, depending on the sample data at hand. In [8], six methods are evaluated, including both analytical approximate methods and the bootstrap method. The following is an excerpt from that paper, which also makes reference to a paper presented at the American Water Resources Associations 1997 conference. [9]
Nonparametric computer-intensive Bootstrap CIs are compared with parametric CIs for simulated samples, drawn from an LP3 distribution. Using this methodology, biased in favor of parametric CIs since the parent distribution is known, Bootstrap CIs are shown to be more accurate for small to moderate confidence levels (≤80%), when parameters are estimated by the indirect method of moments (Bulletin 17B). However, the actual level of Bootstrap CIs is almost always lower than the target level. It is expected that, compared to parametric CIs, Bootstrap CIs perform even better when applied to actual series of maximum annual floods, since they need not come from an LP distribution.
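The comparison described in this excerpt can be reproduced in miniature. The sketch below is an illustrative assumption rather than a reconstruction of the referenced study: it substitutes a lognormal parent for the LP3 distribution to keep the code short, draws repeated synthetic records, and reports how often a nonparametric bootstrap CI and a simple normal-theory (parametric) CI actually cover the known true quantile.

```python
import numpy as np

rng = np.random.default_rng(7)
MU, SIGMA, Z_99, N, TRIALS, LEVEL = 6.0, 0.5, 2.3263, 40, 500, 0.90
true_q = np.exp(MU + SIGMA * Z_99)        # true 0.99 quantile of the assumed parent

def bootstrap_ci(sample, n_boot=1000):
    est = [np.quantile(rng.choice(sample, len(sample), replace=True), 0.99)
           for _ in range(n_boot)]
    return np.quantile(est, [(1 - LEVEL) / 2, (1 + LEVEL) / 2])

def normal_theory_ci(sample):
    # Parametric CI assuming a lognormal parent: quantile of the log-flows plus or
    # minus an approximate (delta-method style) standard error, for illustration only.
    logs = np.log(sample)
    m, s = logs.mean(), logs.std(ddof=1)
    q_log = m + Z_99 * s
    se = s * np.sqrt(1.0 / len(sample) + Z_99**2 / (2 * (len(sample) - 1)))
    z = 1.6449                            # two-sided 90% normal multiplier
    return np.exp(q_log - z * se), np.exp(q_log + z * se)

cover = {"bootstrap": 0, "parametric": 0}
for _ in range(TRIALS):
    sample = rng.lognormal(MU, SIGMA, N)
    for name, ci in (("bootstrap", bootstrap_ci(sample)),
                     ("parametric", normal_theory_ci(sample))):
        cover[name] += ci[0] <= true_q <= ci[1]

for name, hits in cover.items():
    print(f"{name}: observed coverage {hits / TRIALS:.2f} vs target {LEVEL}")
```

With a short record and an extreme quantile, the observed bootstrap coverage typically falls below the target level, which is consistent with the behavior noted in the excerpt.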

It is recommended that several methods be used in defining confidence levels, and, based on performance criteria and professional judgment, the best method should be selected. Consultation with a professional statistician is always a prudent way for the hydrologist to build further confidence in his or her estimates.

This paper was presented at the International Conference on Dam Safety Management, held in Nanjing, China, in October 2008. The original version is slated for inclusion in the conference proceedings, which will be published at a future date. Additionally, the original version of this paper is scheduled for publication in the January issue of La Houille Blanche, the technical journal of the Société Hydraulique de France in Paris and one of the premier hydraulic engineering technical journals in France and the world. The paper was translated into French by Dr. Lejeune (who will be listed as lead author on the translation) and will be submitted under the title "Passage en sécurité des crues extrêmes."

REFERENCES

[1] ICOLD–CIGB, International Symposium on Dams and Extreme Floods (Tomo 1 and Tomo 2), Granada, Spain, September 1992 (see <www.icold-cigb.org> and <http://www.wrm.ir/ircold/pdf/eng_books/Proceedings5.pdf>).

[2] M. Wanielista, R. Kersten, and R. Eaglin, Hydrology: Water Quantity and Quality Control, 2nd Edition, John Wiley & Sons, New York, NY, 1997, access via <http://he-cda.wiley.com/WileyCDA/HigherEdTitle/productCd-0471072591,courseCd-E63100.html>.

[3] G.W. Kite, Frequency and Risk Analyses in Hydrology, Water Resources Publications, LLC, Littleton, CO, 1988, access via <http://www.wrpllc.com/books/kitebooks.html>.

[4] "Guidelines for Determining Flood Flow Frequency," Bulletin 17B of the Hydrology Subcommittee, Office of Water Data Coordination, and Interagency Advisory Committee on Water Data, U.S. Geological Survey, Reston, VA, 1982 <http://choctaw.er.usgs.gov/new_web/reports/other_reports/flood_frequency/guidelinesflofreq.html>.

[5] J.R. Stedinger, "Confidence Intervals for Design Events," ASCE, Journal of Hydraulic Engineering, Vol. 109, No. 1, January 1983, pp. 13–27, access via <http://cedb.asce.org/cgi/WWWdisplay.cgi?8300105>.

[6] T.A. Cohn, W.L. Lane, and J.R. Stedinger, "Confidence Intervals for Expected Moments Algorithm Flood Quantile Estimates," Water Resources Research, Vol. 37, No. 6, June 2001, pp. 1695–1706 <http://www.agu.org/pubs/crossref/2001/2001WR900016.shtml>.

[7] J.R.M. Hosking and J.R. Wallis, Regional Frequency Analysis: An Approach Based on L-Moments, Cambridge University Press, Cambridge, UK, May 1997, see <http://www.springerlink.com/content/g848127107ul6403/>.

[8] V. Fortin and B. Bobee, "Nonparametric Bootstrap Confidence Intervals for the Log Pearson Type III Distribution," Transactions on Ecology and the Environment, Vol. 6, WIT Press, 1994, access via <http://library.witpress.com/pages/listpapers.asp?q_bid=245&q_subject=Ecology>.

[9] J.F. England, Jr., and T.A. Cohn, "Scientific and Practical Considerations Related to Revising Bulletin 17B: The Case for Improved Treatment of Historical Information and Low Outliers," Proceedings of the World Environmental & Water Resources Congress: Restoring Our Natural Habitat, K.C. Kabbes, ed., Tampa, FL, May 15–19, 2007, Paper 409272565, see <https://www.asce.org/bookstore/book.cfm?book=7402>.

BIOGRAPHIES

Samuel L. Hui has 42 years of hydraulic engineering experience, including 35 years at Bechtel. His vast technical knowledge and skills have been applied to more than 40 projects in the United States and around the world, including such Bechtel megaprojects as the Jubail Industrial City, King Khalid International Airport, and King Fahd International Airport in Saudi Arabia;


Sohar Aluminum Smelter in Oman; and Fjaral Aluminum Smelter in Iceland. Currently, as the senior principal engineer with Bechtel Civil, Sam participates in hydraulic or hydrologic engineering tasks on multiple projects, and is the off-project design reviewer of hydraulic/hydrology tasks for the Guinea Alumina Project in West Africa, which ranks as one of the largest and most significant greenfield projects ever to be developed. Sam was manager and global technical lead for Bechtel's Hydraulics and Hydrology Group, which performed technically challenging hydraulics and hydrologic studies worldwide, from 1995 to 2004. Sam's many professional memberships include the U.S. Society on Dams (USSD), in which he serves on the Technical Committee on Hydraulics of Dams; and the International Commission on Large Dams (ICOLD), in which he serves on the subcommittee charged with the preparation of the ICOLD bulletin titled Safe Passage of Extreme Floods. He was also a member of the American Society of Civil Engineers (ASCE), and formerly chaired the Surface Water Hydrology Technical Committee's control group and served on the subcommittee that oversaw revisions to the ASCE Hydrology Handbook (second edition). Sam holds MS and BS degrees in Civil Engineering from Queen's University, Kingston, Ontario, Canada. He is a registered civil engineer in the province of Ontario, Canada, and in the state of California.

Professor André Lejeune teaches at the Université de Liège, Belgium, where he heads the Department of Hydraulics and Transport and the Laboratory of Applied Hydrodynamics and Hydraulic Construction. Dr. Lejeune has also taught at the International Institute for Infrastructural, Hydraulic and Environmental Engineering, Netherlands, and l'École Polytechnique Fédérale de Lausanne, Switzerland, as a visiting professor. Dr. Lejeune has lent his outstanding hydraulics expertise to projects in 70 countries, including China, Egypt, Ethiopia, Indonesia, Iran, Israel, Japan, Jordan, Kenya, Madagascar, Pakistan, Poland, Thailand, the former Soviet Republic, Venezuela, and Yemen. He currently participates in a feasibility study for the Red Sea–Dead Sea Canal, a potential joint Jordanian–Israeli initiative to bring water from the Red Sea to the Dead Sea, which is shrinking rapidly due to evaporation and upstream water diversion. Also, Dr. Lejeune recently served as an advisor for post-earthquake reconstruction of the Jian River irrigation dam near the city of Mianyang, in Sichuan, China. Dr. Lejeune is a member of the Belgian Royal Academy of Science, and a past peer reviewer for ABET, Inc. (formerly the Accreditation Board for Engineering and Technology), which accredits educational programs in applied science, computing, engineering, and technology. He is also a member of the International Commission on Large Dams (ICOLD), and currently chairs the Technical Committee on Hydraulics for Dams.

In 1972, Dr. Lejeune received the Lorenz G. Straub Award, a prestigious international award presented annually by the University of Minnesota to the author of a particularly meritorious doctoral thesis on a topic related to hydraulic engineering. His paper was titled "The Operating Forces for the Opening and Closing of Miter Gates on Navigation Locks." Dr. Lejeune holds a PhD in Hydraulics, and has received the US equivalent of master of science degrees in Oceanography and Civil Engineering, all with highest marks, from the Université de Liège, Belgium.

Vefa Yucel is a principal engineer with National Security Technologies, LLC (NSTec), which provides management and operations (M&O) services for the Nevada Test Site (NTS), a 1,350-square-mile area northwest of Las Vegas, Nevada. He leads GoldSim modeling (a contaminant transport and regulatory compliance model) development for low-level and transuranic (TRU) waste performance assessments and compliance evaluations of two disposal facilities, closure planning and cover design, and environmental monitoring of the site's waste management facilities. As engineering supervisor and principal hydrologist at NTS with Bechtel Nevada, he managed many of the same tasks he currently manages as one of NSTec's principal engineers. Earlier, with Bechtel Environmental, Inc., in Oak Ridge, Tennessee, Vefa supervised the Geotechnical and Hydraulic Engineering Services hydraulics and hydrology group, which provided specialty services to environmental restoration and waste management projects in surface water and groundwater hydrology, and fate and transport modeling. He was originally a senior engineer in Bechtel's Hydraulics and Hydrology Group in San Francisco, where he was engaged in hydrologic studies for water resource and flood control projects. Vefa is a member of the American Society of Civil Engineers (ASCE), and served on ASCE's Task Committee on Paleoflood Hydrology in 1999. He has authored numerous technical papers, most of which have been presented at professional conferences worldwide, including "Decision Support System for Management of Low-Level Radioactive Waste Disposal at the Nevada Test Site," "Hydrologic Simulation of the Rio Grande Basin, Bolivia," "Pollutant Loadings on the San Francisco Bay Estuary," "An Integrated Model for Surface–Subsurface Fate and Transport and Uncertainty Analyses (Part I: Theory, Part II: Application)," and "Development of Rainfall Intensity-Duration-Frequency Data for the Eastern Province of Saudi Arabia." Vefa holds MS and BS degrees in Engineering from Iowa State University, Ames, Iowa, and has completed graduate courses in Civil Engineering at Stanford University, Palo Alto, California. He is a registered civil engineer in the state of California.


Bechtel Communications Technology Papers

FMC: Fixed-Mobile Convergence
Jake MacLeod and S. Rasoul Safavian, PhD

The Use of Broadband Wireless on Large Industrial Project Sites
Nathan Youell

Desktop Virtualization and Thin Client Options
Brian Coombe

Communications – AT&T: The AT&T Mobility project involves construction, engineering, procurement, project management, and site acquisition for 3G mobile networks in the United States.

FMC: FIXED–MOBILE CONVERGENCE

Originally Issued: June 2006
Updated: December 2008

Abstract – Fixed–mobile convergence (FMC) is providing a new direction for the future of telecommunications, with a potentially profound impact on various segments and industries. As the boundaries between various services blur, so do the rules of engagement of various industries. The impact is more than purely technical. FMC could potentially redefine the nature of telecommunications, information, and entertainment services and how various types of service providers compete. This paper looks into FMC drivers; various technical aspects of FMC; and the current evolutionary steps toward FMC implementation, such as generic access network (GAN), cellular–wireless local area network (WLAN) integration, femtocells, and next-generation networks (NGNs).

Keywords – 3GPP, 3GPP2, end-to-end, fixed–mobile convergence, FMC, FTTH, GAN, generic access network, integration, interoperability, interworking, layer, mobility, next-generation network, NGN, nomadicity, seamless, security, TISPAN, VDSL, VHE, virtual home environment, WiMAX, wireless local area network, WLAN
INTRODUCTION

Fixed–mobile convergence (FMC) is not new! The original concept has been around since the early 1990s and was originally perceived as representing the ultimate telecommunications merger. The future would be one in which there would be little or no difference between fixed and mobile phone services. Each user would have a single number and receive a single bill. There would be a quality of service (QoS) range, end-to-end security, and a single point of contact for customer service. The key focal points in this vision were, and still are, services built around individual users, independently of their access networks!

So what went wrong? The answer varies depending on whom you ask, but perhaps the main shortcomings could be attributed to technology immaturity; lack of unified standards; slow acceptance of packet services in the mobile arena; lack of (or delay in) delivery of the appropriate terminal devices; and, perhaps most importantly, absence of the appropriate market business drivers. Adding a further blow, the slowdown of the telecommunications boom has had a huge impact on capital expenditures (CAPEX). Technologically, FMC on the core network was hampered by the QoS issues of the Internet Protocol (IP) backbone. On the mobile side, packet data services had a much slower than desired or expected takeoff. Fortunately, the recent emergence of common multimedia services standards such as session initiation protocol (SIP) and IP multimedia subsystem (IMS), both wireless and wireline, has made it possible and efficient to share not only network infrastructure, but also application and billing layers, thus reversing the technical limitations.

This paper is organized under five headings:

• What is FMC? – Defines FMC and its main drivers.
• Different FMC Approaches and Solutions – Addresses various evolutionary steps toward full FMC, beginning with an examination of wireless local area network (WLAN)–cellular integration, generic access networks (GANs), and related issues.
• Next-Generation Networks – Examines a comprehensive new approach to FMC known as next-generation networks (NGNs) and addresses NGN requirements, architecture, functional elements, etc.
• Security Concerns – Addresses the all-important issue of end-to-end security.
• Conclusions – Provides concluding remarks and future research directions.

Jake MacLeod
jmacleod@bechtel.com

S. Rasoul Safavian, PhD

srsafavi@bechtel.com

2008 Bechtel Corporation. All rights reserved.

ABBREVIATIONS, ACRONYMS, AND TERMS


3G third generation, enhanced digital mobile phone service at broadband speeds enabling both voice and nonvoice data transfer Third Generation Partnership Projecta collaboration agreement among several communications standards bodies to produce and maintain globally applicable specifications for a third-generation mobile system based on GSM technology Third Generation Partnership Project 2a sister project to 3GPP and a collaboration agreement dealing with North American and Asian interests regarding third-generation mobile networks authentication, authorization, and accounting authentication center access gateway control function access gateway authentication and key agreement access MGF access node American National Standards Institute access point application program interface access point name average revenue per user application server application service provider authentication center base station base station controller base transceiver station CommunicationAssistance for Law Enforcement Act capital expenditures charging collection function code division multiple access HLR HPLMN HSPA HSS HTN HTTP IETF IKE IM IMS IP IPSec IPv4 IPv6 ISDN ISIM DSL DSS DTM E911 EAS EDGE EMTEL ESP ETSI EV-DO FCC FE FMC F-MMS FMS FNO FNP FTTH GAA GAN GANC GBA GERAN GGSN GMSC GPRS GPS GSM digital subscriber line digital subscriber signaling dynamic asynchronous transfer mode emergency 911 (service) emulation AS enhanced data rates for GSM evolution emergency telecommunication encapsulating security payload European Telecommunications Standardization Institute evolutiondata optimized (3GPP2 standard) Federal Communications Commission functional entity fixedmobile convergence fixed line MMS fixed-mobile substitution fixed network operator fixed network portability fiber to the home generic authentication architecture generic access network GAN controller generic bootstrapping architecture GSM/EDGE RAN gateway GPRS support node gateway MSC general packet radio service global positioning system global system for mobile communications home location register home PLMN high-speed packet access home subscriber server handoff trigger node hypertext transport protocol Internet Engineering Task Force Internet key exchange instant messaging IP multimedia subsystem Internet Protocol IP security IP version 4 IP version 6 integrated services digital network IMS subscriber identity module

3GPP

3GPP2

AAA AC AGCF AGW AKA A-MGF AN ANSI AP API APN ARPU AS ASP AuC BS BSC BTS CALEA CAPEX CCF CDMA

cdma2000 A family of standards, developed through comprehensive proposals from Qualcomm, describing the use of code division multiple access technology to meet 3G requirements for wireless communication systems CGW CN CS CSCF DMH DoS charging gateway core network circuit switched call session control function dual-mode handset denial of service


ISP ISUP IT ITU-T

Internet service provider ISDN user part information technology International Telecommunication UnionTelecommunication Standardization Sector ISDN Q.921 user adaptation layer multimedia broadcast multicast service media gateway control media gateway function man in the middle multimedia messaging service mobile network operator mobile number portability minutes of use MPEG-1 Audio Layer 3 mobile station mobile switching center Multiservice Forum network attachment subsystem network address translation next-generation network operation, administration, and management online charging system original equipment manufacturer Open Mobile Alliance operating expenditures open service access private branch exchange personal digital assistant packet data gateway packet data serving node PSTN/ISDN emulation subsystem public land mobile network plain old telephone service packet switched public safety answering point public switched telephone network quality of service resource and admission control subsystem radio access network radio frequency residential gateway residential MGF radio network controller standardization development organization software-defined radio security gateway

SG-17 SGSN SIM SIP SLF SMLC SPAN TC TDD TDM TGW TIPHON

(ITU-T) Study Group 17 serving GPRS support node subscriber identity module session initiation protocol subscriber location function serving mobile location center Services and Protocols for Advanced Networks (ETSI TC) (ETSI) technical committee time-division duplex time-division multiplexing trunking gateway Telecommunications and Internet Protocol Harmonization Over Networks (ETSI TC) Telecommunications- and Internetconverged Services and Protocols for Advanced Networking (ETSI TC) transport layer security user equipment unlicensed mobile access UMA Consortium UMA network universal mobile telecommunications system UMA network controller user profile server function UMTS SIM UMTS terrestrial RAN V5.2-user adaptation layer voice call continuity very high data rate DSL virtual home environment visitor location register video on demand voice over IP visited PLMN WLAN access gateway wireless fidelity (Although synonymous with the IEEE 802.11 standards suite and standardized by IEEE, Wi-Fi is a certification mark promoted by the Wi-Fi Alliance.) worldwide interoperability for microwave access (Although synonymous with the IEEE 802.11 standards suite and standardized by IEEE, WiMAX is a certification mark promoted by the WiMAX Forum.) wireless ISP wireless local area network XML configuration access protocol any type of DSL extensible markup language

IUA MBMS MGC MGF MitM MMS MNO MNP MoU MP3 MS MSC MSF NASS NAT NGN OA&M OCS OEM OMA OPEX OSA PBX PDA PDG PDSN PES PLMN POTS PS PSAP PSTN QoS RACS RAN RF RGW R-MGF RNC SDO SDR SEGW

TISPAN

TLS UE UMA UMAC UMAN UMTS UNC UPSF USIM UTRAN V5UA VCC VDSL VHE VLR VOD VoIP VPLMN WAG Wi-Fi

WiMAX

WISP WLAN XCAP xDSL XML


WHAT IS FMC?


The European Telecommunications Standardization Institute (ETSI) is an independent, not-for-profit organization whose mission is to produce telecommunications standards for the global marketplace. ETSI members come from network operators, equipment manufacturers, government, and academia. ETSI's various technical committees (TCs) work on different projects. ETSI defines FMC as being concerned with providing network and service capabilities independently of the access technique. It is concerned with developing converged network capabilities and supporting standards. This does not necessarily imply the physical convergence of networks. These standards may be used to offer a set of consistent services via fixed or mobile access to fixed or mobile, public or private, networks. In other words, FMC allows users to access a consistent set of services from any fixed or mobile terminal via any compatible access point. An important extension of this principle relates to roaming: users should be able to roam from network to network while using the same consistent set of services throughout those visited networks. This feature is referred to as the virtual home environment (VHE). The key word in FMC is convergence, and it is crucial to understand what convergence means and what convergence implies.

What Does Convergence Mean?

There are several fundamentally different types of convergence:

• Voice/data/multimedia convergence, which has the ultimate goal of providing intelligent and personalized services to subscribers and can be referred to as service convergence
• Information/communications/entertainment convergence, which can also be perceived as Internet or information technology (IT)/telecommunications/broadcasting or media convergence and implies industry convergence
• Broadband, heterogeneous, all-IP convergence, which implies network convergence
• Terminal/computer/electronic home appliance (game consoles, video cameras, personal digital assistants [PDAs], MPEG-1 Audio Layer 3 [MP3] players, etc.) convergence, which is referred to as device convergence

What Does Convergence Imply? So, what does convergence in the context of FMC imply? Potentially, all of the above different types of convergence! Convergence also applies equally to consumer and enterprise users. Although enterprises potentially stand to benefit sooner by increased employee productivity attributed to nomadic activities, the distinction between consumer and enterprise users will eventually blur significantly as workplace, home, and personal spaces converge. These different types of convergence can also profoundly change how telecommunications sectors compete nationally and internationally. Until recently, competition was limited to enterprises within a given sector; for example, a mobile network operator (MNO) competed only with other MNOs, a fixed network operator (FNO) competed only with other FNOs, and Internet service providers (ISPs) competed only with other ISPs. But this is no longer the case; instead, various industries are now converging into a single telecommunications industry. What Drives FMC? As of 2008, there were more than 3.2 billion wireless consumers and only approximately 1.2 billion wireline subscribers. These numbers are reflected in significant changes in recent telecommunications trends. For example, fixed line usage is decreasing dramatically for classic services, and mobile usage is increasing steadily. Likewise, fixed line minutes of use (MoU) have been steadily declining, while mobile MoU have been rising. On the other hand, the fierce competition among MNOs and saturation of subscriber penetration have led to a decline in mobile average revenue per user (ARPU). Furthermore, broadband Internet deployment has grown rapidly from 100 million subscribers in 2003, to 280 million in 2006, to 350 million in early 2008. At the same time, cable companies have gone from delivering just entertainment services to delivering dial tone as a bundled part of triple-play (entertainment, Internet, and voice). Significantly, voice over Internet Protocol (VoIP) usage is on the rise: In 2005, the number of US VoIP subscribers tripled to 4.5 million and VoIP revenue surpassed $1 billion, with the strongest growth occurring in the fourth quarter. Although Vonage is currently the leader in providing VoIP services in the US, Time Warner Cable could overtake them in the very near future. So what does one do with too much competition and too few customers? One approach is to start

60

Bechtel Technology Journal

buying up competitors, but other tactics could include seeking new markets, new services, and new ways of offering services. This is exactly where FMC enters the picture. What Benefits Does FMC Offer? In general, FMC offers two basic benefits: 1. It guarantees interoperability. 2. It reduces CAPEX and operating expenditures (OPEX) by using common resources; transports; operation, administration, and management (OA&M) functions; services; etc. More specifically, FMC can provide: Benefits for network operatorsFor operators that own both fixed and mobile networks, FMC makes it easier and cheaper to launch new services. It provides service continuity for customers, raising their network performance experience and thus reducing churn, thereby maintaining or increasing revenue. FMC also makes it easier to manage services, thereby leading to potential reduction in OPEX. For operators that have either fixed or mobile networks, FMC builds new services that leverage on the other network, thereby providing service differentiators. This becomes particularly important where there are no longer just MNOs competing with MNOs or FNOs competing with FNOs and the focus shifts from delivering connectivity to delivering cost-effective services. Furthermore, MNOs can realize a reduction in CAPEX brought about by (a) less spectrum being required as they employ wireless fidelity (Wi-Fi) technology to offload traffic from cellular networks to WLANs and (b) fewer cell sites, repeaters, etc., being needed as, for example, they leverage fixed networks via WLANs. Benefits for equipment vendorsOriginal equipment manufacturers (OEMs) benefit from FMC through developing common products (reusable software/hardware components); gaining a larger addressable market (increased revenue); and producing better, richer, more cost-effective products. Benefits for customersFMC provides customers with new services, continuity of service, personalized services (same ergonomics, same feel and look), mobility, simplicity (via a single number independent of network connectivity and via single billing), guaranteed QoS, security, and single customer care interface.

What are FMCs Potential Drawbacks? One concern with FMC is that the business model may regress toward that in place before the 1984 diversification in the US, when there was basically only one network operator and few equipment vendors that provided limited services. After all, some believe, it was diversification that fueled competition in the telecommunications industry, leading to a broader set of services, more operators competing in price and quality, the Internet explosion, and so forth. The security requirements of FMC also pose a concern. These requirements are currently based on IMS requirements, which may be challenging for FNOs to meet. Security issues are addressed in more detail later in this paper. What Does FMC Bring to Pre-Convergence Networks? Current legacy networks are basically singlepurpose networks that provide silo solutions. These are also referred to as vertically integrated networks. Each provides its own services: A fixed network offers fixed services, a mobile network offers mobile services, an entertainment network offers entertainment services. A user who wants to access different services must go back and forth between these silos to get the complete set. This is called the spaghetti solution (see Figure 1). In the FMC approach, applications and services are placed in one layer, there is a service control layer, and all users, regardless of access technology, can access the applications or services using the service control layer. The obvious benefits of this horizontally layered approach (called the lasagna solution) include the capacity to provide all services to all eligible subscribers; the diversity to provide market/ service differentiators for different operators; the consistency to use standard technologies such as IP, SIP, IMS, etc.; and the ability to reduce the OPEX explosion.


Figure 1. Vertically Integrated Networks vs. Horizontally Layered Networks (Traditional Silos of Services vs. Future Converged Services)

What Technology Enables FMC? The emergence of the following technologies has been instrumental in the development of FMC: VoIP, SIP, IP version 6 (IPv6), GAN/ unlicensed mobile access (UMA) Multimode/multiradio phones (e.g., Wi-Fi/ cellular, softphones, software-defined radio [SDR]) New access and core network solutions, such as very high data rate digital subscriber line (VDSL), mobile worldwide interoperability for microwave access (WiMAX), IMS, etc. What Current Issues and Challenges Face FMC? Some of the current issues and challenges facing FMC are: Number plans and number portabilityFixed numbers and mobile numbers come from separate blocks. Prefixes contain important information for interconnection charging and number portability. Interconnection charges usually have symmetrical arrangements between two fixed networks and asymmetrical arrangements between a fixed and a mobile network. Typically, charges increase fixed network costs and reduce mobile network costs; only the consumers are the beneficiaries. Currently, there is separate fixed number portability (FNP) and mobile number portability (MNP), but no fixed/ mobile number portability. Directory servicesFixed carriers provide unified directory service to their customers through a unified directory database containing information on all fixed line customers. Currently, mobile carriers have no such obligations. In fact, mobile numbers are considered personal data. Changes to this situation may be subject to public consultation. Handset availabilityThis is a typical issue in the early stages of the introduction of any telecommunications technology. Role of regulatorsThere are two opposing views about the role of regulators in FMC. One viewpoint is that it is not for regulators (the Federal Communications Commission [FCC] and/or the US Congress) to decide whether there should be FMC and what its pace of implementation should be. Rather, regulators should set up the environment so that market forces guide direction, extent, and pace. The other viewpoint is that, since the definitions of information, data, and entertainment have


changed, the rules governing network and service providers should change accordingly to encourage fair and healthy competition. For instance, when the Telecommunications Act of 1996 was passed, the capability to provide VoIP was virtually unanticipated, and telecommunications services and information services had different meanings. Regardless of their differences, both camps agree that new policies regarding FMC should provide a stable and secure legal and regulatory environment to enable innovation and competition and encourage investment.

DIFFERENT FMC APPROACHES AND SOLUTIONS

Current FMC solutions can be classified into four basic categories:

• GAN–cellular integration
• Third Generation Partnership Project (3GPP)–WLAN interworking
• Femtocells
• NGNs

GAN–Cellular Integration

The first phase of FMC started with WLAN–cellular integration via UMA technology. The initial activities were defined by the Unlicensed Mobile Access Consortium (UMAC) and published in September 2004 for global system for mobile communications (GSM)/general packet radio service (GPRS) integration with WLAN (Wi-Fi) service. The key drivers behind this development were (a) the need for a quick fix to address the challenges of improving coverage for markets deploying higher frequency GSM networks and (b) the desire for brand association for the early convergence offerings and the higher data speed advantage implied by Wi-Fi access. In this approach, a Wi-Fi network is perceived simply as an extension of a GSM/GPRS network and UMA as a new access technology. The main element in UMA is a UMA network controller (UNC), which provides the same basic functionality as a conventional base station controller (BSC). That is, the UNC handles mutual authentication and encryption and data integrity. The UNC enables mobile devices to access circuit-switched (CS) services via A interfaces with mobile switching centers (MSCs) and to access GPRS services via Gb interfaces with serving GPRS support nodes (SGSNs). The UNC maintains session control during handoff. The basic UMA network is shown in Figure 2.


Figure 2. UMA (GAN)–Cellular Networks

UMA development work was transferred to 3GPP in April 2005 and renamed GAN. [1] In addition to improving and extending the UMA standard, the GAN specification allows any generic IP-based access network to provide connectivity between the handset and the GAN controller (GANC), through the Up interface. The GANC also includes a security gateway (SEGW) that terminates secure remote access tunnels from the handset. The SEGW interfaces with the authentication, authorization, and accounting (AAA) proxy-server via the Wm interface, and the AAA proxy-server retrieves the user information from the home location register (HLR). The GAN (UMA) functional architecture is shown in Figure 3.

GAN Security Issues

GAN supports security mechanisms at different levels and interfaces, as shown in Figure 4. There are three levels of security:

• First, the security mechanisms over the Up interface protect signaling, voice, and data traffic flows between the handset and the GANC from unauthorized use, data manipulation, and eavesdropping.
• Second, authentication of the subscriber by the core network (CN) occurs at the MSC/visitor location register (VLR) or SGSN and is transparent to the GANC; however, there is cryptographic binding between the handset and the CN, as well as handset and GANC authentication, to prevent man-in-the-middle (MitM) attacks.
• Third, there is also an optional additional application-level security mechanism that may be employed in the packet-switched (PS) domain to secure end-to-end communication between the handset and the application server (AS).

Figure 3. GAN (UMA) Functional Architecture

Figure 4. GAN (UMA) Security Mechanisms (1. Up Interface Security; 2. CN Authentication and Ciphering; 3. Data Application Security, e.g., HTTPS [optional])



GAN Advantages/Disadvantages As mentioned, GAN uses lower cost Wi-Fi access points to improve and extend cellular network coverage. When using service at home or inside a building, a subscriber could have excellent coverage. GAN may also potentially relieve congestion on the GSM/GPRS network. That is, GAN will shift traffic to unlicensed spectrum. This means, however, that if another service provider is legally operating service in the same spectrum, available bandwidth could become limited or even be eliminated. Traffic prioritization techniques can significantly improve Wi-Fi performance and, of course, voice and data paths can be separated. Also, because the handset must listen to two different radio technologies, it must have two radios on board. Both radios must scan for networks at all times, in case the user roams into an area where a Wi-Fi network exists. This could affect battery life. And since all the data from the handset goes through the carriers servers, it may be chargeable! Subscribers might wonder why they are being charged for the data going over their own Internet connections, when they can use other devices such as laptops for no extra charge. 3GPPWLAN Interworking The interworking between 3GPP systems and WLAN has been defined in [2]. This specification is not limited to WLAN but is also valid for other IP-based access networks that support the same capabilities toward interworking that WLAN supports (e.g., any type of digital subscriber line

[xDSL]). The intent of 3GPPWLAN interworking is to extend 3GPP services and functionalities to the WLAN access environment, where the WLAN effectively becomes a complementary radio access technology to the 3GPP system. The interworking levels have been categorized into six hierarchical service scenarios. [3] Of these, scenarios 2 and 3 are of the most interest; more specifically: Scenario 2This scenario deals with 3GPP system-based access control and charging. Here, AAA is provided by the 3GPP system for WLAN access. This ensures that the user does not see significant differences in the way access is granted. This may also provide the means for the operator to charge access in a consistent manner over the two platforms. Scenario 3The goal of this scenario is to allow the operator to extend 3GPP system PS-based services (e.g., IMS) to the WLAN. These services may include, for example, access point names (APNs), IMS-based services, location-based services, instant messaging (IM), presence-based services, multimedia broadcast multicast service (MBMS), and any service built on a combination of several of these components. Even though this scenario allows access to all services, it is a question of implementation as to whether only a subset of services is actually provided. Also, service continuity is not required between the 3GPP system part and the WLAN part. Figure 5 shows the 3GPPWLAN interworking architecture. The key network components are:

Figure 5. 3GPP–WLAN Roaming Reference Model


Packet data gateway (PDG)3GPP PS-based services are accessed via a PDG. A PDG has functionality like that of the gateway GPRS support node (GGSN), e.g., charging data generation, IP address management, tunnel endpoint, QoS handling. WLAN access gateway (WAG)Data to/from the WLAN access node (AN) is routed through the WAG via a public land mobile network (PLMN) through a selected PDG to provide a WLAN terminal with third generation (3G) PS-based services. 3GPP AAA proxy-serverThis proxy-server handles all AAA-related tasks and performs relaying when needed. HLR/home subscriber server (HSS)The HLR and HSS contain the required authentication and subscription data to access the WLAN interworking services. They are located within the 3GPP subscribers home network. The WLAN user profile should be stored in the HSS. For the HLR, the user profile may be located in the 3GPP AAA server. The user profile contains such information as user identification, operator-determined barring of 3GPPWLAN interworking subscription and tunneling, charging mode (prepaid, postpaid, both), roaming privileges, and so forth. Online charging system (OCS)/charging collection function (CCF)/charging gateway (CGW)These entities collect charging data, perform accounting and on-line charging, and carry out similar functions. The critical issue is network (both WLAN and PLMN) selection and advertisement. The standard allows two modes: automatic and manual. WLAN access network selection is technology dependent, although user and operator may have preferred lists of access networks. PLMN network selection and advertisement, however, should be WLAN agnostic. 3GPPWLAN interworking is part of 3GPP Release 6. On the cdma2000 side, the Third Generation Partnership Project 2 (3GPP2) has also undertaken similar activities, and there are now 3GPP2WLAN interworking specifications. [4, 5] WLANCellular Handover Issues The make-before-break handover provided by WLANcellular interworking and GAN enables seamless service provisioning. [6] There are intrasystem (horizontal) handovers and intersystem (vertical) handovers. Handovers from 3GPP access networks to WLAN (or GAN) are called handover in, and handovers from WLAN (or GAN) to 3GPP access networks are called handover out.
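As a toy illustration of the automatic network selection mode described above, the following sketch applies user and operator preference lists to the set of currently visible access networks. The lists, network names, and function are hypothetical assumptions for illustration; the 3GPP specifications define the actual selection and advertisement procedures.

```python
from typing import Optional, Sequence

def select_access_network(visible: Sequence[str],
                          user_preferred: Sequence[str],
                          operator_preferred: Sequence[str]) -> Optional[str]:
    """Pick the highest-priority visible network: the user's list is consulted
    first, then the operator's; None means fall back to manual selection."""
    for candidate in list(user_preferred) + list(operator_preferred):
        if candidate in visible:
            return candidate
    return None

# Hypothetical scan result and preference lists, for illustration only.
visible = ["HomeZone-WiFi", "CoffeeShop-WiFi", "MacroCell-UMTS"]
print(select_access_network(visible,
                            user_preferred=["HomeZone-WiFi"],
                            operator_preferred=["PartnerHotspot", "MacroCell-UMTS"]))
# -> "HomeZone-WiFi"
```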

WLANcellular integration architecture is characterized by the degree of interdependence between the two component networks. There are two types of integration architectures: tightly coupled interworking and loosely coupled interworking (see Figure 6). In tight coupling, the WLAN (gateway) is integrated into the cellular infrastructure (connected directly to either the SGSN for 3GPP or the packet data serving node (PDSN) for 3GPP2) and operates as a slave to the 3G network. Here, the WLAN network appears to the 3G CN as another 3G access network, and the WLAN gateway hides the details of the WLAN network from the 3G core. The WLAN gateway also needs to implement all 3G protocols (mobility management, authentication, etc.) required in a 3G radio access network (RAN). This approach could have several disadvantages [7], two of which bear mentioning. First, since the 3G CN directly exposes its interfaces to the WLAN network, and direct connectivity to the 3G core is required, the same operator must own both the WLAN and the 3G network. Second, since WLAN traffic is injected directly into the 3G CN, 3G core elements have to be dimensioned properly for the extra WLAN traffic. In loose coupling, the WLAN gateway connects to the Internet and does not have a direct link to the 3G network elements. Here, the WLAN and 3G data paths are kept completely separate. There are several obvious advantages to this approach; again, two bear mentioning. First, this approach allows the 3G and WLAN networks to be independently deployed and engineered for traffic. Second, via roaming agreements with many partners, the 3G operator can have widespread coverage without extensive CAPEX. Unfortunately, the latency associated with vertical handoffs could be long, leading to unacceptable


Figure 6. 3G and WLAN Integration Architectures



dropped call rates; this may be particularly true for voice calls. To make matters worse, the transition between WLAN hotspots and cellular coverage is typically very abrupt (e.g., upon entering or leaving a building). One approach to potentially alleviate this issue is to use handoff trigger nodes (HTNs) at the transition areas. [8] An HTN generates link layer triggers that cause early initiation of the vertical handoff. During a successful handoff, the terminal is assigned capacity in the cellular network. In tightly coupled architecture, it is possible to reserve capacity for WLANcellular handoff, thus improving performance (reducing the dropped call rate). In loosely coupled architecture, a cellular base transceiver station (BTS) may not be able to distinguish the vertical handoff from a new call request. It is important to note, as mentioned at the beginning of this section, that the 3GPPWLAN interworking specification is also valid for any other IP-based access network that supports the same capabilities toward interworking as WLAN, such as xDSL. Femtocell In the femtocell solution, the access point (AP) supports the same radio access technology (e.g., universal mobile telecommunications system [UMTS]/high-speed packet access [HSPA] or cdma2000/evolutiondata optimized [EV-DO]) as the outdoor system, thus eliminating the need for dual-mode handsets (DMHs). Because the femtocell is gaining momentum as a path to fixed-mobile substitution (FMS), this paper describes a number of its potential challenges in the following expanded discussion: Zero-touch provisioningTo successfully deploy femtocells on a large scale, the devices must be designed to include simple plug-andplay and self-configuration capabilities that avoid costly and time-consuming truck rolls. Network integration and AP management Typically, mobile networks have thousands or tens of thousands of macrocell sites. Adding potentially millions of femtocell base stations (BSs) could bring about a surge in network management and operational requirements and a significant increase in network signaling traffic, all of which must be addressed through proper planning. Also, since information will be traveling over the Internet, data must be authenticated and encrypted (e.g., using IP security [IPSec]) to avoid typical security risks. Because femtocells serve as points of entry to mobile operator networks, additional security issues are associated with placing

them at customers homes or premises and providing physical access to the devices. Mobile operators must be able to act remotely to detect and promptly disable any rogue or malfunctioning APs. InterferenceBecause femtocells operate in the licensed spectrum, the potential exists for interference between femtocells and macrocells, as well as interference among femtocells, particularly in large multi-dwelling units. Operators having the required spectrum can set aside a portion to be used solely for femtocell deployment, thus eliminating potential macrocell-related interference issues. However, the potential for interference among femtocells would still remain and must be addressed through proper planning. Automatic system selection and handoffThe handset must appropriately select the system to operate on and hand off calls properly as the user moves between outdoor macrocell and indoor femtocell. Proper performance is particularly important for voice calls, given that radio frequency (RF) conditions can change significantly in a relatively short period of time. Improper design and planning could lead to an unacceptable level of dropped calls and user dissatisfaction and churn. TroubleshootingLarge-scale deployment of femtocells will require carrier-grade diagnostic capabilities and tools to minimize costs associated with troubleshooting. To provide a viable business model, most customer trouble calls must be resolvable in the first few minutes of the first call. Location determination and E911Since voice calls can be placed over femtocells, the device must provide support for emergency 911 (E911) services. This task may become particularly challenging as femtocells are placed indoors and are easily moved from location to location. Also, the impact that adding potentially millions of femtocells (or BSs) can have on the public safety answering points (PSAPs) must be examined and plans made to accommodate an increase of this magnitude. Lawful interceptionFemtocell APs, like any other public communications system, typically must comply with all relevant lawful interception requirements (e.g., the Communications Assistance for Law Enforcement Act [CALEA] in the US). Timing and synchronizationPrecise timing and synchronization knowledge is vital to ensure proper performance of many access


technologies and services. This is particularly important for time-division duplex (TDD) systems, such as current mobile WiMAX profiles. Several synchro-nization options exist, among them: Synchronizing to a network time protocol server via the IP backhaul connection Listening to timing signals from a macro network (viable only where one exists) Using a global positioning system GPS) receiver for timing (although the necessary GPS hardware increases cost, and the femtocell or antenna must have unobstructed GPS access) Access controlFemtocells can operate either in the open access mode where any potential user can get access, or in the closed mode where only a few (typically two to eight) authorized users can use the AP. In either case, emergency calls should always be allowed. On one hand, since the AP is typically purchased by a specific user and the AP taps into that users broadband services, the owner of the AP should have the choice of operating in open or closed mode. The owner of the AP may also choose to move the AP from one geographical location to another completely different one. On the other hand, since the AP operates on the licensed spectrum, the owner of the spectrum (mobile operator) is fully responsible (and liable) for any use of, and RF transmissions from, the AP. This creates a dilemma regarding who really owns the femtocell; this dilemma must be addressed carefully. QoS on backhaul and business modelsAs the volume of femtocell-generated traffic increases, the indoor broadband service providers being paid by subscribers to provide femtocell backhaul may choose one of the following actions: Downgrade the level of quality accorded backhaul traffic. This could potentially deteriorate user quality of experience to the point that certain real-time services such as VoIP or video become unacceptable. Of course, taking this step could eventually raise legal issues regarding Internet neutrality. Switch from unlimited, unmetered use to service packages with different prices and different maximum traffic volumes and QoS levels (e.g., 1 GB/month for $30, 3 GB/month for $50). Institute tariff sharing, where the indoor broadband service providers demand a

portion of the revenue from the mobile operators based on the amount of backhaul they carry over their networks. The Femto Forum [9] is currently working to standardize femtocell technology and address most of the above issues. Large-scale femtocell deployments are expected in the 2009–2010 time frame.
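To make the access control challenge described above concrete, the following minimal sketch shows the kind of admission decision a femtocell AP makes in open versus closed mode, with emergency calls always admitted. The class, identifiers, and IMSIs are hypothetical and are not drawn from the Femto Forum or 3GPP specifications.

```python
from dataclasses import dataclass, field
from typing import Set

@dataclass
class FemtocellAP:
    closed_mode: bool = True                                 # closed subscriber group vs. open access
    authorized_imsis: Set[str] = field(default_factory=set)  # typically two to eight users

    def admit(self, imsi: str, emergency: bool = False) -> bool:
        """Admission decision: emergency calls are always accepted;
        otherwise closed mode admits only the authorized subscribers."""
        if emergency:
            return True
        if not self.closed_mode:
            return True
        return imsi in self.authorized_imsis

# Illustrative use with made-up IMSIs.
ap = FemtocellAP(closed_mode=True, authorized_imsis={"310150123456789"})
assert ap.admit("310150123456789") is True                   # household subscriber
assert ap.admit("310150999999999") is False                  # visitor, closed mode
assert ap.admit("310150999999999", emergency=True) is True   # emergency call always allowed
```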

NEXT-GENERATION NETWORKS

The full evolution to FMC will be through the NGN path. NGNs promise to be multiservice, multiprotocol, multi-access, IP-based networks: secure, reliable, and trusted. The NGN framework is set by the International Telecommunication Union–Telecommunication Standardization Sector (ITU-T) through ETSI. ETSI's Telecommunications- and Internet-converged Services and Protocols for Advanced Networking (TISPAN) TC deals with fixed networks and migration from CS networks to PS networks. TISPAN was formed in September 2003 by the merger of the Telecommunications and Internet Protocol Harmonization over Networks (TIPHON) and Services and Protocols for Advanced Networks (SPAN) TCs. The TISPAN TC focuses on all aspects of standardization for present and future converged networks, including NGNs, and produces implementable deliverables that cover NGN service aspects, architectural aspects, protocol aspects, QoS studies, security-related studies, and mobility aspects within fixed networks. Major standardization development organizations (SDOs) such as the Internet Engineering Task Force (IETF), 3GPP, 3GPP2, American National Standards Institute (ANSI), CableLabs, MultiService Forum (MSF), and Open Mobile Alliance (OMA) are actively involved in defining NGN standards. The TISPAN TC structure (working groups and projects) is summarized in Figure 7.


Figure 7. ETSI TISPAN TC Structure (eight working groups: Services, Architecture, Protocols, Numbering and Routing, QoS, Testing, Security, and Network Management; projects: TISPAN NGN, Telecommunications Equipment Identity, F-MMS, EMTEL, DTM, and OSA)



TISPAN NGN Roadmap NGN Release 1, published on December 9, 2005, incorporates the following capabilities: realtime conversational services, messaging (IM and multimedia messaging service [MMS]), and content delivery (e.g., video on demand [VOD]). This release provides limited mobility support, with user-controlled roaming but no in-call handover. It allows a wide range of access technologies (xDSL, Ethernet, WLAN, cable) and allows for interworking with public switched telephone network (PSTN), integrated service digital network (ISDN), PLMN, and other IP networks. NGN Release 2, finalized in early 2008, focuses on optimizing access resource usage according to user subscription profile and service use. Work began on NGN Release 3 in mid-2008. When released, it is expected to include full interdomain nomadicity and accommodate higher bandwidth access such as VDSL, fiber to the home (FTTH), and WiMAX. NGN Requirements An ideal network would fuse the best of todays networks and capabilities and allow the incorporation of tomorrows inventions; it would have the following characteristics: The reliability of a PSTN The mobility of a cellular network The bandwidth of an optical network The security of a private network The flexibility of the Ethernet The video delivery of cable television The content richness of the Internet

A decoupling of services from networks and transports Open interfaces Full QoS selection and control The capability to support legacy as well as NGN-aware terminal devices Simplicity and reasonable price NGNs promise to provide exactly all of these and more! NGN Architecture The TISPAN TC has developed a functional architecture [10] consisting of a number of subsystems and structured in a service layer and an IP-based transport layer. This subsystem-oriented architecture enables new subsystems to be added over time to cover new demands and service classes. It also provides the ability to import subsystems defined by other standardization bodies. Each subsystem is specified as a set of functional entities and related interfaces. Figure 8 shows the overall NGN functional architecture. The NGN service layer comprises the following: PSTN/ISDN emulation subsystem (PES) Core IMS Other multimedia subsystems (e.g., streaming subsystem, content broadcasting subsystem) Common components used by several subsystems (e.g., subsystems required for accessing applications, charging functions, user profile management, security management) The transport layer provides the IP connectivity for NGN users. The transport layer is composed of a transport control sub-layer on top of transfer functions. The transfer control sub-layer is further divided into the network attachment subsystem (NASS) and the resource and admission control subsystem (RACS). The NASS provides registration at the access level and initializes terminal accessing to NGN services. More specifically, the NASS provides the following functionalities [11]: Authorization of network access based on user profile Dynamic provisioning of IP addresses and other terminal configuration parameters Authentication at the IP layer, before or during the address allocation procedure Location management at the IP layer
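As a rough, self-contained illustration of the NASS functions just listed (authorization against a user profile, dynamic IP provisioning, IP-layer authentication, and location management), the following sketch uses hypothetical names and a deliberately simplified flow; it is not taken from the TISPAN specifications.

```python
import ipaddress
from typing import Dict, Optional

class SimpleNASS:
    """Toy network attachment subsystem: authorize, authenticate, assign an
    address from a pool, and record the terminal's access-line location."""
    def __init__(self, profiles: Dict[str, dict], subnet: str = "10.0.0.0/24"):
        self.profiles = profiles                      # user profile database
        self.pool = iter(ipaddress.ip_network(subnet).hosts())
        self.locations: Dict[str, str] = {}           # user -> access line identifier

    def attach(self, user: str, credential: str, line_id: str) -> Optional[str]:
        profile = self.profiles.get(user)
        if profile is None or not profile.get("access_allowed", False):
            return None                               # authorization failed
        if credential != profile["credential"]:
            return None                               # IP-layer authentication failed
        address = str(next(self.pool))                # dynamic IP provisioning
        self.locations[user] = line_id                # location management
        return address

# Hypothetical user profile and access-line identifiers, for illustration only.
nass = SimpleNASS({"alice": {"credential": "s3cret", "access_allowed": True}})
print(nass.attach("alice", "s3cret", line_id="DSLAM-7/port-12"))   # e.g., 10.0.0.1
print(nass.attach("mallory", "guess", line_id="DSLAM-7/port-13"))  # None
```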

Figure 8. NGN Overall Architecture (service layer: applications, user profiles, core IMS, PSTN/ISDN emulation subsystem, and other subsystems; transport layer: NASS, RACS, and transfer functions; interfacing user equipment and other networks)
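To make the NASS functions listed above (network-access authorization, IP address provisioning, IP-layer authentication, and location management) more concrete, here is a minimal illustrative Python sketch. The class names, profile fields, and address pool are hypothetical and are not drawn from the TISPAN specifications.

    # Illustrative sketch of the four NASS functions described above:
    # authorization against a user profile, IP address provisioning,
    # IP-layer authentication, and location management.  All data are
    # hypothetical; a real NASS is a set of standardized functional entities.
    from dataclasses import dataclass, field
    from ipaddress import IPv4Address
    from typing import Dict, Optional

    @dataclass
    class UserProfile:
        user_id: str
        secret: str                 # shared secret used for the toy IP-layer authentication
        allowed_access: set = field(default_factory=lambda: {"xDSL", "WLAN"})

    @dataclass
    class Attachment:
        user_id: str
        ip_address: IPv4Address
        access_type: str
        location: str               # e.g., access node / line identifier

    class NASS:
        def __init__(self, profiles: Dict[str, UserProfile]):
            self.profiles = profiles
            self.next_host = 10      # toy IPv4 pool: 192.0.2.10 upward
            self.attachments: Dict[str, Attachment] = {}

        def attach(self, user_id: str, secret: str, access_type: str,
                   location: str) -> Optional[Attachment]:
            profile = self.profiles.get(user_id)
            # 1. Authorization of network access based on the user profile
            if profile is None or access_type not in profile.allowed_access:
                return None
            # 2. Authentication at the IP layer (toy shared-secret check)
            if secret != profile.secret:
                return None
            # 3. Dynamic provisioning of an IP address
            ip = IPv4Address(f"192.0.2.{self.next_host}")
            self.next_host += 1
            # 4. Location management at the IP layer
            att = Attachment(user_id, ip, access_type, location)
            self.attachments[user_id] = att
            return att

    if __name__ == "__main__":
        nass = NASS({"alice": UserProfile("alice", "s3cret")})
        print(nass.attach("alice", "s3cret", "xDSL", "DSLAM-7/port-12"))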


There may be more than one NASS to support multiple access networks.

The RACS provides applications with a mechanism for requesting and reserving resources from the access network. More specifically, the RACS provides the following functionalities [12]:
• Session admission control
• Resource reservation, permitting applications to request bearer resources in the access network
• Service-based local policy control to authorize QoS resources and define policies
• Network address translation (NAT) traversal

As mentioned, the major subsystems in the service layer are the PES, core IMS, and other multimedia subsystems, such as a streaming subsystem.

PSTN/ISDN Service Continuity in an NGN

The NGN supports the legacy plain old telephone service (POTS). That is, an NGN mimics a PSTN/ISDN from the point of view of legacy terminals (or interfaces) via an IP network through a residential gateway (RGW) or an access gateway (AGW). This is referred to as PSTN/ISDN emulation. All PSTN/ISDN services remain available and identical (i.e., with the same ergonomics) so that end users are unaware that they are not connected to a time-division multiplexing (TDM)-based PSTN/ISDN. This allows TDM equipment replacement, while keeping legacy terminals unchanged. The ITU-T H.248 protocol is used by the emulation AS (EAS) to control the gateway. A typical PES configuration is shown in Figure 9. PES is defined in [13].

The NGN also supports PSTN/ISDN simulation, allowing PSTN/ISDN-like services to be provisioned to advanced terminals (IP phones) or IP interfaces. Although there are no strict requirements to make all PSTN/ISDN services available or identical, end users expect to have access to the most popular ones, possibly with different ergonomics. Either the pure or the 3GPP/TISPAN version of the SIP is used to provide simulation services.

Core IMS

The IMS is the main platform for convergence. [14] Currently, the IMS is at the heart of convergent NGNs. The mobile SIP-based IMS is also the core of both 3GPP and 3GPP2 networks. It is expected that tomorrow's entire multimedia mobile world will be IMS-based. The IMS is IP end-to-end and allows applications and services to be supported seamlessly across all networks. The IMS is defined by 3GPP [15] and builds on IETF protocols; 3GPP has enhanced those protocols to allow for mobility. The TISPAN TC has decided to adopt the IMS and work with 3GPP on any modifications or improvements that may be needed for the NGN. [16]

The main differences between the core IMS and the 3GPP IMS are as follows:
• Access networks differ significantly (xDSL and WLAN versus UMTS), although 3GPP Release 6 provides WLAN access and Release 7 provides xDSL access.
• There are bandwidth and transmission delay constraints.
• NGN terminals are usually more feature-rich and have less stringent requirements, such as for a UMTS subscriber identity module (USIM)/IMS subscriber identity module (ISIM).
• Location information is fundamentally different.

• Explicit resource reservation signaling is not available in terminals and access edge points; there is no dedicated channel for signaling.
• IP version 4 (IPv4) is still very much in use on the NGN.

SECURITY CONCERNS


The telecommunications and IT industries are seeking cost-effective, comprehensive, end-to-end security solutions. ITU-T Study Group 17 (SG-17) is the designated lead study group for telecommunications security. Working groups within SG-17, called Questions (Qs), are tasked with looking into specific areas of

Figure 9. Emulation Configuration


SECURITY-RELATED TERMS

CONFIDENTIALITY: The concealment of information or resources.
AUTHENTICITY: The identification and assurance of the origin of information.
INTEGRITY: The trustworthiness of data or resources in terms of preventing improper and unauthorized changes.
NON-REPUDIATION: The prevention of the ability to deny that an activity on the network occurred.
AVAILABILITY: The ability to use the information or resources desired.
THREAT: A potential violation of security.
ATTACK: Any action that violates security. An attack has an implicit concept of intent. A router misconfiguration or server crash can also cause loss of availability, but they are not attacks. There are passive attacks and active attacks. A passive attack refers to eavesdropping on or monitoring transmissions to obtain message content or monitor traffic flow, whereas in an active attack, the attacker modifies the data stream to masquerade one entity as another, to replay previous messages, to modify messages in transit, or to create denial of service (DoS).
POLICY: A statement of what is and is not allowed.
MECHANISM: A procedure, tool, or method of enforcing a policy. Security mechanisms implement functions that help prevent, detect, respond to, and recover from security attacks. Security functions are typically made available to users as a set of security services through application program interfaces (APIs) or integrated interfaces. Cryptography underlies many security mechanisms.
SECURITY DOMAIN: A set of elements made up of the security policy, security authority, and security-relevant activities. The set of elements is subject to the security policy for the specified activities, and the security policy is administered by the security authority for the security domain.
telecommunications security and produce technical specifications that are published as Recommendations. Q7 is chartered to look into telecommunications security management, Q5 into security architecture and framework, and Q9 into mobile secure communications. Q5 has published key Recommendations X.800 [17] and X.805 [18], and Q9 has published X.1121 [19] and X.1122 [20]. Recommendation X.800 deals
mainly with security architecture, and X.805 addresses security architecture for end-to-end communications. ITU-T Recommendation X.800 [17] provides a systematic way of defining security requirements. It defines security services in the five major categories of authentication, access control, data confidentiality, data integrity, and non-repudiation, and it defines five threat models, as listed in Table 1.

Table 1. Threat Models Defined by ITU-T Recommendation X.800

Model | Definition/Description | Attack On
Destruction | Destruction of information and/or network resources | Availability
Corruption | Unauthorized tampering with an asset | Integrity
Removal | Theft, removal, or loss of information and/or other resources | Availability
Disclosure | Unauthorized access to an asset | Confidentiality
Interruption | Unavailability or unusability of the network | Availability

ITU-T Recommendation X.805 [18] defines the security architecture for systems providing end-to-end communications. This security architecture was created to address the global security challenges of service providers, enterprises, and consumers and is applicable to wireless and wireline, including optical and converged networks. The security architecture logically divides the complex set of end-to-end network-security-related features into separate architectural components: security dimensions, security layers, and security planes as follows:


• Security dimensions: A security dimension is a set of security measures designed to address a particular aspect of network security. Recommendation X.805 identifies eight sets of dimensions that protect against all major security threats. These eight sets are: access control, authentication, non-repudiation, data confidentiality, communication security, data integrity, availability, and privacy.
• Security layers: To provide an end-to-end security solution, the security dimensions are applied to a hierarchy of network equipment and facility groupings, referred to as security layers. There are three security layers: applications, services, and infrastructure. These layers identify where security must be addressed in products. Each security layer has unique vulnerabilities, threats, and mitigations. The infrastructure security layer enables the services layer, and the services layer enables the application layer.
• Security planes: Security planes address the security of activities performed in a network. There are three security planes: end-user, control, and management. Each security plane is applied to every security layer. This yields nine security perspectives. Each security perspective has unique vulnerabilities and threats. Since there are eight security dimensions for each security perspective, this implies 72 combinations that need to be addressed!

The architecture for the end-to-end network security proposed by Recommendation X.805 is shown in Figure 10.
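The arithmetic behind the 72 combinations can be made explicit with a few lines of Python. This is an illustrative enumeration only, not an implementation of Recommendation X.805 itself.

    # Enumerate the X.805 security perspectives: 3 layers x 3 planes = 9,
    # each examined against 8 security dimensions, giving 72 combinations.
    from itertools import product

    layers = ["infrastructure", "services", "applications"]
    planes = ["end-user", "control", "management"]
    dimensions = ["access control", "authentication", "non-repudiation",
                  "data confidentiality", "communication security",
                  "data integrity", "availability", "privacy"]

    perspectives = list(product(layers, planes))             # 9 perspectives
    combinations = list(product(perspectives, dimensions))   # 72 combinations

    print(len(perspectives), len(combinations))  # -> 9 72
    for (layer, plane), dimension in combinations[:3]:
        print(f"{dimension} applied to the {plane} plane of the {layer} layer")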

NGN Security Issues

NGN security requirements are addressed in [21], and security architecture is addressed in [22]. The security requirements for IMS applications are, to a large extent, based on 3G requirements for the IMS, although there are some differences and challenges related specifically to fixed networks. The TISPAN NGN TC is working with 3GPP to add, modify, or extend the existing 3GPP IMS to encompass the fixed network requirement. Some of the main security issues currently under study are:
• Security to support xDSL, WLAN, cable, etc.
• NAT/firewall traversal of NGN signaling and media protocols
• Authentication of NASS and IMS services
• Security to RGWs and AGWs
• Interworking of various security mechanisms
• Interdomain/interconnection security
• Lawful interception
• Legacy terminals (without ISIM)

The NGN Release 1 security architecture assumes the existence of a well-defined NGN architecture that includes the IMS, NASS, RACS, and PES, and basically consists of the following major parts:
• NGN security domains
• Security services (authentication, authorization, policy enforcement, key management, confidentiality, and integrity)
• Security protocols (IMS access security, SIP hypertext transport protocol [HTTP] digest, presence security)
• Application key management
• SEGW functions
• IMS RGWs (to secure access of legacy terminals)
• NGN subsystem-specific security measures (e.g., for PES)


Figure 10. Security Architecture for End-to-End Network Security


Within the NGN security architecture, the following logical security planes with their respective security functional entities (FEs) are distinguished:
• NASS security plane: This plane encompasses the security operations during network attachment for gaining access to the NGN access network.
• IMS security plane: This plane encompasses the call session control functions (CSCFs) and the user profile server function (UPSF). UPSF is the NGN version of the HSS in the 3GPP IMS.
• Generic authentication architecture (GAA)/generic bootstrapping architecture (GBA) key management plane: This plane is optional and is provided for application layer security.

The NGN security architecture partitions the NGN into the following security domains [22]:
• Access network security domain: FEs are hosted by the access network provider.
• Visited NGN security domain: FEs are hosted by a visited network provider, where the visited network may provide access to some application services. The visited network provider may host some applications and may own its own database of subscribers. Alternatively, or additionally, the visited network provider may outsource some application services to the home network provider or even a third-party application provider.
• Home NGN security domain: FEs are hosted by the home network provider, where the home network may provide some application services. The home network provider hosts some applications and owns a database of subscribers.
• Third-party application service provider (ASP) network security domain: FEs are hosted by the ASP, which provides some application services. The ASP may be a separate service provider different from the visited or the home network provider. The ASP may need to deploy authorization information offered by the visited or home network provider.

The NASS and RACS FEs are mapped to these four NGN security domains. Figure 11 shows the NGN security architecture with the NASS and RACS. SEGW functions within each security domain protect the exposed interfaces between security domains and ensure that a minimum security policy is enforced among the domains.
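The domain partitioning and the SEGW rule just described can be pictured with a small illustrative mapping. The functional-entity placements and the policy check below are a simplification for this paper only, not the TS 187 003 architecture.

    # Toy mapping of functional entities (FEs) to the four NGN security
    # domains, plus a check that traffic crossing domains passes a SEGW.
    DOMAIN_OF_FE = {
        "NASS":    "access",       # access network security domain
        "RACS":    "access",
        "P-CSCF":  "visited",      # visited NGN security domain
        "S-CSCF":  "home",         # home NGN security domain
        "UPSF":    "home",
        "ASP-app": "third-party",  # third-party ASP security domain
    }

    def requires_segw(src_fe: str, dst_fe: str) -> bool:
        """An exposed interface between two different security domains
        must be protected by the security gateways of both domains."""
        return DOMAIN_OF_FE[src_fe] != DOMAIN_OF_FE[dst_fe]

    print(requires_segw("NASS", "RACS"))      # False: same (access) domain
    print(requires_segw("P-CSCF", "S-CSCF"))  # True: visited-to-home crosses a border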

Figure 11. NGN Security Architecture with NASS and RACS and Different Domains

The NGN IMS security architecture is very similar to that of the 3GPP IMS. In the NGN, the 3GPP-specific transport domain is replaced by the generic IP transport domain. The security architectures of the NGN application and the IMS application are also similar. The NGN also defines a security protocol (HTTP digest over transport layer security [TLS]) to protect PSTN/ISDN simulation services. It uses the extensible markup language (XML) configuration access protocol (XCAP) on the Ut interface between the terminal(s) as the XCAP client and the AS as the XCAP server. [23] Use of an authentication proxy for user authentication is optional (see Figure 12).

NGN security can also be divided into the following three basic areas:
• Access security
• Core security
• Interconnection security

Access security, also known as first-hop or first-mile security, is a difficult part of the NGN architecture to achieve because of the different access technologies within interconnects. Access security consists of the network attachment part and the service layer part. Network attachment includes network authentication between the UE and the NASS; network authentication is access technology dependent. For IMS access security, TISPAN has adopted the 3GPP solution of using the IPSec transport mode and SIP digest authentication and key agreement (AKA). The presence of NAT introduces some difficulties, but several potential solutions have been under investigation.

Figure 12. Application Security Architecture
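As a hedged illustration of the Ut-interface protection described above, the sketch below uses the third-party Python requests library to read a simulation-services document over HTTPS with HTTP digest authentication. The host name, user identity, credentials, and document path are hypothetical placeholders, and a deployment may interpose the optional authentication proxy shown in Figure 12.

    # Illustrative only: read a subscriber's simulation-services document over
    # the Ut interface using HTTP digest authentication carried inside TLS.
    # The host name, user identity, and document path are hypothetical.
    import requests
    from requests.auth import HTTPDigestAuth

    XCAP_ROOT = "https://xcap.example.net/xcap-root"
    DOC = (XCAP_ROOT
           + "/simservs.ngn.etsi.org/users/sip:alice@example.net/simservs.xml")

    resp = requests.get(
        DOC,
        auth=HTTPDigestAuth("alice@example.net", "alice-password"),
        timeout=10,
    )
    resp.raise_for_status()
    print(resp.status_code, len(resp.text), "bytes of simulation-service settings")

    # Updates would use PUT/DELETE on the same URL (optionally via the
    # authentication proxy), still protected by digest authentication and TLS.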


Figure 13. PSTN Emulation Security

Core or intra-domain security is mainly the responsibility of the network operator. Protection at the domain borders is insufficient; experience has shown that many attacks are launched from inside the network. The separation principle, whereby information flow types (signaling, management, and media) and node types are isolated and individually protected, could significantly reduce the extent of an attack. [24]

Interconnection or interoperator security is addressed by SEGWs, which enforce a domain's security policy toward the SEGW of another domain. The use of the IPSec encapsulating security payload (ESP) tunnel mode with Internet key exchange (IKE) is a recommended option for mutual SEGW authentication, information integrity, and anti-replay. Confidentiality is optional. [14]

The one remaining issue is security for non-IMS services, which have been mainly PSTN/ISDN services. VoIP can be supported by the IMS but can also be provided via a PES configuration. The PES is well positioned to replace the PSTN. The PES uses the ITU-T H.248 protocol instead of SIP between its AGW control function (AGCF) and media gateways. For security aspects, a distinction must be made between the AGWs at the operator's premises and the RGWs in the subscribers' homes. For AGWs, no authentication is required, since AGWs have a one-to-one relationship with an AGCF and security features can be provisioned. The security solutions for RGWs are somewhat more difficult because authentication is required. Authentication should be performed while maintaining the user's PSTN experience. Security negotiations should be fully embedded in the RGW, and the RGW and AGCF should belong to the same security domain. See Figure 13.

From a business standpoint regarding risks and vulnerabilities, network operators are typically most worried about theft of service via identity theft and DoS attacks. The former threatens revenue, while the latter endangers service delivery and consequently service quality. Poor service quality often leads to higher churn, which, in turn, leads to loss of revenue.

As a closing remark, it is worth mentioning that, until recently, handover between a CS voice (or potentially video or other multimedia services) call and a PS (WLAN or IMS) call was not addressed. This issue is now being addressed by 3GPP as voice call continuity (VCC), and the final specifications are part of 3GPP Release 7.


CONCLUSIONS

FMC has strong market drivers, and convergence is inevitable! FMC promises to provide something for everyone, from end-user to network operator to application or service provider. Current FMC solutions are evolutionary steps toward full convergence, which is envisioned as occurring via the NGNs. The wheels of convergence are already in motion. We can choose to embrace, participate in, and prepare for convergence or be caught unprepared!

TRADEMARKS 3GPP is a trademark of the European Telecommunications Standards Institute (ETSI) in France and other jurisdictions. cdma2000 is a registered trademark and certification mark of the Telecommunications Industry Association (TIA-USA). Wi-Fi is a registered trademark and certification mark of the Wi-Fi Alliance. WiMAX is a trademark and certification mark of the WiMAX Forum.


REFERENCES
[1] 3GPP TS 43.318, Generic Access to the A/Gb Interface, Release 6.
[2] 3GPP TS 23.234, 3GPP System to WLAN Interworking System Description.
[3] 3GPP TS 22.934, Feasibility Study on 3GPP System to WLAN Interworking.
[4] 3GPP2 X.P0028-200, Access to Operator Service and Mobility for WLAN Interworking.
[5] 3GPP2 S.R0087, 3GPP2 WLAN Interworking.
[6] 3GPP TS 33.234, 3G Security: WLAN Interworking Security.
[7] M. Buddhikot et al., Design and Implementation of a WLAN/CDMA2000 Interworking Architecture, IEEE Communications Magazine, November 2003, pp. 90–100.

The original version of this paper was published in the Bechtel Telecommunications Technical Journal, Vol. 4, No. 2, June 2006, pp. 37–53.

BIOGRAPHIES
Jake MacLeod is chief technology officer, Engineering and Technology, for Bechtel Communications, and a Bechtel principal vice president. He was recently named a Bechtel Fellow in recognition of his contributions to the advancement of telecommunications technology and acknowledgment of his strong advocacy of Bechtels role in that arena. Jake joined Bechtel in May 2000 and is responsible for expanding the scope of Bechtels communications engineering services to include all aspects of technical design, from network planning to commercial system optimization. He initiated and developed Bechtels RF and Network Planning team, which, at its peak, had more than 150 world-class engineers. Jake also designed and established two world-class Bechtel telecommunications laboratories that have provided clients with applied research and development services ranging from interoperability testing to product characterization. Jake was the first Bechtel Communications person to enter Baghdad in 2003, immediately after the conflict paused. He and his teams assessed the Iraqi telecommunications network, then designed and replaced 12 wire centers (equivalent to 240,000 POTS lines) in a period of 4 months, an unprecedented achievement in telephony. Jake and his teams also analyzed and replaced the air traffic control system at Baghdad International Airport. Under his purview, Bechtels technology teams have developed the Virtual Survey Tool (VST), an automated network planning tool with the potential to radically change the conventional methods of network design. Jakes laboratories are currently working with global wireless equipment manufacturers to analyze and characterize UMTS, HSDPA, Node B hotels, WiMAX, and intuitive networks. Under his direction, the Bechtel Communications Technical Journal authoritatively analyzes cutting-edge operational issues. Bechtel Communications also hosts semiannual global technology debates focused on the pros and cons of the most advanced telecommunications technologies. Throughout the year, Jake typically gives an average of six to eight keynote and technologybased presentations at industry conferences. Jake started his career in the telecommunications industry in 1978, beginning in transmission engineering with Southwestern Bell Telephone Company (SWBTC) in San Antonio, Texas. His responsibilities at SWBTC included design and implementation of radio systems in Texas west of Ft. Worth. He participated in the original cellular telephone system designs for SWBTC in San Antonio, Dallas, and Austin. After SWBTC, Jake became the second employee for PageNet/CellNet and vice president of Engineering for its cellular division. He designed more than 135 cellular network systems,

[8]

[9]

[10] ETSI ES 282 001, NGN Functional Architecture.
[11] ETSI ES 187 004, NGN Functional Architecture; Network Attachment Sub System (NASS).
[12] ETSI ES 187 003, Resources and Admission Control Sub-system (RACS); Functional Architecture.
[13] ETSI ES 187 002, PSTN/ISDN Emulation Sub-system (PES); Functional Architecture.
[14] R. Safavian, IP Multimedia Subsystem (IMS): A Standardized Approach to All-IP Converged Networks, Bechtel Telecommunications Technical Journal, January 2006, pp. 13–36.
[15] 3GPP TS 23.223, IP Multimedia Subsystem (IMS) (Stage 2), Release 5.
[16] ETSI ES 282 007, TISPAN: IP Multimedia Subsystem (IMS): Functional Architecture.
[17] ITU-T Recommendation X.800, Security Architecture.
[18] ITU-T Recommendation X.805, Security Architecture for Systems Providing End-to-End Communications.
[19] ITU-T Recommendation X.1121, Framework of Security Techniques for Mobile End-to-End Communications.
[20] ITU-T Recommendation X.1122, Guidelines for Implementing Secure Mobile Systems Based on PKI.
[21] ETSI TS 187 001, NGN SECurity (SEC); Requirements.
[22] ETSI TS 187 003, NGN Security; Security Architectures.
[23] ETSI TS 183 023, TISPAN; PSTN/ISDN Simulation Services; XML Configuration Access Protocol (XCAP) over Ut Interface for Manipulating NGN PSTN/ISDN Simulation Services.
[24] M. Mampaey et al., Security from 3GPP IMS to TISPAN NGN, Alcatel Telecommunications Review, 4th Quarter 2005.


including San Franciscos, and filed them with the FCC. In addition to his responsibilities at PageNet/CellNet, Jake was asked to chair the FCCs Operational Relationships Committee. Jake has held executive management positions with NovAtel (Calgary), NovAtel (Atlanta), Western Communications, and West Central Cellular. More recently, Jake spent 9 years with Hughes Network Systems (HNS), where he was instrumental in establishing its cellular division. Jake designed and established cellular and WLL systems in areas ranging from central Russia to Indonesia, as well as in 57 US markets. Jake holds a BS degree from the University of Texas.

Dr. Safavian is quite familiar with the Electrical Engineering departments of four universities: The George Washington University, where he has been an adjunct professor for several years; The Pennsylvania State University, where he is an affiliated faculty member; Purdue University, where he received his PhD in Electrical Engineering, was a graduate research assistant, and was later a member of the visiting faculty; and the University of Kansas, where he received both his BS and MS degrees in Electrical Engineering and was a teaching and a research assistant. He is a senior member of the IEEE and a past official reviewer of various transactions and journals. Dr. Safavian is pleased to have been selected for inclusion in Marquis Whos Who in America , Sixty-Third Edition, January 2009.

S. Rasoul Safavian brings more than 20 years of experience in the wired and wireless communications industry to his position as Bechtel Communications vice president of Technology. He is charged with establishing and maintaining the overall technical vision and providing guidance and direction to its specific technological activities. In fulfilling this responsibility, he is well served by his background in cellular/PCS, fixed microwave, satellite communications, wireless local loops, and fixed networks; his working experience with major 2G, 2.5G, 3G, and 4G technologies; his exposure to the leading facets of technology development as well as its financial, business, and risk factors; and his extensive academic, teaching, and research experience. Before joining Bechtel in June 2005, Dr. Safavian oversaw advanced technology research and development activities, first as vice president of the Advanced Technology Group at Wireless Facilities, Inc., then as chief technical officer and vice president of engineering at GCB Services. Earlier, over an 8-year period at LCC International, Inc., he progressed through several positions. Initially, as principal engineer at LCCs Wireless Institute, he was in charge of CDMA-related programs and activities. Next, as lead systems engineer/senior principal engineer, he provided nationwide technical guidance for LCCs XM satellite radio project. Then, as senior technical manager/senior consultant, he assisted key clients with the design, deployment, optimization, and operation of 3G wireless networks. Dr. Safavian has spoken at numerous conferences and industry events and has been published extensively. He has authored three technical papers and co-authored two in the Bechtel Communications Technical Journal (formerly, Bechtel Telecommunications Technical Journal) since August 2005. Most recently, he co-authored Next-Generation Mobile Backhaul, which appeared in the September 2008 issue.


THE USE OF WIRELESS BROADBAND ON LARGE INDUSTRIAL PROJECT SITES


Issue Date: December 2008

Abstract: As wireless broadband technologies continue to evolve, they are becoming more practical and cost effective for use on large industrial project sites where reliable voice and data communications are essential. As a result, projects are becoming interested in the potential benefits and efficiencies of wireless broadband communications systems. In addition to facilitating a mobile workforce, wireless broadband can leverage a host of other advantages on large sites (e.g., safety and security, rapid deployment, survivability, asset tracking, sensors). This paper helps companies and projects become familiar with wireless broadband implementations and knowledgeable about the surrounding issues. Thus, they will be better prepared to evaluate this innovative way of providing value to their customers and their industries.

Keywords: asset tracking, network planning, network in a box, NIB, project criteria, rapid deployment, spectrum availability, sustainable development, traffic analysis, untethered workforce, wireless broadband, wireless mesh, WiMAX

INTRODUCTION

It is not unusual for large engineering, construction, and other heavy industry companies to be working on a variety of projects at locations around the world. The substantial size and complexity of these projects (power plants, refineries, mines, smelters, roads, airports, environmental cleanup facilities, pipelines, etc.) make reliable onsite voice and data communications essential for successful project execution. Bechtel, a global leader in the engineering, construction, and management of large projects of all kinds, prides itself on bringing an unmatched combination of knowledge, skill, experience, and customer commitment to every job. [1] This includes designing and deploying the latest jobsite communications technologies. To grow and to be an industry leader, a company must keep abreast of those technologies that can help it to improve its processes, become more efficient, and provide better value to its customers. Deploying wireless broadband communications on large project sites is an example of using a technology that has the potential of providing such benefits. (Project size could be based on physical size, communications requirements, number of potential communications users, or a combination of these and similar factors.)

BACKGROUND

Industry Trends

Until recently, the voice/data communications systems typically used by large industrial projects have been combinations of narrowband mobile radio, push-to-talk (PTT) phones, hardwired office data connections, private branch exchange (PBX) voice switches, and temporary fiber to construction trailers. The wireless broadband system is a fairly new network concept that provides high-speed wireless Internet and data access over a wide area. In the past several years, both licensed and unlicensed services and devices have enjoyed extensive growth in the wireless marketplace. Initial wireless access technology deployments have already begun, but have been mostly limited to smaller wireless fidelity hotspots and wireless bridging devices. Significant opportunities exist today to deploy and extend wireless networks/technologies across entire project sites to support multiple operations. An example of how wireless communications could be deployed across a project is illustrated in Figure 1. No single wireless broadband solution is applicable to all communications needs on all projects. Instead, these solutions can play

Nathan Youell
ntyouell@bechtel.com

© 2008 Bechtel Corporation. All rights reserved.


ABBREVIATIONS, ACRONYMS, AND TERMS


3GPP: Third Generation Partnership Project, a collaboration agreement among several communications standards bodies to produce and maintain globally applicable specifications for a third-generation mobile system based on GSM technology
3GPP2: Third Generation Partnership Project 2, a sister project to 3GPP and a collaboration agreement dealing with North American and Asian interests regarding third-generation mobile networks
4G: fourth generation, enhanced digital mobile phone service that promises to boost data transfer rates to 20 Mbps
CAPEX: capital expenditures
CPE: customer premises equipment
EV-DO: evolution-data optimized
FTP: file transfer protocol
GPS: global positioning system
GSM: global system for mobile communication
HSPA: high speed packet access
IEEE: Institute of Electrical and Electronics Engineers
IT: information technology
LAN: local area network
LTE: long-term evolution
M2M: mobile-to-mobile
MMS: multimedia messaging service
NIB: network in a box
OPEX: operating expenditures
PBX: private branch exchange
PTT: push to talk
RFID: radio frequency identification
SMS: short message service
UHF: ultra high frequency
VHF: very high frequency
WiMAX: worldwide interoperability for microwave access (Although synonymous with the IEEE 802.16 standards suite and standardized by IEEE, WiMAX is a certification mark promoted by the WiMAX Forum.)
WLAN: wireless LAN

Figure 1. Wireless Project Site


an important role in the successful execution of specific types of projects. At the same time, wireless solutions are being included more and more often by customers as a project requirement and expectation; this calls for a level of expertise and judgment on the part of the contractor. Why not leverage this knowledge and the potential benefits of wireless systems to solve multiple communication challenges?

Often, the issue with respect to using wireless broadband is one of overcoming risk or even the perception of risk. Traditional solutions have been used for years, are well understood, and are universally accepted. There is a natural reluctance to implement solutions perceived as being unproven. Factors such as cost, design and installation, performance, and supportability can affect a project's bottom line. The good news is that wireless solutions tend to mitigate these concerns. There is great interest in wireless solutions because their advantages over traditional methods can make taking the risk worth the effort.

Vendor Support

Many large industrial project sites and engineering and construction companies are currently evaluating and selecting wireless broadband solutions to support their operations. Prime examples include mining and refineries, construction site automation, private security systems, ports, and infrastructure projects. Equipment vendors are working with these potential customers to develop solutions targeted to these opportunities. Vendors include those traditionally used on large sites, such as Motorola, Nortel, Cisco Systems, and Alcatel-Lucent, as well as new entrants to this market, such as Huawei, Alvarion, and Tecore Networks. For example, Motorola has recently broadened its portfolio of communications solutions for the mining industry [2], and Alvarion has produced a brochure addressing the communications needs of oil, gas, and industrial sites [3]. There is little doubt that vendors are constantly evolving and evaluating innovative ways of moving forward with wireless broadband solutions for the world's large industrial sites.

characteristics and attributes of a given project as they relate to the potential feasibility of using wireless solutions for site communications. Classification and categorization can also be used to provide a generic baseline for determining the feasibility and benefit of using wireless communications across different types of projects. The following is an example of this categorization process:
• Very large, remote project: Often, the very large, remote project is a greenfield site with little or no existing wireless service available. The large, remote project may realize the greatest benefit from ubiquitous wireless broadband coverage. Besides size and remoteness, this kind of project often has multiple locations that could benefit from wireless coverage between them. These locations may include residential camps, office buildings, construction sites, adjacent communities, and the transportation networks connecting them. Wireless solutions provide an opportunity to support the entire project with access to data, voice, applications, and other specialized services, making the large, remote project an ideal candidate for wireless broadband solutions. Categorization: High Feasibility.
• Mega-project: Another common project type is the mega-project, which may or may not be remote and may or may not encompass multiple locations. Examples include large power plants, airports, and environmental waste treatment facilities. The benefits of wireless broadband depend primarily on the general wireless criteria discussed later in this paper, as well as on factors such as the number of users, access to spectrum, project duration, and specific wireless niches. Categorization: Medium Feasibility.
• Linear project: A linear project takes place over a large area that is linear in nature. This kind of project tends to be road or railway construction, where crews typically focus on one or only a few local areas before moving to the next, usually down the road. Wireless broadband is potentially very useful because of the significant area the project usually covers from one end to the other. Imagine a worker who must frequently travel the entire length of the project to get updated construction drawings from the main office. The time and safety implications alone make wireless communication particularly attractive.


PROJECT CATEGORIZATION AND EVALUATION

Project Categorization

As stated earlier, mobile wireless broadband solutions do not fit universally across all projects. Therefore, it is important to identify the


Table 1. Evaluation Criteria for a Wireless Solution


CLASSIFICATION: Does the project fit into a generic project classification such as one of those discussed above?

LOCATION: The project location may lend itself to using a wireless broadband solution or just as easily prevent its use. Location factors affecting the feasibility and acceptability of wireless solutions include the availability of spectrum, either licensed or unlicensed; the availability of acceptable technologies; the presence of service providers; and the terrain.

SPECTRUM AVAILABILITY: Spectrum availability is one of the most important considerations. Frequencies may be licensed or unlicensed, and, although there are similarities in band usage worldwide, each country or region can be completely different regarding how spectrum is used and regulated. For example, a frequency that may be unlicensed in one country could very easily be licensed in another. Also, certain regulations could limit the types of technologies used. (Spectrum issues are discussed further later in this paper.)

STATUS, POINT IN PROJECT DURATION: It is more difficult to affect a project far along in the construction effort. Projects in the initial design stage or with specific communication challenges or issues are the prime targets to be assessed for the use of wireless broadband technologies. Project duration needs to be evaluated against needs and costs (cost-benefit analysis).

NUMBER OF USERS: The actual number of wireless users is usually much less than the total number of people at the jobsite. However, it is anticipated that the number of users will grow as wireless technologies become more acceptable and gain more traction for different applications throughout a project. The number of users is useful to know for the cost-benefit analysis and is a critical piece of information for performing traffic analysis and determining capacity requirements.

PROJECT AREA: How physically large is the project? The size of a particular project is a critical factor in evaluating the potential benefit of implementing a wireless broadband solution because it is used to determine how many cell sites are needed to provide adequate coverage based on project requirements.

COVERAGE AND PERFORMANCE REQUIREMENTS: Wireless solutions are engineered to provide the required coverage throughout a project area and to meet performance requirements. The solution could be as simple as providing basic outdoor connectivity in a well-defined area or as complex as supporting bandwidth-intensive applications in a coverage-limiting location. For example, coverage may need to be provided inside buildings with unusually thick walls, in mines, through tunnels, or in other hard-to-accommodate conditions. Because of the nature of project work, the more difficult situation is more often the standard rather than the exception. It is expected that unrealistic situations or particularly high demands will dictate the use of wired technologies. Wireless broadband is not a solution to every problem, which is why it is important to evaluate the requirements to determine what fit, if any, may exist.

THROUGHPUT AND LATENCY REQUIREMENTS: Throughput and latency requirements play a big role in technology selection and implementation strategies. The types of applications that need to be supported largely determine the performance requirements.

APPLICATIONS: Applications include voice, Internet/intranet, e-mail, video conferencing, streaming video, and other company-specific applications, each with its own set of requirements for efficient operation. The challenge comes in designing a wireless network with limited resources that can be used in the same way as, or as an extension to, a traditional wired LAN that provides unlimited bandwidth and low latency. To provide a high-quality user experience, attention must be paid to the types of applications that will be used on the wireless networks. In some cases, it could even necessitate specific application development or redesign.

DISTRIBUTION (MULTIPLE SITES): Is this project distributed across different sites? If so, to what extent? A key benefit of wireless broadband solutions is that they may help provide fast, efficient, and consistent services between different locations (construction site, residential camp, warehouse, etc.) without the need for a dedicated wired infrastructure. On the other hand, if a project is too widely distributed, a wireless solution might not be the best. Instead, a traditional leased line or microwave system is still generally the best option for extending connectivity.

BUDGET: It is hard to focus solely on solutions and technologies without at least mentioning cost, which is often the deciding factor. The question to ask is whether or not the cost of the solution is in scale with the overall project value and allocated budget.

NEEDS: Is there a specific project need or application that requires wireless broadband? Or, maybe a customer requirement or demand? More and more frequently, customers are requesting wireless broadband communications.
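To illustrate how the Project Area, Number of Users, and Throughput criteria interact, the short sketch below does a back-of-the-envelope site count. The cell radius, busy-hour figures, and per-sector capacity are hypothetical placeholders, not design values from this paper; an actual answer comes from RF design and traffic analysis.

    # Back-of-the-envelope sizing: how many base stations might a site need?
    # All input figures are hypothetical placeholders, not design values.
    import math

    def sites_for_coverage(area_km2: float, cell_radius_km: float) -> int:
        """Approximate each cell footprint as a hexagon inscribed in the radius."""
        hex_area = (3 * math.sqrt(3) / 2) * cell_radius_km ** 2
        return math.ceil(area_km2 / hex_area)

    def sites_for_capacity(users: int, busy_fraction: float,
                           per_user_kbps: float, sector_capacity_mbps: float) -> int:
        """How many sectors are needed to carry the busy-hour demand?"""
        demand_mbps = users * busy_fraction * per_user_kbps / 1000.0
        return math.ceil(demand_mbps / sector_capacity_mbps)

    area = 40.0          # km^2 covered by the project site (hypothetical)
    coverage = sites_for_coverage(area, cell_radius_km=1.5)
    capacity = sites_for_capacity(users=600, busy_fraction=0.2,
                                  per_user_kbps=250, sector_capacity_mbps=8.0)
    print("coverage-driven sites:", coverage)
    print("capacity-driven sectors:", capacity)
    print("design point:", max(coverage, capacity))

The design point is whichever of the two estimates is larger, which is why both the project area and the traffic analysis feed the same decision.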


However, the same project configuration makes wireless communication equally difficult and costly to implement. Because of the longer distance, more base stations or radios are usually needed to provide adequate coverage, and the potential number of users is relatively small. For the linear project, it usually makes more sense to leverage existing commercial wireless infrastructure if possible. Categorization: Low Feasibility.
• Widely distributed project: The widely distributed project extends across a very large geographical area, such as multiple states or regions throughout a country. A number of smaller regional offices scattered throughout the country is a good example of this type of project. A wide-scale wireless broadband solution is not practical to implement on such a large scale, but it does make sense to leverage any existing wireless communication infrastructure. For individual offices, local area network (LAN) solutions may make more sense, including using existing pre-wired office buildings and wireless LAN (WLAN) technologies. Categorization: Very Low Feasibility.

Project Evaluation Criteria

In evaluating the possible use of a wireless broadband solution on a project, a host of items needs to be considered to make an informed decision about whether or not a wireless solution is appropriate and, if so, the best technology to use. Items to consider include, but are not limited to, those described in Table 1. Table 2 shows how the various criteria could relate to project benefits and the feasibility of implementation. Although the relationship varies from project to project, the table provides a good starting point.

Other Considerations

Needs and Requirements Versus Nice-To-Haves

At present, wireless broadband is seen as nice to have but, for the most part, not yet necessary or required for completing the project. In other words, a project may recognize that workers with tablets are potentially very useful, yet not identify the value/savings of having an always-connected workforce. Even though wireless is potentially more efficient, the mindset is that current solutions still work, so why change them? However, a shift is occurring: specific applications are more regularly prescribing the need for wireless solutions, and customers are beginning to expect and require them.

Companies and projects should become familiar with wireless broadband implementations and be knowledgeable about the surrounding issues so as not to be caught off guard.

Phases of the Project Site

In addition to categorizing a project according to type, consideration should be given to evaluating the three main phases where there are potential uses for wireless broadband on a given project site: construction, operation, and sustainability. The classification process discussed previously focuses mainly on the construction phase. However, as will be seen more commonly in the future, customer requirements may dictate the installation and subsequent use of wireless networks for the operations phase or even as a sustainability accomplishment. Depending on project requirements, schedule, and contractual obligations, wireless could be used during any one of these phases individually or, more efficiently, throughout two or even all three.


WIRELESS TECHNOLOGIES FOR CONSIDERATION

Several dominant wireless broadband technologies need to be considered as possible solutions for use on large industrial project sites as alternatives to cable and DSL. The main technologies to evaluate are currently Institute of Electrical and Electronics Engineers (IEEE) 802.11 wireless fidelity standards, wireless fidelity mesh architecture, IEEE 802.16e-2005 worldwide interoperability for microwave access (WiMAX) standards, Third Generation Partnership Project (3GPP) high-speed packet access (HSPA) and long-term evolution (LTE), and 3GPP2 evolution data optimized (EV-DO).
Table 2. Project Benefits and Implementation Feasibility
Project Benefit: Number of Users; Project Area; Throughput; Applications; Specific Need or Requirement; Cost Savings
Implementation Feasibility: Location; Spectrum Availability; Status, Point in Project Duration; Number of Users; Coverage and Performance Requirements; Throughput; Distributed Site; Budget
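One way to use the evaluation criteria in Tables 1 and 2 is as a rough screening score before any detailed design work. The sketch below is illustrative only; the criteria chosen, their weights, and the High/Medium/Low thresholds are hypothetical and would be set by the project team.

    # Toy screening score combining a few of the evaluation criteria.
    # Weights, ratings, and thresholds are hypothetical.
    WEIGHTS = {
        "spectrum_availability": 3,
        "project_area": 2,
        "number_of_users": 2,
        "project_duration": 1,
        "distribution": 1,
        "budget_fit": 2,
    }

    def screen(ratings: dict) -> str:
        """ratings: criterion -> 0..5 (0 = unfavorable, 5 = very favorable)."""
        score = sum(WEIGHTS[c] * ratings.get(c, 0) for c in WEIGHTS)
        best = 5 * sum(WEIGHTS.values())
        pct = score / best
        if pct >= 0.7:
            return "High feasibility"
        if pct >= 0.4:
            return "Medium feasibility"
        return "Low feasibility"

    remote_greenfield = {"spectrum_availability": 5, "project_area": 5,
                         "number_of_users": 4, "project_duration": 4,
                         "distribution": 3, "budget_fit": 4}
    print(screen(remote_greenfield))   # -> High feasibility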


Table 3. Technology Evaluation

Technology | Cost | Spectrum (1) | Equipment Maturity | CPE Availability | Coverage/Range | Latency Performance | Throughputs
Wireless Fidelity | Low | High | High | High | Low | Low | High
Wireless Fidelity Mesh | Low/Med | High | High | High | Medium | Medium | Med/High
WiMAX (2) | Medium | Med/High | Medium | Medium | Med/High | Medium | Medium
HSPA | Med/High (3) | Low | High | High | Med/High | High | Low
EV-DO | Med/High (4) | Low | High | High | Med/High | High | Low
LTE | High (5) | Low | Low | Low | Med/High | Medium | Med/High

Notes:
(1) High = spectrum widely available; Low = licensed spectrum required
(2) IEEE 802.16e-2005 standard
(3) Medium for small-scale systems (network in a box), high for commercial-grade equipment
(4) Medium for small-scale systems (network in a box), high for commercial-grade equipment
(5) Equipment not yet available

Each has its pros and cons, as shown in the brief evaluation given in Table 3. One overarching difficulty to mention is that technologies are constantly changing and evolving, a critical factor when it comes to feeling confident in selecting and investing in a specific wireless technology. Each technology listed in Table 3 is currently viable and has commercial equipment available, with the exception of LTE, which is a little further out but fast approaching. These technologies need to be evaluated to determine which not only provides the best performance for a specific project, but is also the most cost-effective solution for use on that project. This evaluation usually points to a wireless broadband technology such as WiMAX or a mesh implementation of the popular wireless fidelity standards. Initial wireless deployments focus mostly on providing data pipes and usually coexist with existing commercial wireless voice services such as global system for mobile communications (GSM) and traditional construction-oriented solutions like two-way radios. As technologies continue to improve, increasing consolidation should be expected among the different wireless systems on the jobsite.
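Once a project decides how much each attribute matters, qualitative ratings like those in Table 3 can be compared numerically. The sketch below is purely illustrative: the rating-to-number scale, the weights, and the example ratings are hypothetical and are not the assessments published in Table 3.

    # Toy ranking of candidate technologies from qualitative ratings.
    # The scale, weights, and example ratings are hypothetical.
    SCALE = {"Low": 1, "Low/Med": 1.5, "Medium": 2, "Med/High": 2.5, "High": 3}

    # attribute: (weight, whether a high rating is desirable)
    ATTRIBUTES = {
        "cost":               (2, False),   # high cost is undesirable
        "spectrum":           (3, True),
        "equipment_maturity": (2, True),
        "cpe_availability":   (2, True),
        "coverage_range":     (2, True),
        "throughput":         (3, True),
    }

    RATINGS = {   # hypothetical example ratings for three candidates
        "Wireless fidelity mesh": {"cost": "Low/Med", "spectrum": "High",
                                   "equipment_maturity": "High",
                                   "cpe_availability": "High",
                                   "coverage_range": "Medium",
                                   "throughput": "Med/High"},
        "WiMAX":                  {"cost": "Medium", "spectrum": "Med/High",
                                   "equipment_maturity": "Medium",
                                   "cpe_availability": "Medium",
                                   "coverage_range": "Med/High",
                                   "throughput": "Medium"},
        "HSPA":                   {"cost": "Med/High", "spectrum": "Low",
                                   "equipment_maturity": "High",
                                   "cpe_availability": "High",
                                   "coverage_range": "Med/High",
                                   "throughput": "Low"},
    }

    def score(tech: str) -> float:
        total = 0.0
        for attr, (weight, higher_is_better) in ATTRIBUTES.items():
            value = SCALE[RATINGS[tech][attr]]
            total += weight * (value if higher_is_better else (4 - value))
        return total

    for tech in sorted(RATINGS, key=score, reverse=True):
        print(f"{tech:24s} {score(tech):.1f}")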


POTENTIAL BENEFITS OF A WIRELESS BROADBAND SOLUTION

Implementing a wireless broadband network can bring many benefits. Wireless solutions provide an opportunity to give a project site-wide access to data, voice, applications, project servers, and other specialized services. They also provide additional features and benefits. Some of these are security and safety improvements, more rapid deployment, survivability, asset tracking, and sustainable development. These additional benefits may be a high priority on some project sites and, by themselves, justify implementation. Even where wireless broadband is less likely to fit as a bigger solution, there may still be target opportunities and niches that might be predisposed to wireless solutions and applications. Examples of the benefits of wireless broadband on large industrial project sites are explained in more detail in the following paragraphs.

Untethered Workforce

A wireless broadband communications system can benefit a project by creating an untethered workforce that can access voice communications, e-mail, and important data at any time or place on the project site. For example, a wireless system would enable project personnel to operate with tablets and laptops anywhere in the field in real-time collaboration with engineers at the home office or any other location available on the company network. Think about wireless handheld devices and how prevalent they have become over the years. The untethered workforce could be the most visible and most evident benefit of implementing a wireless broadband system on a project.


Sustainable Development

Wireless broadband networks deployed in third-world countries or territories that previously had none can be left behind as a sustainable development initiative. Local economies can greatly benefit from access to basic communication systems. Students can attend online classes at universities outside their means of travel, and businesses can instantly become globally visible, exponentially increasing their customer base. A sustainable development project such as this helps broaden project acceptance by the local communities.

Retention and Employee Satisfaction

The availability of broadband services and wireless broadband networks is playing an increasing role in attracting and retaining employees, particularly for projects in remote locations. Regardless of where the workforce is from, employee expectations are on the rise when it comes to onsite perks or benefits. One of these is the ability to remain connected with friends, family, and the outside world. Given the choice between two projects, one with and one without wireless access, it is not difficult to envision which would have the most difficulty attaining and keeping its workforce.

Advanced Applications

Safety and Security

When a business office or project site is being deployed in a foreign country where hostilities are prevalent, safety and security are the two most important considerations. Here, a wireless broadband solution provides many improvement opportunities:
• The speed at which information can be relayed is very important and, in certain situations, can save lives. For example, simple text messaging allows the dissemination of safety messages prompting urgent action to all employees.
• Ubiquitous coverage allows personnel to remain in constant communication in the presence of dangerous situations.
• Global positioning system (GPS)-enabled mobile devices give a multitude of options for tracking and communicating with large numbers of people (a simple geofence check of this kind is sketched after this list).
• In the near future, mobile devices may even be used to detect harmful threats by using a network of cell phones to detect and track chemical, biological, and radiological events.
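The following minimal Python sketch illustrates the GPS-based tracking option referenced in the list above. The coordinates, zone radius, and worker identifiers are hypothetical, and a real system would tie the alert into the site's messaging or PTT platform rather than printing to the console.

    # Illustrative geofence check for GPS-enabled devices on a project site.
    # Coordinates and the exclusion-zone radius are hypothetical.
    import math

    def distance_m(lat1, lon1, lat2, lon2):
        """Great-circle distance (haversine), in metres."""
        r = 6371000.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    HAZARD_ZONE = (35.0001, -106.0002, 150.0)   # lat, lon, radius in metres

    def check_worker(worker_id, lat, lon):
        zlat, zlon, radius = HAZARD_ZONE
        if distance_m(lat, lon, zlat, zlon) <= radius:
            # In a real deployment this would trigger a safety alert message.
            print(f"ALERT: {worker_id} inside hazard zone")
        else:
            print(f"{worker_id} clear")

    check_worker("crew-17", 35.0002, -106.0001)   # inside -> ALERT
    check_worker("crew-09", 35.0100, -106.0200)   # well outside -> clear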

Rapid Deployment

The speed at which a communications network can be deployed may be the most important factor, at least initially. Wireless broadband solutions provide a way in which to deploy onsite communications very quickly, especially when other alternatives are not available or require additional outside resources or construction efforts. Today, it is possible to have a complete communications system set up and ready to go within hours of arriving on site, using a rapid-deployment approach colloquially referred to as the network in a box (NIB). Extending a coverage area in a traditional wired network involves digging another trench and laying more cable. The amount of time and money spent to achieve this appears high when compared with the wireless solution. Various telecommunications vendors provide rapid deployment or NIB systems that can be quickly set up and operational to provide immediate coverage when emergency or merely temporary communication is needed. Examples of NIB systems include Cisco's Aironet 1524 Lightweight Outdoor Mesh Access Point [4] and Tecore's rapidly deployable mobile networks [5]. NIB systems typically provide both radio and core network functions, yet are small enough to fit in the back of an SUV.

Survivability

If a project network is going to be active for a long time or carry business-critical information, its survivability may become more important than its speed or scalability. Survivability refers to a network's ability to better withstand disruptions. For example, fiber, once installed, provides very high data rates; however, depending on its location, it may be prone to downtime caused by cuts (both accidental and intentional). Cables are susceptible to being cut during construction digging or even intentionally by malevolent forces. Wireless solutions can be used to provide backup or protection from such events. They can also be used with other solutions to provide more robust communications.

Asset Tracking

Tracking assets is an important task that all businesses face. Wireless broadband LANs provide suitable environments for implementing asset tracking solutions like radio frequency identification (RFID) systems. RFID uses paper-thin transponder tags that can be placed on equipment, products, and materials. Each tag emits a unique signal that positively identifies the object, as well as storing descriptive information.



RFID eliminates the need for an employee to get close enough to an item to physically scan a barcode to retrieve or write information on a particular piece of inventory. RFID also dramatically speeds the process of logging when assets are on the move during shipping and receiving. Zones can be set up to automatically scan and track items whenever they move in or out of a zone. Tags can hold several fields of data about a particular object, including name and/or catalog number. Wireless asset tracking technologies are particularly useful where a large inventory is dispersed across a large area.
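A minimal sketch of the zone-based tracking idea described above follows. The tag IDs, zone names, and event format are hypothetical; a production system would ride the site's wireless LAN and feed the project inventory database rather than an in-memory dictionary.

    # Toy RFID zone-tracking log: readers post tag-read events, and the
    # tracker keeps the last known zone of every tagged asset.
    from dataclasses import dataclass
    from datetime import datetime, timezone
    from typing import Dict

    @dataclass
    class TagRead:
        tag_id: str        # unique ID emitted by the transponder tag
        zone: str          # reader location, e.g., "warehouse-gate-2"
        seen_at: datetime

    class AssetTracker:
        def __init__(self):
            self.last_seen: Dict[str, TagRead] = {}

        def record(self, tag_id: str, zone: str) -> None:
            self.last_seen[tag_id] = TagRead(tag_id, zone,
                                             datetime.now(timezone.utc))

        def locate(self, tag_id: str) -> str:
            read = self.last_seen.get(tag_id)
            return read.zone if read else "unknown"

    tracker = AssetTracker()
    tracker.record("TAG-004512", "laydown-yard")      # asset received
    tracker.record("TAG-004512", "warehouse-gate-2")  # asset moved
    print(tracker.locate("TAG-004512"))               # -> warehouse-gate-2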

IDENTIFYING APPLICATIONS AND REQUIREMENTS

It is important to consider the types of applications and services required on project sites that may be delivered more effectively via a wireless broadband network (see Table 4 for examples). As shown in Figure 2, applications and services have minimum performance requirements that must be met before they can operate properly and satisfy user expectations. For example, wireless coverage that supports only voice services or very limited data will not add sufficient large-scale value. Likewise, large bandwidths accompanied by high latencies are unacceptable, because many specialized applications have strict performance requirements.
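To make the idea of minimum performance requirements concrete, the sketch below screens a candidate link against assumed per-service thresholds loosely in the spirit of Figure 2; the numbers are illustrative only and are not taken from any standard.

# Assumed (illustrative) minimum bandwidth and maximum latency per service.
SERVICE_REQUIREMENTS = {
    "messaging (SMS)": {"min_kbps": 10, "max_latency_ms": 2000},
    "voice telephony": {"min_kbps": 64, "max_latency_ms": 200},
    "mobile office/e-mail": {"min_kbps": 200, "max_latency_ms": 500},
    "video conferencing": {"min_kbps": 1000, "max_latency_ms": 100},
    "video streaming": {"min_kbps": 5000, "max_latency_ms": 1000},
}

def supported_services(link_kbps, link_latency_ms):
    """Return the services a link can support given its bandwidth and latency."""
    return [name for name, req in SERVICE_REQUIREMENTS.items()
            if link_kbps >= req["min_kbps"] and link_latency_ms <= req["max_latency_ms"]]

# A 2 Mb/s link with 150 ms latency supports messaging, voice, and mobile office/e-mail,
# but not video conferencing (latency too high) or video streaming (bandwidth too low).
print(supported_services(link_kbps=2000, link_latency_ms=150))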


Sensors and Video Surveillance

Any number of sensors are used throughout a large project site, from perimeter surveillance to vehicle telemetry to monitoring water levels in dams and retention ponds. A large-scale wireless broadband network increases the practicality of such sensors and encourages innovative ways of solving unique problems.

Video is a prime area where a wireless system dramatically increases flexibility. Without the wireless network, video camera locations either need to be determined in advance and prewired, or are limited to areas where existing wired interfaces are present. Of course, the wired infrastructure can always be expanded, but only at a cost. With wireless video capabilities, cameras can be installed anywhere within the coverage area and automatically be connected to the proper surveillance network. Simply put, without such a wireless network, the use of many of these sensors would be impractical or cost-prohibitive.

In summary, many applications can be deployed inside a wireless network that add to or improve that network's features, features that may prove to be crucial to overall project success and warrant careful consideration. The examples mentioned in this section represent only a few possible options. Many more exist, some of which have yet to be fully investigated.

IMPEDIMENTS TO A WIRELESS BROADBAND SOLUTION

Broadband wireless networks have been tested, used, and deployed in different fashions on different projects on multiple occasions. In many cases, however, these networks have been less than successful. Lower-than-expected performance, poor coverage, unproven technology, lack of interoperability with required applications, and general underutilization are some of the more common causes of failure. Wireless deployment on a large-scale project (beyond hotspot, Internet-café-style, and office environments) has yet to be fully proven, and the initial attempts to do so have often led to distrust of vendor marketing ploys and of the technology itself. These factors are compounded by the fact that projects are extremely cost and schedule (risk) sensitive and often do not have the luxury of implementing new technologies without a well-thought-out and proven business case. In most cases, wireless does not fit within this framework.

In addition to these more intangible perceptual impediments, spectrum, performance, and interoperability are the three most tangible impediments to successfully deploying wireless broadband on large project sites. Each is discussed in the following paragraphs.

Table 4. Application Examples


Generic: Two-Way Radios (PTT), Mobile Voice, Mobile Data, E-Mail, Streaming Video, Messaging (SMS, Multimedia)

Specific: Project Database, Asset and Inventory Tracking, Timecards, Other Corporate Applications, Sensors, Location-Based Services, Emergency Services, Survivability/Backup, Other


Figure 2. Latency and Bandwidth Requirements for Various Services [6]
[The figure plots required bandwidth (from >64 Kb/s to >5 Mb/s) against tolerable network latency (from >1 second down to about 20 ms) for services such as SMS, voice mail, MMS and web browsing, voice telephony, video telephony, audio streaming, mobile office/e-mail, FTP, multiplayer and real-time gaming, M2M remote control, audio/video download, video conferencing, and video streaming.]
(Source: IST-2003-507581 WINNER, D1.3 Version 1.0, Final Usage Scenarios, 30/06/2005; Parameters for Tele-traffic Characterization in Enhanced UMTS, University of Beira, Portugal, 2003)

Spectrum

Spectrum remains the most critical piece of the puzzle for wireless broadband technology and its deployment. Any wireless solution requires a finite amount of spectrum; therefore, the availability of spectrum is the first consideration on any project and is crucial to the overall success of the wireless solution. Because licensed spectrum is usually expensive and involves dealing with an often unknown bureaucracy, unlicensed spectrum is initially very appealing. However, it is important to consider the potential risks associated with using unlicensed spectrum and to plan accordingly by developing a robust network, periodically scanning the spectrum, etc. So many different rules and regulations control the use of wireless networks in every country that complying with them is generally the initial hurdle to deploying networks at many of the remote project sites throughout the world. Up to this point, the most widely regulated wireless spectrum has been standard ultra high frequency (UHF) and very high frequency (VHF). To further complicate matters, as wireless technologies evolve, their bandwidth and spectrum requirements generally increase.

For example, future fourth-generation (4G) technologies will rely on scalable, very wideband spectrum such as 20 MHz and even 40 MHz channels. Even though current WiMAX specifications allow for 20 MHz bandwidths, profiles have yet to be defined beyond 10 MHz channels. Without this wideband spectrum, performance metrics and target data rates will be unachievable.

Performance Measurements and Considerations

On the most basic level, wireless communications systems on jobsites need to provide connectivity (quick, reliable, and often temporary) to construction trailers and other site locations not easily serviced via traditional cable due to time or cost constraints. In reality, due to bandwidth demands and security and reliability issues, high-bandwidth cable is, and will continue to be, used in highly concentrated and other key areas where replacing it with wireless is not an option. Without a proper understanding of performance requirements and considerations, wireless broadband cannot realistically be thought of as a replacement for traditional LAN cables inside buildings or for fiber or other cables installed for high-bandwidth applications.

Interoperability of Specialized Applications

In many cases, specialized or company-specific applications are neither optimized nor geared for use in a wireless environment. Instead, they are usually designed for use on corporate LANs. Because of this, their performance could suffer substantially and could even reduce the performance of applications that have been optimized for wireless systems. An ongoing effort is needed to further test and develop the applications typically used on large industrial sites to improve their bandwidth and latency efficiencies or even to optimize them for use on mobile devices.


CONSIDERATIONS AND RECOMMENDATIONS



Commercial versus Enterprise Solutions

Even though a certain project may seem to be an ideal candidate to benefit from the deployment of an onsite wireless broadband network, one or more specific negative attributes may weigh more heavily. Depending on the project's configuration and location, numerous solutions could be used to provide onsite wireless access. In a remote location, there may be no choice but to install and operate a standalone enterprise network. However, at a project site in or near a city, or for a short-term project, especially in the United States and other developed countries, operating an independent wireless network may be impractical for a number of reasons, such as spectrum availability. For these projects, leasing the services of an existing wireless provider might be more attractive, offering exceptional coverage and assured performance. In some cases, time, money, number of users, etc., make the best solution the use of existing commercial services. The point is to use the best solution for each project; building a wireless network on every project is not an automatic solution for project communication needs.

The graph in Figure 3 plots the results of an analysis of using a commercial or an enterprise wireless (WiMAX) solution on a sample project. Based on conservative assumptions, there is a slightly greater than 5-year breakeven point for using a dedicated enterprise wireless solution instead of an existing commercially available solution. This outcome does not take into account additional supportability issues. The assumptions used in the analysis are listed in Table 5.

Figure 3. Commercial versus Enterprise Analysis
[The figure plots total cost ($ thousand, 0 to about 700) against years (0 to 10) for the commercial and the enterprise WiMAX alternatives.]


Table 5. Commercial versus Enterprise Assumptions


Commercial System
  Users: 100
  Cost per User (CPE): $200
  Cost per User (Monthly): $50
  Total Initial Cost: $20,000
  Total Monthly Cost: $5,000
  Total Annual Cost: $60,000

WiMAX System
  Required Distance (Miles): 11
  Number of Miles per Base Transceiver Station: 4
  Estimated Number of Sites: 3
  Miscellaneous Costs (Antennas, Cables, etc.): $3,000/Site ($9,000 Total)
  Radio Equipment: $25,000/Site ($75,000 Total)
  Customer Premises Equipment: $250/User ($25,000 Total)
  Core Network Equipment: $150,000 Total
  Site Lease Cost: $500/Month ($6,000/Year)
  Total Lease Cost: $1,500/Month ($18,000/Year)
  Total Initial Cost: $259,000
  Total Monthly Cost: $1,500
  Total Yearly Cost: $18,000
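The break-even behavior plotted in Figure 3 can be reproduced with a few lines of arithmetic; the sketch below uses the Table 5 assumptions and simply finds the first month at which the cumulative enterprise (WiMAX) cost drops below the cumulative commercial cost.

# Table 5 assumptions.
commercial_initial, commercial_monthly = 20_000, 5_000   # 100 users x $200 CPE; 100 x $50/month
wimax_initial, wimax_monthly = 259_000, 1_500            # equipment plus 3 leased sites at $500/month

def cumulative_cost(initial, monthly, months):
    return initial + monthly * months

# Find the first month at which the enterprise build becomes cheaper overall.
for month in range(1, 10 * 12 + 1):
    if cumulative_cost(wimax_initial, wimax_monthly, month) <= \
            cumulative_cost(commercial_initial, commercial_monthly, month):
        print(f"Break-even after {month} months (about {month / 12:.1f} years)")
        break

With these inputs the crossover falls at 69 months, consistent with the slightly-greater-than-5-year break-even point cited in the text.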


These assumptions include a minimum monthly lease cost of $500 for each site. This recurring expense drives the increase in the enterprise solution's total cost over time. If, for example, no sites currently exist and all sites need to be constructed, the initial cost would be higher but the cost over time would remain relatively flat.

Network Planning

The value of proper planning and engineering cannot be overstated, and network planning for a proposed wireless broadband solution is no exception. The ease with which wireless access points can be deployed in the home or around a building or campus does not translate into ease of implementing a wireless network on a large project site. Networks deployed without this consideration are destined to face challenges and issues.

Coverage Required

The actual layout of a project site must be taken into account in determining optimal equipment locations and analyzing expected coverage. This analysis should consider geographic features and clutter in the desired coverage area. Are there hills or trees that could obstruct coverage? Is indoor coverage required? What types of buildings are there, and how are they constructed? What kinds of towers (existing, rooftop, new construction) are available, and where are they? What sorts of customer premises equipment (CPE) will be used (fixed outdoor, fixed indoor, mobile, etc.)? Answering these and similar questions and preparing accordingly will help optimize coverage and performance in the design. The results of this analysis should appear similar to the coverage plot shown in Figure 4.

Traffic Supported: Capacity

A traffic analysis needs to be performed for the target project based on the expected number and types of users. This analysis determines user requirements and helps to appropriately size the wireless network. Just because coverage is available does not mean the resources are adequate to provide service to the desired number of users. Capacity planning should address not only the number of users, but also the demands they place on the network. Some users may be voice only, some may be heavy data users, some may use video cameras or other sensors, etc.

Number of Sites Required

The results of the coverage analysis and traffic study determine the expected number of sites and radio resources required to adequately cover the project site. This may include specialized sites for indoor coverage or enhanced coverage for potential trouble areas.
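The coverage and capacity considerations above can be combined into a first-pass site count, as in the hypothetical sketch below; the per-site range and usable throughput are assumptions for illustration, not vendor specifications.

import math

def sites_required(corridor_miles, miles_per_site,
                   users, avg_mbps_per_user, site_capacity_mbps):
    """Sites needed to satisfy both the coverage and the capacity estimates."""
    coverage_sites = math.ceil(corridor_miles / miles_per_site)
    capacity_sites = math.ceil(users * avg_mbps_per_user / site_capacity_mbps)
    return max(coverage_sites, capacity_sites)

# Example: an 11-mile corridor with 4 miles per base station (as in Table 5),
# 100 users averaging 0.25 Mb/s each, and 10 Mb/s of usable capacity per site (assumed).
print(sites_required(11, 4, 100, 0.25, 10))   # -> 3 sites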

Supportability: Operating Expenditures

While this paper primarily considers the capital expenditures (CAPEX) of purchase and deployment, the operating expenditures (OPEX) should not be forgotten. Supportability is a major concern (and for good reason), especially within the function responsible for operating the network, usually the information technology (IT) function. Initial costs are only a small portion of overall costs; operating and support costs play a critical role in the successful implementation of wireless broadband communications across multiple projects within a company. Hence, vendor and process standardization is crucial not only to lower OPEX, but also to prevent each project from individually reinventing the wheel, so to speak.


Figure 4. Typical Wireless Broadband Access Coverage Plot

CONCLUSIONS

Wireless broadband is currently being used, to differing degrees, on many of today's large project sites. Current implementations are usually extensions of basic WLAN systems, extending wireless access points from an indoor office environment to limited outdoor and wide area coverage. Success has also been limited because not much network planning is usually involved. In addition, wireless bridging for LAN extension is becoming very popular for providing connectivity throughout a project site (office to trailer or trailer to trailer). There is keen interest on projects in continuing to improve performance by deploying, where appropriate, advanced wireless broadband technologies. End users need and want wireless connectivity and technologies, and the expertise needed to provide them is readily available.


One of the biggest hurdles to wireless broadband deployments is not technical, but cultural, namely, risk avoidance. Project sensitivity to cost and schedule factors necessitates the preparation of a well-thought-out and proven business case, i.e., a well-defined need for a particular wireless solution. Projects are simply reluctant to change their tried-and-true, well-known methods of operation for something new. There is an underlying need to raise awareness of what constitute realistic capabilities of, and expectations for, wireless networks. Unfortunately, wireless broadband networks have been sold as a solution to all communication needs. Even though this is not the case, there are certain areas where wireless is decidedly useful.

ACKNOWLEDGMENTS

The contents of this paper were adapted from work performed under a 2008 Bechtel Technology Grant titled "Broadband Wireless Technology for Bechtel Project Sites." [7] The author would like to thank the co-recipients of the grant, as well as Larry Bartsch, all of whom contributed to the effort.

TRADEMARKS

3GPP is a trademark of the European Telecommunications Standards Institute (ETSI) in France and other jurisdictions.
Alcatel-Lucent is a trademark of Alcatel-Lucent, Alcatel, and Lucent Technologies.
Alvarion is a registered trademark of Alvarion Ltd.
Cisco Systems and Aironet are registered trademarks of Cisco Systems, Inc., and/or its affiliates in the United States and certain other countries.
Huawei is a registered trademark of Huawei Corporation or its subsidiaries in the People's Republic of China and other countries (regions).
Motorola is registered in the U.S. Patent and Trademark Office by Motorola, Inc.
Nortel is a trademark of Nortel Networks.
Tecore Networks is a registered trademark with the U.S. Patent and Trademark Office.
WiMAX and WiMAX Forum are trademarks of the WiMAX Forum.

REFERENCES

[1] Bechtel website, <http://www.bechtel.com>.
[2] Motorola; see, e.g.: "Motorola Broadens Portfolio of Communications Solutions for the Mining Industry" <http://www.motorola.com/mediacenter/news/detail.jsp?globalObjectId=10166_10095_23>; "Case Study: In the Coal Fields of Wyoming, wi4 Mesh Solutions are Mining Enhanced Efficiency, Productivity and Profitability" <http://www.motorola.com/staticfiles/Business/Solutions/Industry%20Solutions/Manufacturing/MOTOwi4/_Document/_Static%20files/International-Mining[1].pdf?pLibItem=1&keywords=Manufacturing+Education+Case%20Studies>; "MOTOMESH Solo Networks: Enduring Broadband Performance in Challenging RF Environments" <http://www.motorola.com/staticfiles/Business/Products/Wireless%20Broadband%20Networks/Mesh%20Networks/MOTOMESH%20Solo/_Documents/Static%20files/MOTOMESH%20Solo_Brochure_9.21.08.pdf>.
[3] "Alvarion: Pumping Up Productivity," Alvarion <http://www.alvarion.com/upload/contents/291/Oil_and_gas_Brochure_LR[1].pdf>.
[4] "Cisco Aironet 1524 Lightweight Outdoor Mesh Access Point," Cisco Systems <http://www.cisco.com/en/US/products/ps8731/index.html>.
[5] "Rapidly Deployable Mobile Networks," Tecore Networks <http://www.tecore.com/solutions/rapid.cfm>.
[6] "Charting the Course for Mobile Broadband: Heading Towards High-Performance All-IP with LTE/SAE," Nokia Siemens Networks white paper, 2008 <http://www.nokiasiemensnetworks.com/NR/rdonlyres/AB092948-6281-4452-8D5990B7A310B5BA/0/broadband_lte_sae_update_intranet.pdf>.
[7] J. Centi, J. Owens, and N. Youell, "Broadband Wireless Technology for Bechtel Project Sites," Bechtel Technology Grant, 2008.


BIOGRAPHY
Nathan Youell joined Bechtel in 2001 and is currently a systems engineer with the Strategic Infrastructure Group. He is the resident subject matter expert for wireless systems and is responsible for testing and evaluating telecommunications equipment, as well as modeling and simulating critical infrastructure, with a primary focus on telecommunications systems. Previously, as a staff scientist/engineer and the manager of the Bechtel Telecommunications Laboratory, Nathan gained and then provided expertise in developing and implementing test plans and procedures. He was instrumental in creating the Bechtel Training, Demonstration, and Research (TDR) Laboratory in Frederick, Maryland, and the Bechtel Wireless Test Bed (BWTB) in Idaho Falls, Idaho. He also tested numerous telecommunications equipment and technologies, including TMA, 802.11, 802.16, GSM, DAS, DWDM, FSO, microwave and millimeter wave radio, and wireless repeaters. Earlier, Nathan was an RF engineer in the New York and Washington, DC, markets as part of Bechtel's nationwide build-out contract with AT&T Wireless. Nathan has authored three papers and co-authored three more in the Bechtel Communications Technical Journal (formerly, Bechtel Telecommunications Technical Journal) since its inception in December 2002; his most recent paper, "4G: Fact or Fiction?", appeared in the September 2008 issue. Nathan received his MS and BS degrees, both in Electrical Engineering, from Clemson University, South Carolina.


DESKTOP VIRTUALIZATION AND THIN CLIENT OPTIONS


Issue Date: December 2008

Abstract: Global engineering, procurement, and construction (EPC) firms face an increasingly difficult environment; projects are growing in risk, complexity, and size, while customers expect delivery on or ahead of schedule, at or below budget, with exacting safety and quality standards. EPC firms must leverage advances in technology to improve their workers' efficiency and effectiveness and utilize the opportunities presented by a globally educated workforce. The unique demands of the flexible, mobile, and distributed employees of these firms can be addressed by virtualization and thin client architectures. This paper outlines the challenges faced by the EPC firm and the advantages and disadvantages to consider when evaluating both desktop and application virtualization, as well as client architecture choices.

Keywords: client, data center, desktop, hypervisor, infrastructure, paravirtualization, remote, server, streaming, terminal services, thick, thin, virtual, virtualization
INTRODUCTION

Modern businesses spend much time, money, and effort to maintain their desktop computing infrastructures. Along with the costs of deploying and maintaining hardware, information technology (IT) departments are saddled with the burden of addressing software updates, application installations, client issues, drivers, patches, license administration, malware and viruses, and general troubleshooting. While remote administration tools have eased this burden, many issues arising within the context of the traditional computing architecture (see Figure 1) cannot be solved remotely, requiring direct interaction with the actual desktop. [1]

Brian Coombe
bcoombe@bechtel.com

Figure 1. Traditional Computing Architecture
[The figure shows applications and the desktop OS on client PCs, connected over the LAN and WAN to data center resources (applications, Exchange, proxy, CAD, and storage) and the Internet.]



ABBREVIATIONS, ACRONYMS, AND TERMS


BIOS - basic input/output system
CAD - computer-aided design
DLL - dynamic link library
EPC - engineering, procurement, and construction
IT - information technology
LAN - local area network
OS - operating system
PC - personal computer
RAM - random access memory
TCP - transmission control protocol
USB - universal serial bus
VDI - virtual desktop infrastructure
WAN - wide area network

This problem is particularly complicated in the global engineering, procurement, and construction (EPC) industry, where users may be remotely deployed, may be located at disparate sites, and may perform multiple roles, each needing different applications. At the same time, EPC projects are growing in risk, complexity, and size, while customers continue to expect delivery on or ahead of schedule, at or below budget, with exacting safety and quality standards. These circumstances make it vital for EPC firms to leverage advances in technology to improve their workers' efficiency and effectiveness and to utilize the opportunities presented by a globally educated workforce. Fortunately, these distinct demands can be addressed by virtualization and thin client architectures.

VIRTUALIZATION

Managing the licenses, Java programming language clients, database clients, drivers, and other potentially conflicting elements of a large computer network often requires a visit from a technician. An employee changing roles may need to request new applications or services, requiring multiple installs. Trying to patch or install a new application on multiple desktops, whether 30 or 3,000, poses a challenge whether this is done remotely or one at a time by service technicians.

Virtualization moves the computing hardware and the applications residing on that hardware from the desktop personal computer (PC) to the data center. This allows the IT department to manage the applications and resources from a centralized, single location, often with the assistance of flexible configurations and software tools.

A forerunner of virtualization was terminal services. Unlike traditional client/server computing, where a dedicated, per-application client with processing and local memory storage is required, terminal services use an application to present a client interface to the user. The client application only presents a video display of the client (for this reason, it is often called a presentation server) and only receives and forwards input from the mouse and keyboard.


Figure 2. Terminal Services



Figure 3. Desktop Virtualization
[The figure shows thin client desktop environments connected over the LAN and WAN to applications, Exchange, proxy, and storage hosted in the data center, with access to the Internet.]

The same presentation server can be used to create multiple sessions with various applications. [2] Figure 2 illustrates a typical terminal services configuration with presentation server and client interfaces.

Desktop virtualization builds on the concept of terminal services. In its simplest form, desktop virtualization uses the data center environment to host the client applications, streaming the client presentation over the network to a thick or thin client¹ (see Figure 3). This approach, while effective, does not satisfy the desire of many thin client users to have their own personal machines and also limits the execution of certain types of applications. New solutions have emerged to address these challenges.

While native virtualization was possible on higher-end IBM machines and other hardware typical to UNIX computer operating system (OS) environments, it was not possible, until 2005, to natively virtualize an OS on hardware that uses the x86 instruction set architecture widely implemented in processors from many computer and chipset manufacturers.
1 Thick and thin refer to the configuration of hardware and applications at the client interface. A thick client has a hard disk drive and can use installed applications to perform most of the processing itself; there is no need for continuous communication with the server because the client primarily communicates information for storage. A thin client, on the other hand, is designed to be compact, with most of the data processing done by the server; typically lacking a hard disk drive, the thin client acts as a simple terminal requiring constant communication with the server.

Within the last few years, both Intel and AMD have included virtualization capabilities directly in their chipsets. [3]

Hybrid approaches leverage a thick client desktop's power to perform processing. One approach, known as desktop streaming, involves transferring the code required by applications to the user's terminal and memory just for the session, with no permanent storage on the user's machine. Another approach, application streaming, allows the server to send the runtime cache for each application to the terminal, potentially limiting the total number of application licenses required. Chipset manufacturers are also now building new methods of virtualization, allowing virtualized OSs to run natively. The architectures, advantages, and challenges of each are explored later in this paper. Finally, because desktop virtualization is not an independent vector, the impacts to the data center of moving to a virtualized environment are discussed as well. [4]
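As a small aside, on a Linux host it is possible to see whether a processor advertises these hardware virtualization extensions by inspecting its CPU flags; the sketch below looks for the Intel VT-x (vmx) and AMD-V (svm) flags and is offered only as an illustration.

def hardware_virtualization_support(cpuinfo_path="/proc/cpuinfo"):
    """Report whether the CPU advertises Intel VT-x (vmx) or AMD-V (svm) support."""
    try:
        with open(cpuinfo_path) as f:
            flags = {flag for line in f if line.startswith("flags")
                     for flag in line.split(":", 1)[1].split()}
    except OSError:
        return "could not read CPU information"
    if "vmx" in flags:
        return "Intel VT-x available"
    if "svm" in flags:
        return "AMD-V available"
    return "no hardware virtualization extensions advertised"

print(hardware_virtualization_support())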


THIN CLIENT ARCHITECTURES

Thin client virtualization, often referred to as virtual desktop infrastructure (VDI), makes use of a full desktop environment operating remotely.



While the client machine could be thick, the full advantages of VDI are obtained when thin clients are used. As stated earlier, this type of virtualization is very similar to terminal services; users have already had the ability to access a full desktop remotely via a standard terminal server program bundled with the typical OS. The primary differentiator is how the user environment is provisioned. In traditional terminal services, multiple users access the same resources. Environments can be tailored for the individual user, but resources are not dedicated, often causing performance problems. Furthermore, certain applications often could not be run via terminal services due to issues with executing certain instructions. Additionally, multiple users sharing an application can cause issues with temporary files; applications that interface with network traffic protocols or listen for particular transmission control protocol (TCP) ports may encounter conflicts.

In contrast, desktop virtualization uses the ability to run an OS in a virtual environment and moves that virtual environment to the data center. This enables legacy, low-performance, or thin client workstations to be deployed to users. Virtualization is realized by deploying an independent software layer that provides abstraction between the computer hardware and the OS kernel. This layer separates the OS from the physical hardware, and each machine instruction is passed from the OS to the hardware through it. This layer is often known as a hypervisor.

Virtualization can be a complete simulation, known as full virtualization, in which any OS can run in unmodified form, or it can take a more limited form known as partial virtualization. In partial virtualization, various resources such as address space can be virtualized and simulated and, therefore, isolated and reused. Partial virtualization allows applications to run in isolation but does not work for all applications, particularly ones that need to access particular hardware that may not be virtualized. [5] Paravirtualization is a hybrid of full and partial virtualization; it simulates the instructions of the underlying hardware but has a software interface that is slightly different from the actual hardware, requiring OSs that reside on the virtualized hardware to be slightly modified. Paravirtualization often provides better performance than full virtualization.

Whether running the native OS in full virtualization mode or in a modified partial or paravirtualization mode, desktop virtualization uses management infrastructure to create individual user images, complete with each user's suite of application software.

Multiple vendors offer different desktop virtualization configurations, which typically include server software, client software, and management infrastructure. Various protocols can be used, generally specific to the vendor implementation. Extreme examples use a diskless workstation that loads its boot image to random access memory (RAM) over the network from the server or that uses virtualized disk operations translated into instructions executed over the network protocols. [6]

Diskless workstations have significant advantages for EPC firms. Areas of high radiation, extreme temperature, heavy dust, or other potential contaminants are well suited for thin clients. Because it has no fans or other moving parts, a thin client machine damaged in a harsh environment is less costly to replace and does not require significant work to put a new machine in place. Thin client machines also offer much better security: there is no data to steal from the machine, and an opportunistic worker or burglar is less likely to pilfer a device of no value by itself.

Both in the office and in the field, there are several other advantages of using thin client hardware in a virtualized environment. Users who move from office to office or from the office to the field encounter an identical environment, data, and applications wherever they use the network. Client machines, whether fully thin or thick, using virtualized desktop streaming are less dependent on technology refreshes and need to be replaced less often. Clients that fail can be refreshed immediately, or a user can be moved to another client seamlessly. Desktop virtualization also results in less network traffic, since the only traffic sent to and from the client is the visual display of information and the inputs from the mouse and keyboard; most of the network file- and information-processing traffic is isolated to the data center. Through efficient planning and design, network traffic can be isolated to particular server and storage arrays, reducing the overall load on the local and wide area networks (LANs and WANs).

Finally, moving the processing to the data center results in overall processor savings, along with better power and cooling usage. Typically, each desktop user is given processor, storage, and memory consistent with peak requirements, but at any given time, only a small percentage of users require peak processing or memory.


Furthermore, many applications only require asynchronous processing and offer limited differences in user experience if the execution of those operations is resource-leveled. In contrast, the processing, storage, and memory allocations of a consolidated server running images of workstations in use can be sized to be greater than the sum of the average demands of the users but much less than the sum of everyone's peak demands together. Furthermore, the larger-scale processing, memory, and storage of servers make much better use of economies of scale, resulting in a lower capital expense. In addition to the capital savings, the cost of operating equipment within the data center can be significantly lower because the mass concentration of equipment means shorter power runs, less power distribution equipment to maintain, and less overall, more concentrated cooling equipment.

Even with all of these advantages, enterprises can be reluctant to move to a totally thin client infrastructure. Users often feel an attachment to, and have a comfort level with, their PC and do not want to give up the box under the desk for a fully thin client. Furthermore, mobile and traveling users often prefer to have a laptop so they can work out of hotel rooms and on airplanes, perform troubleshooting on equipment in the field, and give demonstrations or presentations at customer sites.

Using thin clients can also pose a challenge for certain types of users. Thin clients can be designed to support particular types of peripherals, including those connected via universal serial bus (USB), but they do not have the proper drivers to support every make and model. Multimedia-heavy applications, particularly those with full-motion video and sound, can strain network resources and cause performance problems. Certain applications require access to video or sound cards; addressing this in a totally thin client environment can prove challenging, if possible at all.

A simple solution that addresses these concerns is to use a thick client for those users (mobile, bandwidth-intensive, and the like) for whom the thin client solution just does not work, while perhaps virtualizing the OS, but only for some or even none of the applications. By nature, this solution is inefficient, providing additional and potentially wasted hardware distributed throughout the enterprise. It also brings back some of the problems that thin client virtualization was supposed to fix.

As a result, newer, hybrid approaches have been developed and are being deployed that more efficiently use the distributed processing and storage while maintaining some of the key benefits of a fully deployed thin client infrastructure. These hybrid approaches are categorized as desktop streaming or application streaming; each is discussed in the following sections.
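To illustrate the consolidation sizing argument above, a server hosting virtual desktops can be sized somewhere between the sum of average demands and the sum of peak demands; the per-user figures in the sketch below are assumed values, not measured workloads.

# Assumed per-user desktop demands (illustrative only).
users = 500
avg_cpu_ghz, peak_cpu_ghz = 0.3, 2.5   # typical vs. peak CPU demand per user
avg_ram_gb, peak_ram_gb = 0.5, 2.0     # typical vs. peak memory per user
headroom = 1.3                         # sizing margin above the sum of averages

sum_avg_cpu = users * avg_cpu_ghz
sum_peak_cpu = users * peak_cpu_ghz
consolidated_cpu = headroom * sum_avg_cpu      # falls between the two bounds

print(f"CPU: {sum_avg_cpu:.0f} GHz (sum of averages) vs. {sum_peak_cpu:.0f} GHz (sum of peaks); "
      f"consolidated sizing about {consolidated_cpu:.0f} GHz")
print(f"RAM: {users * avg_ram_gb:.0f} GB (averages) vs. {users * peak_ram_gb:.0f} GB (peaks)")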

DESKTOP STREAMING

In desktop streaming (see Figure 4), each user has a complete OS and images of the applications being used, all running on a virtual machine resident in the desktop PC. A hypervisor manages the virtual machine and the applications that it executes, as well as providing the interface to the images hosted in the data center. Both the OS and applications are streamed to the PC; updates, changes, and patches are only done once, in the image resident on the server. [7]

Desktop streaming can provide better performance in demanding environments, since it leverages local processing and memory. Diskless workstations can be used, executing the streamed OS from local RAM. Alternatively, the client can be a full-blown terminal or laptop where hard drive space is used for swap files, and local storage can be provisioned as well. In one such implementation, a user's primary (i.e., C:\) drive resides on the server and is accessible from any login, while the local drive becomes the D:\ drive. Users of desktop streaming are generally pleased with the much faster boot via basic input/output system (BIOS) and streamed OS versus the traditional boot of the locally loaded OS.

The security and reliability of this implementation offer several advantages. Centralized data permits only images of the information required by the application to be streamed, preserving control of sensitive and proprietary information. For EPC firms sharing information with a third party or someone overseas, this control offers significant reassurance. Machines that malfunction (whether due to a virus or malware or to a user- or machine-driven issue) can be easily restarted and repaired simply by terminating the virtual OS image and activating a new one.

A challenge with desktop streaming is the integration of multiple software manufacturers' products and various environments. One provider may be used for the hypervisor and desktop client, and another may be used to provide desktop images or to virtualize applications in the server environment.


Figure 4. Desktop Streaming
[The figure shows centralized virtual desktops and virtualization infrastructure in the data center, with Active Directory, a management client, and the user clients.]

Figure 5. Application Streaming
[The figure shows a sequencer, packaging server, application servers, and management console in the data center, serving clients over the LAN.]


Table 1. Benefits of Different Virtualization Approaches


Benefit      | Non-Virtualized Thick Client | Virtualized Thin Client | Desktop Streaming | Application Streaming
Performance  | High                         | Medium                  | Medium            | Medium
Security     | Medium                       | High                    | High              | High
First Cost   | Medium                       | High                    | Medium            | Medium
Total Cost   | Low                          | High                    | High              | High
Flexibility  | Medium                       | Medium                  | High              | High

While IT managers can build a best-of-breed implementation, careful execution of integration is required. Even with desktop streaming, the data center must be sized to support the OS images for each user receiving a virtual desktop, along with the associated applications. It is possible to configure the implementation to decentralize OS images so that they reside on the clients, but this approach gives up some of the advantages of centralized virtualization.



APPLICATION STREAMING

In application streaming (see Figure 5), applications are configured on a server, then virtualized, sequenced, and packaged for deployment over the network. Application streaming can use either a virtualized OS at the desktop or a traditionally installed OS. Because the applications run in containerized, virtualized spaces, they are not subject to the number of conflicts and deployment scenario permutations that traditional applications running on a standard client may encounter.

Application streaming leverages the fact that many software features are seldom used; only 15% to 20% of the core code is generally required by standard users. The code required to launch and provide the basic functionality of the application is streamed to the user, and the other pieces are either sent on demand over the network or loaded in their entirety in the background while the user works. These pieces of code required to execute the application are known as the runtime cache. In a well-designed environment with robust connectivity and high-performance machines, users see no difference between a streamed and a locally executed application. [8]

With applications centrally managed, migrations to new OSs or standard hardware suites can be easily accomplished; the application only needs to be modified once to address the changes, and then repackaged. The applications' use of their own memory spaces prevents them from conflicting with each other, creating memory holes, or causing dynamic link library (DLL) issues. In fact, it is possible for two or more different versions of the same application to run at the same time.

Application streaming, while offering the significant advantages outlined above, does require each application to be sequenced and packaged. This can be a burden on the IT department, since it is an additional step in deploying virtualized applications.
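A highly simplified sketch of the launch-then-stream behavior described above follows; the block names, the background loading thread, and the fetch call are invented for illustration and bear no relation to any particular vendor's streaming mechanism.

import threading
import time

# Hypothetical application image split into blocks; the launch set (roughly the
# 15% to 20% of code needed to start) is streamed first, and the rest arrives
# in the background or on demand.
LAUNCH_BLOCKS = ["core", "ui"]
BACKGROUND_BLOCKS = ["spellcheck", "charting", "export", "help"]
local_cache = set()

def fetch(block):
    """Stand-in for pulling one block of the runtime cache over the network."""
    time.sleep(0.1)          # simulated network transfer
    local_cache.add(block)

def launch_application():
    for block in LAUNCH_BLOCKS:              # stream just enough code to launch
        fetch(block)
    print("application launched with", sorted(local_cache))
    threading.Thread(                        # remaining blocks load while the user works
        target=lambda: [fetch(b) for b in BACKGROUND_BLOCKS], daemon=True).start()

def use_feature(block):
    if block not in local_cache:             # fetch on demand if not yet streamed
        fetch(block)
    print("running feature:", block)

launch_application()
use_feature("charting")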

CONCLUSIONS

Multiple options are available to EPC (and other) firms for deploying a virtualized desktop infrastructure. While each has subtle advantages, the challenge for a firm's IT department is to best leverage the multiple hardware, software, and management solutions and to best understand the requirements of its unique users and infrastructure in order to deploy the correct architecture. Table 1 summarizes the benefits of each approach. The coming years will see even more options and, as firms migrate to this environment, the proffered advantages of each type of virtualization implementation will be validated.

TRADEMARKS

AMD is a trademark of Advanced Micro Devices, Inc.
IBM is a registered trademark of International Business Machines Corporation in the United States.
Intel is a registered trademark of Intel Corporation in the US and other countries.
Java is a trademark of Sun Microsystems, Inc., in the United States and other countries.
UNIX is a registered trademark of The Open Group.


REFERENCES

[1] "SO31 The Virtual Desktop: A Computer Support Model that Saves Money," California Government Performance Review <http://cpr.ca.gov/CPR_Report/Issues_and_Recommendations/Chapter_7_Statewide_Operations/Information_Technology/SO31.html>.
[2] G. Gruman, "Desktop Virtualization: Making PCs Manageable," InfoWorld, September 11, 2006 <http://www.infoworld.com/article/06/09/11/37FEvirtcasedesk_1.html?s=feature>.
[3] K. Adams and O. Agesen, "A Comparison of Software and Hardware Techniques for x86 Virtualization," Proceedings of the 12th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS XII), San Jose, California, October 21-25, 2006, pp. 2-13; access original paper via <http://www.vmware.com/pdf/asplos235_adams.pdf>.
[4] T. Olzak, "Desktop Application Virtualization and Application Streaming: Function and Security Benefits," Erudio Security white paper, August 2007 <http://adventuresinsecurity.com/Papers/Desktop_Virtualization_Aug_2007.pdf>.
[5] "Desktop Virtualization Comes of Age: The Data Center is the Desktop," Virtualization, November 27, 2007 <http://virtualization.sys-con.com/node/466375>.
[6] "Why Software Vendors Need to Care About Virtualization," Intel Software Network, October 2, 2008, retrieved November 3, 2008 <http://software.intel.com/en-us/articles/why-software-vendors-need-to-care-about-virtualization>.
[7] S. Higginbotham, "Desktop Virtualization: Where Thin Clients Meet the Cloud," GigaOM, August 27, 2008, retrieved November 3, 2008 <http://gigaom.com/2008/08/27/desktop-virtualization-where-thin-clients-meet-the-cloud/>.
[8] R.L. Mitchell, "Streaming the Desktop: Application Streaming Creates a Virtualized Desktop That Can Be Managed Centrally, Yet Offers the Speed of Local Execution," Computerworld, November 21, 2005 <http://www.computerworld.com/softwaretopics/software/apps/story/0,10801,106354,00.html>.

BIOGRAPHY
Brian Coombe, a program manager in the Bechtel Federal Telecoms organization, oversees the provision of project management and systems engineering expertise to the Department of Defense in support of a major facility construction, information technology integration, and mission transition effort. Prior to holding this position, Brian served as the program manager of the Strategic Infrastructure Group, where he oversaw work involving telecommunications systems and critical infrastructure modeling, simulation, analysis, and testing. In addition, he evaluated government telecommunications markets, formulated requirements for telecommunications and water infrastructure work, and developed the Strategic Infrastructure Group's scope. As Bechtel's technical lead for all optical networking issues, Brian draws on his extensive knowledge of wireless and fiber optic networks. In his first position with the company, he engineered configurations to allow for capacity expansion of the AT&T Wireless GSM network in New York as part of a nationwide buildout contract. Later, he was the lead engineer for planning, designing, and documenting a fiber-to-the-premises network serving more than 20,000 homes. He was the Bechtel Communications Training, Demonstration, and Research (TDR) Laboratory's resident expert for optical network planning, evaluation, and modeling. Before joining Bechtel in 2003, Brian was a systems engineer at Tellabs, where he launched the company's dense wavelength-division multiplexing services and managed network design and testing. He developed solutions to complex network issues involving echo cancellation, optical networking, Ethernet, TCP/IP, transmission, and routing applications. Brian is a member of the IEEE; the Project Management Institute; SAME; ASQ; NSPE; MSPE; INSA; AFCEA; the Order of the Engineer; and Eta Kappa Nu, the national electrical engineering honor society. He has authored six papers and co-authored one in the Bechtel Communications Technical Journal (formerly, Bechtel Telecommunications Technical Journal) since August 2005; his most recent paper, "GPON Versus EP2P," appeared in the September 2008 issue. His tutorial on "Micro-Electromechanical Systems and Optical Networking" was presented by the International Engineering Consortium. Brian earned an MS in Telecommunications at the University of Maryland and a BS with honors in Electrical Engineering at The Pennsylvania State University. He is a licensed professional engineer in the state of Maryland.



Bechtel M&M Technology Papers

101  Computational Fluid Dynamics Modeling of the Fjarðaál Smelter Potroom Ventilation
     Jon Berkoe, Philip Diwakar, Lucy Martin, Bob Baxter, C. Mark Read, Patrick Grover, and Don Ziegler, PhD

111  Long-Distance Transport of Bauxite Slurry by Pipeline
     Terry Cunningham

M&M Fjarðaál Aluminum Smelter: This project, Bechtel's first in Iceland, is one of the world's safest, most sustainable, and technologically advanced aluminum production facilities.

COMPUTATIONAL FLUID DYNAMICS MODELING OF THE FJARÐAÁL SMELTER POTROOM VENTILATION


Originally Issued: February 2005
Updated: December 2008

Abstract: The potrooms of the Fjarðaál Aluminum Smelter in East Iceland use a system based on natural convection to ventilate heat and fugitive emissions. The design of the potlines (a series of potrooms) is based on one used at Alcoa's Deschambault Smelter in Canada. However, the Fjarðaál potlines are longer and are situated on a sloping site that is adjacent to a fjord, surrounded by rough terrain, and subject to high winds from multiple directions. Because the complex terrain and wind interactions made ventilation system performance difficult to predict, Bechtel and Alcoa conducted a state-of-the-art computational fluid dynamics (CFD) analysis to help guide system design by simulating the velocities, temperatures, pressures, and emissions concentrations inside and outside the potrooms. CFD modeling, which has been employed for similar studies, produced a potroom ventilation model that was validated against Deschambault smoke test measurements. The analysis confirmed that Fjarðaál potroom airflow patterns, ambient temperatures, and emissions concentrations are relatively unaffected by the terrain and wind conditions. This result indicates that the Fjarðaál ventilation system design is highly effective.

Keywords: CFD modeling, claustra wall, computational fluid dynamics, modeling, natural convection, potline, potrooms, smelter, ventilation
INTRODUCTION

Jon Berkoe
jberkoe@bechtel.com

Philip Diwakar
pmdiwaka@bechtel.com

Lucy Martin
lmartin1@bechtel.com

Bob Baxter
rfbaxter@bechtel.com

C. Mark Read
cmread@bechtel.com

The Fjarðaál Aluminum Smelter in East Iceland is located adjacent to a fjord, surrounded by rough terrain, and subject to high wind speeds. The smelter potrooms, buildings that house the large steel vessels, or pots, used for electrolytic reduction of alumina, are designed to ventilate heat and fugitive emissions, such as hydrogen fluoride (HF), using a system based on natural convection. The design of the Fjarðaál potlines (a series of potrooms) is based on the design used at Alcoa's Deschambault Smelter, located in Deschambault, Quebec, Canada. However, the Fjarðaál Smelter potlines are longer, situated on a sloping site adjacent to a fjord, and subject to high winds from multiple directions. A rendering of the smelter is shown in Figure 1.

Wind effects around the potrooms are especially important to consider in designing a ventilation system; this is because infiltration of winds into the potrooms can disrupt the chimney-effect plume rise required for proper ventilation and potentially cause drifting and re-entrainment of heated air and emissions from the outside back into the potroom. A ventilation system must keep fresh air flowing into the worker areas of the potroom and ventilate heat and emissions generated by the pots to the outside.

Figure 1. CAD Model of the Fjarðaál Aluminum Smelter, Iceland (potline and potroom indicated)

Alcoa

Patrick Grover
patrick.grover@alcoa.com

Don Ziegler, PhD


donald.ziegler@alcoa.com

Because the complex terrain and wind interactions made potroom ventilation system performance difficult to predict for the Fjarðaál site, Bechtel and Alcoa conducted a state-of-the-art computational fluid dynamics (CFD) analysis to help guide system design by simulating the velocities, temperatures, pressures, and emissions concentrations inside and outside the potrooms. The purpose of the analysis was to confirm that local environmental conditions, particularly wind speed, wind direction, and air temperature, would not be detrimental to the final system design.


ABBREVIATIONS, ACRONYMS, AND TERMS


CAD - computer-aided design
CFD - computational fluid dynamics
HF - hydrogen fluoride
ICEM - integrated computer-aided engineering and manufacturing



Data for the analysis included the plot plan for the site potline layout and potrooms, wind rose data, topographic data of the nearby terrain, detailed vent and louver parameters, and heat release design data for the pots. The analysis used CFD modeling based on the commercial software program FLUENT 6.2. Such modeling has been employed for similar studies [1, 2]. The unstructured mesh was generated from computer-aided design (CAD) models using the ANSYS ICEM CFD meshing software program.

The objectives of the CFD analysis were to:

- Determine:
  - Whether sufficient airflow for personnel cooling and ambient air quality is achieved
  - If there are significant HF release gradients along the length of the potroom
  - Whether the terrain is influencing the degree of re-entrainment and intake air velocity profiles
  - Where to locate emissions-monitoring equipment to ensure reliable readings
- Confirm that:
  - The roof ventilator performs adequately
  - The required cooling of the pot and busbar is achieved
  - The potroom ventilation achieves vertical evacuation of fumes, with minimal drifting along the length of the potroom
- Evaluate:
  - The impact of adding basement panels to restrict inlet airflow (because of the windy conditions in Iceland, basement panels are used to prevent blowing snow from entering the potroom and to minimize heat loss from the pots during the winter)
  - The dispersion modeling assumption that uniform release of HF along the length of the potroom is not too unrealistic

The CFD model produced during the analysis was validated by comparing the computed airflow patterns and temperatures with those measured and observed during smoke tests conducted by the Bechtel team at the Deschambault Smelter. This model validation did not apply to the wind conditions prevailing at the Fjarðaál Smelter, but it did confirm the CFD model's accuracy in simulating both the potroom airflow patterns and the effects of thermal buoyancy.

PHYSICAL DESCRIPTION OF POTROOM VENTILATION

Ventilation system designs based on natural convection are advantageous because they offer lower life-cycle cost, in terms of capital investment and operating and maintenance costs, compared to powered ventilation. In aluminum potline facilities, turbulent natural convection occurs in the potroom because the heat dissipated from the pots generates a buoyancy-driven plume that draws outside air into and through the potroom side openings and exhausts the air through a roof ventilator. A schematic diagram of the potroom ventilation is shown in Figure 2.

Thermal buoyancy is virtually impossible to replicate accurately in a scale model because of the nature of the convection scaling laws, i.e., natural convection in large spaces does not scale accurately to small-scale experiments. Because thermal buoyancy is the driver for flow through the potroom, the heat flux from the pots and the potroom dimensions must be prescribed as close to full scale as possible. Otherwise, the flow patterns and rates determined by the CFD model will most likely be incorrect.

Wind roses, graphic tools that provide a succinct view of how wind speed and direction are typically distributed at a particular location, were used to model the wind conditions at the Fjarðaál Smelter site. A detailed study of the wind rose data revealed that certain wind speeds and directions were representative of the conditions at the Fjarðaál Smelter site throughout most of the year. These wind speeds and directions were used as boundary conditions for the CFD analysis.
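For orientation only, the buoyancy-driven flow described above behaves like the classical stack effect, for which a rough volumetric flow estimate is Q = Cd * A * sqrt(2 * g * H * dT / Ti). The sketch below applies this relation with assumed values that are not taken from the Fjarðaál design.

import math

def stack_effect_flow(area_m2, height_m, t_inside_c, t_outside_c, cd=0.6):
    """Classical stack-effect estimate of buoyancy-driven volumetric flow (m^3/s)."""
    t_inside_k = t_inside_c + 273.15
    delta_t = t_inside_c - t_outside_c
    return cd * area_m2 * math.sqrt(2 * 9.81 * height_m * delta_t / t_inside_k)

# Assumed values for illustration only: 50 m^2 effective opening area,
# 15 m inlet-to-roof-vent height, 35 degC inside, 2 degC outside.
print(f"{stack_effect_flow(50, 15, 35, 2):.0f} m^3/s")

Such a hand estimate cannot capture wind interaction or terrain effects, which is precisely why the full CFD analysis described next was needed.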

CFD ANALYSIS METHODOLOGY

CFD is typically applied to modeling continuous processes and systems. CFD software uses a CAD-like model-building interface combined with advanced computational methods that solve the fundamental conservation equations. CFD eliminates the need for many typical assumptions, such as those required for equilibrium, plug flow, averaged quantities, etc., because the physical domain is replicated in the form of a computerized prototype.


Figure 2. Schematic Diagram of Potroom Ventilation
[The schematic shows heated airflow exiting the roof vent, internal recirculating airflow, claustra walls, pot hoods, the aluminium electrolysis pots, and airflow directed across and below the pots from the below-grade basement.]

A typical CFD simulation begins with a CAD rendering of the geometry, adds physical and fluid properties, and then specifies natural system boundary conditions. By appropriately changing these parameters, countless "what if?" questions can be quickly addressed.

The CFD model of the Fjarðaál Smelter was based on a fully three-dimensional, steady-state simulation. The model incorporated the key factors that influence ventilation system performance: wind speed, wind direction, thermal buoyancy, and local terrain, as well as geometry features such as the side-by-side pot arrangement, a tulip-shaped roof ventilator, floor grills between the pots, and claustra wall details. (A claustra wall is a building sidewall inlet air plenum used to prevent the ingress of rain and snow. The plenum discharge to the potroom is equipped with a lattice structure to direct the air horizontally into the potroom, allowing for maximum penetration of fresh air into the process area.) The model was configured to include local terrain surrounding the smelter, as well as nearby structures of significant size. It was also configured so that any number of basement panels could be closed and the effects of closing them during the winter months could be studied.

The following assumptions and simplifications were used for the CFD analysis:

• The analysis assumed a steady-state wind condition and an ambient temperature of 2 °C.
• No sources of heat, other than heat dissipated from the pots and busbar, were accounted for in the simulation; 100 kW of heat was assumed to reenter the pots after being sucked into the pot gas exhaust system through small gaps between the pot hoods.
• The floor grills were modeled, but gaps in the floor between concrete slabs were not modeled.
• Concentration profiles of pot emissions such as HF were modeled either as line sources or as inlets emanating from the sides, ends, or tops of the pots.
• As shown in Figure 3, the heat dissipated from the pots was modeled using a prescribed heat flux assigned to various surfaces on the pot. (A small tally of these fluxes follows the figure.)
Figure 3. Prescribed Pot Surface Heat Flux in CFD Model (superstructure 0.44 kW/m², hoods 1.31 kW/m², upper side wall 9.91 kW/m² over 13.6 m², lower side wall 6.30 kW/m² over 5.6 m², upper end wall 5.68 kW/m² over 7.85 m², lower end wall 1.31 kW/m² over 3.23 m², bottom 1.39 kW/m²; a small area models pot ventilation and exhaust to the gas treatment centers at 2.4 Nm³/s per pot, with no heat flux prescribed, used as the source for HF evolution)
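To give a feel for the magnitudes involved, the sketch below (Python) multiplies the prescribed fluxes by the surface areas listed in Figure 3. Only four surfaces have areas quoted in the figure, and it is not stated whether each area is per face or per pot, so the result is a partial tally rather than a full per-pot heat balance; the 100 kW recirculation term from the assumptions is shown alongside for scale.

```python
# Partial tally of prescribed pot surface heat input (flux x area) using only the
# surfaces whose areas are quoted in Figure 3. Not a complete per-pot heat balance.

surfaces = {
    # name: (heat flux in kW/m^2, area in m^2)
    "upper side wall": (9.91, 13.6),
    "lower side wall": (6.30, 5.6),
    "upper end wall":  (5.68, 7.85),
    "lower end wall":  (1.31, 3.23),
}

total_kw = 0.0
for name, (flux, area) in surfaces.items():
    q = flux * area
    total_kw += q
    print(f"{name:<16} {flux:5.2f} kW/m2 x {area:5.2f} m2 = {q:6.1f} kW")

print(f"Subtotal (listed surfaces only): {total_kw:.0f} kW")
print("For comparison, ~100 kW was assumed to re-enter the pots via the gas exhaust.")
```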


Due to the complex geometries of the roof vents, floor grills, and claustra walls, each of these features was first modeled as a submodel. The individual pressure-drop values for each feature were compared against vendor test data and then incorporated in the full CFD model using a simplified feature known as porous media.

Creating a computational grid for a model like this one is extremely challenging and requires highly sophisticated meshing tools and expertise. To accurately simulate heat transfer effects adjacent to the pot, the grid thickness needed to be about a millimeter. A coarser grid was used to accommodate convection in the potroom. The terrain in the model domain extended at least 2 km in all directions. To conserve the total mesh size within manageable limits, grids with large control volumes, on the order of 1 m³, were required.

The final CFD model of the Fjaral Smelter included two potrooms, each with 168 pots, and the surrounding terrain, buildings, and silos. The mesh was fully hexahedral and unstructured. The most effective use of computer resources was achieved by limiting the total mesh size, resulting in a model consisting of 2.8 million control volumes. Figure 4 presents a model of a single 42-pot potroom segment showing the computational grid.

Closing the basement panels during the winter makes the potroom warmer but has a detrimental effect on the ventilation. Basement panels were modeled as thin walls so that panel sections could be simulated as open, partially open, and closed. Opening and closing basement panels effectively regulates the air entering the potroom. The opening and closing of the basement panels was modeled to study the effects of emissions spread at worker level under conditions of reduced airflow entry into the building, such as when basement panels are partially closed. Figure 5 shows a section of the completed CFD model configured with the basement panels 60 percent closed.
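One common way to carry the vendor pressure-drop data into the full model is to fit a quadratic loss coefficient and hand that to the porous-media feature. The sketch below (Python/NumPy) shows such a fit; the velocity and pressure-drop pairs are placeholders, not the actual vendor data, and the exact way the coefficient is entered depends on the CFD package being used.

```python
import numpy as np

# Fit a quadratic loss coefficient K such that dp = 0.5 * K * rho * v**2, as a
# stand-in for a roof vent, floor grill, or claustra wall in the full model.
# The test points below are placeholders, not the actual vendor data.

rho = 1.27                                   # kg/m^3, air at roughly 2 °C
v   = np.array([0.5, 1.0, 1.5, 2.0, 2.5])    # approach velocity, m/s (placeholder)
dp  = np.array([0.8, 3.1, 7.2, 12.5, 19.8])  # measured pressure drop, Pa (placeholder)

x = 0.5 * rho * v**2
K = float(np.sum(x * dp) / np.sum(x * x))    # least-squares fit through the origin

print(f"Fitted loss coefficient K = {K:.2f}")
print("Predicted dp at 1.8 m/s:", round(0.5 * K * rho * 1.8**2, 1), "Pa")
```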

Figure 4. Surface Grid for Section of CFD Model (labeled elements include the roof vents, ground level, claustra walls, basement floor, and pots)

Figure 5. Section of CFD Model Showing Potline Sections with Partially Closed Basement Panels

VALIDATION OF THE POTROOM CFD MODEL

The detail design for the Fjaral Smelter had not been finalized at the time of the CFD analysis. However, because the potroom design is similar to the design used at the Deschambault Smelter, the main focus of the potroom CFD model validation was to confirm that the thermal buoyancy effects simulated by the model were realistic. This validation was done by comparing the model's airflow patterns and vectors with those observed and photographed during the smoke tests.

The smoke tests at the Deschambault Smelter were carried out on April 27, 2004, at 4 a.m. Velocity and temperature measurements were obtained using handheld thermal anemometers, rotating vane anemometers, and thermocouples. The test locations were selected so that the influence of localized changes in the flow direction and magnitude could be qualified. Velocity and temperature measurements were taken from the walkway near the roof vent, and at the floor grill, basement inlets, and claustra wall. Data was also obtained from the meteorological station local to the plant.

The following benchmark comparisons were used to validate the potroom CFD model:
• Pictures and videos of the smoke tests against velocity vectors at corresponding sections in the CFD model
• Velocities obtained using both rotating vane and thermal anemometers
• Temperatures obtained using handheld thermocouples
• Temperatures and velocities measured at the roof monitor
• Heat and mass balances inside the potroom (the heat balance was obtained from the pot external surface heat flux)

The first part of the model validation involved comparing the flow field obtained from the CFD analysis with measurements, observations, and videotaped flow patterns from the smoke tests. The observations showed that, on the wide-aisle side of the potroom, the smoke entering the claustra wall travels at least two-thirds of the pot length, parallel to the ground and the pot, before thermal buoyancy effects from the hot pots and the flow from the narrow aisle force the flow to move upward. However, the flow entering the claustra wall on the narrow-aisle side of the potroom travels only a small distance along the pots before being driven upward. Subsequently, the upward-moving hot air takes a curved path before encountering the roof vent. A still photograph of this airflow movement is shown in Figure 6.

Figure 6. Still Photograph from Videotape of Smoke Test Conducted at Deschambault Smelter

The potroom CFD model produced the flow pattern as indicated by the streamlines shown in Figure 7. This flow pattern is very similar to the one observed during the smoke tests. Note that a secondary flow, driven by the natural convection, forms a recirculating flow that drops back down to the top of the claustra wall. What is most important is that the fresh incoming airflow has more energy than the recirculating flow and does not allow the recirculating flow to reach floor level. This flow pattern ensures that the air at the worker level is fresh ambient air. Qualitatively, this result demonstrates that the potroom CFD model reproduced the flow patterns observed during the smoke tests.

Figure 7. Flow Streamlines Through Cross-Section of Potroom Generated from Potroom CFD Model

As shown in Figure 8, potroom temperatures calculated by the model also showed good agreement with the temperature of the air at the roof vent intake at the Deschambault Smelter, which is typically about 20 °C above ambient.

Figure 8. Temperature (°C) Through Cross-Section of Potroom Calculated by Potroom CFD Model
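Quantitatively, the same comparison can be reduced to simple error statistics between measured and predicted values at the test locations. The sketch below (Python) illustrates the bookkeeping; the numbers are hypothetical placeholders rather than the Deschambault measurements, and the locations simply mirror those named above.

```python
# Compare measured vs. CFD-predicted values at the smoke-test locations.
# All numbers below are hypothetical placeholders for illustration only.

comparisons = [
    # location,                 quantity,        measured, predicted
    ("walkway near roof vent",  "velocity m/s",      1.1,      1.0),
    ("floor grill",             "velocity m/s",      0.6,      0.7),
    ("claustra wall inlet",     "velocity m/s",      1.4,      1.3),
    ("roof vent intake",        "temperature C",    22.0,     21.0),
]

for loc, qty, meas, pred in comparisons:
    diff = pred - meas
    rel = 100.0 * diff / meas
    print(f"{loc:<24} {qty:<15} measured {meas:5.1f}  CFD {pred:5.1f}  "
          f"diff {diff:+5.1f} ({rel:+.0f}%)")
```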

KEY FINDINGS FROM THE FULL CFD MODEL OF THE FJARAL SMELTER

The CFD analysis showed that flow patterns around the potlines can be influenced by local terrain, wind direction, and nearby buildings. Lower wind speeds (i.e., calm conditions) showed results typical for potlines sited at low-wind locations. Higher wind speeds from the east direction, which is aligned with the potline layout, were found to have minimal impact on potroom internal temperatures, as shown in Figure 9.

However, as shown in Figure 10, which presents the velocity patterns around the potlines at an elevation 12 m above ground for the northwest wind case, there are several areas of low velocity, referred to as dead zones, both between the potrooms and in the wake of the potrooms. In particular, there are pronounced dead zones downwind of the passageways that extend sideways from the potrooms.



Figure 11 presents the pressure contours around the potlines at an elevation 12 m above grade for the northwest wind case. As shown in the figure, the dead zones tend to produce a slight pressure buildup on the windward side of the potline and a negative pressure on the leeward side. This creates local regions of pressure gradient that can cause the internal ambient flow to either be pushed away from or pulled toward the dead zones. Although such effects increased with increasing wind speed, they were not significant in the final analysis.

The primary findings from the CFD analysis are summarized below (Figures 12, 13, and 14 apply to all cases without basement panels):

• As shown in Figure 12, the standard deviation of the roof vent exhaust HF concentration is fairly low for all wind conditions without basement panels, and moderate for east and west wind conditions with closed basement panels (not shown). This result indicates that the variation in exhaust concentration is reasonably stable along the potroom length. (A small calculation sketch follows this list.)
• As shown in Figure 13, temperature levels in the potroom at medium and high elevations are consistent with measured data from the Deschambault Smelter smoke tests. These temperature levels are also reasonably uniform over the range of wind conditions evaluated. This result implies that heat removal from the pots is effectively accomplished under all wind conditions.
• Velocity magnitudes in the potroom are as expected, with stronger inflow in the wider aisle; demonstrate some cross-flow recirculation; and are dominated by the central vertical plume. This data is consistent with observations made during the Deschambault Smelter smoke tests.
• HF concentrations are acceptable in the potroom worker aisles and outside the potrooms. For high-wind-speed cases without basement panels, the accumulated HF in the aisles is not significantly higher than it is for the low-wind-speed cases. Note that when the basement panels are closed, the accumulated HF in the worker aisles can increase by 50 to 100 percent.
• The flow patterns in the potroom can be affected by locally varying outside pressure gradients. Specifically, as shown in Figure 14, prevailing winds parallel to the potrooms may cause airflow drift to occur in some areas, such as near the roof.


Figure 9. Temperature (°C) Through Cross-Sections of Potroom, with Fully Open Basement Panels, for East Wind Case at 10 m/s


Figure 10. Velocity Contours (m/s) Around Potlines for Northwest Wind Case


Figure 11. Pressure Contours (Pascal [Pa]) Around Potlines for Northwest Wind Case


Figure 12. HF Concentration (Average [Blue]) and Standard Deviation (Red) Through Roof Vent (mg/m³, plotted for the north pot and south pot under the E 3 m/s, WSW 8 m/s, W 10 m/s, NW 8 m/s, ENE 10 m/s, and E 10 m/s wind cases)

Figure 13. Potroom Temperature (°C) Just Above Hoods (Blue) and at Roof Vent Intake (Red) (plotted for the north pot and south pot under the same wind cases)

• Higher wind speeds do not appear to increase the velocity of the cross-flow patterns near the roof, i.e., patterns induced by the asymmetric airflow in the potroom. The cross-flow velocities are similar for low- and high-wind-speed cases, both with open and closed basement panels. Therefore, wind speed should not affect the location of emissions monitoring equipment.
• Re-entrainment of roof vent exhaust is not a concern. Some minor dead zones exist between the buildings; however, HF buildup in these areas is negligible.
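The "average and standard deviation of roof vent exhaust HF" metric in Figure 12 is straightforward to reproduce from sampled exhaust concentrations. The sketch below (Python) shows the calculation on hypothetical samples taken along the potroom length; the values are placeholders, not model output.

```python
import statistics

# Mean and standard deviation of roof vent HF concentration sampled along the
# potroom length (as plotted in Figure 12). Sample values are placeholders.

hf_samples_mg_m3 = [0.11, 0.12, 0.10, 0.13, 0.12, 0.11, 0.12, 0.10]

mean_hf = statistics.mean(hf_samples_mg_m3)
std_hf  = statistics.pstdev(hf_samples_mg_m3)

print(f"Average HF through roof vent:    {mean_hf:.3f} mg/m3")
print(f"Standard deviation along length: {std_hf:.3f} mg/m3")
print("A low standard deviation relative to the mean indicates the exhaust "
      "concentration is reasonably stable along the potroom length.")
```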


Figure 14. Potroom Air Velocity (m/s) in Axial (Drift) Direction (Blue) and Vertical (Thrust) Direction (Red) Below Roof Vent Intake (plotted for the north pot and south pot under the same wind cases)

CONCLUSIONS

The results of the CFD analysis of the Fjaral Smelter potroom ventilation system confirmed that its final design would provide adequate overall performance under all expected wind conditions. The claustra wall design is highly effective in dampening the effects of outside winds on the internal potroom environment, and in drawing in sufficient amounts of fresh air to prevent HF and heat buildup in the main worker areas. The potroom ventilation system design generally accommodates the secondary flows while preventing such recirculating air from penetrating the worker level in the wide aisles. Additionally, vertical thrust of the plume over the pots is facilitated by sufficient potroom height, roof pitch, and roof vent dimensions.

TRADEMARKS

ANSYS and FLUENT are registered trademarks of ANSYS, Inc., or its subsidiaries in the United States or other countries. ICEM CFD is a trademark used by ANSYS, Inc., under license.


ACKNOWLEDGMENTS

The authors would like to thank the staff at the Deschambault Smelter for their superb efforts in supporting the smoke tests in a safe and reliable manner, and for producing quality photographs and videotape for use in the model validation effort. The authors would also like to recognize Aluminium Pechiney, the original technology supplier of the ventilation system at the Deschambault Smelter, for its permission to publish this paper.

The original version of this paper was published in Light Metals 2005, edited by Halvor Kvande, TMS (The Minerals, Metals & Materials Society), February 2005.


BIOGRAPHIES
Jon Berkoe is a senior principal engineer and manager for Bechtel Systems & Infrastructure, Inc.'s Advanced Simulation and Analysis Group. He oversees a team of 20 technical specialists in the fields of CFD, finite element structural analysis, virtual reality, and dynamic simulation in supporting Bechtel projects across all global business sectors. Jon is an innovative team leader with industry-recognized expertise in the fields of CFD and heat transfer. During his 20-year career with Bechtel, he has pioneered the use of advanced engineering simulation on large, complex projects encompassing a wide range of challenging technical issues and complex physical conditions. Jon has presented and published numerous papers for a wide variety of industry meetings and received several prominent industry and company awards. They include the National Academy of Engineering's Gilbreth Lecture Award, the Society of Mining Engineers' Henry Krumb Lecturer Award, and three Bechtel Outstanding Technical Paper awards. Jon holds MS and BS degrees in Mechanical Engineering from the Massachusetts Institute of Technology, Cambridge, Massachusetts, and is a licensed mechanical engineer in the state of California.

Philip Diwakar is a senior engineering specialist for Bechtel Systems & Infrastructure, Inc.'s Advanced Simulation and Analysis Group. He employs state-of-the-art technology to resolve a wide range of complex engineering problems on large-scale projects. Philip has more than 15 years of experience in CFD and finite element analysis for structural mechanics. His more recent experience includes work on projects involving fluid-solid interaction and explosion dynamics. During his 6-year tenure with Bechtel, Philip has received two full technical grants. One was used to determine the effects of blast pressure on structures at liquefied natural gas plants, with a view toward an advanced technology for designing less costly, safer, and more blast-resistant buildings. The other grant was used to study the effects of soil and fluids on building structures and vessels during seismic events. Philip has also received three Bechtel Outstanding Technical Paper awards, as well as two awards for his exhibit on the applications of fluid-solid interaction technology at the 2006 Engineering Leadership Conference in Frederick, Maryland. Before joining Bechtel, Philip was a project engineer with Caterpillar, Inc., where he served as part of a Six Sigma team. He applied his CFD expertise to determine the best approach for solving issues involving the cooling of Caterpillar heavy machinery. Philip holds an M.Tech degree in Aerospace Engineering from the Indian Institute of Science,

Bengaluru; a B.Tech degree in Aeronautics from the Madras Institute of Technology, India; and a BS degree in Mathematics from Loyola College, Baltimore, Maryland. He is a licensed professional engineer in Mechanical Engineering and is Six Sigma Yellow Belt certified.

Lucy Martin is chief environmental engineer for Bechtel's Mining & Metals (M&M) global business unit. She is functionally responsible for the environmental engineering executed from Bechtel offices in Montreal, Canada; Brisbane, Australia; and Santiago, Chile. Lucy also provides environmental expertise to major aluminum smelter projects. As lead environmental engineer for the Fjaral Aluminum Smelter Project, Lucy was responsible for permitting, wastewater management, and energy reduction and recovery. She also coordinated the CFD modeling study of the ventilation pattern in the potroom and carbon area to ensure environmental compliance. On the Alba Aluminum Smelter Potline 5 Expansion Project in Bahrain, Lucy served as the environmental engineer responsible for developing environmental design criteria and ensuring environmental compliance with legislative and client requirements. Lucy began her career with Bechtel as a process engineer with the Bechtel Water organization and transitioned to Bechtel Oil, Gas and Chemicals, Inc., before joining M&M in 2003. Lucy holds a BS degree in chemical engineering from the University of Sheffield, England, and is a registered professional engineer in Ontario, Canada.

Bob Baxter is a technology manager and technical specialist in the Bechtel Mining & Metals Aluminum Center of Excellence. He provides expertise in the development of lean plant designs and materials handling and environmental air emission control systems for aluminum smelter development projects, as well as smelter expansion and upgrade studies. He is one of Bechtel's technology leads for the Az Zabirah, Massena, and Kitimat aluminum smelter studies. Bob has 25 years of experience in the mining and metals industry, including 20 years of experience in aluminum electrolysis. He is a recognized specialist in smelter air emission controls and alumina handling systems. Before joining Bechtel, Bob was senior technical manager for Hoogovens Technical Services, where he was responsible for the technical development and execution of lump-sum turnkey projects for the carbon and reduction areas of aluminum smelters. Bob holds an MAppSc degree in Management of Technology from the University of Waterloo, and a BS degree in Mechanical Engineering from Lakehead University, both in Ontario, Canada.


C. Mark Read is senior specialist, primary aluminum processes, in the Bechtel Mining & Metals Aluminum Center of Excellence. He serves as engineering manager for the Kitimat Smelter Modernization Project in British Columbia. The project will increase the smelter's production capacity by 40 percent, making it one of the largest smelters in North America. Mark has 28 years of technology management experience in the mining and metals industry, including leadership of advanced simulation and analysis groups with substantial CFD capability. He has expertise in the application of CFD to primary processes, including smelter potroom ventilation; magneto-hydrodynamics in Hall-Héroult cells; solidification, fluid flow, and stress distribution in direct chill casting of aluminum ingot; and combustion in coke calcination kilns and anode baking furnaces. Mark has provided technology support to smelter studies in Iceland, Russia, North America, the Middle East, and the Caribbean. His expertise in mining and metals covers Hall-Héroult cell design and operation, prebaked carbon products processing and performance, and aluminum casting operations. Before joining Bechtel, Mark served as director, Technology and Processing, for the Elkem Metals Company. Mark holds an MSc degree in Industrial Metallurgy and a BSc degree in Metallurgical Engineering, both from Sheffield Hallam University, Sheffield, South Yorkshire, England.

Patrick Grover is director of Environmental, Health and Safety (EHS) for Alcoa's Global Primary Products Growth, Energy, Bauxite, and Africa Business Unit. He leads all aspects of EHS and related government/community consultation for aluminum mining, refining, smelting, and power generation megaprojects. Patrick has 17 years of experience in the aluminum industry. He is currently working on the development of aluminum projects in Greenland and North Iceland. Before joining Alcoa, Patrick served as area manager with the Virginia Department of Environmental Quality. Patrick holds a BS in Chemistry from Virginia Commonwealth University, Richmond, Virginia.

Don Ziegler is program manager for Modeling and Simulation for Alcoa Primary Metals. He leads a team that provides CFD, magnetic-field, thermoelectric, and structural analyses support to Alcoa's aluminum smelter projects. Since joining Alcoa 23 years ago, Dr. Ziegler has specialized in CFD and magneto-hydrodynamics. He has also developed a wide variety of process models for various aspects of aluminum processing. In addition, he has devised several novel approaches to micro-scale models of aluminum to simulate structural evolution. Before joining Alcoa, Dr. Ziegler served as a research engineer at St. Joe Minerals Corporation in Monaca, Pennsylvania. Dr. Ziegler holds PhD and MS degrees in Metallurgical Engineering from the University of California, Berkeley, and a BS degree in Metallurgy from Pennsylvania State University, University Park.


LONG-DISTANCE TRANSPORT OF BAUXITE SLURRY BY PIPELINE

Issue Date: December 2008

Abstract: The traditional methods of transporting bauxite from mine to alumina refinery over long distances are by rail or conveyor. However, slurrying bauxite and pumping it through a pipeline is a viable option that warrants serious consideration. This is especially true in rugged terrain, where rail and conveyor construction can become expensive and time consuming. Furthermore, a pipeline can offer the benefits of being unobtrusive, more environmentally friendly, and less subject to interference from local populations. The world's first long-distance bauxite slurry pipeline was commissioned in Brazil in May 2007. Owned by Mineração Bauxita Paragominas (MBP), it is pumping up to 4.5 million tonnes of dry bauxite per year over 245 km to the Alunorte Refinery. This paper discusses the advantages and disadvantages of a bauxite slurry pipeline and notes major issues in considering such an option. It compares the costs of the rail and pipeline transport options over a range of long distances traversing rugged terrain and identifies the point where economics begin to favor the pipeline.

Keywords: bauxite, breakeven point, economic pipeline length, long-distance pumping, pipeline, rail, slurry
INTRODUCTION

The traditional methods of transporting bauxite from mine to alumina refinery over long distances are by rail or conveyor. Bauxite is delivered to the refinery relatively dry, as run-of-mine ore, ready for feed into the plant crushing/grinding circuit. However, an alternative means of transport gaining attention is to prepare a slurry from the bauxite ore at the mine and pump it through a pipeline to the refinery, where it is dewatered in high-pressure filters. This option warrants consideration when traversing rugged terrain, where rail and conveyor construction can become expensive and time consuming. Furthermore, a pipeline can offer the benefits of being unobtrusive, more environmentally friendly, and less likely to suffer interference from local populations.

This paper discusses the advantages and disadvantages of the rail and pipeline transport options. The shorter-haul options of truck and conveyor are also discussed but discounted as not appropriate for long-distance transportation. The paper's focus is on the pipeline transport option and when this option is likely to become feasible in terms of cost and operation. Major factors requiring consideration when evaluating the pipeline option are outlined. Costs are compared for rail versus pipeline, showing unit costs against distance. A cost breakeven point (BEP) is established to show under what conditions and over what distance pipeline becomes more cost-effective than rail.

BACKGROUND

Transporting minerals by pumping slurry through a pipeline has a perceived advantage throughout the transport system. However, many important issues need to be addressed when assessing a pipeline option.

Pipeline transport of minerals became of interest in 1967, after Bechtel constructed the world's first iron ore slurry pipeline: the 85-km-long Savage River project in Tasmania. Since then, long-distance slurry pipeline transport has become quite commonly adopted for mineral concentrates. Interest has increased in the alumina industry, where the world's first long-distance bauxite slurry pipeline was commissioned in Brazil in September 2007. Owned by Mineração Bauxita Paragominas (MBP) and 245 km long, the pipeline is used to pump up to 4.5 million tonnes (dry weight) of bauxite per year across remote and rugged terrain to the Alunorte Refinery. [1]

Terry Cunningham
tacunnin@bechtel.com

© 2008 Bechtel Corporation. All rights reserved.


ABBREVIATIONS, ACRONYMS, AND TERMS

BEP    breakeven point
MBP    Mineração Bauxita Paragominas
MPa    megapascal
Mt/y   million tonnes per year
w/w    weight-to-weight
µm     micrometer (micron)

Reviews to date show that pipeline transport becomes more viable over long distances in rugged terrains with steep gullies, peaks, and ridges where rail and conveyor solutions can be difficult and costly to implement.

TRANSPORT SYSTEM OPTIONS

Options used to transport bauxite vary according to a number of factors, particularly distance and terrain. Transport options include short-haul truck, long-haul truck, overland conveyor, rail, and pipeline. The two truck options are generally best suited for relatively short distances (up to 75 km) between mine and refinery. As the distance between mine and refinery increases, the more-permanent overland conveyor, rail, and pipeline transport modes are generally employed to reduce the operating costs of hauling to the refinery. These three options require the establishment of mining and ore preparation hubs at each system's feed end; these hubs are typically fed from the mine face by short-haul trucks. Each of the five options is briefly discussed in this section.

Short-Haul Truck
Large-capacity trucks (generally ranging from 100 to 250 tonnes) are cost-effective for hauls up to 20 km. They allow operational flexibility when mining occurs from multiple pits. They require costly heavy-duty haul roads (due to their high axle loads and width) that need ongoing maintenance to achieve optimum operations. Road costs are heavily affected by the terrain, rising rapidly as it steepens and as the roads are lengthened to maintain safe design parameters.

Long-Haul Truck
Specially designed large-capacity road-train trucks (generally ranging from 250 to 350 tonnes) increase the cost-effective haul distance to 50 to 75 km, depending on the terrain. They are designed to carry run-of-mine ore and can be loaded directly at the mine face. These trucks also allow mining from multiple facings while allowing a larger incremental extension of the haul distance. The haul roads are less costly than those for short-haul trucks because multiple axles reduce pavement load, but they are similarly affected by terrain.

Overland Conveyor
Overland conveyor systems are specifically designed to transport large quantities of bauxite and are cost-effective over distances of 15 to 100 km. A conveyor can have a single flight of 15 to 50 km, depending on the type chosen (e.g., cable belts). Longer distances require several conveyors in series. Conveyors occupy a smaller footprint than haul roads and accommodate steeper grades of up to 10%. They have horizontal alignment restrictions but can accommodate horizontal curves ranging from 2 to 4 km radius, depending on the tonnage being transported. Conveyor costs are affected by terrain, but the conveyors can be elevated in steel galleries to maintain the required vertical grades at creek or other crossings.

Rail
Rail is feasible for transport distances exceeding 100 km and excels in gentle terrain with only a few stream crossings. Rail requires a generally flat vertical alignment with about 1% maximum grade. Horizontal alignment can accommodate tight bends down to 300 m radius. Rail is severely affected by terrain because it must follow the contours to maintain the low maximum grade and minimize cuttings. Therefore, in rugged terrain, rail length can become up to 50% longer than the length of a direct route unless large cuttings and embankments are constructed, with associated high costs. Stream crossings are usually expensive, requiring bridges and/or large culvert structures to accommodate high axle loads and flood flows.

Pipeline
A pipeline uses water as the transport medium, in contrast to the four dry options just described. The ore must be slurried with water to approximately 50% solids for pumping and then dewatered at the refinery to return it to the required moisture content for refinery feed. Because of this additional requirement, a pipeline becomes cost-effective only when long routes are being considered, usually well in excess of 100 km in rugged terrain. Slurry pipeline alignments, both vertical and horizontal, are considerably more flexible than those of rail or conveyor. Horizontally, pipe may weave around obstacles with a minimum radius of just a few meters. Vertically, pipe may rise and fall with grades of up to about 16%, which is a limiting factor to prevent settled particles from sliding downward when flow ceases. Pipelines are generally buried and follow existing terrain without substantial cut and fill earthworks. At creek crossings, the pipe is usually buried under the creek bed. Substantial rivers are crossed in conjunction with an access bridge.
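The distance bands quoted above lend themselves to a simple first-pass screen. The sketch below (Python) encodes them as a lookup; the bands are indicative figures taken from the descriptions above, and the roughly 450 km economic threshold for pipeline versus rail anticipates the cost comparison presented later in this paper.

```python
# First-pass screen of bauxite transport modes by haul distance, using the
# indicative distance bands described above. Terrain, water availability, and
# detailed costing would refine the choice in practice.

def candidate_modes(distance_km):
    modes = []
    if distance_km <= 20:
        modes.append("short-haul truck")
    if distance_km <= 75:
        modes.append("long-haul (road-train) truck")
    if 15 <= distance_km <= 100:
        modes.append("overland conveyor")
    if distance_km > 100:
        modes.append("rail")
        tag = " (often cheaper than rail beyond roughly 450 km)" if distance_km > 450 else ""
        modes.append("slurry pipeline" + tag)
    return modes

for d in (10, 60, 90, 245, 600):
    print(f"{d:>4} km: {', '.join(candidate_modes(d))}")
```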

TRANSPORT SYSTEM COMPONENTS

Overview
To further understand the different options, the entire transport system needs to be considered; that is, from mine face to refinery grinding circuit. This approach properly represents the total cost of a system on an equitable basis. Table 1 shows the key components of each transport system, highlighting the components that are fixed (F) for each transport mode and those that vary (V) with transport distance. Also highlighted are the components common (C) to all systems regardless of transport mode.

The information in Table 1 indicates that the pipeline option has substantially more fixed components than the other options because of its slurrying and dewatering requirements. Thus, pipelines are feasible only for transporting bauxite over long distances, where these costs can be amortized.

The remainder of this paper discusses only the two long-distance transport options of rail and pipeline. To make this discussion meaningful, a better understanding is required of the features of each system. Therefore, the following paragraphs provide more detailed information about their key features.

Rail
Rail is a straightforward, well-understood, bulk materials transport system. Robust in construction and operation, it offers great flexibility in its carrying capacity. Furthermore, it does not require altering the properties of the bauxite that feeds the refinery. As described in Table 1, the rail system consists of a rail loading facility, the rail line, and a rail unloading facility. The requirement for loading and unloading facilities is independent of the transport length and so represents a fixed cost. The most significant variables affecting the cost of rail are transport distance, associated terrain, and number of significant crossings. Also, adverse geotechnical conditions such as hard rock in cuttings and access through swamps can significantly affect cost.

Table 1. System Components for Each Transport Mode
(C = common to all transport modes; F = fixed component; V = variable with transport distance)

• Short-Haul Truck: Mine Supply (C) → Short-Haul Road (V) → Ore Crushing (C) → Storage (C) → Grinding (C) → Refinery Process (C)
• Long-Haul Truck: Mine Supply (C) → Long-Haul Road (V) → Ore Crushing (C) → Storage (C) → Grinding (C) → Refinery Process (C)
• Overland Conveyor: Mine Supply (C) → Ore Crushing (C) → Overland Conveyor (V) → Storage (C) → Grinding (C) → Refinery Process (C)
• Rail: Mine Supply (C) → Ore Crushing (C) → Rail Loading Station (F) → Rail Line (V) → Rail Unloading Station, Conveying (F) → Storage (C) → Grinding (C) → Refinery Process (C)
• Pipeline: Mine Supply (C) → Ore Crushing (C) → Grinding (C) → Cycloning, Slurry Storage, Pumping (F) → Pipeline (V) → Slurry Storage, Dewatering, Water Disposal, Conveying, Power (F) → Solids Storage (F) → Re-slurry (F) → Refinery Process (C)

Notes: Mine Supply = truck loading and local hauling; Ore Crushing = truck dump, crushing, and local conveying.


In rugged terrain, rail length can become up to 50% longer than the length of a direct route as the route follows contours to maintain the low maximum grade and minimize cuttings. If a more direct route is required, large cuttings and embankments must be constructed, with associated high costs. Stream crossings are usually expensive, necessitating bridges and/or large culvert structures to accommodate high axle loads.

Rail system capacity is generally limited by the amount of rolling stock (wagons and locomotives) available. Typically, the smallest amount of rolling stock practicable is employed because rail requires such a large initial investment in rail loading and unloading facilities and formation earthworks. System capacity can be expanded only by incurring the expense of adding more rolling stock.

Rail does create a small barrier across the landscape, but pedestrian and/or vehicle access can be provided relatively easily and safely, commonly via large box-culvert access ways or crossover bridges.

Pipeline
The significant difference in transport via slurry pipeline is that the bauxite must be slurried for pumping and then dewatered to the equivalent moisture (12%–14%) found in the as-mined bauxite before it is fed into the refinery. Dewatering is necessary because once the bauxite is in the refinery circuit, additional water content beyond the normal range incurs additional capital and power costs to evaporate the excess moisture.

As indicated in Table 1, the pipeline system has three major parts: slurry preparation facility, pipeline and pumps, and dewatering facility. Slurry preparation and dewatering are generally independent of pipeline length.

Slurry Preparation Facility
The slurry preparation facility includes crushing, grinding, water supply, agitator tanks, and slimes removal. To ensure that slurry pumping is performed as designed, a consistent product with the right characteristics must be fed into the pipeline. Achieving the correct particle size distribution during slurry preparation requires a well-designed and well-controlled plant. The ideal particle distribution for bauxite slurry has a top size of 250 µm, with about 50% of the particles less than 45 µm. [1] Grinding circuits are generally designed to target this grading, but

the ore feed from mining also plays an important role. It may be necessary to extensively practice and optimize the selective mining and blending of ores, because this part of the process affects the sizing of the end product considerably.

Pipeline and Pumps
The transport system from the slurry preparation area to the refinery consists of a long steel pipeline and high-pressure pumps, plus ancillary items. Pumping large quantities of bauxite requires special considerations to maintain a high level of reliability in this single life line. Of paramount importance are the characteristics of the bauxite particles in the slurry. Solids degradation by particle attrition through the pumps and pipeline could change the fluid rheology, which, in turn, could affect pumping by necessitating a higher hydraulic head and more power. Opportunities for the occurrence of plugs and leaks need to be mitigated by a design that reduces the risk of such events. The design must also maintain a pipeline velocity high enough to prevent solids from dragging on the bottom of the pipe in a situation called incipient sanding, which can increase pipe wear, reduce pipeline life, and lead to a higher pumping head requirement as a result of the reduced flow cross-section.

Dewatering Facility
Once the slurry arrives at the refinery, significant processing is required to dewater it to about 12%–14% water content. This process minimizes the dilution of the refinery liquor in the circuit. Dewatering is typically achieved via a bank of hyperbaric high-pressure filters. [2] The filtration process uses large amounts of compressed air, and some processes use pressurized steam. The amount of power required to operate the filters and compressors and to produce steam is significant, typically about 12 MW for a refinery of 3.5 million tonnes of annual alumina capacity. Water from the filters is usually clarified before its release downstream. In some cases, it may be returned to the mine via a water pipeline, but this latter option is not normally practiced because of its high cost. The refinery also uses some of the water.

Expansion
The capacity of a pipeline and any expansion requirements must be decided upon during the initial design and construction phase because there is little opportunity to do so later. Sizing the pipeline only for the initial bauxite demand leaves little leeway to increase capacity. Additional pump capacity may produce only an extra 10%–20% of tonnage. To allow for additional



future demand, the pipeline must be oversized until that demand is met. This means that batch pumping is required, which is operationally expensive because significantly more water than ore is delivered to the refinery. If the maximum future bauxite demand is met early, this could be an acceptable option because the high operating cost associated with pumping large quantities of water is limited in time.
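The operational penalty of oversizing can be bounded with simple arithmetic. The sketch below (Python) assumes the throughputs reported later in this paper for the MBP line (an initial 4.5 Mt/y against a 12.6 Mt/y design capacity) and estimates what share of the pumped volume must be made up with water batches if the line always runs at the volumetric flow needed to hold the minimum slurry velocity; treating this as a single ratio is a simplification of real batch scheduling.

```python
# Minimal sketch: share of pumped volume that is water batches when a slurry
# pipeline sized for a future throughput is run at a lower initial throughput.
# Assumes the line must always run at its full design volumetric flow to hold
# the minimum slurry velocity, so the shortfall is filled with water.

design_tpa  = 12.6e6   # design (future) dry bauxite throughput, t/y
initial_tpa = 4.5e6    # initial dry bauxite throughput, t/y

slurry_share = initial_tpa / design_tpa        # fraction of volume pumped as slurry
water_batch_share = 1.0 - slurry_share         # fraction pumped as plain water

print(f"Slurry batches:     {slurry_share:.0%} of pumped volume")
print(f"Water-only batches: {water_batch_share:.0%} of pumped volume")
```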

COMPARISON OF RAIL AND PIPELINE SYSTEMS

Table 2 compares the attributes and differences of the rail and pipeline long-distance transport systems. Many factors may make a pipeline more favorable. For example, if a transport route is long, located in rugged terrain, has safety and security concerns, and/or is located in environmentally sensitive areas or crosses numerous creeks, then the case for pipeline could be strong.

Table 2. Comparison of Rail and Pipeline Attributes and Differences

Distance
  Rail: Suited to 100+ km
  Pipeline: Suited to 100+ km; potentially higher cost BEP

Constructability in Rugged Terrain
  Rail: Cut and fill quantities important; heavy equipment for sleeper plant (in country), track, and bridge construction; high earthworks
  Pipeline: Pipeline simpler to construct, with minimal earthworks

River Crossings
  Rail: High cost for bridges and culverts
  Pipeline: Stream crossings simpler and cheaper (buried under stream bed)

Water Requirements
  Rail: Minor
  Pipeline: Water supply required for slurry preparation

Future Expansion
  Rail: Additional rolling stock; sidings
  Pipeline: No expansion unless initially undersized or designed as batch transfer; large diameter for future use means significant extra quantities of water

Security and Interference
  Rail: Exposed to risk from human and environmental influences
  Pipeline: Better protected and thus more secure because buried

Safety of Local Population
  Rail: Exposure to moving equipment
  Pipeline: Bauxite is enclosed in buried pipeline

Environmental Habitats
  Rail: Can restrict fauna movement
  Pipeline: Low impact; lower requirement for clearing because footprint is small; habitat not cut off or isolated

Environmental Noise and Dust
  Rail: Moderate to high impact
  Pipeline: Low impact; no noise or dust issues except during construction

Community Impact of Route
  Rail: High impact; larger deviations result in longer route
  Pipeline: Low impact; can deviate easily; fewer resettlement issues


PIPELINE SYSTEM DESIGN CONSIDERATIONS

This section presents an overview of the key elements to be considered when designing a pipeline transport system. These elements are focused on water supply, slurry properties, and pipeline design.

Water Supply
A reliable water supply is essential. The water resource should be located near the slurry preparation area, with a continuous supply to meet the demand. The water supply scheme generally involves an off-take weir from a local stream, with a pump station and a pipeline to a storage dam. The storage dam is needed to accommodate seasonal variations in stream flow. A pump transfers water to the grinding circuit for slurry preparation. If water resources are scarce, a return water pipeline may be necessary from the dewatering facility at the refinery back to the mine, at increased project cost.

Slurry Properties
• Rheology: The rheology of the slurry influences a number of important factors in long-distance pumping. It is important to have sufficient slurry viscosity to prevent solids from settling out during short flow interruptions. To obtain the proper viscosity, the slurry must have a sufficient proportion of ultra-fine (less than 10 µm) particles. However, while the presence of these particles serves to increase slurry viscosity, it also increases pumping costs because of the increased head requirement. During normal flow, pipeline velocity must be above the minimum settling velocity to prevent larger particles from dragging on the bottom of the pipeline. The design velocity is usually above laminar flow, just inside the turbulent flow region. This compromise keeps particles in suspension and the pumping head as low as possible. A higher velocity may be used in the design to ensure that no particles settle even if larger particles appear due to a change of grind or ore body. Pump test loops are used to confirm the hydraulic properties of the slurry before finalizing the design.
• Slurry density: The higher the slurry density, the better, from a transport viewpoint. Typically, 50% solids weight-to-weight (w/w) is used, but a higher density results in less water being transported. Water is the carrying medium only and is otherwise of no value. However, increasing pumping density beyond 55% increases pumping requirements in terms of both head and power. Higher concentrations have not been accepted for long-distance pumping because the operational risk element is too high at present.
• Grading: The slurry grading is very important in achieving the right performance during slurry pumping. For bauxite slurry, the ideal grading has a top size of 250 µm, with about 50% less than 45 µm. A percentage of ultra-fine particles is desirable to improve slurry characteristics. However, if desliming is required to remove highly reactive silica, this may not be achieved. In this case, higher-velocity pumping is required to keep the coarser particles in suspension.
• Particle degradation: Much has been reported about the potential of bauxite to break down because the ore tends to be soft and could suffer particle size degradation due to turbulent flow and the mechanical impacts of pumping. Such a breakdown can increase the viscosity and the pumping head required. This is especially true if the proportion of ultra-fine particles is increased. However, MBP in Brazil has reported no particle attrition or increase in viscosity in more than 12 months since the initial operation of its bauxite slurry pipeline. [1]

Pipeline Design
Performance and Security Considerations
A pipeline's long-term performance and security are paramount for a successful slurry transport system. The following relevant concerns in determining pipe wall thickness require serious attention:
• Wear: Erosion of the inside surface of the pipe is inevitable when a mineral slurry is being pumped. To deal with this issue, the pipe is either lined or provided with additional (sacrificial) wall thickness. However, experience has shown that bauxite slurry is not very abrasive and causes negligible erosion. Furthermore, most of the solids are in suspension because of the fine particles.
• Corrosion: Interior corrosion is difficult to predict but must be allowed for if no lining is used. An allowance for bauxite is typically 0.2 mm/year. [1] The outside of the pipe is protected using a cathodic protection system coupled with a commercial paint system.
• Transients: Pressure waves, or transients, are induced by velocity changes inside the pipeline. Sudden loss of power, valve closure, and column separation induce transients. In long pipelines, this can be significant and must be assessed. Slurry behaves differently from water, but there are occasions when the line is filled with water or only partially filled with slurry.
• Steel grade: Higher-grade steel yields a lower wall thickness, per hoop stress calculations. The availability of various grades and their costs affect the choice. (An illustrative wall-thickness calculation is sketched after the pumping and size considerations below.)

Pumping and Size Considerations
Pumping high-pressure slurry over long distances in a pipeline requires a number of serious considerations in the design process:
• Pumps: The pumps used to drive the slurry along the pipeline must be robust, reliable, and of proven performance. For long pipelines, the selection of choice is the positive-displacement pump. Whereas centrifugal pumps are limited by the maximum casing pressure, typically about 7 MPa or 700 m head, positive-displacement


pumps can provide pressures up to 25 MPa. Piston wear, once a problem, has been addressed by the piston diaphragm pump, in which a diaphragm protects the piston and liner from abrasive sliding contact wear.
• Pipeline internal diameter: The pipeline internal diameter is crucial for the long-term performance of the system. A diameter should be used that achieves the minimum slurry velocity needed to prevent solids settling. This usually results in flow in the turbulent region. The penalty for using a smaller diameter and higher line velocity is higher pumping heads. However, this lower-risk strategy also decreases wear and reduces the chance of plugging.
• Valves and fittings: A number of fittings are required for pipeline operation. These include isolating valves, scour valves, and air valves. Also, pressure monitors and magnetic flow recorders are installed as part of the monitoring/leak detection system. Air valves and scour valves facilitate emptying sections of pipeline and capturing the associated slurry in dump ponds. If these valves are not provided, managing line breaks or blockages could be challenging, particularly in very long pipelines.
• Dump ponds: Dump ponds may be needed to trap and contain slurry if it becomes necessary to manage breaks and line blockages. The requirement to contain slurry depends on the environmental regulations of the country of operation.
• Power and controls: Power supply and distribution are required, along with a feed control system. Commonly, a power distribution system of about 15 MW is required to deliver power to the pumps. A complex control system is required to link feed into the pipeline with demand at the refinery end. In addition, some long pipelines require leak detection monitoring, which involves recording slurry pressure at intervals along the pipeline and sending this information to the control room.
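As a companion to the steel grade and corrosion notes above, the sketch below (Python) illustrates the kind of wall-thickness estimate implied by a hoop stress calculation plus a corrosion allowance. The 610 mm diameter is the MBP line size cited later in this paper; the 15 MPa section design pressure, the allowable hoop stress, and the 25-year life are illustrative assumptions, not design values.

```python
# Illustrative thin-wall hoop stress sizing with a corrosion allowance:
# t = P * D / (2 * sigma_allow), then add corrosion_rate * design_life.
# All inputs below are example values, not project design figures.

P_design    = 15.0e6      # Pa, assumed section design pressure
D_outside   = 0.610       # m, 610 mm (24 in.) line, as used on the MBP pipeline
sigma_allow = 320.0e6     # Pa, assumed allowable hoop stress for the chosen steel grade
corr_rate   = 0.2e-3      # m/year, typical internal corrosion allowance for bauxite slurry
design_life = 25.0        # years, assumed

t_pressure  = P_design * D_outside / (2.0 * sigma_allow)   # hoop-stress thickness
t_corrosion = corr_rate * design_life                       # sacrificial allowance
t_total     = t_pressure + t_corrosion

print(f"Hoop-stress thickness: {t_pressure*1000:.1f} mm")
print(f"Corrosion allowance:   {t_corrosion*1000:.1f} mm")
print(f"Indicative wall:       {t_total*1000:.1f} mm")
```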

RISKS IN A PIPELINE TRANSPORT SYSTEM

A number of risks need to be addressed in adopting a pipeline slurry transport system. These are increased hydraulic pressure, shortened pipeline life, plugging, excessive slimes, and dewatering plant startup.

Increased Hydraulic Pressure
A change in slurry rheology as a result of altered particle size distribution changes the pump head requirements. For positive-displacement pumps, this means applying more power to the piston stroke and consequently drawing more power from the motor. To cover this possibility, a larger motor may be required.

Shortened Pipeline Life
The pipeline may be subject to greater wear if larger particles are generated in the grinding circuit due to a change in ore characteristics. Larger particles may drag as a shifting bed, thereby increasing erosion. In addition, a change in slurry chemistry may accelerate corrosion. Pipe inspections are needed to determine the extent of wall thickness reduction.

Plugging
The slurry should be within the design grading, with a top size of 250 µm and about 50% at less than 45 µm. With this grading and a pipeline slope of no greater than 16%, the risk of plugging is low. However, if there is a change in sizing, plugging is possible, especially if the pumps stop for more than 30 minutes.

Excessive Slimes
Slimes are detrimental to dewatering plant filter operation because they blind the filter cloths. Failure to remove slimes at the slurry preparation area increases filter usage and maintenance. This situation could be costly in terms of additional capital costs and lost refinery production.

Dewatering Plant Startup
It is possible that dewatering plant startup and commissioning could take at least 3 months. [3] Efficient plant operation may be further delayed if the initial slurry particle size distribution is too fine and/or there is a significant presence of slimes. Thus, control of particle size and slimes at the slurry preparation end is important from the outset of operations.

TRANSPORT SYSTEM ECONOMICS: RAIL VERSUS PIPELINE

Bauxite transportation system study data comparing rail and pipeline provides a range of costs for projects in a number of countries. The data covers rugged terrain sites in Cameroon, Guinea, Brazil, and Vietnam for production rates of 10 million tonnes per year (Mt/y) and 20 Mt/y.


Table 3. System Components Rail versus Pipeline

RAIL
• Land Acquisition: easements for rail
• Earthworks: cuttings, embankments
• Stream Crossings: culverts, bridges
• Trackwork: rail and rolling stock
• Signaling and Telecommunications
• Workshops: maintenance facilities
• Crossings: level or grade separation

PIPELINE
• Raw Water System: pipeline, pump, and pond
• Slurry Thickener and Slurry Tank with Agitator
• Slurry Pipeline
• Slurry Pumps: two pump stations
• Slurry Tank with Agitator
• Pressure Filters
• Water Clarifier (from pressure filters)

Table 4. Bauxite Rail and Pipeline Transportation Costs, 2006 (US$/tonne)

                      10 Mt/y                       20 Mt/y
              200 km   400 km   800 km      200 km   400 km   800 km
RAIL
  Cameroon     3.37     6.60    12.06        2.41     4.57     8.72
  Guinea       3.83     7.08    13.77        2.90     5.45    10.54
  Brazil       4.08     7.17    13.48        2.85     5.30    10.06
  Vietnam      3.04     4.86    10.72        2.21     4.24     8.06
PIPELINE
  Cameroon     5.75     7.55    10.97        4.77     5.82     7.91
  Guinea       5.74     7.51    10.87        4.76     5.80     7.49
  Brazil       5.47     7.32    10.74        4.51     5.57     7.62
  Vietnam      4.27     5.80     8.71        3.49     4.31     6.30

The system components used to develop the costs are listed in Table 3. Total capital and operating costs are summarized in Table 4 as cost per tonne against distance.

Figures 1 and 2 show the cost BEPs for bauxite transportation by rail and by pipeline at capacities of 10 Mt/y (Figure 1) and 20 Mt/y (Figure 2) over rugged terrain in Cameroon, Guinea, Brazil, and Vietnam. In both cases, Vietnam is an outlier compared with the other countries, which are closely aligned. This is because the Vietnamese wage rates are lower than the others. Discounting Vietnam, it is seen that for both the 10 Mt/y and 20 Mt/y capacities, the BEPs are similar, ranging from 450 to 600 km and 450 to 650 km, respectively. This shows that the BEP is largely independent of transport capacity. It should be noted that the BEPs in the two figures pertain only to the case studies selected and will vary for other locations.

Figures 1 and 2 suggest that pipeline transport does not become economical until the transport distance reaches about 450 km. The capital cost of components is the determining factor. For rail, the greater the distance, the more rolling stock (to maintain unit-train cycle time and bauxite delivery rate), sleepers, and rail are needed. For pipeline, the greater the distance, the more pumps (to maintain bauxite delivery rate) and pipe are needed. Cost-wise, pipeline components are cheaper to buy, replace, and augment than are rail components.

It should be noted that where the BEP for rail/pipeline occurs, the unit-cost-versus-distance curves intersect at acute angles. This means that a small change in cost for either rail or pipeline results in a large shift in BEP. For example, if the pipeline cost varies by ±10% of the estimated cost, the BEP range changes as shown in Table 5. Thus, the BEP could range from 350 to 800 km, a significant departure from the base case range of 450 to 600 km.
Table 5. Sensitivity Range of Cost BEPs, 10 Mt/y System

Base Case:        450–600 km
Base Case –10%:   350–450 km
Base Case +10%:   600–800 km
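As a rough cross-check of the breakeven behavior reported in Tables 4 and 5, the crossover distance can be estimated by straight-line interpolation between the 400 km and 800 km unit costs. The sketch below (Python) does this for the 10 Mt/y Cameroon case and repeats it with the pipeline costs scaled by ±10%; the published curves are not linear, so the numbers are only indicative, and the ±10% scaling is taken from the sensitivity discussion above rather than from the underlying study.

```python
# Rough breakeven-distance check using the 10 Mt/y Cameroon unit costs from Table 4.
# Straight-line interpolation between the 400 km and 800 km points; the published
# cost curves are not linear, so this is only an order-of-magnitude cross-check.

def crossover_km(rail, pipe, d0=400.0, d1=800.0):
    """Distance at which the interpolated pipeline cost drops below the rail cost."""
    rail_slope = (rail[d1] - rail[d0]) / (d1 - d0)
    pipe_slope = (pipe[d1] - pipe[d0]) / (d1 - d0)
    # rail(d0) + rail_slope*x = pipe(d0) + pipe_slope*x  ->  solve for x
    x = (pipe[d0] - rail[d0]) / (rail_slope - pipe_slope)
    return d0 + x

rail_cameroon = {400: 6.60, 800: 12.06}   # US$/tonne, Table 4
pipe_cameroon = {400: 7.55, 800: 10.97}

base = crossover_km(rail_cameroon, pipe_cameroon)
low  = crossover_km(rail_cameroon, {d: 0.9 * c for d, c in pipe_cameroon.items()})
high = crossover_km(rail_cameroon, {d: 1.1 * c for d, c in pipe_cameroon.items()})

print(f"Base-case crossover  ~{base:.0f} km")   # roughly within the 450-600 km band
print(f"Pipeline cost -10%   ~{low:.0f} km")
print(f"Pipeline cost +10%   ~{high:.0f} km")
```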

MBP BAUXITE SLURRY PIPELINE IN BRAZIL [1]

At the MBP facility in Brazil, bauxite is pumped through a 245 km pipeline across remote and rugged terrain to the Alunorte Refinery. This pipeline system became an economical transport mode because of factors favorable to a pipeline solution but not to a rail alternative, including location in a remote area; location in rugged terrain; use of an existing, cleared right-of-way; and the need for many stream crossings.


Figure 1. Bauxite Rail and Pipeline Cost Breakeven Points, 10 Mt/y System (unit cost in US$/tonne versus distance in km for the 10 Mt/y train and pipeline cases; breakeven cost range 450–600 km, with the breakeven cost for Vietnam shown separately)

Figure 2. Bauxite Rail and Pipeline Cost Breakeven Points, 20 Mt/y System (unit cost in US$/tonne versus distance in km for the 20 Mt/y train and pipeline cases; breakeven cost range 450–650 km, with the breakeven cost for Vietnam shown separately)


The MBP bauxite transportation system has the following slurry preparation, pipeline, and dewatering features.

Slurry Preparation Features
Slurry is prepared by crushing and grinding mined ore taken from stockpiles. Cyclones are used to adjust particle size. Tailings are separated and sent to a waste thickener. The prepared slurry, at 50% solids, is sent to agitated storage tanks next to the pipeline pump station. The pump station consists of six mainline crankshaft-driven piston diaphragm pumps connected in parallel. The pumps are Geho TZPM 2000 units from Weir Minerals Netherlands, each with a design capacity of 356 m³/h and a maximum discharge pressure of 13.7 MPa.

Slurry Pipeline Features
The pipeline is designed for a future maximum annual production of 12.6 million tonnes of bauxite, but the initial capacity is only 4.5 million tonnes. The ultimate capacity will be reached after a few years, when the 610 mm (24 in.) diameter

pipeline is fully used. Currently, the operation pumps slurry-water batches, a requirement to maintain the minimum design slurry velocity. A second pump station will be needed to meet the additional pumping requirements for the future maximum production. The pipeline has a leak detection system featuring five intermediate pressure-monitoring stations that provide continuous pressure data to the pipeline operator. The pressure data is transmitted via a fiber-optic cable. The pipeline shares right-of-way with two existing kaolin slurry pipelines.

Slurry Dewatering Features
At the terminal station, agitator tanks receive the slurry to provide feed to the filter plant. The slurry is dewatered using hyperbaric pressure filters to produce a bauxite filter cake, with about 12% moisture, as feed to the refinery.
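As a rough cross-check of these figures, the sketch below estimates the continuous slurry flow and pipeline velocity implied by the initial 4.5 Mt/y throughput at 50% solids by weight. The solids specific gravity (2.4), internal pipe diameter (0.58 m), and on-line hours are assumed values for illustration only, not data from the MBP project.

# Rough estimate of slurry flow and velocity for the initial MBP throughput.
# Assumed values (not project data): solids SG, pipe internal diameter, on-line hours.
import math

dry_tonnes_per_year = 4.5e6      # initial bauxite throughput, t/y (from the text)
solids_weight_frac  = 0.50       # 50% solids by weight (from the text)
solids_sg           = 2.4        # assumed bauxite specific gravity
water_sg            = 1.0
pipe_id_m           = 0.58       # assumed internal diameter for the 610 mm line
hours_per_year      = 8000       # assumed on-line hours

# Slurry density from the weighted reciprocal of the component densities
slurry_sg = 1.0 / (solids_weight_frac / solids_sg + (1 - solids_weight_frac) / water_sg)

slurry_tonnes = dry_tonnes_per_year / solids_weight_frac        # total slurry mass, t/y
slurry_m3_per_h = slurry_tonnes / slurry_sg / hours_per_year    # volumetric flow, m3/h
velocity = slurry_m3_per_h / 3600 / (math.pi * pipe_id_m**2 / 4)

print(f"Slurry density: {slurry_sg:.2f} t/m3")
print(f"Slurry flow:    {slurry_m3_per_h:.0f} m3/h")
print(f"Pipe velocity:  {velocity:.2f} m/s")

Under these assumptions the average velocity in the 610 mm line is well below 1 m/s, which is consistent with the statement above that slurry-water batches are currently pumped to maintain the minimum design slurry velocity until throughput grows toward the ultimate capacity.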


SUMMARY OF PIPELINE TRANSPORT SYSTEM ATTRIBUTES

Table 6 summarizes the key advantages and disadvantages that need to be taken into account when considering a pipeline transport system.

Table 6. Advantages and Disadvantages of a Pipeline Transport System


ADVANTAGES
Unobtrusive: Line buried, low visibility, low environmental impact, lower footprint
More Secure: Better protected, less likely to be vandalized
Safer: Local population better protected, no moving parts to clash with
Continuous Flow: No stop/start operation, less likely to experience product delay at refinery
Low Maintenance: None on pipeline, minor on pumps, high on filters
Flexible Alignment: Easily adjustable around villages or obstacles
Shorter Route: Fewer vertical and horizontal alignment constraints, resulting in more direct route
Easier Stream Crossings: Can pass buried under streams without bridging
Environmentally Friendly: Lower footprint, less clearing, does not isolate habitat, no noise/dust

DISADVANTAGES
High Capital and Operating Costs: Slurry preparation involves high capital and operating costs from mine to pipeline (crushing, grinding, water supply, etc.); slurry dewatering involves high capital and operating costs for slurry receiving, dewatering filters and associated compressors, cake storage and re-slurrying, and water disposal
Water Usage: Large water requirement for slurry transport (approximately 1 tonne of water per tonne of bauxite)
Rheology Change: Change in ore characteristics can change particle distribution, leading to possible increase in pumping head and increased filtering
Blockages: May be difficult to locate and remove
Dewatering Management: Expensive to return filter water to mine; disposal at refinery may require treatment before release; downstream issues, including environmental and community, may occur
Pipeline Life: Long-term pipeline performance, higher-than-expected internal corrosion and erosion



CONCLUSIONS

Long-distance transport of bauxite by pumping slurry through a pipeline is in its infancy, with only one operation at present, worldwide. This operation is at the MBP facility in Brazil, which has been pumping bauxite slurry 245 km across rugged terrain for only about 1 year. To date, there have been no reported problems. Transporting bauxite slurry by pipeline may be the preferred solution over transporting dry bauxite by rail if a number of the favorable selection criteria listed in Table 6 are met. However, pipeline transport is likely to be used only in special cases, and selection must proceed with caution. There are a number of risks associated with pumping bauxite, as outlined in this paper. However, if the risks are addressed and properly managed, an economic advantage can be realized. Compared with the long-distance rail transport of bauxite, slurry pumping becomes more economical over distances beyond 450 km for the cases presented in this paper. However, in contemplating the transportation of bauxite slurry by pumping it through a pipeline, it is essential to completely understand the properties and characteristics of the bauxite. In the final analysis, each project is different and site dependent. Thus, even if the distance is long, the terrain is rugged, and the bauxite properties are known, much detailed analysis is still required before an informed decision can be made.

BIOGRAPHY
Terry Cunningham is a senior civil engineer with 39 years of experience in planning, investigating, designing, and supervising the construction of a diverse range of largescale infrastructure works, particularly in the mining and metals sector. He has spent the last 14 years with Bechtel and is currently based in Brisbane, Queensland, Australia, in the Alumina and Bauxite Centre of Excellence. His areas of expertise include earthworks, tailings disposal, water management, water resources, haul roads, dams, mine dewatering, drainage, flood mitigation, sewerage treatment, pumping systems, feasibility studies, and report writing. Terry has worked throughout Australia and in Oman; Bahrain; Montreal, Quebec; Indonesia; and Papua New Guinea. Representative Bechtel assignments include serving as lead civil engineer on the Sohar aluminum smelter project in Oman, managing the design requirements for the complex licensing process required by the Queensland Government for the Pasminco Century zinc tailings dam, and designing and managing an important environmental cleanup project for the Comalco Bell Bay aluminum smelter in Tasmania. He presented a paper on this award-winning project to an international conference hosted by the Minerals Council of Australia at Newcastle near Sydney. Before joining Bechtel, Terry worked for BHP Engineering on many coal infrastructure projects and on a major upgrade of the Brisbane-to-Cairns railway. The railway work involved replacing more than 300 old timber bridges (built in 1906) with large box culverts and bridges. Terry qualified in Civil Engineering at Swinburne University of Technology, Melbourne, Australia. He is a member of the Institution of Engineers Australia and a registered professional engineer. Terry lectured part-time for 2 years at Royal Melbourne Institute of Technology.


REFERENCES
[1] R. Gandhi, M. Weston, M. Talavera, G.P. Brittes, and E. Barbosa, "Design and Operation of the World's First Long Distance Bauxite Slurry Pipeline," in Light Metals 2008, edited by D.H. DeYoung, The Minerals, Metals & Materials Society, March 2008, pp. 95–100, access publication via <http://iweb.tms.org/Purchase/ProductDetail.aspx?Product_code=08-7100-G>.

[2] R. Bott, T. Langeloh, and J. Hahn, "Filtration of Bauxite After Pipeline Transport: Big Challenges - Proper Solutions," presented at the 8th International Alumina Quality Workshop, Darwin, NT, Australia, September 7–12, 2008, access technical program via <http://www.aqw.com.au/Portals/32/AQW%202008%20final%20program.pdf>.

[3] M. Santa Ana, J. Morales, R. Prader, J. Kappel, and H. Heinzle, "Hyperbaric Bauxite Filtration: New Ways in Bauxite Transportation," presented at the 8th International Alumina Quality Workshop, Darwin, NT, Australia, September 7–12, 2008, access technical program via <http://www.aqw.com.au/Portals/32/AQW%202008%20final%20program.pdf>.





Bechtel OG&C
Technology Papers

TECHNOLOGY PAPERS

125

World's First Application of Aeroderivative Gas Turbine Drivers for the ConocoPhillips Optimized Cascade LNG Process
Cyrus B. Meher-Homji Tim Hattenbach Dave Messersmith Hans P. Weyermann Karl Masani Satish Gandhi, PhD

141

Innovation, Safety, and Risk Mitigation via Simulation Technologies


Ramachandra Tekumalla Jaleel Valappil, PhD

157

Optimum Design of Turbo-Expander Ethane Recovery Process

Wei Yan, PhD; Lily Bai, PhD; Jame Yao, PhD; Roger Chen, PhD; Doug Elliot, PhD; and Stanley Huang, PhD

OG&C Jamnagar Export Refinery: With the construction of this refinery in northwest India, the second refinery on the site, Jamnagar has now become the world's largest oil-refining operation.

WORLD'S FIRST APPLICATION OF AERODERIVATIVE GAS TURBINE DRIVERS FOR THE CONOCOPHILLIPS OPTIMIZED CASCADE LNG PROCESS
Originally Issued: April 2007; Updated: December 2008

Abstract: Market pressures for new thermally efficient and environmentally friendly liquefied natural gas (LNG) plants, coupled with the need for high plant availability, have resulted in the world's first application of high-performance PGT25+ aeroderivative gas turbines for the 3.7 MTPA Darwin LNG plant in Australia's Northern Territory. The plant was operational several months ahead of contract schedule and exceeded its production target for 2006. This paper describes the philosophy leading to this first-of-a-kind aeroderivative gas turbine plant and future potential for the application of larger aeroderivative drivers, which are an excellent fit for the ConocoPhillips Optimized Cascade Process.

Keywords: aeroderivative, gas turbine, greenhouse gas, LNG, LNG liquefaction, thermal efficiency

INTRODUCTION

OVERVIEW OF THE DARWIN LNG PROJECT

Cyrus B. Meher-Homji
cmeherho@bechtel.com

Tim Hattenbach
thattenb@bechtel.com

Dave Messersmith
dmessers@bechtel.com

Hans P. Weyermann
hans.weyermann@conocophillips.com

Aeroderivative engines fit the ConocoPhillips Optimized Cascade Process because of the two-trains-in-one design concept that facilitates the use of such engines. Further, the application of a range of larger aeroderivative engines that are now available allows for a flexible design fit for this process. Benefits of aeroderivative engines over large heavy-duty single- and two-shaft engines include significantly higher thermal efficiency and lower greenhouse gas emissions, the ability to start up without the use of large helper motors, and improved production efficiency due to modular engine change-outs. For instance, the Darwin liquefied natural gas (LNG) plant is able to operate at reduced rates of 50% to 70% in the event that one refrigeration compressor is down. Several practical aspects of the application of aeroderivative gas turbines as refrigeration drivers, along with design and implementation considerations, are discussed below. The selection of aeroderivative engines and their configurations for various train sizes, as well as the evaluation of emission considerations, are also covered.

On February 14, 2006, the Darwin LNG plant was successfully commissioned and the first LNG cargo was supplied to the buyers, Tokyo Electric and Tokyo Gas. The plant represents an innovative benchmark in the LNG industry as the world's first facility to use high-efficiency aeroderivative gas turbine drivers. This benchmark follows another landmark innovation by ConocoPhillips: the first application of gas turbine drivers at the Kenai LNG plant in Alaska built in 1969. The Darwin plant is a nominal 3.7 million tonnes per annum (MTPA) capacity LNG plant at Wickham Point, located in Darwin Harbour, Northern Territory, Australia. The plant is connected via a 500 km, 26-inch-diameter subsea pipeline to the Bayu-Undan offshore facilities. The Bayu-Undan field was discovered in 1995 approximately 500 km northwest of Darwin in the Timor Sea (see Figure 1). Delineation drilling over the next 2 years determined the field to be of world-class quality with 3.4 trillion cubic feet (tcf) of gas and 400 million barrels (MMbbl) of recoverable condensate and liquefied petroleum gas (LPG). The Bayu-Undan offshore facility began operating in February 2004; current production averages 70,000 bbl of condensate and 40,000 bbl of LPG per day. The Darwin project was developed through a lump-sum, turnkey (LSTK) contract with Bechtel

Karl Masani
karl.masani@conocophillips.com

Satish Gandhi, PhD


satish.l.gandhi@conocophillips.com
1 ConocoPhillips Optimized Cascade Process services are provided by ConocoPhillips Company and Bechtel Corporation via a collaborative relationship with ConocoPhillips Company.

© 2008 Bechtel Corporation. All rights reserved.


ABBREVIATIONS, ACRONYMS, AND TERMS


ASME – American Society of Mechanical Engineers
bbl – barrel
CC – combined cycle
CIT – compressor inlet temperature
CNG – compressed natural gas
CO2 – carbon dioxide
DBT – dry bulb temperature
DLE – dry low emissions
FEED – front-end engineering design
FOB – free on board
GENP – General Electric Nuovo Pignone
GT – gas turbine
HHV – higher heating value
HPT – high-pressure turbine
HSPT – high-speed power turbine
ISO – International Organization for Standardization
kg/sec – kilograms per second
LM2500+G4 – gas generator manufactured by GE Industrial Aeroderivative group
LNG – liquefied natural gas
LPG – liquefied petroleum gas
LSTK – lump-sum, turnkey
MDEA – methyldiethanolamine
MMbbl – million barrels
MMBtu – million British thermal units
MTPA – million tonnes per annum
NGL – natural gas liquid
NOx – nitrogen oxide
NPV – net present value
PGT25+ – GENP designation of the LM2500 engine with HSPT
ppm – parts per million
RH – relative humidity
rpm – revolutions per minute
SAC – single annular combustor
SC – simple cycle
SHP – shaft horsepower
tcf – trillion cubic feet
TMY2 – typical meteorological year 2
VFD – variable-frequency drive

Figure 1. Bayu-Undan Field Location and the Darwin LNG Plant



Figure 2. Aerial View of 3.7 MTPA Darwin LNG Plant: 188,000 m³ Storage Tank, 1,350 m Jetty, and Loading Dock


Corporation that was signed in April 2003 with notice to proceed for construction issued in June 2003. An aerial photo of the completed plant is shown in Figure 2. Details regarding the development of the Darwin LNG project have been provided by Yates. [1, 2] Not only has the Darwin plant established a new benchmark in the LNG industry by being the first LNG plant to use aeroderivative gas turbines as refrigerant compressor drivers, it also is the first to use evaporative coolers. The GE PGT25+2 is comparable in power output to the GE Frame 5D gas turbine but has an ISO thermal efficiency of 41% compared to 29% for the Frame 5D. This improvement in thermal efficiency results in a reduction of required fuel, which reduces greenhouse gas in two ways. First, CO2 emissions are reduced due to a lower quantum of fuel burned, and second, the total feed gas required for the same LNG production also is reduced. The feed gas coming to the Darwin LNG facility contains carbon dioxide, which is removed in an amine system before LNG liquefaction and is released to the atmosphere. The reduction in the feed gas (due to the lower fuel gas requirements) results in

a reduction of carbon dioxide or greenhouse gas emissions from the unit. The Darwin plant incorporates several other design features to reduce greenhouse gas emissions. They include the use of waste heat recovery on the PGT25+ turbine exhaust that is used for a variety of heating requirements within the plant. The facility also contains ship vapor recovery equipment. Both of these features not only reduce emissions that would have been produced from fired equipment and flares, but they also lead to reduced plant fuel requirements, which reduce the carbon dioxide released to the atmosphere. Gas turbine nitrogen oxide (NOx) control is derived by water injection, which allows the plant to control NOx emissions while maintaining the flexibility to accommodate fuel gas compositions needed for various plant operating conditions. At the same time, there is no need for costly fuel treatment facilities for dry low NOx combustors. The Darwin plant uses a single LNG storage tank with a working capacity of 188,000 m3, one of the largest aboveground LNG tanks. A ground flare is used instead of a conventional stack to minimize visual effects from the facility and any intrusion on aviation traffic in the Darwin area. The plant also uses vacuum jacketed piping in the storage and loading system to improve thermal efficiency and reduce insulation costs. Methyldiethanolamine

2 This engine uses a LM2500+ gas generator, coupled with a two-stage high-speed power turbine developed by GE Oil & Gas.



Figure 3. Simplified Process Flow Diagram of the Optimized Cascade Process

(MDEA) with a proprietary activator is used for acid gas removal. This amine selection lowers the required regeneration heat load, and for an inlet gas stream containing more than 6% carbon dioxide, this lower heat load leads to reduced equipment size and a corresponding reduction in equipment cost.

Plant Design
The Darwin LNG Plant uses the ConocoPhillips Optimized Cascade Process, which was first used in the Kenai LNG plant in Alaska and more recently in Trinidad (four trains), Egypt (two trains), and a train in Equatorial Guinea. A simplified process flow diagram of the Optimized Cascade Process is shown in Figure 3.

Thermal Efficiency Considerations
Several fundamental conditions in today's marketplace make aeroderivative engines an excellent solution:

Sizes of available aeroderivative engines ideally fit the two-trains-in-one concept of the ConocoPhillips LNG Process.

Aeroderivative engines are variable-speed drivers, which facilitate the flexibility of the process and allow startup without the use of large variable-frequency drive (VFD) starter motors commonly used on single-shaft gas turbines. Aeroderivative engines also allow startup under settle-out pressure conditions, with no need to depressurize the compressor as is common for single-shaft drivers. High efficiency results in a greener train with a significant reduction in greenhouse gas emissions. Several LNG projects are gas constrained due to a lack of available supplies. This situation occurs both on potential new projects and at existing LNG facilities. Under such constraints, any fuel reduction resulting from higher gas-turbine thermal efficiency means it can be converted to LNG. Gas supplies are also constrained due to greater national oil company control of the sources. Gas supplies are no longer available at low cost to LNG plants and the notion that fuel is free is now a thing of the past. Several current projects and front-end engineering design (FEED) studies



Figure 4. Present Value of Gross Margin as a Function of Driver Thermal Efficiency, for a Range of LNG FOB Prices (present value of gross margin, $ million, vs. LNG price, $/MMBtu, for power cycle efficiencies of 33%, 37%, 40%, and 50% relative to a 30% base case; fixed feed gas flow with gas cost of $0.75/MMBtu; present value calculated at a 12% discount rate over a 20-year life; availability adjusted +1% for aeroderivatives and -2% for combined cycle; capital cost adjusted for incremental capacity at $150/tonne SC and $300/tonne CC)


have encountered fuel valued much higher than a decade ago. Host governments also are requiring more gas for domestic use, increasing the shortfalls for LNG plants. Given this situation and the fact that fuel not consumed can be converted to LNG, use of high-efficiency aeroderivative engines delivers significant benefits with a net present value (NPV) of hundreds of millions of dollars. Because NPV is a strong function of feed gas costs and LNG sales price, it is highly affected by a plant's thermal efficiency, especially when the free on board (FOB) LNG costs are high, as in the current market. The present value of converting fuel into LNG for a nominal 5.0 MTPA plant is shown in Figure 4 for a range of driver efficiencies between 33% and 50%, as compared to a base case of 30%. Results are provided for FOB LNG prices ranging from $1 to $5 per million British thermal units (MMBtu). The present value of the gross margin (defined as LNG revenue minus feed gas cost) is calculated over a 20-year life and a discount rate of 12%. The graph shows the strong influence of driver efficiency. The thermal efficiency of an LNG facility depends on numerous factors such as gas composition, inlet pressure and temperature, and even more obscure factors such as the location of the loading dock relative to the site of the liquefaction process. Higher thermal efficiency is typically

a tradeoff between capital and life cycle costs. Gas turbine selection, the use of waste heat recovery and ship vapor recovery, and selfgeneration versus purchased power all have a significant effect on the overall thermal efficiency of the process. Process flexibility and stability of operation are of paramount importance and must be incorporated into the considerations regarding thermal efficiency because the value of a highly efficient process is diminished if plant reliability and availability are sacrificed. Yates [3] has provided a detailed treatment of the design life cycle and environmental factors that affect plant thermal efficiency, such as feed gas characteristics, feed gas conditioning, and the LNG liquefaction cycle itself. Some of the key elements of this discussion are provided below, leading into the discussion of the selection of high-efficiency aeroderivative engines. A common consideration in evaluating competing LNG technologies is the difference in thermal efficiency. The evaluation of thermal efficiency tends to be elusive and subjective in that each project introduces its own unique characteristics that determine its optimum thermal efficiency based on the projects strongest economic and environmental merits. Different technologies or plant designs cannot be compared on thermal efficiency without understanding and compensating for such unique differences of each project.
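To illustrate how the Figure 4 present-value comparison can be framed, the sketch below converts a driver-efficiency improvement into fuel saved, treats that saved fuel as additional LNG, and discounts the resulting gross margin over 20 years at 12%. The feed energy basis, the base-case fuel fraction, and the LNG price are hypothetical placeholders, not values from the study.

# Hedged sketch: present value of converting fuel savings into LNG.
# Assumed, illustrative inputs (not study data): feed energy, fuel fraction, prices.

feed_energy_mmbtu_per_year = 300e6   # assumed fixed feed gas flow, MMBtu/y
base_efficiency = 0.30               # base-case power cycle efficiency
improved_efficiency = 0.40           # improved driver efficiency
gas_cost = 0.75                      # feed gas cost, $/MMBtu (noted in Figure 4)
lng_price = 4.00                     # assumed FOB LNG price, $/MMBtu
discount_rate, years = 0.12, 20

# Fuel energy drawn by the drivers scales inversely with efficiency for the
# same refrigeration duty; assume the base case burns 10% of the feed as fuel.
base_fuel = 0.10 * feed_energy_mmbtu_per_year
improved_fuel = base_fuel * base_efficiency / improved_efficiency
saved_mmbtu = base_fuel - improved_fuel          # becomes additional LNG

annual_margin = saved_mmbtu * (lng_price - gas_cost)
annuity = (1 - (1 + discount_rate) ** -years) / discount_rate
present_value = annual_margin * annuity

print(f"Fuel saved:       {saved_mmbtu / 1e6:.1f} million MMBtu/y")
print(f"Annual margin:    ${annual_margin / 1e6:.0f} million")
print(f"20-year PV @ 12%: ${present_value / 1e6:.0f} million")

With these assumed inputs the present value is on the order of one to two hundred million dollars, which is the same order of magnitude as the benefit the text attributes to high-efficiency drivers.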




The definition of thermal efficiency also has proven to be subjective depending on whether an entire plant, an isolated system, or an item of equipment is being compared. Thermal efficiency, or train efficiency, has been expressed as the ratio of the total higher heating value (HHV) of the products to the total HHV of the feed. This definition fails to recognize the other forms of thermodynamic work or energy consumed by the process. For example, the definition would not account for the work of purchased power and electric motors if they are used for refrigeration and flashed gas compression. When evaluating the benefits of achieving high thermal efficiency with a specific LNG plant design, a true accounting of all of the energy being consumed in the process must be considered. Turndown capabilities of an LNG process also need to be considered when thermal efficiency and life-cycle cost comparisons are being made. Thermal efficiency comparisons are typically based on the process operating at design conditions. In an actual plant environment, this design point is elusive, and an operator is always trying to attain a sweet spot where the plant will operate at its peak performance under prevailing conditions. As the temperature changes during the day, affecting the performance of the air coolers, turbines, or process fluid and equipment, the operator needs to continually adjust plant parameters to achieve optimal performance. Designing a plant to allow an operator to continually achieve this optimum performance will affect the plants overall thermal efficiency and life cycle costs. The efficiency of an LNG process depends on many features. The two most significant ones are the efficiency of heat exchange and the turbomachinery efficiency. The heat exchange efficiency is a function of the process configuration and selection of the individual heat exchangers, which sets temperature approaches. The turbomachinery efficiency depends on the compressor and turbine efficiencies. Cooling Curve Performance The liquefaction cooling curve3 performance is another benchmark reviewed in LNG technology comparisons and is often misunderstood or incorrectly applied. Recent analyses by Ransbarger et al. [4] have comprehensively evaluated the issue of cooling curve performance with respect to overall thermal efficiency.

A liquefaction cooling curve plot depicts the temperature change of the heat sink and the heat source as a function of the heat transferred. Frequently, cooling curves are shown with only the feed gas as a heat source and then used as a means to compare different liquefaction processes. Cooling curves should include all duty that is transferred at a given temperature, which includes cooling and condensing of the refrigerants as well as the feed gas. The composite cooling curve analysis seeks to optimize the area or temperature difference between the heat source and the heat sink in a cost-effective manner. Each of the available liquefaction processes attempts to optimize this temperature difference in a different way. Very often, process efficiencies of LNG technologies have been compared with the classical cascade process. It is important to note that the ConocoPhillips Optimized Cascade Process encompasses two major modifications: The addition and optimization of heat recovery schemes Where appropriate, the conversion of the traditional closed-loop methane refrigeration system to an open-loop system The plate fin heat exchangers used in this process are also recognized for their ability to achieve an exceptionally close temperature approach. The use of pure refrigerants allows continually accurate prediction of refrigerant performance during plant operation without the need for on-line refrigerant monitoring. Therefore, for a given feed gas composition range, the cascade liquefaction technology provides the plant designer with flexibility in cooling stage locations, heat exchanger area, and operating pressure ranges during each stage, resulting in a process that can achieve high thermal efficiency under a wide range of feed conditions. When using cooling curves, incorrect conclusions can be drawn if only the feed gas is used as a heat source. It is imperative that heat transfer associated with cooling and condensing refrigerants also be included4, so that a complete cooling curve can be derived. Complete cooling curves of the classical and Optimized Cascade Process are depicted in Figure 5. The average temperature approach of the classical cascade is 16 F (8.89 C), while the average approach temperature of the Optimized Cascade is
4 The Optimized Cascade Process would include heat transfer associated with the propane refrigerant loads necessary to cool and condense ethylene, as well as the ethylene refrigeration loads necessary to cool and condense methane flash vapors.

3 Also known as a temperature-duty curve.



12 F (6.67 C), i.e., a reduction of 25%, which represents a 10% to 15% reduction in power. The maturity of the liquefaction processes has approached a point at which changes in duty curve no longer represent the greatest impact. Two developments that have a significant impact on efficiency are the improvement in liquefaction compressor efficiency5 and the availability of high-efficiency gas turbine drivers. A comparison of LNG technologies at a single design condition does not address plant performance during variations in operating conditions. A two-shaft gas turbine such as the PGT25+ used at Darwin, with its ability to control compressor performance without the need for recycle, can deliver significant improvements in thermal efficiency during turndown operations. Due to significant production swings during the day as a result of changes in ambient temperature, described earlier, the performance of the gas turbine and compressor package needs to be considered in any comparison of plant thermal efficiency.


SELECTION OF AERODERIVATIVE ENGINES

The earlier discussion demonstrated that the selection of the gas turbine plays an important role in efficiency, greenhouse gas emissions, and flexibility under various operating conditions. The gas turbine selection for Darwin LNG was based on the economic merits that the turbine would deliver for the overall life cycle cost of the project. When high fuel costs are expected, the selection of a high-efficiency driver becomes a strong criterion in the life cycle cost evaluation. However, LNG projects are developed to monetize stranded gas reserves, while low-cost fuel has favored industrial gas turbines. This situation is changing and the value of gas is growing. Further, when the gas is pipeline or otherwise constrained, there is a clear benefit to consuming less fuel for a given amount of refrigeration power. In such cases, a high-efficiency gas turbine solution through which the saved fuel can be converted into LNG production can reap large benefits. Figure 6 shows that aeroderivative gas turbines achieve significantly higher thermal efficiencies than industrial gas turbines. The figure illustrates the engines' thermal efficiency vs. specific work (kW per unit air mass flow). The higher efficiency of an aeroderivative gas turbine can result in a
5 Compressor polytropic efficiencies now exceed 80% and high-efficiency gas turbines are available with simple-cycle thermal efficiencies of approximately 40%. 6 Based on Frame 5C, 5D, 7EA, and 9E frame type drivers and GE PGT25+, LM6000, RR 6761, and RR Trent aeroderivative units.


Figure 5. Comparison of Cooling Curves for Classical Cascade Process (Average Approach = 16 °F) and ConocoPhillips Optimized Cascade Process (Average Approach = 12 °F)



Figure 6. Map of ISO Thermal Efficiency vs. Specific Work of Commonly Used Frame Drivers and Aeroderivative Engines (Aeroderivative Engines Exhibit Higher Specific Work and Thermal Efficiency)



3% or greater increase in overall plant thermal efficiency. Further, plant availability significantly improves because a gas turbine generator (or even a complete turbine) can be completely changed out within 48 hours compared to the 14 or more days required for a major overhaul of a heavy-duty gas turbine. The GE PGT25+ aeroderivative gas turbine is used as the refrigerant compressor driver at Darwin. The PGT25+ is comparable in power output to the GE Frame 5D but has a significantly higher thermal efficiency of 41.1%. This improvement in thermal efficiency results directly in a reduction of specific fuel required per unit of LNG production. This reduction in fuel consumption in turn results in decreased CO2 emissions, as depicted in Figure 7, which shows relative CO2 emissions for various drivers. A similar beneficial greenhouse gas reduction comes from the use of waste heat recovery on the PGT25+ turbine exhaust used for various heating requirements within the plant. The use of this heat recovery eliminates greenhouse gas emissions that would have been released had gas-fired equipment been used. The result is an approximately 9.3% reduction in total greenhouse gases.

Advantages of Aeroderivative Engines over Heavy-Duty Gas Turbines
Several advantages of using aeroderivative engines, some of which have been discussed, include:

Much higher efficiency that leads to reduced fuel consumption and greenhouse emissions

Ability to rapidly swap engines and modules, thus improving maintenance flexibility

High starting torque capacity; excellent torque-speed characteristics, allowing large trains to start up under settle-out pressure conditions

Essentially zero-timed after 6 years; maintenance can also be done on condition, allowing additional flexibility

Dry-low-emissions (DLE) technology, available and proven on several engines

Relatively easy installation due to low engine weight

Implementation of the PGT25+ in the Darwin Plant

Gas Turbine and Compressor Configurations
The Darwin LNG compressor configuration encompasses the hallmark two-in-one design of the Optimized Cascade Process, with a total of six refrigeration compressors configured as shown below in a 2+2+2 configuration. All of the turbomachinery was supplied by GE Oil & Gas (Nuovo Pignone).
Propane: 2 x PGT25+ + GB + 3MCL1405
Ethylene: 2 x PGT25+ + GB + 2MCL1006
Methane: 2 x PGT25+ + MCL806 + MCL806 + BCL608



Both the propane and ethylene trains have speed reduction gearboxes. All compressors are horizontally split except for the last casing of the methane string, which is a barrel design. The gas turbines and compressors are mezzanine mounted as shown in Figure 8, which facilitates a down-nozzle configuration for the compressors. A view of the six strings from the gas turbine inlet side is shown in Figure 9. The four once-through steam generators are on the four turbine exhausts to the left. The LM2500+ gas generator is shown in Figure 10.

1.0



AERODERIVATIVE ENGINE TECHNOLOGY FOR DARWIN LNG






The PGT25+ engine used at the Darwin plant has a long heritage, starting from the TF39 GE aeroengine, as shown in Figure 11. This highly successful aeroengine resulted in the industrial LM2500 engine, which was then upgraded to the LM2500+. The PGT25+ is essentially the LM2500+ gas generator coupled to a 6,100 revolution-per-minute (rpm) high-speed power turbine (HSPT). The latest variant of this engine is the G4, rated at 34 MW. The first LM2500+ design was based on the successful history of the LM2500 gas turbine

Figure 7. Relative CO2 Emissions from Different Classes of Gas Turbines (GE Frame 5C, GE Frame 6B, GE Frame 7EA, GE Frame 5D, GE LM2500+, GE LM6000PD, Rolls Royce 6761, and Trent 60 DLE)
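The relative CO2 levels compared in Figure 7 follow directly from driver thermal efficiency: for the same shaft power, fuel burn and CO2 scale inversely with efficiency. The short sketch below works through that relationship using the efficiencies quoted in the text (41.1% for the PGT25+ and roughly 29% for the Frame 5D); the shaft power, operating hours, and fuel carbon factor are illustrative assumptions, not plant data.

# Fuel and CO2 scale inversely with driver thermal efficiency (same shaft power).
shaft_power_mw = 30.0        # assumed refrigeration driver load, MW
hours = 8000                 # assumed operating hours per year
co2_per_mwh_fuel = 0.20      # assumed tonnes CO2 per MWh of gas burned (illustrative)

def annual_fuel_and_co2(efficiency):
    fuel_mwh = shaft_power_mw * hours / efficiency
    return fuel_mwh, fuel_mwh * co2_per_mwh_fuel

fuel_aero, co2_aero = annual_fuel_and_co2(0.411)    # PGT25+ (from the text)
fuel_frame, co2_frame = annual_fuel_and_co2(0.29)   # Frame 5D (from the text)

print(f"Fuel ratio (PGT25+/Frame 5D): {fuel_aero / fuel_frame:.2f}")
print(f"CO2 saved per driver: {co2_frame - co2_aero:,.0f} t/y")

On this basis each aeroderivative driver burns roughly 30% less fuel than an equivalent heavy-duty frame unit for the same shaft power, which is the effect Figure 7 normalizes across driver classes.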



that was completed in December 1996. The LM2500+ was originally rated at 27.6 MW, with a nominal 37.5% ISO thermal efficiency. Since that time, its ratings have grown to the current 31.3 MW and 41% thermal efficiency. The LM2500+ has a revised and upgraded compressor section with an added zero stage for a 23% increased airflow and pressure ratio, and revised materials and design in the high-pressure and power turbines. Details can be found in Wadia et al. [5]

Description of the PGT25+ Gas Turbine
The PGT25+ consists of the following components:

Axial flow compressor: The compressor is a 17-stage axial-flow design with variable-geometry compressor inlet guide vanes that direct air at the optimum flow angle, and variable stator vanes to ensure ease of starting and smooth, efficient operation over the entire engine operating range. The axial flow compressor operates at a pressure ratio of 23:1 and has a transonic blisk as the zero stage. As reported by Wadia et al. [5], the airflow rate is 84.5 kg/sec at a gas generator speed of 9,586 rpm. The axial compressor has a polytropic efficiency of 91%.

Annular combustor: The engine is provided with a single annular combustor (SAC) with coated combustor dome and liner similar to those used in flight applications. The SAC features a through-flow, venturi swirler to provide a uniform exit temperature profile
7 The zero stage operates at a stage pressure ratio of 1.43:1 and an inlet tip relative mach number of 1.19.
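For context, the compressor figures quoted above (23:1 pressure ratio, 91% polytropic efficiency) imply a discharge temperature that can be estimated with the standard polytropic relation. The sketch below does this under simplifying assumptions (ideal gas, constant specific-heat ratio of 1.4, ISO inlet of 15 °C); it is an order-of-magnitude check, not an engine performance calculation.

# Estimated compressor discharge temperature from the polytropic relation:
# T2 = T1 * PR ** ((k - 1) / (k * eta_p)), assuming ideal gas and constant k.
t1_k = 288.15          # ISO inlet temperature, 15 degC expressed in kelvin
pressure_ratio = 23.0  # from the text
eta_polytropic = 0.91  # from the text
k = 1.4                # assumed specific-heat ratio for air

t2_k = t1_k * pressure_ratio ** ((k - 1) / (k * eta_polytropic))
print(f"Estimated discharge temperature: {t2_k:.0f} K ({t2_k - 273.15:.0f} degC)")

The result, on the order of 500 °C, is typical of high-pressure-ratio aeroderivative compressors and shows why cooled turbine sections and advanced materials are needed downstream.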

Figure 8. Compressor Trains at Darwin LNG Plant

Figure 9. Compressor Block Viewed from Gas Turbine Filter House End (Note Four Once-Through Steam Generators on Gas Turbines)

Figure 10. Installation of LM2500+ Gas Generator

Figure 11. LM2500 Engine Evolution (Source: GE Energy). Power output (MW/SHP) and thermal efficiency: LM2500/PGT25, 23/32,000, 38%; LM2500+/PGT25+, 31.3/42,000, 39%–41%; LM2500+G4/PGT25+G4, 34.3/46,000, 39%–41%; derived from the TF39/CF6-6 aeroengines (C-5 and DC-10 aircraft).



and distribution. This combustor configuration features individually replaceable fuel nozzles, a full-machined-ring liner for long life, and a yttrium-stabilized zirconium thermal barrier coating to improve hot corrosion resistance. The engine is equipped with water injection for NOx control.

High-pressure turbine (HPT): The PGT25+ HPT is a high-efficiency, air-cooled, two-stage design. The HPT section consists of the rotor and the first- and second-stage HPT nozzle assemblies. The HPT nozzles direct the hot gas from the combustor onto the turbine blades at the optimum angle and velocity. The high-pressure turbine extracts energy from the gas stream to drive the axial flow compressor to which it is mechanically coupled.

High-speed power turbine: The PGT25+ gas generator is aerodynamically coupled to a high-efficiency HSPT with a cantilever-supported two-stage rotor design. The power turbine is attached to the gas generator by a transition duct that also serves to direct the exhaust gases from the gas generator into the stage-one turbine nozzles. Output power is transmitted to the load by means of a coupling adapter on the aft end of the power turbine rotor shaft. The HSPT operates at a speed of 6,100 rpm with an operating speed range of 3,050 to 6,400 rpm. The high-speed two-stage power turbine can be operated over a cubic load curve for mechanical drive applications.

Engine-mounted accessory gearbox driven by a radial drive shaft: The PGT25+ has an engine-mounted accessory drive gearbox for starting the unit and supplying power for critical accessories. Power is extracted through a radial drive shaft at the forward end of the compressor. Drive pads are provided for accessories, including the lube and scavenge pump, starter, and variable-geometry control.

An overview of the engine, including the HSPT, is shown in Figure 12.

Maintenance
A critical factor in any LNG operation is the life-cycle cost, influenced in part by the maintenance cycle and engine availability. Aeroderivative engines have several features that facilitate on-condition maintenance, rather than strict time-based maintenance. Numerous borescope ports allow on-station, internal inspections to determine the condition of internal components, thereby increasing the interval between scheduled, periodic removal of engines. When the condition

(Source: GE Energy)

Figure 12. PGT25+ Gas Turbine


of the internal components of the affected module has deteriorated to such an extent that continued operation is not practical, the maintenance program calls for exchange of that module. The PGT25+ is designed to allow for rapid on-site exchange of major modules within the gas turbine. Component removal and replacement can be accomplished in less than 100 hours, and the complete gas generator unit can be replaced and be back online within 48 hours. The hot-section repair interval for the aeroderivative is 25,000 hours on natural gas; however, water injection for NOx control shortens this interval to between 16,000 hours and 20,000 hours, depending on the NOx target level.

Performance Deterioration and Recovery
Gas turbine performance deterioration is of great concern to any LNG operation (see [6, 7, and 8]). Total performance loss is attributable to a combination of recoverable (by washing) and non-recoverable (recoverable only by component replacement or repair) losses. Recoverable performance loss is caused by airborne contaminant fouling of airfoil surfaces. The magnitude of recoverable performance loss and the frequency of washing are determined by site environment and operational profile. Generally, compressor fouling is the predominant cause of this type of loss. Periodic washing of the gas turbine, using online and crank-soak wash procedures, will recover 98% to 100% of these losses. The objective of online washing is to increase the time interval between crank washes. The best approach is to couple online and offline washing. The cooldown time for an aeroderivative engine is much less than that for a heavy-duty frame machine due to the lower casing mass. Crank washes can therefore be done with less downtime.
8 The level of water injection is a function of the NO x target level.

134

Bechtel Technology Journal

Upgrades of the PGT25+
Another advantage of using aeroderivative engines for LNG service is that they can be uprated to newer variants, generally within the same space constraints, a useful feature for future debottlenecking. The Darwin LNG plant is implementing this upgrade. The LM2500+G4 is the newest member of GE's LM2500 family of aeroderivative engines. The engine, shown in Figure 13, retains the basic design of the LM2500+ but increases the power capability by approximately 10% without sacrificing hot-section life. The modification increases the engine's power capability by increasing the airflow, improving the materials, and increasing the internal cooling. The number of compressor and turbine stages and the majority of the airfoils and the combustor designs remain unchanged from the LM2500. Details on the LM2500+G4 can be found in [9]. The increased power of this variant compared to the base engine is shown in Figure 14.

Figure 14. Comparative Power Output of LM2500+G4 Variant (Source: GE Energy). Shaft power output, kW, vs. ambient temperature, °C, for the LM2500+G4 SAC, LM2500+ SAC, and LM2500 SAC engines.

Figure 13. Uprated LM2500+G4 Engine DLE Variant (Source: GE Energy)


POWER AUGMENTATION BY EVAPORATIVE COOLING

LNG production is highly dependent on the power capability of the gas turbine drivers of the propane, ethylene, and methane compressors. Industrial gas turbines lose approximately 0.7% of their power for every 1 °C rise in ambient temperature. This effect is more pronounced in aeroderivative gas turbines due to their higher specific work, for which the sensitivity can increase to much greater than 1% per °C. The impact of ambient temperature on the PGT25+ power and air flow is depicted in Figure 15. As aeroderivative machines are more sensitive to ambient temperature, they benefit significantly from inlet air cooling. Darwin LNG uses media-type evaporative coolers, another first for LNG refrigeration drivers. Details on



Figure 15. Variations in Power Output and Airflow Rate for PGT25+ Gas Turbine (power, kW, and air mass flow rate, kg/sec, vs. ambient temperature, °C)
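A simple way to see why inlet cooling matters is to apply the sensitivities quoted above (about 0.7% of power per °C for industrial gas turbines and more than 1% per °C for aeroderivatives) to a hot afternoon. The sketch below uses an assumed 1% per °C for the aeroderivative case and an assumed 10 °C of evaporative cooling; both are illustrative, linearized values rather than measured engine data.

# Linearized power derate with ambient temperature, relative to ISO (15 degC).
def relative_power(ambient_c, sensitivity_per_c, iso_c=15.0):
    return 1.0 - sensitivity_per_c * (ambient_c - iso_c)

ambient = 35.0            # hot-afternoon ambient, degC
cooled = ambient - 10.0   # assumed CIT after evaporative cooling

industrial = relative_power(ambient, 0.007)   # ~0.7%/degC (from the text)
aero_hot   = relative_power(ambient, 0.010)   # assumed ~1%/degC for an aeroderivative
aero_cool  = relative_power(cooled, 0.010)

print(f"Industrial GT at 35 degC:          {industrial:.0%} of ISO power")
print(f"Aeroderivative at 35 degC:         {aero_hot:.0%} of ISO power")
print(f"Aeroderivative cooled to 25 degC:  {aero_cool:.0%} of ISO power")
print(f"Power recovered by cooling:        {(aero_cool - aero_hot) * 100:.0f} percentage points")

The recovered output, on the order of 10 percentage points of ISO power in this linearized sketch, is broadly consistent with the approximately 8% to 10% boost reported later in this section for Darwin.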





Figure 16. Darwin Temperature Profile Based on Time of Day over 12-Month Period

media-based evaporative cooling can be found in Johnson. [10] Among the key advantages of power augmentation are that it leads to: Greater LNG production due to reduced gas turbine compressor inlet air temperature, increasing the air mass flow rate and power Improved thermal efficiency of the gas turbine, resulting in lower CO2 emissions

There is considerable evaporative cooling potential available in Darwin, especially during periods of high ambient temperature, because the relative humidity tends to drop as the temperature increases. The average daily temperature profile at Darwin is shown in Figure 16, and the relationship of relative humidity and dry bulb temperature for the month of September is shown in Figure 17. Details regarding the climatic analysis of evaporative cooling potential can be found in [11].



Figure 17. RH vs. DBT at Darwin Airport for the Month of September (Considerable Evaporative Cooling Potential is Available During Hot Daytime Hours)

9 Data is for Darwin Airport, from the typical meteorological year (TMY2) database.





Figure 18. Calculated CITs due to Evaporative Cooling During the Summer Month of January (DBT and CIT, °C, vs. hours per month; media evaporative efficiency = 90%; TMY2 database data)

Effectiveness = (T1DB - T2DB) / (T1DB - T2WB)

Where:


T1DB = entering-air dry bulb temperature
T2DB = leaving-air dry bulb temperature
T2WB = leaving-air wet bulb temperature

Effectiveness is the measure of how capable the evaporative cooler is in lowering the inlet-air dry bulb temperature to the coincident wet bulb temperature. Drift eliminators are used to protect the downstream inlet system components from water damage caused by carryover of large water droplets. The presence of a media-type evaporative cooler inherently creates a pressure drop, which reduces turbine output. For most gas turbines, a media thickness of 12 inches will result in a pressure drop of approximately 0.5 in. to 1 in. of water. Increases in inlet duct differential pressure will cause a reduction of compressor mass flow and engine operating pressure. The large inlet temperature drop derived from evaporative cooling more than compensates for the small drop in performance due to the additional pressure drop. Inlet temperature drops of approximately 10 °C have been achieved at Darwin LNG, which results in a power boost of approximately 8% to 10%. Figure 18 shows calculated compressor inlet temperatures (CITs) with the evaporative cooler for a typical summer month of January.
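Rearranging the effectiveness definition gives the achievable compressor inlet temperature directly: CIT = T1DB - effectiveness x (T1DB - TWB), taking the leaving-air wet bulb as approximately the ambient wet bulb. The sketch below applies this with the 90% effectiveness used for Figure 18; the dry bulb and wet bulb values are assumed for illustration only.

# CIT estimate from evaporative cooler effectiveness:
# CIT = T1DB - effectiveness * (T1DB - TWB), with TWB taken as the ambient wet bulb.
def cooled_inlet_temp(dry_bulb_c, wet_bulb_c, effectiveness=0.90):
    return dry_bulb_c - effectiveness * (dry_bulb_c - wet_bulb_c)

dry_bulb = 34.0   # assumed hot-afternoon dry bulb, degC
wet_bulb = 23.0   # assumed coincident wet bulb, degC

cit = cooled_inlet_temp(dry_bulb, wet_bulb)
print(f"Dry bulb {dry_bulb:.0f} degC, wet bulb {wet_bulb:.0f} degC "
      f"-> CIT {cit:.1f} degC (drop of {dry_bulb - cit:.1f} degC)")

A drop of roughly 10 °C under these assumed conditions matches the inlet temperature reductions reported for Darwin.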

FUTURE POTENTIAL OF AERODERIVATIVE ENGINES USING THE OPTIMIZED CASCADE PROCESS

Several factors must be considered in choosing an optimal train size, including:

Gas availability from the field Market demand and LNG growth profile (which would also dictate the buildup and timing between subsequent trains) Overall optimization of production, storage, and shipping logistics Operational flexibility, reliability, and maintenance of the refrigeration block (Flexibility is of extreme importance in todays operational market environment, which has seen some departure from longterm LNG supply contracts.) As the Optimized Cascade Process uses a two-train-in-one concept, in which two parallel compressor strings are used for each refrigeration service, the application of larger aeroderivative engines is an ideal fit. Using the Optimized Cascade Process, the loss of any refrigeration string does not shut down the train but only necessitates a reduction in plant feed, with overall LNG production remaining between 60% and 70% of full capacity10. The significant benefits of aeroderivative engines as opposed to large single-shaft gas turbines make large aeroderivative units an attractive proposition for high-efficiency, high-output LNG plants. Larger LNG plant sizes can be derived
10 Obtained by shifting refrigerant loads to the other drivers.




Table 1. Configuration/Size of LNG Plants Using Aeroderivative Engines


Aeroderivative Engine    Configuration (Propane/Ethylene/Methane)    Approximate Train Size, MTPA
6 x LM2500+              2/2/2                                       3.5
8 x LM2500+G4 DLE        3/3/2                                       5
6 x LM6000 DLE           2/2/2                                       5
9 x LM6000 DLE           3/3/3                                       7.5


by adding gas turbines, as shown in Table 1. While the output with one driver down in a 2+2+2 configuration is approximately 60% to 70%, the output would be even higher with a larger number of drivers. As split-shaft industrial gas turbines are not available in the power class of large aeroderivative gas turbines, the application of aeroderivative engines offers the significant advantage of not requiring costly and complex large starter motors and their associated power generation costs. For example, the LM6000 depicted in Figure 19 is a 44 MW driver11, with a thermal efficiency of 42%, operating at a pressure ratio of 30:1 and with an exhaust mass flow rate of 124 kg/sec. This engine is a two-spool gas turbine with the load driven by the low-speed spool, which is mounted inside the high-speed spool, enabling the two spools to turn at different speeds. The output speed of this machine is 3,400 rpm. The LM6000 gas turbine makes extensive use of variable geometry to achieve a large operating envelope. The variable geometry includes the variable inlet guide vanes, variable bypass valves, and the variable stator vanes in the engine

compressor with each system independently controlled. The gas turbine consists of five major components: a 5-stage low-pressure compressor, a 14-stage high-pressure compressor, an annular combustor, a 2-stage high-pressure turbine, and a 5-stage low-pressure turbine. The low-pressure turbine drives the low-pressure compressor and the load. The engine is available in both a water-injected and DLE configuration, with a DLE capability of 15 parts per million (ppm) NOx. The importance of high thermal efficiency and the details on the implementation and operating experience of aeroderivatives at Darwin LNG have been presented by Meher-Homji et al. [12]

CONCLUSIONS

In 1969, the ConocoPhillips-designed Kenai LNG plant in Alaska was the first LNG plant to use gas turbines as refrigeration drivers. This plant has operated without a single missed shipment. Another groundbreaking step was made 38 years later with the world's first successful application of high-efficiency aeroderivative gas turbines at the Darwin LNG plant. This efficient plant has shown how technology can be integrated into a reliable LNG process to minimize greenhouse gases and provide the high flexibility, availability, and efficiency of the Optimized Cascade Process. The plant, engineered and constructed by Bechtel, was started several months ahead of schedule and has exceeded its production targets. It has been successfully operated for close to 3 years and will shortly be upgraded by implementing PGT25+G4 engines as part of a debottlenecking effort. The new generation of highly efficient and high-power aeroderivative engines in the 40 MW to 50 MW range available today is ideally suited to the Optimized Cascade Process due to its two-trains-in-one concept. The ConocoPhillips-Bechtel LNG collaboration will offer the engine for future LNG projects. In the meantime, the ConocoPhillips-Bechtel LNG Product Development Center continues to design and develop new and highly efficient plant designs that can be used for 5.0–8.0 MTPA train sizes.

(Source: GE Energy)

Figure 19. LM6000 Gas Turbine

11 To compare the power/wt ratio, the LM6000 core engine weighs 7.2 tons compared to 67 tons for a 32 MW Frame 5D engine (core engine only).



TRADEMARKS

ConocoPhillips Optimized Cascade is a registered trademark of ConocoPhillips.

REFERENCES
[1] D. Yates and C. Schuppert, "The Darwin LNG Project," 14th International Conference and Exhibition on Liquefied Natural Gas (LNG 14), Doha, Qatar, March 21–24, 2004, <http://www.lng14.com.qa/lng14.nsf/attach/$file/PS6-1.ppt>.

[2] D. Yates and D. Lundeen, "The Darwin LNG Project," LNG Journal, 2005.

[3] D. Yates, "Thermal Efficiency Design, Lifecycle, and Environmental Considerations in LNG Plant Design," GASTECH, 2002, <http://lnglicensing.conocophillips.com/NR/rdonlyres/8467A499-F292-48F8-9745-1F7AC1C57CAB/0/thermal.pdf>.

[4] W. Ransbarger et al., "The Impact of Process and Equipment Selection on LNG Plant Efficiency," LNG Journal, April 2007.

[5] A.R. Wadia, D.P. Wolf, and F.G. Haaser, "Aerodynamic Design and Testing of an Axial Flow Compressor with Pressure Ratio of 23.3:1 for the LM2500+ Engine," ASME Transactions, Journal of Turbomachinery, Vol. 124, Issue 3, July 2002, pp. 331–340, access via <http://scitation.aip.org/getabs/servlet/GetabsServlet?prog=normal&id=JOTUEI000124000003000331000001&idtype=cvips&gifs=yes>.

[6] C.B. Meher-Homji, M. Chaker, and H. Motiwalla, "Gas Turbine Performance Deterioration," Proceedings of the 30th Turbomachinery Symposium, Houston, Texas, September 17–20, 2001.

[7] C.B. Meher-Homji and A. Bromley, "Gas Turbine Axial Compressor Fouling and Washing," Proceedings of the 33rd Turbomachinery Symposium, Houston, Texas, September 20–23, 2004, pp. 163–192, <http://turbolab.tamu.edu/pubs/Turbo33/T33pg163.pdf>.

[8] G.H. Badeer, "GE Aeroderivative Gas Turbines Design and Operating Features," GE Power Systems Reference Document GER-3695E, October 2000, <http://gepower.com/prod_serv/products/tech_docs/en/downloads/ger3695e.pdf>.

[9] G.H. Badeer, "GE's LM2500+G4 Aeroderivative Gas Turbine for Marine and Industrial Applications," GE Energy Reference Document GER-4250, September 2005, <http://gepower.com/prod_serv/products/tech_docs/en/downloads/ger4250.pdf>.

[10] R.S. Johnson, "The Theory and Operation of Evaporative Coolers for Industrial Gas Turbine Installations," ASME International Gas Turbine and Aeroengine Congress and Exposition, Amsterdam, Netherlands, June 5–9, 1988, Paper No. 88-GT-41, <http://www.muellerenvironmental.com/documents/100-020-88-GT-41.pdf>.

[11] M. Chaker and C.B. Meher-Homji, "Inlet Fogging of Gas Turbine Engines: Climatic Analysis of Gas Turbine Evaporative Cooling Potential of International Locations," Journal of Engineering for Gas Turbines and Power, Vol. 128, No. 4, October 2006, pp. 815–825, see <http://scitation.aip.org/getabs/servlet/GetabsServlet?prog=normal&id=JETPEZ000128000004000815000001&idtype=cvips&gifs=yes> (see also Proceedings of ASME Turbo Expo 2002, Amsterdam, the Netherlands, June 3–6, 2002, Paper 2002-GT-30559, <http://www.meefog.com/downloads/30559_International_Cooling.pdf>).

[12] C.B. Meher-Homji, D. Messersmith, T. Hattenbach, J. Rockwell, H. Weyermann, and K. Masani, "Aeroderivative Gas Turbines for LNG Liquefaction Plants, Part 1: The Importance of Thermal Efficiency, and Part 2: World's First Application and Operating Experience," Proceedings of ASME International Gas Turbine and Aeroengine Conference, Turbo Expo 2008, Paper Nos. GT2008-50839 and GT2008-50840, Berlin, Germany, June 9–13, 2008 (see <http://www.asmeconferences.org/TE08//pdfs/TE08_FinalProgram.pdf>, p. 86).

A modification of the original version of this paper was presented at LNG 15, held April 2427, 2007, in Barcelona, Spain.

BIOGRAPHIES
Cyrus B. Meher-Homji is a Bechtel Fellow and senior principal engineer assigned to the Houston, Texas-based Bechtel-ConocoPhillips LNG Product Development Center as a turbomachinery specialist. His 29 years of industry experience covers gas turbine and compressor design, engine development, and troubleshooting. Cyrus works on the selection, testing, and application of gas turbines and compressors for LNG plants. His areas of interest are turbine and compressor aerothermal analysis, gas turbine deterioration, and condition monitoring. Cyrus is a Fellow of ASME and past chair of the Industrial & Cogeneration Committee of ASMEs International Gas Turbine Institute. He also is a life member of the American Institute of Aeronautics and Astronautics (AIAA) and is on the Advisory Committee of the Turbomachinery Symposium. Cyrus has more than 80 publications in the area of turbomachinery engineering. Cyrus has an MBA from the University of Houston, Texas, an ME from Texas A&M University, College Station, and a BS in Mechanical Engineering from Shivaji University, Maharashtra, India. He is a registered professional engineer in the state of Texas. Tim Hattenbach has worked in the oil and gas industry for 36 years, 30 of which have been with Bechtel. He is the team leader of the Compressor group in Bechtels Houston office and has worked on many LNG projects and proposals as well as a variety of gas plant and refinery projects.



Tim is Bechtels voting representative on the American Petroleum Institute (API) Subcommittee on Mechanical Equipment and is a member of its Steering Committee. He is Taskforce Chairman of two API standards (API 616 Gas Turbines and API 670 Machinery Protection Systems). Tim has an MS and a BS in Mechanical Engineering from the University of Houston, Texas. Dave Messersmith is deputy manager of the LNG and Gas Center of Excellence, responsible for LNG Technology Group and Services, for Bechtels Oil, Gas & Chemicals Global Business Unit, located in Houston, Texas. He has served in various lead roles on LNG projects for 14 of the past 17 years, including work on the Atlantic LNG project conceptual design through startup as well as many other LNG studies, FEED studies, and projects. Daves experience includes various LNG and ethylene assignments over 17 years with Bechtel and, previously, 10 years with M.W. Kellogg, Inc. Dave holds a BS degree in Chemical Engineering from Carnegie Mellon University, Pittsburgh, Pennsylvania, and is a registered professional engineer in the state of Texas. Hans P. Weyermann is a principal rotating equipment engineer in the Drilling and Production Department of the ConocoPhillips Upstream Technology Group. He supports all aspects of turbomachinery for business units and grassroots capital projects and is also responsible for overseeing corporate rotating machinery technology development initiatives within the ConocoPhillips Upstream Technology Group. Before joining ConocoPhillips, Hans was the supervisor of rotating equipment at Stone & Webster, Inc., in Houston, Texas. Earlier, he was an application/design engineer in the Turbo Compressor Department at Sulzer Escher Wyss Turbomachinery in Zurich, Switzerland. Hans is a member of ASME, the Texas A&M University Turbomachinery Advisory Committee, and the API SOME, and, in addition, serves on several API task forces. Hans has a BS degree in Mechanical Engineering from the College of Engineering in Brugg-Windisch, Switzerland.

Karl Masani is a director for LNG Licensing & Technology in the Global Gas division of ConocoPhillips, where he is responsible for LNG project business development and project supervision. Previously, he held various managerial positions at General Electric Company, Duke Energy Corporation, and Enron Corporation. Karl holds an MBA in Finance from Rice University, Houston, Texas, and a BS degree in Aerospace Engineering from the University of Texas at Austin. Satish Gandhi is LNG Product Development Center (PDC) director and manages the center for the ConocoPhillipsBechtel Corporation LNG Collaboration. He is responsible for establishing the work direction for the PDC to implement strategies and priorities set by the LNG Collaboration Advisory Group. Dr. Gandhi has more than 34 years of experience in technical computing and process design, as well as troubleshooting of process plants in general and LNG plants in particular. He was previously process director in the Process Technology & Engineering Department at Fluor Daniel with responsibilities for using state-of-the-art simulation software for the process design of gas processing, CNG, LNG, and refinery facilities. He also was manager of the dynamic simulation group at M.W. Kellogg, Ltd., responsible for technology development and management and implementation of dynamic simulation projects in support of LNG and other process engineering disciplines. Dr. Gandhi received a PhD from the University of Houston, Texas; an MS from the Indian Institute of Technology, Kanpur, India; and a BS from Laxminarayan Institute of Technology, Nagpur, India, all in Chemical Engineering.


INNOVATION, SAFETY, AND RISK MITIGATION VIA SIMULATION TECHNOLOGIES


Issue Date: December 2008

Abstract: Developments in hardware and software have made simulation technologies available to the design engineer. Engineering companies are adapting the latest computer-aided design (CAD) tools, such as SmartPlant 3D, to improve work processes and the final design. However, many companies still rely on traditional design methods, especially in mitigating the safety and operational risks associated with a project's design. Bechtel is an industry leader in successfully applying advanced simulation technologies, such as computational fluid dynamics (CFD), finite element analysis (FEA), and life-cycle dynamic simulation, to identify and mitigate safety and operational risks associated with plant design and to improve the design over the project's life cycle. Bechtel is leveraging the concept of life-cycle dynamic simulation to develop advanced applications, such as operator training simulators (OTS) and advanced process control (APC), to enhance plant operations for clients. This paper presents case studies on the use of such simulation technologies and applications to improve the design and operation of liquefied natural gas (LNG) plants.

Keywords: APC, CFD, dynamic process modeling, FEA, life-cycle dynamic simulation, LNG, OTS, simulation-based design, simulation technologies, simulator

INTRODUCTION

Over the past 15 years, rapid developments in computer technology have led to significant advancement in simulation technologies. Published papers provide numerous examples of the application of simulation tools to enhance design and operations. For example, simulation tools have been used to enhance plant design [1], maximize plant profits through real-time process optimization [2], and analyze complex operational problems involving scheduling, sequencing, and material handling [3]. Simulation tools are especially useful for visualizing the development of a plant during design.

to simulate a plant's design and to analyze it for safety and operational risks. When performed early in the design phase, such an analysis allows design engineers to identify and avert potential problems that could occur during plant startup, when they would be expensive to resolve. The case studies presented in this paper describe how the Bechtel Oil, Gas & Chemicals Global Business Unit (OG&C GBU) is applying three key simulation technologies, namely computational fluid dynamics (CFD), finite element analysis (FEA), and life-cycle dynamic simulation, to mitigate safety and operational risks during the design and commissioning of ConocoPhillips Optimized Cascade Liquefied Natural Gas (LNG) Process plants. This paper also describes how OG&C is innovatively leveraging the applications of these technologies to provide added value to global clients.

CFD, FEA, and Life-cycle Dynamic Simulation

CFD is a technique used for modeling the dynamics of fluid flows in processes and systems, such as those in an LNG plant. CFD enables design engineers to build a virtual model of a system or process so they can simulate what will happen when fluids and gases flow and

Ramachandra Tekumalla
rptekuma@bechtel.com

Jaleel Valappil, PhD


jvalappi@bechtel.com

Bechtel routinely uses computer-aided design (CAD) tools such as SmartPlant 3D to visualize how a physical plant design (e.g., piping, plot layout) will develop throughout the engineering phase. In fact, 3D model reviews are now considered necessary milestones during design. Evaluating a plant's physical structure is much easier than identifying and analyzing the safety and operational risks associated with a plant's design. The ability to conduct such an analysis once depended on a design engineer's experience. However, today's simulation tools allow design engineers at all experience levels


ABBREVIATIONS, ACRONYMS, AND TERMS


AISC      American Institute of Steel Construction
APC       advanced process control
ASCC      Australian Safety and Compensation Council
ASME      American Society of Mechanical Engineers
CAD       computer-aided design
CFD       computational fluid dynamics
DCS       distributed control system
EPC       engineering, procurement, and construction
FAT       factory acceptance test
FEA       finite element analysis
FEED      front-end engineering design
FSI       fluid-solid interaction
HYSYS     AspenTech computer software program
LNG       liquefied natural gas
NGL       natural gas liquid
OG&C GBU  [Bechtel] Oil, Gas & Chemicals Global Business Unit
OPT       optimization
OTS       operator training simulator
PSV       pressure safety valve
RT        real-time
SIS       safety instrumented system

CFD and FEA can be used to determine how flow phenomena, heat and mass transfer, and various stresses affect process equipment. Studying these effects in a virtual environment enables engineers to identify and mitigate safety and operational risks associated with a design and to design plant equipment that can operate under a wide range of conditions. Life-cycle dynamic simulation allows design engineers to build a dynamic model of an LNG plant that can evolve over the project's life cycle. This dynamic model can then be used for a variety of purposes, such as evaluating safety and operating procedures, supporting startup, and training plant personnel.

The ConocoPhillips Optimized Cascade LNG Process

Over the past decade, Bechtel has built eight ConocoPhillips Optimized Cascade LNG trains of varying capacities. These projects were based on a lump-sum, turnkey approach whereby Bechtel, with assistance from ConocoPhillips, was responsible for commissioning, startup, and operation of the plants until the performance requirements were met. The case studies presented in this paper apply to the ConocoPhillips Optimized Cascade LNG Process. A brief description of the process follows. As shown in the schematic in Figure 1, feed gas is first processed in the feed-treatment section of the plant. Diglycolamine, or a similar solvent, is typically used for the gas sweetening process to remove H2S and CO2. Next, treated gas is fed to the molecular sieve dehydrators, where water vapor is removed, and processed through activated carbon beds to remove any remaining mercury. The treated gas is then fed to the liquefaction unit, where it is cooled in stages and condensed prior to entering the LNG tanks. The liquefaction process includes three refrigeration circuits consisting of predominantly pure component refrigerants: propane, ethylene, and methane. Although not required, each refrigeration circuit typically uses parallel compressors (up to two or three per refrigerant service) combined with common process equipment. The feed gas passes through multiple stages of chilling in the propane, ethylene, and open-loop methane circuits. Each successive stage is at a lower temperature and pressure. The resulting LNG product is pumped to storage, where it is stored at near-atmospheric pressure and -161 °C.

interact with the complex surfaces used in engineering. CFD software performs the millions of calculations required to simulate fluid flows and produces the data and images necessary to predict the performance of a system or process design. This technique makes it easier for engineers to identify, analyze, and solve design-related problems that involve fluid flows. FEA is a technique for simulating and evaluating the strength and behavior of components, equipment, and structures under various loading conditions, such as applied forces, pressures, and temperatures. FEA enables design engineers to produce detailed visualizations of where structures bend or twist, to pinpoint the distribution of stresses and displacements, and to minimize weights, materials, and costs. Bechtel engineers also use FEA techniques to analyze scenarios related to design optimization and code compliance.
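To illustrate the mechanics behind an FEA calculation of this kind, the following minimal sketch (in Python, using NumPy) assembles and solves a two-element, one-dimensional bar model. The material properties, geometry, and load are illustrative assumptions, not values from any Bechtel project.

# Minimal illustration of the FEA idea described above: a 1-D steel bar,
# modeled with two axial elements, fixed at one end and pulled at the other.
# All numbers are illustrative assumptions, not case-study data.
import numpy as np

E = 200e9          # Young's modulus of steel, Pa (assumed)
A = 1e-3           # cross-sectional area, m^2 (assumed)
L = 0.5            # length of each element, m (assumed)
F = 50e3           # axial load applied at the free end, N (assumed)

k = E * A / L                       # axial stiffness of one element
K = np.zeros((3, 3))                # global stiffness matrix for 3 nodes
for e in (0, 1):                    # assemble the two element matrices
    ke = k * np.array([[1, -1], [-1, 1]])
    K[e:e+2, e:e+2] += ke

f = np.array([0.0, 0.0, F])         # nodal load vector
u = np.zeros(3)                     # node 0 is the fixed support
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])   # solve only for the free nodes

stress = E * np.diff(u) / L         # element stresses from element strains
print("nodal displacements [m]:", u)
print("element stresses [Pa]:", stress)

Commercial FEA packages follow the same assemble-and-solve pattern, but on millions of three-dimensional elements and for thermal, pressure, and dynamic load cases.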

CASE STUDIES IN THE USE OF CFD AND FEA

The following case studies describe how Bechtel's design engineers used CFD and FEA to identify and mitigate safety and operational risks associated with the design of LNG plant equipment.

Case Study: Design of a Wet Ground Flare System

A gas flare is typically an elevated vertical stack or chimney found on oil wells or oil rigs, and in landfills, refineries, and chemical plants. Gas flares are used to burn off unwanted gas or flammable gas and liquids released by pressure relief valves during an unplanned over-pressuring of plant equipment. On one of Bechtel's LNG plant construction projects, the local airport was situated close to the plant site, and the aircraft landing path was over the LNG plant. To ensure that landing aircraft would not be affected by the plant's ground flare system, local authorities required Bechtel to ensure that the system's design minimized the heat impact to the atmosphere. Bechtel was required to use a wet ground flare for material flows that were warmer than 29.5 °C.

The Problem

To comply with the Australian Safety and Compensation Council (ASCC) National Standard for Control of Major Hazard Facilities, the design of the plant's wet ground flare system required an assessment of the impact of heat release by radiation on the surrounding area during an excess propane release, as well

as an assessment of the effects of continuous exhaust from the gas turbines. The surroundings included buildings, terrain, and vegetation. The assessment required Bechtel engineers to study individual flames from each burner, with nozzles on the order of 1 mm, and the effect of 180 burners on a large area and surrounding terrain, with length scales of several meters. This required managing the complexity of varying length scales. [4] Calculations spanned 750 m in the lengthwise direction of the flare position on the ground, 700 m in width, and 500 m in height.

The Solution

The complexity of the assessment would have required massive parallel computing efforts if engineers were to handle the above calculations in their entirety. Instead, Bechtel used CFD to implement the following more manageable three-phase approach to conduct the assessment:

- Phase 1: The combustion model used in a single flare study was validated.
- Phase 2: The interaction of two to three neighboring burners was studied for velocity, temperature, and composition.
- Phase 3: The interaction between neighboring burners and the overall flare was extended, and the propagation of the plume was studied.

This approach enabled Bechtel to break down the complex problem into smaller-scale

Bechtel applies advanced simulation technologies to visualize design and develop operational applications for clients.

(Reprinted with permission from ConocoPhillips Company)

Figure 1. Schematic of ConocoPhillips Optimized Cascade LNG Process


Figure 2. Draft of Flare Region Showing Wet and Dry Flare Arrays and Corresponding Computational Grid (domain dimensions are 750 m x 700 m x 500 m)

sub-problems, which worked within the limitations of the CFD technology. Each phase provided the information required for the succeeding phase. As a result, the assessment ensured that the design of the wet ground flare system met the requirements of the ASCC National Standard for the Control of Major Hazard Facilities. During the assessment, CFD was used to study:

- Flux on the incoming pipe rack to the flare
- Ground-level flux at buildings near the flare
- The probability of a fire, and the resulting damage to vegetation/nearby mangroves
- The effects of the flare event on personnel and buildings

Figures 2, 3, and 4 were generated from the CFD model. Figure 2 shows the flare region with the wet and dry flare arrays and the corresponding computational grid. The temperature contours and surface-incident radiation on the full terrain

are shown in Figure 3. The temperature contours on the ground below the burner risers are shown in Figure 4.

Case Study: Mitigation of Temperature-Driven Bending Stress and the Failure of a Multiphase Flare Line under Cryogenic Conditions

Bechtel engineers used CFD and FEA to perform a flow and stress analysis of a flare line. This case study describes how, when applied together, these simulation technologies led to an understanding of the thermal stresses induced by the Joule-Thomson effect within a flare line. This understanding led to the design of effective mitigation measures.

The Problem

Flare lines in an LNG plant carry a mixture of methane, propane, and ethylene. Several laterals bring these constituents to the flare header, and the constituents flow out to burn during a flaring event. Pressure safety valves and

Figure 3. Temperature Contours and Surface-Incident Radiation on Full Terrain

Figure 4. Temperature Contours on Ground Below Burner Risers

CFD was used to assess the impact of wet-flare design on the terrain and the atmosphere. This assessment was required to meet regulator standards.

depressurizing valves are located in the laterals. As a result of Joule-Thomson effects that occur when the valves are operating, liquid can form and flow into the flare header. This occurrence leads to large temperature differentials and bending that causes cracks in the welded joints at the support shoes and tees. This case study [5] involves a flare header system with the following:

- 24- to 36-in. main header to flare stack
- Laterals that enter perpendicular to the main header, with sizes up to 24 inches
- Shoe spacing 6 m apart
- Pipe supports with stitch-welded repads
- A slope of 5 mm every 6 m
- Saddle-type shoes resting on an I-beam

The Solution

Bechtel engineers undertook a two-phase approach to solve the design problem. In the first phase, CFD was used to predict:

- The liquid accumulation upstream of the flare header
- Temperature differentials across the cross section of the flare header due to liquid accumulation
- The occurrence of any high-temperature differentials during the use of 45-degree laterals

In the second phase, Bechtel engineers applied predictions from the CFD analysis to an FEA analysis to determine if upward bowing occurred in the pipe and to assess any subsequent stress impacts. They also studied various mitigation

CFD allows engineers to visualize how fluid distributes with the original design. This is the basis for understanding temperature contours in the flare line.

Figure 5. Geometry and Grid of Supports, Laterals, and Pipe Connections for CFD and FEA Analyses

Figure 6. Volume Fraction and As-Designed Temperature Contours (maximum temperature difference between top and bottom: 170 K; liquid fraction shown after 20 seconds; cross-sectional temperature contours and inner liquid levels shown in slices taken at 1 m intervals)


A judicious application of fluid-solid interaction (CFD and FEA) helps in understanding and mitigating hydraulic and thermal stresses in flare lines.

Figure 7. Stress Contour Predictions Generated by FEA Analysis (maximum displacement in the maximum-liquid case of about 3 in., with four shoes lifted; maximum vertical bending displacement below 1/4 in. when 45-degree laterals are used; bending stresses of about 50,000 psi, stresses at the supporting shoes of about 60,000 psi, and stresses of about 25,000 psi at tee joints and temperature gradient locations; padding is required at tees since stresses can be high at tee welds)

Figure 8. Recommendations for Stress Reduction (reinforcing the tees significantly reduces the stresses)

measures to ensure that the pipe met American Society of Mechanical Engineers (ASME) 31.3 code requirements. Figure 5 shows the geometry and grid of the supports, laterals, and pipe connections used for the CFD and FEA analyses. Through the CFD analysis, Bechtel engineers determined that when the flow entered the main header through the 90-degree lateral, liquid accumulated on the upstream side even though there was a slope. This cold liquid stagnation could be as high as 15 m downstream, causing a significant temperature differential along the cross section. The 45-degree lateral, however, did not result in a similar effect, and was therefore determined to be better than a 90-degree lateral. The stresses, as designed, were determined to be 61,000 psi at the pipe-to-shoe weld. The allowable stresses were 36,000 psi for structural steel support and 60,000 psi for the piping itself, based on American Institute of Steel Construction (AISC) and ASME 31.3 standards.
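A rough order-of-magnitude check on such temperature-driven stresses can be made with the classic restrained-gradient estimate, stress approximately equal to E x alpha x deltaT / 2. The short sketch below, using assumed carbon steel properties and the 170 K top-to-bottom differential noted in Figure 6, gives a value in the same range as the tee-joint stresses shown in Figure 7; it is only a screening estimate and does not replace the detailed FEA.

# Back-of-envelope check (not the detailed FEA): bending stress induced when
# the top and bottom of a restrained pipe sit at different temperatures.
# Material properties are assumed; the 170 K differential comes from Figure 6.
E = 200e9        # Young's modulus of carbon steel, Pa (assumed)
alpha = 11.7e-6  # thermal expansion coefficient, 1/K (assumed)
dT = 170.0       # top-to-bottom temperature differential, K

sigma = E * alpha * dT / 2.0   # restrained thermal-gradient bending stress, Pa
print(f"approximate stress: {sigma/1e6:.0f} MPa ({sigma/6894.76:.0f} psi)")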

Further analysis of the mitigation measures determined that continuously stitched support pads with an overhang of 100 mm would bring the stresses within allowable limits. Figure 6 shows the volume fraction and as-designed temperature contours. Figure 7 shows the stress contour predictions generated by the FEA analysis. Finally, Figure 8 presents the recommendations for stress reduction.

LIFE-CYCLE DYNAMIC SIMULATION

Bechtel uses life-cycle dynamic simulation to improve the design and operability of LNG plants. As shown in Figure 9, this technology enables design engineers to produce a dynamic model of a plant that evolves with the various stages of a project's life cycle. This dynamic model can also be tailored to various applications throughout the project's life cycle. Additionally, design engineers can use the dynamic model to perform engineering,


Bechtel uses life-cycle dynamic modeling based on validated modeling approaches to manage design risk at various stages in a project.

Figure 9. Life-Cycle Dynamic Simulation Approach (plant evolution from FEED through EPC, startup, and operations; corresponding dynamic model uses include process and control studies, anti-surge and relief valve studies, DCS checkout and FAT, startup support, operator training, and APC/real-time optimization)

process, and control system studies. For example, a dynamic model is typically used in engineering studies associated with the refrigeration compressor system design and critical-valve sizing. Dynamic simulation can be used during the control system design to ensure that a plant has sufficient design margins to handle any disturbances. It can also be used to assess a plants optimal operation in the face of changing conditions, such as ambient temperature, and to evaluate a plants distributed control system prior to commissioning to reduce the time required for onsite commissioning during startup. Life-cycle dynamic simulation enables engineers to consider all aspects of the process and control system design during the early stages of a plants design, helping eliminate or reduce the need for costly rework later. In addition, dynamic simulation can be used to: Study the effects that extensive heat integration (typical of LNG plants) will have on plant stability and startup Analyze the effects of external factors such as ambient conditions and compositional changes on a plants future operation Improve a plants day-to-day operations Descriptions of LNG process dynamic modeling and the LNG plantwide dynamic model validation follow. A number of case studies are also presented. The first case study describes how dynamic simulation was used early in the design stage to ensure a plants operability. The remaining case studies highlight the benefits of using dynamic simulation to perform both engineering and control studies.

Information on extending the dynamic model to an operator training simulator (OTS) is also provided at the end of this section.

LNG Process Dynamic Modeling

LNG process dynamic modeling uses a plantwide dynamic simulation model that is fundamental and based on rigorous thermodynamics. The model's level of detail and fidelity depend on the application it is used for and the information available for modeling. This discussion assumes a model with a level of detail required to perform engineering studies. Unlike steady-state simulation, dynamic simulation requires an accurate pressure profile, equipment volumes, and other specific information. Two main components in the LNG process are heat exchangers and turbine-driven refrigeration system compressors. The compressors are modeled with design performance curves. A transfer function-based model is used for the gas turbine. The speed governor dynamics, turbine dynamics, and turbine power output are computed based on a model derived from vendor-provided data. A balance between power supply and demand is used to compute turbine speed. The anti-surge control and performance control systems from Compressor Controls Corporation are modeled in the simulation environment. Another key piece of equipment in the LNG process is the brazed aluminum heat exchanger. Application of the pressure profile to the dynamic simulation model (i.e., calculations of conductance coefficients for piping and equipment) is based on the isometric information, whenever available.
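The power-balance speed calculation mentioned above can be pictured with a minimal lumped model: shaft acceleration follows from the difference between turbine power and the power absorbed by the compressor. The sketch below is a simplified illustration only; the inertia, power level, and cube-law load are assumptions rather than vendor data.

# Minimal sketch of the speed calculation described above: shaft speed is
# integrated from the imbalance between turbine power and compressor power.
# J, the power level, and the cube-law load are illustrative assumptions.
import math

J = 8000.0                            # polar inertia of the string, kg*m^2 (assumed)
OMEGA_REF = 3600 * 2 * math.pi / 60   # reference shaft speed, rad/s (assumed 3,600 rpm)
P_TURBINE = 30.0e6                    # turbine power, W (held constant for simplicity)
dt = 0.05                             # integration step, s

def compressor_power(omega):
    # Assumed cube-law load, absorbing 30 MW at the reference speed.
    return 30.0e6 * (omega / OMEGA_REF) ** 3

omega = 0.95 * OMEGA_REF              # start slightly below reference speed
for _ in range(600):                  # simulate 30 s
    # Power balance: J * omega * d(omega)/dt = P_turbine - P_compressor
    omega += (P_TURBINE - compressor_power(omega)) / (J * omega) * dt

print(f"shaft speed after 30 s: {omega * 60 / (2 * math.pi):.0f} rpm")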

Validation of the LNG Plantwide Dynamic Model Validating the performance of the dynamic model against field data was critical for establishing confidence in the model. The predictions generated by the dynamic model under transient conditions were checked against appropriate data from an operating LNG plant. The dynamic model was also based on design information from the LNG plants engineering, procurement, and construction (EPC) stage. No model fitting with the plant data was done prior to the validation. The model validation was performed using plant data that represented a major change in plant feed rate and, therefore, LNG production. The response of the model variables was compared with the plant variables. Actual plant data for certain variables, such as plant feed rate, ambient temperature, and compressor speeds, was externally specified at 1-minute intervals to drive the model. The model generated predictions about other process variables, which were then analyzed. The information presented in Figure 10 shows that the model effectively captured the LNG plants behavior. The plant feed rate during the validation period went to about 60 percent of the normal operating rate and then returned to a rate close to the original plant feed rate. The

scaled values of the feed are shown in the upperleft-hand box. During the validation period, the ambient temperature ramped up, and the refrigeration compressors slowed down because of the reduced load. This led to a temporary decrease in the compressor discharge pressure, as shown in the upper-right-hand box. The antisurge valves for the compressors also opened up to protect the machines from surge, as shown in the lower-left-hand box. A comparison between the output from a plant level controller and the output from the model level controller is shown in the lower-right-hand box. Case Study: Using Dynamic Simulation to Improve Plant Operability During the Front-End Engineering Design Stage This case study describes how dynamic simulation was used to verify the operability and controllability of a proposed large-capacity LNG plant. [6] It highlights the value of using life-cycle dynamic simulation during the front-end engineering design (FEED) stage. The LNG industry has experienced rapid growth in recent years, and larger plants are being built around the world. Bechtel and ConocoPhillips have collaborated in designing and building several LNG plants with a capacity in the range of 3 million tons per annum and greater.

Dynamic simulation is used to ensure LNG plant operability in the FEED stage.

Figure 10. Comparison of Model Predictions (Blue) with Plant Data (Green) (panels: plant feed rate, propane discharge pressure, ethylene recycle, and exchanger level control valve output versus time in minutes)
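A simplified sketch of the comparison behind Figure 10 is shown below: plant trends logged at 1-minute intervals and the corresponding model predictions are scaled by their initial values and compared. The file names are placeholders, not actual project data sources.

# Hedged sketch of the kind of comparison shown in Figure 10. Plant
# measurements and model predictions are scaled by their starting values
# and the deviations summarized. File names are placeholders only.
import numpy as np

plant = np.loadtxt("plant_trend.csv", delimiter=",")   # placeholder data file
model = np.loadtxt("model_trend.csv", delimiter=",")   # placeholder data file

plant_scaled = plant / plant[0]        # scale each trend by its initial value
model_scaled = model / model[0]

dev = np.abs(model_scaled - plant_scaled)
print(f"mean deviation: {dev.mean():.3f}, max deviation: {dev.max():.3f}")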


The increasing number and size of LNG plants make the use of dynamic simulation to prevent operability problems after plant commissioning more important than ever. The Problem The process design for the proposed largecapacity LNG plant was different from that of existing LNG plants based on the ConocoPhillips Optimized Cascade LNG Process. First, a single driver was used for both the propane and ethylene refrigeration compressors. In addition, gas turbines with a limited speed range (95 to 101 percent) were used as drivers for the refrigeration compressors. Subsequently, the control schemes in the refrigeration systems were modified to suit the design. A helper motor was used with the gas turbine to provide additional power. These factors raised concerns about the plants potential for safe and reliable startup and operations. As a result, the design required verification. The Solution Dynamic simulation was used to address concerns about the design during the FEED stage of the project and to ensure the plants overall operability. The specific objectives of the simulation were to: Verify startup of the gas turbine drivers, which included determining the correct starting pressures and the adequacy of starter motor sizes for the turbines Develop a procedure for pressurizing and loading the refrigeration systems, including a procedure for loading and balancing the two parallel refrigeration systems Study the effect of the gas turbine trip on the parallel turbine and refrigeration units, and determine the configuration and size of the anti-surge valves required to prevent compressor surge for the turbine that tripped Study the performance of the control scheme of the frame-type gas turbines and identify any improvements Further details about the simulation follow. Trip of Gas Turbines A major advantage of the ConocoPhillips Optimized Cascade LNG Process is that it can maintain plant production at half rate or more with the trip of one gas turbine driver. The applicability of this advantage to the largecapacity train design required verification. The trip of a single-shaft gas turbine is a matter of concern because it causes the parallel turbine

LNG plant startup procedures are verified and optimized using dynamic simulation.

to get overloaded and trip. Reducing the feed rate does not prevent this occurrence because the piping and equipment volumes cause a time lag in the system. Therefore, special procedures such as throttling the suction valves were necessary to prevent the overload trip of the parallel turbine. Such procedures momentarily reduced the load on the running turbine, keeping it online. Another method for preventing turbine trip was to use the novel anti-bog-down gas turbine control scheme. This scheme detects impending overload trip and throttles back all suction valves to ensure that the second running gas turbine stays online. Dynamic simulation was used to study the impact of the gas turbine trip on other process equipment. The simulation showed that the process equipment stabilized normally after the trip, and the intermediate variation in pressures and other variables was reasonable. Figure 11 shows the speed and power responses for the two GE Frame 7 gas turbines after the trip of one of the turbines. As the figure shows, the speed of the tripped gas turbine (solid blue line) coasts down to zero. The gas turbine that remains online (solid red line) initially slows down and then recovers and stabilizes at normal speed. The power of the tripped gas turbine (dotted red line) instantaneously goes to zero. The power of the operating turbine (dotted green line) remains close to the normal operating value. Plant Startup Startup procedures for previous trains were not directly applicable to this LNG plant because of its unique process design, so dynamic simulation was used to develop and verify new startup procedures for the plant. Specifically, engineers used the simulation to: Verify that the sizes of the starter motors were sufficient to bring the turbine-compressor string to full speed

Figure 11. Speed and Power Responses for Two Gas Turbine Drivers During Trip (scaled speed and power versus time in minutes)


Determine the correct starting pressures for the refrigeration loops for the selected starter motor sizes Identify the various process conditions and operating points required for the compressor during startup to ensure that the compressor stages do not go into surge or stonewall during startup Define the procedure for loading and balancing the parallel trains so that if the second train is running, the other train can be used to share the load equally Estimate the time required for starting each individual train and the total plant startup time (This data is important for the reliability, availability, and maintainability analysis.) Using dynamic simulation, the following three phases were identified for plant startup: Phase 1Turbine startup involves bringing the compression trains to the point of stable recycle at a minimum of 95 percent speed. The refrigeration systems must be depressurized to the correct pressures before turbine startup can occur. The motor size required for proper startup and acceleration was verified using dynamic simulation. The starting pressure is critical for ensuring proper acceleration of the gas turbine and for preventing it from becoming bogged down during startup. Phase 2Pressurization prepares the compressor system for loading by bringing the pressure to the right values. Phase 3Loading begins with the introduction of refrigerant vapor from the process systems, and proceeds to fully integrated and normal parallel train operation. The procedures for pressurization and loading were verified and refined using the dynamic simulation model. In the above case study, the objective of dynamic simulation was to verify operability and controllability of the proposed LNG plant during the early FEED stage. The simulation enabled design engineers to address concerns about the plants design, including verifying the plants response to compressor trips and determining the required startup procedures, compressor starter motor sizes, and starting pressures. Case Study: The Application of Dynamic Simulation for Detailed Engineering Studies The dynamic simulation model used for engineering studies is typically based on

information from the EPC stage of a project. To be effective, this model must also include sufficient detail about the issue under study. A dynamic simulation model is run offline, and therefore does not run in real time. However, the simulation time and model complexity must still be within reasonable limits. The size of the model depends on the scope of the study and the process units involved. A lumped parameter modeling approach is normally used for plantwide simulation for engineering studies. Actual equipment sizes are used for vessels, heat exchangers, and other equipment. Control valves are modeled with appropriate actuator dynamics, control characteristics, and valve sizes. The process control aspects of this model include only those details that are relevant to the study. Therefore, the dynamic simulation model used for engineering studies is highly dependent on the study scope and should represent but not replicate the plant. Dynamic simulations have been used to perform many engineering studies related to the LNG process. These studies have involved: Analyzing compressor anti-surge systems for the refrigeration systems Observing the effects of upset conditions for a feed gas compressor providing gas for three LNG trains Mitigating pressure relief scenarios Supporting the design of individual pieces of equipment The following case study highlights the value of applying life-cycle dynamic simulation to a detailed engineering study. The Problem An engineering study was required to analyze the compressor anti-surge system for a plants refrigeration systems to ensure that: Anti-surge valves were sized adequately Stroke times of the anti-surge valves and the compressor suction and discharge isolation valves were fast enough The Solution To meet the objectives of the engineering study, a dynamic simulation was performed for each of the three refrigeration systems. Various process upsets were modeled to determine which one governed the size of the anti-surge valves. These upsets included closing each of the suction isolation valves, closing the discharge isolation valve, and tripping the compressor.
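The lumped valve representation mentioned above can be illustrated with a small sketch: a first-order actuator lag drives an equal-percentage characteristic, and flow follows from a simplified Cv relationship. All values below are illustrative assumptions, not data from the anti-surge study.

# Hedged illustration of a lumped control valve model: first-order actuator
# dynamics, an equal-percentage characteristic, and a simplified Cv-based
# flow calculation. All numbers are assumptions, not project data.
import math

TAU = 2.0       # actuator time constant, s (assumed)
CV_MAX = 400.0  # valve Cv at full open (assumed)
R = 50.0        # equal-percentage rangeability (assumed)
DP = 5.0        # pressure drop across the valve, bar (assumed)
SG = 0.55       # gas specific gravity (assumed)

def valve_flow(opening):
    # Simplified sizing relationship: Cv * characteristic * sqrt(dP / SG).
    characteristic = R ** (opening - 1.0)   # equal-percentage valve curve
    return CV_MAX * characteristic * math.sqrt(DP / SG)

command, position, dt = 1.0, 0.0, 0.1       # step the valve to fully open
for k in range(101):                        # 10 s of simulated time
    position += (command - position) / TAU * dt   # first-order actuator lag
    if k % 25 == 0:
        print(f"t = {k*dt:4.1f} s  stem position = {position:.2f}  relative flow = {valve_flow(position):7.1f}")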

Compressor anti-surge valve sizing is best done using dynamic simulation.


The upset condition that showed the greatest impact on the sizing of the anti-surge valves was the tripped compressor. This upset condition included a compressor/turbine trains coast down to a complete stop after a trip. The dynamic plant model used for this study included refrigeration compressors as well as other units used in the LNG process. Figure 12 shows the results of the dynamic simulation of a refrigeration compressor trip, as well as the speed response of the tripped compressor. The surge margins for three compressor stages are shown in the left plot. The right plot shows the speed response of the tripped compressor (blue line) and the parallel compressor (red line) that is still in operation. One of the key parameters monitored in this simulation was the surge margin, which is defined as the difference between actual flow through the compressor and the surge flow at the corresponding speed. This parameter has to be positive during the trip to prevent compressor surge. The anti-surge valve sizes and speeds were selected so that the surge margin was adequate during the trip. One of the findings of the dynamic simulation was that, in some cases, due to the configuration of the refrigeration system, the traditional method of sizing the anti-surge valves does not provide enough capacity to protect the compressor from surging during a compressor trip and coast down to stop. The results of the dynamic simulation were used to determine how much additional capacity each anti-surge valve needed to protect the compressor from surging during a trip. The dynamic simulation also revealed that, if left in automatic control mode, most antisurge controllers do not react quickly enough to protect the compressors during a trip. To mitigate

this issue, the emergency control strategy was modified to deactivate the controllers and to command open the valves immediately. The dynamic simulation also indicated that reducing the stroke times of the suction isolation valves significantly aided in protecting the compressors from surging during coast down. Reducing these stroke times also helped to reduce the anti-surge valve sizes to some extent. Dynamic simulation was an invaluable tool in providing the data necessary to optimize the anti-surge system design. A steady-state analysis could not have provided this data. Case Study: The Application of Dynamic Simulation for Control Studies During the early stages of an LNG plant project, dynamic simulation can be used to determine the right process control structure. [7] There are systematic methods for analyzing process controllability using the dynamic simulation model. This case study describes how dynamic simulation was used to study alternate control strategies for an LNG plant. It highlights the benefits of applying this technology to the control system design. The Problem In the original plantwide control scheme for an LNG plant, the plant feed was controlled along with the LNG condensation pressure. A disadvantage of this control scheme was that operators had to manipulate the plant feed rate to account for changes in operating parameters, such as variations in ambient temperature. The variations in ambient temperature changed the refrigeration capacity and, therefore, the LNG production rate. In addition, with this control scheme, the plant feed had to be externally reduced during a compressor trip to account for the reduced available refrigeration.

Figure 12. Response of Refrigeration Compressor in Trip that Occurs at 2-Second Point (left: surge margins for three compressor stages in m3/hr versus time; right: speed responses of the tripped and parallel compressors in rpm versus time)
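The surge-margin bookkeeping behind Figure 12 can be sketched as follows; the surge line and the coast-down trajectory used here are assumed for illustration and are not the project's compressor map.

# Hedged sketch of the surge-margin check shown in Figure 12: margin is the
# difference between the actual flow and the surge flow at the current speed.
# The quadratic surge line and the coast-down points are assumptions.
def surge_flow(speed_rpm):
    # Assumed surge line: minimum stable flow (m3/h) as a function of speed.
    return 4.0 * speed_rpm + 0.0008 * speed_rpm ** 2

def surge_margin(actual_flow, speed_rpm):
    return actual_flow - surge_flow(speed_rpm)

# Assumed coast-down trajectory after a trip: (speed in rpm, actual flow in m3/h).
trajectory = [(4500, 45000), (4000, 36000), (3400, 27000), (2800, 20000), (2200, 14000)]
for speed, flow in trajectory:
    margin = surge_margin(flow, speed)
    flag = "OK" if margin > 0 else "SURGE RISK"
    print(f"{speed:5d} rpm  flow {flow:6d} m3/h  margin {margin:8.0f} m3/h  {flag}")

In the dynamic simulation, the anti-surge valve size and stroke time are adjusted until this margin stays positive throughout the coast down.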


Figure 13. Comparison of Plant Production with Production from Dynamic Simulation with Modified Control Scheme (scaled LNG production versus time in seconds)

Tuning parameters for the control loops are determined early in the project to save startup and commissioning time.

The Solution A plantwide control scheme that eliminated the need for operators to change the plant feed rate was studied using dynamic simulation. This scheme used the temperature control downstream of the refrigeration and front-end pressure control to indirectly control the plant feed rate. Using this scheme in the simulation, the plant was designed to utilize the available refrigeration capacity. Therefore, the plant feed rate was automatically reduced to maintain the LNG temperature with varying ambient conditions. Figure 13 presents a comparison of the LNG plant production using the original plantwide control scheme (red line) with the production from the dynamic simulation with no operator manipulation of the plant feed rate (blue line). Some operational problems with this modified control scheme were noticed during test runs with certain plant scenarios. For example, one of the key operating pressures varied unacceptably during the refrigeration compressor trip. To prevent this occurrence, an override scheme was tested on the model, and other control schemes were explored. In addition, the test runs showed that a supervisory scheme was necessary to prevent the gas turbines from running in their temperature override mode under certain conditions. As a result, a supervisory program was developed that runs the turbines at their limits and, therefore, the plant at full capacity without operator intervention. Supervisory schemes that maintain the operation of the

heavies removal column were also found to be useful. A plantwide dynamic simulation model was used to study these schemes and to test their effectiveness under various upset conditions, such as plant trips and changes in ambient temperature and plant feed conditions. Dynamic simulation was also used to select the appropriate tuning parameters for the plant. Typically, tuning parameters from a similar, previously constructed plant are used; or the control loops are tuned onsite during the commissioning phase of the project. These methods prolong the startup-commissioning time, making it economically inefficient. A dynamic simulation model makes it possible to tune the control loops offline in advance using established tuning procedures. The control loops can then be quickly fine-tuned onsite during commissioning. Use of a Plantwide Dynamic Model for the OTS A plantwide dynamic model can be integrated with the control system emulations for use as an OTS. A simulator is a true replica of the plant, reflecting both the process and control elements. Control system emulations are used to replicate the plant control strategy. The shutdown/interlock logic of the plant is also implemented in a virtual plant simulator. This logic can be implemented in separate script files or by using commercially available programmable logic control emulators. The OTS allows for complete control system checkout prior to plant commissioning. It enables engineers to resolve any issues regarding

Dynamic models are integrated with control system emulation to create a high-fidelity OTS solution, which is further used to develop APC applications to increase plant production and ensure quality.

Figure 14. Schematic Architecture of OTS (operator stations and a DCS database server mimic the control room environment; simulated DCS and SIS controls and HYSYS dynamic model stations mimic the LNG plant response; a field operator station and an instructor station complete the stand-alone OTS network used as a training tool)

graphics or control logic coding early in the project. Operating personnel can also use the OTS for training purposes to ensure a smooth plant startup. Feedback on the OTS from operating personnel also helps to provide the framework for future OTS development. A schematic architecture of the OTS is presented in Figure 14. Bechtel has provided OTS solutions to Bechtel LNG clients and is currently developing OTS solutions for other projects.

Improving the Profitability of LNG Plants Using Advanced Process Control

Advanced process control (APC) is a technology designed to stabilize processes in LNG plants so they can operate as close to full capacity as possible. The application of APC enables LNG plant owners to maximize plant performance and increase operating profits. Other benefits of APC include increased process efficiency and natural gas liquid (NGL) production and reduced operator intervention. [8] The use of APC also results in a smoother and more consistent plant operation. The objective of the LNG APC controller is to maximize gas feed rate subject to the various process operating constraints. The APC controller can also incorporate other objectives, such as minimizing power usage or maximizing NGL production and liquefaction thermal efficiency.
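At its core, the APC objective described above is a constrained optimization: maximize the feed rate subject to refrigeration and driver limits. A toy steady-state version, with invented coefficients, might look like the following; a real APC application solves a far richer, dynamic version of this problem online.

# Toy steady-state version of the APC objective described above: maximize
# feed rate subject to operating constraints. All coefficients are invented
# for illustration and do not represent any actual LNG plant.
from scipy.optimize import linprog

# Decision variables: x = [feed_rate, propane_load, ethylene_load], scaled 0-1.
c = [-1.0, 0.0, 0.0]                 # maximize feed -> minimize negative feed

A_ub = [
    [1.0, -0.9, 0.0],                # feed limited by available propane duty (assumed)
    [1.0, 0.0, -0.85],               # feed limited by available ethylene duty (assumed)
    [0.6, 0.3, 0.3],                 # assumed total driver power limit
]
b_ub = [0.0, 0.0, 1.0]
bounds = [(0, 1.2), (0, 1.0), (0, 1.0)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("optimal scaled feed rate:", round(-res.fun, 3))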

The increase in LNG production achieved by using APC depends on an operators skill level and assigned activities in the plant. For a highly skilled operator who makes frequent plant adjustments, a production increase of about 1 percent is achievable with APC. For an average operator who makes less frequent changes during a day, a production increase of between 2 and 3 percent is achievable. This is sufficient economic incentive to justify the implementation of APC in an LNG plant.

CONCLUSIONS

The case studies presented in this paper illustrate the crucial role simulation technologies and concepts play throughout an LNG project's life cycle, especially in quantifying the risks associated with design. Quantifying these risks using traditional design methods is not always possible. With the rapid developments in hardware and software and the increasing awareness of the power of simulation technologies, future projects will involve an enormous amount of simulation-based design. This advancement will lead to a paradigm shift in current design methods. However, simulation technology will never completely replace traditional design methods; instead, it will complement the experience design engineers bring to a project.


This paper also highlights how Bechtel has innovatively leveraged the life-cycle concept to create applications of value for clients. The concepts and applications described in the case studies have also been successfully applied to other processes such as gas processing and to facilities such as refineries.

TRADEMARKS

Aspen HYSYS is a registered trademark of Aspen Technology, Inc.

ConocoPhillips Optimized Cascade is a registered trademark of ConocoPhillips.

SmartPlant is a registered trademark of Intergraph Corporation.

REFERENCES

[1] D.P. Sly, Plant Design for Efficiency Using AutoCAD and FactoryFLOW, Proceedings of the 1995 Winter Simulation Conference, Arlington, Virginia, December 1995, pp. 437-444, access via <http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?tp=&arnumber=478771&isnumber=10228>.

[2] C.R. Cutler and R.T. Perry, Real-Time Optimization with Multivariable Control is Required to Maximize Profits, Computers and Chemical Engineering, Vol. 7, No. 5, 1983, pp. 663-667, access via <http://www.sciencedirect.com/science/journal/00981354>.

[3] E.J. Williams and R. Narayanswamy, Application of Simulation to Scheduling, Sequencing, and Material Handling, Proceedings of the 1997 Winter Simulation Conference, Atlanta, Georgia, December 7-10, 1997, pp. 861-865, see <http://portal.acm.org/citation.cfm?doid=268437.268666>.

[4] P. Diwakar, V. Mehrotra, R. Vallavanatt, and T.J. Maclean, Challenges in Modeling Ground Flares Using Computational Fluid Dynamics, 5th International Bi-Annual ASME/JSME Symposium on Computational Technology for Fluid/Thermal/Chemical/Stressed Systems with Industrial Applications, San Diego, California, July 25-29, 2004, access via <http://catalog.asme.org/ConferencePublications/PrintBook/2004_Computational_2.cfm>.

[5] P. Diwakar, V. Mehrotra, and F. Richardson, Mitigation of Bending Stress and Failure Due to Temperature Differentials in Piping Systems Carrying Multiphase Fluids: Using CFD and FEA, Proceedings of the 2005 ASME International Mechanical Engineering Congress and Exposition, Orlando, Florida, November 5-11, 2005, access via <http://store.asme.org/product.asp?catalog_name=Conference%20Papers&category_name=Recent%20Advances%20in%20Solids%20and%20Structures_IMECE2005TRCK-33&product_id=IMECE2005-79969>.

[6] M. Wilkes, S. Gandhi, J. Valappil, V. Mehrotra, D. Messersmith, and M. Bellamy, Large Capacity LNG Trains: Focus on Improving Operability During the Design Stage, 15th International Conference & Exhibition on Liquefied Natural Gas (LNG 15), Barcelona, Spain, April 24-27, 2007, PO-32, access via <http://kgu.or.kr/admin/data/P-000/d29fdfc6a6d5dd3b6f8b0c0c8b1b3fa3.pdf>.

[7] J. Valappil, V. Mehrotra, D. Messersmith, and P. Bruner, Virtual Simulation of LNG Plant, LNG Journal, January/February 2004, <http://www.lngjournal.com/articleJanFeb04p35-39.htm>.

[8] J. Valappil, S. Wale, V. Mehrotra, R. Ramani, and S. Gandhi, Improving the Profitability of LNG Plants Using Advanced Process Control, 2007 AIChE Spring National Meeting, Houston, Texas, April 22-26, 2007, see <http://aiche.confex.com/aiche/s07/preliminaryprogram/abstract_79719.htm>.

BIOGRAPHIES
Ramachandra Tekumalla is chief engineer for OG&C's Advanced Simulation Group, located in Houston, Texas. He leads a group of 10 experts in Bechtel's Houston and New Delhi offices in developing advanced applications for various simulation technologies, such as APC, CFD, FEA, OTS, dynamic simulation, and virtual reality. Ram has more than 10 years of experience in applying these technologies, as well as in real-time optimization, to ensure the successful completion of projects worldwide. Prior to joining Bechtel, Ram was an applications engineer with the Global Solutions Group at Invensys Process Systems, where he developed applications for refineries and power plants, including real-time control, performance monitoring, and optimization. Ram holds an MS degree from the University of Massachusetts, Amherst, and a BE degree from the Birla Institute of Technology & Science, Pilani, India, both in Chemical Engineering.

Jaleel Valappil is a senior engineering specialist with OG&C's Advanced Simulation Group, located in Houston, Texas. His expertise covers dynamic simulation, real-time optimization, operator training simulation, and advanced process control. Dr. Valappil is also highly experienced in developing and deploying life-cycle modeling concepts for applications in design, engineering, and operations. He currently applies advanced simulation and optimization technology to improve the design and operation of plants being built by Bechtel. Before joining Bechtel, Dr. Valappil was a senior consulting engineer with the Advanced Control Services Group of Aspen Technology, Inc., where he was responsible for developing and implementing advanced control and optimization solutions for operating facilities. He also took part in identifying and analyzing the economic benefits of advanced control. Dr. Valappil holds a PhD degree from Lehigh University, Pennsylvania, and a BS from the Indian Institute of Technology, Kharagpur, India, both in Chemical Engineering.


OPTIMUM DESIGN OF TURBO-EXPANDER ETHANE RECOVERY PROCESS


Originally Issued: March 2007   Updated: December 2008

Abstract: This paper explores methods for determining the optimum design of turbo-expander ethane (C2) recovery processes, focusing on constrained maximum recovery (C-MAR), a new methodology. C-MAR, successor to the system intrinsic maximum recovery (SIMAR) methodology introduced recently, uses a set of curves developed to benchmark C2 recovery applications based on the popular gas sub-cooled process (GSP) and external propane (C3) refrigeration (-35 °C). Using the C-MAR curves, a process engineer can quickly determine the optimum design and estimate the performance and cost of various C2 recovery opportunities without performing time-consuming simulations. Moreover, the C-MAR curves enable alternative process configurations to be compared against GSP performance.

Keywords: C-MAR, compressor, ethane, expander, refrigeration, SIMAR, turbo-expander
INTRODUCTION

IPSI LLC

Wei Yan, PhD


wyan@bechtel.com

Lily Bai, PhD


lbai@bechtel.com

Jame Yao, PhD


jxyao@bechtel.com

Roger Chen, PhD


rjchen@bechtel.com

ince its acceptance by the industry in the 1970s, the expander-based process has become the mainstay technology in ethane (C2 ) recovery applications. [1] Despite the great technical and commercial success of this technology, a systematic methodology for determining the optimal system design has remained elusive until recently. Design optimization was approached as an art to be mastered; to this end, a new process engineer would typically spend several years gaining experience and acquiring the necessary expertise. The steep and frustrating learning curve was not conducive to extending this art beyond the province of process specialists to general engineers. Recently, a methodology called SIMAR, which stands for system intrinsic maximum recovery, was described in papers presented at a key technical conference. [2, 3] These works and subsequent follow-up papers [4, 5] identified a systematic approach to arrive at the optimal design for a given feed stream. Although SIMAR greatly facilitates the design procedures by reducing a two-dimensional (2-D) search to a single dimension, its reference case is a hypothetical scenario in which infinite amounts of refrigeration are available to the system. In many real cases, the refrigeration supply is limited and costly. Therefore, it is

Doug Elliot, PhD


delliot@bechtel.com

Chevron Energy Technology Company

Stanley Huang, PhD


shhuang@chevron.com

TECHNICAL BACKGROUND FOR DEVELOPMENT OF C-MAR ollowing a general categorization and discussion of expander-based C2 recovery processes, SIMAR methodology is explored in this section. A scenario in which liquefied


ABBREVIATIONS, ACRONYMS, AND TERMS


1-D: one-dimensional
2-D: two-dimensional
C1: methane
C2: ethane
C3: propane
C-MAR: constrained maximum recovery
DeCl: demethanizer column
ENRP: Enhanced NGL Recovery ProcessSM
GPA: Gas Processors Association
GPM: gallons per Mscf
GSP: gas sub-cooled process
JT: Joule-Thomson
LNG: liquefied natural gas
LRP: lean reflux process
MMscfd: million scf per day
Mscf: thousand scf
NG: natural gas
NGL: natural gas liquid
SB: side reboiler
scf: standard cubic feet
SIMAR: system intrinsic maximum recovery
VF: vapor fraction
XPDR: expander; used in conjunction with the XPDR 1, 2, or 3 process configuration categories

Figure 1. Generalized Gas Processing Scheme for C2 Recovery


A scenario in which liquefied natural gas (LNG) is used as feed is described. Since LNG contains abundant refrigeration, the SIMAR reference case can be approximated well. SIMAR curves are compared for the expander (XPDR) 1, XPDR 2, and XPDR 3 process configuration categories. The discussion then examines typical results when the feed is shifted from LNG to natural gas (NG), based on the XPDR 3 category.

C2 Recovery Processes

Figure 1 shows a generalized scheme for C2 recovery based on expander technology. The process is intended to strip the inlet NG of its heavier components. The residue gas is recompressed and returned to the pipeline. Sweet, dry inlet NG flows through an inlet chilling section, where the gas is chilled to a
Figure 2. Categorizing Expander-Based Schemes (XPDR 1: No Rectification; XPDR 2: Full Rectification; XPDR 3: Self-Rectification, Heat Pump Not Shown)


suitable level before entering the heart of the plant: an expander-based C2 recovery system. The refrigeration of the inlet chilling section is mainly provided by the returning residue gas and is supplemented by side-draws from the demethanizer column (DeCl) (side-draws not shown in Figure 1). Depending on actual requirements, external refrigeration may be required (not shown in Figure 1). The main components of the expander section include a separator, an expander, and a DeCl. A subcooler is usually provided for improved refrigeration integration in the low temperature regions. The exact design of this section is an art.

Figure 2 provides further details on the expander section. Following earlier practice, the expander schemes are grouped into three process configuration categories: separator top not rectified (XPDR 1), separator top fully rectified (XPDR 2), and separator top self-rectified (XPDR 3). A fourth category, heat pumps, is not shown but is described shortly. The XPDR 3 category is the well-known GSP in the industry, which uses a small portion of the non-condensed vapor as the top reflux to the demethanizer, after substantial condensation and sub-cooling. The main portion, typically in the range of 65% to 70%, is subjected to turbo expansion as usual. Configurations with heat pumps are discussed separately because: (1) a heat pump can be an

enhancement to any of the aforementioned three categories, and (2) a heat pump moves heat from low to high temperature and changes the temperature distribution in its base configuration. Its working principle is different from that of the three categories. A heat pump design can be recognized by the use of a compressor; a cooler for rejecting heat to a high temperature sink, a Joule-Thomson (JT) valve, or a second expander; and, optionally, a second exchanger to take heat from the low temperature source. Figure 3 depicts the ENRP as an example. In the Figure 3 configuration, a side-draw liquid stream from the bottom of the demethanizer is expanded to generate refrigeration. This stream is then heated by indirect heat exchange with inlet gas to generate a two-phase stream. The two-phase stream is flashed in a separator. The flashed vapor is compressed and recycled to the demethanizer as a stripping gas. The flashed liquid stream can be mixed with other NGL product streams or returned to the column. This heat pump effectively moves heat from the inlet stream to the bottom of the column. The main features of this novel design center on the fact that the stripping gas (1) enhances relative volatility ratios and NGL recovery levels, and (2) lowers the column temperature profile and makes heat integration easier. The lean reflux process, which belongs to the XPDR 3 category, was developed to achieve high recovery levels of C2 in an NG feed without


Figure 3. Enhanced NGL Recovery Process


Figure 4. Lean Reflux Process

adding substantial amounts of recompression and/or external refrigeration power. This process uses a slipstream from the cold separator or feed gas to generate an essentially C2-free stream as a lean reflux to the demethanizer (see Figure 4).

Figure 5. Generalized Processing Scheme for C2 Recovery with LNG as Feed

Figure 6. Phase Envelope of Inlet Gas

Introducing a lean reflux considerably reduces equilibrium loss, thereby leading to high C2 recovery while maintaining the demethanizer at a relatively high operating pressure. The process overcomes deficiencies in the commonly used gas sub-cooled reflux process, in which C2 recovery levels are ultimately restricted to approximately 90% due to equilibrium loss, or otherwise demand a lower demethanizer pressure and a higher recompression and/or refrigeration horsepower.

SIMAR Methodology

C2 Recovery with LNG as Feed

For the sake of easy visualization, Figure 5 depicts a scenario wherein the NG feed shown in Figure 1 is replaced by LNG. The major difference resulting from this change is the fact that the entire inlet pre-chilling section can be eliminated when LNG is used as the feed. The refrigeration in the residue gas can be retained, thus dramatically reducing the recompression power.

A SIMAR curve can be constructed following a few simple steps. The process starts from a relatively high temperature at a reasonable pressure level, as shown in Figure 6. The track of testing temperatures penetrates through the two-phase region and ends at an arbitrarily chosen level of -100 °C. The fluid remains liquid at and below this temperature level. Once the temperature reaches a certain point, the column's operating limits are exceeded and the column no longer converges.
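The sweep just described can be automated around any flowsheet simulator. The following Python sketch illustrates the idea; simulate_case is a caller-supplied stand-in for one converged simulation run (for example, an Aspen HYSYS case), and the function name, argument names, and 2 °C step size are illustrative assumptions rather than details from the original study.

    def build_simar_data(simulate_case, decl_pressure_bar,
                         t_start_c=-40.0, t_end_c=-100.0, step_c=2.0):
        """Sweep the separator temperature and record (T, C2 recovery, vapor fraction).

        simulate_case(separator_temp_c, decl_pressure_bar) is a placeholder for a
        single converged flowsheet run; it should return (recovery, vapor_fraction)
        or None once the column operating limits are exceeded.
        """
        points = []
        t = t_start_c
        while t >= t_end_c:
            result = simulate_case(separator_temp_c=t, decl_pressure_bar=decl_pressure_bar)
            if result is None:          # column no longer converges
                break
            recovery, vapor_fraction = result
            points.append((t, recovery, vapor_fraction))
            t -= step_c                 # step colder toward the -100 °C end point
        return points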


Figure 7 plots the C2 recovery level against the test separator temperature for the three categories of process configurations. The DeCl operating pressure is 22 bar. Both XPDR 1 and XPDR 2 show a monotonic trend of improvement as the temperature decreases. This trend continues even after the inlet gas is totally liquefied, when the expander is replaced by a JT valve and no expansion work is recovered. XPDR 3 shows a different trend, however. As the separator temperature decreases, the C2 recovery reaches a maximum value and then decreases. In other words, too much refrigeration at the separator may hurt the C2 recovery. For XPDR 3, the SIMAR is defined as the maximum of the curve. For XPDR 1 and XPDR 2, the SIMAR is defined at the temperature level where the separator fluid has 30% vapor fraction (VF). The choice of 30% is based on the practical consideration that no expander would be installed if the gas flow is below this level.

A SIMAR curve is the collection of all the SIMAR conditions that cover the entire range of DeCl operations of interest. Figure 8 depicts typical SIMAR curves corresponding to the three XPDR categories. The characteristics of the three categories can be observed. The reflux stream makes XPDR 2 more efficient than XPDR 1 throughout the entire range, which corresponds to DeCl operating pressures from 17 to 42 bar. The efficiencies of XPDR 2 and XPDR 3 are comparable, while each has its advantages over a certain span.

C2 Recovery with NG as Feed

Figure 9 shows typical results, based on the XPDR 3 category, when the feed is shifted from LNG to NG. Since the refrigeration in the residue gas must be recovered to cool the inlet gas, the recompression power increases significantly with this shift in feed. A big gap is apparent between the two thin curves on the left. In addition to the recovered refrigeration in the residue gas, external refrigeration may also be needed. When this is true, the compression power required for the external refrigeration should be added to the aforementioned recompression power to form the total compression power. The two thick curves on the right represent the recompression duties when using one or two side reboilers (SBs). The gap between the total compression curves and the recompression curves on the left in Figure 9 represents the external refrigeration. Using two SBs reduces the external refrigeration because

of improved refrigeration integration in the pre-chilling section. The gap narrows in Figure 9 when the DeCl operating pressure decreases, indicating the decreased demand for external refrigeration. As can be observed in Figure 8 as well, when the DeCl operating pressure decreases or the C2 recovery level increases, the need for external refrigeration also decreases. The expander provides more refrigeration for process needs at lower DeCl operating pressures.
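A minimal sketch of the SIMAR selection rules described above, assuming the swept data are available as (temperature, recovery, vapor fraction) tuples such as those produced by the sweep sketched earlier; the 0.30 vapor fraction cutoff follows the practical criterion stated in the text.

    def simar_point(points, category):
        """Select the SIMAR condition for one DeCl operating pressure.

        For XPDR 3, SIMAR is the maximum-recovery point on the curve; for XPDR 1
        and XPDR 2, it is the point where the separator fluid is closest to 30%
        vapor fraction (below that gas flow, no expander would be installed).
        """
        if category == "XPDR 3":
            return max(points, key=lambda p: p[1])
        return min(points, key=lambda p: abs(p[2] - 0.30))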

Figure 7. Two Types of Behavior for Different Categories


Figure 8. Comparing SIMAR Curves for Three XPDR Categories


Figure 9. Comparing Compression Duties and Impact of Side Reboilers Based on XPDR 3


FEED GAS COMPOSITION AND SIMULATION PARAMETERS

Table 1 lists the two feed gas compositions used in this paper, a rich case and a lean case. They represent different richness in C2+ components. The richness of a gas sample is reflected in its C2+ or C3+ components, expressed in gallons per Mscf (GPM). The GPM value is 5.71 for the rich case and 2.87 for the lean case. The phase envelopes corresponding to the two compositions are shown in Figure 6. The richer the gas, the wider its envelope becomes. The raw gas supply is 300 MMscfd (dry basis). All simulations in this paper are performed using Aspen HYSYS 3.2. Table 2 lists pertinent parameters. The delivery pressure to the pipeline is similar to the inlet pressure.
Table 1. Feed Gas Compositions

Component        Rich Case, mole %    Lean Case, mole %
Nitrogen              0.315                0.750
CO2                   0.020                0.217
Methane              79.550               88.910
Ethane               10.600                4.950
Propane               5.470                3.090
i-Butane              0.926                0.442
n-Butane              1.690                0.894
i-Pentane             0.468                0.224
n-Pentane             0.478                0.221
n-Hexane              0.295                0.300
n-Heptane             0.132                0.000
n-Octane              0.060                0.000
n-Nonane              0.020                0.000
GPM for C2+           5.710                2.870
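The GPM values in Table 1 can be spot-checked with a short calculation. The Python sketch below uses rounded liquid-volume factors (gallons per lb-mol) of the kind tabulated in the GPA data book; the factors and the 379.5 scf/lb-mol molar volume are approximations introduced here for illustration only.

    # Gallons of C2+ liquid per Mscf of gas from a composition in mole percent.
    GAL_PER_LBMOL = {            # approximate gal of liquid per lb-mol (rounded)
        "Ethane": 10.13, "Propane": 10.43, "i-Butane": 12.39, "n-Butane": 11.94,
        "i-Pentane": 13.86, "n-Pentane": 13.71, "n-Hexane": 15.57,
        "n-Heptane": 17.46, "n-Octane": 19.39, "n-Nonane": 21.31,
    }
    SCF_PER_LBMOL = 379.5        # ideal-gas molar volume at 60 °F and 14.696 psia

    def gpm_c2_plus(mole_percent):
        gallons_per_lbmol_feed = sum(
            (pct / 100.0) * GAL_PER_LBMOL[comp]
            for comp, pct in mole_percent.items() if comp in GAL_PER_LBMOL
        )
        return gallons_per_lbmol_feed * 1000.0 / SCF_PER_LBMOL

    rich_case = {"Ethane": 10.600, "Propane": 5.470, "i-Butane": 0.926,
                 "n-Butane": 1.690, "i-Pentane": 0.468, "n-Pentane": 0.478,
                 "n-Hexane": 0.295, "n-Heptane": 0.132, "n-Octane": 0.060,
                 "n-Nonane": 0.020}
    print(round(gpm_c2_plus(rich_case), 2))   # about 5.7, close to the 5.71 in Table 1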

Two temperature levels of heat sinks are defined: the high temperature represents air coolers, and the low temperature represents the external refrigeration temperature supplied by two-stage C3 compressor loops.

C-MAR METHODOLOGY

Principal Elements and Assumptions

C-MAR methodology includes two major elements:

• The XPDR 3 process configuration as the benchmark model (the GSP, which is well known in the industry)
• A fixed refrigeration temperature of -35 °C

For purposes of conceptual discussions, the pre-chiller is simulated using one integrated exchanger, which handles all streams, including inlet gas, returning residue, SBs, and external refrigeration. Only the minimum amount of refrigeration is added to satisfy the refrigeration balances. The intent is to minimize the additional compression work. Unless specified otherwise, two SBs in an integrated exchanger are assumed. External refrigeration implies closed-loop designs of C3 circuits. The results of C-MAR methodology, including the characteristics of the resultant curves and their relation to SIMAR, are examined below. The paper concludes with a discussion of the C-MAR curve in relation to the ENRP and the lean reflux process.

C-MAR Methodology Results, Curves, and Relation to SIMAR

Figure 10 shows the C2 recovery versus separator temperature for the lean case. As the separator temperature decreases, the C2 recovery shows a maximum at about -57 °C for all curves.



Table 2. Simulation Parameters Used in This Paper

Parameter                                                Value
Inlet Temperature, °C                                    27
Inlet Pressure, bar                                      69 or 55
Send-Out Residual Gas Temperature, °C                    38
Send-Out Residual Gas Pressure, bar                      74 or 55
Number of Trays in DeCl                                  16
DeCl Operating Pressure, bar                             17 to 37
Composition Ratio of C1 to C2 in DeCl Bottom Product     0.015
High-Temperature Sink, °C                                38
Low-Temperature Sink, °C                                 -35

Figure 10. Determining Maximum C2 Recovery Using C-MAR Methodology (Lean Case)


This pattern bears similarities to the XPDR 3 curve in Figure 7, indicating the existence of the maximum behavior for the GSP configuration under different constraints, e.g., DeCl pressure and refrigeration availability. Physically, separator temperatures that are too cold result in C1 condensation. The DeCl reboiler would input extra heat to prevent excessive C1 loss from the bottom. The net result is increased C2 loss in the residue gas.

To calculate the power requirement for the external refrigeration, this discussion assumes that the external refrigeration is from two-stage C3 compressor circuits with an evaporator temperature of -41 °C and a refrigerant condensing temperature of 49.5 °C. From the Gas Processors Association (GPA) data book [9], the value for the power can be obtained.

It should be noted that the maximum C2 recovery using C-MAR occurs at a higher temperature (-57 °C) than that of SIMAR (about -70 °C). Using C-MAR, the constraint in refrigeration prevents the separator temperature from decreasing further. Using SIMAR, the constraint is imposed last by forcing the selection into the sub-SIMAR region. Either approach would lead to similar results.

Figure 11 shows trends for the rich case similar to those described above. With the separator temperature decreasing below some point, the C2 recovery decreases due to C1 condensation.
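Where the GPA data book chart is not at hand, the order of magnitude of that refrigeration power can be approximated from the Carnot coefficient of performance at the stated temperature levels, derated by an assumed overall efficiency. The sketch below is only a rough stand-in for the chart lookup; the 0.55 efficiency factor is an assumption for illustration, not a data book value.

    def refrigeration_power_mw(duty_mw, evap_c=-41.0, cond_c=49.5, overall_eff=0.55):
        """Rough compressor power for a propane refrigeration loop of given duty."""
        t_evap = evap_c + 273.15
        t_cond = cond_c + 273.15
        cop_ideal = t_evap / (t_cond - t_evap)   # Carnot coefficient of performance
        cop_actual = cop_ideal * overall_eff     # assumed overall (real) efficiency
        return duty_mw / cop_actual

    print(round(refrigeration_power_mw(1.0), 2))  # roughly 0.7 MW of power per MW of duty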

Figure 12. Operation Curves Determined by C-MAR Methodology (Lean Case)

Figure 13. Operation Curves Determined by C-MAR Methodology (Rich Case)


the same inlet pressure. With the decrease of DeCl pressure, C2 recovery increases and less external refrigeration is needed because the relative volatility is greater at lower column pressure.

Figures 14 and 15 show C-MAR curves at two inlet pressure levels and two inlet gas GPM values. Figure 14 is for total power, and Figure 15 is for recompression power only. In Figure 14, rich feed gas needs more total power than lean feed gas because more external refrigeration is needed to condense the heavy components of the rich feed gas into liquid product in the DeCl. At high C2 recovery levels, however, the total power for lean and rich feed gas is about the same, and the lean case can even require more power than the rich case. The reason is that more recompression power is needed for lean feed gas to handle the larger residual gas flow. As can be seen from Figure 15, for both the 69 bara and 55 bara inlet pressure cases, rich feed gas requires more recompression power at low C2 recovery levels than lean feed gas. But at high C2 recovery levels, lean feed gas needs more


Figure 11. Determining Maximum C2 Recovery Using C-MAR Methodology (Rich Case)

Figures 12 and 13 depict operation curves determined by C-MAR methodologies for lean and rich cases. As anticipated, the trends are similar to those of SIMAR shown in Figure 9. The gap between recompression and total power represents the external refrigeration. Again, as anticipated, the rich case demands more refrigeration duties than the lean case at



interpolate required duties for different feed gases. Since the curves in Figure 14 represent the maximum C2 recoveries achievable by the GSP with realistic refrigeration supplies, the interpolated results provide expedient estimates in feasibility investigations. In addition, since the GSP has practically become a benchmark configuration in this field, the curves in Figure 14 acquired by C-MAR methodology can be used to evaluate different process configurations. The following subsection provides an illustration using the ENRP and the lean reflux process as examples.

C-MAR Curve and the ENRP and Lean Reflux Process

In Figure 16, the ENRP and the lean reflux process are compared with the C-MAR curve. Either of the two processes, or both in combination, can achieve higher C2 recovery at lower power than the C-MAR curve, i.e., the highest recovery achievable by the GSP. Improvement can therefore be expected from the two processes: the ENRP can expend less power to achieve higher C2 recovery than the GSP, and the lean reflux process can achieve high C2 recovery with less power than the GSP. Combining the ENRP and the lean reflux process is better still because the combination can achieve high C2 recovery with even less power.

In Figure 17, the C-MAR curve is compared with the recovery and power for the Pascagoula NGL plant, which uses the GSP. The point plotted for Pascagoula falls on the right side of the C-MAR curve and is quite close to it. This shows that the design of this plant can achieve a C2 recovery close to the maximum achievable by the GSP. Another example, shown in Figure 18, is the Neptune II NGL plant, which uses the ENRP in its design. For comparison, the point for the GSP without refrigeration is also marked.
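As a rough illustration of the interpolation idea raised at the start of this discussion, the sketch below estimates the total power needed to reach a target C2 recovery by linear interpolation between points read off a C-MAR curve. The example points are placeholders, not values digitized from Figure 14.

    def interpolate_power(curve_points, target_recovery):
        """Linearly interpolate total power (MW/100 MMscfd) at a target C2 recovery."""
        pts = sorted(curve_points)                    # (recovery, power), sorted by recovery
        for (r1, p1), (r2, p2) in zip(pts, pts[1:]):
            if r1 <= target_recovery <= r2:
                frac = (target_recovery - r1) / (r2 - r1)
                return p1 + frac * (p2 - p1)
        raise ValueError("target recovery lies outside the tabulated curve")

    # Placeholder points for one inlet pressure and GPM; real values would be read
    # from the applicable C-MAR curve.
    example_curve = [(0.80, 3.0), (0.90, 4.0), (0.95, 5.5)]
    print(round(interpolate_power(example_curve, 0.92), 2))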


Figure 14. Impact of Inlet Pressure and Richness of Feed Gas on C-MAR Curves (Total Power)


Figure 15. Impact of Inlet Pressure and Richness of Feed Gas on C-MAR Curves (Recompression Power)

recompression power than rich feed gas. At low C2 recovery level or high DeCl pressure, to obtain the same C2 recovery, rich feed gas needs lower DeCl pressure to create higher relative volatility, which leads to a higher recompression power requirement. But at high C2 recovery or low DeCl pressure, either lean or rich feed gas has high relative volatility, while lean feed gas requires a greater flow rate to achieve the same C2 recovery. This explains the larger recompression power requirement of the lean case at high C2 recovery. It is easy to understand that high pressure feed gas (69 bara) needs more recompression power than low pressure feed gas (55 bara) because of the assumption that the inlet pressure is the same as the delivery pressure. As mentioned earlier, the external refrigeration requirement can be deduced from the curves in Figures 14 and 15 because it is simply the difference between the total power and the recompression power. Examining the regularities between the rich and lean cases in Figure 14 leads to an important conclusion. At a given feed gas pressure and a given richness of inlet gas (i.e., GPM value), it is possible to develop general correlations to


Figure 16. Comparison of ENRP and Lean Reflux Process with C-MAR Curve (Inlet Pressure = 69 bara, Rich Case)

Bechtel Technology Journal




Figure 17. C-MAR with Pascagoula GSP (Inlet Pressure = 69 bara, GPM = 2.18)

TRADEMARKS

Aspen HYSYS is a registered trademark of Aspen Technology, Inc.

Enhanced NGL Recovery Process is a service mark of IPSI LLC (Delaware Corporation).

REFERENCES
[1] R.J. Lee, J. Yao, and D. Elliot, "Flexibility, Efficiency to Characterize Gas-Processing Technologies in the Next Century," Oil & Gas Journal, Vol. 97, Issue 50, December 13, 1999, p. 90, access as IPSI technical paper via <http://www.ipsi.com/Tech_papers/paper2.htm>.
[2] S. Huang, R. Chen, J. Yao, and D. Elliot, "Processes for High C2 Recovery from LNG, Part II: Schemes Based on Expander Technology," 2006 AIChE Spring National Meeting, Orlando, Florida, April 23-27, 2006, access via <http://aiche.confex.com/aiche/s06/techprogram/P43710.HTM>.
[3] S. Huang, R. Chen, D. Cook, and D. Elliot, "Processes for High C2 Recovery from LNG, Part III: SIMAR Applied to Gas Processing," 2006 AIChE Spring National Meeting, Orlando, Florida, April 23-27, 2006, see <http://aiche.confex.com/aiche/s06/preliminaryprogram/abstract_43672.htm>.
[4] J. Trinter and S. Huang, "SIMAR Application 1: Evaluating Expander-Based C2+ Recovery in Gas Processing," 2007 AIChE Spring National Meeting, Houston, Texas, April 22-26, 2007, see <http://aiche.confex.com/aiche/s07/preliminaryprogram/abstract_81508.htm>.
[5] C. McMullen and S. Huang, "SIMAR Application 2: Optimal Design of Expander-Based C2+ Recovery in Gas Processing," 2007 AIChE Spring National Meeting, Houston, Texas, April 22-26, 2007, see <http://aiche.confex.com/aiche/s07/preliminaryprogram/abstract_81509.htm>.



Figure 18. C-MAR with Neptune II (Inlet Pressure = 72 bara, GPM = 4.50)



The point for the ENRP is on the left side of the C-MAR curve and shows the improvement realized from use of the ENRP over the GSP. The GSP without refrigeration is some distance away from the C-MAR curve on the right side; its C2 recovery is limited because no external refrigeration is supplied.


CONCLUSIONS

In this paper, the authors explored the technical background for developing design optimization methodologies for turbo-expander C2 recovery processes. Methods and processes to optimize design and improve system performance were examined and illustrations presented. The discussion and data support the following conclusions, briefly summarized below:

• C-MAR is a valuable tool and a new approach that eliminates the shortcomings of the SIMAR methodology by expediently determining the maximum C2 recovery and compression power based on use of the well-known GSP.
• Using the C-MAR curves enables optimum design to be determined and initial cost estimates to be prepared for project scoping, avoiding the need to perform intricate simulations.
• Separately, use of the stripping gas process (ENRP) and the lean reflux process can significantly improve the system performance.
• A combination of the aforementioned two processes further improves the system performance.



[6] R.N. Pitman, H.M. Hudson, and J.D. Wilkinson, "Next Generation Processes for NGL/LPG Recovery," 77th GPA Annual Convention Proceedings, Dallas, Texas, March 1998, access via <http://www.gpaglobal.com/nonmembers/catalog/index.php?cPath=45&sort=1a&page=2> and <http://www.gasprocessors.com/dept.asp?dept_id=7077#>.
[7] L. Bai, R. Chen, J. Yao, and D. Elliot, "Retrofit for NGL Recovery Performance Using a Novel Stripping Gas Refrigeration Scheme," Proceedings of the 85th GPA Annual Convention, Grapevine, Texas, March 2006, access via <http://www.gasprocessors.com/product.asp?sku=P2006.10>.
[8] P. Nasir (Enterprise Products Operating, LP); W. Sweet (Marathon Oil Company); and D. Elliot, R. Chen, and R.J. Lee (IPSI LLC), "Enhanced NGL Recovery Process Selected for Neptune Gas Plant Expansion," Oil & Gas Journal, Vol. 101, Issue 28, July 21, 2003, access as IPSI technical paper via <http://www.ipsi.com/Tech_papers/neptune_gas_REV1.pdf>.
[9] GPSA Engineering Data Book, 12th Edition, Gas Processors Association, Tulsa, Oklahoma, 2004, access via <http://www.gasprocessors.com/gpsa_book.html>.
The original version of this paper was presented at the 86th Annual Gas Processors Association Convention, held March 11-14, 2007, in San Antonio, Texas, USA.



BIOGRAPHIES
Wei Yan has more than 10 years of experience in the oil and gas industry. He joined IPSI LLC 1 as a senior process engineer in 2006 to work on design and technology development for LNG and natural gas processing projects. Before joining IPSI LLC, Dr. Yan worked at Tyco Flow Control Co. as an application engineer focused on new flow-control product development. He also served as a process engineer for China Huanqiu Chemical Engineering Corp., where he worked on the process design of petrochemical projects. Previously, as a research assistant at Rice University, Dr. Yan focused on the foam-aided alkaline-surfactant-enhanced oil recovery process.

1 Bechtel affiliate IPSI LLC, based in Houston, Texas, was formed in 1986 to develop technology and provide conceptual/front-end design services for oil and gas production and processing facilities as well as for engineering, procurement, and construction companies.

Dr. Yan is a member of the Society of Petroleum Engineers and the American Institute of Chemical Engineers. Dr. Yan holds a PhD from Rice University, Houston, Texas, and a Bachelor's degree from Tianjin University, China, both in Chemical Engineering.

Lily Bai, a senior process engineer with IPSI LLC, has more than 10 years of experience in research, process design, and development in chemicals, petrochemicals, gas processing, and LNG. Dr. Bai works on the Wheatstone LNG project and is responsible for process simulation. The Wheatstone facility, to be located on the northwest coast of mainland Australia, will have an initial capacity of at least one 5 million-ton-per-annum LNG production train. Before her current assignment, Dr. Bai worked on projects such as Angola LNG, Santos Gladstone LNG, and Atlantic LNG (Train 4) reliability. Her responsibilities included process simulation and preparation of process flow diagrams and equipment datasheets. Dr. Bai holds a PhD from Rice University, Houston, Texas, and MS and BS degrees from Tianjin University, China, all in Chemical Engineering.

Jame Yao has 28 years of experience in the development of gas processing and LNG technologies. As vice president of IPSI LLC, Dr. Yao is responsible for all IPSI/Bechtel process design/simulation and development in cryogenic gas processing, nitrogen rejection, and LNG technology. He holds several patents in the field. Dr. Yao joined International Process Services, Inc., the predecessor to IPSI LLC, in 1986 as a senior process engineer. During his tenure with IPSI, he has co-invented several processes for the cryogenic separation and liquefaction of N2, He, LNG (methane), and other light hydrocarbons. Previously, Dr. Yao worked as a member of the worldwide Technology Center for Gas Processing of DM International (Davy McKee) in Houston, Texas. Dr. Yao performed graduate study/research at Purdue University related to the measurement and prediction of thermodynamic properties of cryogenic gas mixtures. This work enabled him to co-invent several processes for the separation and processing of natural gas. He also contributed to the design of gas plants in Australia, New Zealand, Venezuela, the UK, the North Sea, Norway, and the United States. Dr. Yao is the author of more than 20 technical publications and holds more than 15 patents. He is a member of the American Institute of Chemical Engineers.

Dr. Yao holds PhD and MS degrees from Purdue University, West Lafayette, Indiana, and a BS degree from National Taiwan University, Taipei, all in Chemical Engineering.

Roger Chen has more than 30 years of experience in research, process design, and development in gas processing and in oil and gas production facilities. As a senior vice president of IPSI LLC, he is responsible for process design and development for gas processing facilities. Dr. Chen designed the Enterprise Neptune II natural gas plant in Louisiana, constructed to match the capacity of Neptune I. He used an IPSI patent process in the design. Dr. Chen also has served as the technical auditor for several LNG projects, including Darwin in Australia, Zaire Province in Angola, and BG Egyptian in Idku, Egypt. Previously, Dr. Chen was senior process engineer for IPSI. In this role, he initiated the process design for BG's Hannibal gas processing plant located near Sfax, Tunisia. Dr. Chen also has served as a chief process engineer for IPSI, with a focus on the BG Pascagoula liquid recovery facility, part of the 1.5-billion-cubic-feet-per-day Pascagoula natural gas processing plant in Mississippi. His activities included process design and startup assistance. Dr. Chen has been a member of the American Institute of Chemical Engineers and the American Chemical Society for more than 40 years, and of the Gas Processors Association Research Steering Committee for 8 years. He holds 10 patents and has authored more than 30 technical publications. Dr. Chen holds PhD and MS degrees from Rice University, Houston, Texas, and a BS degree from National Taiwan University, Taipei, all in Chemical Engineering.

Doug Elliot, a Bechtel Fellow and a fellow of the American Institute of Chemical Engineers, has more than 40 years of experience in the oil and gas business, devoted to the design, technology development, and direction of industrial research. He is president, chief operations officer, and co-founder (with Bechtel Corporation) of IPSI LLC. Before helping establish IPSI, Dr. Elliot was vice president of Oil and Gas for DM International (Davy McKee). He started his career with McDermott Hudson Engineering in 1971 following a post-doctoral research assignment under Professor Riki Kobayashi at Rice University, where he developed an interest in oil and gas thermophysical properties research and its application.

Dr. Elliot has authored or co-authored more than 65 technical publications and holds 12 patents. He served on the Gas Processors Association Research Steering Committee from 1972 to 2001 and as chairman of the Gas Processors Suppliers Association Data Book Committee on Physical Properties. Dr. Elliot also served as chairman of the South Texas Section and director of the Fuels and Petrochemical Division of the American Institute of Chemical Engineers and is currently a member of the PETEX Advisory Board. Dr. Elliot holds PhD and MS degrees from the University of Houston, Texas, and a BS degree from Oregon State University, Corvallis, all in Chemical Engineering. Stanley Huang is a staff LNG process engineer with Chevron Energy Technology Company in Houston, Texas. His specialty is cryogenics, particularly as applied to LNG and gas processing. Since 1996, Dr. Huang has worked on many LNG baseload plants and receiving terminals. He has also fostered process and technology improvements by contributing more than 20 publications and corporate reports. Before joining Chevron, Dr. Huang worked for IPSI LLC and for KBR, a global engineering, construction, and services company supporting the energy, petrochemicals, government services, and civil infrastructure sectors. By training, Dr. Huang is an expert in thermodynamics, in which he still maintains a keen interest. After leaving school, he worked for Exxon Research and Engineering Company as a post-doctorate research associate. Dr. Huang then worked for DB Robinson and Associates Ltd. in Alberta, Canada, a company that provides phase behavior and fluid property technology to the petroleum and petrochemical industries. He contributed more than 30 papers and corporate reports before 1996, including one on a molecularly based equation of state called SAFT, which is still popular in polymer applications today. Dr. Huang holds PhD and MS degrees in Chemical Engineering and an MS in Physics, all from Purdue University, West Lafayette, Indiana, and a BS in Chemical Engineering from National Taiwan University, Taipei, Taiwan. He is a registered professional engineer in the state of Texas.



Bechtel Power
Technology Papers

Controlling Chemistry During Startup and Commissioning of Once-Through Supercritical Boilers
Kathi Kirschenheiter, Michael Chuk, Colleen Layman, Kumar Sinha

CO2 Capture and Sequestration Options Impact on Turbomachinery Design
Justin Zachary, PhD; Sara Titus

Recent Industry and Regulatory Developments in Seismic Design of New Nuclear Power Plants
Sanj Malushte, PhD; Orhan Gürbüz, PhD; Joe Litehiser, PhD; Farhang Ostadan, PhD

Power: Springerville Power Expansion. Springerville Unit 3, a 400 MW, pulverized-coal-fired generating station in Arizona, was named 2006 Plant of the Year by Power magazine.

CONTROLLING CHEMISTRY DURING STARTUP AND COMMISSIONING OF ONCE-THROUGH SUPERCRITICAL BOILERS


Originally Issued: October 2007; Updated: December 2008

Abstract: As new power plants commit to once-through supercritical boilers and rush to come on line, engineering, procurement, and construction (EPC) turnkey contractors face both a short-term and a long-term chemistry dilemma related to oxygenated treatment (OT) during normal long-term operation. Since most industry experience is based on converting existing once-through boilers from all volatile treatment (AVT) to OT, relatively little information exists on newer boilers operating on OT. Electric Power Research Institute (EPRI) all volatile treatment oxidizing (AVT[O]) and all volatile treatment reducing (AVT[R]) startup guidelines facilitating conversion to OT are sound but untested on new boilers and do not address considerations like cycles without deaerators, which must be treated on a case-by-case basis. The startup and commissioning cycle, including startup on AVT and quick conversion to OT, is the EPC turnkey contractor's responsibility. To ensure efficient startup and commissioning of once-through supercritical boilers, the EPC turnkey contractor must address these chemistry issues and develop a practical approach to achieving steam purity and specified feedwater chemistry requirements.

Keywords: action level; all volatile treatment (AVT); all volatile treatment (oxidizing) (AVT[O]); all volatile treatment (reducing) (AVT[R]); American Society of Mechanical Engineers (ASME); coal-fired power plants; chemistry guidelines; commercial operation; condensate polishers; Electric Power Research Institute (EPRI); engineering, procurement, and construction (EPC) contractor; lump-sum, turnkey (LSTK); once-through boilers; oxygenated treatment (OT); risk assessment; startup; steam and cycle chemistry; steam purity; supercritical

Kathi Kirschenheiter (kakirsch@bechtel.com), Michael Chuk (mjchuk@bechtel.com), Colleen Layman (cmlayman@bechtel.com), Kumar Sinha (ksinha@bechtel.com)

INTRODUCTION

The engineering, procurement, and construction (EPC) contractor may ensure efficient once-through supercritical boiler startup and commissioning by developing practical steam purity chemistry limits and a timely, workable approach to meeting these limits. Once-through supercritical boiler chemistry cannot be controlled by boiler blowdown; therefore, constant, stringent chemistry control is required. Significant operation outside boiler and turbine manufacturer chemistry limits may void the warranty, leaving the owner/EPC contractor solely responsible for all costs associated with repairs required within the warranty period. In addition to boiler and turbine supplier warranty-related water quality and steam purity limits, various industry groups (e.g., American Society of Mechanical Engineers [ASME], VGB PowerTech, and Electric Power Research Institute [EPRI]) have developed standards that represent industry consensus on good, prudent practice for cycle chemistry control. Within the past 15 years, supplier and industry group chemistry limits have been re-evaluated and revised for once-through supercritical boilers. Revisions include operating under various control modes, such as oxygenated treatment (OT) and all volatile treatment (AVT). Operators, engineers, and turnkey contractors have also reviewed chemistry limit guidelines. Further examination of revised chemistry guidelines shows that specified chemistry constraints can be achieved during operation using full-flow, online condensate polishers with timely regenerations. However, during commissioning, it is difficult to ensure that these stringent limits are met without allowing for an uncharacteristically long startup time. Most chemistry control guidelines developed by industry groups address plant operation after commissioning and initial startup. These guidelines include action levels outlining acceptable chemistry deviations based on hours of


ABBREVIATIONS, ACRONYMS, AND TERMS


ACC: air-cooled condenser
ASME: American Society of Mechanical Engineers
AVT: all volatile treatment
AVT(O): all volatile treatment (oxidizing)
AVT(R): all volatile treatment (reducing)
EPC: engineering, procurement, and construction
EPRI: Electric Power Research Institute
LSTK: lump-sum, turnkey
ORP: oxidation-reduction potential
OT: oxygenated treatment
VPCI: vapor phase corrosion inhibitor


operation outside recommended chemistry limits, and are valuable tools for operators. Action levels and allowable hours of chemistry excursion are implemented to protect power plant components from corrosion; however, controlling chemistry during startup and commissioning of once-through supercritical boilers and steam-related systems is a completely different scenario.

CHEMISTRY CONTROL PHILOSOPHY FOR ACHIEVING STEAM PURITY

The EPC contractor may ensure that chemistry limits are met during and after once-through supercritical boiler commissioning by implementing the following steps:

• Control system component cleanliness during shop fabrication
• Control system component cleanliness during construction
• Flush system components prior to startup
• Implement stringent water quality requirements for hydrotesting
• Perform boiler and feedwater system chemical cleaning
• Flush system components thoroughly following chemical cleaning
• Perform steamblows to obtain steam cycle cleanliness
• Implement time-based, progressively improving feedwater and steam chemistry targets

Boiler and feedwater system startup cleaning issues depend on boiler type, heat exchanger equipment type and metallurgy, and the success of pre-commissioning cleanliness measures. Without industry standard guidelines for power plant component cleaning methods, the EPC contractor must implement its own methods to quickly achieve and control boiler feedwater and

steam chemistry. Although increasing blowdown and makeup demineralized water to the cycle is effective for cleaning drum boilers, these simple cleaning methods are not practical for once-through supercritical boilers. As demonstrated in numerous startups, full-load turbine roll will dilute concentrated pockets of impurities in the feedwater system and uniformly mix feedwater with condensate. Therefore, the sooner full-load turbine roll is reached, the sooner target feedwater chemistry and steam purity may be achieved.

Before awarding a contract (boiler or turbine), the EPC contractor should negotiate startup and commissioning feedwater quality and steam purity guidelines with the boiler and turbine suppliers to ensure that long-term warranties are not voided during startup and commissioning activities. In addition, industry standard steam purity guidelines for operation should be relaxed to the most practical limits feasible during commissioning, while considering the owner's long-term warranty interests.

EPC CONTRACTOR'S CHEMISTRY CONTROL PROGRAM

The EPC contractor's chemistry control program must start at the equipment manufacturers' fabricating facilities, where cleanliness methods for boiler tubes and other system components are initiated. To ensure that system components have been kept clean during fabrication, contract-negotiated cleaning and inspection procedures should be implemented. Hydrotesting components in the shop using pH-adjusted demineralized water to maintain component cleanliness, followed by a final rinse with a silica-free, vapor phase corrosion inhibitor (VPCI) to reduce corrosion from residual moisture after shop cleaning or hydrotesting, is recommended.


Next, the EPC contractor must implement cleanliness methods during field fabrication to ensure that all construction debris is removed from the system upon completion and that cleanliness is maintained during component installation by capping pipe ends and cleaning weld areas. pH-adjusted demineralized water is recommended for flushing and hydrotesting of boiler, condensate, and feedwater system components and piping to eliminate potential scaling and deposits. Potable quality water is acceptable for flushing and hydrotesting if a thorough chemical cleaning of components follows flushing and hydrotesting, and if a pH-adjusting chemical or silica-free VPCI is added to the flush or hydrotest water.

Following flushing and hydrotesting, boiler, condensate, and feedwater systems should be chemically cleaned using demineralized water as the chemical dilution medium. After cleaning, boiler and feedwater systems should be flushed with pH-adjusted demineralized water treated with a suitable, silica-free VPCI. Chemical cleaning is essential to the chemistry control program, as it improves boiler chemistry stability by safely removing all deposits from inside boiler tubes (including organics from manufacturing; rust, mill scale, and welding slag from construction; and residual contaminants from hydrotesting).

Finally, the EPC contractor must meet agreed-upon chemistry limits in a timely fashion and complete system steamblows. Steamblows, which clear final debris and surface scale from the steam side of the system through thermal cycling and the physical force of steam through the components, are the final step in ensuring that steam chemistry meets required limits. Steamblows should be conducted using pH-adjusted demineralized water.

Bechtel contends that startup chemistry guidelines should primarily focus on main steam chemistry targets, including cation conductivity, silica, and sodium, as they are easily and reliably measured using relatively inexpensive online instrumentation. Targets for chlorides, sulfates, and organic compounds should be deferred until the end of the commissioning cycle. Degassed cation conductivity is the preferred conductivity to be measured during commissioning since system air leaks are still being discovered and sealed during the startup and commissioning phase. The measurement of degassed cation conductivity will aid in differentiating between air leaks and other contamination sources.

ONCE-THROUGH BOILER STARTUP CHEMISTRY TRENDS

Most once-through supercritical boilers in the US have been converted from previously predominant AVT to OT, with new facilities almost exclusively using OT. This chemistry change requires all-ferrous metallurgy in the feedwater train and precludes copper or copper-based alloy feedwater heat exchangers in system design, as well as bronze impellers in condensate pumps and bronze valve trims in the condensate system. The benefits of operating a once-through supercritical boiler on OT include:

• Lowering overall corrosion rates by forming a protective, double-oxide layer with a controlled amount of oxygen present in the condensate (this protective layer is considered to be more stable than the oxide layer formed using AVT)
• Decreasing boiler chemical cleaning frequency due to reduced amounts of iron transport and deposition
• Allowing quicker, cleaner startups and reduced corrosion product transport rates during cold and hot startups
• Allowing boiler operation at lower pH with the overall objective of minimizing chemical costs
• Eliminating the feeding, handling, and storage of oxygen scavenger products

To achieve these overall short- and long-term objectives, chemistry controls must be tightened during startup and commissioning. However, tighter chemistry controls add extra time to the already tight startup schedule, and longer startup time equates to lost revenue.

Some once-through supercritical boiler manufacturers have instituted penalties against the allowable pressure drop during initial boiler performance testing, an additional complication that may impact startup and commissioning activities. These penalties are based on extended operation on all volatile treatment reducing (AVT[R]) during startup and commissioning. The reducing environment (negative oxidation-reduction potential [ORP]) present when operating on AVT(R) may contribute to increased iron transport, subsequently increasing the pressure drop through the boiler. These pressure drop correction penalties will be fervently debated by the EPC contractor during commissioning and challenged by both owners and plant operators.



EPRI OT CHEMISTRY GUIDELINES

During steam-side startup and commissioning, the EPC contractor is mostly interested in main and reheat steam chemistry. Table 1 lists EPRI recommendations for once-through boilers operating under OT, including the normal target value and action levels 1, 2, and 3.
Table 1. EPRI Recommendations for Main and Reheat Steam Chemistry for Once-Through Boilers on OT [1]

Parameter                       Target Value (N)    Action Level 1    Action Level 2    Action Level 3
Cation Conductivity, µS/cm      0.15                0.3               0.6               >0.6
Silica, ppb                     10                  20                40                >40
Sodium, ppb                     2                   4                 8                 >8
Chloride, ppb                   2                   4                 8                 >8
Sulfate, ppb                    2                   4                 8                 >8
TOC, ppb                        100                 >100              —                 —

Table 2. EPRI Feedwater Chemistry Control Guidelines for Once-Through Boilers on OT [1]

Parameter                                    Normal Limit
Cation Conductivity, µS/cm                   0.15
pH, STU                                      8.0 to 8.5
Dissolved Oxygen at Economizer Inlet, ppb    30 to 150
Iron, ppb                                    2
Ammonia, ppm                                 0.02 to 0.07
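One plausible reading of the Table 1 bands is sketched below: an online reading at or below the target value is normal, and successively higher readings fall into action levels 1, 2, and 3. The Python snippet is illustrative only, and only two of the Table 1 parameters are included.

    STEAM_LIMITS = {
        # parameter: (normal target, action level 1 bound, action level 2 bound)
        "cation_conductivity_uS_cm": (0.15, 0.3, 0.6),
        "sodium_ppb": (2.0, 4.0, 8.0),
    }

    def classify(parameter, value):
        normal, level1, level2 = STEAM_LIMITS[parameter]
        if value <= normal:
            return "normal"
        if value <= level1:
            return "action level 1"
        if value <= level2:
            return "action level 2"
        return "action level 3"

    print(classify("cation_conductivity_uS_cm", 0.45))   # -> action level 2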

Table 1 includes three parameters requiring special consideration during commissioning: cation conductivity, silica, and sodium. Monitoring cation conductivity is essential since it warns of salts and acids that may cause turbine corrosion. Controlling silica levels in the steam is important because silicate scaling may contribute to turbine capacity and efficiency losses. Monitoring sodium is critical for avoiding corrosion because uncontrolled sodium hydroxide concentrations are known to cause corrosion damage failures in boiler tubes. [1]

The recommendations listed in Table 1 are based on stringent steam quality and feedwater requirements for long-term corrosion control and for reducing forced outages caused by water quality. Most boiler and turbine manufacturers have either agreed to the chemistry limits outlined in Table 1 or have proposed similar limits. From Bechtel's perspective, these recommendations are acceptable during operation. Although the recommendations listed in Table 1 are acceptable as targeted chemistry limits during operation, EPC contractors would like to see the following two columns added to this table:

• Allowable chemistry excursions during hot startup
• Allowable chemistry excursions during cold startup

Feedwater chemistry control is also essential for successful OT. Table 2 specifies EPRI feedwater chemistry guidelines.

The two most important parameters in Table 2 are feedwater cation conductivity and pH. Cation conductivity should be maintained below 0.15 µS/cm during operation on OT. The normal pH range for feedwater under OT is 8.0 to 8.5. The EPC contractor is challenged with controlling pH when feedwater cation conductivity increases to the concentration levels listed in Table 1 for action levels 1, 2, and 3 (0.3 µS/cm, 0.6 µS/cm, and >0.6 µS/cm, respectively). [1] Controls for pH and cation conductivity are discussed in boiler and steam turbine startup documents; however, these chemistry control guidelines are not always consistent. The pH/conductivity relationship is crucial for once-through cycles on OT; thus, the EPC contractor implements the chemistry control at its own risk. Important issues to be addressed when implementing OT include:

• At what point during the startup and commissioning process should the chemistry regime be switched from AVT to OT to prevent frequent switching back and forth between a reducing and an oxidizing environment?
• What would be the detrimental effects of going from an oxidizing atmosphere to a reducing (or close to reducing) atmosphere for temporary periods?
• How can these detrimental effects be quantified and addressed during design and equipment procurement?

ROLE OF CONDENSATE POLISHERS DURING COMMISSIONING

Once-through supercritical boilers are commonly installed with full-flow condensate polishers to control the concentration of corrosive impurities in condensate and feedwater systems. The presence of impurities in feedwater will significantly affect feedwater chemistry, potentially exceeding boiler supplier feedwater limits and turbine supplier steam


purity specifications. Although chemistry control with full-flow condensate polishers makes startup and commissioning activities progress smoothly, a certain degree of boiler cleanliness must be achieved before placing condensate polishers in operation. If condensate polishers are operated before a certain level of cleanliness is achieved, there will be an increase in the frequency of chemical regenerations (for deep-bed condensate polishers) or resin replacement (for precoat condensate polishers), which will lead to an increase in operations and maintenance costs during commissioning.

The EPC contractor should evaluate both precoat and deep-bed condensate polishers for use during commissioning and startup. Tight, perfectly installed condenser tubes can't be confirmed during startup without extensive condenser tube testing and installation of an expensive leak detection system. Therefore, the EPC contractor must use its own experience in selecting one type of polisher over the other, weighing the cost/benefit of each type. Generally, when circulating water is brackish or seawater, a deep-bed polisher is required without exception. Bechtel's design standard is to use deep-bed condensate polishers on all cycles designed to operate on OT.

Impurities shaken loose during startup may cause a chemistry hold, where plant load increases are temporarily halted until these impurities are removed from the system. For a once-through supercritical boiler, impurities are removed exclusively by condensate polishers subsequent to chemical cleaning and boiler flush. Once impurities are removed, the chemistry hold is lifted and the plant is allowed to continue to ramp up to full load without exceeding allowable boiler or turbine chemistry limits. Operation at low or reduced loads during startup is frequently insufficient to eliminate these chemistry holds. Installation of polishers allows the plant to reach full power more quickly, resulting in substantial cost savings and increased revenue production. The cost of condensate polisher regenerations should be accounted for in the overall commissioning costs. To minimize the number of condensate polisher regenerations, polishers should be operated beyond ammonia break, if feasible. However, the presence of a full-flow condensate polisher doesn't make the unit immune to chemistry problems. A major condenser leak during commissioning will still lead to severe chemistry excursions, even with the aid of a condensate polisher.

CASE HISTORIES Case History 1: Once-Through Supercritical Boiler Commissioning With ACC This case study discusses a power plant currently in full operation. It has a once-through supercritical boiler, commissioned in early 2000, and an air-cooled condenser (ACC). Startup followed a chemistry control program similar to what is now classified as all volatile treatment oxidizing (AVT[O]). Feedwater chemistry and steam purity control is a challenge on ACC-equipped units. During commissioning, the EPC contractor faced numerous difficulties controlling oxygen, pH, and cation conductivity due to the ACCs large condensing surface and frequent regenerations required by precoat condensate polishers at ammonia break. In addition, turbine supplier silica requirements couldnt be relaxed because this provision hadnt been negotiated with the turbine supplier before turbine contract award. The EPC contractor mitigated chemistry and steam purity control issues using a membrane contactor to remove dissolved gasses, particularly dissolved oxygen, from the makeup water to the cycle. Membrane contactors containing microporous hydrophobic membranes were used to bring gas and liquids in direct contact without mixing. The contactors lowered gas pressure and created a driving force that removed dissolved gasses from the water. Installed at the optimum location, membrane contactors are highly efficient and compact. Using a membrane contactor allowed the EPC contractor to reduce makeup water impurities, resulting in improved feedwater quality control. Challenges of Commissioning a Once-Through Supercritical Boiler with an ACC Chemistry control during commissioning of a once-though supercritical boiler with an ACC is complicated by the ACCs large condensing surfaces, on which high-purity steam must condense. To meet steam quality requirements, these surfaces must be contaminant free. However, ACCs cannot be chemically cleaned or efficiently flushed with water; therefore, the EPC contractor must rely on cleanliness controls implemented during shop fabrication and site installation. In addition, large surface areas dramatically increase iron content in condensate. Full-flow condensate polishers help to remove iron; however, pressure drop through the polishers increases rapidly, compared to a system operating with a traditional steam surface condenser, and requires frequent regenerations

or polisher bed cleanings. Additional precoat filters or cartridge filters upstream of the main condensate polishers should be considered, at a minimum, for initial startup and commissioning to provide additional cleaning, to supplement the online condensate polishers in crud removal, and to speed up the plant startup process.

Case History 2: Once-Through Supercritical Boiler Commissioning
This case history discusses a unit in the final stages of construction. The preliminary startup and commissioning chemistry control philosophy has been developed. The unit will start up on AVT and will normally operate on OT. The EPC contractor and the boiler and turbine suppliers negotiated target chemistry guidelines to be used during plant commissioning. After cleaning and flushing contaminants from the condensate, feedwater, and boiler systems and completing steam blows, the EPC contractor will initiate startup in turbine bypass mode until the startup steam chemistry limits listed in Table 3 are met.
Table 3. Startup Steam Chemistry Limits

  Parameter                                    Limit
  Degassed Cation Conductivity, µS/cm          <0.45
  Sodium, ppb                                  <12
  Silica, ppb                                  <40
  Chloride, ppb                                <12
  TOC, ppb                                     <100
  Sulfate, ppb                                 <12

After demonstrating that the startup steam chemistry limits have been met, turbine roll will be initiated and the turbine startup process will continue, including loading the turbine to full load. The steam chemistry will be monitored to ensure that chemistry is continually improving from the startup steam chemistry limits listed in Table 3 to the balance-of-commissioning-period chemistry limits listed in Table 4. A negotiated period of 168 operating hours will be allowed to achieve steam chemistry below the balance-of-commissioning-period chemistry limits.

Table 4. Balance-of-Commissioning-Period Chemistry Limits

  Parameter                                    Limit
  Degassed Cation Conductivity, µS/cm          <0.30
  Sodium, ppb                                  ≤3
  Silica, ppb                                  <20
  Chloride, ppb                                ≤3
  TOC, ppb                                     ≤100
  Sulfate, ppb                                 ≤3

If the chemistry limits in Table 4 are not met within the allotted 168 operating hours, the EPC contractor and turbine supplier shall mutually agree to an approach for demonstrating the balance-of-commissioning-period chemistry limits while operating in the bypass mode. The negotiation of relaxed turbine steam purity limits during startup confirms that an additional allowance can be given to the EPC contractor for impurities that could impact startup and delay the overall commissioning schedule.

Recommended Startup Feedwater Chemistry for Once-Through Boilers When Implementing OT
AVT(O) and AVT(R) are the two best-known methods referenced by EPRI for startup of once-through boilers implementing OT during operation. Operational chemistry control guidelines for each of these methods are summarized in Table 5 and Table 6, respectively.

Table 5. AVT(R) Feedwater Chemistry Control Guidelines [1]

  Parameter                                    Normal Limit
  Cation Conductivity, µS/cm                   <0.2
  pH, STU                                      9.2 to 9.6
  Dissolved Oxygen at Economizer Inlet, ppb    <5
  Iron, ppb                                    <2
  Copper, ppb                                  <2
  ORP, mV                                      -300 to -350

Table 6. AVT(O) Feedwater Chemistry Control Guidelines [1]

  Parameter                                    Normal Limit
  Cation Conductivity, µS/cm                   <0.2
  pH, STU                                      9.2 to 9.6
  Dissolved Oxygen at Economizer Inlet, ppb    <10
  Iron, ppb                                    <2
  Copper, ppb                                  <2
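The staged limits in Tables 3 and 4 lend themselves to a simple automated check during commissioning. The sketch below mirrors the numeric limits in those tables; the parameter names and the data structure are illustrative only.

```python
# Illustrative staged steam-chemistry check against the Table 3 (startup) and
# Table 4 (balance-of-commissioning) limits; units match the tables above.
STARTUP_LIMITS = {                     # Table 3
    "degassed_cation_conductivity_uS_cm": 0.45,
    "sodium_ppb": 12, "silica_ppb": 40, "chloride_ppb": 12,
    "toc_ppb": 100, "sulfate_ppb": 12,
}
BALANCE_LIMITS = {                     # Table 4
    "degassed_cation_conductivity_uS_cm": 0.30,
    "sodium_ppb": 3, "silica_ppb": 20, "chloride_ppb": 3,
    "toc_ppb": 100, "sulfate_ppb": 3,
}

def exceedances(sample, limits):
    """Return the parameters in 'sample' that exceed their limit."""
    return {k: v for k, v in sample.items() if k in limits and v > limits[k]}

def commissioning_stage_ok(sample, hours_since_turbine_roll):
    """Startup limits apply first; the Table 4 limits must be met within the
    negotiated 168 operating hours after turbine roll."""
    limits = BALANCE_LIMITS if hours_since_turbine_roll >= 168 else STARTUP_LIMITS
    return not exceedances(sample, limits)
```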

During commissioning, the EPC contractor must develop a chemistry implementation program to meet the guidelines specified in Table 5 or Table 6, as applicable. For startup and commissioning of a once-through supercritical boiler with a deaerator, the feedwater chemistry control guidelines specified under AVT(O) and AVT(R) are readily achievable. However, for cycles without a deaerator, it is more difficult to achieve the AVT(O) and AVT(R) feedwater chemistry guidelines (particularly the dissolved oxygen and iron limits), even if oxygen is removed from makeup water prior to introduction into the cycle through, for example, membrane contactors. Elimination of noncondensable gases from the system is limited by the condenser air removal system's efficiency and capacity when no deaerator is included in the cycle design. Suggested chemistry guidelines for cycles without deaerators are listed in Table 7. These proposed guidelines are based on Bechtel's startup experience, taking into account that oxygen removal to the low levels proposed for AVT(R) and AVT(O) operation is an important, but not crucial, requirement in the absence of a deaerator.
Table 7. Suggested Startup Feedwater Chemistry Guidelines for Once-Through Cycles Without Deaerators

  Parameter                                    Limit
  Cation Conductivity, µS/cm                   <0.2
  pH, STU                                      9.2 to 9.6
  Dissolved Oxygen at Economizer Inlet, ppb    <100
  Iron, ppb                                    <5
  Copper, ppb                                  <2

Case History 3: Once-Through Supercritical Boiler Commissioning Without Deaerator
This case history discusses a once-through supercritical unit, without a deaerator, that is still in the project development phase; however, a preliminary plant startup and commissioning plan has been developed. The unit will be operated on OT during normal operation. Because there is no deaerator, reaching the EPRI-recommended cation conductivity, iron, and dissolved oxygen limits will be a greater challenge. The commissioning and startup plan includes unit startup on AVT, as recommended by EPRI. [1] However, to minimize high iron transport and deposition, the plan calls for unit startup on AVT(O). Startup on AVT(O) will control pH by adding ammonia, increase temperature to reduce dissolved oxygen concentration through use of an auxiliary boiler and sparging in the hotwell, and reduce cation conductivity by treating water through the full-flow condensate polishers. Additional startup schedule time, compared to the time normally allotted for a unit with a deaerator, has been included because reaching the dissolved oxygen and cation conductivity limits is not anticipated to be a quick and easy task. Additional schedule time was also added for condenser and feedwater sparging with steam from the auxiliary boiler, helium sweep of the condenser and vacuum areas, and unit inspection for vacuum leaks. If it is impossible to meet the dissolved oxygen and cation conductivity limits within a reasonable timeframe, the option for startup on AVT(R) is available. Once cation conductivity has reached the required 0.15 µS/cm level, the unit will be switched from AVT to OT in accordance with the EPRI guidelines. [1]

EPC STARTUP CHALLENGES

In addition to stringent steam quality limits implemented by steam turbine suppliers, boiler manufacturers have tightened limits on feedwater chemistry. OT guidelines call for consistent feedwater quality with cation conductivity ≤0.15 µS/cm (see Table 2) before and during OT chemistry program implementation. AVT guidelines call for similar chemistry limits (<0.2 µS/cm). Since meeting these chemistry limits during startup and commissioning is extremely difficult, the EPC contractor requires the standards to be relaxed during commissioning to permit timely unit startup.

One of the EPC contractor's biggest dilemmas during commissioning is determining the appropriate time to switch from AVT to OT, even when considering EPRI and boiler supplier guidelines. Once cation conductivity levels are stable below 0.15 µS/cm, EPRI recommends operation on OT with oxygen injection in a pH range of 8.0 to 8.5. EPRI guidelines also state that oxygen injection into feedwater may continue with pH controlled between 9.2 and 9.6 and cation conductivity between 0.15 µS/cm and 0.3 µS/cm. However, at cation conductivity levels greater than 0.3 µS/cm, EPRI recommends that oxygen injection be terminated and AVT resumed. [1] Upsets in cation conductivity may lead to serious corrosion problems if oxygen is continuously fed during upset conditions. Defining stable operation, given the many factors in play and the pieces of equipment still being tested during a typical unit startup, is the true challenge to the EPC contractor. Preventing system chemistry from switching back and forth between AVT and OT is extremely important. Detrimental effects caused by switching between AVT and OT include increased iron transport through dissolution of the magnetite and protective hematite layers (developed during OT operation), boiler deposits, and increased boiler pressure drop.
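The EPRI switching guidance just described reads naturally as a simple decision rule. The sketch below encodes it; what constitutes "stable" operation below 0.15 µS/cm is deliberately left as a project-specific judgment, since that is the hard part in practice.

```python
def ot_chemistry_action(cation_conductivity_uS_cm, stable_below_015):
    """Suggested action following the EPRI guidance described above.
    'stable_below_015' is a project-defined judgment that conductivity has
    held steadily below 0.15 uS/cm."""
    if cation_conductivity_uS_cm > 0.3:
        return "Terminate oxygen injection and resume AVT"
    if cation_conductivity_uS_cm > 0.15:
        return "Oxygen injection may continue; control pH at 9.2 to 9.6"
    if stable_below_015:
        return "Operate on OT with oxygen injection; control pH at 8.0 to 8.5"
    return "Remain on AVT until conductivity is demonstrably stable below 0.15 uS/cm"
```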

WARRANTY IMPLICATIONS

Steam turbine suppliers are also setting limits, in the equipment contract, on the number of hours a turbine can be operated with out-of-specification chemistry. These limits are typically listed in an action-level

format where minor chemistry excursions are allowable for predetermined time periods without violating the equipment warranty. The more severe the chemistry excursion, the shorter the operating time period the supplier will allow while maintaining the equipment warranty. These action levels impose additional constraints on steam turbine operation. The limit on the number of hours of operation in each action level is very difficult to meet without adversely impacting the equipment warranty should delays arise during unit startup. During startup and commissioning, steam chemistry is expected to be degraded compared to full-load, steady-state operation because of the numerous cold and hot startups experienced in a short timeframe. Therefore, during each startup, the turbine will operate with degraded steam purity (within the specified action levels). From Bechtel's perspective, hours of operation under each of the different action levels accumulated during the commissioning and startup phase should not count against allowable hours for warranty purposes.
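Action-level accounting of this kind is straightforward to track. The sketch below accumulates out-of-specification operating hours per action level for a single parameter; the conductivity bands and allowable-hour budgets shown are hypothetical placeholders, since the governing values come from the turbine contract.

```python
from collections import defaultdict

# Hypothetical action-level bands for one parameter (cation conductivity, uS/cm)
# and hypothetical cumulative-hour budgets; contract values govern in practice.
ACTION_LEVELS = [(0.6, 3), (0.3, 2), (0.2, 1)]   # (threshold, level), most severe first
ALLOWED_HOURS = {1: 336, 2: 168, 3: 24}          # illustrative budgets only

def action_level(conductivity):
    """Return the action level for a reading, or 0 if within normal limits."""
    for threshold, level in ACTION_LEVELS:
        if conductivity > threshold:
            return level
    return 0

def accumulate(readings_with_hours):
    """Sum operating hours spent in each action level.
    readings_with_hours: iterable of (conductivity, hours_at_that_reading)."""
    totals = defaultdict(float)
    for conductivity, hours in readings_with_hours:
        level = action_level(conductivity)
        if level:
            totals[level] += hours
    # Return hours per level and whether the illustrative budget is exceeded.
    return {lvl: (hrs, hrs > ALLOWED_HOURS[lvl]) for lvl, hrs in totals.items()}
```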

CONCLUSIONS

The EPC contractor's ultimate goal is to perform an efficient once-through supercritical boiler and turbine startup and commissioning. Stringent operational chemistry guidelines applied to startup and commissioning activities negatively impact quick and efficient startup. To meet chemistry guidelines, rigorous cleaning and inspection procedures must be adhered to during all fabrication, construction, and installation phases. The success of any cleaning program is ultimately judged by the ease with which acceptable feedwater and steam chemistry is achieved. Practical startup chemistry guidelines should be established by consensus among the turbine manufacturer, boiler manufacturer, and EPC contractor early in project development and outlined in the equipment contracts. These startup guidelines should be based on the EPC contractor's startup experience, the manufacturers' desire to prevent corrosion and deposition in equipment components, and the EPC contractor's and owner's desire for efficient and timely unit startup. Table 8 provides practical chemistry limits suitable for startup and commissioning activities for once-through supercritical boilers.

Table 8. Steam Purity Limits During Startup: EPC Contractor Recommendation for Once-Through Boilers

  Parameter                                    Frequency              Limit
  Degassed Cation Conductivity, µS/cm          Continuous Sampling    <0.45
  Sodium, ppb                                  Grab Daily             <12
  Silica, ppb                                  Grab Daily             <40
  Chloride, ppb                                Grab Daily             <12
  TOC, ppb                                     Grab Weekly            <200
  Sulfate, ppb                                 Grab Daily             <12

Using the practical chemistry limits provided in Table 8, the typical operating duration would be about one week and would be outlined in the equipment contracts. After this, the normal chemistry limits, as recommended by EPRI and the equipment manufacturers, would be met and maintained. Time-based chemistry limits and cumulative hours under action levels would be started after commissioning. For once-through supercritical boilers without deaerators, startup chemistry guidelines should be developed allowing the EPC contractor as much allowance on dissolved oxygen as practical. Finally, it is important to develop a relationship of trust among the EPC contractor, turbine and boiler suppliers, and owners/operators, for it is through trust that the combined chemistry knowledge of these parties can be integrated to complement plant startup and bring the unit online more quickly, resulting in economic rewards for all.
REFERENCES
[1] Cycle Chemistry Guidelines for Fossil Plants: Oxygenated Treatment, Electric Power Research Institute (EPRI) Technical Report 1004925, March 9, 2005, <http://my.epri.com/portal/serverpt?space=CommunityPage&cached=true&parentname=ObjMgr&parentid=2&control=SetCommunity&CommunityID=277&PageID=0&RaiseDocID=000000000001004925&RaiseDocType=Abstract_id>.

This paper was originally presented at the 68th Annual International Water Conference (IWC), held October 21-25, 2007, in Orlando, Florida.

BIOGRAPHIES
Kathi Kirschenheiter has worked for Bechtel Power for more than 7 years. Her experience has focused mainly on the engineering design of water and wastewater treatment systems, including equipment specification and procurement. She has worked with several different Bechtel GBUs in various locations, including Power in Alabama with the Tennessee Valley Authority's Browns Ferry Nuclear Plant Unit 1 Restart, BNI in Washington with the Hanford Waste Treatment Plant Project, and OG&C in London with the Jamnagar Export Refinery Project. Kathi holds a BS in Chemical Engineering from Purdue University, West Lafayette, Indiana, and is currently pursuing an ME in Environmental Engineering from the Johns Hopkins University, Baltimore, Maryland. She is a registered professional engineer in the state of Maryland in Environmental Engineering and is currently a Black Belt Candidate in Bechtel Power's Six Sigma program.

Michael Chuk has worked for Bechtel's Power business for more than 3 of his 4 years in the industry. His experience includes the engineering, design, and procurement of water and wastewater treatment systems for power plant projects. Michael is part of the mechanical engineering group, and he has most recently been working on the Prairie State 1,600 MW supercritical coal-fueled plant project. Mike's work has included awarding water treatment packages and completing system design and sizing calculations for all of the project's water treatment equipment. Previously, he worked in Bechtel Power's water treatment engineering group on several other fossil, nuclear, and integrated gasification combined-cycle (IGCC) plant projects. His responsibilities included water balance and water characterization calculations, and early procurement activities such as preparation of material requisitions and specifications. Michael holds a BS in Chemical Engineering from Worcester Polytechnic Institute, Worcester, Massachusetts. He is an engineer-in-training in the state of Maryland.

Colleen Layman, manager of water treatment, has more than 15 years of experience in water and wastewater treatment for power generating facilities. Her wide variety of experience includes engineering design, construction, and startup of power generating facilities; field service and engineering of water and wastewater treatment equipment and water quality control programs;
and experience in the day-to-day operations of a power plant burning waste anthracite coal. Currently, as manager of Bechtel Power's water treatment group, she is responsible for the conceptual design, process engineering, startup and operational support of the water/wastewater treatment systems, and the steam/water chemistry issues for Bechtel's power projects. Colleen is an active member of both the American Society of Mechanical Engineers and the Society of Women Engineers. She currently serves as a member of ASME PTC 31, High-Purity Water Treatment Systems; as a member of the ASME Research and Technology Committee on Water and Steam in Thermal Systems; and as President of the Baltimore-Washington Section of the Society of Women Engineers. Colleen holds an MS in Water Resources and Environmental Engineering from Villanova University, Pennsylvania, and a BS in Mechanical Engineering Technology from Thomas Edison State College, Trenton, New Jersey. She is a registered professional engineer in the state of Ohio.

Kumar Sinha, principal engineer, water and wastewater, on Bechtel's mechanical engineering staff, has 40 years of experience (30 years of which have been with Bechtel) dealing with water and wastewater for power plants, refineries, and other industries. He has held increasingly responsible positions, including senior water treatment engineer, water treatment supervisor, senior engineer and project engineer with Bechtel Civil, supervisor and principal engineer with the Fossil Technology Group, and principal engineer for the Mechanical Project Acquisition Group. His experience includes project and process engineering, licensing, construction support, startup, and hands-on operation of water and wastewater systems in the US and abroad. Areas of expertise include boiler water and steam chemistry, pretreatment and demineralization, water desalination, treatment of cooling water, and wastewater disposal, including water recycle and zero discharge. Kumar has been an executive committee member of the International Water Conference since 2004 and was general chair in 2007 and 2008, was a member of the American Society of Mechanical Engineers Subcommittee on Water Technology and Chemistry, was a member and director of the Engineers' Society of Western Pennsylvania in 2007 and 2008, and is a retired member of the American Institute of Chemical Engineers. Kumar received an MS in Energy Engineering from the University of Illinois, Chicago Circle campus, and has completed various business courses at Bechtel and Eastern Michigan University. He holds a BS in Chemical Engineering from the University of Ranchi, India, and is a registered professional engineer in the state of Illinois.

CO2 CAPTURE AND SEQUESTRATION OPTIONS: IMPACT ON TURBOMACHINERY DESIGN

Justin Zachary, PhD (jzachary@bechtel.com)
Sara Titus (sltitus@bechtel.com)

Originally Issued: June 2008; Updated: December 2008

Abstract: In today's climate of uncertainty about carbon dioxide (CO2) emissions legislation, owners and power plant planners are evaluating the possibility of accommodating add-on carbon capture and sequestration (CCS) solutions in their current plant designs. The variety of CCS technologies under development makes the task very challenging. This paper discusses the more mature post-combustion CCS technologies, such as chemical absorption, and the associated equipment requirements in terms of layout, integration into the generating plant, and auxiliary power consumption. It also addresses supercritical coal-fired as well as combined cycle plants; evaluates plant configuration details and various operational scenarios; and discusses the issues related to balance-of-plant systems, including water treatment, availability, and redundancy criteria. The paper then presents several options for pre-combustion processes, oxy-fuel combustion, and integrated gasification combined cycle (IGCC) water-shift carbon monoxide (CO) conversion to CO2. The impacts of several processes that only partially capture carbon are also evaluated from an engineering, procurement, and construction (EPC) contractor's perspective as plant designer and integrator. Finally, the paper presents several examples of studies in development by Bechtel in which a neutral but proactive technical approach was applied to achieve the best and most cost-effective solution.

Keywords: chemical looping, CO2 capture, gas turbine, Graz cycle, oxy-fuel combustion, steam turbine, turbomachinery
INTRODUCTION

CO2 Capture Options
In the past few years, the major thrust to lower greenhouse gases (GHG) emitted into the atmosphere has been directed toward increasing the thermal efficiency of plant equipment, in particular the turbomachinery. The other option for lowering GHG emissions is to capture the CO2, a process associated with significant thermal efficiency losses. In the power generation industry, the most common CO2 capture technologies fall under these categories:

•	Post-combustion capture of CO2 from the plant exhaust flue gases using chemical absorption.
•	Capture of CO2 before the combustion process. In this arrangement, the fuel is synthesis gas (syngas) containing mostly hydrogen and CO. The CO is converted to CO2 in a water-shift reactor; then, the CO2 is removed by a physical absorbent and hydrogen (H2) is used as fuel in the gas turbine.
•	CO2 capture from a number of different processes such as oxy-fuel combustion and chemical looping.

These technologies all have relatively low efficiency and high cost. Other than the post-combustion amine-based processes, all are in various stages of concept validation or small-scale demonstration. [1] The available CO2 capture options are summarized in Figure 1.

Meaning of CO2 Capture Ready
CO2 capture and sequestration (CCS) from power generation sources will eventually be required in one form or another. Today the timing and the extent of regulations governing the process are only speculative. So far, none of the existing technologies has emerged as the dominant solution, and many new and innovative alternatives are in various stages of research, development, or testing. For plants in the planning or design stage, owners; engineering, procurement, and construction (EPC) contractors; and equipment suppliers are trying to determine which features need to be applied today for future CCS implementation.
ABBREVIATIONS, ACRONYMS, AND TERMS

AC - amine cooler
AGR - acid gas removal
AQCS - air quality control systems
ASU - air separation unit
AZEP - advanced zero emission plant
B&W - The Babcock & Wilcox Company
CAR - ceramic auto-thermal recovery
CC - combined cycle
CCS - carbon capture and sequestration
CEDF - Clean Environment Development Facility
CFB - circulating fluidized bed
CFD - computational fluid dynamics
CLC - chemical looping combustion
DOE - US Department of Energy
ENCAP - enhanced capture CO2 program
EPC - engineering, procurement, and construction
Fg - the force exerted by gravity
FG - flue gas
FGD - FG desulfurization
FGR - FG recirculation
GHG - greenhouse gases
GJ - gigajoule, an SI unit of energy equal to 10^9 joules
GTH2 - gas turbine burning hydrogen
GTsyn - gas turbine burning syngas
HHV - higher heating value
HP - high pressure
HPT - HP turbine
HRSG - heat recovery steam generator
HTT - high-temperature turbine
HX - heat exchanger
ICFB - internally CFB
ID - induced draft
IP - intermediate pressure
IGCC - integrated gasification CC
ITM - ion transport membrane
LP - low pressure
LPST - LP steam turbine
MEA - monoethanolamine
MPT - measuring point for mass, pressure, and temperature
MWe - megawatt electrical
OTM - oxygen transport membrane
PC - pulverized coal
ppm - parts per million
SCOC-CC - semi-closed oxy-fuel combustion CC
STG - steam turbine generator
syngas - synthesis gas
USC - ultra-supercritical
In this context, the term CO2 capture ready requires careful consideration. Beyond the technical challenges, the commercial investment in the specific features aimed at future CCS must be justified. Selecting a specific CCS technology poses a significant risk because the technology could become obsolete and result in a stranded asset. At this juncture, a pragmatic approach requires evaluating all known factors in existing carbon capture technologies, considering the additional space for the carbon capture facility, and laying out the plant to incorporate and modify existing hardware at a later date. This paper addresses the impact various methods of carbon capture have on the turbomachinery and the gas and steam turbines, focusing mainly on add-on features that do not require substantially modifying existing equipment. Because carbon capture is an energy-intensive process, the discussion also covers the impact on plant performance of large steam extractions for CO2 capture processes and
the use of electrical power for CO2 compression. Cost estimates of various technologies are not included, however, due to the high volatility of labor, material, and equipment prices.

BRIEF REVIEW OF CO2 CAPTURE TECHNOLOGIES

Post-Combustion
The post-combustion capture of CO2 from flue gases can be done by various methods: distillation, membranes, adsorption, and physical and chemical absorption. Absorption in chemical solvents such as amines is a proven technology performed consistently and reliably in many applications. It is used in natural gas sweetening and hydrogen production. The reaction between CO2 and amines currently offers the most cost-effective solution for directly obtaining high-purity CO2. [2]

Figure 1. Available CO2 Capture Options (post-combustion, pre-combustion, oxy-fuel combustion, and chemical looping branches)
In the post-combustion capture process, the flue gases from the power plant are cooled and treated to reduce particulates, sulfur oxides (SOx), and nitrogen oxides (NOx). Then, boosted by a fan to overcome pressure drops in the system, the flue gases pass through an absorber. A lean amine solution, typically monoethanolamine (MEA), counter-currently contacts the flue gases to absorb the CO2. The clean flue gases continue to the stack. The CO2-rich amine solution is then pumped into a stripper (regenerator) to separate the amine from the CO2. The energy to desorb the CO2 from the solution is provided by steam. The CO2-rich vapor from the top of the stripper is condensed, and the CO2 is sent for further drying and compression. A schematic is given in Figure 2, and Table 1 summarizes the advantages and limitations of this process.

Figure 2. Post-Combustion Amine Process

Table 1. Amine Scrubbing Advantages and Limitations

  Advantages:
  • Proven technology
  • Known chemical process
  • Ability to predict performance and solvent consumption

  Limitations:
  • High energy consumption for solvent regeneration
  • High rate of process equipment corrosion
  • High solvent losses due to fast evaporation
  • High degradation in presence of oxygen

Several commercial solvents are available. Fluor's Econamine FG(SM), using a 30% aqueous solution of MEA solvent, is the most widely deployed process, with more than 20 plants located in the United States, China, and India. Yet none of these plants captures CO2 from coal-derived flue gas. The KS-1 solvent produced by Mitsubishi Heavy Industries, Ltd., is in operation in four commercial-scale units capturing between 200 and 450 metric tons of CO2 per day. The main development effort aims at lower heat of reaction, lower sensible heat duty, and lower heat of vaporization. Bechtel, in cooperation with HTC Purenergy, conducted an engineering study for a 420 MW gross power combined-cycle CCS system in Karsto, Norway, applying a proprietary solvent. See [3] for details on the solvent properties and regeneration process.
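The regeneration heat duty is what ultimately sets the steam draw on the power cycle. As a rough illustration (the specific regeneration energy of about 3.6 GJ per tonne of CO2 is a typical published value for MEA-class solvents, not a figure from this paper), the sketch below converts a regeneration duty into an approximate steam demand per kilogram of CO2 captured.

```python
# Rough conversion of solvent regeneration duty to LP steam demand.
# Assumed inputs (illustrative only): regeneration energy for an MEA-class
# solvent and the latent heat of roughly 3 bar saturated steam.
REGEN_ENERGY_GJ_PER_T_CO2 = 3.6      # GJ of reboiler heat per tonne CO2 (assumed)
STEAM_LATENT_HEAT_MJ_PER_KG = 2.16   # approx. latent heat of 3 bar steam

def steam_per_kg_co2(regen_gj_per_t=REGEN_ENERGY_GJ_PER_T_CO2,
                     latent_mj_per_kg=STEAM_LATENT_HEAT_MJ_PER_KG):
    """kg of LP steam condensed in the reboiler per kg of CO2 captured."""
    regen_mj_per_kg_co2 = regen_gj_per_t   # GJ/tonne is numerically equal to MJ/kg
    return regen_mj_per_kg_co2 / latent_mj_per_kg

print(f"~{steam_per_kg_co2():.1f} kg steam per kg CO2")   # roughly 1.7 kg/kg
```

A value of this order is consistent with the steam demand cited later in this paper for a 90% capture case.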

Post-Combustion Chilled Ammonia
Ammonia is the lowest form of amine. Like other amines, it can absorb CO2 at atmospheric
pressure, but at a slower rate than that of MEA. The chilled ammonia system uses a CO2 absorber similar to sulfur dioxide (SO2) absorbers and is designed to operate with slurry.

The process requires the flue gas to be cooled before entering the cleanup system. The cooled flue gas flows upward, counter-current to a slurry containing a mix of dissolved and suspended ammonium carbonate (AC) and ammonium bicarbonate (ABC). More than 90% of the CO2 from the flue gas is captured in the absorber. The CO2-rich spent ammonia is regenerated under pressure to reduce the CO2 liquefaction compression energy requirement. The remaining low concentration of ammonia in the clean flue gas is captured by a cold-water wash and returned to the absorber. The clean flue gas, which now contains mainly nitrogen, excess oxygen, and a low concentration of CO2, flows to the stack. A schematic of this process is given in Figure 3, and Table 2 summarizes the advantages and limitations of this process.

Alstom owns the process rights. Several participants, including Alstom, have started tests on a sidestream pilot plant at the Pleasant Prairie Power Plant in Wisconsin. This pilot will be able to capture CO2 emissions from a slipstream of less than 1% from one of the two boilers operating at the plant. Additionally, a non-chilled ammonia scrubbing process is being planned by Powerspan Corp. for demonstration at FirstEnergy's Burger Power Station in Ohio.

CHEMICAL ELEMENTS

ABC - ammonium bicarbonate
AC - ammonium carbonate
Ar - argon
C - carbon
CaCO3 - calcium carbonate
CaO - calcium oxide
CaS - calcium sulfide
CaSO4 - calcium sulfate
CO - carbon monoxide
CO2 - carbon dioxide
H2 - hydrogen
H2O - water
Hg - mercury
HX - hydrogen halide
Me - metal
MeO - metal oxide
N2 - nitrogen gas
NG - natural gas
NH3 - ammonia
NiO - nickel oxide
Ni - nickel
NO - nitric oxide
NO2 - nitrogen dioxide
NOx - nitrogen oxide
O2 - oxygen
SO2 - sulfur dioxide
SOx - sulfur oxide
Table 2. Chilled Ammonia Advantages and Limitations

  Advantages:
  • Compared to using amine, the regeneration energy is potentially lower.
  • Applicable for new plants and for retrofit of existing coal-fired power plants.
  • AC and ABC are stable over a wide range of temperatures.
  • Solvent is oxygen and sulfur tolerant.
  • The heat of the absorption reaction is lower.
  • Potential to capture 90% of CO2 emissions.

  Limitations:
  • Ammonia volatility can be an issue.
  • Absorption rate is slower than that of MEA and requires as much as three times more packing to achieve the same CO2 removal performance.
  • Several absorber vessels are required, increasing capital cost and affecting plant layout.
  • Large-scale process experience is limited; experience should increase knowledge of feasibility.

Figure 3. Chilled Ammonia Schematic [4]
Pre-Combustion IGCC
The main advantage of IGCC pre-combustion CO2 capture is that the amount of fluid to be processed is much smaller than for post-combustion capture at a coal-fired or combined cycle plant. For IGCC, only the syngas is treated, whereas for post-combustion capture the entire exhaust flue gas flow must be processed. For oxygen-blown IGCC, the main syngas components are H2 and CO, with some CO2, steam, nitrogen gas (N2), and traces of other elements. The raw syngas produced by the gasifier must be cleaned of contaminants, including mercury, sulfur, and fluorides. The syngas chemical cleaning processes for acid gas removal (AGR), such as Rectisol or Selexol, are able to remove a certain amount of CO2. However, the actual conversion of the carbon monoxide (CO) into CO2 and H2 occurs in a water-shift process in which steam and syngas are mixed in the presence of a catalyst to convert the CO to CO2 in an exothermic reaction. The shift stage can be integrated into the process either before (sour shift) or after (sweet shift) the sulfur removal stage. High-concentration CO2 capture (90%) will require two stages of shift in addition to the enhanced AGR. The fuel to be burned in the gas turbine is mainly H2 with diluent. The amount of shift will determine the
concentration of H2, which could vary from 35% to 90%. A process schematic is shown in Figure 4.

Figure 4. IGCC with Two-Stage Acid Gas Removal
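The relationship between the degree of shift and the hydrogen content of the fuel can be illustrated with simple stoichiometry. In the sketch below, the inlet (dry-basis) syngas composition and the 90% CO2 removal in the AGR are assumed values for illustration only, and steam for the shift reaction is taken to be supplied as needed.

```python
# Illustrative estimate of fuel H2 content versus CO shift conversion.
# Inlet dry-basis syngas composition and 90% CO2 removal are assumptions only.
INLET_DRY = {"H2": 0.35, "CO": 0.45, "CO2": 0.15, "N2": 0.05}   # mole fractions

def fuel_after_shift(shift_conversion, co2_removal=0.90):
    """CO + H2O -> CO2 + H2 at the given CO conversion, then remove a fraction
    of the CO2 in acid gas removal; returns dry-basis mole fractions of the fuel."""
    n = dict(INLET_DRY)
    reacted = n["CO"] * shift_conversion
    n["CO"] -= reacted
    n["H2"] += reacted
    n["CO2"] = (n["CO2"] + reacted) * (1.0 - co2_removal)
    total = sum(n.values())
    return {k: v / total for k, v in n.items()}

for conv in (0.0, 0.5, 0.97):
    h2 = fuel_after_shift(conv)["H2"]
    print(f"CO conversion {conv:.0%}: fuel H2 ~ {h2:.0%}")
```

With these assumed inlet values, the fuel hydrogen content rises from roughly 40% with no shift to nearly 90% with deep two-stage shift, which is in line with the wide range quoted above; the final concentration also depends on how much diluent is added before the gas turbine.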

Oxy-Fuel Combustion
In an oxy-fuel combustion-based power plant, oxygen rather than air is used to combust the fuel, resulting in a highly pure CO2 exhaust that can be captured at relatively low cost and sequestered. Often, the oxygen is mixed with flue gas to regulate burning, as well as to achieve a high CO2 level in the flue gas. For the Rankine steam cycle, the volume of flue gas leaving the boiler is considerably smaller than the conventional air-fired volume. The reasons for this difference are that the N2 in the air is not part of the flue gas and that the amount of flue gas is approximately 80% less for combustion with oxygen than with air. The flue gas consists primarily of CO2. The schematic is given in Figure 5, and Table 3 summarizes the advantages and limitations of this process.
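The magnitude of that reduction can be seen from an idealized carbon-combustion balance; the sketch below ignores excess oxygen, moisture, ash, and recirculated flue gas and simply compares the oxidant-borne nitrogen in the two cases.

```python
# Rough check of the flue gas reduction claim: burn 1 mol of carbon with air
# versus with (nearly) pure oxygen. Idealized, dry-basis, complete combustion.
N2_PER_O2_IN_AIR = 3.76          # mol N2 accompanying each mol O2 in air

def flue_gas_moles(n2_per_o2_in_oxidant):
    """Moles of flue gas per mole of carbon burned: CO2 plus oxidant-borne N2."""
    return 1.0 + n2_per_o2_in_oxidant

air_fired = flue_gas_moles(N2_PER_O2_IN_AIR)   # ~4.76 mol per mol C
oxy_fired = flue_gas_moles(0.0)                # ~1.0 mol per mol C
print(f"Reduction in flue gas: {(1 - oxy_fired / air_fired):.0%}")   # ~79%
```

The result, roughly an 80% reduction in flue gas quantity, matches the figure cited above; in a real oxy-fuel boiler, part of the CO2-rich flue gas is recirculated to moderate flame temperatures, so the gas flow through the furnace itself is larger than this idealized minimum.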

Figure 5. Oxy-Fuel Process Block Flow Diagram

Table 3. Oxy-Combustion Advantages and Limitations

  Advantages:
  • High concentration of CO2 (80%) in the flue gas may lower the cost of carbon capture.
  • Retrofit possibilities due to similar heat transfer rates; the boiler can be used as oxygen or air blown.
  • 60% lower NOx emissions, and lower mercury concentration in furnace backpass gas.
  • Amount of unburned carbon (C) in ash is reduced.
  • Char combustion rate is increased.

  Limitations:
  • The expensive ASU is typically a cryogenic air separation process with high energy demand (15% to 20% of power output).
  • Due to air ingress, further cryogenic distillation is required after combustion to purify the CO2 in the flue gas, which consumes additional energy.
  • Air infiltration into the system is inevitable to a certain extent and is detrimental to process operation.

Theoretical and experimental research on this technology has intensified in the past 2 years. A 30 MWe oxy-fuel plant is under construction near Schwarze Pumpe, Germany, to demonstrate the technology using an Alstom-supplied boiler. In the United States, the Babcock and Wilcox Company (B&W) and Air Liquide have converted B&W's 30 MW Clean Environment Development Facility (CEDF) in Alliance, Ohio, to an oxy-combustion system. IHI Corporation has a 1.2 MW pilot-scale testing facility in Japan.

The oxy-fuel combustion process uses an air separation unit (ASU), a device requiring high electricity consumption. To reduce the auxiliary load, new, less energy-intensive oxygen separation technologies are in development, including ion transport membrane (ITM), oxygen transport membrane (OTM), and BOC's ceramic auto-thermal recovery (CAR) oxygen production process.

Oxy-fuel combustion is also associated with promising combined cycles involving gas and steam turbines. [5] The Graz cycle and the semi-closed oxy-fuel combustion combined cycle are two examples now under theoretical investigation. The oxy-fuel combustion concept is applicable to a variety of fuels, including methane, syngas, and biomass gasification. In the Graz cycle, the working fluid following the combustion process is a mixture of steam (approximately 75%) and CO2 (approximately 24%), with small amounts of N2 and O2. The turbomachinery needed for this unusual working fluid is discussed below. The expected cycle efficiency is in the range of 50%. A 50 MW oxy-fuel demonstration plant using methane fuel is being planned in Norway. In the United States, the Department of Energy (DOE), in cooperation with Siemens, has instituted a program to develop a high-temperature turbine for these types of plants. A pilot demonstration plant is expected to be operational in about 2015.

Chemical Looping
Chemical looping combustion (CLC) employs a circulating solid such as a metal oxide or calcium oxide to carry oxygen from the combustion air to the fuel. Direct contact between the combustion air and fuel is avoided, and the flue gas is free of N2. The metal oxide is reduced in a fuel reactor by a mixture of natural gas and steam and oxidized in an air reactor. The oxidation reaction is exothermic, and the resulting heat production in the air reactor is used to maintain the oxygen carrier particles at the high temperature necessary for the typically endothermic reaction in the fuel reactor. The fuel could be syngas from coal gasification, natural gas, or refinery gas. The net heat evolved is the same as for normal combustion with direct contact. Because the fuel has no direct contact with air, the flue gas is free of N2. The N2-free flue gas contains moisture, SOx, CO2, and particulates. For coal fuel, after the particulates are removed by a baghouse and moisture is removed by cooling, the remaining gas is essentially high-purity CO2. The high-purity CO2 is compressed and cooled in stages to yield a liquid ready for sequestration. A schematic representation is given in Figure 6, and Table 4 summarizes the advantages and limitations of this process.

Figure 6. Generic CLC Process [6]

Table 4. Chemical Looping Advantages and Limitations

  Advantages:
  • No energy penalty for oxygen production and CO2 separation.
  • No need for an energy-intensive, high-cost ASU (assuming that coal gasification is not needed).
  • Potential exists for greater than 98% CO2 capture.
  • Potential exists to use a variety of fuels (natural gas, coal, residual oil, etc.).
  • Possible to retrofit a conventional air-blown CFB to a CLC CFB with limited modifications.
  • Alstom bench-scale tests suggest the potential to meet ultra-clean low-emissions targets, including CO2 capture, at about the same cost and efficiency as today's power plants.

  Limitations:
  • Most work performed to date has used methane as fuel; only limited studies with oxygen carriers used to react with coal or gasified coal.
  • Carbon deposition (formation of solid carbon) can occur.
  • For combustion of solid fuels, a separate, energy-intensive (due to use of an ASU) gasification process is required.
  • The metal oxide must have high affinity for reaction with fuel but must not decompose at high temperatures or over a large number of cycles.
  • CO can also be produced; some mechanism to control it is possible based on circulation rate.
  • The process has not been demonstrated on a large scale.

Intensive academic research programs now under way are mainly concentrated on finding the appropriate metal oxides for different fuels. Alstom has completed engineering studies and bench-scale tests on the chemical looping process, and some pilot-scale process testing is proceeding. Chalmers' 10 kW CLC testing concluded the following for nickel (Ni)-based particles:

•	No CO2 is released from the air reactor.
•	No significant carbon formation yields 100% CO2 capture.
•	Sand tests show low leakage from the air reactor to the fuel reactor; almost pure CO2 is possible.
•	99.5% conversion of fuel occurs at 800 °C.

Because this method is still undergoing research, further details will not be discussed here.

CASES his section discusses the impact of each CO2 capture technology on the cycle, and particularly on the turbomachinery. Coal-Fired Supercritical With Post-Combustion CCS Power Plant Site Location, Available Space, and Other Requirements Before discussing in detail the evaluation process of determining the impact on the turbomachinery, it is worthwhile to briefly mention the other implications of equipping a plant with a CCS system. Primarily, the suitability of the CO2 sequestration site needs

188

Bechtel Technology Journal

a representative amine-based post-combustion capture system is shown in Table 5. Typical steam conditions are 3 bar and 270 °C. The amount of steam for 90% CO2 recovery from the flue gas may be as high as 1.6 kg of steam for 1 kg of CO2, which is more than 50% of the LP steam turbine flow. Therefore, it is imperative in all plant operational scenarios to consider the possibility that the CO2 capture plant might not be able to receive part or all of the extraction steam. This consideration is important in a case in which the steam turbine has been permanently configured to operate with a reduced LP steam flow. Because venting such large quantities of steam is not an option in this case, any design must offer rapid configuration changes that allow the LP modules to operate under zero extraction conditions. The available options to extract the steam from the system are throttle LP, floating-pressure LP, LP spool with clutched LP turbine, and backpressure turbine.

Table 5. Performance Comparison for Plants With and Without CO2 CCS Capabilities

  Performance Element                                 Without CCS    With CCS    Delta, %
  Gross Power, MW                                     865            702         -19
  Net Power, MW                                       800            542         -32.3
  Steam Turbine Gross Power, MW                       865            662         -23.5
  Auxiliary Loads, MW                                 65             160         +145
  Noncondensing Turbine, MW                           N/A            40          N/A
  Crossover Steam Extraction, % of IP Exhaust Flow    0              62          N/A

Throttle LP: This configuration keeps the crossover pressure constant despite the large amount of steam extracted. The arrangement requires a throttling valve downstream of the solvent steam extraction point. Despite the significant throttling losses that occur, this setup offers the flexibility to extract any amount of steam needed (i.e., for less than 90% CO2 capture scenarios) and the capability to restore full power generation rapidly when the CO2 capture system is not in operation. Throttling valves are commercially available for current LP crossover pipe sizes. A schematic is provided in Figure 7.

Figure 7. Throttling LP Turbine

Floating-pressure LP: In this arrangement, the turbine intermediate-pressure (IP) module must be designed to operate with a variable backpressure. When the CO2 capture plant operates, the crossover pressure is lower. In this case, the last-stage loading of the LP module is increased and the exit losses are higher. For retrofits, the IP last stages can be replaced to match the desired operating conditions at both high and low steam flows, depending on the CO2 capture steam demand. Obviously, additional valves in the extraction line and downstream of the extraction point in the crossover pipe facilitate operational control for the different steam flows required by variable CO2 capture rates (e.g., 30%, 50%,
90%) from the flue gas stream. A schematic is provided in Figure 8.

Figure 8. Floating-Pressure LP Turbine

LP spool with clutched LP turbine: In this scheme, one of the LP modules is connected via a clutch to the generator, in an arrangement similar to that used in a single-shaft combined cycle, in which a clutch is situated between the generator and the steam turbine. In this case, when the CO2 capture plant is in operation, only one LP module is operating and the other is disconnected. The inlet flow and pressure of the operating LP module have to be designed to accommodate the steam conditions at the anticipated CO2 capture levels.

This option is costly, requiring additional structural pedestals and a longer turbine hall, and offers little flexibility for various CO2 capture rates. However, restoring the full capacity of the module when the CO2 plant is not functional is not a complex activity. A schematic is provided in Figure 9.

Figure 9. LP Spool with Clutched LP Turbine

The LP module that remains in operation performs at its design conditions, thus achieving a higher efficiency than with the other options. A variant of this arrangement could even operate without a clutch. In this setup, the second LP must rotate; thus, a minimum amount of steam flow (between 5% and 10% of the LP flow) must pass through this module to prevent
overheating or mechanical vibrations under these minimal flow conditions. Extracting additional steam without producing real power is an added loss for the system. A more permanent solution for the second LP module is to replace the bladed rotor with a dummy. In this scenario, when the post-combustion capture is not operational, the steam cannot be returned to the cycle to produce power and must be either vented or condensed.

Backpressure turbine: If the steam extraction for the post-combustion capture plant is taken from the IP/LP crossover pipe, the pressure and the temperature are too high for direct use in the sorbent regeneration process. One solution to exploit the available energy is to generate electric power through a noncondensing turbine and use that power to reduce the auxiliary loads. A schematic is provided in Figure 10.

Figure 10. Additional Noncondensing Turbine

Table 5 offers an example of the impact of a CO2 capture plant on the overall performance of a plant that is nominally 800 MW net without post-combustion capture. It should be emphasized that each project must conduct its own evaluation based on specific site conditions, the selected capture technology, and the type of sorbent used. Because each steam
turbine vendor has a different cycle design with dissimilar IP module exhaust pressures, the output power of the noncondensing turbine varies accordingly. In the given example, it can be seen that the steam extraction for the post-combustion capture plant reduces the steam turbine output by almost 23%. Because of the post-combustion capture plant, which in this example also contains the CO2 compression section, the auxiliary loads increase by almost 95 MW. The noncondensing turbine produces 40 MW of power; without this turbine, the auxiliary loads would be even higher.

Summary of Options Impact
Figure 11 depicts the relative efficiency loss for each option. [7] This comparison of plant output does not account for the auxiliary power loss due to CO2 compression loads. As expected, the setup with an additional noncondensing turbine offers the lowest power loss (7%), followed by the clutch arrangement, which has the least steam throttling and the lowest LP turbine losses. However, both options require additional hardware or significant modifications to the plant arrangement. For a retrofit, these alternatives require substantial pre-investment and site preparation.
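As a quick arithmetic cross-check of the Table 5 example discussed above (all numbers are taken directly from that table), the sketch below recomputes the percentage deltas and confirms that the gross output with CCS equals the derated steam turbine output plus the noncondensing turbine.

```python
# Consistency check of the Table 5 example; values are the table's own figures.
without_ccs = {"gross": 865.0, "net": 800.0, "st_gross": 865.0, "aux": 65.0}
with_ccs    = {"gross": 702.0, "net": 542.0, "st_gross": 662.0, "aux": 160.0,
               "noncondensing_turbine": 40.0}

def pct_change(before, after):
    """Percentage change from the without-CCS to the with-CCS value."""
    return 100.0 * (after - before) / before

for key in ("gross", "net", "st_gross", "aux"):
    print(f"{key}: {pct_change(without_ccs[key], with_ccs[key]):+.1f}%")

# Gross power with CCS = derated steam turbine output + noncondensing turbine:
assert with_ccs["gross"] == with_ccs["st_gross"] + with_ccs["noncondensing_turbine"]
```

The recomputed deltas (about -19%, -32%, -24%, and roughly +146% on auxiliary loads, i.e., an increase of 95 MW) match the table and the discussion above.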

Figure 11. Power Loss for Various LP Turbine Arrangements
Oxy-Fuel Combustion with Post-Combustion Capture System
Steam Cycle
Location, additional equipment, and space requirements: As with the other CCS options, proximity to a CO2 sequestration site should be considered for the oxy-fuel option, and adequate feasibility studies must be conducted. The distance from a geological storage location or an enhanced oil recovery site influences not only the plant's economics, but also its configuration and auxiliary power requirements. Compared to a conventional coal-fired plant, an oxy-fuel plant requires a number of new components besides the CO2 capture hardware. Examples include an ASU, additional flue gas treatment modules, several heat exchangers to extract low-grade heat, and fans and ducts for flue gas recirculation (FGR). The optimal FGR ratio is still a topic of investigation. A commonly used value is 0.7, where zero is pure oxygen combustion with no recirculation. [8] Adequate space must be allocated not only for the equipment, but also for the interconnecting pipes, electrical cables, and controls. While SO2 and other elements can be removed in the CO2 capture plant, the quality of the recirculated flue gas must be controlled in supplementary or modified
sulfur-removal devices to avoid long-term corrosion of the boiler. Another aspect to be considered is the increased cooling duty of the plant required by the ASU, flue gas condenser, and CO2 compression unit. When the heat sink is a cooling tower, the plant layout needs to account for additional cells capable of coping with a larger cooling load than would be handled by a conventional plant without CCS.

Steam turbine generator: In principle, the steam turbine configuration for this CCS option is the same as for a conventional steam plant without carbon capture. However, the cycle energy balance indicates that several sources of low-grade heat, such as the ASU and the CO2 compressor, could be recovered, allowing a substantial reduction in bleed flows for condensate and feedwater heating. As a result, there is an increased LP flow through the turbine. If the LP module last-stage blade system and the generator are sized properly to handle the additional flow, the steam turbine gross power output increases. According to [7], the gross power output could increase by an estimated 4.5% or more. This arrangement also yields a better efficiency, between 0.3 and 1.3 percentage points. Some LP steam extractions are required for oxygen preheating and the ASU plant dryers.

Figure 12. Principal Flow Schematic of Modified Graz Cycle Power with Condensation/Evaporation

Combined Cycle
Configuration: Two combined cycle examples are presented (see [5]). The first is the semi-closed oxy-fuel combustion combined cycle (SCOC-CC), which consists of a high-temperature Brayton cycle with a compressor for CO2, a high-temperature turbine (HTT), a combustion chamber, a heat recovery steam generator (HRSG), and a conventional bottoming cycle. The combustion process occurs at 40 bar with a nearly stoichiometric mass ratio of fuel and oxygen. The flue gas composition is mainly CO2 (92.5%), with steam (7.1%) and some residues of N2 and O2. The expansion process from almost 1,400 °C is done in the HTT, which is cooled with recycled CO2 pressurized by the compressor. [5]

The second example, the Graz cycle, has a more complicated layout with two loops. The first loop is a high-temperature cycle comprising two compressors with intercooling, an HTT, an HRSG, and a high-pressure (HP) turbine. The LP cycle has a low-temperature turbine, two compressors for CO2, and a condenser. The two-compressor configuration allows water (H2O) segregation after the first compressor, thus reducing the power demand for the second compressor. See Figure 12 for the Graz cycle schematic.

Development considerations: While the expected thermal efficiency of both the Graz and SCOC-CC cycles is close to 50%, including the CO2 compression auxiliary load, the unusual working fluids require a massive development effort in design, testing, and validation for the high-temperature
turbomachinery. Using the working fluid for cooling poses the risk of blocking the internal cooling passages with soot and ash from the combustion process. Additional critical items are the LP turbine, which works with a mixture of steam and CO2, and the condenser, where operation at very low pressures may lead to severe metal corrosion. In the power generation industry, where reliability and simplicity of operation are paramount, these promising cycles are far from practical implementation in the next decade.

Natural Gas Combined Cycle with Post-Combustion CCS
Configuration: The use of chemical solvents in post-combustion CO2 capture is a proven technology. The real challenge is to identify the most efficient conversion technology in terms of steam consumption for the solvent regeneration and use of electricity for CO2 compression. To exemplify the impact of CCS on turbomachinery, Table 6 provides information about a 1 x 1 combined cycle consisting of one F-class gas turbine and one steam turbine. The amount
Table 6. Typical Plant CO2 Capture Parameters for a Combined Cycle Plant

Parameter                      Value
Plant Output, nominal          450 MW
Exhaust Flue Gas Flow          65,000 tons/day
CO2 Capture at 85% Rate        3,200 tons/day
Reboiler Steam Consumption     4,500 tons/day
Electricity Consumption        200 kW/ton CO2


The amount of CO2 capture was arbitrarily set at 85%. There are many designer formulations for solvents, consisting of several primary, secondary, and tertiary amines, including MEA, and other reactive ingredients. Therefore, providing an absolute ratio of steam or electrical consumption per ton of CO2 captured is not representative. Figures 13 and 14 show how changing the target CO2 percentage affects the steam and electricity required for capture and compression. It is assumed that the basis for evaluation is 95% CO2 removal from the flue gas. It can be seen that reducing the CO2 capture rate from 95% to 80% reduces steam consumption by 20% and electricity consumption by 5%.
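As a rough illustration of why these capture duties matter for plant output, the sketch below converts Table 6-style figures into an approximate auxiliary electrical load. It is a hedged, illustrative calculation only: it assumes that the tabulated electricity figure can be treated as an energy demand per ton of CO2 captured (kWh/ton), which is an interpretation introduced here rather than stated in the table.

    # Hypothetical sketch only: approximate auxiliary electrical load implied by
    # Table 6-style capture figures. Treating the electricity figure as an energy
    # demand per ton (kWh/ton CO2) is an assumption made for illustration.
    plant_output_mw = 450.0      # nominal plant output, MW (Table 6)
    co2_captured_tpd = 3200.0    # CO2 captured, tons/day (85% capture, Table 6)
    elec_kwh_per_ton = 200.0     # assumed specific electricity demand, kWh/ton CO2

    aux_load_mw = co2_captured_tpd * elec_kwh_per_ton / 1000.0 / 24.0  # MWh/day -> MW
    share = 100.0 * aux_load_mw / plant_output_mw
    print(f"Approximate capture/compression auxiliary load: {aux_load_mw:.1f} MW "
          f"(~{share:.1f}% of nominal output)")

Under these assumptions, the auxiliary load works out to roughly 27 MW, or on the order of 6% of the nominal plant output, which is consistent with the general magnitude of the CCS energy penalty discussed in this paper.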

Impact on Gas Turbine
In a gas turbine, the nature of the premix combustion system decreases the concentration of CO2 in the exhaust flue gas to half of that in a coal-fired boiler. Recirculating part of the exhaust gases achieves a higher CO2 concentration. Among the thermal NOx reduction techniques developed in recent decades, internal FGR is used as a very effective method to lower peak flame temperatures. However, the FGR rate can only be increased to a certain value for stable operation. It is particularly interesting to note that, in this mode of operation, the NOx emission levels and combustion system acoustics are substantially improved; however, the process could affect combustion stability and heat transfer properties. Theoretically, the amount of recirculated flow could be close to 40% of the exhaust gases. For CCS, FGR takes place at the compressor inlet. It should be noted that the amount of cooling necessary to bring the flue gases from exhaust conditions (at least 40 °C) to ambient temperature adds a substantial parasitic load. Due to the high sensitivity of gas turbine output to the compressor inlet temperature, a mixed stream of external air and recirculated gas above ambient temperature would certainly reduce the power generated. Large gas turbine manufacturers are conducting extensive studies not only on the operational impact of FGR on various components, but also on the technical and economical optimization of the amount of recirculated flue gas.

IGCC with CCS
The main impact on IGCC with CCS is the use of H2-rich fuel in the gas turbine. At 90% carbon capture, the expected hydrogen concentrations in the fuel may vary from 30% to 78%. Hydrogen is an excellent fuel with a high heating value (52,000 Btu/lb); for comparison, natural gas has only 21,000 Btu/lb. The flame temperature is hotter (more NOx), and flame propagation is faster, requiring modified combustor cooling schemes. The current proposal for an H2 burner is based on the diffusion flame, which is more stable and less prone to combustion oscillations than the premix lean combustion flame. At this time, the state-of-the-art process for premix combustion of fuels with high H2 concentrations (greater than 50%) is only in the experimental phase. The main unresolved issues continue to be premature ignition (flashbacks) and combustion noise.
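A quick back-of-the-envelope comparison, using only the heating values quoted above, shows how much less fuel mass an H2-fired turbine needs for the same heat input. This is a hedged illustration rather than a design calculation: the 500 MWth heat input is an arbitrary example value, and the per-lb heating values are taken directly from the text.

    # Hypothetical example: fuel mass flow needed for the same thermal input,
    # using the heating values quoted in the text. The 500 MWth figure is an
    # arbitrary illustrative value, not a specific machine rating.
    BTU_PER_MWH = 3.412e6

    heat_input_mwth = 500.0           # assumed example heat input, MWth
    hv_h2_btu_per_lb = 52_000.0       # hydrogen heating value from the text
    hv_ng_btu_per_lb = 21_000.0       # natural gas heating value from the text

    def fuel_flow_lb_per_hr(heat_mwth: float, hv_btu_per_lb: float) -> float:
        """Fuel mass flow (lb/hr) required to supply a given thermal input."""
        return heat_mwth * BTU_PER_MWH / hv_btu_per_lb

    h2_flow = fuel_flow_lb_per_hr(heat_input_mwth, hv_h2_btu_per_lb)
    ng_flow = fuel_flow_lb_per_hr(heat_input_mwth, hv_ng_btu_per_lb)
    print(f"H2:  {h2_flow:,.0f} lb/hr")
    print(f"NG:  {ng_flow:,.0f} lb/hr (about {ng_flow / h2_flow:.1f}x the H2 mass flow)")

The much smaller H2 mass flow for a given heat input helps explain why the inert N2 added for dilution, discussed below, is welcome from a turbine mass-flow standpoint.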

Figure 13. Steam Consumption at Various CO2 Capture Rates (vertical axis: percentage, kg steam/kg CO2; horizontal axis: CO2 recovery percentage, 78% to 96%)

Figure 14. Electricity Consumption at Various CO2 Capture Rates (vertical axis: percentage, kW/ton CO2; horizontal axis: CO2 recovery percentage, 78% to 98%)


In diffusion combustion, the H2 is mixed with N2 before entering the system. The N2 dilution is used to meet the NOx emission limit (15 ppm). Because the firing temperature of a gas turbine operating on H2 is approximately 28 °C lower than that of a turbine operating with conventional IGCC syngas, the additional mass of the inert gas N2 expanding in the turbine section helps compensate for the power loss associated with the lower firing temperature. As discussed earlier, an IGCC with no CCS uses syngas, a fuel with much lower H2 content and a significant amount of CO. The syngas combustion process has been known for many years and is used extensively in conventional IGCC applications using E- and F-class turbines. High-H2 operating experience indicates that in some process gas applications, the concentration of H2 could be 60% to 70% and even reach 90%. For example, GE reports [9] that an MS6000B gas turbine is burning refinery gas with a 70% H2 concentration at the San Roque site in Cadiz, Spain, and that at the Daesan, Korea, site, the H2 percentage can be as high as 95% for this model. The F-class operational experience indicates combustion with lower levels of H2 concentration, about 44%. However, flame dynamics are such that combustion can occur safely only in the diffusion mode. Despite the number of dilution additives (steam, N2), current technology has high NOx emissions. In diffusion mode, combustion of H2-rich fuel is limited to 50% H2 [10] to meet emissions targets and control flame stability; if N2 is the dilution agent, the percentage of H2 cannot exceed 35%. The commonly used diluents are pure steam, pure N2, and different mixtures of both.

In Europe, many initiatives and research activities, such as the Enhanced Capture of CO2 (ENCAP) program, aim at developing premix burners capable of burning high percentages of H2. The development of the burners is only the first step of the integration process. The impact on the combustion system, either annular or can, must also be evaluated. Major equipment suppliers, including GE, Siemens, and MHI, are conducting extensive combustion tests to demonstrate lower than 15 ppm NOx. As part of the first phase of DOE's Advanced Hydrogen Turbine Development Program, Siemens investigated several promising premix combustion configurations for H2 concentrations up to 60%.

There are also other differences between a gas turbine burning conventional syngas (GTsyn) and one burning fuel with a high concentration of H2 (GTH2). In order to maintain the compressor pressure ratio, the first-stage turbine nozzle area must be sized properly to account for the flow differences between the GTsyn and the GTH2. An increased percentage by volume of H2 affects the life of the turbine hot sections as a result of the higher moisture content of the combustion products. [11] The GTH2 exhaust gas moisture content (12.4%) is higher than that of the GTsyn (8.4%); thus, the heat transfer properties and behavior of the hot gas path are different. More work is needed to redefine the computational fluid dynamics (CFD) boundary conditions and to conduct durability and life expectancy analyses. Ultimately, the metal temperature increases due to the higher moisture content accompanying a higher H2 content, resulting in a significant reduction of hot path component life. The practical solution recommended by gas turbine suppliers is to reduce the firing temperature, which reduces power output and efficiency. To protect the hot gas path components, the initial GTH2 firing temperature is approximately 28 °C less than that of a GTsyn. A relationship for the reduction of firing temperature as a function of H2 percentage [11] is:

    Tf = 13.312 x (volume % H2)^0.69    (1)


In order to increase the firing temperature and reduce the NOx emissions, several options are under investigation: new blade cooling concepts, advanced materials, high-temperature thermal barrier coatings, and hybrid component design. The hybrid component is superior to a monolithic component, allowing expensive materials to be incorporated in the airfoils only in high-temperature areas. It is obvious that only intensive and continuous development efforts will make it possible to meet the ambitious goals set by DOE for IGCC plants with CCS capability: 2 ppm NOx emissions, 3% to 5% improved cycle efficiency, and the capability to burn high-H2 fuel.

Chemical Looping Combustion Combined Cycle with Gaseous Fuel
Cycle configuration: In CLC, the combustion occurs without any direct contact between the air and the fuel. As previously described, the CCS energy penalty is lower for CLC than for either pre- or post-combustion methods, because the CO2 is not diluted with other combustion products. [12]


Figure 15. CLC in Combined Cycle Configuration (schematic showing the compressor, air turbine, HRSG, and STG; the Me/MeO loop between the two reactors; NG fuel and air inputs; O2-depleted air to the stack; and the CO2 stream)

A combined cycle with chemical looping is depicted in Figure 15. The principles of chemical looping were presented earlier in the paper (see Figure 6). This section provides details for a specific combined cycle application. Pressurized air from the compressor enters the air reactor (oxidizer), where it reacts with the metal to create a metal oxide. This oxidation process is exothermic. The oxygen-depleted air exiting the oxidizer at high temperature and pressure is available for further expansion in a turbine to generate power. The air exiting the turbine passes through an HRSG, where its remaining energy is extracted for use in a conventional steam bottoming cycle. In the other system element, the fuel reactor, the fuel and the metal oxide react, stripping the oxygen from the metal oxide and creating a CO2- and H2O-rich stream. This stream can be further expanded in a turbine and then condensed to separate the H2O and the CO2. For this application, the most promising substance is NiO. The metal oxide requires an inert stabilizer to improve chemical reactivity and ensure mechanical stability.

Turbomachinery: The gas turbine industry has sufficient technical knowledge to convert existing products for this application. The air compressor [12] has a moderate pressure ratio (18) and an air flow close to 800 kg/sec. The turbine inlet temperature is not above current gas turbine values (1,140 °C), and compressor bleed air can be used for cooling duty. The turbine exhaust temperature (500 °C) is typical for HRSG applications. The steam bottoming cycle is no different from the one used in a conventional combined cycle. According to [12], a typical cycle efficiency, including CO2 compression plant load, is close to 50%.
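For concreteness, the two reactor steps described above can be written out for the Ni/NiO oxygen carrier mentioned in the text; methane is assumed here as the natural gas fuel purely for illustration, since the cycle schematic refers only generically to NG fuel.

    \begin{align*}
    \text{Air reactor (oxidizer, exothermic):} \quad & 2\,\mathrm{Ni} + \mathrm{O_2} \rightarrow 2\,\mathrm{NiO} \\
    \text{Fuel reactor (reducer):} \quad & \mathrm{CH_4} + 4\,\mathrm{NiO} \rightarrow \mathrm{CO_2} + 2\,\mathrm{H_2O} + 4\,\mathrm{Ni}
    \end{align*}

Because the fuel reactor products are essentially only CO2 and condensable H2O, the nearly pure CO2 stream can be separated by simple condensation, as described above.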

Better cycle efficiency can be achieved if the hot stream of CO2 and steam exiting the fuel reactor passes through an additional turbine before being condensed. While cycle efficiency improves by 2%, this new turbine adds complexity and requires a full development program. An additional challenge facing the cycle is imposed by the requirement to maintain equal pressure at the exits of the two reactors under all operating conditions, to avoid gas leakages.

Steam Cycle with Solid Fuel
Intensive studies are being conducted to identify appropriate substances for the chemical looping process of solid fuels. One option is to use calcium sulfide and sulfate (CaS and CaSO4) reactions in one reaction loop between the oxidizer and the reducer. [13] The fuel reactor (reducer) uses coal, steam, and calcium carbonate (CaCO3) as input. The heat of the exothermic reaction in the oxidizer is transferred to steam and from there to a conventional steam cycle. Another option (see [14]) uses calcium compounds to carry O2 and heat between two reaction loops. The first loop uses CaS/CaSO4 reactions to gasify the coal. In a water-shift reactor, CO is then combined with steam to generate CO2 and H2. The second loop uses calcium oxide (CaO) and CaCO3. Thermal energy is transferred from one loop to the other using bauxite as the heat transfer medium. Ultimately, the products of the process are concentrated streams of CO2 for sequestration and H2 for consumption as fuel. An experimental pilot unit of this hybrid gasification and chemical looping process is currently under development by the process owner, Alstom. A successful full-scale demonstration of the process is expected in the 2016 to 2020 time frame.


Turbomachinery for the Cycle
In the first option, the steam generated by the process is used in a conventional steam turbine. However, the steam temperature and pressure values could exceed the current level of ultra-supercritical steam turbine conditions. An additional concern is part-load operation. The chemical looping process is not yet sufficiently developed to allow more detailed steam turbine design and operation. In the second option, the turbomachinery is identical to that of an IGCC gas turbine using a highly concentrated stream of H2 as the operating fuel. (A detailed discussion is provided in the previous section, IGCC with CCS.)

CO2 Compression Issues
A typical CO2 processing system includes compression, dehydration, and purification/liquefaction. As described earlier, this process is one of the major contributors to auxiliary power consumption and higher costs for the power plant. The compression process [15] includes at least two compressors, intercoolers, water separators, dehydrators, and purifiers. The amount of impurities in the CO2 stream has a major impact on the process. The presence of SO2 and H2O may decrease the amount of compression work, while the existence of N2, O2, and argon (Ar) may increase it. In the selection process, the intercooler temperature must be higher than the condensing temperature of the mixture. Additionally, CO2 compression equipment requires stainless steel construction due to the presence of water vapors and potential corrosion. A discussion of turbomachinery for CCS plants would not be complete without mentioning CO2 compression technology. [16] The major effort in this area is dedicated to identifying processes capable of reducing power consumption, which represents 40% of the auxiliary loads; in some cases, it represents 8% to 12% of plant power output. [17]

For CO2 compression applications, the traditional approach has been to use high-speed reciprocating compressors. However, centrifugal compressors offer a challenging alternative: better efficiency, oil-free compression, and less maintenance. Given the importance of intercooling capability, it is worthwhile to mention an integral-gear design for centrifugal compressors that offers more flexibility for intercooling after each stage, optimization of flow coefficients due to selection of the most favorable rotating speed for each pair of impellers, and, finally, a choice of drivers, either motors or steam/gas turbines. A novel technology supported by DOE funding is the Ramgen supersonic shockwave CO2 compressor. [17] Following a process similar to the one occurring in the air intake of aircraft engines at supersonic speeds, the device is able to achieve compression ratios of 10:1 in a single stage. With stage discharge temperatures of approximately 230 °C, the energy removed in the intercooler can be recovered and used in the solvent regeneration process. According to the details provided by Ramgen, 70% of the electrical input energy for compression can be recovered as useful heat. The second phase of this promising program will include detailed design specifications.
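A minimal ideal-gas check of the single-stage discharge temperature quoted above is sketched below. The specific heat ratio and inlet temperature are assumed illustrative values, and real-gas behavior, stage efficiency, and moisture effects are all neglected; the point is only that a 10:1 stage ratio naturally produces discharge temperatures in the range cited.

    # Rough ideal-gas, isentropic check of the discharge temperature for a
    # 10:1 pressure-ratio CO2 compression stage. k and the inlet temperature
    # are assumed illustrative values; real-gas effects are ignored.
    k = 1.28                  # assumed specific heat ratio (cp/cv) for CO2
    pressure_ratio = 10.0     # per-stage ratio quoted for the shockwave compressor
    t_inlet_c = 30.0          # assumed inlet temperature, deg C

    t_inlet_k = t_inlet_c + 273.15
    t_discharge_k = t_inlet_k * pressure_ratio ** ((k - 1.0) / k)
    print(f"Isentropic discharge temperature: ~{t_discharge_k - 273.15:.0f} deg C")
    # Roughly consistent with the ~230 deg C stage discharge temperature cited above,
    # which is what makes intercooler heat recovery for solvent regeneration attractive.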

Thermal Performance Comparison
It is clear that the CCS processes discussed in this paper yield lower thermal efficiency than conventional systems without CCS. To quantify this phenomenon, the following examples are provided:

Chilled ammonia: A study to evaluate the energy consumption and the cost of a full-scale CO2 capture system was conducted by Alstom (see [4]), and the results were compared to those of a study of an MEA system performed by Parsons in 2000 and 2002 (see [4]). The base power plant is a supercritical pulverized coal (PC) boiler firing 333,542 lb/hr of Illinois No. 6 coal, operating at 40.5% net efficiency (at higher heating value [HHV]), and generating 462 MW of net power. Plant energy performance with and without CO2 capture is summarized in Table 7.

Table 7. Chilled Ammonia Plant Energy Performance

Parameter                 Supercritical PC          SCPC With MEA CO2 Removal    SCPC With NH3 CO2 Removal
                          Without CO2 Removal       (Parsons Study) [4]          (Alstom Study) [4]
Net Power Output, kW      462,058                   329,524                      421,717
Net Efficiency, % HHV     40.5                      28.9                         37.0
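The penalty implied by Table 7 can be stated directly as output and efficiency reductions. The short calculation below simply restates the tabulated values as percentages; the only assumption is that the net output figures are tabulated in kW, consistent with the 462 MW net power quoted in the text.

    # Express the Table 7 results as capture penalties (values taken from the table;
    # net output assumed to be in kW, consistent with the quoted 462 MW base case).
    base_kw, base_eff = 462_058, 40.5          # supercritical PC, no CO2 removal
    cases = {
        "MEA (Parsons study)":        (329_524, 28.9),
        "Chilled NH3 (Alstom study)": (421_717, 37.0),
    }
    for name, (kw, eff) in cases.items():
        output_penalty = 100.0 * (base_kw - kw) / base_kw
        print(f"{name}: output penalty ~{output_penalty:.1f}%, "
              f"efficiency drop {base_eff - eff:.1f} percentage points")

Under that reading of the table, the MEA case gives up roughly 29% of net output (an 11.6-point efficiency drop), while the chilled ammonia case gives up roughly 9% (a 3.5-point drop), which is the comparison the study was intended to highlight.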


Oxy-fuel combustion: Thermal performance is better for air-fired units than for oxy-combustion, as shown in Table 8. The unburned carbon content in fly ash is similar in both processes; however, additional coal needs to be burned for oxy-fuel combustion to achieve the same net output.
Table 8. Oxy-Fuel Combustion Thermal Performance

Process                                                 Efficiency, % HHV
Air-Fired PC Boiler                                     39.1
Oxy-PC with CO2 Capture                                 29.9
Oxygen Transport Membrane Process with CO2 Capture      34.5

CONCLUSIONS

In anticipation of future greenhouse gas regulations, the power industry has embarked on a major effort to develop alternative capture and compression technologies, mainly for CO2. The proposed processes all require substantial amounts of energy, which negatively affects plant net power output and efficiency. This paper reviewed the current leading options and addressed their impacts on turbomachinery. Due to the uncertainties associated with CCS legislation, many developers, EPC contractors, and equipment manufacturers are looking at options to make plants currently in the design stage capture-ready, thus minimizing the future costs of CO2 capture retrofits. Apart from addressing the technical implications of the various CO2 capture processes, the engineering community should devote a collective effort to informing and educating the public about the direct impact of CCS on electricity production and cost. For their part, by providing a long-term framework for CCS, policymakers could stimulate the deployment of low-carbon-footprint technologies and encourage the development of more cost-effective concepts.

Post-Combustion Capture Plants
The main impact on the steam turbine for either amine- or ammonia-based CCS technology is attributed to the large steam extractions needed for solvent regeneration. A range of solutions, standalone or in combination, is available to cope with various amounts of extracted steam. The options presented can be implemented with limited effect on steam turbine efficiencies. These post-combustion capture methods are the most suitable for future capture-ready power plants, requiring minimal pre-investment for steam turbines and only a few later modifications. Current turbine designs and plant layouts can also accommodate more efficient future post-combustion CO2 capture technologies as they become available. While the technical and economic penalties for CO2 capture are high, post-combustion technology represents one of the most probable short-term solutions. The use of ammonia in place of traditional amines may eventually reduce the parasitic electrical and heat loads.

Oxy-Fuel Combustion Combined Cycles
Several cycles have shown theoretical promise of high efficiency. The use of unconventional working fluids demands extensive development efforts before any large-scale implementation can occur. This methodology does not lend itself to the concept of capture-ready because the turbomachinery is process specific. Under current legislative conditions and given the status of competitive technologies, a 2015 or 2020 implementation date is reasonable.

Other Methods
Other capture methods, such as chemical looping, are in the initial development or pilot demonstration stage. Their chances of being implemented in full-scale applications ultimately depend on future legislative and environmental policies.

TRADEMARKS Econamine FG is a service mark of Fluor Corporation. Rectisol is a registered trademark of Linde AG. Selexol is a trademark owned by UOP LLC, a Honeywell Company.

ACKNOWLEDGMENTS Justin Zachary wishes to express his gratitude to his co-author, Sara Titus, for preparing the background information on various CO2 capture methods used in the development of this paper.


REFERENCES
[1] J.-Ch. Haag, A. Hildebrandt, H. Hönen, M. Assadi, and R. Kneer, "Turbomachinery Simulation in Design Point and Part-Load Operation for Advanced CO2 Capture Power Plant," Proceedings of ASME Turbo Expo 2007 (GT2007), Montreal, Quebec, Canada, May 14–17, 2007, Paper GT2007-27488, access via <http://store.asme.org/product.asp?catalog%5Fname=Conference+Papers&category%5Fname=&product%5Fid=GT2007%2D27488>.

[2] S. Chakravarti, A. Gupta, and Balazs Hunek, "Advanced Technology for the Capture of Carbon Dioxide from Flue Gases," Proceedings of First National Conference on Carbon Sequestration, Washington, DC, May 15–17, 2001 <http://www.netl.doe.gov/publications/proceedings/01/carbon_seq/p9.pdf>.

[3] A. Veawab, A. Aroonwilas, A. Chakma, and P. Tontiwachwuthikul, "Solvent Formulation for CO2 Separation from Flue Gas Streams," paper from the Faculty of Engineering, University of Regina, Regina, Saskatchewan, Canada, Proceedings of First National Conference on Carbon Sequestration, Washington, DC, May 15–17, 2001 <http://www.netl.doe.gov/publications/proceedings/01/carbon_seq/2b4.pdf>.

[4] S. Black, "Chilled Ammonia Process for CO2 Capture," Alstom position paper, November 29, 2006 <http://www.icac.com/files/public/Alstom_CO2_CAP_pilot_psn_paper_291106.pdf>.

[5] W. Sanz, H. Jericha, B. Bauer, and E. Göttlich, "Qualitative and Quantitative Comparison of Two Promising Oxy-Fuel Power Cycles for CO2 Capture," Proceedings of ASME Turbo Expo 2007 (GT2007), Montreal, Quebec, Canada, May 14–17, 2007, Paper GT2007-27375, access via <http://store.asme.org/product.asp?catalog%5Fname=Conference+Papers&category%5Fname=&product%5Fid=GT2007%2D27375>.

[6] Q. Zafar, T. Mattisson, M. Johansson, and A. Lyngfelt (Chalmers University of Technology), "Chemical-Looping Combustion – A New CO2 Management Technology," Proceedings of First Regional Symposium on Carbon Management, Dhahran, Saudi Arabia, May 22–24, 2006 <http://www.entek.chalmers.se/~anly/co2/54Dharan.PDF> and <http://www.co2management.org/proceedings/Chemical_Looping_Combustion_Qamar_Zafar.pdf>.

[7] "CO2 Capture Ready Plants," International Energy Agency (IEA), Greenhouse Gas R&D Programme, Technical Study Report Number 2007/4, May 2007 <http://www.iea.org/textbase/papers/2007/CO2_capture_ready_plants.pdf>.

[8] E. Rubin, A.B. Rao, and M.B. Berkenpas, "Development and Application of Optimal Design Capability for Coal Gasification Systems: Oxygen-based Combustion Systems (Oxyfuels) with Carbon Capture and Storage (CCS)," Carnegie Mellon University Contract DE-AC21-92MC29094 (final report submitted to U.S. DOE in May 2007) <http://www.iecm-online.com/PDF%20files/2007/2007rd%20Rao%20et%20al,%20IECM%20Oxy%20Tech.pdf>.

[9] B. Jones, "Gas Turbine Fuel Flexibility for a Carbon Constrained World," Workshop on Gasification Technologies, Bismarck, North Dakota, June 28–29, 2006 <http://www.gasification.org/Docs/Workshops/2006/Bismarck/03RJones.pdf>.

[10] G. Rosenbauer, N. Vortmeyer, F. Hannemann, and M. Noelke, "Siemens Power Generation Approach to Carbon Capture and Storage," Power-Gen Europe 2007 (Madrid, Spain) Conference Proceedings, June 26–28, 2007, access via <http://www.pennwellbooks.com/poeuandporee.html>.

[11] E. Oluyede and J.N. Phillips, "Fundamental Impact of Firing Syngas in Gas Turbines," Proceedings of ASME Turbo Expo 2007 (GT2007), Montreal, Quebec, Canada, May 14–17, 2007, Paper GT2007-27385, access via <http://store.asme.org/product.asp?catalog%5Fname=Conference+Papers&category%5Fname=&product%5Fid=GT2007%2D27385>.

[12] R. Naqvi and O. Bolland, "Optimal Performance of Combined Cycles with Chemical Looping Combustion for CO2 Capture," Proceedings of ASME Turbo Expo 2007 (GT2007), Montreal, Quebec, Canada, May 14–17, 2007, Paper GT2007-27271, access via <http://store.asme.org/product.asp?catalog%5Fname=Conference+Papers&category%5Fname=&product%5Fid=GT2007%2D27271>.

[13] G. Jukkola, "Combustion Road Map and Chemical Looping," CURC Technical Subcommittee Meeting presentation, Pittsburgh, Pennsylvania, October 2007.

[14] G.J. Stiegel, R. Breault, and H.E. Andrus, Jr., "Hybrid Combustion-Gasification Chemical Looping Coal Power Technology Development," Project Facts – Gasification Technologies, U.S. DOE, Office of Fossil Energy, NETL, 10/2008 <http://www.netl.doe.gov/publications/factsheets/project/Proj293.pdf>.

[15] H. Li and J. Yan, "Preliminary Study on CO2 Processing in CO2 Capture from Oxy-Fuel Combustion," Proceedings of ASME Turbo Expo 2007 (GT2007), Montreal, Quebec, Canada, May 14–17, 2007, Paper GT2007-27845, access via <http://store.asme.org/product.asp?catalog%5Fname=Conference+Papers&category%5Fname=&product%5Fid=GT2007%2D27845>.

[16] P. Bovon and R. Habel, "CO2 Compression Challenges," CO2 Compression panel presentation at ASME Turbo Expo 2007 (GT2007), Montreal, Quebec, Canada, May 14–17, 2007, see <http://www.netl.doe.gov/technologies/coalpower/turbines/refshelf/asme/ASME_TURBO_EXPO_CO2_Panel_MAN_TURBO_presentation.pdf>.

[17] P. Baldwin, "Ramgen's Novel CO2 Compressor," Ramgen Document 0800-00153, August 2007 <http://www.ramgen.com/files/Ramgen%20CO2%20Compressor%20Technology%20Summary%2008-21-07.pdf>.

The original version of this paper was presented at ASME Turbo Expo 2008, held June 9–13, 2008, in Berlin, Germany.


BIOGRAPHIES
Justin Zachary is currently assistant manager of technology for Bechtel Power Corporation. He oversees the technical assessment of major equipment used in Bechtel's power plants worldwide. Additionally, he is engaged in the evaluation and integration of integrated gasification combined cycle power island technologies. He also actively participates in Bechtel's CO2 capture and sequestration studies, as well as the application of other advanced power generation technologies, including renewables. Dr. Zachary has more than 30 years of experience with electric power generation technologies, particularly those involving the thermal design and testing of gas and steam turbines. He has special expertise in gas turbine performance, combustion, and emissions for simple and combined cycle plants worldwide. Before coming to Bechtel, he designed, engineered, and tested steam and gas turbine machinery while employed with Siemens Power Corporation and General Electric Company. He is a well-recognized international specialist in turbomachinery and has authored more than 72 technical articles on this and related topics. He also holds patents in combustion control and advanced thermodynamic cycles. Dr. Zachary is an ASME Fellow and a member of a number of ASME performance test code committees. Dr. Zachary holds a PhD in Thermodynamics and Fluid Mechanics from Western University in Alberta, Canada. His MS degree in Thermal and Fluid Dynamics is from Tel-Aviv University, and his BS in Mechanical Engineering is from Technion Israel Institute of Technology, Haifa, both in Israel.

Sara Titus is a mechanical engineer on the Edwardsport Integrated Gasification Combined Cycle project. In her 2 years with Bechtel Power, she has already contributed to a variety of projects as a control systems engineer, in nuclear operating plant services; a mechanical engineer, in the environmental group; and an air quality control systems engineer, in the fossil technology group. Before her current position, Sara supported a front-end engineering and design study for a CO2 test center in Norway. She was the responsible engineer for the test center's rich amine reclaimer and regenerator systems, and also helped to evaluate optional functionalities being considered for the project.

Earlier, Sara worked as a control systems engineer on nuclear power projects for the Tennessee Valley Authority (TVA) and Southern Nuclear Operating Company, where her duties included performing safety-related equipment qualification assessments and preparing design change packages. In addition, she worked on the Holcomb Station power plant expansion and the IGCC plant for American Electric Power. Sara has authored several technical papers on the topic of CO2 capture technologies and economics. Her most recent, "CO2 Capture and Sequestration Option: Impact on Turbo Machinery," was presented at the ASME Turbo Expo conference in Berlin, Germany, in June 2008. Sara holds MS and BS degrees in Chemical Engineering from the University of Maryland, Baltimore County, and is a member of the Society of Women Engineers, Women in Nuclear, and North American Young Generation Nuclear.


RECENT INDUSTRY AND REGULATORY DEVELOPMENTS IN SEISMIC DESIGN OF NEW NUCLEAR POWER PLANTS
Issue Date: December 2008

Abstract: This paper provides a review of the evolution of seismic safety-related developments concerning safety-related nuclear power plant (NPP) facilities during the past 15 to 20 years and describes how these developments have shaped the recent changes in seismic regulations and the associated implementation rules.

Keywords: design certification, dynamic soil properties, ground motion attenuation, ground motion high-frequency content, ground motion incoherency, nuclear power plant, operating basis earthquake, performance-based spectrum, probabilistic seismic hazard analysis, safe-shutdown earthquake, seismic hazard, seismic risk, site response analysis, soil-structure interaction, standard plant, uniform-hazard spectrum

INTRODUCTION

The safe-shutdown earthquake (SSE) ground motion for nuclear power plants (NPPs) in operation before January 10, 1997, or for those plants whose construction permits were issued before then, is governed by the requirements of 10 CFR 100 Subpart A. [1] These requirements are less rigorous than those for the new generation NPPs (licensed after January 10, 1997). For new nuclear plants, the governing requirements are specified in 10 CFR 100 Subpart B [2] and 10 CFR 50 Appendix S [3]. The current regulation specifies that a more rigorous approach applying the probabilistic seismic hazard assessment (PSHA) method be used to determine the design ground motion.

This development, along with a host of subsequent developments, has had a significant impact on the procedures for not only the seismic ground motion determination, but also the downstream geotechnical and structural analyses and seismic design. The new procedures are often difficult to implement, and unlike before, both the nuclear industry and the Nuclear Regulatory Commission (NRC) have much less experience in their implementation and regulatory review. With this situation in mind, the Nuclear Energy Institute (NEI) has been working with the NRC to help clarify and streamline the implementation rules, most of which have by now been addressed.

The new procedures for seismic ground motion, analysis, and design have had a significant impact on the relevant NPP design services performed by geologists, seismologists, geotechnical engineers, and structural engineers. Additionally, the following two factors have had a further impact on the scope and challenges faced during the seismic design work process:

• Use of the latest seismicity data and ground motion models leads to increased estimates of seismic hazard, particularly in the high-frequency (HF) range for the Central and Eastern United States (CEUS). This change has a direct downstream impact on seismic qualification of equipment and the degree of refinement needed in the seismic analysis models used by structural and geotechnical engineers.

• The advent of the standard plant concept along with the associated revised licensing process (10 CFR 52 [4]; see [5] for further information) has altered the roles of seismic engineers and specialists working for the owners (nuclear utilities), nuclear steam supply system (NSSS) vendors (who are now the standard plant suppliers), and engineering and construction (E&C) companies such as Bechtel.

As a result, many seismic design and analysis issues have been identified during the recent design certification document (DCD) reviews for standard plants, as well as in early site permit (ESP) and combined operating license (COL) applications for the various candidate sites. NEI has facilitated addressing these issues by appointing a Seismic Issues Task Force (SITF) comprising senior seismic specialists from the nuclear industry.

Sanj Malushte, PhD (smalusht@bechtel.com)
Orhan Gürbüz, PhD (ogurbuz@bechtel.com)
Joe Litehiser, PhD (jjlitehi@bechtel.com)
Farhang Ostadan, PhD (fostadan@bechtel.com)

© 2008 Bechtel Corporation. All rights reserved.

ABBREVIATIONS, ACRONYMS, AND TERMS

ASCE – American Society of Civil Engineers
CAV – cumulative absolute velocity
CEUS – Central and Eastern United States
CFR – Code of Federal Regulation
COL – combined operating license
CSDRS – certified seismic design response spectrum
CT – cyclic triaxial (test)
DCD – design certification document
DOE – US Department of Energy
DSHA – deterministic seismic hazard assessment
E&C – engineering and construction
EPRI – Electric Power Research Institute
ESP – early site permit
FIRS – foundation input response spectrum
g – spectral acceleration
GMRS – ground motion response spectrum
HF – high-frequency
IPEEE – individual plant examination for external events
ISG – Interim Staff Guidance (NRC)
ISRS – in-structure response spectrum/spectra
ITAAC – inspections, tests, analyses, and acceptance criteria
LLNL – Lawrence Livermore National Laboratory (US DOE)
NEI – Nuclear Energy Institute
NGA – next generation attenuation (ground motion attenuation model)
NPP – nuclear power plant
NRC – US Nuclear Regulatory Commission
NSSS – nuclear steam supply system
NUREG – Regulatory Guide
OBE – operating-basis earthquake
PBS – performance-based spectrum
PGA – peak ground acceleration
PSHA – probabilistic seismic hazard assessment
RC/TS – resonant column/torsional shear (test)
RG – (NRC) Regulatory Guide
SECY – Office of the Secretary
SITF – Seismic Issues Task Force (of NEI)
SRP – Standard Review Plan (for nuclear power plants, NUREG-0800)
SSC – structures, systems, and components
SSE – safe-shutdown earthquake
SSHAC – Senior Seismic Hazard Analysis Committee
SSI – soil-structure interaction
UHS – uniform-hazard spectrum

The task force has been working with the NRC to help resolve the issues for the past 3 years. This paper provides the background behind the current developments and discusses outcomes and status of many of the issues that have recently been addressed.

BACKGROUND

The past practice for determining design seismic ground motion for NPP sites can best be described as applying a method called deterministic seismic hazard assessment (DSHA), as delineated in Standard Review Plan (SRP, NUREG-0800) Section 2.5.2, Rev. 2. [6] This method entailed identifying all seismic sources that posed a seismic hazard at the site, defining the maximum credible earthquake magnitude and distance from the site for each source, and using an appropriate ground motion attenuation relationship to determine the peak ground acceleration (PGA) caused at the site due to each seismic source. The design value for PGA at the site was taken as the maximum of the various PGA values due to the individual seismic sources.

For some existing sites (typically in the CEUS), the SRP allowed use of less sophisticated approaches (e.g., review of past seismic activity, recorded ground motions at nearby locations, historical records of damage data, and best judgments of experts in local geology and seismology) because of insufficient data and seismic ground motion models for these regions (unlike for the Western United States, where fault characteristics were better known and understood). Regardless of whether the PGA was determined using the DSHA or even simpler approaches, the shape of the resulting seismic response

spectrum for rock sites was typically assigned using prescriptive rules, as in NRC Regulatory Guide (RG) 1.60. [7] Thus, given the PGA value, simple rules were used for constructing both horizontal and vertical response spectra. For soil sites, the soil amplification effects (whereby the propagating seismic waves from the crystalline bedrock below the site are amplified as they pass through the intervening soil layers) were captured by performing site amplification studies. However, 10 CFR 100 Appendix A, RG 1.60, and SRP 2.5.2, Rev. 2, did not prescribe rigorous rules for performing such studies, especially for accounting for the uncertainties about the soil stratification and its properties.

The probabilistic seismic hazard assessment (PSHA) method was first formulated by Prof. Allin Cornell in 1968 [8]; however, it was not immediately embraced by the NRC and industry because of insufficient understanding within the profession as well as insufficient seismologic data. During the mid-to-late 1980s, motivated by a desire to better understand the available seismic margins at the existing NPPs, the NRC became interested in getting a better grasp of the seismic hazard at the existing sites by using the PSHA method. This initiative led to two studies: NUREG/CR-5250 [9], commissioned by the NRC, and the Electric Power Research Institute (EPRI) report EPRI-NP-4726 [10], supported by the industry. While the two studies produced similar hazard curves for each site and generally similar representations of relative hazards at various sites, the absolute hazard estimates differed significantly for several sites. This variation caused concern regarding the viability of the PSHA technique in terms of the ability to produce consistent hazard estimates, independent of the analysts. The NRC therefore decided to supplement the Lawrence Livermore National Laboratory (LLNL) study by improving the elicitation of data and its associated uncertainty among the experts to better capture the gaps in their knowledge. The results of this study were published in NUREG-1488. [11] Although the PSHA results in NUREG-1488 showed a reasonable agreement with regard to the return periods associated with plant-specific SSEs, the LLNL seismic hazard estimates in the 10^-4 to 10^-6 annual probability of exceedance range were systematically higher than the EPRI hazard results for this range, which are of most interest in terms of seismic risk to NPPs.

To address the foregoing concern, a working group called the Senior Seismic Hazard Analysis Committee (SSHAC) was created during the mid-1990s. The group was sponsored by the NRC, the

US Department of Energy (DOE), and EPRI. The SSHAC's charge was to provide an up-to-date procedure for obtaining reproducible results from the application of PSHA principles, not to advance the basic foundations of PSHA or develop a new methodology. This focus led to an emphasis on procedures for eliciting and aggregating data and models for performing a hazard analysis. In 1997, the SSHAC issued a report entitled Recommendations for Probabilistic Seismic Hazard Analysis: Guidance on Uncertainty and Use of Experts. [12] At the request of the NRC, the report was reviewed by the National Research Council, which issued its own assessment in a report entitled Review of Recommendations for Probabilistic Seismic Hazard Analysis: Guidance on Uncertainty and Use of Experts. [13] These efforts helped establish PSHA as a rational method for seismic hazard characterization.

SEISMIC DEVELOPMENTS FROM THE MID-1990s TO 2004

Having established the viability of the PSHA approach, 10 CFR 100 Subpart B (10 CFR 100.23 in particular) was written to specifically invoke PSHA as a method for determining the design ground motion for new generation NPPs. EPRI-NP-4726 had established that, on a median basis, the SSE spectra for a group of existing nuclear power plants corresponded to an annual exceedance probability of about 10^-5 per year (see Figure 1). Given this finding, the NRC introduced RG 1.165 [14] in 1997 to require that the new generation of nuclear power plants be designed for a seismic hazard of 10^-5 per year (referred to as the reference probability in RG 1.165) using the PSHA method.


Figure 1. Composite Probability of Exceedance for SSEs at Existing Nuclear Power Plants [14] (horizontal axis: composite probability of exceeding the SSE, 10^-7 to 10^-3; vertical axis: cumulative distribution; median of approximately 1E-5)


The thinking was that the new plants thus designed would be at least as safe as the median level from among the existing fleet of plants, a somewhat misleading premise because seismic safety is a function of both the seismic hazard and the seismic fragility of the plant structures, systems, and components (SSC), not the seismic hazard alone. RG 1.165, with its requirement to produce design response spectra for a 100,000-year return period, went relatively unnoticed until 2003, when site studies for new plant licenses began. Use of the latest seismicity data and ground motion models showed a general increase in the predicted ground motions, especially for large return periods and in the HF range for rock sites. (Both the EPRI and LLNL studies had already confirmed that the seismic ground motion for the CEUS regions has significant HF content, an issue that generated much debate about if and how SSCs can be accurately analyzed or tested for HF excitation.) Besides the cost implications, the increase in hazard estimates also meant that the median seismic hazard associated with the design spectra for the existing fleet of plants was less than 10^-5 per year if the latest data/models were to be used, thus bringing into question the very basis for use of the 10^-5 per-year value as the reference probability. Because seismic-related cost and schedule impacts are significant for the nuclear power industry, there was a desire to identify an alternative rational basis that would lead to more reasonable design ground motion estimates. With the above concerns in mind, the industry looked to use seismic risk, rather than seismic hazard, as the more appropriate basis for seismic design for new NPPs.

Certain previous and parallel developments have influenced the course of the more recent seismic regulatory guidance and implementation criteria:

• In the early 1990s, the NRC asked all nuclear plant licensees to conduct an individual plant examination for external events (IPEEE) program to evaluate the plant risks associated with seismic events, high winds, internal fire events, etc. For reporting seismic assessment results, the licensees were given a choice to use either the seismic margins approach (i.e., maximum ground motion that could be resisted versus the PGA value that the plant was designed for) or a more comprehensive annual seismic risk approach. In all, licensees of 25 existing plants conducted the

seismic risk-based evaluations. The results proved to be a better indicator of each plant's seismic safety because the studies addressed both the hazard and fragility aspects of the controlling SSCs. The NRC published a summary report, NUREG-1742, in 2002 based on the IPEEE results reported by each utility. [15] With either EPRI or LLNL data [10, 11] as the hazard basis, the report provided a good perspective on the seismic risk (or seismic safety, depending on one's perspective) at the existing NPPs, rather than [9] and [10], which provided information on the seismic hazard only.

• Similar to the concept of reference probability introduced by the NRC in RG 1.165 (defined in terms of the median annual seismic hazard level corresponding to the SSE spectra for the existing nuclear plants), NUREG-1742 enabled the nuclear industry to think in terms of a reference risk, determined as the median annual seismic risk for the existing NPP units, as an indicator of their seismic safety. Figure 2 shows that the reference risk value for seismically induced failure is about 1.2 x 10^-5. As this value is based on the seismicity data and ground motion models from the early 1990s, the annual seismic risk estimate would be higher if one used the current data/models (which result in increased hazard estimates).

• The industry's desire to use seismic safety (rather than seismic hazard, as stipulated in RG 1.165) as the basis for seismic design of new NPPs was shaped by the risk-based design approach first introduced in DOE Standard 1020-02. [16] The case for using a risk-based approach received a further boost because of its incorporation in ASCE Standard 43-05. [17] Both of these standards specify that seismic design of NPPs be based on a performance goal of 10^-5 per year (i.e., the probability of failure of any SSC due to a seismic event must be less than 10^-5 per year). This was a serendipitous development for the nuclear industry because the acceptable risk goal in ASCE 43-05 happened to be about the same as the median seismic risk level for existing NPPs (as Figure 2 shows). Using conservative fragility characteristics for typical nuclear SSCs as the fragility basis, ASCE 43-05 provides a simple method for deriving the performance-based spectrum (PBS) for a risk of 10^-5 per year. One method for developing the PBS is to use


the PSHA-based mean uniform-hazard spectra for 10,000-year and 100,000-year return periods. The uniform risk PBS thus developed lies in between the two mean hazard spectra (see Figure 3 for comparison). The PBS also turns out to be smaller

Figure 2. Annual Seismic Core Damage Frequency at Existing Nuclear Power Plants [18] (horizontal axis: seismic core damage frequency, 1.00E-07 to 1.00E-03; vertical axis: cumulative distribution)

Figure 3. Comparison of the Mean 10^-4 and 10^-5 UHS and the Performance-Based GMRS [19] (spectral acceleration, g, versus frequency, Hz; curves shown: mean UHS 1E-5, mean UHS 1E-4, and the performance-based GMRS)
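A minimal sketch of the performance-based construction compared in Figure 3 is given below. It assumes the commonly cited ASCE 43-05/RG 1.208 form of the design factor, DF = max[1.0, 0.6(AR)^0.8], where AR is the ratio of the 10^-5 to the 10^-4 mean UHS ordinate, together with a 0.45 x (10^-5 UHS) floor; readers should consult ASCE 43-05 and RG 1.208 for design use. The spectral ordinates used here are illustrative placeholders, not site data.

    # Sketch of a performance-based GMRS derived from mean 1E-4 and 1E-5 UHS
    # ordinates, using the design-factor form commonly associated with
    # ASCE 43-05 / RG 1.208 (assumed here; not a substitute for the documents).
    # Spectral ordinates (frequency Hz -> g) are illustrative placeholders.
    uhs_1e4 = {0.5: 0.08, 2.5: 0.25, 10.0: 0.40, 25.0: 0.55, 100.0: 0.30}
    uhs_1e5 = {0.5: 0.18, 2.5: 0.55, 10.0: 0.95, 25.0: 1.40, 100.0: 0.75}

    def gmrs_ordinate(sa_1e4: float, sa_1e5: float) -> float:
        """Performance-based ordinate from the 1E-4 and 1E-5 mean UHS ordinates."""
        ar = sa_1e5 / sa_1e4                      # hazard amplitude ratio
        df = max(1.0, 0.6 * ar ** 0.8)            # design factor
        return max(df * sa_1e4, 0.45 * sa_1e5)    # with the 0.45 x 1E-5 floor

    for f in sorted(uhs_1e4):
        print(f"{f:6.1f} Hz: GMRS ~ {gmrs_ordinate(uhs_1e4[f], uhs_1e5[f]):.2f} g")

Consistent with Figure 3, the ordinates produced this way fall between the 10^-4 and 10^-5 mean UHS at each frequency.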


Figure 4. Sample Comparison of RG 1.60 0.30 g Spectrum, RG 1.165 10^-5 Median UHS, and ASCE 43-05 10^-5 PBS [20] (spectral acceleration, g, versus frequency, Hz; curves shown: approximate RG 1.165 median 10^-5 UHS, approximate ASCE performance-based spectrum, and RG 1.60 anchored at 0.30 g)

compared to the 100,000-year median uniform-hazard spectrum (UHS) per RG 1.165 (Figure 4 provides a sample comparison and includes the RG 1.60-based spectrum). The spectrum reduction is especially significant in the HF range, which was a matter of concern for the nuclear industry. Thus, it was clear that the performance-based approach per ASCE 43-05 would be an attractive option in place of the uniform-hazard-based approach per RG 1.165. (The case for using the performance-based [i.e., uniform risk] spectrum per ASCE 43-05 was made in [18].)

• The potential concern about design and testing of SSCs to address the HF content in CEUS ground motion had been acknowledged in the nuclear industry since the early 1990s. The general consensus was that HF excitation is inconsequential to structures and most systems (e.g., piping). The concern about chattering response of old electrical relays subjected to HF excitation could be easily addressed by simply replacing them with relays that use solid-state electronics (and preventing use of the old relays in future NPPs). An EPRI report [21] sought to address these issues in the early

1990s; however, further progress and NRC consensus were not achieved at the time because of insufficient regulatory impetus (i.e., no new plants were being built or licensed). In any case, these early developments provided a framework for further discussions with the NRC during the past few years.

• The advent of the standard plant concept also had implications for NPP seismic analysis and design. The standard plant is a priori designed for a site-independent seismic spectrum (called the certified seismic design response spectrum [CSDRS]) and an array of possible soil conditions, with the expectation that the selected seismic design parameters will envelop the site-specific design spectra and soil characteristics at the candidate sites. While attractive in principle, this goal has been somewhat elusive because most standard plant designs have employed design spectra that do not contain sufficient HF content to envelop the site-specific spectra for many CEUS hard-rock sites (see Figure 5). Furthermore, none of the standard plant designs would be able to withstand the expected design ground motion for West Coast sites (especially in the


low-to-medium frequency range). As a result, the standard plant designs often would need to be assessed for site-specific soil and ground motion parameters, and rules for conducting such assessments needed to be developed to address this new situation for the industry.

• The operating-basis earthquake (OBE) is another ground motion concept that has evolved over the past 15 years, especially as NSSS suppliers developed their respective standard plant concepts. Experience from seismic design/qualification tests for existing plants had shown that, compared to the SSE, the OBE rarely governed the final design of SSCs (especially the structures). This determination meant that the significant design, analysis, and testing effort expended for the OBE was not worthwhile, so the industry lobbied the NRC during the early 1990s to make the OBE an inspection-level earthquake rather than an explicit design-level earthquake. The NRC's earlier concurrence on this subject was first documented in SECY (Office of the Secretary) 93-087 [22] and later reflected in 10 CFR 50 Appendix S. The OBE concept, however, has needed some revisiting during the past few years as it became clear that not all of an NPP's seismic category I structures would be standardized

(e.g., intake structure configuration can vary from site to site, a reason why it is usually not part of the standard plant offering). Another factor is that the free-field motion corresponding to the foundation elevations of various structures differs because not all structures have the same embedment in soil. As the OBE and SSE are both meant to be site-dependent but structure-independent ground motions, it became necessary to define them clearly and to clarify their usage for both standard and non-standard plant structures.

• The phenomenon of ground motion incoherence, attributable to the wave passage effect as well as random incoherency, had been recognized for a long time. The wave passage effect reflects the fact that a time lag is associated with passage of a given wave such that identical particle motion at two different locations cannot happen at any given instant of time (the wave passage effect is captured in most commercial software for SSI analysis). Random incoherency, which is the more significant source of incoherency, relates to the fact that the particle motion at any location within the soil is a result of many reflected and refracted waves that pass through at any given time (i.e.,

Figure 5. Comparison of Design Motion Developed for a CEUS Rock Site, and Design Motions Used for Design Certification [23] (spectral acceleration, g, versus frequency, Hz, at 5% damping; curves shown: RG 1.60 (0.30 g), augmented RG 1.60 (0.30 g), and CEUS rock)


waves do not arrive in an orderly fashion; many waves are incident at a given time due to multiple reflections in the soil layers). It has been known that the incoherency increases with frequency and spatial separation between observation points. While some empirical models existed for describing ground motion incoherency, there was no proper technique to account for its effect on the seismic response of structures. ASCE 4-98 [24] permitted an ad hoc reduction of the design ground motion spectrum based on the foundation expanse and frequency range (i.e., larger reductions were permitted for HF range and large foundation sizes). However, because of the lack of rigor in this scheme, the NRC never fully endorsed it. Nonetheless, it was always recognized that the use of a good incoherency model, combined with its proper implementation into seismic SSI analysis (whereby the structure is analyzed for the incident incoherent motion), would result in a reduced significance of HF excitation. The trend toward using a very large common nuclear island basemat also meant that the resulting foundation sizes were ripe for realizing the benefit of incoherency. There was thus a clear impetus to develop a consensus on incoherency models and their treatment in the seismic analysis schemes.

• RG 1.138 [25] requires field measurements for seismic shear wave velocity and subsequent lab testing for determining dynamic soil properties (i.e., strain dependence of soil damping and modulus of elasticity) to characterize the seismic behavior of the underlying soil strata. Such tests need to be performed for both the in situ layers above the bedrock as well as for the backfill material to be placed on top of the uppermost competent in situ material. A recent development in this regard is the so-called resonant column/torsional shear (RC/TS) test. This increasingly used method, developed at the University of Texas at Austin, enables both damping and modulus characteristics to be determined from a single test and with multiple uses of the same specimen (compared to the separate cyclic triaxial [CT] tests and RC tests mentioned in RG 1.138). Although not mentioned directly in RG 1.138, the RC/TS test is considered to be superior for synthesizing the test results along with saving time. For these reasons, the nuclear industry became interested in seeking regulatory approval of this method.

The seismic-related activities in support of recent ESP and COL applications further highlighted the importance of the foregoing developments in terms of seismic ground motion development, site geotechnical studies, and subsequent seismic analysis and design. These developments thus formed the bases for the industry's desire to influence the seismic requirements.

SEISMIC ISSUES ADDRESSED SINCE 2004


Recognizing the importance of the issues discussed earlier, both the NRC and the NEI formed senior-level seismic task forces to achieve consensus and resolutions. These task forces have been meeting regularly since late 2004 and are now at a point at which they have resolved most of the issues, which are captured in RG 1.208 [19], the NRC's May 2008 Seismic Interim Staff Guidance (ISG) document [26], and the latest revisions of SRP Sections 2.5.2 [27] and 3.7.1 [28]. The following sections discuss some of the more important issues and their resolutions.

Option to Use Uniform-Hazard Spectrum (UHS) or Performance-Based Spectrum (PBS)
The industry (NEI) succeeded in persuading the NRC to allow the use of PBS. The NRC issued a new regulatory guide (RG 1.208 [19]) that allows the use of PBS in lieu of UHS per RG 1.165. Figures 3 and 4 illustrate why the PBS approach is attractive to the nuclear industry; most of the recent applicants have been opting for the PBS approach. Use of either method requires rigorous PSHA work and site amplification studies with support from top-notch geology and seismology experts from boutique firms and architectural/engineering companies, as described below:

• Geology and seismology experts performing detailed geological, seismological, and geophysical investigations to identify and characterize regional/local seismic sources.

• Seismology/probability experts performing PSHA work for developing UHS (and subsequently PBS, if desired) by considering the seismicity data for the sources and appropriate ground motion attenuation models. The spectra thus obtained correspond to the crystalline bedrock below the site, which can be a few tens of feet to many hundreds of feet below grade.

• Geotechnical experts performing site amplification studies to determine the free-field ground motion at various locations within the site soil profile (wherever structure foundations are to be located). This work



This work entails the use of test data for dynamic properties of the soil layers at the site (from the bedrock to the grade) and accounting for the uncertainties. Because the soil near the grade elevation is typically not of competent caliber, the properties of the structural backfill (or lean concrete layer) have to be accurately determined and accounted for in the site amplification studies. As a result, the design ground motion is initially established only at the uppermost competent soil layer because the backfill properties are not known upfront.

NRC Interim Staff Guidance Definitions of Key Ground Motion Terms, Including Interpretations of SSE and OBE

The NRC requires that the design ground motions be established at (1) the top of the uppermost competent in situ soil layer under the site (generally defined as a layer in which the seismic shear wave velocity is at least 1,000 ft/sec), and (2) various elevations corresponding to the bottoms of the foundations of all safety-related structures. The latter is necessary because the foundations are often located on top of structural fill materials that are used to replace the unsuitable upper layers at most sites. To ensure that the standard plant design is adequate for the site-specific ground motion, the site-independent design ground motion (which is applied as free-field ground motion either at the grade level or at the foundations of the concerned structures) used for design of the standard plant structures is compared with the site-specific spectra. With the advent of the standard plant concept, it also has become necessary to clarify what the OBE spectrum means for SSCs that are not part of the standard plant. The following new terms and definitions have thus been introduced in the recent NRC seismic ISG [26]:

•	Certified seismic design response spectrum (CSDRS): Site-independent seismic design response spectrum approved by the NRC for a certified standard plant design.

•	Ground motion response spectrum (GMRS): Site-specific ground motion response spectrum (horizontal and vertical) determined as free-field outcrop motions on the uppermost in situ competent material, determined using RG 1.165 or RG 1.208.

•	Foundation input response spectra (FIRS): Because the GMRS is established at the uppermost in situ competent layer, the resulting motion has to be transferred to the base elevations of each seismic category I foundation. These site-specific (amplified) ground motion response spectra at the foundation levels in the free field are referred to as FIRS and are derived as free-field outcrop spectra.

•	Safe-shutdown earthquake (SSE): The SSE for the site is the performance-based design motion defined at the ground surface. Given this definition, any deviant as-found conditions can be evaluated against this spectrum, provided that the condition is subsequently restored to the design basis (e.g., the CSDRS for the standard plant). Also, the slope stability and soil liquefaction analyses need to be performed using the site-specific SSE. This definition poses a dilemma because it is difficult to locate (and maintain) seismic monitoring instrumentation at the GMRS elevation. Therefore, the subject of ground motion monitoring requirements still needs to be sorted out. The NRC plans to publish a revised version of RG 1.12 [29] and possibly RG 1.166 [30] to address this issue with industry input. For now, the NRC's seismic ISG [26] states that the applicant's monitoring and instrumentation plan will be reviewed on a case-by-case basis.

•	Operating-basis earthquake (OBE): For license applications for the use of a certified standard plant design, the OBE ground motion is defined as follows:

	–	For the standard plant structures, the OBE ground motion is one-third of the CSDRS.

	–	For the safety-related structures that are not part of the certified design, the OBE ground motion is one-third of the design motion response spectra, as stipulated in the design certification conditions specified in the DCD (which could be the CSDRS or GMRS).

It is noted that, for situations when the DCD specifies the GMRS as the design motion response spectrum for site-specific (non-standard) structures, the OBE for such structures need only match (or exceed) one-third of the GMRS. As the GMRS is often lower than the CSDRS, this would result in a lower OBE for the non-standard SSCs. However, selection of a rather low OBE level poses an economic risk to the plant in that a potential occurrence of an earthquake that exceeds the low OBE threshold would trigger an extended plant shutdown for post-earthquake inspections.
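To make the UHS/PBS distinction concrete, the sketch below shows the general shape of the performance-based scaling used to derive a GMRS-type spectrum from uniform hazard ordinates, in the spirit of RG 1.208 [19] and ASCE 43-05 [17]. The design-factor form and the coefficients shown (a floor of 1.0 and the 0.6/0.8 constants) are included here only as an illustration of the approach; the governing documents define the actual procedure, and the input ordinates below are hypothetical.

```python
import numpy as np

def performance_based_spectrum(sa_1e4, sa_1e5):
    """Scale mean 1E-4/yr uniform hazard ordinates toward a performance goal.

    sa_1e4, sa_1e5 : spectral accelerations (g) at mean annual exceedance
                     frequencies of 1E-4 and 1E-5, at the same set of frequencies.
    Returns performance-based (GMRS-like) ordinates in g.
    """
    sa_1e4 = np.asarray(sa_1e4, dtype=float)
    sa_1e5 = np.asarray(sa_1e5, dtype=float)
    amp_ratio = sa_1e5 / sa_1e4                      # slope of the hazard curve, per frequency
    design_factor = np.maximum(1.0, 0.6 * amp_ratio ** 0.8)
    return design_factor * sa_1e4

# Hypothetical hard-rock ordinates at 1, 10, 25, and 50 Hz (illustrative numbers only):
sa_1e4 = np.array([0.10, 0.30, 0.45, 0.35])
sa_1e5 = np.array([0.25, 0.80, 1.40, 1.20])
print(np.round(performance_based_spectrum(sa_1e4, sa_1e5), 3))
```

In this illustration the result lies between the two UHS curves and varies with the local slope of the hazard curve at each frequency, rather than corresponding to a single return period; see Figures 3 and 4 of the paper for the actual comparison that makes the PBS approach attractive.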




Outcome of Initiatives for Reduction of HF Content and Its Impact

During recent years, the nuclear industry has mounted a multi-pronged effort to minimize the significance of HF content in CEUS hard-rock ground motions. (The HF content can still persist for medium-hard sites because their soil layers may not fully filter out the HF content.) This effort consisted of the following initiatives and outcomes:

•	Use of PBS is now permitted in lieu of the UHS approach, which results in an effectively smaller return period for the HF range of the response spectrum and correspondingly reduced spectral values (see Figure 3). This approach complies with RG 1.208.

•	Use of the cumulative absolute velocity (CAV) is permitted as a filter to help weed out seismic hazard contributions from low-magnitude earthquake events (a sketch of the basic CAV computation follows this list). It has been known that such events produce lower levels of ground shaking that usually do not damage most structures, let alone nuclear structures. It is also known that the low-magnitude events contribute more significantly to the HF content because the HF waves incur less attenuation compared with low-frequency waves. With this in mind, the industry proposed use of CAV (a term closely correlated with the earthquake magnitude) as a filter for eliminating seismic hazard contributions from low-magnitude events. The NRC endorsed this approach in RG 1.208 with the stipulation that a conservative CAV threshold of 0.16 g-seconds be used for the hazard calculation.

•	Truncation of the number of standard deviations included in defining the ground motion model, which can have a significant impact on the hazard estimates, is not permitted. An industry study to determine whether such models could be truncated with a limited number of standard deviations found that there was no rational basis for such truncation (except as implied by the inherent strength limit of the geologic materials through which the motion is transmitted). While RG 1.208 does not permit such truncation, it does acknowledge that the magnitude of the standard deviation need not be too conservative. That is, if better ground motion models are developed using more data, the conservatism in the standard deviation magnitude can be reduced.

•	Incorporation of ground motion incoherence is now permitted in the seismic SSI analysis.
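For reference, the sketch below shows the basic record-based definition of CAV, the time integral of the absolute acceleration, evaluated against the 0.16 g-seconds threshold cited above. This is only an illustration of the quantity itself: in the hazard calculation the CAV filter is applied through correlations with magnitude and distance rather than by integrating individual records, and the EPRI "standardized" CAV definition includes additional windowing rules not reproduced here. The record below is synthetic.

```python
import numpy as np

def cumulative_absolute_velocity(accel_g, dt):
    """CAV = integral of |a(t)| dt over the record, in g-seconds.

    accel_g : acceleration time history in units of g
    dt      : constant time step in seconds
    (Simple rectangle-rule integration keeps the sketch dependency-light.)
    """
    return float(np.sum(np.abs(np.asarray(accel_g, dtype=float))) * dt)

# Synthetic 20-second record: weak background shaking plus a brief stronger burst.
dt = 0.01
t = np.arange(0.0, 20.0, dt)
accel = 0.01 * np.sin(2.0 * np.pi * 5.0 * t)                        # low-level 5 Hz motion
accel[500:700] += 0.15 * np.sin(2.0 * np.pi * 10.0 * t[500:700])    # 2-second, 10 Hz burst

cav = cumulative_absolute_velocity(accel, dt)
print(f"CAV = {cav:.3f} g-s (threshold cited in the text: 0.16 g-s)")
```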


Several sample studies have confirmed that the application of incoherent ground motion in SSI analysis results in a reduced impact of the HF content (i.e., reduced HF acceleration levels in the in-structure response spectra and reduced response of walls and floors to HF excitation). Many researchers have proposed incoherency models, and the NEI in particular advocated those proposed by Abrahamson [31]. The NRC approved this model for rock sites and its incorporation into the SASSI program by Ostadan [32], among other similar implementations.

All of these industry initiatives, except for truncation of the number of standard deviations used in the ground motion model, have been successful in reducing the significance of HF content in the design ground motion. However, it is important to note that the wave incoherence benefit can be realized only when the unreduced (high-frequency-rich) spectrum is applied as (incoherent) free-field ground motion during the SSI analysis.

Impact of HF Content on Structural Modeling and Seismic Qualification

Despite the aforementioned improvements, there is still significant HF content in the ground motion for CEUS rock sites (and very stiff sites). Consequently, new rules have been introduced to ensure that structural analyses and seismic qualifications are conducted properly to capture the response to HF excitation:

•	Seismic analysis models must be refined enough to accurately capture response to the HF content (at least up to 50 Hz) of the horizontal and vertical GMRS/FIRS. The NRC considered this requirement to be important in developing accurate in-structure response spectra (ISRS) for walls and floors (as well as in accurately capturing any significant HF response of wall and floor panels). The ISG [26] further requires that the ISRS be developed for frequencies up to 100 Hz. The spectra thus developed will enable proper seismic qualification of HF-sensitive equipment and systems supported by the structures.

•	Use of screening techniques to identify and screen out electrical and mechanical equipment not sensitive to HF excitation is outlined in the ISG [26]. This screening helps narrow down the number of items that have to be qualified for HF excitation. It is expected that a large number of SSCs could be screened out using the screening criteria provided in the ISG document.



For the remaining SSCs, which are deemed to be HF sensitive, the seismic qualification method (whether by test, analysis, or a combination thereof) will need to be suitable to capture the response (and any vulnerability) to HF excitation.

Determination of Dynamic Soil Properties and Engineered Backfill

The RC/TS method has now been accepted by the NRC and is being widely used by most of the COL applicants. For some time, the University of Texas at Austin has been the only facility capable of conducting these tests for nuclear applications (i.e., with the requisite quality assurance program). While relatively efficient, an RC/TS test still takes about 1 week per granular sample and 2 weeks per cohesive sample (the test is not needed for hard-rock sites), which has created a bottleneck for the COL applicants. Soil testing also becomes a significant challenge for deep-soil sites, where hard rock is not reached even at depths of several hundred feet. Careful planning and coordination with other applicants are therefore needed at the outset of an ESP or COL project to ensure that the application submittal schedule is realistic.

Reconciliation of Site Parameters with Standard Plant Design

The standard design is based on a site-independent ground motion (represented by the CSDRS) and a variety of potential soil profiles to arrive at an SSI response. Generally speaking, a standard plant supplier considers several soil profiles deemed representative of candidate sites and provides a design that considers the maximum response from all such profiles. Therefore, from a seismic design standpoint, the following site-specific characteristics have to be considered in assessing whether the standard plant design envelops the site conditions:

•	Reconciliation with site-specific seismic design spectrum: The CSDRS must exceed the site GMRS (or structure-specific FIRS, if the structure is not founded at the GMRS elevation) at all frequencies across the spectrum. As noted earlier, this is often not the case for CEUS hard-rock sites because most suppliers did not choose a sufficiently HF-rich design spectrum for their standard design.

•	Reconciliation with site soil profile: The applicant must demonstrate that the site soil profile is bounded by or very close to one of the generic profiles considered in the standard plant design.

This condition can also be elusive because the presence of a soft soil lens by way of an engineered fill layer on top of a medium hard or hard rock layer (a not-so-uncommon scenario) has generally not been considered by most standard plant suppliers. The presence of such layers can in fact be problematic because of increased shaking levels due to reflection of seismic waves from the underlying hard layer. Most ancillary structures are not as deeply embedded as the nuclear island structure (which may be directly founded on rock at hard-rock sites). One remedy for such situations is the use of lean concrete as a backfill (in lieu of the usual compacted structural backfill), which can help align the resulting soil profile with one of the generic profiles considered in the standard plant design.

•	Inclination (dip) of the top surface of the uppermost competent in situ soil layer: The standard plant design typically factors in some dip (say, up to 20 degrees) for the soil layer where the CSDRS is applied. The applicant must verify that the dip at the site, if any, does not exceed the limit prescribed by the standard plant supplier.

•	Dynamic soil-bearing capacity and friction coefficient at the soil-foundation interface: The standard plant suppliers often prescribe a minimum value for the dynamic bearing capacity of the soil directly below the foundation and for the friction coefficient at the soil-foundation interface. These limits ensure that (1) the supporting soil at the site can withstand the potentially large transient bearing pressures caused during design-level shaking, and (2) there is enough friction resistance at the foundation interface to prevent sliding of the structure. Of these two parameters, the dynamic bearing capacity can be problematic because there are no prescribed tests to determine it. Consequently, the applicant (and/or the E&C company representing the applicant) must somehow justify that the soil in question has sufficient dynamic bearing capacity.

Site-specific reconciliation is warranted if one or more of the aforementioned conditions cannot be satisfied. As more COL applicants move forward, many applications are requiring such reconciliation, which consists of demonstrating that the critical sections of structures, as reported in the DCD, remain adequate for the forces and moments resulting from site-specific conditions.



(This evaluation often works out favorably because most standard plants have been designed very conservatively, using the envelope of design requirements stemming from several generic soil profiles.) The reconciliation leads to either (1) demonstrating that the site-specific ISRS are enveloped by those reported in the DCD, or (2) generating a new (higher) set of ISRS for the applicant's use during the detailed design phase. So, even with the advent of the standard plant concept, industry observers are noting that significant seismic analysis and design activities are taking place and are expected to continue.
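The enveloping checks described above (CSDRS versus GMRS/FIRS, and DCD ISRS versus site-specific ISRS) reduce to a frequency-by-frequency comparison of two spectra. The sketch below shows one simple way to automate such a check; the spectra and frequencies are hypothetical and are chosen only to mimic the HF exceedance discussed for CEUS hard-rock sites.

```python
import numpy as np

def envelopes(f_design, sa_design, f_site, sa_site):
    """Check whether a design spectrum envelops a site-specific spectrum.

    The design spectrum is interpolated (log-log) at the site frequencies,
    which should lie within the design frequency range. Returns a pass/fail
    flag and the worst site/design ratio.
    """
    f_design, sa_design = np.asarray(f_design, float), np.asarray(sa_design, float)
    f_site, sa_site = np.asarray(f_site, float), np.asarray(sa_site, float)
    sa_design_at_site = np.exp(
        np.interp(np.log(f_site), np.log(f_design), np.log(sa_design))
    )
    ratio = sa_site / sa_design_at_site
    return bool(np.all(ratio <= 1.0)), float(ratio.max())

# Hypothetical CSDRS-style design spectrum vs. an HF-rich site GMRS (values in g):
f_csdrs, sa_csdrs = [0.5, 2.5, 9.0, 25.0, 50.0], [0.20, 0.60, 0.60, 0.40, 0.30]
f_gmrs, sa_gmrs = [0.5, 5.0, 25.0, 50.0], [0.08, 0.30, 0.45, 0.50]

ok, worst = envelopes(f_csdrs, sa_csdrs, f_gmrs, sa_gmrs)
print(f"enveloped: {ok}, worst site/design ratio: {worst:.2f}")
```

Here the hypothetical GMRS exceeds the design spectrum above roughly 25 Hz, which is the kind of HF exceedance that triggers the site-specific reconciliation described above.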

Need for Refined Seismic Modeling

The task of generating refined analytical models to accurately capture seismic response for at least 50 Hz excitation is not easy. The following considerations apply to the size of concrete and soil elements (a simple sizing check is sketched at the end of this subsection):

•	For concrete elements, the element size depends on the shear wave velocity through concrete as well as the vibration frequencies of individual wall and floor panels. Consideration of these factors leads to element sizes in the range of 10 ft by 10 ft (sometimes slightly smaller for thinner floor slabs). The analytical models used for static analysis often have this degree of refinement. The key difference is that static analysis is not nearly as computationally intensive as seismic analysis; as a result, this level of mesh refinement can be considerably challenging for seismic analysis in terms of software and hardware limits.

•	For rock sites (with a shear wave velocity of at least 5,000 ft/sec), the maximum size of the soil elements can be as much as 20 ft by 20 ft for ensuring faithful transmission of 50 Hz frequency waves. This in itself is not a big problem in terms of the potential model size for the SSI analysis. The real problem arises with a common scenario wherein the safety-related structures are underlain by a lens of backfill material with low shear wave velocity characteristics (often 800 ft/sec to 1,000 ft/sec). In this situation, the element size must be limited to 3 ft to 4 ft in these soil layers, which in turn balloons the total number of soil elements used for the SSI analysis (the number of soil elements increases quickly because the soil volume modeled for SSI analysis is often quite large). The total number of elements dictates the computation time required to solve the SSI problem.

Making models work for 50 Hz frequency is a challenge for the whole nuclear industry. This latest NRC requirement has yet to be fully addressed by any of the NSSS vendors and E&C companies. SASSI is the most common and versatile SSI program in the industry; however, both the Bechtel version of the program and the commercially available SASSI version have difficulty meeting the challenge, especially for large structures (i.e., whereas the total number of nodes in SSI analyses was previously less than 100, the number may now reach several tens of thousands for even small structures).

Improvements are therefore needed in terms of both software enhancements and increased hardware capacity using many parallel processors. Before such an effort is undertaken (or in parallel with it), one possible option is to demonstrate the acceptability of a lower cutoff frequency (such as 25 Hz) by showing that the wall and floor responses and the associated ISRS remain essentially unchanged whether the analysis cutoff is 25 Hz or 50 Hz. It is also possible to show that the cutoff frequency can be smaller for soft-to-medium-stiff soil sites, because such soil layers may sufficiently filter out HF transmission into the superstructure. In any case, such demonstrations would likely be structure and/or soil profile specific, rather than grounds for an outright exemption from the current 50 Hz NRC requirement.
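The element sizes quoted in the bullets above follow from the usual rule of thumb that several elements are needed per shortest wavelength to be transmitted. The sketch below applies that rule with roughly five elements per wavelength; the factor of five and the velocities are assumptions for illustration, and actual meshing criteria should come from the analysis procedures in force.

```python
def max_element_size_ft(shear_velocity_fps, max_freq_hz, elements_per_wavelength=5):
    """Largest element dimension that still transmits waves up to max_freq_hz.

    The shortest wavelength of interest is Vs / f_max; dividing it by the
    assumed number of elements per wavelength gives the element size limit.
    """
    wavelength_ft = shear_velocity_fps / max_freq_hz
    return wavelength_ft / elements_per_wavelength

# Rock site vs. a low-velocity backfill lens, both checked for 50 Hz transmission:
for label, vs in (("rock (5,000 ft/s)", 5000.0), ("backfill (~900 ft/s)", 900.0)):
    print(f"{label:22s} -> max element size ~ {max_element_size_ft(vs, 50.0):.1f} ft")
```

The two results (about 20 ft for rock and 3 ft to 4 ft for low-velocity backfill) reproduce the limits cited above and show why a backfill lens balloons the soil element count.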


STILL-TO-BE-RESOLVED SEISMIC ISSUES

Most major seismic issues have been addressed during the past year or two. However, several issues, discussed in the following subsections, are continuing to be resolved.

New Ground Motion Models for the CEUS

As mentioned earlier, the magnitude of the standard deviation in the ground motion model can contribute significantly to a high hazard estimate. The problem has been especially significant for the CEUS because, unlike the Western United States, there has been a general lack of specific ground motion attenuation models for this region. To address this issue, the NRC recently sponsored a research project called NGA-East (Next Generation Attenuation models for the region) [33], which is patterned after a similar NGA-West project that was concluded in 2007. It is expected that the NGA-East work will lead to some reduction in seismic hazard estimates for the CEUS, which will be in line with similar benefits realized based on the results of the NGA-West project. While the final results from this project are a few years away, it is likely that the nuclear industry will start using some of the research data and formulations as they become available.

Application of FIRS for SSI Analysis

So far, the NRC has indicated that an SSI analysis should be conducted by applying the amplified ground motion at the grade level, whereby the input motion on the embedded portion of the structure (i.e., basement walls and basemat) is determined through a de-convolution analysis. The industry has been arguing against this approach. The NEI issued a white paper in September 2008 (see [34]) and reached an agreement with the NRC to use the performance-based design motion at the foundation level for SSI analysis. The NRC is in the process of documenting this agreement in a new ISG.



Engineered Backfill

Backfill properties are important for foundation design and for developing site-specific seismic responses of the plant structures. However, the fact that the backfill properties can only be measured once the backfill is placed during plant construction imposes a condition on the license (inspections, tests, analyses, and acceptance criteria [ITAAC] on backfill) to be met after the backfill is placed. This is not a desirable development for COL applicants. The NEI has formed a task force to develop a solution and reach an agreement with the NRC.

Moisture Barrier

All standard designs require a moisture barrier with a defined frictional capacity between the concrete and the barrier. A barrier design that meets the requirements has not been fully developed and is subject to future testing and performance assessment. The COL applications continue to accept a condition on the license (ITAAC) for a satisfactory design of a moisture barrier.

CONCLUSIONS

Several challenging seismic issues in designing nuclear power plants have already been dealt with, and more are still being addressed. Bechtel has remained engaged with other industry players in the successful resolution of these issues and has continued to assist customers (standard plant suppliers and utilities) with the difficult implementation process during the ESP and COL application phases. Thus, while its role has changed with the advent of the standard plant concept, Bechtel remains a key player in the nuclear industry. Bechtel's technical leadership in the seismic arena continues to be vital to maintaining its prominent industry role.

REFERENCES
[1]	10 CFR 100 Subpart A, "Evaluation Factors for Stationary Power Reactor Site Applications Before January 10, 1997 and for Testing Reactors."

[2]	10 CFR 100 Subpart B, "Evaluation Factors for Stationary Power Reactor Site Applications on or After January 10, 1997."

[3]	10 CFR 50 Appendix S, "Earthquake Engineering Criteria for Nuclear Power Plants."

[4]	10 CFR 52, "Licenses, Certifications, and Approvals for Nuclear Power Plants."

[5]	Bechtel Power Corporation Technical Bulletin TB 30H-G01G-TB031, "Issues and Challenges with the 10 CFR Part 52 Licensing Process," Rev. 0, January 2007.

[6]	NUREG-0800, "Standard Review Plan for the Review of Safety Analysis Reports for Nuclear Power Plants," Section 2.5.2, "Vibratory Ground Motion," Rev. 2, August 1989.

[7]	USNRC Regulatory Guide 1.60, "Design Response Spectra for Seismic Design of Nuclear Power Plants," Rev. 1, December 1973.

[8]	C.A. Cornell, "Engineering Seismic Risk Analysis," Bulletin of the Seismological Society of America, Volume 58, Issue 5, 1968, pp. 1583–1606, access via <http://bssa.geoscienceworld.org/cgi/reprint/58/5/1583>.

[9]	NUREG/CR-5250, "Seismic Hazard Characterization of 69 Nuclear Plant Sites East of the Rocky Mountains," Volumes 1–8, January 1989.

[10]	EPRI-NP-4726, "Probabilistic Seismic Hazard Evaluations at Nuclear Power Plant Sites in the Central and Eastern United States," All Volumes, 1989–1991.

[11]	NUREG-1488, "Revised Livermore Seismic Hazard Estimates for 69 Nuclear Power Plant Sites East of the Rocky Mountains," April 1994.

[12]	NUREG/CR-6372, "Recommendations for Probabilistic Seismic Hazard Analysis: Guidance on Uncertainty and Use of Experts," 1997.

[13]	National Research Council Report, "Review of Recommendations for Probabilistic Seismic Hazard Analysis: Guidance on Uncertainty and Use of Experts," 1997, access via <http://books.nap.edu/openbook.php?record_id=5487>.

[14]	USNRC Regulatory Guide 1.165, "Identification and Characterization of Seismic Sources and Determination of Safe Shutdown Earthquake Ground Motion," Rev. 0, March 1997.

[15]	NUREG-1742, "Perspectives Gained from Individual Plant Examination of External Events (IPEEE) Program," Final Report, All Volumes, April 2002.

[16]	DOE Standard 1020-02, "Natural Phenomena Hazards Design and Evaluation Criteria for Department of Energy Facilities," Department of Energy, January 2002.

[17]	ASCE Standard 43-05, "Seismic Design Criteria for Structures, Systems, and Components in Nuclear Facilities," American Society of Civil Engineers, January 2005.

[18]	S.R. Malushte and R.P. Kennedy, "Implications From Past Seismic Safety Assessments on Development of a Risk-Based Seismic Design Philosophy," 18th International Conference on Structural Mechanics in Reactor Technology (SMiRT 18), Beijing, China, August 7–12, 2005, Paper SMiRT18-K01-3, <http://www.iasmirt.org/iasmirt-2/SMiRT18/K01_3.pdf>.





[19]	USNRC Regulatory Guide 1.208, "A Performance-Based Approach to Define the Site-Specific Earthquake Ground Motion," Rev. 0, March 2007.

[20]	G. Hardy, "Recent Seismic Research Programs for New Nuclear Power Plants and Performance Based Design," 19th International Conference on Structural Mechanics in Reactor Technology (SMiRT 19), Toronto, Canada, August 12–17, 2007.

[21]	EPRI Report TR-102470, "Analysis of High-Frequency Seismic Effects," prepared by Jack R. Benjamin and Associates, Inc., and RPK Structural Mechanics Consulting, October 1993.

[22]	SECY 93-087, "Policy, Technical and Licensing Issues Pertaining to Evolutionary and Advanced Light-Water Reactor (ALWR) Designs," April 1993.

[23]	EPRI Report 1015108 (Technology Innovation Deliverable), "The Effects of High-Frequency Ground Motion on Structures, Components, and Equipment in Nuclear Power Plants," Technical Update, June 2007.

[24]	ASCE Standard 4, "Seismic Analysis of Safety-Related Nuclear Structures and Commentary," American Society of Civil Engineers, 1998.

[25]	USNRC Regulatory Guide 1.138, "Laboratory Investigations of Soils and Rocks for Engineering Analysis and Design of Nuclear Power Plants," Rev. 2, December 2003.

[26]	COL/DC-ISG-1, "Interim Staff Guidance on Seismic Issues Associated with High Frequency Ground Motion," May 2008.

[27]	NUREG-0800, "Standard Review Plan for the Review of Safety Analysis Reports for Nuclear Power Plants," Section 2.5.2, "Vibratory Ground Motion," Rev. 4, March 2007.

[28]	NUREG-0800, "Standard Review Plan for the Review of Safety Analysis Reports for Nuclear Power Plants," Section 3.7.1, "Seismic Design Parameters," Rev. 3, March 2007.

[29]	USNRC Regulatory Guide 1.12, "Nuclear Power Plant Instrumentation for Earthquakes," Rev. 2, March 1997.

[30]	USNRC Regulatory Guide 1.166, "Pre-Earthquake Planning and Immediate Nuclear Power Plant Operator Post-Earthquake Actions," Rev. 0, March 1997.

[31]	EPRI Report, "Hard-Rock Coherency Functions Based on the Pinyon Flat Array Data," prepared by N. Abrahamson, July 2007.

[32]	F. Ostadan and N. Deng, "SASSI-SRSS Approach for SSI Analysis with Incoherent Ground Motions," Bechtel National report to Nuclear Energy Institute, August 2008.

[33]	"NRC Seismic Research Program Plan FY 2008–2011," by Structural, Geotechnical & Seismic Engineering Branch, Division of Engineering, Office of Nuclear Regulatory Research, US Nuclear Regulatory Commission, January 2008, <http://peer.berkeley.edu/ngaeast/assets/nrc_research_plan_public.pdf>.

[34]	R.P. Kennedy and F. Ostadan, "Consistent Site-Response/Soil-Structure Interaction Calculations," Workshop on Seismic Issues, presentation to US Nuclear Regulatory Commission, Joint NRC-NEI Seismic Issues Meeting, Palo Alto, California, September 2008.

BIOGRAPHIES
Sanj Malushte, senior principal engineer and Bechtel Fellow, has more than 25 years of varied experience as a practicing civil/structural engineer, engineering supervisor, resident engineer, assistant chief engineer, project engineer, researcher, adjunct (part-time) faculty member, and technical specialist. During his 19 years at Bechtel Power, Dr. Malushte has served on many US and international fossil and nuclear power projects in various roles during all phases of projects. He has also provided technical consultation to several other projects for Bechtel's Mining & Metals, OG&C, and Bechtel National divisions.

Outside of Bechtel, Dr. Malushte spent a year designing structures for chemical and industrial process facilities. He also has 9 years of experience as an adjunct faculty member at The Johns Hopkins University, Baltimore, Maryland, where he has taught graduate-level structural engineering courses in structural dynamics, advanced steel design, and earthquake engineering/seismic design, subjects that he has taught extensively in-house at Bechtel as well. Prior to joining Bechtel, Dr. Malushte worked for 6 years as a research/teaching assistant, research associate, and post-doctoral research scientist at the Virginia Polytechnic Institute and State University (Virginia Tech), Blacksburg, Virginia, while completing his graduate studies. At Virginia Tech and later at Bechtel, he worked on several research projects funded by the National Science Foundation (NSF), United Technologies Research Center, and Bechtel, in the fields of earthquake engineering, structural dynamics, computational fluid mechanics, and modular walls/floors using steel-concrete composite construction.

Dr. Malushte's primary expertise is in the fields of earthquake structural engineering, structural mechanics, and analysis for impact loads. He also has significant expertise in interpretation and application of US/international structural codes/standards, steel/concrete/composite design, and use of the finite element method.

Dr. Malushte is an active member of several key American Society of Civil Engineers (ASCE) and American Institute of Steel Construction (AISC) code/standard committees related to nuclear and nonnuclear structures. He is also a member of the National Earthquake Hazards Reduction Program (NEHRP) Task Committee for Nonbuilding Structures and Nonstructural Components and the Nuclear Energy Institute (NEI) Seismic Issues Taskforce. Dr. Malushte has been an invited member on peer review panels for the National Institute of Standards and Technology (NIST) and the NSF, and is a past associate editor of ASCE's Journal of Structural Engineering. He currently serves as an adviser to the Korean Society of Steel Construction (KSSC) for their research



and standardization program on modular composite construction for nuclear facilities, and is the chair of an associated AISC standard committee. Dr. Malushte has presented several invited seminars worldwide, authored or co-authored more than 25 journal and conference papers, and has served as a peer reviewer for numerous technical journals. He is a fellow of ASCE and the UK's Institution of Civil Engineers, and in 2005, he was elected a Bechtel Fellow, the highest technical recognition conferred within Bechtel.

Dr. Malushte received a PhD and an MS in Engineering Mechanics, and also an MS in Civil Engineering, all from Virginia Tech; a master's degree in Engineering Management from George Washington University, Washington, DC; and a bachelor's degree with Honors from the University of Bombay, India. Dr. Malushte is a licensed civil, mechanical, and structural engineer in the United Kingdom (UK), and in several US states, including California.

Orhan Gürbüz is a Bechtel Fellow and senior principal engineer with over 35 years of experience in structural and earthquake engineering. As a Fellow, he is an advisor to senior management on technology issues, and represents Bechtel in technical societies and at industry associations. As a senior principal engineer, he provides support to various projects and performs design reviews and independent peer reviews. The scope of work includes development of design criteria, seismic evaluations, structural evaluations and investigations, technical review and approval of design, serving as Independent Peer Reviewer for special projects, investigation and resolution of design and construction issues, and supervision of special analyses.

Dr. Gürbüz is a member of the American Society of Civil Engineers Dynamic Analysis of Nuclear Structures Committee and the American Concrete Institute 349 Committee. These committees develop and update standards and codes used for the nuclear safety-related structures, systems, and components.

Dr. Gürbüz received a PhD and an MS in Structural Engineering, and a BS in Civil Engineering, all from Iowa State University, Ames, Iowa.

Joe Litehiser, Jr., joined Bechtel straight out of graduate school. During his 38 years with the company, he has worked as a senior geologist and engineering specialist (seismologist) in what is now the Geotechnical & Hydraulic Engineering Services (G&HES) group. During this time, Dr. Litehiser has developed, reviewed, and/or approved site-specific earthquake design criteria under foreign and domestic regulatory provisions for more than 700 projects across multiple GBUs. He has also published, with Bechtel colleagues, an empirical attenuation relationship for vertical ground motion as well as a probabilistic earthquake shaking hazard algorithm to estimate

probabilistic liquefaction potential; defended seismic design load choices to projects, foreign consultants, and before licensing panels; managed the Seismology/Geophysics Technical Working Group; and developed a white paper on regional seismicity and faulting for the power plant projects in Turkey following a nearby damaging earthquake.

Dr. Litehiser has been quite active in various organizations. He is a member of (and for 20 years was the secretary of) the Seismological Society of America and the Earthquake Engineering Research Institute; is chairman of ANS 2.30, a committee charged with maintaining the nuclear industry standard for characterization of neotectonic features; was a past member and chairman of the San Francisco Seismic Investigations and Hazards Survey Advisory Committee (SIHSAC), a group of technical specialists charged with keeping the San Francisco Board of Supervisors apprised of appropriate earthquake hazard preparedness measures; and is a past member and chairman of the policy advisory board of the Bay Area Regional Earthquake Preparedness Project (BAREPP).

Dr. Litehiser has authored more than 25 papers, reports, and presentations in technical journals or conference proceedings, and has also been the editor for one book. In 2001, he was elected a Bechtel Fellow in recognition of his substantial technical achievement on behalf of Bechtel.

Dr. Litehiser received a PhD in Seismology and an MA in Geophysics, both from the University of California, Berkeley, and an AB in Geology from Indiana University, Bloomington, Indiana. He is a registered geologist in the state of California.

Farhang Ostadan, a Bechtel Fellow, has more than 25 years of experience in geotechnical and geotechnical earthquake engineering and foundation design. As chief soils engineer for Bechtel, he has overall responsibility for this discipline and manages the efforts of a large and diverse group of geotechnical specialists in locations across the US and around the globe. His project oversight responsibilities range from major transportation projects to petrochemical, nuclear, and power- and energy-related projects.

Dr. Ostadan has published more than 30 technical papers on topics relating to geotechnical earthquake engineering. He co-developed a method for dynamic soil-structure interaction analysis currently in use by the industry worldwide. Dr. Ostadan is a frequent lecturer at universities and research organizations.

Dr. Ostadan is currently a member of the American Society of Civil Engineers (ASCE), Geotechnical Division; the Earthquake Engineering Research Institute (EERI); and the National Earthquake Hazard Reduction Program (NEHRP) Foundation Committee, and is a past member of California's Seismic Safety Commission.

Dr. Ostadan received a PhD in Civil Engineering from the University of California, Berkeley; an MS in Civil Engineering from the University of Michigan, Ann Arbor; and a BS in Civil Engineering from the University of Tehran, Iran.





Bechtel Fellows
Chosen for their substantial technical achievement over the years, the Bechtel Fellows advise senior management on questions related to their areas of expertise, participate in strategic planning, and help disseminate new technical ideas and findings throughout the organization.

Prem Attanayake, PhD

Amos Avidan, PhD

August Benz

Siv Bhamra, PhD

Peter Carrato, PhD

Doug Elliot, PhD

Angelos Findikakis, PhD

Benjamin Fultz

Orhan Grbz, PhD

William Imrie

Joe Litehiser, PhD

Jake MacLeod

Sanj Malushte, PhD

Cyrus B. Meher-Homji

Ram Narula

Farhang Ostadan, PhD

Stew Taylor, PhD

Linda Trocki, PhD

Ping Wan, PhD

Fred Wettling

Major Offices In:

Brisbane, Australia
Frederick, Maryland, USA
Houston, Texas, USA
London, England, UK
Montreal, Canada
New Delhi, India
San Francisco, California, USA
Santiago, Chile
Shanghai, China
Taipei, Taiwan, ROC

www.bechtel.com
