
Analytical Modeling for Thermodynamic Characterization of Data Center Cooling Systems

Madhusudan Iyengar
e-mail: mki@us.ibm.com

Roger Schmidt
e-mail: c28rrs@us.ibm.com

International Business Machines Systems and Technology Group, Poughkeepsie, NY 12601

The increasingly ubiquitous nature of computer and internet usage in our society has driven advances in semiconductor technology, server packaging, and cluster level optimizations in the IT industry. Not surprisingly, this has an impact on our societal infrastructure with respect to providing the requisite energy to fuel these power hungry machines. Cooling has been found to contribute about a third of the total data center energy consumption and is the focus of this study. In this paper we develop and present physics based models that allow the prediction of the energy consumption and heat transfer phenomena in a data center. These models allow the estimation of the microprocessor junction and server inlet air temperatures for different flow and temperature conditions at various parts of the data center cooling infrastructure. For the case study example considered, the chiller energy use was the biggest fraction, at about 41%, and was also the most inefficient. The room air conditioning was the second largest energy component and was also the second most inefficient. A sensitivity analysis of plant and chiller energy efficiencies with chiller set point temperature and outdoor air conditions is also presented. DOI: 10.1115/1.3103952

Introduction

The use of computers and the internet has become startlingly pervasive in today's society. Not surprisingly, this has an impact on our societal infrastructure with respect to providing the requisite energy to fuel these power hungry machines. A recent study from the Lawrence Berkeley National Laboratory (LBNL) [1] reported that in 2005 server driven power usage amounted to 1.2% (5000 MW) and 0.8% (14,000 MW) of the total energy consumption of the United States and the world, respectively. The cost of this 2005 energy was $2.7B and $7.2B for the United States and the world, respectively [1]. This study also reports a doubling of the server related electricity consumption between 2000 and 2005. Thus, understanding and improving the energy efficiency of data center systems is of paramount importance from a cost and sustainability perspective. This importance is reflected in the growing industry interest and activity in this field [2,3]. The increasingly ubiquitous nature of computer and internet usage in our society has driven advances in semiconductor technology, server packaging, and cluster level optimizations in the IT industry. Microprocessor performance has improved exponentially from the reduction in transistor length scales and the use of multiple cores on a single chip, thus resulting in higher powered chips. Many more such higher performance microprocessor modules and memory devices are now packaged into a single server node, thus yielding frighteningly high rack powers. Figure 1 displays the ASHRAE trends in rack heat load [4], showing the 2005 computer server rack heat fluxes to be about 43,056 W/m2 (4000 W/ft2), which translates to 27,000 W for a 19 in. rack. A 19 in. rack is one of the standard server racks, with a footprint of 0.61 m wide by 1.016 m deep (24 in. by 40 in.). It is not at all uncommon for rack heat loads today to be in the 32 kW range for a 19 in. rack, which translates to a rack heat flux of 52,000 W/m2 (4800 W/ft2). Recent experimental studies by Schmidt and co-workers [5-7] presented measured values for average and hot spot heat fluxes in high density computing data centers.
Contributed by the Electrical and Electronic Packaging Division of ASME for publication in the JOURNAL OF ELECTRONIC PACKAGING. Manuscript received June 30, 2008; final manuscript received December 10, 2008; published online April 2, 2009. Assoc. Editor: Koneru Ramakrishna. Paper presented at the ASME Interpack 2007.

In one of the 2005 measurements, a server cluster test facility [7] showed extremely high hot spot heat fluxes of 7750 W/m2 (720 W/ft2) over an area of 40 m2 (440 ft2). The total area of this cluster [7] was 1486 m2 (16,000 ft2), and the net energy consumption was 3.1 MW. Almost all the electrical energy consumed by the chip package is released into the surroundings as heat, which places an enormous burden on the environmental cooling infrastructure. Existing cooling technology typically utilizes air to carry the heat away from the chip and rejects it to the ambient. In a typical data center facility, this ambient environment is an air-conditioned room. The air-conditioning units usually receive chilled water from a refrigeration chiller plant, which is, in turn, often cooled at its condenser using cooling tower cooled water. Thus, the obvious consequence of this steep increase in the chip, node, rack, and data center cluster power consumptions is the corresponding increase in the energy needs of the building level cooling infrastructure. The occurrence of hot spots on the chip, the server node, and the data center floor is a source of inefficiency in the cooling of these computing systems. Using data provided by another recent LBNL study [8], the cooling energy consumption of today's data centers can be estimated to be about 30-40% of the total facility energy consumption. For a 10 MW facility, this represents a $2.6-3.5M annual cost using a 0.1 $/kWh value for the unit energy expense, thus justifying research and engineering efforts on improving the overall energy efficiency of cooling systems. In this paper we develop and present analytical models that allow the prediction of the thermal performance and energy efficiency of the entire stack of components that make up the data center cooling infrastructure. The intent of this work is to begin to answer questions that are germane to the cooling energy use in data centers, for example: What is the breakdown of the cooling energy consumption between the various components of a facility? What is the impact on total cooling energy consumption of an increase in the chilled water set point temperature? How much do the cooling energy requirements change between the summer and the winter months?


Fig. 1 Industry data center rack heat flux trends [4]

Data Center Cooling Energy Usage

In a typical data center, electrical energy is drawn from the main grid utility lines to power an uninterruptible power supply (UPS) system, which then provides power to the IT equipment. Electrical power from the main grid is also used to supply power to offices, as well as to power the cooling infrastructure, i.e., the computer room air-conditioning (CRAC) units, the building chilled water pumps, and the water refrigeration plant. Detailed energy consumption data were presented in a recently published Pacific Gas and Electric report, which was prepared by Rumsey Engineers and the University of California at Berkeley [9]. The IT equipment was found to consume, on average, 56% of the total load, with the heating, ventilation, and air-conditioning (HVAC) cooling energy consumption averaging about 31% of the total load [8,9]. The HVAC cooling is made up of three elements: the refrigeration chiller plant (including the cooling tower fans and condenser water pumps, in the case of water-cooled condensers), the building chilled water pumps, and the data center floor air-conditioning units (CRACs). While there are many different perspectives for optimizing such a computing facility, ensuring device reliability by delivering uninterruptible power and cool air to the inlet of the electronics remains the most important goal. The computer equipment is usually designed with the assumption of rack air inlet temperatures of 20-30°C. Airflow distribution within a data center has a major impact on the thermal environment of the data processing equipment located within these rooms. To provide such a cool and controlled humidity environment, customers of such equipment commonly utilize two types of air distribution configurations, namely, the underfloor supply and overhead supply layouts. Figure 2 shows the most prominent high performance data center floor air ventilation configuration, namely, the raised floor arrangement, wherein the chilled air enters the room via floor vents and exits the room into air-conditioning units. The chilled air enters the room via perforated floor tiles, passes through the racks, getting heated in the process, and then finds its way to the intake of the air-conditioning (CRAC) units, which cool the hot air and blow it into the underfloor plenum. Subambient refrigerated water leaving the chiller plant evaporator is circulated through the CRAC units using building chilled water pumps.

A condenser pump circulates water between the chiller condenser and an air cooled cooling tower. Thus, in the standard facility cooling design considered in this study, the primary energy consumption components are as follows:

the server fans
the room air-conditioning unit blowers
the building chilled water pumps
the refrigeration compressors
the condenser water pumps
the cooling tower blowers

There are many variations to the plant level cooling infrastructure assumed in this paper. For example, the CRAC units can be comprised of a refrigerant loop, or the chiller might be eliminated with cooling tower water being routed directly to the CRAC units, or the chiller could possess an air cooled condenser, or the cooling tower might be of a natural draft design without forced air movement. All these and other such nonstandard configurations are not considered in this study.

Plant Energy Consumption Model

Figure 3 displays a schematic of the cooling energy flow for a data center facility, showing the electrical power dissipated by the microprocessors as heat being carried away by successive thermally coupled coolant loops, which consume energy either due to pumping or compression work. It should be noted that the heat gained by the building envelope via solar radiation and the heat lost via convection, conduction, and radiation from the same surfaces have been neglected in the model presented in this paper.

3.1 Coolant Flow Calculations. The flow rates of the various coolant loops shown in Fig. 3, which carry heat away from the server electronics to the environment, can be calculated using

flowmodule = CFMmodule / 2118.9   (1)

flownode = Nmodule,p · flowmodule / (1 - βnode)   (2)

flowrack = Nnode · flownode / (1 - βrack)   (3)


Fig. 2 Traditional room air-conditioning design with a raised floor, an underfloor plenum, CRAC units, and perforated tiles

flowrack,total = Nrack · flowrack   (4)

flowrack,supply = flowrack,total · βsupply   (5)

flowdatacenter,exact = flowrack,supply / (1 - βleakage)   (6)

flowdatacenter = flowdatacenter,exact / βoverprovision   (7)

flowCRAC = CFMCRAC / 2118.9   (8)

flowBCW = GPMBCW · 3785 / (60 · 1,000,000)   (9)

flowCTA = CFMCTA / 2118.9   (10)

flowCTW = GPMCTW · 3785 / (60 · 1,000,000)   (11)

where flowmodule, flownode, flowrack, flowrack,total, flowrack,supply, flowdatacenter,exact, flowdatacenter, flowCRAC, and flowCTA are the volumetric air flow rates in SI units through the processor module, the server node, the server rack, all the server racks, the total of all the tiles in front of all the racks, the entire data center floor without overprovisioning, the data center floor with overprovisioning, the air-conditioning unit (CRAC), and the cooling tower, respectively. The water flow rates in SI units through the building chilled water loop and the cooling tower loop are represented by flowBCW and flowCTW, respectively. CFMmodule, CFMCRAC, and CFMCTA are the air flow rates in cubic feet per minute through the module, the CRAC, and the cooling tower, respectively, and GPMBCW and GPMCTW are the water flow rates in gallons per minute flowing through the building chilled water and cooling tower loops, respectively. Nmodule,p, Nnode, and Nrack are the number of modules in parallel in the server node, the number of server nodes in a rack, and the number of server racks on the data center floor. In Eqs. (1)-(11), the factors βnode, βrack, βsupply, βleakage, and βoverprovision characterize the fraction of the total node air flow that bypasses the microprocessor modules; the fraction of the rack level air flow that bypasses the server nodes; the fraction of the total rack air flow rate that is actually supplied by the perforated tiles located in front of the rack; the fraction of the total data center air flow that bypasses the perforated tiles and leaks out from cable cutout openings and the seams in the tiles; and the normalized overprovisioning factor present in the total data center air flow rate. Regarding βoverprovision, a value of 0.5 means there are twice the number of CRAC units than necessary; a value of 1 represents the exact number of CRAC units needed.
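To make the chain of Eqs. (1)-(11) concrete, the short Python sketch below walks the air flow cascade from a single module up to the data center floor using the Table 1 inputs. The unit conversion constant mirrors Eqs. (1), (8), and (10); the function and variable names, and the beta_* labels for the bypass and supply fractions, are ours and purely illustrative.

```python
# Sketch of the coolant flow cascade of Eqs. (1)-(11), using Table 1 inputs.
# Variable names and structure are illustrative, not from the paper.

CFM_TO_M3S = 1.0 / 2118.9                   # cubic feet per minute -> m^3/s
GPM_TO_M3S = 3785.0 / (60 * 1_000_000)      # gallons per minute -> m^3/s

def data_center_air_flows(cfm_module, n_module_p, n_node, n_rack,
                          beta_node, beta_rack, beta_supply,
                          beta_leakage, beta_overprovision):
    """Return module, node, rack, and data center floor air flows in m^3/s."""
    flow_module = cfm_module * CFM_TO_M3S                       # Eq. (1)
    flow_node = n_module_p * flow_module / (1 - beta_node)      # Eq. (2)
    flow_rack = n_node * flow_node / (1 - beta_rack)            # Eq. (3)
    flow_rack_total = n_rack * flow_rack                        # Eq. (4)
    flow_rack_supply = flow_rack_total * beta_supply            # Eq. (5)
    flow_dc_exact = flow_rack_supply / (1 - beta_leakage)       # Eq. (6)
    flow_dc = flow_dc_exact / beta_overprovision                # Eq. (7)
    return flow_module, flow_node, flow_rack, flow_dc

# Table 1 case study values
flows = data_center_air_flows(cfm_module=18, n_module_p=2, n_node=42, n_rack=350,
                              beta_node=0.3, beta_rack=0.0, beta_supply=0.555,
                              beta_leakage=0.5, beta_overprovision=1.0)
print(f"module {flows[0]:.4f}, node {flows[1]:.4f}, "
      f"rack {flows[2]:.3f}, floor {flows[3]:.1f} m^3/s")
```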
3.2 Coolant Pressure Drop Model. The hydraulic pressure drop for the coolant flow through the loops shown in Fig. 3 can be expressed as

ΔPnode = Cnode^2 · flownode^2   (12)

ΔPfd = Cfd^2 · flowrack^2   (13)

ΔPrd = Crd^2 · flowrack^2   (14)

ΔPCRAC = CCRAC^2 · flowCRAC^2   (15)

Fig. 3 Cooling energy and heat flow in a data center system


ΔPBCW = CBCW^2 · flowBCW^2   (16)

ΔPCTW = CCTW^2 · flowCTW^2   (17)

ΔPCTA = CCTA^2 · flowCTA^2   (18)

where ΔPnode, ΔPfd, ΔPrd, ΔPCRAC, ΔPBCW, ΔPCTW, and ΔPCTA are the pressure drops through the server node, the rack front cover, the rack rear cover, the CRAC unit and the data center floor associated with that specific CRAC unit, the building chilled water loop, the cooling tower water loop, and the cooling tower air loop, respectively. The corresponding pressure loss coefficient terms used in Eqs. (12)-(18) to calculate these pressure drops are Cnode, Cfd, Crd, CCRAC, CBCW, CCTW, and CCTA, which represent the server node, the rack front cover, the rack rear cover, the air-conditioning unit (CRAC) and the data center floor associated with that specific CRAC unit, the building chilled water loop piping, the cooling tower water loop piping, and the cooling tower air-side open loop, respectively. Except for the CRAC flow loop pressure loss coefficient term CCRAC, all the other coefficients used in Eqs. (12)-(18) can be calculated or derived using empirical data from building data collection systems or manufacturers' catalogs for the various components. The pressure loss arising from flow through the underfloor plenum of a raised floor data center and through the perforated tiles is usually an order of magnitude smaller than the pressure loss through the CRAC unit itself. The pressure drop through the CRAC unit (CCRAC,int) is made up primarily of the losses due to the tube and fin coils, the suction side air filters, and the several expansion, contraction, and turning instances experienced by the flow. Usually using measurement data, the pressure loss coefficient term associated with a single CRAC unit on the data center floor can be defined as

CCRAC = CCRAC,int · (1 + αCRAC)   (19)

where (1 + αCRAC) is a multiplication factor that corrects the internal CRAC pressure loss coefficient term to include the underfloor plenum and the perforated tiles.

3.3 Electrical Power Consumption Model for Coolant Pumping/Compression. The electrical pumping power consumed in pumping coolant through the various loops depicted in Fig. 3 is given by

IPnode = ΔPnode · flownode / ηfan   (20)

IPfd = ΔPfd · flowrack / ηfan   (21)

IPrd = ΔPrd · flowrack / ηfan   (22)

IPrack = Nnode · IPnode + IPfd + IPrd   (23)

IPrack,total = Nrack · IPrack   (24)

IPCRAC = NCRAC · ΔPCRAC · flowCRAC / ηCRAC   (25)

IPBCW = ΔPBCW · flowBCW / ηBCW   (26)

IPCTW = ΔPCTW · flowCTW / ηCTW   (27)

IPCTA = ΔPCTA · flowCTA / ηCTA   (28)

Thus, the total facility electrical power used for coolant pumping and compression can be calculated as

IPtotal = IPrack,total + IPCRAC + IPBCW + Wchiller + IPCTW + IPCTA   (29)

where Wchiller is the work done at the refrigeration chiller compressor and is calculated using the equations presented in Sec. 4. In Eq. (25) the variable NCRAC is the number of CRAC units on the data center floor, which is given by

NCRAC = flowdatacenter / flowCRAC   (30)

Since the heat removed by a CRAC unit can vary significantly based on the suction air temperature, the inlet chilled water temperature, and the air and water flow rates, respectively, it was found that using Eq. (30) is the optimal method to calculate the required number of CRAC units, rather than using the ratio of the total heat load that needs to be removed to the rated CRAC heat load. Naturally, if the system is defined and NCRAC is a known quantity, there is no need to use Eq. (30).
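A minimal sketch of how Eqs. (12)-(28) combine: each loop's pressure drop is quadratic in its flow, and the electrical power is the ideal flow work divided by an electromechanical efficiency. The numbers below reuse the Table 1 server node values; the helper names are illustrative, not from the paper.

```python
# Sketch of Eqs. (12)-(28): quadratic pressure drops and the electrical power
# needed to drive each coolant loop. Values and names are illustrative (Table 1).

def pressure_drop(loss_coeff, flow):
    """Eqs. (12)-(18): dP = (C * flow)^2, with C in Pa^0.5 / (m^3/s)."""
    return (loss_coeff * flow) ** 2

def pumping_power(dP, flow, eta):
    """Eqs. (20)-(28): ideal flow work dP*flow divided by an
    electromechanical efficiency eta."""
    return dP * flow / eta

# Example: one server node (Table 1: C_node = 375 Pa^0.5/(m^3/s), eta_fan = 0.1)
flow_node = 0.0243                                      # m^3/s, from the Eq. (2) cascade
dP_node = pressure_drop(375.0, flow_node)               # Pa, Eq. (12)
ip_node = pumping_power(dP_node, flow_node, eta=0.1)    # W, Eq. (20)
print(f"dP_node = {dP_node:.1f} Pa, node fan power = {ip_node:.2f} W")
```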

Plant Heat Transfer Model

While the calculation of the pumping power for the different coolant loops displayed in Fig. 3 is relatively straightforward, the thermal analysis is significantly more complex. For example, the heat removed by the cooling tower needs to be equal to the sum of the heat rejected at the refrigeration chiller condenser and the energy expended by the condenser water pumps. Another example of the coupling necessary to satisfy energy balance is that the heat extracted by the CRAC units needs to be equal to the sum of the heat dissipated by the IT equipment and the CRAC blowers. A model for the calculation of the heat transfer that occurs in various parts of the facility cooling infrastructure is described in this section.

4.1 Data Center Heat Load Calculations. The data center IT equipment heat load can be calculated using the following equations:

qnode = Nmodule · qmodule / φnode   (31)

qrack = Nnode · qnode   (32)

qrack,total = Nrack · qrack   (33)

qdatacenter = qrack,total + IPCRAC   (34)

qBCW = qdatacenter / φBCW   (35)

qchiller = qBCW + IPBCW   (36)

where qnode, qrack, qrack,total, qdatacenter, qBCW, and qchiller are the heat loads at the node, the rack, the total of all the racks, the total data center floor, the total building chilled water loop, and the refrigeration chiller evaporator, respectively. Nmodule is the total number of modules in a node, which is given by

Nmodule = Nmodule,p · Nmodule,s   (37)

where Nmodule,s and Nmodule,p are the numbers of CPU modules inside the server node that are in series and in parallel, respectively, assuming a rectangular array of modules in a series and parallel configuration. The heat load factors φnode and φBCW represent the fraction of the node heat load consumed by the modules, and the fraction of the total building chilled water heat load contributed by the data center, respectively.
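The heat load cascade of Eqs. (31)-(37) can be sketched the same way. The module count, heat load, and phi_* fractions below are the Table 1 values; the CRAC blower and chilled water pump powers passed in are placeholder assumptions, since in the full model they come from Eqs. (25) and (26).

```python
# Sketch of the heat-load cascade of Eqs. (31)-(37). The phi_* fractions are
# our labels for the heat-load factors described in the text (Table 1 values).

def heat_loads(q_module, n_module_p, n_module_s, n_node, n_rack,
               phi_node, phi_bcw, ip_crac, ip_bcw):
    n_module = n_module_p * n_module_s                 # Eq. (37)
    q_node = n_module * q_module / phi_node            # Eq. (31)
    q_rack = n_node * q_node                           # Eq. (32)
    q_rack_total = n_rack * q_rack                     # Eq. (33)
    q_datacenter = q_rack_total + ip_crac              # Eq. (34): add CRAC blower power
    q_bcw = q_datacenter / phi_bcw                     # Eq. (35)
    q_chiller = q_bcw + ip_bcw                         # Eq. (36): add BCW pump power
    return q_datacenter, q_bcw, q_chiller

q_dc, q_bcw, q_ch = heat_loads(q_module=100.0, n_module_p=2, n_module_s=1,
                               n_node=42, n_rack=350, phi_node=0.5, phi_bcw=1.0,
                               ip_crac=250e3, ip_bcw=50e3)  # blower/pump powers assumed
print(f"IT + blower load {q_dc/1e6:.2f} MW, chiller evaporator load {q_ch/1e6:.2f} MW")
```

With the Table 1 inputs this reproduces the 5.88 MW of server electronics heat load quoted in the Results section before the blower and pump contributions are added.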

4.2 Chiller Thermal Performance. The chiller thermodynamic work is a function of the heat load at its evaporator, the temperature of the water entering the condenser, the desired set point temperature of the water leaving the evaporator, and several more operating and design parameters, including the loading of the chiller with respect to its rated capacity. Figure 4 shows a scatter plot of the reciprocal of the chiller coefficient of performance (COP) versus the reciprocal of the chiller evaporator heat load. This data set of 113 points was collected via manufacturer test information for one of the chillers located in the IBM Poughkeepsie plant.


Fig. 4 Data for the reciprocal of COP versus the reciprocal of heat loading for the IBM chiller

The plot shows the typical chiller behavior of decreasing 1/COP with decreasing values of the reciprocal of the heat load (1/MW), until the heat load is almost 100% of the rated capacity (7000 kW for a 2000 ton chiller). The typical possible range of chiller usage is 20-100% loading. The minimum of the plot corresponds to an operating point very close to full loading, while the steep increase in the 1/COP term to the left of this point in Fig. 4 represents the overloaded region. While there are several analytical models in the literature [10-13] that help characterize chiller operation, the Gordon-Ng model [10] was chosen in this paper for its simplicity and the ease with which commonly available building data can be regressed to fit the model coefficients. The Gordon-Ng model, as described in Ref. [11], takes the form

y = a1 · x1 + a2 · x2 + a3 · x3   (38)

where

x1 = Tcho / Qchiller   (39)

x2 = (Tcdi - Tcho) / (Tcdi · Qchiller)   (40)

x3 = (1/COP + 1) · Qchiller / Tcdi   (41)

y = (1/COP + 1) · (Tcho / Tcdi) - 1   (42)

where COP is the ratio of the evaporator heat load to the electrical power consumption at the compressor, and Tcdi and Tcho are the water temperatures in Kelvin entering the condenser and leaving the evaporator, respectively. The chiller heat load in kilowatts, Qchiller, is the chiller heat load in watts, qchiller, divided by 1000. The data presented in Fig. 4 were correlated using the Qchiller, COP, Tcho, and Tcdi information, using the Gordon-Ng [11] form, to yield the chiller model as

y = 1.2518 x1 - 2013.5757 x2 + 0.0022 x3   (43)

with a least squares fit R value of 0.999. Thus, knowledge of qchiller, Tcho, and Tcdi allows the calculation of the COP, and subsequently the calculation of the condenser water heat load using

qcondenser = (1/COP + 1) · qchiller + IPCTW   (44)

It should be noted that the chiller model presented herein is for a specific chiller, and this model will have to be modified when used for a different chiller.
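A sketch of how the regressed model of Eq. (43) is used in practice: because both y and x3 contain the unknown (1/COP + 1), Eqs. (38)-(42) can be rearranged into a closed-form solution for COP, which then feeds Eq. (44). The condenser inlet temperature and condenser pump power used in the example call are assumed values, not results from the paper.

```python
# Sketch of the fitted Gordon-Ng relation, Eqs. (38)-(44): given the evaporator
# load and the two water temperatures, solve the linear form for COP and then
# get the condenser-side heat rejection. Coefficient defaults mirror Eq. (43).

def chiller_cop(q_chiller_w, t_cho_k, t_cdi_k,
                a1=1.2518, a2=-2013.5757, a3=0.0022):
    """Return COP from the regressed Gordon-Ng model of Eq. (43)."""
    q_kw = q_chiller_w / 1000.0                       # Q_chiller in kW
    x1 = t_cho_k / q_kw                               # Eq. (39)
    x2 = (t_cdi_k - t_cho_k) / (t_cdi_k * q_kw)       # Eq. (40)
    # With s = (1/COP + 1), Eqs. (41)-(43) give
    #   s*Tcho/Tcdi - 1 = a1*x1 + a2*x2 + a3*s*Q/Tcdi
    # which is linear in s:
    s = (1.0 + a1 * x1 + a2 * x2) / (t_cho_k / t_cdi_k - a3 * q_kw / t_cdi_k)
    return 1.0 / (s - 1.0)

def condenser_load(q_chiller_w, cop, ip_ctw_w):
    return (1.0 / cop + 1.0) * q_chiller_w + ip_ctw_w  # Eq. (44)

# Assumed operating point: 6 MW evaporator load, 280 K set point, 302 K condenser inlet
cop = chiller_cop(q_chiller_w=6.0e6, t_cho_k=280.0, t_cdi_k=302.0)
q_cond = condenser_load(6.0e6, cop, ip_ctw_w=40e3)     # condenser pump power assumed
print(f"COP = {cop:.2f}, condenser water heat load = {q_cond/1e6:.2f} MW")
```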

4.3 Cooling Tower Thermal Performance. The heat and mass transfer that occur in a counterflow mechanical draft cooling tower can be characterized using equations that are very similar to single phase heat exchanger expressions. A cooling tower effectiveness can be defined such that [14]

εCT = [1 - exp(-NTUct · (1 - m))] / [1 - m · exp(-NTUct · (1 - m))]   (45)

where NTUct and m are the number of transfer units and the capacitance rate ratio, respectively, and are given by [14]

NTUct = c · [(flowCTW · ρwater) / (flowCTA · ρair)]^n   (46)

m = (flowCTA · ρair · Cpm,air) / (flowCTW · ρwater · Cpwater)   (47)

where ρwater and ρair are the water and air mass densities; Cpwater and Cpm,air are the mass specific heats of water and moist air, respectively; and c and n are cooling tower characteristics. As discussed in Ref. [14], the value of c usually ranges from 1 to 4, with the magnitude representing the amount of heat and mass transfer area available. Thus, a cooling tower with c equal to 1 has very little heat and mass transfer area and is designed for a high air flow rate (high operating energy expense), while a cooling tower with a c of 4 has a large amount of fin area and requires relatively lower air flow rates (low operating energy cost) [14]. The value of n is usually in the 0.4-0.7 range, and for a well designed cooling tower the ratio of the air to water mass flow rate is in the 0.4-0.6 range [14]. The heat removed by the cooling tower, qCT, is given by

qCT = (hao - hai) · flowCTA · ρair   (48)

where hao and hai are the air enthalpies leaving and entering the cooling tower, respectively.


Table 1 Input parameter values for case study example

Parameter description | Symbol | Value
Electromechanical efficiency of server fan | ηfan | 0.1
Electromechanical efficiency of CRAC | ηCRAC | 0.5
Electromechanical efficiency of building chilled water pump | ηBCW | 0.8
Electromechanical efficiency of cooling tower pump | ηCTW | 0.8
Electromechanical efficiency of cooling tower fans | ηCTA | 0.8
Pressure loss coefficient term for server node | Cnode | 375 Pa^0.5/(m^3/s)
Pressure loss coefficient term for rack front door | Cfd | 3 Pa^0.5/(m^3/s)
Pressure loss coefficient term for rack rear door | Crd | 3 Pa^0.5/(m^3/s)
Pressure loss coefficient term for CRAC internals | CCRAC,int | 3.55 Pa^0.5/(m^3/s)
Pressure loss coefficient term for the building chilled water loop | CBCW | 1557 Pa^0.5/(m^3/s)
Pressure loss coefficient term for the cooling tower water loop | CCTW | 1311 Pa^0.5/(m^3/s)
Pressure loss coefficient term for the cooling tower air path | CCTA | 0.12 Pa^0.5/(m^3/s)
Fraction of the server node heat load, which is the module power | φnode | 0.5
Fraction of the total building chilled water heat load in data center | φBCW | 1.0
Fraction of total node flow, which bypasses the CPU modules | βnode | 0.3
Fraction of the total rack flow, which bypasses the server nodes | βrack | 0
Fraction of total rack flow rate that is supplied by the tiles | βsupply | 0.555
Fraction of the total data center flow rate attributed to leakage | βleakage | 0.5
Factor of flow overprovisioning | βoverprovision | 1.0
Factor to account for underfloor and tile pressure drop | αCRAC | 0.2
Fraction of building chilled water that is routed to the data center | γBCW | 1.0
No. of microprocessor modules in parallel in the node | Nmodule,p | 2
No. of microprocessor modules in series in the node | Nmodule,s | 1
No. of server nodes in the rack | Nnode | 42
No. of racks in the data center | Nrack | 350
Chilled water set point temperature at chiller evaporator exit | Tcho | 280 K
Module heat load | qmodule | 100 W
Thermal conductance of the CRAC unit heat exchanger coil | UACRAC | 23,000 W/K
Dew point of the air entering the cooling tower | - | 20°C
Relative humidity of the air entering the cooling tower | - | 0.88
Atmospheric pressure at the inlet of the cooling tower | - | 101.3 kPa
Cooling tower parameter | c | 2.0
Cooling tower parameter | n | 0.6
Volumetric air flow rate through the module | CFMmodule | 18 CFM
Volumetric air flow rate through the CRAC units | CFMCRAC | 15,000 CFM
Volumetric water flow rate through the building chilled water loop | GPMBCW | 5,000 GPM
Volumetric water flow rate through the cooling tower water loop | GPMCTW | 6,000 GPM
Volumetric air flow rate through the cooling tower air path | CFMCTA | 500,000 CFM
Specific heat of air | Cpair | 1,007 J/(kg K)
Mass density of air | ρair | 1.16 kg/m^3
Specific heat of water | Cpwater | 4,179 J/(kg K)
Mass density of water | ρwater | 995 kg/m^3

While the enthalpy of the air entering the cooling tower, hai, is known via the dew point temperature and the relative humidity at an atmospheric pressure condition (e.g., from a weather report), the enthalpy of the air leaving the cooling tower, hao, is given by

hao = εCT · (hswi - hai) + hai   (49)

where hswi is the enthalpy of saturated air at the temperature of the water entering the cooling tower, Tcdo. Using a guess value for the water temperature leaving the cooling tower, Tcdi, the temperature of the water entering the cooling tower can be calculated using

Tcdo = qcondenser / (flowCTW · ρwater · Cpwater) + Tcdi   (50)

where Tcdi is the temperature of the water leaving the cooling tower (and entering the chiller condenser). For the calculations carried out in this paper, the value of Tcdi was guessed and iterated upon until the heat removed by the cooling tower from the condenser water equals the sum of the heat rejected to the water at the chiller condenser and the condenser water pumping power, as given by qcondenser in Eq. (44). Thus, we have the following equality:

qCT = qcondenser   (51)

which needs to be satisfied via iteration of the Tcdi value.
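The Tcdi iteration described above can be sketched as a simple bisection on Eq. (51). The saturated-air enthalpy correlation and the assumed condenser load below are ours (the paper obtains enthalpies from psychrometric data for the site conditions); the tower flow rates and the c and n characteristics are the Table 1 values, applied through our reconstruction of Eqs. (45)-(47).

```python
import math

# Sketch of the cooling-tower balance, Eqs. (45)-(51): effectiveness from NTU and
# the capacitance-rate ratio, then bisect on the tower outlet temperature T_cdi
# until the tower rejects exactly q_condenser. The saturated-air enthalpy
# function is a rough psychrometric approximation, not from the paper.

RHO_AIR, RHO_WATER = 1.16, 995.0          # kg/m^3 (Table 1)
CP_MOIST_AIR, CP_WATER = 1007.0, 4179.0   # J/(kg K); Cp_m,air approximated by Cp_air

def h_sat_air(t_c):
    """Approximate enthalpy of saturated air at t_c (deg C), J/kg dry air."""
    p_ws = 610.94 * math.exp(17.625 * t_c / (t_c + 243.04))   # Pa, Magnus form
    w = 0.622 * p_ws / (101325.0 - p_ws)                      # humidity ratio
    return 1006.0 * t_c + w * (2501e3 + 1860.0 * t_c)

def tower_heat_rejected(t_cdi_c, q_condenser, flow_ctw, flow_cta, h_air_in, c=2.0, n=0.6):
    ntu = c * (flow_ctw * RHO_WATER / (flow_cta * RHO_AIR)) ** n                  # Eq. (46)
    m = (flow_cta * RHO_AIR * CP_MOIST_AIR) / (flow_ctw * RHO_WATER * CP_WATER)   # Eq. (47)
    eff = (1 - math.exp(-ntu * (1 - m))) / (1 - m * math.exp(-ntu * (1 - m)))     # Eq. (45)
    t_cdo = q_condenser / (flow_ctw * RHO_WATER * CP_WATER) + t_cdi_c             # Eq. (50)
    h_out = eff * (h_sat_air(t_cdo) - h_air_in) + h_air_in                        # Eq. (49)
    return (h_out - h_air_in) * flow_cta * RHO_AIR                                # Eq. (48)

def solve_t_cdi(q_condenser, flow_ctw, flow_cta, h_air_in, lo=10.0, hi=45.0):
    """Bisection on Eq. (51): find T_cdi such that q_CT = q_condenser."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if tower_heat_rejected(mid, q_condenser, flow_ctw, flow_cta, h_air_in) > q_condenser:
            hi = mid        # tower over-rejects, so a colder leaving temperature suffices
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Example with an assumed 7 MW condenser load and the Table 1 tower flows
flow_ctw = 6_000 * 3785.0 / (60 * 1_000_000)     # m^3/s
flow_cta = 500_000 / 2118.9                      # m^3/s
t_cdi = solve_t_cdi(q_condenser=7.0e6, flow_ctw=flow_ctw, flow_cta=flow_cta,
                    h_air_in=h_sat_air(20.0))    # entering air near the 20 C dew point
print(f"condenser water leaves the tower at about {t_cdi:.1f} C")
```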

4.4 CRAC Heat Transfer. The total heat removed by the CRAC units, qCRAC,actual, located on the data center floor can be calculated using

qCRAC,actual = NCRAC · εCRAC · Cmin · (TCRAC - Tcho)   (52)

where εCRAC, Cmin, and TCRAC are the heat exchanger effectiveness, the minimum of the air and water capacitance rates, and the temperature of the hot air at the suction side of the CRAC, respectively. The effectiveness of the heat exchanger coil for a cross flow configuration is given by [15]

εCRAC = (1/Cr) · [1 - exp(-Cr · A)]   (53)

where

A = 1 - exp(-NTUCRAC)   (54)

Cr = Cmin / Cmax   (55)

and Cmax is the maximum of the two fluid capacitance rates.

Downloaded 28 Sep 2010 to 130.179.16.201. Redistribution subject to ASME license or copyright; see http://www.asme.org/terms/Terms_Use.cfm

Fig. 5 Cooling energy breakdown for the case study example

NTUCRAC, the number of transfer units for the CRAC heat exchanger coil, is defined as

NTUCRAC = UACRAC / Cmin   (56)

where UACRAC is the thermal conductance of the CRAC heat exchanger and can be calculated using manufacturer specifications. For typical data center air-conditioning units, the value of UACRAC is in the 10,000-25,000 W/K range. The water flow rate through each of the CRAC units is given by

flowCRAC,w = flowBCW · γBCW / NCRAC   (57)

where γBCW is the fraction of the total building chilled water that flows through the data center. The value of the CRAC inlet air temperature, TCRAC, is not known a priori and needs to be guessed and iterated upon to ensure that the total CRAC cooling is equal to the sum of the total IT equipment heat load and the CRAC blower power, qdatacenter, as defined by the following equality:

qCRAC,actual = qdatacenter   (58)

Once the condition defined by Eq. (58) is satisfied, the chilled air temperature exiting the CRAC unit, and thus entering the data center room through the perforated tiles, Ttile, can be calculated using

Ttile = TCRAC - qCRAC,actual / (NCRAC · flowCRAC · ρair · Cpair)   (59)
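A sketch of one evaluation of the CRAC model, Eqs. (52)-(59), for a single unit: given an assumed return air temperature TCRAC and chilled water at the Tcho set point, the epsilon-NTU relations give the heat removed and the resulting tile supply temperature. The per-CRAC water flow and the CRAC count are assumptions consistent with Eq. (57) and Table 1; in the full model, TCRAC is iterated until Eq. (58) balances.

```python
import math

# Sketch of Eqs. (52)-(59) for one CRAC unit: cross-flow coil effectiveness and
# the resulting tile (supply) air temperature. Flow rates and the 30 C return
# air temperature below are illustrative; TCRAC is iterated elsewhere.

RHO_AIR, CP_AIR = 1.16, 1007.0           # Table 1
RHO_WATER, CP_WATER = 995.0, 4179.0

def crac_heat_removed(t_crac_c, t_cho_c, flow_air, flow_water, ua=23_000.0):
    c_air = flow_air * RHO_AIR * CP_AIR
    c_water = flow_water * RHO_WATER * CP_WATER
    c_min, c_max = min(c_air, c_water), max(c_air, c_water)
    cr = c_min / c_max                                   # Eq. (55)
    a = 1.0 - math.exp(-ua / c_min)                      # Eqs. (54) and (56)
    eff = (1.0 / cr) * (1.0 - math.exp(-cr * a))         # Eq. (53)
    return eff * c_min * (t_crac_c - t_cho_c)            # per-unit form of Eq. (52)

flow_air = 15_000 / 2118.9                               # m^3/s per CRAC (Table 1)
flow_water = (5_000 * 3785.0 / (60 * 1_000_000)) / 56    # Eq. (57): 5,000 GPM over an assumed 56 CRACs
q_one_crac = crac_heat_removed(t_crac_c=30.0, t_cho_c=7.0,
                               flow_air=flow_air, flow_water=flow_water)
t_tile = 30.0 - q_one_crac / (flow_air * RHO_AIR * CP_AIR)   # per-unit form of Eq. (59)
print(f"one CRAC removes {q_one_crac/1e3:.0f} kW; tile air supplied at {t_tile:.1f} C")
```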

4.5 Determination of Server Microprocessor Junction Temperature. Once the temperature of the chilled air entering through the tile in front of the rack is known using Eq. (59), the junction temperature of the server microprocessor, Tj, can be expressed as

Tj = Rja · qmodule + Ttile + ΔTcaloric + ΔTinlet   (60)

where Rja is the maximum-junction-to-ambient thermal resistance of the heat flow path from the device side of the chip, through the various interfaces and materials, to the inlet air of the air cooled heat sink. There is considerable literature on models for calculating Rja; for example, a comprehensive method for predicting the maximum junction temperature given a power map and a module design has been presented in Ref. [16]. Such a module level model [16] can be easily coupled to the larger data center model proposed in this paper. Typical values for Rja range from 0.1 to 1°C/W depending on the geometry, the material set, and the air flow parameters. Thus, for a 100 W module with an Rja value of 0.5°C/W, the temperature excess between the inlet air and the maximum chip junction temperature will be 50°C. Although the phrase maximum junction temperature is used in this paper, in reality it would be the chip sensor temperature located in the hot spot region. In Eq. (60), ΔTcaloric and ΔTinlet are two temperature increase terms, which take into account the temperature increase due to sensible heating of the air as it passes through a module heat sink, and the temperature excess between the chilled air coming out of the tiles and the server intake location, which could be as high as 6 ft above and as far as 26 ft away from the tile. Thus, the value of the air temperature increase, ΔTcaloric,n, through the nth module is given by

ΔTcaloric,n = Nmodules,n-1 · qmodule / (flowmodule · ρair · Cpair)   (61)

where ΔTcaloric,n is the temperature increase term for the nth module, and Nmodules,n-1 is the number of modules that the air has already passed over. The value of ΔTinlet, the second temperature increase term, is determined via computational fluid dynamics (CFD) modeling of the data center room [17-19], or estimated using previous field data [5-7,20]. The magnitude of ΔTinlet is usually in the 5-20°C range and depends on the fraction of the rack air flow rate that is supplied by the perforated tiles, the floor average and hot spot heat fluxes, the location of the rack in the aisle, and several other server and room parameters.
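Finally, Eqs. (60) and (61) reduce to a few lines. The sketch below reproduces the 90°C worked example discussed later in the Results section (Rja = 0.65°C/W, 10°C tile air, zero caloric rise for the first module, 15°C tile-to-inlet excess), and also evaluates the caloric rise a hypothetical downstream module would see.

```python
# Sketch of Eqs. (60)-(61): junction temperature from tile air temperature plus
# the caloric and tile-to-server-inlet temperature rises.

def junction_temp(r_ja, q_module, t_tile, dt_caloric, dt_inlet):
    return r_ja * q_module + t_tile + dt_caloric + dt_inlet   # Eq. (60)

def caloric_rise(n_upstream_modules, q_module, flow_module, rho_air=1.16, cp_air=1007.0):
    return n_upstream_modules * q_module / (flow_module * rho_air * cp_air)   # Eq. (61)

flow_module = 18 / 2118.9                        # m^3/s (Table 1)
dt_cal = caloric_rise(1, 100.0, flow_module)     # rise seen by a second module in the air path
tj = junction_temp(r_ja=0.65, q_module=100.0, t_tile=10.0, dt_caloric=0.0, dt_inlet=15.0)
print(f"caloric rise for a downstream module ~{dt_cal:.1f} K; Tj (first module) = {tj:.0f} C")
```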

Fig. 6 Energy efficiency metric for the various cooling components for the case study example


Fig. 7 Variation of total plant cooling and chiller energy efficiency metric with chiller water set point temperature

Case Study Example

Table 1 details all the parameter values used to carry out the energy consumption and heat transfer calculations described in Eqs. (1)-(61).

Results and Discussion

6.1 Cooling Energy Breakdown. Figure 5 displays the cooling energy breakdown for the case study described via Table 1, which yielded a total cooling power consumption of 2.61 MW for cooling 5.88 MW of server electronics heat load. The corresponding tile air temperature for this configuration was 10°C. As may be expected, the chiller contribution to the energy consumption is the largest, at 41.2%. It should be noted that the chiller is operating at near full load and is thus functioning close to its highest efficiency point. If the total load were reduced, the fraction represented by the chiller energy use could be expected to increase significantly. The server fans are found to consume 14% of the total. The fans used for this example are typical 40 mm fans and are at the lower efficiency portion of the spectrum of server air moving devices. A higher performance blower used in a high end server application can be expected to be much more electromechanically efficient. The second largest energy drain on the cooling system is the CRAC units, making up 27.6% of the total cooling power use. It must be noted that for the example considered, only marginal overprovisioning occurred. In a more inefficient data center ventilation design, for example, with significant overprovisioning or using refrigeration chiller type (DX) CRAC units, this percentage could be much higher.

6.2 Energy Efficiency Comparison. The energy efficiency metric values, in units of kilowatts per ton (3.517 kW) of cooling, for the various cooling system components resulting from the analysis using the parameters in Table 1 and the equations detailed in this paper are presented in Fig. 6. The plant energy efficiency metric was found to be 2.26 kW/ton with a perforated tile chilled air temperature of 10°C. Thus, assuming a module thermal resistance Rja of 0.65°C/W and a module heat load of 100 W, and assuming the inlet air superheat ΔTcaloric to be equal to zero (for the first module in the node) and a tile to server inlet temperature excess ΔTinlet of 15°C, the maximum junction temperature of the chip using Eq. (60) will be 90°C.

Fig. 8 Variation of total plant cooling and chiller energy efficiency metric with outdoor air dew point temperature


Once again, the chiller is seen to consume the highest amount of energy for a given cooling function performed, with the CRAC unit being the second most inefficient component.

6.3 Trends With Chilled Water Set Point Temperature. One of the key parameters that influence the energy consumption of the biggest cooling energy component, namely, the chiller plant, is the set point temperature of the chilled water leaving the chiller evaporator. The typical range for this set point is 5-10°C. Figure 7 shows the variation in the plant and chiller energy efficiencies with this set point temperature. A change in this specification from 0°C to 10°C results in an 8% reduction in the total plant energy efficiency metric, from 1.63 kW/ton to 1.51 kW/ton. For the same change in set point temperature, the corresponding reduction in the chiller energy efficiency metric is 17%, from 0.63 kW/ton to 0.52 kW/ton.

6.4 Trends With Outdoor Air Temperature. Data centers such as the one discussed in this paper are located in different parts of the world. The ultimate heat sink medium for the data center is the ambient air, whose dew point temperature and relative humidity naturally vary from one global location to another. The change in the total cooling plant and chiller energy efficiency for a 10°C increase in the outdoor air dew point temperature is depicted in Fig. 8. For an increase in the outdoor air dew point temperature from 20°C to 30°C at the same relative humidity of 0.88, the plant and chiller energy use increases by 5% (from 1.55 kW/ton to 1.63 kW/ton) and 13% (from 0.55 kW/ton to 0.63 kW/ton), respectively.

Conclusions

In this paper we develop and present physics based models that allow the prediction of the energy consumption and heat transfer phenomena in a data center. These models allow the estimation of the chilled air temperature entering the data center room, the server air inlet temperature, and ultimately the maximum microprocessor junction temperature, for different flow and temperature conditions at various parts of the data center cooling infrastructure. For the case study example considered, the cooling energy consumption was 2.61 MW for a server electronic equipment load of 5.88 MW, yielding an energy efficiency value of 2.26 kW/ton. The chiller energy use was the biggest fraction of the total cooling energy used, at about 41%, and was also the most inefficient. The room air conditioning was the second largest energy component (28%) and was also the second most inefficient. A sensitivity analysis of the plant and chiller energy efficiencies with chiller set point temperature and outdoor air conditions is also presented.

References
[1] Koomey, J. G., 2007, "Estimating Total Power Consumption by Servers in the US and the World," Lawrence Berkeley National Laboratory, http://enterprise.amd.com/Downloads/svrpwrusecompletefinal.pdf.
[2] Green Grid Industry Consortium, 2007, "Green Grid Metrics: Describing Data Center Power Efficiency," Technical Committee White Paper, http://www.thegreengrid.org/~/media/WhitePapers/Green_Grid_Metrics_WP.ashx?lang=en.
[3] 2006, "HP Unveils Automated Cooling System for Data Centers," Information Week, Nov. 29.
[4] ASHRAE Publication, 2005, Datacom Equipment Power Trends and Cooling Applications.
[5] Schmidt, R., 2004, "Thermal Profile of a High Density Data Center: Methodology to Thermally Characterize a Data Center," ASHRAE Summer Meeting, Symposium NA-04, Nashville, TN, Jun., Paper No. NA-04-4-2.
[6] Schmidt, R., Iyengar, M., Beaty, D., and Shrivastava, S., 2005, "Thermal Profile of a High Density Data Center: Hot Spot Heat Fluxes of 512 W/ft2," Proceedings of the ASHRAE Annual Meeting, Symposium DE-05-11, Denver, CO, Jun. 25-29.
[7] Schmidt, R., Iyengar, M., and Mayhugh, S., 2006, "Thermal Profile of World's 3rd Fastest Supercomputer: IBM's ASCI Purple Cluster," Proceedings of the ASHRAE Summer Meeting, Quebec City, Canada, Jun., Paper No. QC-03-019.
[8] Tschudi, W., 2006, "Best Practices Identified Through Benchmarking Data Centers," Presentation at the ASHRAE Summer Conference, Quebec City, Canada, Jun.
[9] Pacific Gas and Electric Company Report, 2006, "High Performance Data Centers: A Design Guidelines Sourcebook," developed by Rumsey Engineers and Lawrence Berkeley National Laboratory, http://hightech.lbl.gov/documents/DATA_CENTERS/06_DataCenters-PGE.pdf.
[10] Gordon, J. M., and Ng, K. C., 1995, "Predictive and Diagnostic Aspects of a Universal Thermodynamic Model for Chillers," Int. J. Heat Mass Transfer, 38(5), pp. 807-818.
[11] Jiang, W., and Reddy, T. A., 2003, "Reevaluation of the Gordon-Ng Performance Models for Water-Cooled Chillers," Proceedings of the ASHRAE Annual Meeting, Vol. 109, Part 2, Kansas City, MO, pp. 272-287.
[12] Bourdouxhe, J., Grodeni, M., Lebrun, J. J., Saavedra, C., and Silva, K., 1994, "A Toolkit for Primary HVAC System Energy Calculations, Part 2: Reciprocating Chiller Models," Proceedings of the ASHRAE Annual Meeting, Vol. 100, Orlando, FL, Paper No. OR-94-9-2 (RP-665), pp. 774-786.
[13] Sreedharan, P., 2001, "Evaluation of Chiller Modeling Approaches and Their Usability for Fault Detection," Master's Project, University of California at Berkeley, Department of Mechanical Engineering, http://repositories.cdlib.org/lbnl/LBNL-48856/.
[14] Stout, M. R., Jr., 2002, "Cooling Tower Fan Control for Energy Efficiency," Energy Eng., 99(1), pp. 7-31.
[15] Incropera, F. P., and DeWitt, D. P., 1990, Introduction to Heat Transfer, 2nd ed., Wiley, New York.
[16] Iyengar, M., and Schmidt, R., 2006, "Analytical Modeling for Prediction of Hot Spot Chip Junction Temperature for Electronics Cooling Applications," Proceedings of the Inter Society Conference on Thermal Phenomena (ITherm), San Diego, CA, May-Jun.
[17] Schmidt, R., Iyengar, M., and Chu, R., 2005, "Data Centers: Meeting Data Center Temperature Requirements," ASHRAE J., 47(4), pp. 44-49.
[18] Schmidt, R., Cruz, E., and Iyengar, M., 2005, "Challenges of Data Center Thermal Management," IBM J. Res. Dev., 49(4/5), pp. 709-723.
[19] Schmidt, R., and Iyengar, M., 2006, "Comparison Between Under Floor Supply and Overhead Supply Data Center Ventilation Designs for High Density Clusters," Proceedings of the ASHRAE Winter Meeting, Symposium DA-07, Chicago, IL, Paper No. DA-07-013.
[20] Schmidt, R., and Iyengar, M., 2005, "Effect of Data Center Layouts on Rack Inlet Temperatures," Proceedings of the Pacific Rim/ASME International Electronic Packaging Technical Conference (InterPACK), San Francisco, CA, Jul. 17-22, Paper No. IPACK2005-73385.
