
ENERGY-EFFICIENCY IN THE CLOUD: PROMISES, RESULTS AND FUTURE

Pierrick Barreau Master in e-Commerce (Technical), School of Computing, Dublin City University
pierrick.barreau2@mail.dcu.ie

ABSTRACT
Nowadays, with rising electricity prices and global climate change, energy efficiency in data centres has become a major concern for the public and for companies. Thanks to the advent of virtualisation and, later, of cloud computing, the latest advances in Information and Communication Technologies (ICT) give us tools to significantly reduce our carbon dioxide footprint in the upcoming years. In this paper, we briefly identify the possible sources of electricity waste in data centres and then confront the early promises of virtualisation with the results currently measured. Finally, we review the latest green algorithms and conclude on the future of information infrastructures.

Keywords- energy efficiency; green computing; virtualisation; cloud computing; data centres

1. INTRODUCTION

Since the early days of the computing industry, performance and price have always been the key factors to improve, because of their impact on customers' purchases. However, with the combination of the rising cost of electricity and the increasing energy consumption of data centres, companies and researchers now focus on designing more energy-efficient IT infrastructures. One of the main drivers is that the electricity bills of servers tend to cost more than the initial price of their hardware [1]. A 2008 report estimated that IT infrastructures consumed about 61 billion kWh in 2006, for a total electricity cost of about 4.5 billion dollars, and projected this figure to double by 2011 [2]. Even though the electricity used by data centres worldwide increased by about 56% from 2005 to 2010 instead of doubling [3], their environmental impact remains huge: they account for about 1.3% of worldwide electricity use [3] and 2% of global Greenhouse Gas (GHG) emissions [4]. Fortunately, the SMART 2020 report [4] demonstrated that the ICT industry has the tools to minimize its impact. By monitoring and maximizing the energy efficiency of its infrastructure, the report shows that a five-fold reduction of the sector's own footprint is reachable.

In this paper, we give some clues to understand how to manage IT infrastructures more efficiently in order to reach this reduction. The first step is to study the different components of data centres to identify where energy is wasted. The main objective is then to analyse how best practices and virtualisation could avoid these expensive leaks, and to verify whether the savings allowed by this technology counterbalance its computational overhead. To go further on carbon footprint reduction, we review the use of recent live Virtual Machine (VM) migration algorithms driven by the location of green energy sources or by the workload within a Virtual Private Cloud, along with on/off algorithms, in order to conclude on the potential future of IT infrastructures in the upcoming years.

2. ELECTRICITY WASTE IN DATA CENTRES


The latest survey from Intel Labs [5] states that the CPU is the most power-consuming component of a server, followed by memory and by losses due to the power supply (Figure 1).

Figure 1. Power consumption by server components [5]

The continuous increase in frequency and capacity of memory chips has improved CPU power efficiency, but has also added requirements for a cooling system and a power delivery infrastructure, i.e. Uninterruptible Power Supplies (UPS) and Power Distribution Units (PDU) [6]. This extra equipment runs all the time, as do the storage and network infrastructures, in order to keep resources highly available. Thus, even when a server is idle, it consumes between 70 and 90% of its peak power. Moreover, the conversion of alternating current into direct current to feed server components leads to significant power losses, due to the inefficiency of current technology [6]. The efficiency of power supplies is maximized at loads in the range of 50-75%, while most data centres create loads of 10-15%, wasting the majority of the consumed electricity and leading to average power losses of 60-80% [5].

A lot of electricity is also wasted keeping network hosts fully powered on just to maintain their network presence [7]. Indeed, network cards are always powered on (while the node is on), even when they have nothing to do, which represents an important waste of energy at scale [9].

Likewise, a significant part of data centre power is drawn by storage. The power consumption of a disk is composed of a fixed portion (the idle state, which includes the spindle motor) and a dynamic portion (I/O workload, data transfers, movement of the disk head during a seek operation), the latter representing about one third of the disk's total consumption [10]. I/Os can be reordered so as to shorten the seek distance and thus reduce power consumption.

The consumption of a node, however, does not depend only on its architecture and on the application it is running. It also depends, for example, on its position in the rack and on its temperature [7]. A cooler node consumes less energy, since it starts its fans less often; according to [8], fans can represent 5% of the consumption of a typical server.
Therefore, the placement of physical servers within the data centre must be carefully planned in order to maximize the efficiency of the cooling flow.

Finally, the resource management system can also be a source of waste. Processors are now able to change their working frequency and voltage on demand in order to reduce power consumption: they feature P-states, which define the different frequencies supported by the processor, and C-states, which propose several CPU idle states, allowing the server to adapt CPU performance to the workload. Nevertheless, taking advantage of these features requires two things: the right BIOS version and configuration, and a modern operating system. Today, this technology is commonly available on modern server nodes, although rarely exploited by administrators [7]. This shows that even when a technology is available, its understanding is not yet widespread, leading to misbehaviour and thus to electricity waste in the cloud.
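The idle-power figures above can be illustrated with the linear server power model in the spirit of [8], where consumption is an idle baseline plus a utilization-proportional term. The sketch below uses illustrative wattages (an assumed 300 W peak with a 70% idle ratio), not measured values:

```python
def server_power(utilization, p_idle=210.0, p_peak=300.0):
    """Linear power model: P(u) = P_idle + (P_peak - P_idle) * u.

    With p_idle = 0.7 * p_peak, an idle server (u = 0) already draws
    70% of its peak power, which is why low average loads are so wasteful.
    """
    if not 0.0 <= utilization <= 1.0:
        raise ValueError("utilization must be in [0, 1]")
    return p_idle + (p_peak - p_idle) * utilization

# Ten servers at 10% load versus two servers at 50% load (same total work,
# the other eight servers switched off entirely):
spread = 10 * server_power(0.10)
packed = 2 * server_power(0.50)
print(round(spread), round(packed))  # 2190 510
```

Under this toy model, spreading the same work over ten lightly loaded machines costs more than four times as much power as packing it onto two, which is exactly the argument for consolidation developed in the next section.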

3. VIRTUALISATION PROMISES AND CURRENT RESULTS

As we have previously seen, the major electricity waste in a data centre is the combined energy inefficiency of the servers' physical components, leading to a consumption of at least 70% of peak power even when idle. Therefore, one of the most critical goals for an IT company is to maximize the workload of its machines. In that context, the need for server consolidation emerges. Where before one server could run just one application, resulting in poor use of CPU performance, virtualisation now allows a server to host several operating systems with different configurations running different applications at the same time, maximizing its workload. Thanks to that, companies can reduce their pool of machines and cut their costs. But the most important feature for the environment is the flexibility that virtualisation brings to the management of data centre resources. Applications and their infrastructures are packaged into virtual machines (VMs), making them portable to any server powered with a hypervisor. Thus, live migrations of VMs from machine to machine can be performed according to the workload of servers or, as we will see later, according to their geographical locations and the ecological characteristics of their energy sources.

However, before considering virtualisation as the best solution to reduce our environmental footprint, we have to verify that the computational overhead of both running multiple VMs on the same machine and performing live migrations does not exceed the potential electricity savings promised by this technology. In [7], researchers measured the consumption of servers with and without virtualisation under a CPU stress test (cpuburn) and under high network activity (iperf). They observed the same results for both machines, as well as the same electricity consumption in idle mode. These experiments tend to prove that, if the server is powered with a recent hypervisor, the use of virtualisation does not imply any computational overhead for the system.
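The consolidation argument above can be sketched as a toy bin-packing problem: given the CPU demand of each VM as a fraction of one host, pack them onto as few hosts as possible. This first-fit-decreasing sketch uses hypothetical loads and ignores memory, I/O and QoS constraints, which real consolidation managers must also honour:

```python
def consolidate(vm_loads, host_capacity=1.0):
    """First-fit-decreasing packing of VM CPU demands (fractions of one
    host) onto the fewest hosts; returns the per-host load list."""
    hosts = []
    for load in sorted(vm_loads, reverse=True):
        for i, used in enumerate(hosts):
            if used + load <= host_capacity:
                hosts[i] += load  # fits on an already-open host
                break
        else:
            hosts.append(load)  # no host has room: power on a new one
    return hosts

# Eight lightly loaded one-application servers collapse onto two hosts:
print(len(consolidate([0.2, 0.3, 0.1, 0.25, 0.15, 0.2, 0.3, 0.25])))  # 2
```

Six of the eight physical machines can then be switched off, eliminating their idle draw entirely.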

Nevertheless, after virtualizing a system, some resources may still be unused by the applications, because they do not all have the same needs (e.g. an application with a large memory footprint might have a small CPU footprint). This can lead to underutilization of some resources in the system [11]. Therefore, to manage server performance efficiently, the hypervisor needs decision algorithms to prioritize application accesses to resources. This costs performance, since the hypervisor itself needs computational resources, and it reduces the number of virtual machines that can be placed on the same node. Moreover, Ye et al. [12] study the performance of workloads in consolidated mode. Their survey shows that, due to shared resource contention (core, cache, memory, etc.), server consolidation carries a computational overhead, resulting in a reduction of about 10 to 30% of workload performance compared to individual mode. Therefore, depending on the type of resources needed by the applications on a particular node, virtualisation may have an impact on performance and thus on energy efficiency.

In the same way, migration costs are often wrongfully considered negligible. In [7], six cpuburn processes in six different virtual machines (Xen Server 5.0) are launched at 10 seconds on Cloud node 1, with a one-second interval. Then, all the virtual machines are migrated to Cloud node 2.

Figure 2. Migration of six virtual machines

The results (Figure 2) show that there is an "expensive" period during the migration, when the two nodes consume energy for the same virtual machine. This shows that migration costs should be taken into account when discussing the efficiency of live VM migration algorithms over MAN/WAN networks. Moreover, the energy consumption of the network equipment (routers, switches, etc.) during these operations is considerable, even though it is often neglected in the results.

To sum up, virtualisation is a mature technology that works well and is here to stay. It saves energy and money on companies' utility bills, increases computing throughput, frees up floor space, and facilitates load migration and disaster recovery [13]. However, we should put the latest advances of the field into perspective by considering the constraints related to virtualisation's computational overhead and to live migration costs, which are often not taken into account in the results. Aware of that, we will now study different advanced algorithms empowered by virtualisation.

4. RECENT ADVANCES AND DISCUSSION

In sections 2 and 3, we have seen that efficiently managing the workload and the CPU temperature within a rack can considerably reduce the energy consumption of nodes. Nathuji and Schwan [14] proposed an energy management system architecture that addresses both problems. The system is divided into two layers, local and global. At the local level, it leverages the power management strategies of guest operating systems. Consolidation of VMs is then handled by global policies that apply live migration to reallocate VMs according to the information emitted by the local level. Beloglazov et al. [15] improve this system by proposing efficient heuristics for dynamic adaptation of VM allocation at runtime, applying live migration according to the current utilization of resources and thus minimizing energy consumption. As in [14], the system features local and global managers, but also a dispatcher which splits the workload between global managers in order to ensure both quality of service (QoS) requirements and minimal energy consumption. One of the most interesting features of these studies is that each node's resources are monitored by the local level and migrations are performed according to:

- the thermal state of the CPU, addressing the problem of temperature management within the rack;

- the workload of servers, allowing the management system to move VMs away, as much as possible, from low-use servers and then turn those servers off, saving the high idle consumption of nodes and cooling down the rack;

- the type of workload assigned to the node, which allows a distinction between CPU operations and I/O operations and therefore addresses the problem of resource underutilization within a server.
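The workload criterion above can be sketched as a simple threshold classification: hosts below a lower utilization bound are candidates to be drained and switched off, hosts above an upper bound must shed VMs, and the rest can receive migrated VMs. The thresholds and host names here are chosen purely for illustration; they are not the actual heuristics or values of [15]:

```python
def classify_hosts(loads, low=0.2, high=0.8):
    """Split hosts by CPU utilization: 'drain' hosts can be emptied and
    powered off; 'offload' hosts are overloaded and must migrate VMs out;
    'target' hosts are acceptable destinations for migrated VMs."""
    plan = {"drain": [], "offload": [], "target": []}
    for host, load in loads.items():
        if load < low:
            plan["drain"].append(host)
        elif load > high:
            plan["offload"].append(host)
        else:
            plan["target"].append(host)
    return plan

# Two near-idle nodes get drained and powered off; one hot node sheds load.
print(classify_hosts({"n1": 0.05, "n2": 0.92, "n3": 0.55, "n4": 0.10}))
```

A real policy would additionally weigh the migration energy cost discussed in section 3 before acting on this classification.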

Figure 3. The system architecture [15]

Therefore, these algorithms deal with all the problems raised in section 3 and are good tools to reduce the energy waste due to the inefficiency of servers' physical components and to the computational overhead of virtualisation. In [15], the system achieves energy savings of up to 83% compared to a traditional one.

However, minimizing the electricity consumption of IT infrastructures is not the absolute path to reducing our carbon footprint. Even if data centres consume less electricity, they will still release Greenhouse Gases (GHG) if they are powered by fossil energy, and thus have a considerable impact on the environment. Aware of that, researchers have developed another type of resource management system, based on the concept that a carbon-neutral network must consist of data centres built in proximity to clean power sources, and that user applications must be moved to be executed in these data centres. In this context, the GreenStar Network (GSN) [16], which is powered entirely by green energy, has been created. It is composed of two parts: a set of clouds, each representing a data centre with its power and network accessories, and a hub node that links them and is responsible for VM live migrations. Lemay et al. [17] have developed a simple energy distribution algorithm for this network: when a green energy source is available at a spoke site, perform data processing there; otherwise, run applications at the hub. In this application, the hub node sets up the connectivity for the physical and data link layers using dynamic services, and then pushes virtual machines (VMs) or software virtual routers from the hub to sun and wind nodes (spoke nodes) when green power is available. Through Web interfaces, users may determine GHG emission boundaries based on information about VM power and energy sources, and then take actions to reduce GHG emissions. Thus, according to their calculation model, with data centres including 1560 CPUs and assuming that the core network has 9 nodes, the researchers estimate that the GSN may save up to 986 carbon credits compared to a network powered by fossil energy.
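The distribution rule of [17] reduces to a one-line decision per application: follow the green power, fall back to the hub. A minimal sketch, with hypothetical site names and availability flags standing in for the GSN's real power telemetry:

```python
def place_vm(spokes, hub="hub"):
    """GreenStar-style rule: run on the first spoke site where green
    (sun/wind) power is currently available, otherwise fall back to the
    hub node. `spokes` maps site name -> green power currently available?"""
    for site, green_available in spokes.items():
        if green_available:
            return site
    return hub

# Solar site dark, wind site producing: the VM is pushed to the wind node.
print(place_vm({"solar-site": False, "wind-site": True}))   # wind-site
# No green power anywhere: the application runs at the hub.
print(place_vm({"solar-site": False, "wind-site": False}))  # hub
```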

Still with the idea of favouring the networks with the smallest carbon footprint, Moghaddam et al. [18] used the concept of a Virtual Private Cloud (VPC) to build a system that maximizes a cost function, here set to carbon footprint reduction. A Virtual Private Cloud is a cloud entity consisting of a network of data centres connected to one another over a WAN. In the presented Low Carbon VPC (LCVPC), the carbon footprint of each data centre is computed from its energy consumption, weighted by a function gd(t) which represents the cleanness of all the energy sources combined to power the data centre. Unlike the previous algorithm, which assumes that the energy cost of VM live migration is negligible [17], the LCVPC algorithm counts this cost as a contribution to the total cost function of the cloud's carbon footprint. Thus, before migrating VMs, the management system computes the carbon footprint reduction gained by pushing them to a greener data centre, but moderates it by the cost of the live migration it will have to perform. The data centre which maximizes the carbon footprint reduction is then chosen. In addition, the presented management system uses the Probability Density Function (PDF)-based selection of memory pages developed in [19] to minimize the effect of a VM with busy memory on migration downtime, reducing the overall effect of server consolidation on data centre power consumption.
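The LCVPC decision described above can be sketched as follows: each candidate data centre's footprint for hosting a VM is its energy use weighted by its greenness factor gd(t), and any move away from the current site additionally pays the migration's own carbon cost. All numbers, names and the linear cost shapes below are illustrative assumptions, not the actual model of [18]:

```python
def best_destination(current_dc, candidates, vm_energy_kwh, migration_cost_kg):
    """Pick the data centre minimizing the VM's carbon footprint.
    `candidates` maps name -> carbon intensity of the site's energy mix
    (kg CO2 per kWh, playing the role of gd(t)); a one-off migration cost
    is charged against every site other than the current one."""
    def footprint(dc):
        cost = vm_energy_kwh * candidates[dc]
        if dc != current_dc:
            cost += migration_cost_kg  # moving the VM is not free
        return cost
    return min(candidates, key=footprint)

# A much greener site wins despite the migration cost...
print(best_destination("coal-dc", {"coal-dc": 0.9, "hydro-dc": 0.02}, 100.0, 5.0))
# ...but a marginal improvement does not justify the move.
print(best_destination("coal-dc", {"coal-dc": 0.9, "hydro-dc": 0.89}, 100.0, 5.0))
```

This captures the key design choice of the LCVPC over [17]: the migration itself enters the cost function, so the system only moves VMs when the footprint reduction outweighs the cost of moving.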

Figure 4. Network carbon footprint for LAN-based clouds and VPCs

Figure 4 shows an important reduction of the overall network carbon footprint. The first carbon footprint corresponds to an even distribution of VMs over the servers, without any optimization; it is at the highest level. The second corresponds to server consolidation within each data centre, with the help of virtualisation technology. The third corresponds to data centre consolidation over WAN connections: here, data centres are consolidated according to their greenness factor and their resource availability.

The carbon footprint in this last case is smaller than that of server consolidation alone. The network was then tested under different loads, and the results show a significant carbon footprint reduction through VPC data centre consolidation compared to a traditional approach.

5. CONCLUSION

Virtualisation is a promising technology that tends to reduce most of the energy waste within data centres and to maximize the utilization of their servers. As we have previously seen, there are still improvements to be found in order to minimize its computational overhead, but modern energy management algorithms, by dynamically migrating virtual machines over the WAN, counterbalance its energy cost. The combination of the different advances seen in section 4 lets us hope for a future zero-carbon network, but to achieve this goal, companies should also consider technologies such as memory compression or request discrimination as possible supplements to virtualisation [20].

6. REFERENCES

[1] L. Barroso, "The price of performance," ACM Queue, vol. 3, no. 7, p. 53, 2005.
[2] R. Brown et al., "Report to Congress on server and data center energy efficiency: Public Law 109-431," Lawrence Berkeley National Laboratory, 2008.
[3] J. Koomey, "Growth in data center electricity use 2005 to 2010," Oakland, CA: Analytics Press, 2011.
[4] The Climate Group, "SMART 2020: Enabling the low carbon economy in the information age," report on behalf of the Global e-Sustainability Initiative, 2008.
[5] L. Minas and B. Ellison, "Energy Efficiency for Information Technology: How to Reduce Power Consumption in Servers and Data Centers," Intel Press, Aug. 2009.
[6] A. Beloglazov, R. Buyya, Y. C. Lee and A. Zomaya, "A Taxonomy and Survey of Energy-Efficient Data Centers and Cloud Computing Systems," Technical Report CLOUDS-TR-2010, 2010.
[7] A.-C. Orgerie, L. Lefevre and J.-P. Gelas, "Demystifying energy consumption in Grids and Clouds," International Green Computing Conference, 2010.
[8] X. Fan, W.-D. Weber and L. A. Barroso, "Power provisioning for a warehouse-sized computer," 34th Annual International Symposium on Computer Architecture, pp. 13-23, New York, NY, USA, 2007, ACM.
[9] M. Gupta and S. Singh, "Greening of the Internet," SIGCOMM '03: Proceedings of the 2003 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications, pp. 19-26, New York, NY, USA, 2003, ACM.
[10] M. Allalouf, Y. Arbitman, M. Factor, R. I. Kat, K. Meth and D. Naor, "Storage modelling for power estimation," SYSTOR '09: Proceedings of SYSTOR 2009: The Israeli Experimental Systems Conference, pp. 1-10, New York, NY, USA, 2009, ACM.
[11] J. Torres, D. Carrera, K. Hogan, R. Gavalda, V. Beltran and N. Poggi, "Reducing wasted resources to help achieve green data centers," IEEE International Symposium on Parallel and Distributed Processing (IPDPS 2008), pp. 1-8, April 2008.
[12] K. Ye, D. Huang, X. Jiang, H. Chen and S. Wu, "Virtual Machine Based Energy-Efficient Data Center Architecture for Cloud Computing: A Performance Perspective," Green Computing and Communications (GreenCom), 2010.
[13] R. Talaber, T. Brey and L. Lamers, "Using Virtualization to Improve Data Center Efficiency," technical report, The Green Grid, 2009.
[14] R. Nathuji and K. Schwan, "VirtualPower: Coordinated power management in virtualized enterprise systems," ACM SIGOPS Operating Systems Review, vol. 41, no. 6, pp. 265-278, 2007.
[15] A. Beloglazov and R. Buyya, "Energy Efficient Resource Management in Virtualized Cloud Data Centers," Cluster, Cloud and Grid Computing (CCGrid), 2010.
[16] The GreenStar Network Project, http://greenstarnetwork.com.
[17] K. Nguyen, M. Lemay, B. St. Arnaud and M. Cheriet, "Convergence of Cloud Computing and Network Virtualization Towards a Zero-Carbon Network," IEEE Internet Computing, 2011.
[18] F. Moghaddam, M. Cheriet and K. K. Nguyen, "Low Carbon Virtual Private Clouds," 2011 IEEE International Conference on Cloud Computing (CLOUD), pp. 259-266, 2011.
[19] F. F. Moghaddam and M. Cheriet, "Decreasing live virtual machine migration down-time using a memory page selection based on memory change PDF," International Conference on Sensing and Control (ICNSC), pp. 355-359, 2010.
[20] J. Torres, D. Carrera, K. Hogan, R. Gavalda, V. Beltran and N. Poggi, "Reducing wasted resources to help achieve green data centers," IEEE International Symposium on Parallel and Distributed Processing (IPDPS 2008), pp. 1-8, 2008.
