
TABLE OF CONTENTS

Scope & Objective 3
Introduction of Virtualization 4
Analysis of Virtualization 8
How Virtualization Works 16
Benefits of Virtualization 19
Challenges/Opportunities 22
Conclusion 23

SCOPE & OBJECTIVES OF VIRTUALIZATION


When the topic of virtualization comes up, the focus is typically on the specific IT asset that is to be virtualized - the sprawling server farm, the underutilized storage resources, or the PC fleet that is extremely expensive to maintain and virtually impossible to secure. Taking this object-oriented viewpoint is fine but, in our view, limiting when compared with thinking about and utilizing the powerful capabilities of virtualization in an integrated, strategic initiative across our entire IT enterprise. Virtualization impacts the objectives of greatest concern to Angel IT departments and offers the potential to drive significant improvements in:

Asset utilization
IT manageability
Data security
User flexibility
Business continuity

OBJECTIVES OF VIRTUALIZATION

1. Server Consolidation and Infrastructure Optimization: Virtualization makes it possible to achieve significantly higher resource utilization by pooling common infrastructure resources and breaking the legacy "one application to one server" model.

2. Physical Infrastructure Cost Reduction: With virtualization, we can reduce the number of servers and related IT hardware in the data center. This leads to reductions in real estate, power and cooling requirements, resulting in significantly lower IT costs.

3. Improved Operational Flexibility & Responsiveness: Virtualization offers a new way of managing IT infrastructure and can help IT administrators spend less time on repetitive tasks such as provisioning, configuration, monitoring and maintenance.

4. Increased Application Availability & Improved Business Continuity: Eliminate planned downtime and recover quickly from unplanned outages with the ability to securely back up and migrate entire virtual environments with no interruption in service.

5. Improved Desktop Manageability & Security: Deploy, manage and monitor secure desktop environments that end users can access locally or remotely, with or without a network connection, on almost any standard desktop, laptop or tablet PC.

INTRODUCTION TO VIRTUALIZATION
What is virtualization and why use it

Virtualization is a method of running multiple independent virtual operating systems on a single physical computer. It is a way of maximizing physical resources to get the most from the investment in hardware. Virtualization technology is a way of achieving higher server density. However, it does not actually increase total computing power; it decreases it slightly because of overhead. What it does is allow us to exploit otherwise idle hardware capacity by increasing the number of logical operating systems a single host runs. This slashes the majority of hardware acquisition and maintenance costs, which can result in significant savings for any company or organization.

When to use virtualization

Virtualization is the perfect solution for applications that are meant for small- to medium-scale usage. Virtualization should not be used for high-performance applications where one or more servers need to be clustered together to meet the performance requirements of a single application, because the added overhead and complexity would only reduce performance. With virtualization we are essentially taking a 12 GHz server (four cores times three GHz) and chopping it up into 16 750 MHz servers. But if eight of those servers are in off-peak or idle mode, the remaining eight servers will have nearly 1.5 GHz available to them. While some in the virtualization industry like to tout high CPU utilization numbers as an indication of optimum hardware usage, this advice should not be taken to the extreme where application responsiveness suffers. A simple rule of thumb is to never let a server exceed 50% CPU utilization during peak loads and, more importantly, never let application response times exceed a reasonable SLA. Most modern servers used for in-house duties run at only 1% to 5% CPU utilization. Running eight operating systems on a single physical server would elevate peak CPU utilization to around 50%, but it would average much lower, since the peaks and valleys of the virtual operating systems tend to cancel each other out. For servers with extremely high storage or hardware I/O requirements, it is wise to run them on bare metal even if their CPU requirements can be met inside a virtual environment.
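As a rough sanity check of the arithmetic above, the short Python sketch below recomputes the per-VM CPU share and the projected peak utilization. The host specification, VM counts and per-VM peak figure are illustrative assumptions, not measurements.

```python
# A minimal sanity check of the sizing arithmetic above. The host specs, VM
# counts, and per-VM utilization figures are illustrative assumptions.

def per_vm_share_mhz(host_ghz: float, active_vms: int) -> float:
    """CPU effectively available to each active VM; idle VMs release their share."""
    return host_ghz * 1000 / active_vms

print(per_vm_share_mhz(12.0, active_vms=16))  # 750.0 MHz when all 16 VMs are busy
print(per_vm_share_mhz(12.0, active_vms=8))   # 1500.0 MHz when half of them are idle

# Eight guests that each peak at ~6% of the host together stay near the 50% ceiling.
per_vm_peak_pct = 6.25
projected_peak = per_vm_peak_pct * 8
print(f"projected host peak: {projected_peak}%  (rule of thumb: keep below 50%)")
```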

HOW TO AVOID THE "ALL OUR EGGS IN ONE BASKET" SYNDROME

One of the big concerns with virtualization is the "all our eggs in one basket" syndrome. Is it really wise to put all of our critical servers into a single physical server? The answer is absolutely not! The easiest way to avoid this liability is to make sure that no service resides on only a single server. Let's take for example the following server types:

DIET/Odin Connect Anywhere
Web services (HTTP)
FTP
DHCP
DNS

We can put each of these types of servers on at least two physical servers and gain complete redundancy. These types of services are relatively easy to cluster because they're easy to switch over when a single server fails. When a single physical server fails or needs servicing, the other virtual server on the other physical server would automatically pick up the slack. By straddling multiple physical servers, these critical services never need to be down because of a single hardware failure.
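The placement rule described above can be checked mechanically. The sketch below is a minimal illustration, assuming an invented mapping of services to physical hosts; a real tool would read this mapping from the hypervisor inventory.

```python
# A minimal sketch of the "no single basket" check described above. Service and
# host names are invented; a real tool would query the hypervisor inventory.

placements = {
    "DNS":  ["host-a", "host-b"],
    "DHCP": ["host-a", "host-c"],
    "HTTP": ["host-b", "host-c"],
    "FTP":  ["host-a"],            # only one instance: violates the rule
}

def single_points_of_failure(placements):
    """Return services whose every instance lives on the same physical host."""
    return [svc for svc, hosts in placements.items() if len(set(hosts)) < 2]

print(single_points_of_failure(placements))  # ['FTP']
```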
For more complex services such as an Exchange Server, Microsoft SQL, MySQL, or Oracle, clustering technologies could be used to synchronize two logical servers hosted across two physical servers; this method would generally cause some downtime during the transition, which could take up to five minutes. This isn't due to virtualization but rather the complexity of clustering, which tends to require time for transitioning. An alternate method for handling these complex services is to migrate the virtual server from the primary physical server to the secondary physical server. For this to work, something has to constantly synchronize memory from one physical server to the other so that a failover can be done in milliseconds while all services remain functional.

Physical to virtual server migration

Any respectable virtualization solution will offer some kind of P2V (Physical to Virtual) migration tool. The P2V tool takes an existing physical server and makes a virtual hard drive image of that server, with the necessary modifications to the driver stack so that the server will boot up and run as a virtual server. The benefit of this is that we don't need to rebuild our servers and manually reconfigure them as virtual servers. So if we have a data center full of aging sub-GHz servers, these are the perfect candidates for P2V migration. We could literally take a room with 128 sub-GHz legacy servers and put them into eight 1U dual-socket quad-core servers with dual Gigabit Ethernet and two independent iSCSI storage arrays, all connected via a Gigabit Ethernet switch. The annual hardware maintenance costs alone on the old server hardware would be enough to pay for all of the new hardware! Just imagine how clean our server room would look after such a migration. It would all fit inside of one rack and give us lots of room to grow. As an added bonus of virtualization, we get a disaster recovery plan, because the virtualized images can be used to instantly recover all our servers. With virtualization, we can recover that Active Directory and Exchange Server in less than an hour by rebuilding the virtual server from the P2V image.

Patch management for virtualized servers

Patch management of virtualized servers isn't all that different from regular servers, because each virtual operating system is its own independent virtual hard drive. We still need a patch management system that patches all of our servers, but there may be interesting developments in the future where we may be able to patch multiple operating systems at the same time if they share some common operating system or application binaries. Ideally, it is possible to assign a patch level to an individual server or a group of similar servers.

Licensing and support considerations

A big concern with virtualization is software licensing. Software licensing often dwarfs hardware costs, so it would be foolish to run a Rs. 1 lakh software license on a shared piece of hardware. In this situation, it's best to run that license on the fastest physical server possible, without any virtualization layer adding overhead. For something like Windows Server 2003 Standard Edition, we would need to pay for each virtual session running on a physical box. The exception to this rule is Windows Server 2003 Enterprise Edition, which allows us to run four virtual copies of Windows Server 2003 on a single machine with only one license. This Microsoft licensing policy applies to any type of virtualization technology that is hosting the Windows Server 2003 guest operating systems. If we're running open source software, we don't have to worry about licensing because that is always free; what we need to be concerned about is the support contracts. If we're considering virtualizing open source operating systems or open source software, we should make sure we calculate the support costs. If the support costs are substantial for each virtual instance of the software we're going to run, it's best to squeeze the most out of our software costs by putting it on its own dedicated server. It's important to remember that hardware costs are often dwarfed by software licensing and/or support costs. There are also licensing and support considerations for the virtualization technology itself. The good news is that all the major virtualization players have some kind of free solution to get us started.
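To make the Standard-versus-Enterprise trade-off concrete, the sketch below compares per-guest licensing against the "four guests per host license" policy quoted above. The license prices are placeholder assumptions, not actual list prices.

```python
# An illustrative comparison of per-guest vs. per-host licensing, following the
# Windows Server 2003 policy quoted above. The prices are placeholders, not real ones.

STANDARD_PER_GUEST = 40_000      # assumed price of one Standard Edition license (Rs)
ENTERPRISE_PER_HOST = 120_000    # assumed price of one Enterprise Edition license (Rs)
GUESTS_PER_ENTERPRISE_LICENSE = 4

def license_costs(guests_on_host: int) -> tuple[int, int]:
    """Total cost of licensing every guest individually vs. per Enterprise host license."""
    standard = STANDARD_PER_GUEST * guests_on_host
    enterprise_licenses = -(-guests_on_host // GUESTS_PER_ENTERPRISE_LICENSE)  # ceiling division
    return standard, ENTERPRISE_PER_HOST * enterprise_licenses

for guests in (2, 4, 8):
    std, ent = license_costs(guests)
    print(f"{guests} guests: Standard Rs {std:,} vs Enterprise Rs {ent:,}")
```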

ANALYSIS OF VIRTUALIZATION

Analysis of the potential to lower IT costs by moving to a virtualized infrastructure finds that the savings can be significant: adopting a simple virtualized infrastructure can result in a reduction of up to approximately 35% of total annual server costs per user compared with an unvirtualized, static x86 server configuration. An optimally managed or "advanced virtualization" infrastructure, described as an infrastructure that includes penetration of virtualized servers of more than 25%, storage virtualization, and the use of systems management tools, can deliver a total reduction of up to 52% per user per year.

Integrated solutions from vendors such as HP, which offers HP Insight Dynamics - VSE in conjunction with the company's HP c-Class BladeSystem products (using HP Virtual Connect technology, a means of virtualizing Ethernet and Fibre Channel network connectivity for blades), provide essentially all of the same benefits of a basic virtualization scenario through a hardware-based solution.

Table 1 compares the annual server cost per user for three types of deployment:

1. Unvirtualized: physical x86 server, physical OS usage, no virtualization, and systems configured at less than 10% capacity.

2. Basic virtualization: x86 server consolidation via virtualization without advanced functionality such as live migration and with limited automation and management applied selectively; systems achieve 20% to 40% capacity utilization; common deployment for test and development scenarios, but limited production use.

3. Advanced virtualization: widely virtualized infrastructure (>25%), including both server virtualization and at least some storage virtualization; use of management tools and automation tools such as workload redistribution and automatic workload migration, used both on live VMs and on cold OS images for meeting service-level agreements and availability goals; systems achieve 40% to 60% or more capacity utilization.

TABLE 1: Business Value of Virtualized Deployment - Total Costs

Deployment                 Total Cost per User per Year (Rs)    Savings Versus Unvirtualized (%)
Unvirtualized              8,250                                NA
Basic virtualization       5,350                                Up to 35
Advanced virtualization    4,000                                Up to 52
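The savings column can be verified directly from the per-user costs, as the short check below shows (values taken from Table 1).

```python
# Quick check that the savings percentages in Table 1 follow from the
# per-user annual costs (values taken from the table above).

costs = {"Unvirtualized": 8250, "Basic virtualization": 5350, "Advanced virtualization": 4000}
baseline = costs["Unvirtualized"]

for deployment, cost in costs.items():
    savings_pct = (baseline - cost) / baseline * 100
    print(f"{deployment}: Rs {cost} per user/year, savings {savings_pct:.0f}%")
# Basic virtualization    -> ~35% savings
# Advanced virtualization -> ~52% savings
```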

Source: Business Value of Virtualization Research, 2008

It is believed that organizations using x86 solutions today should begin to adopt next-generation solutions as quickly as product adoption timelines permit. By doing so, organizations will gain better utilization of server resources and reductions in acquisition, deployment, and power and cooling costs. Further, the reduction of staffing costs and increased business agility translate into long-term benefits that will deliver ongoing returns, for years to come, on the investment required to put this in place initially.

THE BUSINESS VALUE OF VIRTUALIZATION

The following figures and tables compare business value accruing from the move from an unvirtualized environment to a virtualized environment or from a basic virtualization scenario to an advanced virtualization scenario.

The figure includes the first business value elements that go beyond hard total cost of ownership data: the reduction of downtime hours on an annual basis and the significant reduction in time to launch applications. While there are multiple contributors to this downward shift, a few items stand out and deserve discussion:

More standardized configurations of servers. Because a virtualized environment requires a level of standardization of the underlying operating system, it becomes easier to drive uptime through consistent configuration and patching of server operating systems. No longer does each operating system require one or more unique drivers that are specific to a particular hardware configuration; instead, all operating systems map to the same portfolio of drivers provided by the virtualization software. This also helps ease the deployment of new applications, because the underlying operating system is far more likely to be in a known and well-understood configuration.

Ability to migrate workloads easily. In the case of downtime reduction, operating systems can be moved from one server to another to facilitate repairs or maintenance, avoiding the lengthy downtime normally associated with that service. In the past, operating systems were tightly married to the underlying hardware, making it impossible to move the workload to an alternate server on a short-term basis. Even without live migration, it is possible to suspend an operating system and its workload, relocate it to another physical server, and bring it back up in only minutes.
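The suspend-relocate-resume flow described above can be outlined in a few steps. The sketch below is purely illustrative; the function names and hosts are hypothetical stand-ins, not a real hypervisor API.

```python
# Illustrative outline of the suspend / relocate / resume flow described above.
# The host names and functions are hypothetical stand-ins, not a real hypervisor API.

def suspend(vm: str, host: str) -> dict:
    """Freeze the guest and capture its memory/device state to a file-like blob."""
    print(f"suspending {vm} on {host}")
    return {"vm": vm, "state": f"{vm}.vmstate"}

def transfer(state: dict, target_host: str) -> dict:
    """Move the saved state to the target host (e.g. over shared storage)."""
    print(f"copying {state['state']} to {target_host}")
    return {**state, "host": target_host}

def resume(state: dict) -> None:
    """Bring the guest back up on the new host; downtime is the sum of these steps."""
    print(f"resuming {state['vm']} on {state['host']}")

resume(transfer(suspend("exchange-01", host="host-a"), target_host="host-b"))
```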

Ability to snapshot and replicate operating systems for test and configuration purposes. When IT deploys new applications, it now becomes possible, with little more than some mouse clicks, to replicate environments that can be used for testing and experimentation. "Trialing" a new application in a server operating system becomes easy and virtually risk free.

Figure 3 presents detailed elements depicting the measurable cost-saving metrics that come with a move to a virtualized infrastructure.


Software costs remain consistent, or may even increase slightly, as one moves to a fully managed infrastructure, while hardware costs fall dramatically. One of the most significant expense items is staffing costs. However, a substantial drop in staffing costs is realistic in a move to a basic virtualization scenario, with further gains possible with a move to an advanced virtualization scenario.

HOW VIRTUALIZATION WORKS


VMware Infrastructure is the most widely deployed software for optimizing and managing IT environments through virtualization, from the desktop to the data center. VMware Infrastructure abstracts the operating system from the hardware it is running on, providing standardized virtual hardware for operating systems and their applications and enabling virtual machines to run simultaneously and independently on one or more shared processors. With virtualization, customers can easily consolidate many disparate server workloads onto more reliable and higher-performance hardware.

VMware Infrastructure transforms a mix of industry standard x86 servers and their existing processors, memory, disk and networking into a pool of logical computing resources. Operating systems and their applications are isolated into secure and portable virtual machines. System resources are then dynamically allocated to each virtual machine based on need and prioritization, providing mainframe-class capacity utilization and control of server resources. Virtual machines can run on any physical server in a resource pool and be shifted between those servers seamlessly with zero downtime. As a result, virtual machines can be dynamically and automatically allocated to the most appropriate host in the resource pool to guarantee service levels to software applications.
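The idea of allocating a shared pool by demand and priority can be illustrated with a toy scheduler. The share values and virtual machine names below are invented, and this is not how VMware's actual resource scheduler is implemented.

```python
# Toy illustration of allocating a resource pool to VMs by demand and priority
# ("shares"). The numbers are invented; this is not VMware's actual scheduler.

def allocate(pool_mhz: int, vms: list[dict]) -> dict:
    """Give each VM what it demands if possible; otherwise split the pool by shares."""
    total_demand = sum(vm["demand_mhz"] for vm in vms)
    if total_demand <= pool_mhz:
        return {vm["name"]: vm["demand_mhz"] for vm in vms}
    total_shares = sum(vm["shares"] for vm in vms)
    return {vm["name"]: pool_mhz * vm["shares"] // total_shares for vm in vms}

vms = [
    {"name": "web-01",  "demand_mhz": 4000, "shares": 2000},
    {"name": "db-01",   "demand_mhz": 6000, "shares": 4000},
    {"name": "test-01", "demand_mhz": 3000, "shares": 1000},
]
print(allocate(pool_mhz=14000, vms=vms))  # demand fits: everyone gets what it asked for
print(allocate(pool_mhz=7000, vms=vms))   # contention: pool split in proportion to shares
```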


By aggregating hardware resources into resource pools, IT environments can be optimized to dynamically support changing business needs while ensuring flexibility and efficient utilization of hardware resources. VMware Infrastructure provides a set of capabilities that make the entire IT environment more serviceable, available and efficient than physical hardware alone. Traditionally, companies have had to assemble a patchwork of operating system or application specific solutions for high availability, resource optimization and security. Because the virtualization layer is the first software installed on the bare metal, VMware Infrastructure can provide these capabilities consistently for all virtual machines. Standardizing the entire IT environment on consistent virtualization-based distributed services is like creating an assembly line for IT that builds in reliability, predictability and efficiency.

A virtual machine is like a physical server, only instead of being a box of electronics, it is a set of software files. Each virtual machine represents a complete system with processors, memory, networking, storage and BIOS, so that operating systems and software applications run in virtual machines just like in a physical server, without any modification. The figure to the right shows the standard virtual components presented to every virtual machine, regardless of variations in the hardware present in the physical server. Based on their inherent partitioning, isolation and encapsulation, virtual machines offer many advantages over physical servers (a minimal sketch of this file-based encapsulation follows the list below). Virtual machines:

Run on industry standard x86 physical servers.

Have full access to all physical server resources such as CPU, memory, disk, networking, and peripherals, allowing them to run any software application.

Are completely isolated, providing secure processing, networking and data storage.

Can run concurrently with other virtual machines for optimal hardware utilization.

Are encapsulated in software files so that they can be provisioned, backed up or restored with the ease of a file copy.

Are portable, so full systems including virtual hardware, operating systems and fully configured applications can be easily moved from one physical server to another, even while running.

Can incorporate distributed resource management and high availability capabilities that provide better service levels to software applications than static physical infrastructure.

Can be built and distributed as plug-and-play virtual appliances that contain the entire stack of virtual hardware, operating system, and fully configured software applications for rapid deployment.

[Figure: Without Virtualization vs. With Virtualization]
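As noted before the list, the sketch below illustrates the idea of a virtual machine as a set of files that can be cloned with little more than a file copy. The field names and file extensions are illustrative, not VMware's actual on-disk format.

```python
# Minimal sketch of a VM as "a set of software files", as described above.
# Field names and file extensions are illustrative, not VMware's actual format.

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class VirtualMachine:
    name: str
    config_file: str   # virtual hardware definition (CPUs, memory, NICs, BIOS)
    disk_file: str     # virtual hard drive image
    state_file: str    # saved memory/device state, if suspended

def clone(vm: VirtualMachine, new_name: str) -> VirtualMachine:
    """Provisioning a copy is little more than copying the files and renaming them."""
    return replace(vm,
                   name=new_name,
                   config_file=f"{new_name}.cfg",
                   disk_file=f"{new_name}.disk",
                   state_file=f"{new_name}.state")

prod = VirtualMachine("web-01", "web-01.cfg", "web-01.disk", "web-01.state")
test = clone(prod, "web-01-test")   # a disposable copy for testing or experimentation
print(test)
```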


THE BENEFITS OF VIRTUALIZATION

Virtualization provides the following positive impacts:

User density increases dramatically. The average user density grows by a factor of three on a per-server basis, while the number of users each server administrator manages goes up by a factor of four to five.

Availability improves. System availability goes up even for basic virtualization. The real benefit comes from an advanced virtualization scenario in which downtime drops by 50%.

Scalability is a click away. Once virtualized, an application that needs more scalability can be moved to a server that can fulfill that requirement with little more than a few clicks of the mouse.

Cost reductions occur. Cost reductions occur across the board, and with future deployments, customers can move to server operating systems that offer unlimited virtualization rights, extending their savings dramatically in many cases.

The use of software tools to manage and optimize system resources has been a common practice in the IT industry for many years. As IT infrastructure has grown in complexity with the proliferation of distributed systems, networks, Web-based applications and, most recently, virtual servers, software for infrastructure management has become an essential requirement for smooth IT operations in the datacenter. Management software is also essential for delivering high-quality IT services to the business organization and to end users. System management software is commonly used to support operational functions such as asset discovery and inventory, server provisioning, performance and availability monitoring and management, change and configuration management, and problem management. Across these functions, management software provides a number of key benefits that can result in cost savings and operational efficiencies.

These benefits include the following:

Automation of routine tasks. Management software can be used to automate routine tasks, such as monitoring common infrastructure alerts and automating responses for known conditions, leading to an increase in IT staff efficiency (a minimal sketch of this kind of automated alert handling follows this list).

Leveraging staff resources. Use of management software helps increase the proportion of staff time used for productive work, increasing business value.

Higher availability. System and network uptime plus application and database availability are key requirements for conducting business today. Downtime has direct costs to the business that come from loss of business opportunity and decreased end-user productivity.

Faster response to incidents. This can occur in a number of ways, ranging from automated responses to simple alerts and alarms, to automatic creation of trouble and repair tickets for service desk functions, to problem determination and resolution aids such as event correlation, impact analysis, and root cause analysis.

Cost savings and improved return on investment (ROI). The overall effect of using automated software tools for IT infrastructure management typically results in cost savings and positive ROI. Areas for cost savings include reduced hardware and software costs, IT staff efficiency, end-user productivity, and enhanced operations for business applications, including reduced downtime and faster performance.
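The automated alert handling mentioned in the first item above can be sketched in a few lines. The alert names and scripted actions are invented; a real deployment would integrate with the monitoring and ticketing systems in use.

```python
# Minimal sketch of automating responses to known alert conditions, as described
# above. Alert names and actions are invented placeholders.

KNOWN_RESPONSES = {
    "disk_nearly_full": "expand virtual disk and notify storage team",
    "service_down":     "restart service and open a repair ticket",
    "high_cpu":         "migrate VM to a less loaded host",
}

def handle_alert(alert: str) -> str:
    """Apply the scripted response for a known condition, else escalate to staff."""
    action = KNOWN_RESPONSES.get(alert)
    return action if action else f"escalate '{alert}' to an administrator"

for alert in ("service_down", "unknown_condition"):
    print(alert, "->", handle_alert(alert))
```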

Virtualization: New System Management Challenges


While server virtualization brings many advantages in terms of cost savings and operating efficiencies, it also brings new and expanded requirements for system management. Server virtualization based on hypervisors introduces a new layer between the operating system and the hardware and creates new objects to manage. These objects include virtual server host systems, guest virtual machines stored in VHD libraries or deployed on host servers, as well as the "guest" operating systems and applications deployed in virtual machines. Typically, server virtualization results in some level of operating system proliferation, or "virtual machine sprawl," which may substantially increase the overall number of server operating system images that need to be managed by system administrators.

System management software is needed to perform the standard management functions required for physical systems, but now for virtual machine images as well. Particular requirements exist for migrating physical server images to virtual images (P2V), as well as for managing growing libraries of virtual images, many of which will be retained in cold storage but will still need to be inventoried and maintained. Some functions, such as performance management, require extended capabilities to properly represent systems and applications running in guest environments.

System management functions can be well served by an approach that combines physical and virtual server management under a common umbrella that integrates functions and views, such as consolidated views that show all deployed physical and virtual servers with associated resources (a minimal sketch of such a consolidated inventory follows below). In addition, management of the virtualized server environment needs to follow established IT processes or process standards for system management functions, such as ITIL best practices. In advanced virtualization cases, software for adaptive policy-based management and orchestration can be used to automate resource optimization and complex workflow scenarios to meet service-level requirements.
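A consolidated physical-plus-virtual view of the kind described above can be sketched very simply. The hosts, guests and image names below are invented for illustration.

```python
# Minimal sketch of a consolidated view of physical hosts and the virtual machines
# on them (deployed or held in a cold library), as described above. Names are invented.

inventory = {
    "physical_hosts": {
        "host-a": ["ad-01", "exchange-01"],
        "host-b": ["web-01", "web-02"],
    },
    "cold_library": ["legacy-erp.vhd", "test-template.vhd"],
}

def consolidated_view(inv: dict) -> None:
    """Print every physical server with its guests, plus images kept in cold storage."""
    for host, guests in inv["physical_hosts"].items():
        print(f"{host}: {len(guests)} running VM(s) -> {', '.join(guests)}")
    print(f"cold library: {len(inv['cold_library'])} image(s) to inventory and patch")

consolidated_view(inventory)
```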

CONTINUING BENEFITS

The figure compares cost reductions for a basic virtualization scenario and an advanced virtualization scenario against the baseline unvirtualized environment. Hardware cost reductions for each scenario differ only slightly, as these savings typically come from a one-time cost reduction that does not differ substantially for basic virtualization versus advanced virtualization infrastructures. As the figure depicts, the move from basic virtualization to advanced virtualization has a major impact on the cost reductions associated with downtime reduction.

The figure compares the annual cost of operations for a server configuration that is deployed in an unvirtualized mode with the annual cost of operations for basic virtualization and advanced virtualization scenarios.

CHALLENGES/OPPORTUNITIES
Any new technology faces challenges even as it opens doors to powerful new opportunities. Virtualization software and the associated systems management tools that enable an advanced virtualization deployment certainly face challenges such as the following:

Moving from distributed to consolidated infrastructure. IT organizations face the challenge of generating executive backing for the initial investment in moving from a distributed to a consolidated infrastructure in order to save money later.


Aligning and/or minimizing management tools in use. Most organizations now support multiple management tools in their infrastructure. Proponents or departments invested in particular tools will resist consolidation down to a smaller number of tools.

Consolidating version inconsistencies. Moving to an advanced virtualization infrastructure mandates ensuring a high degree of consistency in the virtual servers that run on the infrastructure. This goal, as attractive as it sounds, contrasts starkly with the typical deployments at most companies. The physical-to-virtual migration/consolidation activities will become more complex and involved than the simple, straightforward, and easily accomplished migrations that advertisements often suggest.

Once beyond these challenges, the customer realizes the following opportunities:

Cost reductions and business value. The cost reductions are potentially huge and can continue to accrue as additional servers are migrated from a distributed physical infrastructure to a virtualized consolidated infrastructure. The potential upside for organizations remains huge.

Agility benefits that are real. While agility benefits come first and foremost from having a solid management system in place, layering that solid management toolset on top of a virtual infrastructure multiplies those benefits.

Reduction of unscheduled downtime. A virtual infrastructure is a two-pronged tool. On the one hand, it makes it easier both to reduce or totally eliminate scheduled downtime and to minimize unscheduled downtime. On the other, by consolidating more operating systems aboard a smaller number of physical servers, those physical servers each become increasingly critical resources because of all the software loaded aboard those machines.

Green IT benefits. Moving to a virtualized infrastructure that reduces the number of physical servers has a direct impact on power and cooling requirements and associated carbon emissions. Even better, moving to a virtualized x86 infrastructure may delay or eliminate the need for datacenter expansion. For some organizations, it may actually lead to datacenter consolidation.

CONCLUSION


Virtualization delivers compelling business value today: increasing by a factor of three the number of users supported per server, improving availability of servers, enabling application scalability, and reducing costs across the board. In particular, the introduction of modern virtualization solutions based on blade architectures, which can offer both intelligent configuration and management and the ability to perform physical-to-physical migration, can help promote uptime and efficient resource usage, particularly when used in direct combination with the high-quality hypervisors available on the market today. These same technologies can lower costs directly through an immediate reduction of power and cooling costs, and subsequently deliver a long-term benefit through lower IT administrative costs that continue to benefit IT organizations and their parent companies year after year. It is believed that these same benefits can accrue for integrated virtual and physical management solutions, and that organizations using x86 solutions today should move to adopt such a next-generation solution. In the process, they will gain better utilization of server resources and reductions in acquisition, deployment, and power and cooling costs. Further, reducing staff costs and increasing business agility lead to long-term benefits that will continue for years to deliver returns on the investment required to put this in place initially.

