
INCREASE FLEXIBILITY AND LOWER YOUR TCO WITH SERVER VIRTUALIZATION

Server virtualization is changing the way IT organizations operate. And Intel® server processors are extending the benefits of software virtualization far beyond consolidation to increase data center flexibility, productivity, and TCO. Intel® server processors with virtualization built in, such as the Intel® Xeon® processor 5600 series, the Intel® Xeon® processor 7500 series, and the Intel® Itanium® processor 9300 series, deliver these benefits.

To help you get the most out of virtualization, Intel has built a better physical server platform with unique hardware-assist features. These hardware-assist features accelerate fundamental virtualization processes throughout the platform to reduce latencies and avoid potential bottlenecks, giving you better value from your server and software investments.

Because Intel server processors let you do more with less, you can realize energy efficiency and reduce server sprawl. And with remote management features, the ease of failover, and seamless live migrations, you can keep both you and your users happy with reduced support costs and increased uptime levels.

Migrate applications across generations

One of the most exciting capabilities is live migration of applications across server nodes and even across local or wide area networks. This capability gives you unprecedented flexibility in resource management while reducing downtime and simplifying disaster recovery. Intel® Virtualization Technology FlexMigration (Intel® VT FlexMigration) also allows you to combine multiple processor generations into the same virtualized server pool to extend failover, load balancing, and disaster recovery capability, letting you get the most value out of your existing investment.
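As a rough illustration of what a live migration looks like from a management tool, the sketch below uses the libvirt Python bindings, one common way to drive a KVM-based hypervisor. This is an assumption about the stack, not part of the Intel material: the host URIs, the guest name, and the availability of shared storage are all placeholders.

    # Minimal sketch of a live migration using the libvirt Python bindings.
    # Assumptions: both hosts run a libvirt-managed hypervisor (e.g. KVM),
    # shared storage is already in place, and the names below are placeholders.
    import libvirt

    SOURCE_URI = "qemu:///system"                       # local hypervisor (assumption)
    DEST_URI = "qemu+ssh://host2.example.com/system"    # target host (assumption)
    DOMAIN_NAME = "web-vm-01"                           # hypothetical guest name

    src = libvirt.open(SOURCE_URI)
    dst = libvirt.open(DEST_URI)

    dom = src.lookupByName(DOMAIN_NAME)

    # VIR_MIGRATE_LIVE keeps the guest running while its memory is copied over.
    dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

    print("guest now runs on", dst.getHostname())
    dst.close()
    src.close()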

With benefits in productivity, utilization, energy savings, manageability, and service levels, Intel server processors are a compelling investment both for your IT organization and for your business as a whole.
Virtualization:

Virtualization technologies increase manageability, security, and flexibility in IT environments. Hardware-assisted Intel® Virtualization Technology (Intel® VT)¹, combined with software-based virtualization solutions, provides maximum system utilization by consolidating multiple environments into a single server or PC. By abstracting the software away from the underlying hardware, a world of new usage models opens up that reduce costs, increase management efficiency, and strengthen security, while making your computing infrastructure more resilient in the event of a disaster.

Concept

Virtualization is the creation of a virtual (rather than actual) version of something, such
as an operating system, a server, a storage device or network resources.

Virtualization can be viewed as part of an overall trend in enterprise IT that includes autonomic computing, a scenario in which the IT environment will be able to manage itself based on perceived activity, and utility computing, in which computer processing power is seen as a utility that clients pay for only as needed. The usual goal of virtualization is to centralize administrative tasks while improving scalability and workloads.

Hardware virtualization

In computing, hardware virtualization is a virtualization of computers or operating systems. It hides the physical characteristics of a computing platform from users, instead showing another abstract computing platform.[1][2] The software that controls the virtualization used to be called a "control program" at its origins, but nowadays the terms hypervisor or virtual machine monitor are preferred.

Concept

The term "virtualization" was coined in the 1960s, to refer to a virtual machine (sometimes
called pseudo machine), a term which itself dates from the experimental IBM M44/44X system.[citation
needed]
The creation and management of virtual machines has been called platform virtualization,
or server virtualization, more recently.

Platform virtualization is performed on a given hardware platform by host software (a control program),
which creates a simulated computer environment, a virtual machine, for its guest software. The guest
software is not limited to user applications; many hosts allow the execution of complete operating
systems. The guest software executes as if it were running directly on the physical hardware, with
several notable caveats. Access to physical system resources (such as the network access, display,
keyboard, and disk storage) is generally managed at a more restrictive level than the host processor
and system-memory. Guests are often restricted from accessing specific peripheral devices, or may be
limited to a subset of the device's native capabilities, depending on the hardware access policy
implemented by the virtualization host.

Virtualization often exacts performance penalties, both in the resources required to run the hypervisor and in the reduced performance of the virtual machine compared to running natively on the physical machine.
Reasons for virtualization
In the case of server consolidation, many small physical servers are replaced by one larger physical server to increase the utilization of costly hardware resources such as CPU. Although hardware is consolidated, typically OSs are not. Instead, each OS running on a physical server is converted to a distinct OS running inside a virtual machine. The large server can "host" many such "guest" virtual machines. This is known as Physical-to-Virtual (P2V) transformation.

A virtual machine can be more easily controlled and inspected from outside than a physical one, and
its configuration is more flexible. This is very useful in kernel development and for teaching operating
system courses.[3]

A new virtual machine can be provisioned as needed without the need for an up-front hardware
purchase. Also, a virtual machine can easily be relocated from one physical machine to another as
needed. For example, a salesperson going to a customer can copy a virtual machine with the
demonstration software to his laptop, without the need to transport the physical computer. Likewise, an
error inside a virtual machine does not harm the host system, so there is no risk of breaking down the
OS on the laptop.

Because of the easy relocation, virtual machines can be used in disaster recovery scenarios.

However, when multiple VMs are concurrently running on the same physical host, each VM may
exhibit a varying and unstable performance, which highly depends on the workload imposed on the
system by other VMs, unless proper techniques are used for temporal isolation among virtual
machines.

There are several approaches to platform virtualization.

Examples of virtualization scenarios:

Running one or more applications that are not supported by the host OS

A virtual machine running the required guest OS could allow the desired applications to be
run, without altering the host OS.

Evaluating an alternate operating system

The new OS could be run within a VM, without altering the host OS.

Server virtualization

Multiple virtual servers could be run on a single physical server, in order to more fully utilize
the hardware resources of the physical server.

Duplicating specific environments


A virtual machine could, depending on the virtualization software used, be duplicated and
installed on multiple hosts, or restored to a previously backed-up system state.

Creating a protected environment

If a guest OS running on a VM becomes infected with malware, the host operating system's
exposure to the risk may be limited, depending on the configuration of the virtualization
software.
Full virtualization
Main article: Full virtualization

Logical diagram of full virtualization.

In full virtualization, the virtual machine simulates enough hardware to allow an unmodified "guest" OS (one designed for the same instruction set) to be run in isolation. This approach was pioneered in 1966 with the IBM CP-40 and CP-67, predecessors of the VM family. Examples outside the mainframe field include Parallels Workstation, Parallels Desktop for Mac, VirtualBox, Virtual Iron, Oracle VM, Virtual PC, Virtual Server, Hyper-V, VMware Workstation, VMware Server (formerly GSX Server), QEMU, Adeos, Mac-on-Linux, Win4BSD, Win4Lin Pro, and Egenera vBlade technology.
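As a concrete illustration of the "unmodified guest" idea, the sketch below launches a guest OS under QEMU, one of the full virtualization examples named above, from a Python script. It is only a sketch: it assumes QEMU is installed, a disk image exists at the path shown, and that KVM acceleration is available (drop -enable-kvm otherwise); memory size and CPU count are arbitrary.

    # Launch an unmodified guest OS under QEMU (full virtualization) from Python.
    import subprocess

    cmd = [
        "qemu-system-x86_64",
        "-enable-kvm",                              # hardware-assisted acceleration, if available
        "-m", "2048",                               # 2 GiB of guest RAM
        "-smp", "2",                                # 2 virtual CPUs
        "-drive", "file=guest.qcow2,format=qcow2",  # the guest's disk image (assumed to exist)
    ]
    subprocess.run(cmd, check=True)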

Hardware-assisted virtualization
Main article: Hardware-assisted virtualization
In hardware-assisted virtualization, the hardware provides architectural
support that facilitates building a virtual machine monitor and allows guest
OSes to be run in isolation.[4] Hardware-assisted virtualization was first
introduced on the IBM System/370 in 1972, for use with VM/370, the first
virtual machine operating system. In 2005 and
2006, Intel and AMD provided additional hardware to support
virtualization. Sun Microsystems (now Oracle Corporation) added similar
features in their UltraSPARC T-Series processors in 2005. Examples of
virtualization platforms adapted to such hardware include Linux
KVM, VMware Workstation, VMware Fusion, Microsoft Virtual
PC, Xen, Parallels Desktop for Mac, Oracle VM Server for
SPARC, VirtualBox and Parallels Workstation.

Hardware platforms with integrated virtualization technologies include (a quick runtime check for the x86 extensions is sketched after this list):

 x86 (and x86-64)—AMD-V (previously known as Pacifica), Intel VT-x (previously known as Vanderpool)

 IOMMU implementations by both AMD and Intel

 Power Architecture (IBM, Power.org)

 Virtage (Hitachi)

 UltraSPARC T1, T2, T2+, SPARC T3 (Oracle Corporation)
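On Linux the x86 extensions above can be detected at runtime by reading the CPU flags: 'vmx' indicates Intel VT-x and 'svm' indicates AMD-V. The small sketch below is just that check; it assumes a Linux host with a readable /proc/cpuinfo.

    # Quick check (Linux only) for the x86 hardware-virtualization extensions
    # named above: 'vmx' marks Intel VT-x, 'svm' marks AMD-V.
    def hw_virt_support(cpuinfo_path="/proc/cpuinfo"):
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    flags = set(line.split(":", 1)[1].split())
                    if "vmx" in flags:
                        return "Intel VT-x"
                    if "svm" in flags:
                        return "AMD-V"
        return None

    if __name__ == "__main__":
        support = hw_virt_support()
        print(support or "no hardware-assisted virtualization flag found")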

Partial virtualization

In partial virtualization, including address space virtualization, the virtual machine simulates multiple instances of much of an underlying hardware environment, particularly address spaces.[clarification needed] Usually, this means that entire operating systems cannot run in the virtual machine – which would be the sign of full virtualization – but that many applications can run. A key form of partial virtualization is address space virtualization, in which each virtual machine consists of an independent address space. This capability requires address relocation hardware, and has been present in most practical examples of partial virtualization.[citation needed]
Partial virtualization was an important historical milestone on the way
to full virtualization. It was used in the first-generation time-sharing
system CTSS, in the IBM M44/44X experimental paging system, and
arguably systems like MVS and the Commodore 64 (a couple of 'task
switch' programs).[dubious – discuss][citation needed] The term could also be used to
describe any operating system that provides separate address spaces for
individual users or processes, including many that today would not be
considered virtual machine systems. Experience with partial virtualization,
and its limitations, led to the creation of the first full virtualization system
(IBM's CP-40, the first iteration of CP/CMS, which would eventually
become IBM's VM family). (Many more recent systems, such as Microsoft
Windows and Linux, as well as the remaining categories below, also use
this basic approach.[dubious – discuss][citation needed])

Partial virtualization is significantly easier to implement than full virtualization. It has often provided useful, robust virtual machines, capable of supporting important applications. Partial virtualization has proven highly successful for sharing computer resources among multiple users.[citation needed]

However, in comparison with full virtualization, its drawback is in situations requiring backward compatibility or portability. It can be hard to anticipate precisely which features have been used by a given application. If certain hardware features are not simulated, then any software using those features will fail.

Paravirtualization

Main article: Paravirtualization

In paravirtualization, the virtual machine does not necessarily simulate hardware, but instead (or in addition) offers a special API that can only be used by modifying[clarification needed] the "guest" OS. This system call to the hypervisor is called a "hypercall" in TRANGO, Xen and KVM; it is implemented via a DIAG ("diagnose") hardware instruction in IBM's CMS under VM[clarification needed] (which was the origin of the term hypervisor). Examples include IBM's LPARs,[5] Win4Lin 9x, Sun's Logical Domains, z/VM,[citation needed] and TRANGO.
Operating system-level virtualization
Main article: Operating system-level virtualization

In operating system-level virtualization, a physical server is virtualized at the operating system level, enabling multiple isolated and secure virtualized servers to run on a single physical server. The "guest" OS environments share the same OS as the host system – i.e. the same OS kernel is used to implement the "guest" environments. Applications running in a given "guest" environment view it as a stand-alone system. The pioneer implementation was FreeBSD jails; other examples include Solaris Containers, OpenVZ, Linux-VServer, AIX Workload Partitions, Parallels Virtuozzo Containers, and iCore Virtual Accounts.
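The systems listed above build on, and go well beyond, the classic Unix chroot primitive: each adds kernel namespaces, resource controls, and its own management tooling. The sketch below shows only the underlying idea of confining a process to its own filesystem view; it is not any of those products' mechanisms, it requires root, and the guest directory is an assumption.

    # Illustration only: the chroot primitive that Unix container technologies
    # build on. Real systems (jails, Zones, OpenVZ, ...) add kernel namespaces,
    # resource limits, and their own management tooling on top of this idea.
    # Requires root; NEW_ROOT must already contain a minimal filesystem tree.
    import os

    NEW_ROOT = "/srv/containers/guest1"   # hypothetical guest root (assumption)

    pid = os.fork()
    if pid == 0:
        # Child: confine its view of the filesystem to the guest tree, then
        # run a command "inside" the guest environment.
        os.chroot(NEW_ROOT)
        os.chdir("/")
        os.execv("/bin/sh", ["/bin/sh", "-c", "echo hello from the guest"])
    else:
        _, status = os.waitpid(pid, 0)
        print("guest process exited with status", status)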

Hardware Virtualization Disaster Recovery
A Disaster Recovery (DR) plan is good business practice for a hardware virtualization platform solution. DR of a virtualization environment can ensure a high rate of availability during a wide range of situations that disrupt normal business operations. Continued operation of VMs is mission critical, and a DR plan can compensate for concerns about hardware performance and maintenance requirements. A hardware virtualization DR environment will involve hardware and software protection solutions based on business continuity needs.[6][7]

Hardware virtualization DR methods

Tape backup for software data long-term archival needs

This common method can be used to store data offsite, but recovering the data can be a difficult and lengthy process. Tape backup data is only as good as the latest copy stored. Tape backup methods require a backup device and ongoing storage material.

Whole-file and application replication

The implementation of this method requires control software and storage capacity for replication of application and data files, typically on the same site. The data is replicated on a different disk partition or separate disk device; replication can be a scheduled activity for most servers and is implemented more for database-type applications.
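One very simple way to approximate the scheduled whole-file replication just described is to copy the protected files to a second disk or site with rsync on a timer. The paths below are assumptions, and production deployments would more likely use the virtualization vendor's own replication tooling; this is only a sketch of the idea.

    # Scheduled whole-file replication sketch: copy VM images / application
    # files to a replica location with rsync (run from cron, systemd, etc.).
    import subprocess

    SOURCE = "/var/lib/libvirt/images/"          # files to protect (assumption)
    REPLICA = "backup-host:/srv/dr/images/"      # replica location (assumption)

    subprocess.run(
        ["rsync", "-a", "--delete", SOURCE, REPLICA],
        check=True,
    )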

Hardware and software redundancy

This solution provides the highest level of disaster recovery protection for a hardware virtualization solution by providing duplicate hardware and software replication in two distinct geographic areas.[8]

Memory virtualization

In computer science, memory virtualization decouples volatile random access memory (RAM)
resources from individual systems in the data center, and then aggregates those resources into a
virtualized memory pool available to any computer in the cluster.[citation needed] The memory pool is
accessed by the operating system or applications running on top of the operating system. The
distributed memory pool can then be utilized as a high-speed cache, a messaging layer, or a large,
shared memory resource for a CPU or a GPU application.


Description

Memory virtualization allows networked, and therefore distributed, servers to share a pool of memory to overcome physical memory limitations, a common bottleneck in software performance.[citation needed] With this capability integrated into the network, applications can take advantage of a very large amount of memory to improve overall performance and system utilization, increase memory usage efficiency, and enable new use cases. Software on the memory pool nodes (servers) allows nodes to
connect to the memory pool to contribute memory, and store and retrieve data. Management software
manages the shared memory, data insertion, eviction and provisioning policies, data assignment to
contributing nodes, and handles requests from client nodes. The memory pool may be accessed at the
application level or operating system level. At the application level, the pool is accessed through an
API or as a networked file system to create a high-speed shared memory cache. At the operating
system level, a page cache can utilize the pool as a very large memory resource that is much faster
than local or networked storage.

Memory virtualization implementations are distinguished from shared memory systems. Shared
memory systems do not permit abstraction of memory resources, thus requiring implementation with a
single operating system instance (i.e. not within a clustered application environment).

Memory virtualization is also different from memory-based storage such as solid state disks (SSDs).
They both allow sharing the memory space (i.e. RAM, flash memory) in a cluster, but SSDs use an
overly complicated and less efficient interface, identical to the interface of hard disk drives.

Benefits

 Improves memory utilization via the sharing of scarce resources

 Increases efficiency and decreases run time for data intensive and I/O bound applications

 Allows applications on multiple servers to share data without replication, decreasing total
memory needs

 Lowers latency and provides faster access than other solutions such as SSD, SAN or NAS

 Scales linearly as memory resources are added to the cluster and made available to the
memory pool.[citation needed]

Products

 RNA networks Memory Virtualization Platform - A low latency memory pool, implemented as
a shared cache and a low latency messaging solution.

 Gigaspaces - Application platforms for Java and .Net environments that offer an alternative to
traditional application-servers. Includes in-memory data grid for grid computing.

 ScaleMP - A platform to combine resources from multiple computers for the purpose of
creating a single computing instance.

 Wombat Data Fabric – A memory based messaging fabric for delivery of market data in
financial services.
Implementations

Application level integration

In this case, applications running on connected computers connect to the memory pool directly through an API or the file system.

Cluster implementing memory virtualization at the application level. Contributors 1...n contribute memory to the
pool. Applications read and write data to the pool using Java or C APIs, or a file system API.
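The products named earlier expose vendor-specific APIs for this put/get access pattern; none of them are reproduced here. The hypothetical sketch below only shows the shape of application-level access, with a plain dict standing in for the remote, aggregated pool so the example stays self-contained.

    # Hypothetical sketch of the application-level access pattern described above:
    # contributors expose memory to a pool, and clients put/get objects through an
    # API. A plain dict stands in for the networked pool so the example runs
    # stand-alone; real products replace it with a distributed, networked store.
    class MemoryPoolClient:
        def __init__(self):
            self._pool = {}          # stand-in for the remote, aggregated memory

        def put(self, key, value):
            self._pool[key] = value  # a real client would pick a contributing node

        def get(self, key, default=None):
            return self._pool.get(key, default)

        def evict(self, key):
            self._pool.pop(key, None)

    # Typical cache-style usage:
    pool = MemoryPoolClient()
    pool.put("session:42", {"user": "alice", "cart": [101, 205]})
    print(pool.get("session:42"))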

Operating System Level Integration

In this case, the operating system connects to the memory pool, and makes pooled memory available to applications.

Cluster implementing memory virtualization at the operating system level. Contributors 1...n contribute memory to the pool. The operating system connects to the memory pool through the page cache system. Applications consume pooled memory via the operating system.
Background

Memory virtualization technology follows from memory management architectures and virtual
memory techniques. In both fields, the path of innovation has moved from tightly coupled relationships
between logical and physical resources to more flexible, abstracted relationships where physical
resources are allocated as needed.

Virtual memory systems abstract between physical RAM and virtual addresses, assigning virtual
memory addresses both to physical RAM and to disk-based storage, expanding addressable memory,
but at the cost of speed. NUMA and SMP architectures optimize memory allocation within multi-
processor systems. While these technologies dynamically manage memory within individual
computers, memory virtualization manages the aggregated memory of multiple networked computers
as a single memory pool.

In tandem with memory management innovations, a number of virtualization techniques have arisen to make the best use of available hardware resources. Application virtualization was demonstrated in mainframe systems first. The next wave was storage virtualization, as servers connected to storage systems such as NAS or SAN in addition to, or instead of, on-board hard disk drives. Server virtualization, or full virtualization, partitions a single physical server into multiple virtual machines, consolidating multiple instances of operating systems onto the same machine for the purposes of efficiency and flexibility. In both storage and server virtualization, the applications are unaware that the resources they are using are virtual rather than physical, so efficiency and flexibility are achieved without application changes. In the same way, memory virtualization pools the memory of an entire networked cluster of servers and allocates it among the computers in that cluster.

I/O virtualization

Input/output (I/O) virtualization is a methodology to simplify management, lower costs and improve
performance of servers in enterprise environments. I/O virtualization environments are created by
abstracting the upper layer protocols from the physical connections.[1]

The technology enables one physical adapter card to appear as multiple virtual network interface cards (vNICs) and virtual host bus adapters (vHBAs).[2] Virtual NICs and HBAs function as conventional NICs and HBAs, and are designed to be compatible with existing operating systems, hypervisors, and applications. To networking resources (LANs and SANs), they appear as normal cards.
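One concrete mechanism by which a single physical adapter presents multiple vNICs is SR-IOV, which Linux exposes through sysfs. The sketch below enables a few virtual functions on a physical NIC; it assumes root privileges, an SR-IOV-capable adapter, and that "eth0" and the VF count are placeholders, and it is not the only way virtual I/O is implemented.

    # SR-IOV sketch: ask the driver to create virtual functions (vNICs) on a
    # physical NIC via the standard sysfs interface. Requires root and an
    # SR-IOV-capable adapter; "eth0" and the VF count are assumptions.
    from pathlib import Path

    PF = "eth0"                      # physical function / parent NIC (assumption)
    WANTED_VFS = 4

    dev = Path("/sys/class/net") / PF / "device"
    total = int((dev / "sriov_totalvfs").read_text())
    print(f"{PF} supports up to {total} virtual functions")

    # Writing to sriov_numvfs creates that many vNICs, which then show up as
    # extra network interfaces (and can be handed to virtual machines).
    (dev / "sriov_numvfs").write_text(str(min(WANTED_VFS, total)))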

In the physical view, virtual I/O replaces a server’s multiple I/O cables with a single cable that provides
a shared transport for all network and storage connections. That cable (or commonly two cables for
redundancy) connects to an external device, which then provides connections to the data center
networks.[2]

Reasons for I/O virtualization


Server I/O is a critical component to successful and effective server deployments, particularly with
virtualized servers. To accommodate multiple applications, virtualized servers demand more
network bandwidth and connections to more networks and storage. According to a survey, 75% of
virtualized servers require 7 or more I/O connections per device, and are likely to require more
frequent I/O reconfigurations.[3]

In virtualized data centers, I/O performance problems are caused by running numerous virtual machines (VMs) on one server. In early server virtualization implementations, the number of virtual machines per server was typically limited to six or less. But it was found that a server could safely run seven or more applications, often using 80 percent of total server capacity, an improvement over the average 5 to 15 percent utilized with non-virtualized servers.

However, increased utilization created by virtualization placed a significant strain on the server’s I/O
capacity. Network traffic, storage traffic, and inter-server communications combine to impose
increased loads that may overwhelm the server's channels, leading to backlogs and idle CPUs as they
wait for data.[4]

Virtual I/O addresses performance bottlenecks by consolidating I/O to a single connection whose
bandwidth ideally exceeds the I/O capacity of the server itself, thereby ensuring that the I/O link itself is
not a bottleneck. That bandwidth is then dynamically allocated in real time across multiple virtual
connections to both storage and network resources. In I/O intensive applications, this approach can
help increase both VM performance and the potential number of VMs per server.[2]

Virtual I/O systems that include Quality of service (QoS) controls can also regulate I/O bandwidth to
specific virtual machines, thus ensuring predictable performance for critical applications. QoS thus
increases the applicability of server virtualization for both production server and end-user applications.
[4]
Benefits

 Management agility: By abstracting upper layer protocols from physical connections, I/O
virtualization provides greater flexibility, greater utilization and faster provisioning when compared
to traditional NIC and HBA card architectures.[1] Virtual I/O technologies can be dynamically
expanded and contracted (versus traditional physical I/O channels that are fixed and static), and
usually replace multiple network and storage connections to each server with a single cable that
carries multiple traffic types.[5] Because configuration changes are implemented in software rather
than hardware, time periods to perform common data center tasks – such as adding servers,
storage or network connectivity – can be reduced from days to minutes.[6]

 Reduced cost: Virtual I/O lowers costs and enables simplified server management by using
fewer cards, cables, and switch ports, while still achieving full network I/O performance.[7] It also
simplifies data center network design by consolidating and better utilizing LAN and SAN network
switches.[8]

 Reduced cabling: In a virtualized I/O environment, only one cable is needed to connect
servers to both storage and network traffic. This can reduce data center server-to-network, and
server-to-storage cabling within a single server rack by more than 70 percent, which equates to
reduced cost, complexity, and power requirements. Because the high-speed interconnect is
dynamically shared among various requirements, it frequently results in increased performance as
well.[8]

 Increased density: I/O virtualization increases the practical density of I/O by allowing more
connections to exist within a given space. This in turn enables greater utilization of dense 1U high
servers and blade servers that would otherwise be I/O constrained.

Blade server chassis enhance density by packaging many servers (and hence many I/O connections)
in a small physical space. Virtual I/O consolidates all storage and network connections to a single
physical interconnect, which eliminates any physical restrictions on port counts. Virtual I/O also
enables software-based configuration management, which simplifies control of the I/O devices. The
combination allows more I/O ports to be deployed in a given space, and facilitates the practical
management of the resulting environment.[9]

Application virtualization
Application virtualization is an umbrella term that describes software technologies that improve
portability, manageability and compatibility of applications by encapsulating them from the
underlying operating system on which they are executed. A fully virtualized application is not installed
in the traditional sense[1], although it is still executed as if it were. The application is fooled at runtime
into believing that it is directly interfacing with the original operating system and all the resources
managed by it, when in reality it is not. In this context, the term "virtualization" refers to the artifact
being encapsulated (application), which is quite different to its meaning in hardware virtualization,
where it refers to the artifact being abstracted (physical hardware).


Description

Limited application virtualization is used in modern operating systems such as Microsoft Windows and Linux. For example, IniFileMappings were introduced with Windows NT to virtualize (into the Registry) the legacy INI files of applications originally written for Windows 3.1.[2] Similarly, Windows
Vista implements limited file and Registry virtualization so that legacy applications that try to save user
data in a system location that was writeable in older versions of Windows, but is now only writeable by
highly privileged system software, can work on the new Windows system without the obligation of the
program having higher-level security privileges (which would carry security risks).[3]

Full application virtualization requires a virtualization layer.[4] Application virtualization layers replace
part of the runtime environment normally provided by the operating system. The layer intercepts all file
and Registry operations of virtualized applications and transparently redirects them to a virtualized
location, often a single file.[5] The application never knows that it's accessing a virtual resource instead
of a physical one. Since the application is now working with one file instead of many files and registry
entries spread throughout the system, it becomes easy to run the application on a different computer
and previously incompatible applications can be run side-by-side. Examples of this technology for the
Windows platform are Cameyo, Ceedo, Evalaze, InstallFree, Citrix XenApp, Novell ZENworks
Application Virtualization, Endeavors Technologies Application Jukebox, Microsoft Application
Virtualization, Software Virtualization Solution, VMware ThinApp and InstallAware Virtualization.
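The intercept-and-redirect idea described above can be illustrated with a toy sketch: file operations aimed at a shared system location are transparently rerouted into a per-application virtual store. This is a conceptual illustration only, not any listed product's mechanism; the system path and sandbox directory are invented for the example.

    # Conceptual illustration of application-virtualization redirection: file
    # operations aimed at a 'system' location are rerouted to a per-application
    # virtual store. Paths are invented for the example.
    from pathlib import Path

    VIRTUAL_ROOT = Path.home() / ".appvirt" / "demo-app"   # per-app store (assumption)

    def redirect(path: str) -> Path:
        """Map an absolute 'system' path into the application's virtual store."""
        rel = Path(path).relative_to(Path(path).anchor)     # strip drive/root
        target = VIRTUAL_ROOT / rel
        target.parent.mkdir(parents=True, exist_ok=True)
        return target

    def virtual_open(path: str, mode: str = "r"):
        # The application believes it is writing to the system location.
        return open(redirect(path), mode)

    with virtual_open("/etc/demo-app/settings.ini", "w") as f:
        f.write("[ui]\ntheme=dark\n")

    print("really stored at:", redirect("/etc/demo-app/settings.ini"))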

Related Technologies
Technology categories that fall under application virtualization include:

 Application Streaming. Pieces of the application's code, data, and settings are delivered when
they're first needed, instead of the entire application being delivered before startup. Running the
packaged application may require the installation of a lightweight client application. Packages are
usually delivered over a protocol such as HTTP, CIFS or RTSP.[6]

 Desktop Virtualization/Virtual Desktop Infrastructure (VDI). The application is hosted in a VM or blade PC that also includes the operating system (OS). These solutions include a management infrastructure for automating the creation of virtual desktops, and providing for access control to the target virtual desktop. VDI solutions can usually fill the gaps where application streaming falls short.

Benefits of application virtualization

 Allows applications to run in environments that do not suit the native application
(e.g. Wine allows Microsoft Windows applications to run on Linux).

 May protect the operating system and other applications from poorly written or buggy code.

 Uses fewer resources than a separate virtual machine.

 Run applications that are not written correctly, for example applications that try to store user
data in a read-only system-owned location.

 Run incompatible applications side-by-side, at the same time[6] and with minimal regression
testing against one another.[7]

 Maintain a standard configuration in the underlying operating system across multiple computers in an organization, regardless of the applications being used, thereby keeping costs down.

 Implement the security principle of least privilege by removing the requirement for end-users
to have Administrator privileges in order to run poorly written applications.

 Simplified operating system migrations.[6]

 Accelerated application deployment, through on-demand application streaming.[6]

 Improved security, by isolating applications from the operating system.[6]


 Enterprises can easily track license usage. Application usage history can then be used to
save on license costs.

 Fast application provisioning to the desktop based upon user's roaming profile.

 Allows applications to be copied to portable media and then imported to client computers
without need of installing them.[8]

Limitations of application virtualization

 Not all software can be virtualized. Some examples include applications that require a device
driver and 16-bit applications that need to run in shared memory space.[9]

 Some types of software, such as anti-virus packages and applications that require heavy OS integration, such as WindowBlinds or StyleXP, are difficult to virtualize.

 Only file and Registry-level compatibility issues between legacy applications and newer
operating systems can be addressed by application virtualization. For example, applications that
don't manage the heap correctly will not execute on Windows Vista as they still allocate memory in
the same way, regardless of whether they are virtualized or not.[10] For this reason, specialist
application compatibility fixes (shims) may still be needed, even if the application is virtualized.[11]

Storage virtualization

Storage virtualization is a concept and term used within computer science. Specifically, storage systems may use 'virtualization' concepts as a tool to enable better functionality and more advanced features within the storage system.

Broadly speaking, a 'storage system' is also known as a storage array or Disk array or a filer.
Storage systems typically utilize specialized hardware and software along with disk drives in
order to provide very fast and reliable storage for computing and data processing. Storage
systems are complex, and may be thought of as a special purpose computer designed to provide
storage capacity along with advanced data protection features. Disk drives are only one element
within a storage system, along with hardware and special purpose embedded software within the
system.

Storage systems can provide either block accessed storage, or file accessed storage. Block
access is typically delivered over Fibre Channel, SAS, FICON or other protocols. File access is
often provided using NFS or CIFS protocols.

Within the context of a storage system, there are two primary types of virtualization that can
occur:
 Block Virtualization
 File Virtualization

Block virtualization used in this context refers to the abstraction (separation) of logical
storage from physical storage so that it may be accessed without regard to physical storage or
heterogeneous structure. This separation allows the administrators of the storage system greater
flexibility in how they manage storage for end users.[1]
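A toy sketch of the logical/physical separation just described: the host addresses logical blocks, and a mapping layer decides which physical device and block actually hold the data. The device names and the mapping are invented for the example; real storage systems implement this in specialized firmware and software, not in Python.

    # Toy illustration of block virtualization: logical block addresses are
    # resolved through a mapping table to (physical device, physical block).
    mapping = {
        0: ("disk-a", 512),   # logical block -> (physical device, physical block)
        1: ("disk-b", 77),
        2: ("disk-a", 513),
    }

    def read_logical_block(lba: int):
        device, pba = mapping[lba]
        # A real storage system would issue the I/O to the device here; we just
        # report where the request would land.
        return f"logical block {lba} -> {device}, physical block {pba}"

    for lba in range(3):
        print(read_logical_block(lba))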

File Virtualization - File-level virtualization addresses the NAS challenges by eliminating the
dependencies between the data accessed at the file level and the location where the files are
physically stored. This provides opportunities to optimize storage utilization and server
consolidation and to perform nondisruptive file migrations.

Operating system-level virtualization

Operating system-level virtualization is a server virtualization method where the kernel of an operating system allows for multiple isolated user-space instances, instead of just one. Such instances (often called containers, VEs, VPSs or jails) may look and feel like a real server, from the point of view of its owner. On Unix systems, this technology can be thought of as an advanced implementation of the standard chroot mechanism. In addition to isolation mechanisms, the kernel often provides resource management features to limit the impact of one container's activities on the other containers.

Uses
Operating system-level virtualization is commonly used in virtual hosting environments, where it is
useful for securely allocating finite hardware resources amongst a large number of mutually-
distrusting users. It is also used, to a lesser extent, for consolidating server hardware by moving
services on separate hosts into containers on the one server.

Other typical scenarios include separating several applications to separate containers for
improved security, hardware independence, and added resource management features.

OS-level virtualization implementations that are capable of live migration can be used for dynamic
load balancing of containers between nodes in a cluster.

Advantages and disadvantages

Overhead

This form of virtualization usually imposes little or no overhead, because programs in a virtual partition use the operating system's normal system call interface and do not need to be subject to emulation or run in an intermediate virtual machine, as is the case with whole-system virtualizers (such as VMware and QEMU) or paravirtualizers (such as Xen and UML). It also does not require hardware assistance to perform efficiently.

Flexibility

Operating system-level virtualization is not as flexible as other virtualization approaches since it cannot host a guest operating system different from the host one, or a different guest kernel. For example, with Linux, different distributions are fine, but other OSes such as Windows cannot be hosted. This limitation is partially overcome in Solaris by its branded zones feature, which provides the ability to run an environment within a container that emulates a Linux 2.4-based release or an older Solaris release.

Storage

Some operating-system virtualizers provide file-level copy-on-write mechanisms. (Most commonly, a standard file system is shared between partitions, and partitions which change the files automatically create their own copies.) This is easier to back up, more space-efficient and simpler to cache than the block-level copy-on-write schemes common on whole-system virtualizers. Whole-system virtualizers, however, can work with non-native file systems and create and roll back snapshots of the entire system state.

Virtual machine

A virtual machine (VM) is a software implementation of a programmable machine, where the software implementation is constrained within another computer at a higher or lower level of symbolic abstraction.

Definitions
Virtual is a term that originally came from optics, describing objects in a mirror. Objects in a mirror are reflections of an actual physical object, but are not actually that object: the image looks exactly like the actual object and appears to be in the same location.

A virtual machine (VM) is a software implementation of a machine (i.e. a computer) that executes
programs like a physical machine. Virtual machines are separated into two major categories,
based on their use and degree of correspondence to any real machine. A system virtual
machine provides a complete system platform which supports the execution of a
complete operating system (OS). In contrast, a process virtual machine is designed to run a
single program, which means that it supports a single process. An essential characteristic of a
virtual machine is that the software running inside is limited to the resources and abstractions
provided by the virtual machine—it cannot break out of its virtual world.
A virtual machine was originally defined by Popek and Goldberg as "an efficient, isolated
duplicate of a real machine". Current use includes virtual machines which have no direct
correspondence to any real hardware.[1]

System virtual machines


See also: Hardware virtualization and Comparison of platform virtual machines

System virtual machines (sometimes called hardware virtual machines) allow the sharing of the
underlying physical machine resources between different virtual machines, each running its own
operating system. The software layer providing the virtualization is called a virtual machine
monitor or hypervisor. A hypervisor can run on bare hardware (Type 1 or native VM) or on top of
an operating system (Type 2 or hosted VM).

The main advantages of VMs are:

 multiple OS environments can co-exist on the same computer, in strong isolation from
each other
 the virtual machine can provide an instruction set architecture (ISA) that is somewhat
different from that of the real machine
 application provisioning, maintenance, high availability and disaster recovery[2]

The main disadvantages of VMs are:

 a virtual machine is less efficient than a real machine when it accesses the hardware
indirectly
 when multiple VMs are concurrently running on the same physical host, each VM may exhibit varying and unstable performance (in speed of execution, not in results), which highly depends on the workload imposed on the system by other VMs, unless proper techniques are used for temporal isolation among virtual machines.

Multiple VMs each running their own operating system (called guest operating system) are
frequently used in server consolidation, where different services that used to run on individual
machines in order to avoid interference are instead run in separate VMs on the same physical
machine.
The desire to run multiple operating systems was the original motivation for virtual machines, as it
allowed time-sharing a single computer between several single-tasking OSes. In some respects,
a system virtual machine can be considered a generalization of the concept of virtual
memory that historically preceded it. IBM's CP/CMS, the first systems to allow full virtualization,
implemented time sharing by providing each user with a single-user operating system, the CMS.
Unlike virtual memory, a system virtual machine allowed the user to use privileged instructions in
their code. This approach had certain advantages, for instance it allowed users to add
input/output devices not allowed by the standard system.[3]

The guest OSes do not have to be all the same, making it possible to run different OSes on the
same computer (e.g., Microsoft Windows and Linux, or older versions of an OS in order to support
software that has not yet been ported to the latest version). The use of virtual machines to
support different guest OSes is becoming popular in embedded systems; a typical use is to
support a real-time operating system at the same time as a high-level OS such as Linux or
Windows.

Another use is to sandbox an OS that is not trusted, possibly because it is a system under
development. Virtual machines have other advantages for OS development, including better
debugging access and faster reboots.[4]

Process virtual machines


See also: Application virtualization, Run-time system, and Comparison of application virtual
machines

A process VM, sometimes called an application virtual machine, runs as a normal application
inside an OS and supports a single process. It is created when that process is started and
destroyed when it exits. Its purpose is to provide a platform-independent programming
environment that abstracts away details of the underlying hardware or operating system, and
allows a program to execute in the same way on any platform.

A process VM provides a high-level abstraction: that of a high-level programming language (compared to the low-level ISA abstraction of the system VM). Process VMs are implemented using an interpreter; performance comparable to compiled programming languages is achieved by the use of just-in-time compilation.
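The interpreter loop at the heart of a process VM can be shown with a toy example. The bytecode below is made up for illustration; real process VMs such as the JVM or CLR are far richer and add JIT compilation, but the core idea of fetching and dispatching virtual instructions is the same.

    # Toy process VM: an interpreter for a tiny, made-up stack bytecode.
    def run(program):
        stack = []
        for op, *args in program:
            if op == "PUSH":
                stack.append(args[0])
            elif op == "ADD":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "MUL":
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
            elif op == "PRINT":
                print(stack[-1])
            else:
                raise ValueError(f"unknown opcode {op}")

    # (2 + 3) * 4 expressed in the toy bytecode:
    run([("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 4), ("MUL",), ("PRINT",)])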

This type of VM has become popular with the Java programming language, which is implemented
using the Java virtual machine. Other examples include the Parrot virtual machine, which serves
as an abstraction layer for several interpreted languages, and the .NET Framework, which runs
on a VM called the Common Language Runtime.
A special case of process VMs are systems that abstract over the communication mechanisms of
a (potentially heterogeneous) computer cluster. Such a VM does not consist of a single process,
but one process per physical machine in the cluster. They are designed to ease the task of
programming parallel applications by letting the programmer focus on algorithms rather than the
communication mechanisms provided by the interconnect and the OS. They do not hide the fact
that communication takes place, and as such do not attempt to present the cluster as a single
parallel machine.

Unlike other process VMs, these systems do not provide a specific programming language, but
are embedded in an existing language; typically such a system provides bindings for several
languages (e.g., C and FORTRAN). Examples are PVM (Parallel Virtual Machine) and MPI
(Message Passing Interface). They are not strictly virtual machines, as the applications running
on top still have access to all OS services, and are therefore not confined to the system model
provided by the "VM".

Techniques

Emulation of the underlying raw hardware (native execution)


This approach is described as full virtualization of the hardware, and can be implemented using a
Type 1 or Type 2 hypervisor. (A Type 1 hypervisor runs directly on the hardware; a Type 2
hypervisor runs on another operating system, such as Linux). Each virtual machine can run any
operating system supported by the underlying hardware. Users can thus run two or more different
"guest" operating systems simultaneously, in separate "private" virtual computers.

The pioneer system using this concept was IBM's CP-40, the first (1967) version of
IBM's CP/CMS (1967–1972) and the precursor to IBM's VM family (1972–present). With the VM architecture, most users run a relatively simple interactive computing single-user operating system, CMS, as a "guest" on top of the VM control program (VM-CP). This approach kept the
CMS design simple, as if it were running alone; the control program quietly provides multitasking
and resource management services "behind the scenes". In addition to CMS, VM users can run
any of the other IBM operating systems, such as MVS or z/OS. z/VM is the current version of VM,
and is used to support hundreds or thousands of virtual machines on a given mainframe. Some
installations use Linux for zSeries to run Web servers, where Linux runs as the operating system
within many virtual machines.

Full virtualization is particularly helpful in operating system development, when experimental new
code can be run at the same time as older, more stable, versions, each in a separate virtual
machine. The process can even be recursive: IBM debugged new versions of its virtual machine
operating system, VM, in a virtual machine running under an older version of VM, and even used
this technique to simulate new hardware.[5]

The standard x86 processor architecture as used in modern PCs does not actually meet
the Popek and Goldberg virtualization requirements. Notably, there is no execution mode where
all sensitive machine instructions always trap, which would allow per-instruction virtualization.

Despite these limitations, several software packages have managed to provide virtualization on
the x86 architecture, even though dynamic recompilation of privileged code, as first implemented
by VMware, incurs some performance overhead as compared to a VM running on a natively
virtualizable architecture such as the IBM System/370 or Motorola MC68020. By now, several other software packages such as Virtual PC, VirtualBox, Parallels Workstation and Virtual Iron manage to implement virtualization on x86 hardware.

Intel and AMD have introduced features to their x86 processors to enable virtualization in
hardware.

Emulation of a non-native system


Virtual machines can also perform the role of an emulator, allowing software applications
and operating systems written for another computer processor architecture to be run.

Some virtual machines emulate hardware that only exists as a detailed specification. For
example:

 One of the first was the p-code machine specification, which allowed programmers to
write Pascal programs that would run on any computer running virtual machine software that
correctly implemented the specification.
 The specification of the Java virtual machine.
 The Common Language Infrastructure virtual machine at the heart of the Microsoft
.NET initiative.
 Open Firmware allows plug-in hardware to include boot-time diagnostics, configuration
code, and device drivers that will run on any kind of CPU.

This technique allows diverse computers to run any software written to that specification; only the
virtual machine software itself must be written separately for each type of computer on which it
runs.

Operating system-level virtualization

Operating System-level Virtualization is a server virtualization technology which virtualizes servers on an operating system (kernel) layer. It can be thought of as partitioning: a single physical server is sliced into multiple small partitions (otherwise called virtual environments (VE), virtual private servers (VPS), guests, zones, etc.); each such partition looks and feels like a real server, from the point of view of its users.

For example, Solaris Zones supports multiple guest OSes running under the same OS (such as Solaris 10). All guest OSes have to use the same kernel level and cannot run as different OS versions. Solaris native Zones also require that the host OS be a version of Solaris; OSes from other manufacturers are not supported.[citation needed] However, Solaris Branded Zones can be used to run other OS environments as zones.

Another example is System Workload Partitions (WPARs), introduced in the IBM AIX 6.1
operating system. System WPARs are software partitions running under one instance of the
global AIX OS environment.

The operating system level architecture has low overhead that helps to maximize efficient use of
server resources. The virtualization introduces only a negligible overhead and allows running
hundreds of virtual private servers on a single physical server. In contrast, approaches such
as full virtualization (like VMware) and paravirtualization (like Xen or UML) cannot achieve such
level of density, due to the overhead of running multiple kernels. On the other hand, operating system-level virtualization does not allow running different operating systems (i.e. different kernels), although different libraries, distributions, etc. are possible.

Hypervisor

In computing, a hypervisor, also called virtual machine monitor (VMM), is one of many virtualization techniques which allow multiple operating systems, termed guests, to run concurrently on a host computer, a feature called hardware virtualization. It is so named because it is conceptually one level higher than a supervisor. The hypervisor presents to the guest operating systems a virtual operating platform and monitors the execution of the guest operating systems. Multiple instances of a variety of operating systems may share the virtualized hardware resources. Hypervisors are installed on server hardware whose only task is to run guest operating systems. Non-hypervisor virtualization systems are used for similar tasks on dedicated server hardware, but also commonly on desktop, portable and even handheld computers.

Mobile virtualization

Mobile virtualization is a technology that enables multiple operating systems or virtual machines to run simultaneously on a mobile phone or connected wireless device. It uses a hypervisor to create secure separation between the underlying hardware and the software that runs on top of it. Virtualization technology has been used widely for many years in other fields such as data servers (storage virtualization) and personal computers (desktop virtualization).

In 2008, the mobile industry became interested in using the benefits of virtualization technology
for cell phones and other devices like tablets, netbooks and machine-to-machine (M2M) modules.
[1]
With mobile virtualization, mobile devices can be manufactured more cheaply through the re-use of software and hardware, which shortens development time. One such example is using
mobile virtualization to create low-cost Android smartphones.[2] Semiconductor vendors such
as ST-Ericsson have adopted mobile virtualization as part of their low-cost Android platform
strategy.[3]

Another use case for mobile virtualization is in the enterprise market. Today, many consumers
carry two mobile phones: one for business use and another for personal use. With mobile
virtualization, mobile phones can support multiple profiles on the same hardware, so that the
enterprise IT department can securely manage one profile (in a virtual machine), and the mobile
operator can separately manage the other profile (in a virtual machine)[4].

Mobile virtualization can support mobile devices using a single-core or a multi-core processor. In
September 2010, ARM announced[5] that it would support a mobile virtualization extension in its
ARM Cortex A-15 processor.

Database virtualization
Database virtualization is the decoupling of the database layer, which lies between the storage
and application layers within the application stack. Virtualization at the database layer allows
hardware resources to be extended to allow for better sharing of resources between applications
and users, as well as enable more scalable computing.

Concept

Data virtualization allows users to access various sources of disparately located data without
knowing or caring where the data actually resides (Broughton). Database virtualization allows the
use of multiple instances of a DBMS, or different DBMS platforms, simultaneously and in a
transparent fashion regardless of their physical location. These practices are often employed
in data mining and data warehousing systems. With advances in cloud computing and data virtualization technologies around 2009, companies started utilizing database virtualization to enable enterprise search, legacy-to-cloud migration, and secure data access for consumption applications like Business Intelligence and reporting. Queplix introduced in 2009 a concept of
data globalization, which in addition to abstracting of the physical data also globalizes common
data structures or objects across disperse data sources. For example, a virtual Customer record
can be created from multiple enterprise databases containing common data. Such new
implementation of data virtualization enables ubiquitous 360 degree view of the data across the
enterprise.

Data Management Challenges

In most computing applications, data is paired with a given application such that it is not feasible to make that data available to other applications. This has led to a problem in which disparate data silos cannot communicate with each other. Data fragmentation also comes from the many different data primitives used by applications such as SQL, LDAP and XML.

Virtual Data Partitioning

The act of partitioning data stores as a database grows has been in use for several decades.
There are two primary ways that data has been partitioned inside legacy data
management solutions:

I. Shared All Databases–an architecture that assumes all database cluster nodes share a
single partition. Inter-node communications is used to synchronize update activities
performed by different nodes on the cluster. Shared-all data management systems are
limited to single-digit node clusters.
II. Shared-Nothing Databases–an architecture in which all data is segregated to internally
managed partitions with clear, well-defined data location boundaries. Shared-nothing
databases require manual partition management.

In virtual partitioning, logical data is abstracted from physical data by autonomously creating and managing a large number of data partitions (100s to 1000s). Because they are autonomously maintained, the resources required to manage the partitions are minimal. This kind of massive partitioning results in:

 partitions that are small, efficiently managed and load balanced; and
 systems that do not require re-partitioning events to define additional partitions, even when hardware is changed

This virtual architecture converges the “shared-all” and “shared-nothing” architectures, allowing scalability through multiple data partitions and cross-partition querying and transaction processing without full partition scanning.
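The core routing idea behind virtual partitioning can be sketched as follows: keys are hashed into a large, fixed number of virtual partitions, and the partitions (not the keys) are assigned to physical nodes, so adding hardware only means reassigning partitions. This is a conceptual sketch, not any vendor's implementation; the counts and node names are invented.

    # Conceptual sketch of virtual partitioning: hash keys into many virtual
    # partitions, then map partitions onto a smaller set of physical nodes.
    import hashlib

    VIRTUAL_PARTITIONS = 1024
    NODES = ["db-node-a", "db-node-b", "db-node-c"]

    def partition_for(key: str) -> int:
        digest = hashlib.sha1(key.encode()).hexdigest()
        return int(digest, 16) % VIRTUAL_PARTITIONS

    def node_for(partition: int) -> str:
        # Simplest possible assignment; a real system keeps an explicit,
        # rebalanceable partition -> node map.
        return NODES[partition % len(NODES)]

    for customer_id in ("cust-1001", "cust-1002", "cust-1003"):
        p = partition_for(customer_id)
        print(customer_id, "-> partition", p, "on", node_for(p))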
Advantages

 Added flexibility and agility to the existing computing infrastructure
 Enhanced database performance
 Pooled and shared computing resources
 Simplified administration and management
 Increased fault tolerance
 Real-time back-up of important business data
 Lower total cost of ownership

Desktop virtualization

Desktop virtualization (sometimes called client virtualization[1]), as a concept, separates a personal computer desktop environment from a physical machine using a client–server model of computing. The model stores the resulting "virtualized" desktop on a remote central server, instead of on the local storage of a remote client; thus, when users work from their remote desktop client, all of the programs, applications, processes, and data used are kept and run centrally. This scenario allows users to access their desktops on any capable device, such as a traditional personal computer, notebook computer, smartphone, or thin client.

Virtual desktop infrastructure, sometimes referred to as virtual desktop interface[2] (VDI), is
the server computing model enabling desktop virtualization, encompassing the hardware and software
systems[3] required to support the virtualized environment.[4]

Technical definition
Desktop virtualization[5] involves encapsulating and delivering either access to an entire
information system environment or the environment itself to a remote client device. The client
device may use an entirely different hardware architecture than that used by the projected
desktop environment, and may also be based upon an entirely different operating system.

The desktop virtualization model allows the use of virtual machines to let multiple network
subscribers maintain individualized desktops on a single, centrally located computer or server.
The central machine may operate at a residence, business, or data center. Users may be
geographically scattered, but all may be connected to the central machine by a local area
network, a wide area network, or the public Internet.
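
A minimal sketch of this model follows; the host names, user names, and session identifiers are hypothetical. The point is that a central broker decides where each user's desktop runs, and the client device only needs a connection target.

    # Each user's desktop runs on a central server; a broker tells any client
    # device where to connect.
    DESKTOP_POOL = {
        "alice@example.com": ("vdi-host-01", "desktop-alice"),
        "bob@example.com":   ("vdi-host-02", "desktop-bob"),
    }

    def broker(user: str, client_device: str) -> str:
        """Return a connection target; the type of client device does not matter."""
        host, session = DESKTOP_POOL[user]
        return f"{client_device} -> connect to session '{session}' on {host}"

    print(broker("alice@example.com", "thin-client"))
    print(broker("alice@example.com", "smartphone"))   # same centrally kept desktop, different device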


Network virtualization
From Wikipedia, the free encyclopedia

In computing, Network Virtualization is the process of combining hardware and software network
resources and network functionality into a single, software-based administrative entity, a virtual
network. Network virtualization involves platform virtualization, often combined with resource
virtualization.

Network virtualization is categorized as either external, combining many networks, or parts of
networks, into a virtual unit, or internal, providing network-like functionality to the software containers
on a single system. Whether virtualization is internal or external depends on the implementation
provided by vendors that support the technology.

Components of a virtual network


Various equipment and software vendors offer network virtualization by combining any of the
following:

 Network hardware, such as switches and network adapters, also known as network
interface cards (NICs)
 Networks, such as virtual LANs (VLANs) and containers such as virtual machines (VMs)
and Solaris Containers
 Network storage devices
 Network media, such as Ethernet and Fibre Channel

The following is a survey of common network virtualization scenarios and examples of vendor
implementations of these scenarios.

External network virtualization


Some vendors offer external network virtualization, in which one or more local networks are
combined or subdivided into virtual networks, with the goal of improving the efficiency of a large
corporate network or data center. The key components of an external virtual network are the VLAN
and the network switch. Using VLAN and switch technology, the system administrator can
configure systems physically attached to the same local network into different virtual networks.
Conversely, VLAN technology enables the system administrator to combine systems on separate
local networks into a VLAN spanning the segments of a large corporate network.
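
A minimal sketch of the idea (the port names and VLAN IDs are hypothetical): systems attached to the same physical switch end up in different virtual networks simply because their ports carry different VLAN tags.

    # One physical switch carrying several virtual networks; frames are
    # forwarded only between ports tagged with the same VLAN.
    port_vlan = {
        "port-1": 10,   # finance
        "port-2": 20,   # engineering
        "port-3": 10,   # finance: same physical switch as port-2, different virtual network
    }

    def same_virtual_network(port_a: str, port_b: str) -> bool:
        return port_vlan[port_a] == port_vlan[port_b]

    print(same_virtual_network("port-1", "port-3"))  # True:  both tagged VLAN 10
    print(same_virtual_network("port-1", "port-2"))  # False: isolated despite sharing the switch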

Examples of external network virtualization


Cisco Systems Service-Oriented Network Architecture enables external network virtualization
through use of the network switch hardware and VLAN software. In this scenario, systems that
are physically connected to the same network switch can be configured as members of different
VLANs.

Hewlett Packard has implemented external network virtualization through its X Blade
Virtualization technologies. Chief among these is Virtual Connect, which allows system
administrators to combine local area networks and storage networks into a single wired and
administered network entity.

Internal network virtualization


Other vendors offer internal network virtualization. Here a single system is configured with
containers, such as the Xen domain, combined with hypervisor control programs or pseudo-
interfaces such as the VNIC, to create a “network in a box.” This solution improves the overall
efficiency of a single system by isolating applications to separate containers and/or pseudo-
interfaces.
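
The following minimal Python sketch (class and interface names are hypothetical) models the idea: two containers attached to a virtual switch through VNIC-like pseudo-interfaces exchange data entirely in memory, without touching the physical network.

    class VirtualSwitch:
        def __init__(self):
            self.vnics = {}                          # VNIC name -> receive callback

        def attach(self, name, on_receive):
            self.vnics[name] = on_receive

        def send(self, src, dst, payload):
            self.vnics[dst](src, payload)            # delivery happens entirely in memory

    vswitch = VirtualSwitch()
    vswitch.attach("vnic-web", lambda src, p: print(f"web received {p!r} from {src}"))
    vswitch.attach("vnic-db",  lambda src, p: print(f"db received {p!r} from {src}"))

    vswitch.send("vnic-web", "vnic-db", "SELECT 1")  # container-to-container, inside the box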

Examples of internal network virtualization


Citrix and Vyatta have built a Virtual Network Stack combining Vyatta's routing, firewall and IPsec
VPN functionality with Citrix Netscaler load balancer, Branch Repeater WAN optimization and
Access Gateway SSL VPN. The vNetworkStack project is defining entire virtualized network
architectures for branch offices, datacenters and cloud computing environments.

OpenSolaris network virtualization features (see OpenSolaris Network Virtualization and
Resource Control) enable the "network in the box" scenario. The features of the OpenSolaris
Crossbow Project provide the ability for containers such as zones or virtual machines on a single
system to share resources and exchange data. Major Crossbow features include VNIC pseudo-
interfaces and virtual switches, which emulate network connectivity by enabling containers to
exchange data without having to pass that data onto the external network.

Microsoft Virtual Server uses virtual machines such as those provided by Xen to create a network
in the box scenario for x86 systems. These containers can run different operating systems, such
as Windows or Linux, and be associated with or independent of a system's NIC.

Combined internal and external network virtualization


Some vendors offer both internal and external network virtualization software in their product line.
For example, VMware provides products that offer both internal and external network
virtualization. VMware's basic approach is network in the box on a single system, using virtual
machines that are managed by hypervisor software. VMware then provides its VMware
Infrastructure software to connect and combine networks in multiple boxes into an external
virtualization scenario.
