
openBench Labs

Infrastructure Virtualization

Analysis: Extend a Fibre Channel SAN and
Leverage Virtual Infrastructure via iSCSI

Author: Jack Fegreus, Ph.D.


Chief Technology Officer
openBench Labs
http://www.openBench.com
October 30, 2007

Jack Fegreus is Chief Technology Officer at openBench Labs, which
consults with a number of independent publications. He currently serves as
CTO of Strategic Communications, Editorial Director of Open magazine
and contributes to InfoStor and Virtualization Strategy. He has served as
Editor in Chief of Data Storage, BackOffice CTO, Client/Server Today, and
Digital Review. Previously Jack served as a consultant to Demax Software
and was IT Director at Riley Stoker Corp. Jack holds a Ph.D. in Mathematics
and worked on the application of computers to symbolic logic.
Table of Contents

Executive Summary

Assessment Scenario

Real Performance, Virtual Advantage

Concentrator Value

Executive Summary
“For cost-conscious IT decision makers, StoneFly Storage
Concentrators incorporate a virtualization engine for storage
provisioning and management in order to add another important
advantage: the ability to cut operating costs.”

For iSCSI storage networking over standard Gigabit Ethernet connections, the StoneFly™ Storage Concentrator™ i4000 is an appliance for providing storage provisioning using iSCSI over an Ethernet LAN. Via the StoneFusion OS, a specialized OS built on the Linux kernel, the StoneFly iSCSI Storage Concentrator integrates the power of an iSCSI router with extensive management services. As a result, this StoneFly appliance presents IT with an exceptional mechanism for extending the benefits of an existing Fibre Channel SAN to a much broader base of clients. Not the least of these extended clients are virtual machines (VMs) running in a VMware® Virtual Infrastructure (VI).

openBench Labs Test Briefing: StoneFly Storage Concentrator i4000

1) Logical volume management services: The StoneFly Storage Concentrator presents administrators with a uniform logical representation of physical storage resources to simplify operations.

2) Web-based GUI for storage provisioning: System administrators create iSCSI target volumes by allocating blocks of storage and authorize the use of those volumes by individual host systems via an HTML interface resident on the Storage Concentrator.

3) Higher I/O operations per second: Intelligent iSCSI storage packet routing processes data and commands concurrently, increasing system efficiency and storage throughput.

4) Volume copying: To support content distribution, such as the distribution of a VM from a template, a copy volume function makes an exact copy of a spanned volume, a mirror volume, or a Snapshot Live Volume.

5) Image mirroring: To support business continuity functions with no single point of failure, StoneFly Reflection provides administrators with an easy way to create, detach, reattach, and promote mirror images of volumes.

IT can quickly install one or more of the StoneFly Storage Concentrators utilizing existing Ethernet and FC infrastructure. Once installed, IT can leverage the concentrator's storage-provisioning engine to provide advanced storage management, business continuity, and disaster recovery functions. In particular, StoneFusion is quite robust in providing storage virtualization, both synchronous and asynchronous mirroring, snapshots, and active/active clustering. Moreover, IT can leverage the appliance's support for heterogeneous hosts and storage devices to increase the utilization of storage resources via storage pooling.

Maximizing storage resource utilization is extremely important for
CIOs, who are frequently under the gun to provide a more demonstrably
responsive IT infrastructure to meet rapidly accelerating changes in
business cycles. As a result of that pressure, IT must frequently deploy
new resources or repurpose existing resources. More importantly, it is not
the acquisition of resources so much as the management of those
resources that is the biggest driver of IT costs. The general rule of thumb
is that operating costs for managing storage on a per-gigabyte basis are
three to ten times greater than the capital costs of storage acquisition.
That's because provisioning and management tasks associated with
storage resources are highly labor-intensive and often burdened by bureaucratic inefficiencies.

With regard to IT management costs, the 2006 McKinsey survey of senior IT executives revealed that systems and storage virtualization had
become critically important to CIOs. What makes virtualization a top-of-
mind proposition for CIOs today is the ability of virtual devices to be
isolated from the constraints of physical limitations. By separating function
from physical implementation, IT can manage that resource as a generic
device based on its function. That means system administrators can narrow
their operations focus from a plethora of proprietary devices to a limited
number of generic resource pools.

That's why system and storage virtualization share the spotlight in the
McKinsey CIO survey. What's more, deriving the maximal benefits from
system virtualization in a VI environment requires storage virtualization as
a necessary prerequisite. The issues of availability and mobility of both a VM and its data play an important role in such daily operational tasks as load balancing and system testing. Not surprisingly, VM availability and mobility really rise to the forefront in a disaster recovery scenario. The image of files stranded on storage directly attached to a nonfunctional server makes a bad poster for high availability.

SAN technology has long been the premier means of consolidating storage resources and streamlining management in large data centers. Nonetheless, storage virtualization for physical servers and commercial operating systems, such as Microsoft® Windows and Linux®, is burdened with complexity because most commercial operating systems assume exclusive ownership of storage volumes.

Storage virtualization in a VI environment, however, is a much simpler proposition as the file system for VMware ESX, dubbed VMFS, eliminates
the burning issue of exclusive volume ownership. By handling distributed
file locking between systems, VMFS renders the issue of volume ownership
moot. That opens the door to using iSCSI to extend the benefits of physical
and functional separation via a cost-effective lightweight SAN. As a result,
iSCSI has become de rigueur in large datacenters for ESX servers.

More importantly for cost-conscious IT decision makers, StoneFly Storage Concentrators incorporate a storage virtualization engine for storage provisioning and management in order to add another important
advantage: the ability to cut operating costs. System administrators can use
the StoneFusion management GUI to perform critical storage management
tasks from virtualization to the creation of volume copies and snapshots
and even the configuration of synchronous and asynchronous mirrors. As a
result, a system administrator servicing an iSCSI client can directly handle
the labor-intensive storage management tasks that would normally require
coordination with a storage administrator.

Assessment Scenario
“By performing all partitioning and management functions for virtual
storage volumes on the iSCSI concentrator and not on the FC array,
openBench Labs was able to leverage key capabilities of StoneFusion to
reduce operating costs by enabling system administrators to carry out
tasks that normally require co-ordination with a storage administrator.”

STAND-ALONE PHYSICAL SERVER TESTING

To assess the StoneFly Storage Concentrator i4000, openBench Labs set up two test scenarios. In the initial scenario, we concentrated on determining performance parameters for traditional physical servers. In this scenario, we ran Windows Server® 2003 SP2 and Novell SUSE® Linux Enterprise Server (SLES) 10 SP1 on an HP ProLiant ML350 G3 server. This server sported a 2.4GHz Xeon processor, 2GB of RAM, and an embedded Gigabit Ethernet TOE. We also installed a QLogic 4050 hardware iSCSI HBA.

In our second scenario, we used our initial test results as a template for server consolidation. Utilizing two quad-processor servers running ESX 3.0.1, openBench Labs tested iSCSI performance on an ESX host server in supporting a VM datastore hosting a virtual work volume. These tests were done in the context of replacing an HP ProLiant ML350 G3 server with a VM. In addition, we tested the volume copy and advanced image management functionality of StoneFusion in our VI environment. In those tests, we assessed the StoneFusion functions as a means of enhancing the distribution of VM operating systems from templates and bolstering business continuity for disaster recovery.

Along with our StoneFly i4000 iSCSI Storage Concentrator on the iSCSI side of our SAN fabric, we employed a NETGEAR GSM7324 Layer 3 managed Gigabit Ethernet switch and several QLogic 4050 iSCSI HBAs. We employed the QLogic iSCSI HBA to maximize throughput from the StoneFly i4000 by eliminating all of the overhead associated with iSCSI packet processing.

The StoneFusion management GUI provides a "Discover" button, which is used to launch a process that automatically discovers new storage resources. What's more, StoneFusion also automatically discovers any HTML-based management utilities. That provided us with the ability to bring up StorView, the storage management GUI for the nStor FC-FC array, directly from within StoneFusion.

On the Fibre Channel side of our fabric, we utilized a QLogic SANbox 9200 switch, an nStor 4540 storage array, and an IBM DS4100 storage array. We chose the IBM® TotalStorage DS4100 as the primary array for providing backend storage for two reasons: its large storage capacity and its robust I/O caching capability.

To support numerous iSCSI client systems, storage capacity is often a primary concern when configuring an iSCSI fabric. Using low-cost, high-capacity SATA drives, we were able to configure our IBM DS4100 array with 3.2TB of storage. From that pool, we assigned 1.6TB to the StoneFly i4000 in bulk via a single LUN.


For our tests, however, rapid response to excessively high numbers of I/O operations per second (IOPS) trumps capacity. That's because our oblLoad benchmark generates high numbers of IOPS to stress all the components of a SAN fabric. With respect to our analysis, the IBM DS4100 provides an excellent balance of capacity with I/O responsiveness. For I/O performance, our DS4100 sports two independent controllers, each of which features a highly configurable 1GB cache and dual 2Gbit FC ports.

For a uniform test environment, we configured all volumes that would be used in benchmark tests using 1.6TB of storage imported from an IBM DS4100 array. In particular, we consumed 750GB in creating a number of 25GB partitions to support VM operating systems and 50GB partitions to support user data for applications on both VM and physical systems. More importantly, we could now use all of the advanced provisioning features that are part of the StoneFusion OS. This proved to be extremely important when working with VMs.

By performing all partitioning and management functions for virtual storage volumes on the iSCSI concentrator and not on the FC array, openBench Labs was able to leverage key capabilities of StoneFusion to reduce operating costs by enabling system administrators to carry out tasks that normally require co-ordination with a storage administrator. In particular, we were able to consolidate storage from multiple FC arrays into a pool that could be managed from the StoneFly i4000. More importantly, we were able to configure logical volumes—dubbed resource targets in the iSCSI vernacular—and export them to client systems without any regard for the sources of the blocks within the pool.

To maintain consistency in benchmark performance, which is highly dependent on the disk drive characteristics, controller caching, and RAID configuration associated with the underlying storage array, openBench Labs created all volumes that would be used for performance benchmarking explicitly with disk blocks imported via the 1.6TB LUN from the DS4100 array.

Using the StoneFusion management GUI, we provisioned logical volumes for benchmarking manually. In this way, we had complete control over the source of disk blocks from the resource pool of FC-based storage that had been created on the StoneFly i4000.

BENCHMARK BASICS
As with all other storage transport protocols, iSCSI performance has two
dimensions: data throughput, which is typically measured in MB per
second, and data accessibility, which is measured in I/O operations
completed per second (IOPS). To assess overall iSCSI performance, we
ran our oblDisk and oblLoad benchmarks, which measure throughput
and accessibility respectively.

The oblDisk benchmark simulates high-end multimedia I/O operations, especially video-related I/O, by reading data sequentially using a range of I/O request sizes from 4KB to 128KB. In contrast, the oblLoad benchmark simulates database access in a high-volume, transaction-processing environment using small I/O requests, typically 8KB, which are random within defined localities. In particular, oblLoad measures the total number of IOPS that can be completed with the constraint that average response time never exceeds 100ms. In so doing, oblLoad generates much more overhead for a host system than oblDisk.
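The two metrics are tied together by the I/O request size. As a rough illustration (simple arithmetic, not part of the benchmark suite itself), the Python sketch below converts between IOPS and MB/s for a given request size; the helper names and example figures are illustrative only.

# Hedged sketch: relating IOPS, request size, and data throughput.
# These helpers are illustrative only; oblLoad and oblDisk report their
# own measurements and do not expose any such code.

def throughput_mb_per_s(iops: float, request_kb: float) -> float:
    """MB/s moved when completing `iops` requests of `request_kb` KB each."""
    return iops * request_kb / 1024.0

def iops_for_throughput(mb_per_s: float, request_kb: float) -> float:
    """IOPS needed to sustain `mb_per_s` using requests of `request_kb` KB."""
    return mb_per_s * 1024.0 / request_kb

if __name__ == "__main__":
    # Small 8KB transaction-style requests stress IOPS, not bandwidth.
    print(throughput_mb_per_s(2_000, 8))     # ~15.6 MB/s
    # Large 128KB streaming reads hit a GbE payload ceiling (~125 MB/s) first.
    print(iops_for_throughput(125, 128))     # ~1,000 IOPS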

As a system running oblLoad generates greater numbers of IOPS, a storage system that can keep pace fulfilling those requests will in turn create more overhead on the requesting system, which must process more network packets and SCSI commands. To eliminate this overhead from our host server, we installed a QLogic iSCSI HBA for use in physical server tests. In addition to the TCP packet processing that a TOE offloads, the QLogic HBA also handles the processing of the embedded SCSI packets.

The oblLoad benchmark launches an increasing number of disk I/O daemons that initiate a series of read/write requests—typically 8KB in size. One portion of the requests is directed at a fixed hot spot representing the index tables of a database. The remaining portion is randomly distributed over the entire volume.
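To make that access pattern concrete, the following is a minimal, illustrative Python sketch of a single oblLoad-style daemon. It is not the openBench Labs tool; it simply mixes hot-spot and uniformly random 8KB reads against a raw volume and reports whether average response time stayed under the 100ms ceiling. The device path and the hot-spot fraction are arbitrary placeholders.

# Illustrative oblLoad-style worker (not the actual openBench Labs benchmark).
# Assumes a readable block device or large file at DEV; the 10% hot spot at
# the start of the volume stands in for database index tables.
import os
import random
import time

DEV = "/dev/sdb"          # hypothetical test volume; adjust before running
REQUEST_SIZE = 8 * 1024   # 8KB requests, as in the oblLoad runs described here
HOT_FRACTION = 0.10       # assumed share of requests aimed at the fixed hot spot
LATENCY_CEILING = 0.100   # oblLoad's 100ms average response-time constraint

def run_daemon(n_requests: int = 1000) -> None:
    fd = os.open(DEV, os.O_RDONLY)
    size = os.lseek(fd, 0, os.SEEK_END)   # volume size in bytes
    hot_end = size // 10                  # hot spot: first 10% of the volume
    latencies = []
    for _ in range(n_requests):
        if random.random() < HOT_FRACTION:
            offset = random.randrange(0, hot_end, REQUEST_SIZE)
        else:
            offset = random.randrange(0, size - REQUEST_SIZE, REQUEST_SIZE)
        start = time.perf_counter()
        os.pread(fd, REQUEST_SIZE, offset)
        latencies.append(time.perf_counter() - start)
    os.close(fd)
    avg = sum(latencies) / len(latencies)
    iops = len(latencies) / sum(latencies)
    verdict = "within" if avg <= LATENCY_CEILING else "over"
    print(f"{iops:.0f} IOPS, {avg * 1000:.1f}ms average latency ({verdict} the 100ms ceiling)")

if __name__ == "__main__":
    run_daemon()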

That hot spot provides a means to test the caching capabilities of the
underlying storage system. As the number of disk daemons increases, so too
should the effectiveness of the array controller's caching increase within the
hot spot. As earlier noted, the IBM DS4100 storage system's robust ability to
support the dynamic tuning of cache performance is precisely why we
chose that array to support our tests.

VIRTUAL CONSOLIDATION
The standalone tests on the HP ProLiant ML350 G3 servers also provided an interesting case study for server consolidation through system, storage and network virtualization. Virtualization extends the power of IT to innovate by providing the means to leverage logical representations of resources. Whether through aggregation or deconstruction, virtualized resources are not restricted by physical configuration, implementation, or geographic location: That makes a virtual representation more powerful and able to provide greater benefits than the original physical configuration. When maximally exploited by IT, virtualization becomes a platform for innovation for which the benefits move far beyond basic reductions in the total cost of ownership (TCO).

Scattered application servers and data storage systems often reduce administrator productivity and increase vulnerability. In responding to
those issues, many sites began consolidating physical servers into farms
of 1U and 2U servers in rows of racks. Nonetheless, that wave of
consolidation did little to help improve resource utilization and often
made matters worse by creating serious environmental issues centered on
power and cooling.

As a result, IT is moving away from physical server consolidation and toward virtual server consolidation. With 4 to 8 virtual servers running on a single physical server, IT can centralize resources, address growing datacenter environmental issues, and make dramatic improvements in resource utilization. What's more, system virtualization compounds the
opportunities to leverage both the operational and performance
efficiencies of a SAN.

To assess the performance of the StoneFly i4000 in a VI environment, openBench Labs set up two quad-processor servers: an HP ProLiant DL580 G3 and a Dell 1900. Both servers ran VMware® ESX v3.0.1 and hosted from one-to-four simultaneous VMs that were running either Windows Server 2003 SP2 or SUSE Linux Enterprise Server 10 SP1.

For IT to get the maximum value from a VM, any constraints that bind that VM to a physical server should be avoided. First and foremost, there will be the need to handle load balancing and failover of virtual machines. In addition, there will be the need to move VM configurations in and out of development, test, and production environments. What's more, VMotion™ now makes it easy to move virtual machines dynamically among host servers running ESX 3.


That means all virtual machines on all physical hosts must be capable of accessing the same storage resources, and that makes a SAN essential. Nonetheless, it is the advanced capabilities of VMware to leverage SAN storage that makes a lightweight iSCSI SAN an almost defining characteristic for VMware sites.

StoneFusion uses the unique ID of each iSCSI initiator on a client host as the primary means to control access to virtual volumes. With a QLogic iSCSI HBA installed on our HP ProLiant DL580 server, the VMware software initiator and the iSCSI HBA appeared as separately addressable hosts.

On each server running ESX, we set up a virtual switch-based LAN using two gigabit TOEs, which were teamed by ESX. Similarly, the
StoneFly i4000 automatically teamed its two TOEs.

On the ESX server's virtual LAN, we created a VMware kernel port for
the VMware software initiator to enable iSCSI connections. In addition,
we also installed a QLogic iSCSI HBA on each ESX server. Within the VI
console, the iSCSI HBA immediately appeared as an iSCSI-based Storage
Adapter. Through either the hardware HBA or the software initiator, ESX
handled every iSCSI connection.

The StoneFly i4000 also distinguished each of the iSCSI initiators on each of the ESX servers as separate hosts. As a result, we were able to use the StoneFly management GUI to assign read-write access rights for volumes explicitly to either the ESX server's software initiator or the QLogic iSCSI HBA. In turn, the VI Client properly displayed every iSCSI target exported from the StoneFly i4000 as connected to the appropriate iSCSI host initiator. What's more, as we created more volumes on the StoneFly i4000 and granted access to an initiator associated with a particular ESX server, a rescan of storage adapters on the VI Client would make them visible.

When authorizing access to a volume, the Challenge-Handshake Authentication Protocol (CHAP) can be invoked in conjunction with the iSCSI initiator ID for added security. For our volume Win02, which contained a VM running Windows Server 2003, we granted full access to both of our ESX servers via their VMware iSCSI initiator. The VMFS DLM ensured that only one server at a time could open and start the Win02 VM image.

On host servers running VMware ESX 3, physical resources are aggregated and presented to system administrators as shared pools of uniform devices. All of the target iSCSI volumes exported to either the software iSCSI initiator or the iSCSI HBA were pooled by the ESX server and presented to the virtual machines as direct-attached SCSI disks.

Whether connected to the ESX server via the VMware initiator or the QLogic iSCSI HBA, all storage resources, such as our VM-Win02 volume, were aggregated into a virtual storage pool under ESX and presented to the VMs as direct attached SCSI disks.

More importantly, storage virtualization in a VMware Virtual Infrastructure (VI) environment is a far less complex proposition than storage virtualization in an FC SAN using physical systems. Commercial operating systems, such as Microsoft® Windows and Linux®, assume exclusive ownership of their storage volumes. As a result, neither Windows nor Linux incorporates a distributed file locking mechanism in its file system. A distributed lock manager (DLM) is essential if multiple systems are to maintain a consistent view of a volume's contents. Without a DLM, virtualization of volume ownership is the only means of preventing the corruption of disk volumes. That has made SAN management the exclusive domain of storage administrators at most enterprise-class sites working with physical systems.

On the other hand, the file system for ESX, dubbed VMFS, has a built-in mechanism to handle distributed file locking. Thanks to that mechanism, exclusive volume ownership is not a burning issue in a VI environment. What's more, VMFS avoids the massive overhead that a DLM typically imposes: VMFS simply treats each disk volume as a single-file image in a way that is loosely analogous to an ISO-formatted CD-ROM. When a VM's OS mounts a disk, it opens a disk-image file; VMFS locks that file; and the VM's OS gains exclusive ownership of the disk volume.

With the issue of volume ownership moot, iSCSI becomes a perfect way to extend the benefits of physical and functional separation via a more cost-effective, easy-to-manage, lightweight IP SAN fabric. That has made iSCSI de rigueur for ESX servers in large datacenters.

By using the StoneFly i4000 Storage Concentrator running the StoneFusion OS to anchor an iSCSI fabric, IT can limit the involvement of storage administrators with the iSCSI fabric. A storage administrator will only be needed to provision the iSCSI concentrator with bulk storage from an FC SAN array. System administrators can easily manage the storage provisioning needs of their iSCSI client systems, including ESX servers, by invoking the storage provisioning functions within StoneFusion.

Using StoneFusion's management GUI, openBench Labs was able to invoke a rich collection of storage management utilities. Among these utilities are a number of high-availability tools to create copies and maintain mirror images of volumes. Within a small VI environment, system administrators can also utilize these tools in conjunction with the basic VI client software to provide simple VM template management capabilities that would normally require an additional server running the VMware Virtual Center.

Real Performance, Virtual Advantage


“With both physical and virtual systems sustaining 10,000 IOPS on
loads using 8KB data packets, the StoneFly i4000 provided
exceptional performance in routing FC data traffic over a 1-Gb Ethernet
fabric via iSCSI.”

PHYSICAL BASELINE
We began testing on an HP ProLiant ML350 G3 server running Windows Server 2003. Thanks to Microsoft's freely available software initiator, systems running a Windows OS have become the premier platform for iSCSI. Though far less prevalent than the Microsoft iSCSI initiator, the Microsoft Internet Storage Name Service (iSNS) is also supported by the StoneFusion OS. By registering with iSNS, the StoneFly i4000 ensures automatic discovery by the Microsoft initiator.

The QLogic iSCSI HBA also supports iSNS, so it too will discover the StoneFly i4000 automatically. What's more, the QLogic iSCSI HBA offloads all iSCSI packet processing—a TOE only offloads the processing of the TCP packets that encapsulate the SCSI command packets—and thereby provides a distinct edge in processing IOPS. This is very significant for maximizing performance of the StoneFly i4000, which was able to sustain a load of 10,000 IOPS with 8KB data requests.

[Chart: oblLoad v2.0, I/Os per second vs. number of daemon processes. Configuration: StoneFly i4000 iSCSI Concentrator, IBM DS4100 Storage Array, HP ML350 G3 server, Windows Server 2003 SP2. Series: QLA4050 iSCSI HBA; MS Initiator and Ethernet TOE.]
IOPS throughput patterns for oblLoad using the QLogic HBA and the server's embedded TOE were remarkably similar. Absolute performance measured in total IOPS, however, was distinctly higher for the QLogic iSCSI HBA. This was especially true for small numbers of daemons, which is the time that the host is most sensitive to changes in overhead. With more than 12 daemons, the difference in the number of IOPS completed varied by less than 2%.
On Linux™, the push for iSCSI has lagged behind Windows. The merging of the Linux-iSCSI project into the Open-iSCSI project in 2005 has helped to quicken the pace of adoption by providing Linux distributions with a universal iSCSI option to include within their packages.

The new Open-iSCSI package is partitioned into user and kernel components. In user space, command line interface (CLI) modules handle configuration and control, which is still a very manual task that requires each iSCSI target portal to be explicitly defined. More importantly, the developers classify the current Open-iSCSI release as "semi-stable." As a result, the initiator remains as an optional component in most Linux server distributions.

[Chart: oblLoad v2.0, I/Os per second vs. number of daemon processes. Configuration: StoneFly i4000 iSCSI Concentrator, IBM DS4100 Storage Array, HP ML350 G3 server, SUSE Linux Enterprise Server 10 SP1. Series: QLA4050 iSCSI HBA (8KB I/O); Open-iSCSI Initiator and Ethernet TOE (8KB I/O); QLA4050 iSCSI HBA (64KB I/O).]
We observed a very different pattern in IOPS performance on SLES. Using the Open-iSCSI initiator, IOPS performance rose steadily as the number of oblLoad daemons rose to six. In contrast, IOPS performance with the QLogic iSCSI HBA continued to rise beyond 6 daemons as performance diverged dramatically. More importantly, IOPS performance is invariant with the size of I/O requests because of the way the Linux kernel bundles I/O. Using large 64KB I/O requests, IOPS performance was little different from 8KB I/O. The implications for applications that rely on large-block I/O, such as OLAP, are significant.

SLES 10 attempts to improve the usability of the Open-iSCSI initiator by adding a GUI within its YAST system management framework to simplify iSCSI resource configuration for system administrators. Every time we tried
iSCSI resource configuration for system administrators. Every time we tried
to configure the initiator via YAST, however, our server crashed. On the
other hand, the Open-iSCSI CLI modules worked perfectly and made short
work of connecting the server to the StoneFly i4000.
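For readers unfamiliar with that CLI work, the short sketch below drives the standard Open-iSCSI iscsiadm utility from Python to discover and log in to a target portal. The portal address and target name are placeholders rather than lab values, and exact iscsiadm syntax can vary by release, so treat this as an illustration rather than a record of the commands used in these tests.

# Hedged sketch: attaching a Linux host to an iSCSI target with the
# Open-iSCSI CLI (iscsiadm). The portal and IQN below are placeholders,
# not the addresses used in the openBench Labs configuration.
import subprocess

PORTAL = "192.168.1.50:3260"                    # hypothetical StoneFly i4000 data port
TARGET = "iqn.2007-10.com.example:lab-volume"   # hypothetical iSCSI target name

def run(cmd):
    """Run a command, echo it, and return its standard output."""
    print("+", " ".join(cmd))
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# 1) Ask the portal which targets it exports (SendTargets discovery).
print(run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL]))

# 2) Log in to the chosen target; its LUN then appears as a local SCSI disk.
run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"])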

Nonetheless, peak IOPS performance for iSCSI on SLES 10—even with the QLogic iSCSI HBA—trailed peak iSCSI performance on Windows Server 2003 by an order of magnitude. This is a function of the way Linux bundles I/O and has nothing to do with the StoneFly i4000. It is, however, a condition that the StoneFly i4000 can exploit.

The StoneFusion OS is tuned for high data throughput. As a result, we were able to run oblLoad with 64KB I/O requests, which can be found in multi-dimensional Business Intelligence application scenarios, and measure the same level of IOPS while moving 8 times more data.

[Chart: oblDisk v3.0, throughput in MB per second vs. unbuffered sequential read size from 0 to 128KB. Configuration: StoneFly i4000 iSCSI Concentrator, IBM DS4100 Storage Array, HP ML350 G3 server. Series: Windows Server 2003 with QLA4050 iSCSI HBA; SUSE Linux Enterprise Server 10 with QLA4050 iSCSI HBA.]
For sequential I/O, the bundling of requests by Linux can be leveraged into a distinct advantage using the StoneFly i4000, which can stream data at wire speed. Using the oblDisk benchmark to read very large files sequentially, the only factor that limited throughput was the client's ability to accept data coming from the StoneFly i4000.

The ability to deliver high data throughput levels is particularly important in supporting high-end multimedia applications, especially
when dealing with streaming video. Both Linux and Windows client
systems were able to stream large multi-gigabyte files sequentially at
wire—1Gbps—speed through the StoneFly i4000.

VIRTUALIZATION AND SAN SYMBIOSIS

In the final phase of testing of the StoneFly i4000, openBench Labs utilized two quad-processor servers to run a VMware Infrastructure 3
environment. This advanced third-generation platform virtualizes an entire IT infrastructure including servers, storage, and networks. For the
openBench Labs test scenario, we focused our attention on the problem of
consolidating four servers along the lines of our HP ProLiant ML350 G3
system on a single quad-processor server, such as an HP ProLiant DL580
G3 or a Dell PowerEdge 1900.

The VMware ESX Server provides two ways to make virtual storage
volumes accessible to virtual machines. The first way is to use a VMFS
datastore to encapsulate a VM's disk, in a way that is analogous to a
CD-ROM image file. The VM disk is a single large VMFS file that is
presented to the VM's OS as a SCSI disk drive, which contains a file
system with many individual files. In this scheme, VMFS provides a
distributed lock manager (DLM) for the VMFS volume and its content of
VM disk images. With a DLM, a datastore can contain multiple VM disk
files that are accessed by multiple ESX Servers.
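The effect of that locking scheme can be pictured with a small, purely illustrative sketch. It is not VMware code and says nothing about how VMFS implements its on-disk locks; it only mirrors the observable behavior of granting one opener at a time exclusive use of a disk-image file. The image path is a placeholder.

# Analogy only: model "one host owns the disk image at a time" with an
# advisory file lock. VMFS's real distributed, on-disk locking is far more
# involved; this simply mirrors the behavior described above.
import fcntl
import os

IMAGE = "/tmp/win02-analogy.vmdk"   # placeholder path, not a real datastore file

class ExclusiveImage:
    """Open a disk-image file and hold an exclusive, non-blocking lock on it."""

    def __init__(self, path):
        self.fd = os.open(path, os.O_RDWR | os.O_CREAT)
        try:
            # Fails immediately if another process already "owns" the image.
            fcntl.flock(self.fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except BlockingIOError:
            os.close(self.fd)
            raise RuntimeError(f"{path} is already in use by another host or VM")

    def close(self):
        fcntl.flock(self.fd, fcntl.LOCK_UN)
        os.close(self.fd)

# The first opener wins; any second attempt raises until the first calls close().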

The OS of the VM issues I/O commands to what appears to be a local SCSI drive connected to a local SCSI controller. In practice, the block
read/write requests are passed to the VMkernel where a physical device
driver, such as the driver for the QLogic iSCSI HBA, forwards the
read/write requests and directs them to the actual physical hardware device.

That scheme of employing a DLM can put I/O loads on a VMFS-formatted volume that are significantly higher than the loads on a volume in a single-host, single-operating-system environment. To meet those loads, VMFS has been tuned as a high-performance file system for storing large, monolithic virtual disk files. Tuning an array for a particular application becomes irrelevant when using a VM disk file. When a VM's files are encapsulated in a specially formatted disk file, the fine-grain storage tuning associated with a physical machine loses its relevance. The effectiveness of the VMFS tuning scheme would immediately become evident when we tested IOPS performance on a Linux VM.

The alternative to VMFS is to use a raw LUN formatted with a native file system associated with the virtual machine (VM). Using a raw device
as though it were a VMFS-hosted file requires a VMFS-hosted pointer file
to redirect I/O requests from a VMFS volume to the raw LUN. This
scheme is dubbed Raw Device Mapping (RDM). What drives the RDM
scenario is the need to share data with external physical machines.

While openBench Labs ran functionality tests of RDM volumes, we chose to utilize unique VMware datastores to encapsulate single virtual volumes in our benchmark tests. Given that the default block size for VMFS is 1MB, we followed two fundamental rules of thumb in provisioning backend storage for the StoneFly i4000:
1. Put as many spindles into the underlying FC array as possible.
2. Make the FC array's stripe size as large as possible.

In particular, we utilized 7-drive arrays with a stripe size of 256KB—the default for high-end UNIX® systems—in the IBM DS4100 storage system.
With our storage system sporting two independent disk controllers with a
1-GB cache, we garnered a significant boost in our IOPS performance tests
by exploiting read-ahead track caching. As a result, the issues at hand for
performance became the ability for the StoneFly i4000 to pass that backend
FC throughput forward over iSCSI and the ability of the client’s hardware
and software initiators to keep pace with the storage concentrator.

BLURRING REAL AND VIRTUAL DIFFERENCES

In provisioning 50GB logical drives for testing, ESX would create a
sparse file within the specified VMFS volume. Once the virtual machine
environment was provisioned, we repeated the stand-alone server tests for
each OS with a single virtual machine running on the server. To measure
scalability, openBench Labs repeated the tests on multiple virtual machines.

We began testing iSCSI performance on a VMware ESX Server with virtual machines running Windows Server 2003 SP2. With a 50GB datastore mounted via the QLogic HBA, the number of IOPS completed by oblLoad was virtually identical to the number completed on our base HP ProLiant ML350 server system running Windows Server 2003 SP2.

[Chart: oblLoad v2.0, I/Os per second vs. number of daemon processes. Configuration: StoneFly i4000 iSCSI Concentrator, IBM DS4100 Storage Array, HP ML350 G3 server, VMware ESX Server, virtual machine running Windows Server 2003 SP2. Series: QLA4050 HBA on Windows Server 2003; QLA4050 HBA on VMware ESX Server; VMware Initiator and Ethernet TOE.]
In terms of IOPS performance, utilizing the QLogic iSCSI HBA on ESX and then virtualizing the volume as a direct attached SCSI drive provided the same level of performance as measured using the iSCSI HBA with a physical Windows server. Without the iSCSI HBA, performance did not reflect the boost in performance that the StoneFly i4000 was able to pass on from the IBM DS4100 array.

By far, the most extraordinary results occurred when we ran SUSE Linux Enterprise Server (SLES) 10 SP1 within a VM. In this case, IOPS performance improved with both the QLogic iSCSI HBA and with the VMware iSCSI initiator in conjunction with the Ethernet TOE as compared to running a physical server.

With a VM running SLES, however, that boost to VMFS performance propelled IOPS well beyond what we had measured with a physical machine. While the basic pattern for IOPS throughput remained the same, the net result was a throughput level showing an absolute increase in performance that was often on the order of 200-to-250% for any given number of oblLoad disk daemons.

[Chart: oblLoad v2.0, I/Os per second vs. number of daemon processes. Configuration: StoneFly i4000 iSCSI Concentrator, IBM DS4100 Storage Array, HP ML350 G3 server, VMware ESX Server, virtual machine running SUSE Linux Enterprise Server 10 SP1. Series: QLA4050 HBA on SLES 10; QLA4050 HBA on VMware ESX Server; VMware Initiator and Ethernet TOE.]
Using a ReiserFS-formatted data volume contained in a VMFS datastore, IOPS performance on a VM outperformed a physical server even when ESX utilized its software initiator and the physical server employed a hardware iSCSI HBA. In particular, IOPS performance rose by upwards of 200% over a physical server when we used the VMware iSCSI initiator. The jump in performance was on the order of 300% using the QLogic iSCSI HBA on the ESX server.

With both physical and virtual systems sustaining 10,000 IOPS on loads using 8KB data packets, the StoneFly i4000 provided exceptional performance in routing FC data traffic over a 1-Gb Ethernet fabric via iSCSI. Nonetheless, it was in the added provisioning features of StoneFusion that the StoneFly i4000 made the biggest impact in managing a VI environment.
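A quick back-of-the-envelope check (simple arithmetic, not benchmark output) shows why an 8KB load at that IOPS rate still fits comfortably within a single Gigabit Ethernet link.

# Rough check: bandwidth implied by sustaining 10,000 IOPS with 8KB requests,
# compared with the payload ceiling of one 1Gbps Ethernet link (~125 MB/s
# before protocol overhead).
iops = 10_000
request_kb = 8
gbe_payload_mb_s = 1_000 / 8             # 1Gbps expressed in MB/s

implied_mb_s = iops * request_kb / 1024  # ~78 MB/s
headroom = gbe_payload_mb_s - implied_mb_s

print(f"10,000 x 8KB requests per second moves about {implied_mb_s:.0f} MB/s,")
print(f"leaving roughly {headroom:.0f} MB/s of headroom on a GbE link")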

In a VI environment, one of the key efficiencies for IT operations is the notion of a template installation. Since the prime goal of systems
virtualization is to maximize resource utilization, multiple VMs will be
running on a host server at any instance in time. To avoid the overhead of
installing multiple instances of an OS, VMware supports the concept of
creating an OS installation template and then cloning that template the
next time that the OS is to be installed. In a VI environment, the creation
of templates is handled by the VMware Virtual Center software, which
requires a separate system running Windows Server along with a
commercial database, such as SQL Server or Oracle, to keep track of all
disk images.


Similar functionality can be leveraged using the StoneFly i4000 Storage Concentrator through the StoneFusion image management functions for volumes. While best practices call for maintaining offline template volumes for this task, we were able to use any volume at any time, provided that we were able to take that volume offline.

To clone a volume image, we first needed to shut down all VMs running on that virtual volume and close any iSCSI sessions that were open for that volume with any ESX servers. Once this was done, we could begin the rather simple process of adding a mirror image to the volume, which is normally done to provide for high availability in either a disaster/recovery or a backup scenario.

Adding a mirror image to a volume is a relatively trivial task within the StoneFusion Management GUI. To create a clone of our VM-Win02 volume, we only needed to identify the volume and determine the number of mirrors to create. Once that was done, it was just as easy to detach the newly created mirror and promote the new image as VM-Win03 in order to create a new independent, stand-alone volume.

The creation of a mirror is a remarkably fast and efficient process under StoneFusion. We monitored the FC switch port that was connected to the StoneFly i4000 during the process of creating a mirror. Read and write data throughput remained fully synchronized during the process as reads and writes took place in lockstep at a pace of 45MB per second each, which resulted in a full duplex I/O throughput rate of 95MB per second. At that rate, the process of generating an OS clone complete with any additional software applications was merely a matter of minutes.

Monitoring the backend FC SAN traffic of the StoneFly i4000 at the QLogic SANbox switch revealed the efficiency of StoneFusion when creating a mirror for our VM-Win02 volume. Full duplex reads and writes were running at 95MB per second. Even more remarkable was our inability to discern any imbalance or difference between read and write traffic coming to and from the i4000 Storage Concentrator.
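As a rough sanity check on that claim (simple arithmetic based on the 25GB VM partitions described earlier and the observed per-direction copy rate), mirroring a 25GB OS volume at 45MB per second completes in under ten minutes.

# Rough estimate (not a reported measurement): time to mirror one of the
# 25GB VM partitions at the observed ~45MB/s per-direction copy rate.
volume_gb = 25
copy_rate_mb_s = 45

seconds = volume_gb * 1024 / copy_rate_mb_s
print(f"~{seconds / 60:.1f} minutes to clone a {volume_gb}GB volume")   # about 9.5 minutes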

Once the StoneFly image creation process had completed, we simply authorized access to the new volume for our ESX servers. Next, by initiating
a re-scan of the appropriate storage adapter on each ESX server, the VMFS
formatted volume was automatically made a member of the storage
resource pool on each ESX server and identified as a snapshot of Win02.

In the final stage of the process, we browsed the VMFS datastore and added the cloned VM to the pool of virtual machines on each ESX server. On powering on the new VM for the first time, the ESX server would recognize that this VM had an existing identifier and would request confirmation that it should either retain or create a new ID for this VM. Once that was completed, we were done with the process of creating a new VM.

Once the clone of virtual volume VM-Win02 was successfully connected to one of our ESX servers, we added the copied OS to the inventory pool of VMs as oblVM-Win03. When that VM was started for the first time, the ESX server recognized the ID of the new VM as belonging to its source VM, oblVM-Win02. At that point, the ESX server asked whether this VM was a copy and whether it should create a new ID.

Concentrator Value
“ESX system administrators can leverage the high-availability functions of the StoneFusion OS, including the creation of snapshots and mirrors, to generate and maintain OS templates and distribute data files as VMs are migrated in a VI environment.”

DOING IT

StoneFly i4000 Storage Concentrator Quick ROI

1) Aggregate and Manage FC Array Storage for Better Resource Utilization
2) Extended iSCSI Provisioning Functionality
3) Advanced HA Functionality Including Snapshots and Mirrors
4) Fibre Channel Path Management and Automatic Ethernet TOE Teaming
5) 10,000 IOPS Benchmark Throughput (8KB Requests with Windows Server 2003)
6) 133MB/s Benchmark Sequential I/O Throughput (SUSE Linux Enterprise Server 10)

For CIOs today, two top-of-mind propositions are resource consolidation and resource virtualization. Both are considered to be excellent ways to reduce IT operations costs through efficient and effective utilization of IT resources, extending from capital equipment to human capital. Via the StoneFusion OS storage-provisioning engine, the StoneFly i4000 Storage Concentrator can directly help raise the utilization rate of FC storage while extending the benefits of storage virtualization to a broad array of new client systems over Ethernet.

With resource virtualization, IT can separate the functions of resources from the physical implementations of resources. This makes it possible for
IT to concentrate on managing a small number of generic pools rather than
a broad array of proprietary devices, making it far easier to create rules and
procedures for utilization. That decoupling also allows storage resources to
be physically distributed and yet centrally managed in a virtual storage
pool. As a result, SANs allow administrators to more easily take advantage
of robust reliability, availability and scalability (RAS) features for data
protection and recovery, such as snapshots and replication.

That synergy makes virtualization of systems, storage, and networks a holistic necessity. Nonetheless, SAN infrastructure costs have historically
presented a significant hurdle to SAN adoption and expansion. As a
result, the benefits of SAN architecture have not been spread beyond
servers in computer centers.


Traditional storage virtualization on an FC SAN, however, is a far more complex proposition than storage virtualization in a VMware Virtual
Infrastructure (VI) environment. Traditional operating systems assume
exclusive ownership of their storage volumes. Unlike ESX, their file systems
do not include a distributed file locking mechanism and a way to keep
multiple systems with a consistent view of a volume's contents. That makes
storage virtualization an important component of SAN management and
the exclusive domain of storage administrators at enterprise-class sites.

On the other hand, exclusive volume ownership is not an issue for ESX
servers, since VMFS handles distributed file locking. In addition, the files
in a VMFS volume are single-file images of VM disks. This means that
when a VM mounts a disk image, VMFS locks that image as a VMFS file
and the VM has exclusive ownership of its disk volume.

With the issue of ownership moot for VMFS datastores, iSCSI becomes
a perfect way to cost effectively extend the benefits of physical and
functional separation from an FC SAN. With the StoneFly i4000, that
functionality can be further leveraged by allowing system administrators
to take on many of the storage provisioning tasks that normally require
coordination with a storage administrator. What’s more, StoneFusion's
built-in advanced RAS storage management features make it easy to create
virtual-disk templates for VM operating systems in order to standardize
IT configurations and simplify system provisioning.

By initially provisioning bulk storage to the StoneFly i4000, interaction with storage administrators is minimized as ESX system administrators can
address all of the iSCSI issues, including data security. On top of that, ESX
system administrators can leverage the high-availability functions of the
StoneFusion OS, such as snapshots and mirroring, and apply those features
to the creation and maintenance of OS templates, and to the distribution of
data files as VMs are migrated in a VI environment. As a result, the StoneFly
i4000 can open the door to all of the advanced features of a VI environment
while constraining the costs of operations management.
