
Managing the information that drives the enterprise

STORAGE Vol. 9 No. 5 July/August 2010

Good Match:
iSCSI and vSphere
iSCSI storage is a good fit for virtualized servers.
Here’s what you need to know to make it all work.
P. 13

ALSO INSIDE
5 The new primary storage
8 The selling of IT
22 Continuous data protection: It’s back and better
30 How to turn storage into a service
38 Prescription for medical records
45 New techs fill gaps in cloud storage
48 Data protection is not enough
52 Users like efficiency of combo arrays
STORAGE
inside | july/august 2010

The New Primary Storage
5 EDITORIAL It took long enough, but there are signs primary
storage systems are ready for the 21st century. Find out which
technologies are poised to turn the tide. by RICH CASTAGNA
IT Still an Awkward Fit at Most Companies
8 STORWARS You’d think by now that the perception of IT would
have changed, but most IT operations are treated as expense
centers, fiefdoms or afterthoughts, rather than critical parts
of the business. by TONY ASARO
Using iSCSI Storage With vSphere
13 To realize the biggest benefits of a vSphere installation, you
need networked storage, and iSCSI is often touted as the best
fit for virtual servers. We go under the hood of VMware vSphere
to show you how to make iSCSI work for virtual machines.
by ERIC SIEBERT

Continuous Data Protection . . . It’s Back!


22 When CDP products first appeared, the benefits were clear,
but few shops wanted to add a new infrastructure to their
backup operations. But CDP is making a comeback, and it
might just be the future of data backup. by W. CURTIS PRESTON
Create a Storage Service for Your Company
30 Often maligned (but more often misunderstood), the ITIL
framework is a tool that can help transform your storage
environment into an efficient storage service organization.
by THOMAS WOODS

Hospitals Strive for Centralized Image Archives


38 New regulations mandate the digitization and retention of
medical records, leaving hospital IT pros looking to cut costs
by centralizing image archives. But application silos and
political infighting aren’t making it easy. by BETH PARISEAU
Cloud Storage Ecosystems Mature
45 HOT SPOTS A number of vendors have emerged that provide a
bridge to cloud storage services, as well as extended security,
availability and portability to cloud storage service provider
offerings. by TERRI MCCLURE
Align Data Protection With Business Importance
48 READ/WRITE There’s a big difference between backup and
business continuity. Any-point-in-time technologies can
extend data protection so application use is also protected.
by JEFF BOLES

Unified Storage Offers Savings and Efficiency


52 SNAPSHOT Unified, or multiprotocol, arrays put file and block
storage in the same box. It’s convenient and, according to our
survey, an efficient way to manage storage. by RICH CASTAGNA

Vendor Resources
53 Useful links from our advertisers.

Cover illustration by ENRICO VARRASSO


editorial | rich castagna
The new primary storage

A technology borrowed from backup may end
up the biggest thing to happen to storage in a long time.

WHEN THE BEATLES sang "You say you want a revolution" back in 1968, you can be
sure they weren't singing about data storage. Changes to storage technologies
happen so slowly that it's sometimes hard to recognize them even while
they're happening. It's more like evolution, and at a Darwinian pace at that.
Storage can be a real snoozer sometimes, so a couple of current develop-
ments are notable not only for the changes they're likely to bring, but for
the pace of that change. Maybe "revolution" is too strong a word, but
sleepy old storage is about to get quite a shakeup.
Solid-state storage clearly ranks as a game-changer that will undoubt-
edly alter the face of storage. But that story's going to take a little more
time to develop. Data deduplication, on the other hand, is poised to rattle
some cages right now.
Data deduplication is all the rage for backup. The level of interest in
backup dedupe has been pinning the popularity meter for a few years now,
even if our research shows that only about a quarter of all storage shops
are actually using it. But even with an installed base that might be falling
short of the hype around dedupe, the technology obviously has legs and
there are a significant number of vendors offering a variety of products.
So, the other 75% of data storage shops are bound to come around eventually.
But instead of coming late to the backup dedupe game, those potential
dedupe users might skip backup and go directly to primary storage for their
initial dedupe fix. I never would have thought that six months or so ago, but
there's been so much happening on the data reduction in primary storage
(or what I like to call "DRIPS") front that not only does it now loom as a
bona fide game-changer for primary storage technology, but it could pick
up momentum fast enough to slow down the backup dedupe express.
Of course, dedupe for backup and DRIPS are two entirely different things
even if some of the technologies they share are essentially the same. Back-
up dedupe can help keep backups within their windows, provide faster
restores, and cut down tape use and handling, and, in doing so, can save a
few bucks by reducing the amount of disk capacity needed for backup data
before it ultimately ends up on tape. All good stuff and in some environments
the benefits might be considerable, but not all results are dramatic. And a
lot of shops apparently aren't yet sold on the savings or don't think their
backup operations are in need of such an expensive fix.


But everyone is grappling with primary storage. Our last purchasing survey
showed that, on average, companies would add 40 TB of new disk storage this
year, with bigger companies looking at adding more than twice that capacity.
Even small companies are feeling the strain, looking at 25 TB of new disk in
2010. Cutting back on the amount of primary storage you need to add—and
even cutting back what you’re using now—will put the drama back in the
dedupe story and give beleaguered budgets a break. And it will profoundly
change primary and near-line storage systems.
The change is happening right now. Credit NetApp for putting DRIPS on the
map, even though the firm didn’t do all that much to promote it. But what was
once a small field of players is now rapidly expanding. DRIPS products are pop-
ping up all over: EMC joined the fray by adding compression to its midrange ar-
rays; HP says its new StoreOnce will run on its XP9000 storage systems within a
year; and just about every other major storage vendor has added (or announced
plans to add) data dedupe, compression, single instancing or some combination
of these data-crunching methods to their mainline storage products.
Adding to the intrigue are recent announcements from two companies that
know something about data reduction, with new products that have the potential
to accelerate the adoption of primary data reduction. Ocarina Networks and
Permabit Technology Corp. each essentially sucked the secret sauce out of their
appliances and tricked them out with APIs that, according to both vendors, will
make it easy for storage system vendors to add data reduction to their existing
products . . . with emphasis on "existing." Permabit says it has a significant OEM
partner or two in the fold, and Ocarina's relationship with prospective OEM Dell
got so chummy that the Round Rock gang announced they're acquiring Ocarina.
You can expect more of these experienced dedupe vendors to pursue packaging
their code this way, and a lot more interest from the storage system vendors.
In the past, storage array vendors have been loath to integrate technologies
that may chip away at their own new disk sales, so it took some time for things
like thin provisioning to become widely available options. But DRIPS is different.
Maybe it's just the result of a convergence of the constant capacity struggle
meeting up with the success of backup dedupe, or a fundamental right of storage
managers to get to use their storage arrays as efficiently as possible. Either way
it's one of those rare moments when the tech industry is creating something that
addresses a real problem, rather than creating solutions in search of problems
that probably don't yet exist. Storage managers have gotten a taste of dedupe
with backup and they want more.

Rich Castagna (rcastagna@storagemagazine.com) is editorial director of the Storage
Media Group.

StorWars | tony asaro
IT still an awkward fit
at most companies

However you look at it—top down or bottom up—
most IT operations are treated as expense
centers, fiefdoms or afterthoughts,
rather than critical parts of the business.

IT WAS PREDICTED in the early '90s that outsourcing your IT operations to
IT experts like IBM and EDS would be the wave of the future. The thinking
was that organizations should focus on their core competencies, what-
ever they may be, and let IT companies come in and do what they do
best. Large professional services organizations hired the IT staffs of the
very customers they had contracts with so that the transition would be
seamless. Service-level agreements (SLAs) were created so businesses
would be assured that their IT needs were being met. In turn, IT service
providers were free to optimize and streamline as long as their customers
were happy with the service. There was a ton of buzz around this idea: It
was heralded as the new way to manage IT and was going to revolutionize
business. But the IT outsourcing market ultimately failed after a short
period of massive growth.
Some of the same outsourcing language is now being used in IT
cloud services. CEOs are considering handing over their entire IT infra-
structure to companies that know how to manage it better than they
do. Again, the logic is that they want to get out of managing IT and
instead focus on their core competencies. And there’s a ton of buzz,
hype and rhetoric. Sound familiar?
However, the logic is flawed and those executives are mistaken. IT
should be a part of a firm’s DNA and core to its business. Most CEOs
don’t have technology backgrounds and may be intimidated by IT or just
ignorant of its value. The CFO, a CEO’s typical second-in-command, usu-
ally sees IT as overhead and a necessary cost of doing business. This
attitude leaves the CIO to focus on keeping everything running smoothly
in the data center vs. being a visionary who aligns IT with the business.
I had an interesting conversation with the director of IT for a financial
firm in New York City. He told me his CIO was interested in getting lower
prices from their IT vendors, but not to reduce the IT costs to the busi-
ness—any money saved was to be used to fund other projects. This

seems reasonable since the role of IT and the goal of the CIO is to
ensure that the "lights always stay on." Then he said something that's
probably obvious to everyone but me: "The last thing our CIO wants is
to reduce his budget. Money is power, and the bigger his budget, the
more power he has."
I was talking to an IT professional about implementing a storage
system that was easier to manage than his current setup. I told him
the customers I've talked to about that particular system said it
required practically no expertise and little management. "I guess that
would put me out of a job," was his response. He was a stone-cold
expert at their expensive and complicated storage system and had no
interest in changing products.
I met with a team of IT professionals
who were moving in a direction on a particular project that was driven
by one person. Most of the team felt it was the wrong decision but
they had no alternative solution. However, the person who drove the
process was willing to put a stake in the ground and make a choice. In
a room full of silent people, the one voice that speaks up will be heard.
The entire leadership chain in businesses and other organizations
doesn’t serve IT very well. The CEO is typically not an IT expert, and lacks
sufficient knowledge of or passion for technology. The CFO considers IT
as overhead. The CIO is focused on keeping things up and running, as
well as maintaining or increasing their budget. And the IT professionals
themselves either cling to what they know or don't have enough infor-
mation about what is out there. In the midst of it all, none of these
stakeholders considers how IT can merge with the business or how IT
can be leveraged to come up with new ways to generate revenue, create
new markets or change business models.
There are, of course, exceptions that counter this analysis. But the
majority of businesses are caught in this quandary. It’s easy to solve
this problem on paper, but nearly impossible in practice. Business
executives need to be more IT savvy; CIOs need to be “incentivized”
to have a greater impact on the success of the business; and IT profes-
sionals have to make it a part of their job to be up on the latest and
greatest architectures and technologies. And they all need to use the
right sides of their brains a little more to come up with creative ideas
on how IT can improve and grow their businesses.

Tony Asaro is senior analyst and founder of Voices of IT (www.VoicesofIT.com).

COMING IN SEPTEMBER

Virtualizing NAS
Two great reasons to consider virtualizing your NAS systems: file content
is growing at an unprecedented rate and the capacity limitation of most
NAS systems typically results in multiple, isolated pools of file storage.
You can virtualize your NAS systems with an appliance, with software that
runs on a server or in the array itself. We describe the pros and cons of
each approach, and suggest when each is most appropriate.

10 Tips for Fine-Tuning Your Storage Network
Storage networking is the most-often overlooked connective tissue that
ties servers to storage. The technology has always had its own particular
intricacies, but it's now getting even more complex with virtualized
servers, mixed storage environments and multiprotocol arrays. We'll look
at the most likely causes of poor network performance and describe how
these bottlenecks can be alleviated.

Quality Awards V: Midrange Arrays
In this fifth installment of our Quality Awards survey program, storage
users rate the field of midrange storage arrays for service and
reliability. We focus on five key areas: sales-force competence, initial
product quality, product features, reliability and technical support. In
our four previous Quality Awards for midrange arrays, StorageTek,
EqualLogic, Compellent and Dell came out on top.

And don't miss our monthly columns and commentary,
or the results of our Snapshot reader survey.

TechTarget Storage Media Group

STORAGE
Vice President of Editorial: Mark Schlack
Editorial Director: Rich Castagna
Senior Managing Editor: Kim Hefner
Senior Editor: Ellen O'Brien
Creative Director: Maureen Joyce
Executive Editor and Independent Backup Expert: W. Curtis Preston
Contributing Editors: Tony Asaro, James Damoulakis, Steve Duplessie, Jacob Gsoedl
Senior News Director: Dave Raffo
Senior News Writer: Sonia Lelii
Features Writer: Todd Erickson

Site Editors: Ellen O'Brien, Susan Troy, Andrew Burton
Associate Site Editor: Megan Kellett
Managing Editor: Heather Darcy
Editorial Assistant: David Schneider

TechTarget Conferences
Director of Editorial Events: Lindsay Jeanloz
Editorial Events Associate: Jacquelyn Hinds

Storage magazine, 275 Grove Street, Newton, MA 02466
editor@storagemagazine.com
Subscriptions: www.SearchStorage.com

USING iSCSI STORAGE WITH vSPHERE

To realize the greatest benefits of a vSphere installation, you need
networked storage. iSCSI is a good fit for vSphere; here's how to make it work.

By Eric Siebert

TO TAP INTO some of VMware vSphere's advanced features such as VMotion,
fault tolerance, high availability and the VMware Distributed Resource
Scheduler, you need to have shared storage for all of your hosts. vSphere's
proprietary VMFS file system uses a special locking mechanism to allow
multiple hosts to connect to the same shared storage volumes and the
virtual machines (VMs) on them. Traditionally, this meant you had to imple-
ment an expensive Fibre Channel SAN infrastructure, but iSCSI and NFS
network storage are now more affordable alternatives.


Focusing on iSCSI, we’ll describe how to set it up and configure it


properly for vSphere hosts, as well as provide some tips and best prac-
tices for using iSCSI storage with vSphere. In addition, we’ve included
the results of a performance benchmarking test for the iSCSI/vSphere
pairing, with performance comparisons of the various configurations.

VMware WARMS UP TO iSCSI


iSCSI networked storage was first supported by VMware with ESX 3.0. It
works by using a client called an initiator to send SCSI commands over a
LAN to SCSI devices (targets) located on a remote storage device. Because
iSCSI uses traditional networking components and the TCP/IP protocol, it
doesn't require special cables and switches as Fibre Channel does.
iSCSI initiators can be software based or hardware based. Software
initiators use device drivers that are built into the VMkernel to use
Ethernet network adapters and protocols to write to a remote iSCSI target.
Some characteristics of software initiators are:
• Use Ethernet network interface cards (NICs) and native VMkernel iSCSI stack
• Good choice for blade servers and servers with limited expansion slots
• Cheaper than using hardware initiators
• Can be CPU-intensive due to the additional overhead of protocol processing
• ESX server can't boot from a software-based initiator; ESXi can by using
iSCSI Boot Firmware Table (iBFT)
Hardware initiators use a dedicated iSCSI host bus adapter (HBA) that
includes a network adapter, a TCP/IP offload engine (TOE) and a SCSI
adapter to help improve the performance of the host server. Characteristics
of hardware initiators include:
• Moderately better I/O performance than software initiators
• Uses less ESX server host resources, especially CPU
• ESX server is able to boot from a hardware initiator

iSCSI PROS AND CONS
Here is a summary of the advantages and disadvantages in using iSCSI
storage for virtual servers.

iSCSI advantages
• Usually lower cost to implement than Fibre Channel storage
• Software initiators can be used for ease of use and lower cost; hardware
initiators can be used for maximum performance
• Block-level storage that can be used with VMFS volumes
• Speed and performance is greatly increased with 10 Gbps Ethernet
• Uses standard network components (NICs, switches, cables)

iSCSI disadvantages
• As iSCSI is most commonly deployed as a software protocol, it has
additional CPU overhead compared to hardware-based storage initiators
• Can't store Microsoft Cluster Server shared LUNs (unless you use an
iSCSI initiator inside the guest operating system)
• Performance is typically not as good as that of Fibre Channel SANs
• Network latency and non-iSCSI network traffic can reduce performance

To find out the advantages and disadvantages of iSCSI storage com-
pared to other storage protocols, see the sidebar "iSCSI pros and cons,"
(p. 14).
iSCSI is a good alternative to using Fibre Channel storage as it will
likely be cheaper to implement while providing very good performance.
vSphere now supports 10 Gbps Ethernet, which provides a big performance
boost over 1 Gbps Ethernet. The biggest risks in using iSCSI are the CPU
overhead from software initiators, which can be offset by using hardware
initiators, and a more fragile and volatile network infrastructure that
can be mitigated by completely isolating iSCSI traffic from other
network traffic.
For vSphere, VMware rewrote the entire iSCSI software initiator stack to
make more efficient use of CPU cycles; this resulted in significant
efficiency and throughput improvements compared to VMware Infra-
structure 3. Those results were achieved by enhancing the VMkernel
efficiency. Support was also added for the bidirectional Challenge-Hand-
shake Authentication Protocol (CHAP), which provides better security by
requiring both the initiator and target to authenticate with each other.
PLANNING AN iSCSI/vSPHERE IMPLEMENTATION


You’ll have to make a number of decisions when planning to use iSCSI
storage with vSphere. Let’s first consider iSCSI storage devices.
You can pretty much use any type of iSCSI storage device with vSphere
because the hosts connect to it using standard network adapters, initiators
and protocols. But you need to be aware of two things. First, vSphere offi-
cially supports only specific models of vendor iSCSI storage devices
(listed on the vSphere Hardware Compatibility Guide). That means if you
call VMware about a problem and it's related to the storage device, they
may ask you to call the storage vendor for support. The second thing to
be aware of is that not all iSCSI devices are equal in performance; gen-
erally, the more performance you need, the more it’ll cost you. So make
sure you choose your iSCSI device carefully so that it matches the disk
I/O requirements of the applications running on the VMs that will be
using it.


PERFORMANCE TESTING: iSCSI PLUS vSPHERE


IT’S A GOOD IDEA to do some benchmarking of your iSCSI storage device to see the
throughput you’ll get under different workload conditions and to test the effects
of different vSphere configuration settings.
Iometer is a good testing tool that lets you configure many different work-
load types. You can install and run Iometer inside a virtual machine (VM); for
for medical records

best results, create two virtual disks on the VM: one on a local data store for
Prescription

the operating system and another on the iSCSI data store to be used exclusively
for testing. Try to limit the activity of other VMs on the host and access to the
data store while the tests are running. You can find four prebuilt tests that you
can load into Iometer to test both max throughput and real-world workloads at
www.mez.co.uk/OpenPerformanceTest.icf.
We ran Iometer tests using a modest configuration consisting of a Hewlett-
Packard Co. ProLiant ML110 G6 server, a Cisco Systems Inc. SLM2008 Gigabit Smart
Switch and an Iomega ix4-200d iSCSI array.
The test results, shown below, compare the use of the standard LSI Logic SCSI
controller in a virtual machine and use of the higher performance Paravirtual SCSI
controller. The tests were performed on a Windows Server 2008 VM with 2 GB RAM
and one vCPU on a vSphere 4.0 Update 1 host; tests were run for three minutes. The
results show the Paravirtual controller performing better than the LSI Logic con-
troller; the difference may be more pronounced when using higher-end hardware.

iSCSI PERFORMANCE TEST RESULTS
(IOPS and MBps for the LSI Logic SAS controller vs. the Paravirtual SCSI controller)

                                                                  LSI Logic SAS      Paravirtual SCSI
Test workload                                                     IOPS     MBps      IOPS     MBps
Max throughput: 100% read/0% write, 100% sequential (32K)         1,829    57        1,908    60
Max throughput: 100% read/0% write, 100% sequential (8K)          6,656    52        6,812    53
Max throughput: 50% read/50% write, 100% sequential (32K)         1,616    50        1,630    51
Max throughput: 50% read/50% write, 100% sequential (8K)          5,602    44        5,708    45
Real life: 65% read/35% write, 40% sequential/60% random (32K)    73       2.27      75       2.36
Real life: 65% read/35% write, 40% sequential/60% random (8K)     120      .94       123      .96
Random: 70% read/30% write, 0% sequential/100% random (32K)       53       1.65      55       1.72
Random: 70% read/30% write, 0% sequential/100% random (8K)        88       .69       89       .70
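If you want to see what the host is doing while Iometer runs, esxtop on the ESX
host (or resxtop from the vSphere CLI) exposes the relevant disk counters. This
is only a suggested companion to the tests described above, not part of the
procedure we ran; the five-second interval and the output file name are
arbitrary placeholders.

  # Interactive mode: press "d" for the storage adapter view or "v" for the
  # per-VM disk view, and watch throughput plus the DAVG/cmd and KAVG/cmd
  # latency columns while the Iometer workloads run.
  esxtop

  # Batch mode: sample every 5 seconds for a 3-minute run (36 samples) and
  # save the counters to a CSV file for later analysis.
  esxtop -b -d 5 -n 36 > iometer-run.csv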

There are also some network considerations. For optimum iSCSI


performance, it’s best to create an isolated network. This ensures that
no other traffic will interfere with the iSCSI traffic, and also helps pro-
tect and secure it. Don’t even think of using 100 Mbps NICs with iSCSI;
it’ll be so painfully slow that it will be unusable for virtual machines. At a
minimum, you should use 1 Gbps NICs, and go for 10 Gbps NICs if that’s


within your budget. If you’re concerned about host server resource


overhead, consider using hardware initiators (TOE adapters). If you opt
for TOE adapters, make sure they’re
on VMware’s Hardware Compatibility If you opt for TOE
Guide. If you use one that’s not support-
ed, there’s a good chance vSphere will adapters, make
see it as a standard NIC and you’ll sure they’re on
for medical records

lose the TOE benefits. Finally, use


multipathing for maximum reliability; VMware’s Hardware
Prescription

you should use at least two NICs Compatibility Guide.


(not bridged/multi-port) connected to
two different physical network switches, just as you would when con-
figuring Fibre Channel storage.

CONFIGURING iSCSI IN vSPHERE


Once your iSCSI environment is set up, you can configure it in vSphere. The
method for doing this will differ depending on whether you’re using soft-
ware or hardware initiators. We’ll cover the software initiator method first.

Configuring with software initiators: Software initiators for iSCSI


are built into vSphere as a storage adapter; however, to use them you
must first configure a VMkernel port group on one of your virtual switches
(vSwitches). The software iSCSI networking for vSphere leverages the
VMkernel interface to connect to iSCSI targets, and all network traffic
between the host and target occurs over the NICs assigned to the
vSwitch the VMkernel interface is located on. You can have more than
one VMkernel interface on a single vSwitch or multiple vSwitches. The
VMkernel interface is also used for VMotion, fault-tolerance logging
traffic and connections to NFS storage devices. While you can use
one VMkernel interface for multiple things, it's highly recommended
to create a separate vSwitch and VMkernel interface exclusively for
iSCSI connections. You should also have two NICs attached to the
vSwitch for failover and multi-pathing. If you have multiple NICs and
VMkernel interfaces, you should make sure you bind the iSCSI VMkernel
interfaces to the correct NICs. (See VMware’s iSCSI SAN Configuration
Guide for more information.)
Once the vSwitch and VMkernel interface is configured, you can con-
figure the software iSCSI adapter. Select Configuration/Storage Adapters
in the vSphere Client to see the software iSCSI adapter listed; select it
and click Properties to configure it. On the General tab, you can enable
the adapter and configure CHAP authentication (highly recommended).
On the Dynamic Discovery tab, you can add IP addresses to have iSCSI
targets automatically discovered; optionally, you can use the Static
Discovery tab to manually enter target names. After entering this infor-
mation, go back to the Storage Adapters screen and click on the Rescan
button to scan the device and find any iSCSI targets.
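The same software-initiator setup can also be scripted from the ESX 4.x
service console. The sketch below is illustrative only: the vSwitch, port
group and NIC names, the IP addresses and the vmhba33 adapter name are
placeholders for whatever exists in your environment, and the full procedure
is documented in VMware's iSCSI SAN Configuration Guide.

  # All names and addresses below (vSwitch2, iSCSI01, vmnic2, 10.0.1.x,
  # vmhba33) are placeholders; substitute your own values.

  # Create a vSwitch and a VMkernel port group dedicated to iSCSI traffic
  esxcfg-vswitch -a vSwitch2
  esxcfg-vswitch -A iSCSI01 vSwitch2
  esxcfg-vswitch -L vmnic2 vSwitch2
  esxcfg-vmknic -a -i 10.0.1.11 -n 255.255.255.0 iSCSI01

  # Enable the software iSCSI initiator
  esxcfg-swiscsi -e

  # Add the array as a dynamic discovery (SendTargets) address, then rescan
  vmkiscsi-tool -D -a 10.0.1.50 vmhba33
  esxcfg-rescan vmhba33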


VMFS VOLUME BLOCK SIZES


BY DEFAULT, VMFS volumes are created with a 1 MB block size that allows a
single virtual disk (vmdk) to be created up to a maximum of 256 GB. Once you
set a block size on a VMFS volume, it can't be changed. Instead, you need to
move all the virtual machines (VMs) from the volume, and then delete it
and recreate it with a new block size. Therefore, make sure you choose a
block size that works for your configuration based on current and future needs.
The chart shown here lists the block size choices and related maximum virtual
disk size.

BLOCK, VIRTUAL DISK SIZES
Block size    Maximum virtual disk file size
1 MB          256 GB (default)
2 MB          512 GB
4 MB          1,024 GB
8 MB          2,048 GB

Choosing a larger block size won't impact disk performance and will only affect
the minimum amount of disk space that files will take up on your VMFS volumes.
Block size is the amount of space a single block of data takes up on the disk; the
amount of disk space a file takes up will be based on a multiple of the block size.
However, VMFS does employ sub-block allocation so small files don't take up an
entire block. Sub-blocks are always 64 KB regardless of the block size chosen. There
is some wasted disk space, but it's negligible as VMFS volumes don't have a large
number of files on them, and most of the files are very large and not affected that
much by having a bigger block size. In most cases, it's probably best to use an 8 MB
block size when creating a VMFS volume, even if you're using smaller volume sizes,
as you may decide to grow the volume later on.
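If you create the VMFS volume from the command line instead of the Add Storage
wizard, the block size is set at creation time with vmkfstools. This is a
minimal sketch; the device path and the data store label are placeholders for
the LUN your rescan actually turns up.

  # Create a VMFS3 volume with an 8 MB block size (vmdk files up to 2,048 GB)
  # on the first partition of the iSCSI LUN; the naa. ID is a placeholder.
  vmkfstools -C vmfs3 -b 8m -S iSCSI_DS01 /vmfs/devices/disks/naa.<device-id>:1

  # Confirm the block size and capacity of the new volume
  vmkfstools -P /vmfs/volumes/iSCSI_DS01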

Configuring with hardware initiators: The process is similar for


hardware initiators, but they don’t use the VMkernel networking, so that
step can be skipped. TOE adapters are technically network adapters, but
they’ll show up on the Storage Adapters screen instead. Select them, click
Properties and configure them in a manner similar to software initiators
by entering the appropriate information on the General, Dynamic Discovery
and Static Discovery tabs. You’ll need to assign IP addresses to the TOEs
on the General screen as they don’t rely on the VMkernel networking.
Once the initiators are set up and your iSCSI disk targets have been
discovered, you can add them to your hosts as VMFS volumes. Select a
host, click on the Configuration tab and choose Storage. Click Add Stor-
age and a wizard will launch; for the disk type select Disk/LUN, which is
for block-based storage devices. (The Network File System type is used
for adding file-based NFS disk storage devices.) Select your iSCSI target
from the list of available disks, give it a name and then choose a block
size. When you finish, the new VMFS data store will be created and ready
to use.
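A quick way to double-check that the host really sees the new LUN and data
store is from the service console. These are standard ESX 4.x commands, but
the adapter name is again a placeholder.

  # Rescan the iSCSI adapter (vmhba33 is a placeholder) and list SCSI devices
  esxcfg-rescan vmhba33
  esxcfg-scsidevs -c

  # Show how VMFS data stores map to devices, and the paths to each LUN
  esxcfg-scsidevs -m
  esxcfg-mpath -b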


BEST PRACTICES FOR USING iSCSI STORAGE WITH vSPHERE


Once iSCSI disks have been configured, they’re ready to be used by
virtual machines. The best practices listed here should help you get
the maximum performance and reliability out of your iSCSI data stores.
• The performance of iSCSI storage is highly dependent on network
health and utilization. For best results, always isolate your iSCSI traffic
onto its own dedicated network.
• You can configure only one software initiator on an ESX Server
host. When configuring a vSwitch that will provide iSCSI connectivity,
use multiple physical NICs to provide redundancy. Make sure you bind
the VMkernel interfaces to the NICs in the vSwitch so multi-pathing is
configured properly.
• Ensure the NICs used in your iSCSI vSwitch connect to separate
network switches to eliminate single points of failure.
• vSphere supports the use of jumbo frames with storage protocols,
but they're only beneficial for very specific workloads with very large
I/O sizes. Also, your back-end storage must be able to handle the
increased throughput by having a large number (15+) of spindles in your
RAID group or you'll see no benefit. If your I/O sizes are smaller and
your storage is spindle-bound, you'll see little or no increase in
performance using jumbo frames. Jumbo frames can actually decrease
performance in some cases, so you should perform benchmark tests before
and after enabling jumbo frames to see their effect. Every end-to-end
component must support and be configured for jumbo frames, including
physical NICs and network switches, vSwitches, VMkernel ports and iSCSI
targets. If any one component isn't configured for jumbo frames, they
won't work. (A configuration sketch follows this list.)
• Use the new Paravirtual SCSI (PVSCSI) adapter for your virtual
machine disk controllers as it offers maximum throughput and
performance over the standard LSI Logic and BusLogic adapters in most
cases. For very low I/O workloads, the LSI Logic adapter works best.
• To set up advanced multi-pathing for best performance, select
Properties for the iSCSI storage volume and click on Manage Paths.
You can configure the Path Selection Policies using the native VMware
multi-pathing or third-party multi-pathing plug-ins if available. When
using software initiators, create two VMkernel interfaces on a vSwitch;
assign one physical NIC to each as Active and the other as Unused;
use the esxcli command to bind one VMkernel port to the first NIC and
the second VMkernel port to the second NIC. Using Round Robin instead
of Fixed or Most Recently Used (MRU) will usually provide better per-
formance. Avoid using Round Robin if you're running Microsoft Cluster
Server on your virtual machines.
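To make the last practice concrete, here is roughly what the binding and
path-policy commands look like on a vSphere 4.x host. The VMkernel port
names, the adapter name and the device identifier are placeholders; VMware's
iSCSI SAN Configuration Guide documents the full procedure.

  # vmk1/vmk2, vmhba33 and the naa. device ID are placeholders.
  # Bind each iSCSI VMkernel port to the software iSCSI adapter
  # (after setting one Active and one Unused NIC per port group).
  esxcli swiscsi nic add -n vmk1 -d vmhba33
  esxcli swiscsi nic add -n vmk2 -d vmhba33
  esxcli swiscsi nic list -d vmhba33

  # Switch the path selection policy for an iSCSI LUN to Round Robin
  esxcli nmp device setpolicy --device naa.<device-id> --psp VMW_PSP_RR
  esxcli nmp device list -d naa.<device-id>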
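For the jumbo frame practice above, the larger MTU has to be set on both the
vSwitch and the VMkernel port, and in ESX 4.0 the VMkernel port generally has
to be recreated with the new MTU rather than changed in place. Another hedged
sketch, reusing the placeholder names from the earlier setup example:

  # vSwitch2, iSCSI01 and the IP addresses are placeholders.
  # Raise the MTU on the dedicated iSCSI vSwitch
  esxcfg-vswitch -m 9000 vSwitch2

  # Recreate the VMkernel port with a 9000-byte MTU
  esxcfg-vmknic -d iSCSI01
  esxcfg-vmknic -a -i 10.0.1.11 -n 255.255.255.0 -m 9000 iSCSI01

  # Verify that a large frame reaches the iSCSI target
  vmkping -s 8000 10.0.1.50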


iSCSI GUIDES AVAILABLE


VMware provides detailed guides for implementing iSCSI storage for
vSphere. Two useful guides available from VMware include the iSCSI
SAN Configuration Guide and the iSCSI Design Considerations and
Deployment Guide.

Eric Siebert is an IT industry veteran with more than 25 years of experience
who now focuses on server administration and virtualization. He's the
author of VMware VI3 Implementation and Administration (Prentice Hall, 2009).


Continuous data protection . . . IT'S BACK!

When CDP products first appeared a few years ago, the benefits were clear,
but implementation and other issues quickly stifled interest. Now CDP is
making a comeback, and it might just be the future of data backup.

By W. Curtis Preston

Continuous data protection (CDP) and related products


are the future of backup. There’s no question CDP
products failed to live up to the hype when they first
appeared several years ago. But it’s also true that the
way CDP was (and is) designed solves virtually every
major problem that has plagued backup and recovery
systems for decades, and offers recovery time objec-
tives (RTOs) and recovery point objectives (RPOs) that
traditional backup systems can only dream of. Current
CDP products have also addressed most of the short-
comings the first batch of products had. The CDP buzz
may be gone, but the reality of CDP is stronger than ever.


A few years ago it seemed like every other booth at storage trade
shows was occupied by a CDP vendor, and a steady stream of technical
articles extolled the virtues of continuous data protection. But hardly
anybody bought the story or the products. Some pundits even joked
that CDP stood for “Customers Didn’t Purchase.” The failure of continuous
data protection was so complete that only two of the original CDP vendors
were left standing. The others were acquired by larger companies
that believed in the technology enough to buy a product that often
had few or no customers.

WHY CDP 1.0 TANKED


So, if CDP was such a good idea, why didn’t anyone buy it? There are
several reasons.
First, most of the companies offering CDP were startups. The worry,
of course, was that you would invest money, time and energy in a
startup and its product only to see the company go out of business.
Sadly, those fears were realized in
this case: Asempra Technologies Inc., Double-Take Software Inc.,
FilesX, Kashya Inc., Mendocino Software Inc., Revivio Inc., Storactive
Inc. and XOsoft Inc. were all acquired by other companies, and some
(although not all) of these acquisitions resulted in very rocky
experiences for the few customers who had purchased their CDP products.
Continuous data protection was also a big pill to swallow. While
you could technically run a CDP system in parallel with your tradi-
tional backup system, very few people had the budget or time to do
that. Therefore, you had to justify replacing your production backup
system with CDP. But because it was so different from what people
were used to, CDP was hard to fully understand and was a hard sell
to replace traditional backup.
Another real problem was that the products sometimes weren’t fully
up to the task. For example, users were often forced to choose between
an on-site or off-site copy of their data because most CDP products
couldn't deliver both. This meant one product had to be used for opera-
tional recovery and another for disaster recovery (DR). Many CDP products
were also ignorant of the applications they were backing up. Continuous
data protection vendors said they had no more of a requirement to
understand applications than a storage array did. Technically true perhaps,
but it didn’t give users the warm fuzzy feeling they were used to; they
wanted a CDP product that was application-aware. CDP also required a lot

more storage than traditional backup products or snapshots, so CDP


customers were unable to have very long retention periods. This
required them to have a separate solution for long-term retention.
Finally, many people viewed CDP as the Star Trek of the backup in-
dustry—a great idea before its time. Star Trek, maybe not fully under-
stood when it first aired, was canceled after three seasons. Similarly,
many people thought CDP was a solution looking for a problem, and
most shops could meet their backup and recovery requirements with-
out completely changing the way they did backups, which was required
with continuous data protection.

WHAT IS NEAR-CDP?

WHEN CONTINUOUS DATA PROTECTION (CDP) products first appeared, they


created quite a buzz, and marketing departments love buzz. But
there were other companies with products that continuously
protected data, and they wanted to use the CDP moniker, too.
CDP vendors like Kashya Inc. and Revivio Inc. objected, saying
that snapshots weren’t CDP. They also noted that snapshots can
only recover to a particular point in time, while continuous data
protection can recover to any point in time. Hence the term near-
CDP was coined, allowing snapshot-based vendors to steal some
of the CDP buzz.
But years later, the term near-CDP is still not in the Storage
Networking Industry Association (SNIA) lexicon. Purists say you’re
either continuous or you’re not, but others think it’s still the best
term we have to describe snapshots coupled with replication.
Near-CDP systems have more in common with CDP than with
traditional backup. CDP and near-CDP systems transfer only
changed blocks to the backup system. There are no repeated full
backups, and if only a few bytes change in a file, only a few bytes
are sent to the recovery system. They also transfer the changed
blocks to the recovery systems throughout the day, rather than in
a large batch process at night. And both CDP and near-CDP systems
provide instantaneous recovery and can offer recovery points from
a few seconds to an hour, depending on implementation.
The only important difference between CDP and near-CDP is
the ability of continuous data protection to offer a recovery point
objective (RPO) of zero (or almost zero), and it doesn’t require the
creation of application-aware snapshots up front. However, most
CDP users create snapshots anyway and recover to those snap-
shots, preferring a known stable point in time to a more recent
recovery point that will require a crash recovery process. So, maybe
the CDP vs. near-CDP debate is a lot of arguing over nothing.


CDP PRODUCT SAMPLER
AppAssure Software Inc. • Replay 4
Atempo Inc. • Live Backup
CA Technologies • CA ARCserve Replication
Cofio Software Inc. • AIMstor CDP
EMC Corp. • RecoverPoint
FalconStor Software Inc. • FalconStor Continuous Data Protector
IBM • Tivoli Storage Manager FastBack
InMage • DR-Scout
Symantec Corp. • NetBackup RealTime
Vision Solutions Inc. • Double-Take Backup

NEW LIFE FOR CDP
There are now several CDP products that are doing quite well, so what
changed? Perhaps the most important change is that most of today's
CDP products are offered by mainstream backup vendors. In fact,
almost every major backup software company now has a CDP offering.
Users don't have to accept an all-new paradigm and an all-new backup
vendor to get CDP functionality.
The next big reason for the resurgence of CDP is that the products
have come a long way since they first appeared on the market. For
example, you no longer have to choose between an on-site and off-
site copy; you can have both with a single product.
Today’s successful CDP systems also know a lot more about the
data they’re backing up. They offer integration points with many popular
applications such as Microsoft Exchange, Oracle and SQL Server. While
a true CDP product doesn’t need to create snapshots and can recover
to any point in time, this integration allows the application or backup
system administrator to create points in time where a known good
copy of the data resides. Administrators may opt to not use these
known good recovery points during a recovery operation, but they


have the peace of mind of knowing they’re there.
And, like Star Trek, it may be time for CDP: The Next Generation.
Some servers have grown tremendously in just the last few years, and
the RTOs and RPOs for those large servers have become more stringent.
Consider a 300 TB database that’s mission critical for a company, with
potentially millions of people using their service 24/7. The database
backup system has to provide an instant recovery with no loss of
records; this is only possible with CDP.
Also figuring into the picture are data loss notification laws, enacted
by 35 states and the European Union, that require many companies


to add encryption systems to allow them to safely transport personal
information on backup tapes. However, encryption systems can be
expensive, cause slow backups and require management of encryption
keys. With CDP, a company can have on-site and off-site copies of their
data without ever touching a tape, thus avoiding encryption entirely.


Server virtualization has taken off during the last few years, and the
technology could benefit from continuous data protection. While you
may not have individual servers with data stores in the double-digit
terabyte range, it’s possible the storage used by VMware, Microsoft
Hyper-V or Citrix Systems XenServer is indeed that big. Consider what
would happen if a 15 TB storage array containing virtual machine (VM)
images suddenly disappeared—it could take out dozens or hundreds of
virtual machines. Couple that with the fact that backing up and recovering
those virtual machines using traditional methods is one of the more
difficult tasks a backup system architect has to consider. Physics
is your enemy; 20 virtual machines on a single physical machine per-
form like one physical machine during backup.
But if physics is your enemy, CDP is your best friend. A good CDP
product places no more load on your VM than a typical virus protec-
tion package, and it's able to recover one or all of your VMs instanta-
neously with no data loss. Server virtualization alone could herald
the comeback of continuous data protection.

A LOOK INSIDE CDP


The Storage Networking Industry Association (SNIA) defines CDP “as a
methodology that continuously captures or tracks data modifications
and stores changes independent of the primary data, enabling recovery
points from any point in the past . . . data changes are continuously
captured . . . stored in a separate location . . . [and RPOs] are arbitrary
and need not be defined in advance of the actual recovery.”
Please note that you don’t see the word “snapshot” above. While it’s
true that many of today’s CDP systems allow users to create known
recovery points in advance, they’re not required. To be considered CDP,
a system must be able to recover to any point in time, not just to when
snapshots are taken.
CDP systems start with a data tap or write splitter. Writes destined
for primary storage are "tapped" or "split" into two paths; each write is
sent to its original destination and also to the CDP system. The data
tap may be an agent in the protected host or it can reside somewhere
in the storage network. Running as an agent in a host, the data tap has
little to no impact on the host system because all the “heavy lifting” is
done elsewhere. CDP products that insert their data taps in the storage
network can use storage systems designed for this purpose, such as


Brocade Communications Systems Inc.’s Storage Application Services


API, Cisco Systems’ MDS line and its SANTap Service feature or EMC
Clarion’s built-in splitter functionality. Some CDP systems offer a
choice of where their data tap is placed.
Users then need to define a consistency group of volumes and
hosts that have to be recovered to the same point in time. Some CDP
systems allow the creation of a “group of groups” that contains multi-
ple consistency groups, creating multiple levels of granularity
without sacrifice. Users may also choose to perform application-level
snapshots on the protected hosts, such as placing Oracle in backup
mode or performing Volume Shadow Copy Service (VSS) snapshots
on Windows. (Remember, snapshots aren't required.) Some CDP systems
simply record these application-level snapshots when they happen,
while others provide assistance to perform them. It’s very helpful
when the continuous data protection system maintains a centralized
record of application-level snapshots, as they can be very useful.
Each write is transferred to the first recovery device, which is typically
another appliance and storage array somewhere else within the data
center. This proximity to the data being protected allows the writes to be
either synchronously replicated or asynchronously replicated with a very
short lag time. Even if a CDP system supports synchronous replication,
most users opt for asynchronous replication to avoid any performance
impact on the production system. A CDP system may support an adaptive
replication mode where it replicates synchronously when possible, but
defaults to asynchronous during periods of high activity.
The data is stored in two places: the recovery volume and the recovery
journal. The recovery volume is the replicated copy of the volume being
protected and will be used in place of the protected volume during a
recovery. The recovery journal stores the log of all writes in the order
they were performed on the protected volume; it’s used to roll the
recovery volume forward or backward in time during a recovery. It may
also be used as a high-speed buffer where all writes are stored before
they’re applied to the recovery volume. This design allows the recovery
volume to be on less-expensive storage as long as the recovery journal
uses storage that is as fast as or faster than the protected volume.
Once data has been copied to the first recovery device it can then
be replicated off-site. Due to the behavior of WAN links, the CDP system
needs to deal with variances in the available bandwidth. So it has to be
able to “get behind” and “catch up” when these conditions change.


With some systems you can define an acceptable lag time (from a few
seconds to an hour or more), which translates into the RPO of the
replicated system. The CDP system sends all of the writes that hap-
pened as one large batch. If an individual block was modified several
times during the time period, you can specify that only the last change
is sent in a process known as “write folding.” This obviously means
that the disaster recovery copy won’t have the same level of recovery
granularity as the on-site recovery system, but it may also mean the
difference between a system that works and one that doesn't.
Modern continuous data protection also offers a built-in, long-term
storage alternative. You can pick a short time range (e.g., from 12:00:00
pm to 12:00:30 pm every day) and tell the CDP system to keep only
the blocks it needs to maintain those recovery points, and to delete
the blocks that were changed in between. Users who take applica-
tion-level snapshots typically coordinate them to coincide with their
recovery points for consistency purposes. This deletion of extraneous
changes allows the CDP system to retain data for much longer periods of
time. For longer retention periods, it's also possible to back up one of
these recovery points to tape and then expire it from disk. Many com-
panies use all three approaches: retention of every change for a few
days, hourly recovery points for a week or so, then daily recovery
points after that, followed by tape copies after 90 days or so.


The true wonder of continuous data protection is how it handles a
recovery. A CDP system can instantaneously present a LUN to whatever
application needs to use it for recovery or testing, rolled forward or
backward to whatever point in time desired. (As noted, many users
choose to roll the recovery volume back to a point in time when they
created an application consistent image. Although this means they’ll
lose any changes between that point in time and the current time,
many prefer rolling back to a known consistent image rather than
going through the crash recovery process.)
Depending on the product, the recovery LUN may be the actual recovery
volume (rolled forward or backward), a virtual volume designed mainly
for testing a restore, or something in the middle where the recovery
volume is presented to the application as if it has already been rolled
forward or backward, when in reality the actual rolling forward or back-
ward is happening in the background. Some systems can simultaneously
present multiple points in time from the same recovery volume.


Once the original production system has been repaired, the recovery
process is reversed. The recovery volume is used to rebuild the original
production volume by replicating the data back to its original location.
(If the system was merely down and didn’t need to be replaced, it’s
usually possible just to update it to the current point in time by sending
over only the changes that have happened since the outage.) With the
original volume brought up to date, the application can be moved back
to its original location and the direction of replication reversed.
Compare that description of a typical CDP-based recovery scenario
to the recovery process required by a traditional backup system, and
you should get a good idea of why continuous data protection is the
future of backup and recovery.

W. Curtis Preston is an executive editor in TechTarget’s Storage Media Group


and an independent backup expert.


Create a storage service for your company

Often maligned (but more often misunderstood), the ITIL framework can help
transform your storage environment into an efficient storage service
organization. By Thomas Woods

IF YOU'RE A data storage management professional, odds are that at some point you'll
be asked to help align your IT organization with the Information Technology Infra-
structure Library (ITIL) service management framework. But before your eyes glaze
over and you think it’s just another one of those theoretical approaches to managing
IT, think again: ITIL can make your job, and your life, a lot easier.
ITIL is a set of British best practices that provide guidance on how to implement
IT service management (ITSM), a framework specifically designed to confront and
reduce IT organizational complexity. As a storage professional, you could benefit
greatly from an ITIL implementation if you think you’re currently spending too
much time on any—or all—of the following tasks:

• Working on non-storage issues


• Reworking storage implementations because of design flaws
• Phone support
• Maintaining non-storage-specific tracking tools (home-grown change, incident
or request tools)
• Creating one-off reports with little notice

• Tracking assets
• Coordinating work with other teams
• Working on projects that don’t fully meet end-user requirements
or maximize return on investment
• Responding to storage outages
• Not sharpening your storage skills

Storage management doesn't happen in a vacuum. Storage teams
have a long list of other corporate groups they have to work with directly
or indirectly: end users, help desks, call centers, first- and second-line
operations support, as well as monitoring, server, security, asset,
auditing, configuration, architecture, engineering planning and finance
teams. If all that interfacing isn’t enough, the storage team also has
to provide meaningful reports to all levels of management to ensure
operational objectives are being met. From a storage management
perspective, the goal of ITSM isn’t to direct storage administrators
on how to do their jobs, but to focus more on:
• Aligning storage teams to work better with other teams to
achieve organizational goals


• Leveraging and automating common processes and tools when
possible

From an ITIL perspective, storage management is classified as a


“technical management function,” and as such, ITIL directs storage
management teams to:
• Maintain storage technical skills, and support documentation and
maintenance schedules
• Write procedures and train front-line service desk, call center
and operational support teams
• Own relationships with data storage vendors


• Design and maintain storage systems
• Act as an escalation point for incident and problem management
involving storage subsystems
• Be fully engaged in all five defined phases of the ITIL service
lifecycle

THE SERVICE CONCEPT


Working with multiple teams that may have many different systems
for change control, problem management, incident management and
customer communications places a huge burden on the typical enter-
prise storage function. But that’s where ITIL excels because it provides
a framework on how best to coordinate storage management work
with other teams. ITIL’s main mechanism to address organizational
complexity is through the concept of a service, which is defined as
follows by ITIL v3: “A ‘service’ is a means of delivering value to customers


by facilitating outcomes customers want to achieve without the owner-


ship of specific costs and risks.”
ITIL v3 breaks service management into five distinct phases (with
corresponding publications):
• Service strategy
• Service design
• Service transition
• Service operation
• Continual service improvement

Storage functions are required to participate fully in all phases of the


service lifecycle. The ITIL v3 service operations guide (sections 5.3.2, 5.6,
6.5 and 6.6) outlines what storage and archive are responsible for. Read
those sections, but don’t be disappointed with the lack of detailed infor-
mation. Don’t expect ITIL to be delivered to your storage team; as a storage
“technical function,” you’re expected to create services and service com-
ponents that deliver value to your customers. To achieve that, ITIL
attempts to ensure that no single IT
process, function or technology is working against the greater good.
ITIL provides a framework to ensure that important storage functional
decisions affecting storage architectures and processes are deployed in
a way that best supports the organization.

Service strategy: The goal of developing a service strategy is to create
service-level requirements that are then delivered to the service design
phase. The first and most basic step is to determine what services will be
offered to which user communities.
After a set of services is defined, the
next step is to define important attributes for the service. In addition
to base functionality, there are many other service attributes that are
important to the service’s end users and customers, including avail-
ability, cost, security policies, performance and time to deliver. The
storage function plays a critical role in service strategy by verifying
that storage design assumptions are in line with anticipated service
offering functionality and costs.
During the service strategy phase, service architects will determine
whether there will be a separate storage service or if the storage func-
tionality will be offered as a component of a more comprehensive service.
There are a number of factors that need to be considered when decid-


ing to create a separate storage service, including:


• Organizational size. Organizations that have large storage envi-
ronments may benefit from a separate storage service consisting of
NAS, SAN and/or backup components that clearly defines the storage
service offering and tracks service compliance via service metrics.
• Organization breadth. Organizations with a broad application base
that use repeatable or very similar storage solutions may benefit from
a consistent and measurable storage service offering.
• Organizational agility. Organizations that operate in turbulent


or fast-paced markets, in which storage is an important part of their
product delivery, may benefit from a separate storage service offering.
For example, an email service provider may provide free storage as part
of its offering. To support the consumer-facing email service, an internal
storage service may be created to ensure that the storage infrastructure
is properly aligned and updated with new capabilities at a speed required
by the marketplace.
When creating a storage service, the first step is to decide on what
entries the service catalog should include based on customer requirements
(see "A sample storage service catalog," p. 34). The service catalog is
usually divided into two parts: internal catalog entries that are viewable
and available to order by internal IT groups, and external catalog entries
that are for groups outside of IT.
Service-level requirements should specify what’s required from
the service, the availability of the service and security levels; the time
to deliver objectives should also be included. For a storage service
that includes backup, NAS, SAN and data archiving, attributes that
are important to the service customer are time to deliver, perform-
ance, recovery point objectives (RPOs), recovery time objectives
(RTOs) and cost; all should be detailed in a catalog for the service.
The storage function should play a major role in helping to define
these attributes.
The majority of the service catalog won’t be available to the end


users, but will be used by other services. For example, a server hosting
service may include SAN options for an end user requiring large
amounts of storage. The designer of the hosting service needs to
be aware of how storage service attributes such as availability, RPO,
RTO and performance affect the hosting service attributes.


A SAMPLE STORAGE SERVICE CATALOG

A storage service catalog defines the storage resources that are available
to the user community as well as to other IT functional teams. The catalog
clearly spells out the nature of each service, including a description, the
costs associated with the service, expected performance and availability.

Service offering* | Notes | Visibility | Cost* | Performance | Maintenance window
SAN Storage Platinum (Dual Data Center) | Tape backup not included | IT only | $1.50 per month per protected GB | High: 15,000 rpm drives | None
SAN Storage Gold | Tape backup not included | IT only | $1.00 per month per GB | High: 15,000 rpm drives | Sunday 03:00 to 12:00
SAN Storage Silver | Tape backup not included | IT only | $0.35 per month per GB | Medium: 10,000 rpm drives | Sunday 03:00 to 12:00
SAN Storage Bronze | Tape backup not included | IT only | $0.10 per month per GB | Low: 7,200 rpm drives | Sunday 03:00 to 12:00
SAN Point-in-Time Copy (SAN Storage Gold add-on only) | Tape backup not included; copy exists within same data center; not available for SAN Storage Silver or SAN Storage Bronze | IT only | $1 per month per GB | High: 15,000 rpm drives | Sunday 03:00 to 12:00
Shared File System | Tape backup not needed; cross-data center point-in-time disk backups performed every day; backups kept for 30 days | IT and end user | $0.10 per month per GB | Low: 7,200 rpm drives | Sunday 03:00 to 12:00
Backup—High Performance | One copy of backup kept on cross-data center backup subsystem disk cache; good for large databases with aggressive recovery time objectives | IT only | $0.50 per GB per month | High: 15,000 rpm drives | Sunday 03:00 to 12:00
Backup—Standard | Weekly full backup kept cross-data center for 90 days; file system backups only for changed files | IT only | $0.07 per GB on tape per month | Standard | Sunday 03:00 to 12:00
Long-Term Archive | | IT and end user | $0.07 per dual data center copy | Standard | Sunday 03:00 to 12:00

* The services and costs shown here are examples only, and aren't related to any particular storage environment; they have been devised to demonstrate how costs will vary based on the type of service offered.
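
Teams that want to expose such a catalog to request-fulfillment or chargeback tooling can also represent it as structured data. The Python sketch below is one hypothetical way to do that using two entries from the sample table; the field names and the monthly_charge helper are assumptions for illustration, not anything prescribed by ITIL.

```python
from dataclasses import dataclass

@dataclass
class CatalogEntry:
    """One storage service catalog entry, mirroring the sample table's columns."""
    offering: str
    notes: str
    visibility: str               # e.g. "IT only" or "IT and end user"
    cost_usd_per_gb_month: float
    performance: str
    maintenance_window: str

CATALOG = [
    CatalogEntry("SAN Storage Gold", "Tape backup not included", "IT only",
                 1.00, "High: 15,000 rpm drives", "Sunday 03:00 to 12:00"),
    CatalogEntry("SAN Storage Bronze", "Tape backup not included", "IT only",
                 0.10, "Low: 7,200 rpm drives", "Sunday 03:00 to 12:00"),
]

def monthly_charge(offering_name, provisioned_gb):
    """Simple chargeback calculation against the catalog."""
    entry = next(e for e in CATALOG if e.offering == offering_name)
    return provisioned_gb * entry.cost_usd_per_gb_month

print(monthly_charge("SAN Storage Bronze", 500))   # 50.0
```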


Service design: The storage function also plays an important role in


the service design phase of the service lifecycle. Based on service-level
requirements, the storage function is required to create the plans on
how storage-specific requirements will be achieved. The results of the
service design phase are a service design package that should include
details on the end state of the storage solution. Guidance on how to
transition the storage service components to operations should be a
part of the service-level design package.

Service transition: In the service transition phase, the service design


package is implemented and set into operation. Following proper change
management and deployment principles, the storage function prepares
service desk and level-1 and level-2 storage support teams with proper
diagnostics and maintenance schedule procedures. The storage team
also maintains the service and technical documentation that supports
the storage components. The team will take the lead in coordinating
storage system changes as dictated by the service design package,
while also owning the relationships
with the outside storage vendors and
service providers.
Another focus of ITIL is process and service commonality. ITIL describes
the attributes of change, request fulfillment, capacity, event, availability,
problem, incident, and configuration and asset management processes, as
well as the attributes of a service. An important part of moving to a service
model is mapping the various processes and service roles to the storage
functional teams.
Service operations and continual service improvement: Storage techni-
cal management plays a direct role in
technical operations. As stewards of data storage technology, this team
is responsible for planning storage technology and technology upgrades,
evaluating technologies and maintaining storage skills. The storage
function must also monitor operations, and implement and oversee
service improvements during the continual service improvement
phase. The storage function will train front-line operations and request
fulfillment teams to perform repeatable, low-risk storage management


tasks such as rerunning a backup, provisioning and exporting a file
system, and presenting a LUN. High-risk tasks, such as partitioning an
array, SAN configuration and filer policy setup, should be performed by
high-level storage management rather than general operations teams.


REAL-WORLD IMPLEMENTATION ISSUES


In all but the simplest ITIL implementations, there will be tension created
by the extra work placed on the storage function to implement ITIL vs.
the expected enterprise benefits. Implementing the ITIL framework,
especially at the beginning of an ITIL project, may sometimes appear to
add work with little benefit. This may be because not all of the services
or all storage environments will be a part of the initial ITIL release. During
the transitional period a storage functional team may therefore be
required to support legacy tools and the new enterprise tools, thus
increasing the number of interfaces the storage teams have to main-
tain and monitor. Hopefully, it will only be a temporary condition. Most
of the time the ITIL processes are more rigorous in addressing change
and risks, and they tend to split activities into multiple tasks. For example,
before an ITIL implementation, an on-call person might have simply
received a page or other alert and then addressed the issue. With
ITIL processes in place, this simple act may be divided into an “event,”
“incident,” “RFC” (request for change) and a “problem”—with each of
these put into a different tracking database and requiring a different
set of actions and roles.


But at the macro-enterprise level, ITIL benefits are more apparent:
• Improved coordination. Cross-functional process teams work
together to implement policies in a standardized way, using common
tools and a common language to help reduce the level of organizational
confusion or misunderstanding. Because of the benefits of scale, an ITIL
implementation may cost-justify process automation opportunities that
previously weren’t justified at a storage-function-only level.
• Reduced complexity. ITIL should help reduce or eliminate redun-
dant processes, tools, technologies, queues and interfaces that the
storage team has to work with.
• Increased transparency. Service- and enterprise process-level


reporting will provide management and auditors better quality and
more actionable reports.
• More successful releases. Introduction of new functionality and
updates will have a better chance of being successful and maximizing
return on investment.
• Reduced outages. Better process-handoff definitions will result
in fewer outages. Most causes of operational outages that involve
storage subsystems aren’t a direct result of a storage administrator
error or storage subsystem failures; root causes of storage outages
usually involve a handoff error, such as:


—Incorrect server name provided for a LUN deletion
—Cold backup scheduled during production window
—Scheduled NAS filer outage impacted production servers
inadvertently


EFFICIENCY AND SUSTAINABILITY


The main goal of using the ITIL framework is to ensure the IT organization
delivers value to the organization in an efficient and sustainable way. ITIL
provides a framework that helps align the organization so it’s better posi-
tioned to achieve the overall objectives of IT and the organization. This
should be incentive enough to fully support your ITIL program, but as a
storage professional you have an extra incentive: ITIL compliance will
allow you to spend more time working on storage-specific projects and
architectures which, in turn, will allow you to better maintain, sharpen
and expand your storage skill set. 2

Tom Woods is currently global ITIL services transition manager at Ford


Motor Company. At Ford, Tom has held storage operations, engineering and
architecture positions, and has supervised the backup and NAS teams.


Hospitals strive for centralized image archives

New regulations mandate the digitization and retention of medical records,
leaving hospital IT pros looking to cut costs by centralizing image archives.
But there are many technical and political hurdles to overcome. By Beth Pariseau

HOSPITAL AND MEDICAL center IT departments are struggling to control the storage of elec-
tronic medical images as new regulations require digitization and retention of med-
ical records. Many of the issues related to these efforts will be familiar to enterprise
IT pros in other industries, from application integration and centralization of IT assets
for delivery as a service to internal customers, to coping with regulatory requirements
that contribute to data growth.
But when it comes to the healthcare sector, the same decisions are magnified
because the business is literally life and death. Hospital IT managers say that in ad-
dition to technical integration issues, interdepartmental politics, and the question
of who will assume the risk for the creation and preservation of medical image data
make the effort to bring image data under more efficient centralized control an
uphill battle.

38 Storage July/August 2010


STORAGE
Data protection is
not enough

As healthcare IT has evolved, users say data storage systems for


different imaging systems, such as X-rays and cardiology images, have
been purchased by the department running each picture archiving and
communication system (PACS) application, and are often sold by PACS
vendors as turnkey packages that include storage hardware. Today,
large medical centers are contending with growing islands of storage
while coping with shrinking budgets. As regulations and data continue
to mount, hospital CIOs and IT admins are looking to centralize image
archives to make them more manageable and cost-efficient.


“It’s like trying to herd cats
to do integrated PACS archiving,”
said Michael Passe, storage archi-
“It’s like trying
tect at Beth Israel Deaconess to herd cats
Medical Center (BIDMC) in Boston.
However, “the storage platform
to do integrated
is managed by IT and, long term, PACS archiving.”
we’d like to offer it as a service —MICHAEL PASSE,
Turning storage

where we control the budget storage architect, Beth Israel


into a service

instead of 20 different people Deaconess Medical Center


doing different projects,” he said.
Michael Biedermann, systems analyst at University of New Mexico
Hospital in Albuquerque, said PACS vendor Royal Philips Electronics of the
Netherlands is hosting archival data for the hospital’s main radiology sys-
tem, which takes some of the burden of data growth off IT’s hands. “It’s
mostly the smaller systems that we’re still running in-house that we’re
trying to get a handle on,” Biedermann said. “When they were originally
presented to us, IT was never given a real roadmap of how this was all
going to work. When it all started, we had no clue what to try to prepare
for. Now we’re trying to kind of fix this all on the fly. That’s why this uni-
versal vendor-neutral archive, if it truly existed, would be a godsend.”
Other hospitals are also contending with facility and staffing limitations
that make managing multiple, growing islands of storage all but impossi-
ble. “We’re a small shop. We have two people that are network folks
and do data center and storage, and kind of wear all the hats,” said Marty
Botticelli, CIO at Boston-based New England Baptist Hospital (NEBH).
Massachusetts requires X-ray images to be retained for 30 years.

THE GOAL: VENDOR-NEUTRAL ARCHIVES
Michael J. Cannavo, president of Image Management Consultants, a


Winter Springs, Fla.–based PACS consulting firm, has coined the terms
“vendor-neutral archive” (VNA) and “vendor enterprise archive” (VEA) to
describe the vision of centralized archives for image data, as well as
centralized archives that blend not only multiple imaging formats but
other types of electronic medical record data in an integrated repository.


“A VNA is a standards-based archive that works independent of the


PACS provider and stores all data in non-proprietary interchange formats,”
Cannavo said. A true VNA, he argues, would provide “context management”
(metadata) that allows information to be transferred seamlessly among
disparate PACS operations through the Digital Imaging and Communi-
cations in Medicine (DICOM) and Health Level 7 (HL7) standards, with-
out requiring data migration and reformatting. Such an archive would
cut down on the expense and time required to migrate data as tech-
nology advances, improve disaster recovery (DR) and business continuity
(BC), and provide a centralized “one-stop shop” for patient data readily
accessible by physicians.
“You don’t want to have to migrate the entire repositories or have to
have multiple architectures with support and service contracts,” Cannavo
said. “A ton of education needs to go on about using centralized archives.”

STANDARDS AREN’T THE ONLY ISSUE


One of the main obstacles to achieving VNA and VEA utopia is that the
DICOM standard is applied differently by different vendors. "DICOM is not
a standard about what to communicate; [it says] 'If you communicate,
here's what to do,'" said Michael Valante, global business lead for enterprise
imaging informatics at Philips Healthcare. "The number of [metadata]
elements mandatory in DICOM is not equal to the amount of metadata
that actually needs to be communicated. Different modalities within the
same organization may deliver different data elements with different
exams.”
Healthcare IT managers say there are a multitude of other logistical
hurdles to clear as well, including data center space, budgetary restrictions,
internal politics, and the difficulty of performing data migrations and
forklift upgrades.
NEBH’s Botticelli said space is at a premium in his facility, located
in one of the more expensive cities for real estate in the country. "Our
facility is over 100 years old and there are not a lot of places for us to
just continue to put on-site spinning disk,” he said.
Internal budget negotiations and territorial politics that have grown
up around disparate systems can also be a barrier to centralization.
According to BIDMC’s Passe, his organization is rolling out EMC Corp.’s
Atmos system as a centralized, scalable archive for images. There are

40 Storage July/August 2010


STORAGE
Data protection is
not enough

two 120 TB “cubes” installed and storing cardiovascular diagnostic data


among multiple modalities in the cardiac space, and BIDMC may add an-
other cube for radiology, currently approximately 100 TB stored on an EMC
Centera. But Passe said different departments might compete for budget
and revenue within hospitals, and sometimes don’t want to share systems
with other departments or are hesitant to cede control to IT.
“We had a green-field opportunity
for medical records

with cardiology. They don’t have 100


“This, to me,
Prescription

TB worth of images,” Passe said.


“We’ve been waiting to kind of pull is something
[radiology] into the fold but we
needed some kind of track record . . .
we want to do
having IT manage the long-term and we have to
stuff. We’re still working through
some of this stuff with radiology.”
do, but when
Even when everyone is on I think of everything else
board, budget issues can thwart
on the plate, I can see this
Turning storage

efforts. “This, to me, is something


into a service

we want to do and we have to do, being a much lower priority.”


but when I think of everything else —BRAD BLAKE, director of infrastructure
on the plate, I can see this being and engineering, Boston Medical Center
a much lower priority,” said Brad
Blake, director of infrastructure and engineering at Boston Medical
Center (BMC). “This is all stuff that we all want to do, but I just can’t
fathom it being a priority or something that’s going to get what little
budget money is available in the coming year.”
The difficulty of migrating data can also impede centralization efforts,
according to Image Management Consultants' Cannavo, who predicts
it will take years to see widespread progress on such projects. “Data


migration is not cheap, not fast and you have to plan before you can
use it,” he said. “As it stands, virtually everyone out there has to migrate
from one format to another.”

SOME RESISTANCE FROM PACS VENDORS


Users also say PACS vendors can stand in the way of merging archives. “If
you look at PACS systems in general there’s not a lot of standardization
across a lot of these solutions. Each solution is vendor specific, and they
package [hardware with it] so it's a whole solution when they sell it," said
Irwin Teodoro, director of systems integration at Laurus Technologies Inc.,


a VAR with consulting and sales practices dedicated to healthcare IT.
NEBH’s Botticelli said he’s looking to use Iron Mountain Inc.’s Digital
Records Center for Medical Images (DRCMI) to get around his data center
space problem, but has run into resistance from his PACS vendors. “It’s
not like we’re the first ones in the country to do this, but when we brought

41 Storage July/August 2010


STORAGE
Data protection is
not enough

this solution to the table there was just a lot of resistance: ‘How are you
going to migrate the data?’ and ‘We need to sign off on it,’” he said. “It’s
taken us a long time—a good six months—to really utilize the technology
that we feel would provide us a vendor-neutral solution for medical imaging.”
Other users have encountered similar resistance, but not just because
PACS vendors are trying to protect their own bottom lines. "PACS vendors
are not open to remote hosted or centralized [storage] unless it's theirs.
[But] it's really driven by the performance guarantees that health systems
are requiring from the vendors," said Steven Roth, vice president and CIO
at PinnacleHealth System, Harrisburg, Pa.
PinnacleHealth has much of its PACS data archived offsite by Philips,
which also assumes the risk for uptime and performance, Roth said. "In
our example, we had . . . an agreement with Philips where they are at
risk for delivering [fast]
response times with five-nines reliability, and they basically have said,
‘We will not take the risk at that level of performance unless we can
manage and control the environment,’” Roth said.
But others in healthcare IT argue that providing performance and
availability is what IT departments are in the business of doing. "It's not
up to the vendor to put you in a compliant mode. It’s up to us,” said Jim
Touchstone, senior systems engineer at Mississippi Baptist Health Systems
(MBHS) in Jackson. The hospital worked with its vendors to put together
its own integrated data center stack. The infrastructure includes a pri-
mary IBM DS8300 SAN storage system shared among modalities and an
IBM N5200 array for disk-based near-line archiving.
“This 8000 array . . . we’ve had it on our floor for four years and, knock
on wood, never had one minute of downtime with it,” Touchstone said.
“We’ve had the N5200 for four years, and we’ve failed over before because
we had a drive go out, but we never had any downtime. Our network is
fully redundant; there are answers for the availability problem.”


“As an industry we’re not at odds with or opposing . . . what our
users want to do,” said Philips Healthcare’s Valante. In addition to the
Health Insurance Portability and Accountability Act (HIPAA), the Ameri-
can Recovery and Reinvestment Act (ARRA) and state requirements
for retention of records, devices hosting PACS information are regulated

42 Storage July/August 2010


STORAGE
Data protection is
not enough

by the Food and Drug Administration (FDA). Compliance with those reg-
ulations is also a risk PACS vendors assume for customers, Valante
argues. “Once you start opening those doors, you’re accountable to
FDA filing,” he said. “We do a lot of exhaustive validation and testing.”

SOME HOSPITALS UNDAUNTED
"They must all go to the same school because I heard that same thing
about it being a certified, FDA approved, blah blah blah. When push came
to shove, I didn't listen and [my PACS vendor] supported it," said Michael
Knocke, CIO at Kansas Spine Hospital in Wichita. The hospital has
approximately 6 TB of capacity on primary, local secondary and remote
secondary Compellent Technologies Inc. SANs, and runs PACS applications
from a single vendor across the board. (Knocke declined to identify
the vendor.)
Knocke acknowledges he had
some advantages that larger facilities
don’t have, including a smaller data set and a single PACS vendor for all
modalities. “But that didn’t make it easy,” he said. “At the time that I
moved my images from where they were being stored originally to a
Compellent SAN, the [PACS system] OEM vendor was not happy. They
basically said, ‘No you can’t do that’ and I said ‘Well, I’m going to do it’
and they begrudgingly continued to support me even though there
were threats.”
Micha Ronen, PACS administrator at Phoenix-based Sun Health Corp.,
which was acquired by Banner Health in 2008, is in the process of
merging his NetApp Inc.-based PACS archiving systems into Banner
Health’s Bycast grid (Bycast Inc. was acquired by NetApp in 2010). “One
of the beauties of Bycast is that for us, migration is not an issue,” Ro-
nen said, because it can layer over heterogeneous storage repositories
without requiring data to be moved. At least, that's how Bycast has
worked historically; NetApp has since said it will not support third-party
storage systems going forward under its Bycast-based StorageGRID
software unless they’re fronted by its V-Series gateway and made to
look like NetApp storage.
Ronen is unconcerned with this change. “For us, I don’t foresee any
problem. They would definitely like to get a bigger share of the storage
we have and so will have to accommodate our environment.”


ARCHIVE ALTERNATIVE: CLOUD STORAGE


While some users forge ahead with internal centralization efforts, other
healthcare IT managers are eyeing private or public cloud storage as a
solution to data growth and manageability in the face of obstacles to
centralization.
“In Massachusetts, the reten- “The internal
for medical records

tion for medical records was 30


years, seven years for images cloud is like
Prescription

and office records, and now dipping a toe in


[there’s talk] they’re going to
change that regulation to 15 the water. When
years, but that’s still 15 years of we first looked
storage,” said NEBH’s Botticelli.
“That’s huge. For us, we just at it, it seemed like the right
don’t have buildings and data thing; it’s just that every-
center space to put up the
spinning disk. It’s a challenge.” body’s terrified of the
Turning storage

BIDMC’s Passe is sticking security model and the SLA


into a service

with the internal private cloud


route for now using Atmos. “At- model when you start to go
mos support for multi-tenancy into the ‘real [public] cloud.’”
gives the ability to securely wall —MICHAEL PASSE, storage architect,
off one department’s data from Beth Israel Deaconess Medical Center
another and show them what
their usage is,” he said. “The internal cloud is like dipping a toe in the
water. When we first looked at it, it seemed like the right thing; it’s just
that everybody’s terrified of the security model and the SLA model
when you start to go into the 'real [public] cloud.'

“I can see [external public clouds] being useful if you’re spinning off
organizations, or if you’re bringing in new external organizations, or if
you have to bring in data from external organizations that you’re being
paid on a contract basis to read,” Passe continued. “When [PACS vendors]
make the leap and start to look at REST in earnest, it’s huge . . . Web 2.0
includes the metadata with the object, so imagine not having to have
the databases . . . that eliminates one more piece of the puzzle or at
least makes it redundant so if the database crashes, you can rebuild it
from the objects themselves.”
"In theory, vendor neutrality would eliminate a lot of costs in the
environment,” Laurus Technologies’ Teodoro said. “I just don’t know the


feasibility of [centralization]. I think healthcare organizations will look
at the cloud first.” 2

Beth Pariseau is a former senior news writer for TechTarget’s Storage Media
Group; she is now assigned to TechTarget’s Data Center Media Group.



hot spots | terri mcclure

Cloud storage ecosystems mature

A number of vendors have emerged that provide a bridge to cloud storage
services, as well as extended security, availability and portability to cloud
storage service provider offerings.

EVERY NEW TECHNOLOGY follows a predictable path from drawing boards and
beta tests to product hypes and launches. Along the way, users often
follow their own inevitable route: becoming enchanted, then unsure and,
finally, cynical.
After that, the necessary ecosystems emerge to fill in the technology
gaps and ensure that real user requirements are satisfied. Value propositions
that solve real-world user pain points emerge, and we slowly start to see
adoption. Roughly two years after cloud storage was first floated, that’s
where we find ourselves.
Most cloud storage service provider solutions on the market today offer
basic storage capacity and data protection, such as RAID or remote mirroring,
as well as RESTful APIs for porting applications. This is all good. But
challenges arise because most applications in today's IT shops do not speak
REST (Representational State Transfer) because they require block interfaces
such as iSCSI, SCSI or Fibre Channel; or file interfaces such as NFS or CIFS.
Every cloud storage vendor has its own
RESTful API, but they’re proprietary; so once data is stored there, it’s diffi-
cult (if not impossible) to move it to another service provider. In addition,
users are concerned about security, accessibility and availability once data
leaves the four walls of the data center.
To address these issues we’re seeing a new category of cloud storage
ecosystem vendors emerge. These cloud storage enablement vendors
should fill the gaps inherent in many cloud storage products.

CLOUD STORAGE ENABLEMENT PLATFORMS


We’ve seen a number of vendors emerge in the past year that not only pro-
vide a bridge to cloud storage services but bring extended security, availability
and portability to cloud storage service provider offerings. On the surface,


much of the messaging sounds very similar, which it should, as they solve
similar problems. But putting them all on the same playing field would be
like comparing a Data Robotics Drobo FS to a NetApp FAS2000; both are
network drives and both store data, but they have very different use cases.
On a generic level, enablement platforms offer the following:
Data portability. Cloud storage enablement vendors write to the propri-
etary cloud storage service provider APIs and act as a translation layer
that offers a standards-based interface to the application. Application in-
terfaces don’t need to be rewritten if users switch cloud service providers.


Some enablement platforms can mirror between cloud storage service
providers, offering the ultimate in portability because data can be moved
seamlessly by mirroring and then cutting over.
Integration with existing environment. Cloud storage becomes “plug
and play” when an enablement vendor is used because the platform pres-
ents a standards-based application interface into the IT environment.
Data availability. Enablement vendors bring snapshot capabilities to
cloud storage. This ability to snapshot the contents of a cloud data store
protects against accidental deletions and software corruption issues that
could otherwise render cloud storage risky (at best) or unusable.


Data security. Security continues to be a priority for users considering
cloud storage, and security capabilities vary by enablement vendor. At the
very least, cloud storage enablement vendors offer encryption and leave
the encryption keys in the hands of the subscribers. Users must ensure
their enablement platform encrypts data in flight and at rest—from the
time it leaves the data center and throughout the flight path.
Local-like performance. Cloud storage enablement vendors offer caching
algorithms that ensure active data is stored locally, mitigating issues with
distance from cloud providers. Though cloud storage isn't likely to be in a posi-
tion to support OLTP applications any time soon, most applications in the data
center don’t require big iron IOPS and cloud storage could sufficiently meet
performance requirements for those.
Data reduction technology. Compression and data deduplication can
help reduce the overall amount of data stored, as well as bandwidth
requirements and costs.
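
As a small illustration of the data portability and data security points above, the Python sketch below shows the basic pattern an enablement layer follows: expose a simple read/write interface to the application, encrypt with a key that stays with the subscriber, and only then hand ciphertext to the provider. It uses the third-party cryptography package for encryption; provider_put and provider_get are hypothetical stand-ins for a provider's proprietary API, which is the part a real gateway would translate per provider.

```python
from cryptography.fernet import Fernet

# The encryption key stays on premises with the subscriber, never with the provider.
LOCAL_KEY = Fernet.generate_key()
cipher = Fernet(LOCAL_KEY)

# Hypothetical stand-ins for a provider's proprietary object API.
_cloud = {}
def provider_put(name, blob): _cloud[name] = blob
def provider_get(name): return _cloud[name]

def write_object(name, data: bytes):
    """Standards-looking write: encrypt locally, then ship ciphertext to the cloud."""
    provider_put(name, cipher.encrypt(data))

def read_object(name) -> bytes:
    """Standards-looking read: fetch ciphertext, decrypt with the local key."""
    return cipher.decrypt(provider_get(name))

write_object("object-0001", b"application data")
assert read_object("object-0001") == b"application data"
assert _cloud["object-0001"] != b"application data"   # provider only sees ciphertext
```
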
Cloud storage enablement vendors include Nasuni Corp., which offers
a Windows file server product that could be used to store user files or to
create an off-site copy of Windows file-based data, among other use cases.
TwinStrata Inc. offers a block storage interface to the subscriber that pro-
vides an iSCSI interface and could be integrated into a backup or disaster


recovery scenario. Both Nasuni and TwinStrata are sold as virtual appliances.
Cirtas Systems Inc., still in semi-stealth mode, offers a hardware appliance
that acts as a local cloud storage controller, where the target storage sits
in the cloud. It also presents an iSCSI interface that could drop into a user
environment to support tier 2 applications. Panzura Inc. and StorSimple Inc.

46 Storage July/August 2010


STORAGE
Data protection is
not enough

have introduced offerings that target Microsoft Exchange and SharePoint,


helping users to scale and protect these applications.
Most of these vendors don’t compete head to head. It’s therefore important
to do your homework to understand the use case and application-specific
benefits each brings to the table.

EMERGING STANDARDS
Cloud storage may not be a proprietary world forever. The Storage Networking
Industry Association (SNIA) has a working committee dedicated to creating


the Cloud Data Management Interface (CDMI) standard for communication
with and between cloud storage service providers. While this may bring
portability to data, it doesn’t obviate the need for cloud storage enable-
ment vendors. Users will still need an intelligent layer to mitigate latency
and provide availability, security, integration with the existing environment,
and the ability to address the other issues raised here. Standards will help
with adoption, but enablement vendors bring a standards-based interface
to the data center that will give users immediate access to the larger ben-
efits of the potential cost reduction and IT elasticity that cloud storage
brings to the table. 2

Terri McClure is a storage analyst at Enterprise Strategy Group, Milford, Mass.




read/write | jeff boles

Align data protection with business importance

There's a big difference between backup and business continuity.
Any-point-in-time technologies can extend data protection so that
application use is protected as well.

TODAY'S BUSINESSES CARRY a thread that stretches back to the beginning of


the historical record—every business still revolves around some valuable
resource. For modern businesses, that valuable resource is digital informa-
tion. But when it comes to protecting that information, the digital era has
brought with it exposure to more threats from more directions than ever
before. These threats can come from employees, bad software, faulty hard-
ware and numerous other sources. A business risks not only a potential loss
of information, but also the loss of “use” of the associated information system,
which is an irrecoverable loss of time.
Most veteran data storage managers have painful tales of the consequences
of loss of use. It may start with something as simple as a transactional
messaging system that loses messages into the ether during an outage.
And dropped messages can mean loss of revenue, loss of customers, lost deals
or bids, or legal exposure from a failure to meet compliance requirements.

PROTECTING USE, NOT JUST DATA


So it's a full blown head-scratcher that we have such little ability to protect
our use of information without turning to complex solutions that involve
many layers of infrastructure and application integration. It’s equally per-
plexing that protecting use is addressed by an entirely separate market
from the solutions that protect data. Data protection represents one of the
biggest slices of the data storage market, yet it’s often isolated from prod-
ucts designed to protect use—solutions that more often than not involve
multiple technologies such as primary storage with snapshots, replication
technology, application synchronization tools, failover and failback solutions,
and others.
The consequences of the separation are fairly steep. Many organizations
end up with a false sense of protection; those that see the shortcomings
of a pure data protection strategy often have no better alternative than to
pursue massive, complex distributed disaster recovery (DR) projects. The
cost implications associated with this disconnect with business require-
ments are astounding. DR infrastructures aren’t only expensive to imple-
ment, they’re equally expensive to maintain. The alternatives may involve

48 Storage July/August 2010


STORAGE
Data protection is
not enough

many complex IT components that are difficult to maintain, including host


agent replication solutions, redundant storage arrays, redundant servers,
replication licenses, WAN optimization devices and so forth. So organizations
buy into more complexity than they would prefer because failing to have good
use protection might put them out of business.

THE PITs
For most businesses, backup is a daily operation that's about capturing
a copy of important data at a daily point in time (PIT). Whether using full
backups, or some combination of fulls, incrementals and differentials, back-
up has always captured a frozen PIT and then sat idle until the next backup
window. This periodic cramming of data through a backup infrastructure
has created an enormous industry that’s constantly innovating with new
technologies like data deduplication and virtual tape libraries (VTLs).

LEVERAGING PIT
So what should you do? By leveraging various combinations of often over-
looked backup technologies, companies can better align their data protec-
tion with application requirements. Behind this service-oriented alignment
of data protection is the concept of business-vital applications and the
premise that these applications should be more deeply protected with a
higher priority, speed and frequency.
Any-point-in-time (APIT) technologies are at the core of this approach.
APIT apps can granularly capture data as it changes in real-time, effec-
tively allowing retrieval of data from
any historic point in time. APIT tech-
nology coupled with application intelligence can be used to reconstruct
immediately usable data, which allows nearly immediate recovery in the
event of a system or data loss.
First-generation APIT technology didn’t go beyond pure data protection,
but that’s changing. Nearly every data protection vendor has snapped up or
built its own APIT technology, with some now moving beyond data protection
to APIT use protection. Granted, there isn't a seamless failover as with active-
active clustering approaches, but for many systems this invisible-to-the-


user, nearly instantaneous recovery more than meets requirements.

PROTECTING USE
The capabilities needed for a better approach to data protection—one that
encompasses protecting use—include:


1. Built on any-point-in-time fundamentals. Not just the latest point


in time, but the ability to roll back to any point in time, on top of an optimized
data store that only captures the data that has changed
2. Built for instantaneous access. Instantaneous access to a protection
repository without copy or restore operations is a fundamental prerequisite
for use protection
3. Data protection integrated. Use protection must be integrated with
backup tools and processes, ideally looking like little more than yet another
tier of backup, within a single management toolset


4. Protected data that can be moved. A use protection system should
also be data movement enabled, allowing businesses to replicate data
beyond the confines of one location
RETHINK DATA PROTECTION PROCESSES
There are many ways use protection can alter data protection practices
for the better. A look at a few opportunities will shed a little light on the
possibilities.
Do away with recovery point objective (RPO) and recovery time
objective (RTO). Use protection can turn backup into a continuous opera-
tion that does away with the limitations of backup windows, while making
protected data and applications immediately usable. Moreover, data behind
well-integrated solutions can be easily moved from a capture repository
into traditional disk- or tape-based backup systems. The benefits of this
approach, especially for virtual server environments, shouldn’t be overlooked.


Next-generation bare metal. Bare-metal recovery technologies have
long been a must-have technology. But APIT technologies combined with
server virtualization can make complex image builds, specialized agents
and driver management a thing of the past. Instant recovery from an APIT
repository, replicated anywhere and combined with the abstracted, vanilla
configuration of virtual servers means a server can be recovered to any
hardware anywhere with the click of a button.
Applications everywhere. Data on disk is easy to move, but data is
particularly easy to move when it's free from needless redundancies and
captured as a relatively low bandwidth stream. Use-protection systems


can capture data across distances, or in multiple locations, and enable a
wide range of consolidation and heightened availability approaches. Multiple
small offices could be protected by a single system in a branch office, with
the branch office protected by the data center. Small office systems could
failover to a spare server at the branch office, and branch-office systems could
switch to data center systems as needed.

50 Storage July/August 2010


STORAGE
Data protection is
not enough

VENDORS ARE RESPONDING


There are many data protection vendors today—such as BakBone Software
Inc., CA, CommVault Systems Inc., EMC Corp., Hewlett-Packard Co., IBM,
Symantec Corp. and others—with an assortment of protection technologies,
including APIT technologies.
Organizations should make “use protection” and dynamic recovery
capabilities priorities. We focus our data protection efforts on mission-
critical data, and a solution that protects use is the only suitable approach
to complement those efforts. Companies will be able to keep key apps


operating under the worst conditions without the overhead of protection
excess. 2

Jeff Boles is a senior analyst at Taneja Group. He can be reached at


jeff@tanejagroup.com.


snapshot

Unified storage offers savings and efficiency



“Two for the price of one” has always been an effective marketing strategy, and while
it’s not entirely accurate when talking about multiprotocol or unified storage, the idea
does seem to have some appeal for many data storage shops. In our latest Snapshot
survey, 53% of respondents reported that their companies use multiprotocol arrays. It’s
not surprising why: 35% of these users said using disk capacity more efficiently was
the main reason they went the multiprotocol route. But for 29%, the cost savings of
having both block and file storage in one box was the key motivator, closely followed by
those who felt it would be easier to manage the two together (28%). Current users seem
to be sold on the unified storage concept; 80% have more than one multiprotocol array
installed. Users were roughly split on how they divvy up their unified arrays: 37% allot more
capacity for files, 34% allocate more capacity for block and 29% use an even split. Thirty-
nine percent of the multiprotocol deployments were real 2-for-1 deals, with a single ar-
ray replacing separate file and block arrays. For 41%, their new multiprotocol systems
were additions to their environments. Mixing two IP-based protocols (NAS and iSCSI) was
the most popular configuration (75%). So how’s it all working out? Pretty nicely, it seems,
as 44% are “very satisfied” with their multiprotocol experiences. —Rich Castagna

What’s the main reason you What has been the greatest benefit
Turning storage

selected a multiprotocol array? of the multiprotocol array in your shop?


into a service

5% Other We use disk capacity


39%
more efficiently
3% Lacked
physical
Saved money
resources for 28%
two systems 28% 35%
Easier than Use capacity
managing more Easier to manage
two separate efficiently than a dedicated file or 25%
arrays block array
29%
Cheaper to Saved on space
and power 3%
combine file and
data protection
Continuous

block in one box 0% 10 20 30 40

4
Why haven’t you installed any multiprotocol arrays yet?

28% Don’t need the additional storage capacity at


this time
21% Prefer to keep block and file storage separate
Satisfaction with 16% We see no advantage to combining block and
file storage in one system
multiprotocol
arrays, on a 1-to-5 14% Our preferred storage vendor doesn’t offer
multiprotocol arrays
iSCSi and vSphere

scale where 5 is
11% Evaluating/Planning implementation
Good match:

“Very satisfied”
10% Other

“ We have no fixed SAN/NAS ratio. Storage is dynamically


provisioned (and reprovisioned) as necessary.”
—Survey respondent




Check out the following resources from our sponsors:

3PAR
Major Considerations for Deploying a Thinly Provisioned Storage Solution


3PAR: A Decade of Utility Storage Innovation

EMC Backup and Recovery Solutions, page 10


EGuide: Best Practices for Data Protection and Recovery in Virtual Environments

E-Guide: How Dedupe and Virtualization Work Together



Nexsan Technologies, page 4


DCIG 2010 Midrange Array Buyer’s Guide

OCZ Technology Group Inc., page 17


OCZ Enterprise Solutions -- Your Application, Our Engineers

OCZ Consumer Solutions -- Learn Why We are the SSD Experts



Overland Storage, page 12


Store. Protect. Archive. Overland 1.2PB of Storage Capacity FREE Product Promotion

Overland Storage Microsite: One-stop Shop for Product Info and Real World Deployment Scenarios Product

Quantum Corporation, page 7


E-Guide: Dedupe Dos and Don'ts -- Data Deduplication Technology Best Practices

Quantum Goes Beyond Deduplication
