Good Match:
iSCSI and vSphere
iSCSI storage is a good fit for virtualized servers.
Here’s what you need to know to make it all work.
P. 13
ALSO INSIDE
5 The new primary storage
8 The selling of IT
22 Continuous data protection: It’s back and better
30 How to turn storage into a service
38 Prescription for medical records
45 New techs fill gaps in cloud storage
48 Data protection is not enough
52 Users like efficiency of combo arrays
STORAGE sponsors | july/august 2010
REGIONAL SOLUTION PROVIDERS
Vendor Resources
53 Useful links from our advertisers.
A technology borrowed from backup may end
up the biggest thing to happen to storage in a long time.
WHEN THE BEATLES sang “You say you want a revolution” back in 1968, you can be
sure they weren’t singing about data storage. Changes to storage technologies
happen so slowly that it’s sometimes hard to recognize them even while
they’re happening. It’s more like evolution, and at a Darwinian pace at that.
Storage can be a real snoozer sometimes, so a couple of current develop-
ments are notable not only for the changes they’re likely to bring, but for
the pace of that change. Maybe “revolution” is too strong a word, but
sleepy old storage is about to get quite a shakeup.
Solid-state storage clearly ranks as a game-changer that will undoubt-
edly alter the face of storage. But that story’s going to take a little more
time to develop. Data deduplication, on the
variety of products. So, the other 75% of data storage shops are bound to come
around eventually.
But instead of coming late to the backup dedupe game, those potential
dedupe users might skip backup and go directly to primary storage for their
initial dedupe fix. I never would have thought that six months or so ago, but
there’s been so much happening on the data reduction in primary storage
(or what I like to call “DRIPS”) front that not only does it now loom as a
bona fide game-changer for primary storage technology, but it could pick
up momentum fast enough to slow down the backup dedupe express.
Of course, dedupe for backup and DRIPS are two entirely different things
even if some of the technologies they share are essentially the same. Backup
dedupe can help keep backups within their windows, provide faster
restores, and cut down tape use and handling, and, in doing so, can save a
few bucks by reducing the amount of disk capacity needed for backup data
before it ultimately ends up on tape. All good stuff and in some environments
the benefits might be considerable, but not all results are dramatic. And a
lot of shops apparently aren’t yet sold on the savings or don’t think their
Copyright 2010, TechTarget. No part of this publication may be transmitted or reproduced in any form, or by any means, without permission in writing from the publisher. For permissions or reprint information, please contact Mike Kelly, VP and Group Publisher (mkelly@techtarget.com).
Storage May 2010
The change is happening right now. Credit NetApp for putting DRIPS on the
map, even though the firm didn’t do all that much to promote it. But what was
once a small field of players is now rapidly expanding. DRIPS products are pop-
ping up all over: EMC joined the fray by adding compression to its midrange ar-
rays; HP says its new StoreOnce will run on its XP9000 storage systems within a
year; and just about every other major storage vendor has added (or announced
plans to add) data dedupe, compression, single instancing or some combination
of these data-crunching methods to their mainline storage products.
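Dedupe, compression and single instancing all trade CPU cycles for capacity by storing repeated content only once. A bare-bones sketch of the block-dedupe idea (an illustrative model, not any vendor's implementation):

```python
import hashlib

class DedupeStore:
    """Toy block-level deduplication: identical blocks are stored once
    and referenced by their content hash."""

    def __init__(self):
        self.blocks = {}   # sha256 digest -> unique block data
        self.refs = []     # logical volume: ordered list of digests

    def write_block(self, data: bytes) -> None:
        digest = hashlib.sha256(data).hexdigest()
        self.blocks.setdefault(digest, data)  # store unique content once
        self.refs.append(digest)

    def reduction_ratio(self) -> float:
        """Logical blocks written vs. unique blocks actually stored."""
        return len(self.refs) / len(self.blocks)

store = DedupeStore()
for block in [b"os-image"] * 8 + [b"user-data"] * 2:
    store.write_block(block)

assert store.reduction_ratio() == 5.0  # 10 logical blocks, 2 unique
assert len(store.blocks) == 2
```

The same mechanism works for backup streams or primary volumes; what differs is where the hashing happens and how much latency the workload can tolerate.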
Adding to the intrigue are recent announcements from two companies that
know something about data reduction, with new products that have the potential
to get to use their storage arrays as efficiently as possible. Either way it’s one
of those rare moments when the tech industry is creating something that ad-
dresses a real problem, rather than creating solutions in search of problems that
probably don’t yet exist. Storage managers have gotten a taste of dedupe with
backup and they want more.
6 Storage July/August 2010
However you look at it—top down or bottom up—
most IT operations are treated as expense
cloud services. CEOs are considering handing over their entire IT infra-
structure to companies that know how to manage it better than they
do. Again, the logic is that they want to get out of managing IT and
instead focus on their core competencies. And there’s a ton of buzz,
hype and rhetoric. Sound familiar?
However, the logic is flawed and those executives are mistaken. IT
should be a part of a firm’s DNA and core to its business. Most CEOs
don’t have technology backgrounds and may be intimidated by IT or just
ignorant of its value. The CFO, a CEO’s typical second-in-command, usu-
seems reasonable since the role of IT and the goal of the CIO is to
ensure that the “lights always stay on.” Then he said something that’s
probably obvious to everyone but me: “The last thing our CIO wants is
to reduce his budget. Money is power, and the bigger his budget, the
more power he has.”
I was talking to an IT professional about
by one person. Most of the team felt it was the wrong decision but
they had no alternative solution. However, the person who drove the
process was willing to put a stake in the ground and make a choice. In
a room full of silent people, the one voice that speaks up will be heard.
The entire leadership chain in businesses and other organizations
doesn’t serve IT very well. The CEO is typically not an IT expert, and lacks
sufficient knowledge of or passion for technology. The CFO considers IT
as overhead. The CIO is focused on keeping things up and running, as
well as maintaining or increasing their budget. And the IT professionals
themselves either cling to what they know or don’t have enough information
about what is out there. In the midst of it all, none of these
stakeholders considers how IT can merge with the business or how IT
can be leveraged to come up with new ways to generate revenue, create
new markets or change business models.
There are, of course, exceptions that counter this analysis. But the
majority of businesses are caught in this quandary. It’s easy to solve
this problem on paper, but nearly impossible in practice. Business
executives need to be more IT savvy; CIOs need to be “incentivized”
to have a greater impact on the success of the business; and IT profes-
greatest architectures and technologies. And they all need to use the
right sides of their brains a little more to come up with creative ideas
on how IT can improve and grow their businesses.
COMING IN SEPTEMBER

Virtualizing NAS: . . . the array itself. We describe the pros and cons of each approach, and suggest when each is most appropriate.

10 Tips for Fine-Tuning: . . . look at the most likely causes of poor network performance and describe how these bottlenecks can be alleviated.

Quality Awards V: . . . our four previous Quality Awards for midrange arrays, StorageTek, EqualLogic, Compellent and Dell came out on top.
STORAGE
Vice President of Editorial: Mark Schlack
Site Editor: Ellen O’Brien
Senior Managing Editor: Kim Hefner
Associate Site Editor: Megan Kellett
Features Writer: Todd Erickson
Editorial Assistant: David Schneider
Creative Director: Maureen Joyce
Executive Editor and Independent Backup Expert: W. Curtis Preston
Contributing Editors: Tony Asaro, James Damoulakis, Steve Duplessie, Jacob Gsoedl
TechTarget Conferences
Director of Editorial Events: Lindsay Jeanloz
Editorial Events Associate: Jacquelyn Hinds

Storage magazine
275 Grove Street
Newton, MA 02466
Subscriptions: www.SearchStorage.com
editor@storagemagazine.com
WITH vSPHERE
To realize the greatest benefits of a vSphere
Scheduler, you need to have shared storage for all of your hosts. vSphere’s
proprietary VMFS file system uses a special locking mechanism to allow
multiple hosts to connect to the same shared storage volumes and the
virtual machines (VMs) on them. Traditionally, this meant you had to imple-
ment an expensive Fibre Channel SAN infrastructure, but iSCSI and NFS
network storage are now more affordable alternatives.
iSCSI networked storage was first supported by VMware with ESX 3.0. It
iSCSI advantages
• Cheaper to implement

iSCSI disadvantages
• As iSCSI is most commonly deployed as a software protocol, it has additional CPU overhead compared to hardware-based storage initiators
• Can be CPU-intensive due to the additional overhead of protocol processing
• ESX server can’t boot from a software-based initiator
• Can’t store Microsoft Cluster Server shared

offload engine (TOE) and a SCSI adapter to help improve the performance
of the host server. Characteristics of hardware initiators include:
• Moderately better I/O performance than software initiators
• Uses less ESX server host resources, especially CPU
• ESX server is able to boot from a hardware initiator

pared to other storage protocols, see the sidebar “iSCSI pros and cons”
(p. 14).
requiring both the initiator and target to authenticate with each other.
call VMware about a problem and it’s related to the storage device, they
may ask you to call the storage vendor for support. The second thing to
be aware of is that not all iSCSI devices are equal in performance; gen-
erally, the more performance you need, the more it’ll cost you. So make
sure you choose your iSCSI device carefully so that it matches the disk
I/O requirements of the applications running on the VMs that will be
using it.
best results, create two virtual disks on the VM: one on a local data store for
the operating system and another on the iSCSI data store to be used exclusively
for testing. Try to limit the activity of other VMs on the host and access to the
data store while the tests are running. You can find four prebuilt tests that you
can load into Iometer to test both max throughput and real-world workloads at
www.mez.co.uk/OpenPerformanceTest.icf.
We ran Iometer tests using a modest configuration consisting of a Hewlett-
Packard Co. ProLiant ML110 G6 server, a Cisco Systems Inc. SLM2008 Gigabit Smart
Switch and an Iomega ix4-200d iSCSI array.
The test results, shown below, compare the use of the standard LSI Logic SCSI
controller in a virtual machine and use of the higher performance Paravirtual SCSI
controller. The tests were performed on a Windows Server 2008 VM with 2 GB RAM
and one vCPU on a vSphere 4.0 Update 1 host; tests were run for three minutes. The
results show the Paravirtual controller performing better than the LSI Logic con-
troller; the difference may be more pronounced when using higher-end hardware.
Test (block size)                                                LSI Logic IOPS   LSI Logic MBps   Paravirtual IOPS   Paravirtual MBps
Max throughput: 100% read/0% write, 100% sequential (32K)        1,829            57               1,908              60
Max throughput: 100% read/0% write, 100% sequential (8K)         6,656            52               6,812              53
Max throughput: 50% read/50% write, 100% sequential (32K)        1,616            50               1,630              51
Max throughput: 50% read/50% write, 100% sequential (8K)         5,602            44               5,708              45
Real life: 65% read/35% write, 40% sequential/60% random (32K)   73               2.27             75                 2.36
Real life: 65% read/35% write, 40% sequential/60% random (8K)    120              0.94             123                0.96
Random: 70% read/30% write, 0% sequential/100% random (32K)      53               1.65             55                 1.72
Random: 70% read/30% write, 0% sequential/100% random (8K)       88               0.69             89                 0.70
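The IOPS and MBps figures in results like these are tied together by simple arithmetic (MBps is roughly IOPS times the I/O size), which makes it easy to sanity-check reported numbers; a small sketch, with iops_to_mbps as a hypothetical helper:

```python
def iops_to_mbps(iops: int, block_kb: int) -> float:
    """Convert an IOPS figure at a given I/O size (in KB) to MB per second."""
    return iops * block_kb / 1024

# Spot-check a few sequential-read rows from the Iometer results.
assert round(iops_to_mbps(1829, 32)) == 57  # LSI Logic, 32K sequential read
assert round(iops_to_mbps(1908, 32)) == 60  # Paravirtual, 32K sequential read
assert round(iops_to_mbps(6656, 8)) == 52   # LSI Logic, 8K sequential read
```

Small I/O sizes drive high IOPS but low throughput, which is why the 8K random rows show double-digit IOPS gains yet less than 1 MBps.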
and click Properties to configure it. On the General tab, you can enable
can’t be changed. Instead, you need to

Block size   Maximum file size
1 MB         256 GB (default)
2 MB         512 GB
Block size is the amount of space a single block of data takes up on the disk; the
amount of disk space a file takes up will be based on a multiple of the block size.
However, VMFS does employ sub-block allocation so small files don’t take up an
entire block. Sub-blocks are always 64 KB regardless of the block size chosen. There
is some wasted disk space, but it’s negligible as VMFS volumes don’t have a large
number of files on them, and most of the files are very large and not affected that
much by having a bigger block size. In most cases, it’s probably best to use an 8 MB
block size when creating a VMFS volume, even if you’re using smaller volume sizes,
as you may decide to grow the volume later on.
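The allocation behavior described above (a single 64 KB sub-block for small files, whole blocks for everything else) can be modeled in a few lines; a simplified sketch for illustration, not VMFS's actual allocator:

```python
import math

SUB_BLOCK = 64 * 1024  # VMFS sub-blocks are always 64 KB
MB = 1024 * 1024

def vmfs_space_used(file_size: int, block_size: int) -> int:
    """Approximate on-disk space for a file on a VMFS volume."""
    if file_size == 0:
        return 0
    if file_size <= SUB_BLOCK:
        return SUB_BLOCK            # small files land in a 64 KB sub-block
    return math.ceil(file_size / block_size) * block_size  # whole blocks

# A 10 KB file wastes little space even on an 8 MB block-size volume...
assert vmfs_space_used(10 * 1024, 8 * MB) == 64 * 1024
# ...while large files simply round up to the next block boundary.
assert vmfs_space_used(20 * 1024**3 + 1, 8 * MB) == 20 * 1024**3 + 8 * MB
```

This is why the waste from choosing an 8 MB block size is negligible: the handful of small files on a VMFS volume use sub-blocks, and the large VMDKs lose at most one partial block each.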
discovered, you can add them to your hosts as VMFS volumes. Select a
host, click on the Configuration tab and choose Storage. Click Add Stor-
age and a wizard will launch; for the disk type select Disk/LUN, which is
for block-based storage devices. (The Network File System type is used
for adding file-based NFS disk storage devices.) Select your iSCSI target
from the list of available disks, give it a name and then choose a block
size. When you finish, the new VMFS data store will be created and ready
to use.
use multiple physical NICs to provide redundancy. Make sure you bind
the VMkernel interfaces to the NICs in the vSwitch so multi-pathing is
configured properly.
• Ensure the NICs used in your iSCSI vSwitch connect to separate network
switches to eliminate single points of failure.
• vSphere supports the use of jumbo frames with storage protocols, but
Continuous data protection . . . IT’S BACK!
A few years ago it seemed like every other booth at storage trade
shows was occupied by a CDP vendor, and a steady stream of technical
articles extolled the virtues of continuous data protection. But hardly
anybody bought the story or the products. Some pundits even joked
that CDP stood for “Customers Didn’t Purchase.” The failure of continuous
data protection was so complete that only two of the original CDP vendors
you could technically run a CDP system in parallel with your tradi-
tional backup system, very few people had the budget or time to do
that. Therefore, you had to justify replacing your production backup
system with CDP. But because it was so different from what people
were used to, CDP was hard to fully understand and was a hard sell
to replace traditional backup.
Another real problem was that the products sometimes weren’t fully
up to the task. For example, users were often forced to choose between
an on-site or off-site copy of their data because most CDP products
couldn’t deliver both. This meant one product had to be used for opera-
tional recovery and another for disaster recovery (DR). Many CDP products
were also ignorant of the applications they were backing up. Continuous
data protection vendors said they had no more of a requirement to
understand applications than a storage array did. Technically true perhaps,
but it didn’t give users the warm fuzzy feeling they were used to; they
wanted a CDP product that was application-aware. CDP also required a lot
most shops could meet their backup and recovery requirements with-
WHAT IS NEAR-CDP?
that snapshots weren’t CDP. They also noted that snapshots can
only recover to a particular point in time, while continuous data
protection can recover to any point in time. Hence the term near-
CDP was coined, allowing snapshot-based vendors to steal some
of the CDP buzz.
But years later, the term near-CDP is still not in the Storage
Networking Industry Association (SNIA) lexicon. Purists say you’re
either continuous or you’re not, but others think it’s still the best
term we have to describe snapshots coupled with replication.
Near-CDP systems have more in common with CDP than with
objective (RPO) of zero (or almost zero), and it doesn’t require the
creation of application-aware snapshots up front. However, most
CDP users create snapshots anyway and recover to those snap-
shots, preferring a known stable point in time to a more recent
recovery point that will require a crash recovery process. So, maybe
the CDP vs. near-CDP debate is a lot of arguing over nothing.
have come a long way since they first appeared on the market. For
example, you no longer have to choose between an on-site and off-
site copy; you can have both with a single product.
Today’s successful CDP systems also know a lot more about the
data they’re backing up. They offer integration points with many popular
applications such as Microsoft Exchange, Oracle and SQL Server. While
a true CDP product doesn’t need to create snapshots and can recover
to any point in time, this integration allows the application or backup
system administrator to create points in time where a known good
copy of the data resides. Administrators may opt to not use these
Also figuring into the picture are data loss notification laws, enacted
Server virtualization has taken off during the last few years, and the
technology could benefit from continuous data protection. While you
may not have individual servers with data stores in the double-digit
terabyte range, it’s possible the storage used by VMware, Microsoft
Hyper-V or Citrix Systems XenServer is indeed that big. Consider what
would happen if a 15 TB storage array containing virtual machine (VM)
images suddenly disappeared—it could take out dozens or hundreds of
virtual machines. Couple that with the fact that backing up and recovering
points from any point in the past . . . data changes are continuously
captured . . . stored in a separate location . . . [and RPOs] are arbitrary
and need not be defined in advance of the actual recovery.”
Please note that you don’t see the word “snapshot” above. While it’s
true that many of today’s CDP systems allow users to create known
recovery points in advance, they’re not required. To be considered CDP,
a system must be able to recover to any point in time, not just to when
snapshots are taken.
CDP systems start with a data tap or write splitter. Writes destined
for primary storage are “tapped” or “split” into two paths; each write is
sent to its original destination and also to the CDP system. The data
tap may be an agent in the protected host or it can reside somewhere
in the storage network. Running as an agent in a host, the data tap has
little to no impact on the host system because all the “heavy lifting” is
done elsewhere. CDP products that insert their data taps in the storage
network can use storage systems designed for this purpose, such as
also be used as a high-speed buffer where all writes are stored before
they’re applied to the recovery volume. This design allows the recovery
volume to be on less-expensive storage as long as the recovery journal
uses storage that is as fast as or faster than the protected volume.
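The data tap and journal described above can be modeled simply: every write goes to its original destination and, timestamped, to a journal that can be replayed to any moment. A toy sketch of the idea (illustrative only, not any vendor's design):

```python
class CdpTap:
    """Toy CDP write splitter: every write goes to primary storage
    and, with a timestamp, to an append-only recovery journal."""

    def __init__(self):
        self.primary = {}   # block -> data (the protected volume)
        self.journal = []   # (timestamp, block, data)

    def write(self, ts: float, block: int, data: bytes) -> None:
        self.primary[block] = data               # original destination
        self.journal.append((ts, block, data))   # split copy for CDP

    def recover(self, point_in_time: float) -> dict:
        """Rebuild the volume as of any point in time by replaying
        journal entries up to that moment."""
        volume = {}
        for ts, block, data in self.journal:
            if ts > point_in_time:
                break
            volume[block] = data
        return volume

tap = CdpTap()
tap.write(1.0, 0, b"boot")
tap.write(2.0, 7, b"data-v1")
tap.write(3.0, 7, b"data-v2")   # block 7 overwritten

# True CDP: recover to *any* point in time, not just a snapshot.
assert tap.recover(2.5) == {0: b"boot", 7: b"data-v1"}
assert tap.recover(3.0) == {0: b"boot", 7: b"data-v2"}
```

Because the replay happens against the journal, the recovery volume itself can sit on cheaper disk, exactly as the article notes.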
Once data has been copied to the first recovery device it can then
be replicated off-site. Due to the behavior of WAN links, the CDP system
needs to deal with variances in the available bandwidth. So it has to be
able to “get behind” and “catch up” when these conditions change.
With some systems you can define an acceptable lag time (from a few
seconds to an hour or more), which translates into the RPO of the
replicated system. The CDP system sends all of the writes that hap-
pened as one large batch. If an individual block was modified several
times during the time period, you can specify that only the last change
is sent in a process known as “write folding.” This obviously means
that the disaster recovery copy won’t have the same level of recovery
granularity as the on-site recovery system, but it may also mean the
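Write folding, as described above, just keeps the last change to each block within a replication batch; a minimal sketch (illustrative only):

```python
def fold_writes(batch):
    """Collapse a batch of (block, data) writes so only the final
    change to each block is replicated ('write folding')."""
    folded = {}
    for block, data in batch:
        folded[block] = data  # later writes replace earlier ones
    return folded

# Block 5 was modified three times during the lag window; only the
# last version crosses the WAN link.
batch = [(5, b"v1"), (9, b"a"), (5, b"v2"), (5, b"v3")]
assert fold_writes(batch) == {5: b"v3", 9: b"a"}
```

The saving in WAN bandwidth comes at the cost of intermediate versions, which is why the folded remote copy has coarser recovery granularity than the on-site journal.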
days, hourly recovery points for a week or so, then daily recovery
Depending on the product, the recovery LUN may be the actual recovery
volume (rolled forward or backward), a virtual volume designed mainly
for testing a restore, or something in the middle where the recovery
volume is presented to the application as if it has already been rolled
forward or backward, when in reality the actual rolling forward or back-
ward is happening in the background. Some systems can simultaneously
present multiple points in time from the same recovery volume.
Once the original production system has been repaired, the recovery
process is reversed. The recovery volume is used to rebuild the original
production volume by replicating the data back to its original location.
(If the system was merely down and didn’t need to be replaced, it’s
usually possible just to update it to the current point in time by sending
over only the changes that have happened since the outage.) With the
original volume brought up to date, the application can be moved back
for medical records
Turning storage into a service for your company
framework can help transform
your storage environment into
an efficient storage service
organization. By Thomas Woods
IF YOU’RE A data storage management professional, odds are that at some point you’ll
be asked to help align your IT organization with the Information Technology Infra-
structure Library (ITIL) service management framework. But before your eyes glaze
over and you think it’s just another one of those theoretical approaches to managing
IT, think again: ITIL can make your job, and your life, a lot easier.
ITIL is a set of British best practices that provide guidance on how to implement
IT service management (ITSM), a framework specifically designed to confront and
reduce IT organizational complexity. As a storage professional, you could benefit
greatly from an ITIL implementation if you think you’re currently spending too
• Tracking assets
• Coordinating work with other teams
• Working on projects that don’t fully meet end-user requirements
or maximize return on investment
• Responding to storage outages
• Not sharpening your storage skills
have a long list of other corporate groups they have to work with directly
or indirectly: end users, help desks, call centers, first- and second-line
operations support, as well as monitoring, server, security, asset,
auditing, configuration, architecture, engineering planning and finance
teams. If all that interfacing isn’t enough, the storage team also has
to provide meaningful reports to all levels of management to ensure
operational objectives are being met. From a storage management
perspective, the goal of ITSM isn’t to direct storage administrators
on how to do their jobs, but to focus more on:
• Aligning storage teams to work better with other teams to
• Service operation

internal IT groups, and external catalog entries that are for groups
outside of IT.
Service-level requirements should specify what’s required from
the service, the availability of the service and security levels; the time
to deliver objectives should also be included. For a storage service
that includes backup, NAS, SAN and data archiving, attributes that
are important to the service customer are time to deliver, perform-
ance, recovery point objectives (RPOs), recovery time objectives
(RTOs) and cost; all should be detailed in a catalog for the service.
The storage function should play a major role in helping to define
these attributes.
SAN Point-in-Time Copy
  Description: Tape backup not; 30 days
  Audience: IT only
  Cost: $1 per month per GB
  Performance: High: 15,000 rpm
  Backup window: Sunday 03:00 to

Backup—High Performance
  Description: One copy of backup kept on cross-data center backup subsystem disk cache; good for large databases with aggressive recovery time objectives
  Audience: IT only
  Cost: $0.50 per GB per month
  Performance: High: 15,000 rpm drives
  Backup window: Sunday 03:00 to 12:00

Backup—Standard
  Description: Weekly full backup kept cross-data center
  Audience: IT only
  Cost: $0.07 per GB on tape per month
  Performance: Standard
  Backup window: Sunday 03:00 to 12:00
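Catalog pricing like this makes chargeback a simple multiplication of provisioned capacity by the published rate; a quick sketch using the example figures from the catalog above (not real vendor pricing):

```python
# $ per GB per month, taken from the example service catalog entries.
RATES = {
    "san_pit_copy": 1.00,
    "backup_high_perf": 0.50,
    "backup_standard": 0.07,
}

def monthly_cost(service: str, gb: float) -> float:
    """Monthly chargeback for a catalog entry: capacity times rate."""
    return round(RATES[service] * gb, 2)

# A 500 GB database on each tier:
assert monthly_cost("backup_standard", 500) == 35.0
assert monthly_cost("backup_high_perf", 500) == 250.0
assert monthly_cost("san_pit_copy", 500) == 500.0
```

Publishing the rate alongside the RPO, RTO and performance attributes lets service customers make the cost/recovery trade-off themselves.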
service providers.
Another focus of ITIL is process and service commonality. ITIL describes the
attributes of change, request fulfillment, capacity, event, availability, problem,
incident, and configuration and asset management processes, as well as the
attributes of a service. An important part of moving to a service model is
mapping the various processes and teams.

Service operations and continual service improvement: Storage technical
management plays a direct role in
technical operations. As stewards of data storage technology, this team
is responsible for planning storage technology and technology upgrades,
evaluating technologies and maintaining storage skills. The storage
function must also monitor operations, and implement and oversee
service improvements during the continual service improvement
phase. The storage function will train front-line operations and request
required to support legacy tools and the new enterprise tools, thus
increasing the number of interfaces the storage teams have to main-
tain and monitor. Hopefully, it will only be a temporary condition. Most
of the time the ITIL processes are more rigorous in addressing change
and risks, and they tend to split activities into multiple tasks. For example,
before an ITIL implementation, an on-call person might have simply
received a page or other alert and then addressed the issue. With
ITIL processes in place, this simple act may be divided into an “event,”
“incident,” “RFC” (request for change) and a “problem”—with each of
these put into a different tracking database and requiring a different
of medical records, leaving hospital IT pros looking to cut
HOSPITAL AND MEDICAL center IT departments are struggling to control the storage of elec-
tronic medical images as new regulations require digitization and retention of med-
ical records. Many of the issues related to these efforts will be familiar to enterprise
IT pros in other industries, from application integration and centralization of IT assets
for delivery as a service to internal customers, to coping with regulatory requirements
But when it comes to the healthcare sector, the same decisions are magnified
because the business is literally life and death. Hospital IT managers say that in ad-
dition to technical integration issues, interdepartmental politics, and the question
of who will assume the risk for the creation and preservation of medical image data
make the effort to bring image data under more efficient centralized control an
uphill battle.
going to work. When it all started, we had no clue what to try to prepare
for. Now we’re trying to kind of fix this all on the fly. That’s why this uni-
versal vendor-neutral archive, if it truly existed, would be a godsend.”
Other hospitals are also contending with facility and staffing limitations
that make managing multiple, growing islands of storage all but impossi-
ble. “We’re a small shop. We have two people that are network folks
and do data center and storage, and kind of wear all the hats,” said Marty
Botticelli, CIO at Boston-based New England Baptist Hospital (NEBH).
Massachusetts requires X-ray images to be retained for 30 years.
cut down on the expense and time required to migrate data as tech-
in one of the more expensive cities for real estate in the country. “Our
facility is over 100 years old and there are not a lot of places for us to
just continue to put on-site spinning disk,” he said.
Internal budget negotiations and territorial politics that have grown
up around disparate systems can also be a barrier to centralization.
According to BIDMC’s Passe, his organization is rolling out EMC Corp.’s
Atmos system as a centralized, scalable archive for images. There are
package [hardware with it] so it’s a whole solution when they sell it,” said
this solution to the table there was just a lot of resistance: ‘How are you
going to migrate the data?’ and ‘We need to sign off on it,’” he said. “It’s
taken us a long time—a good six months—to really utilize the technology
that we feel would provide us a vendor-neutral solution for medical imaging.”
Other users have encountered
similar resistance, but not just
because PACS vendors are trying
up to the vendor to put you in a compliant mode. It’s up to us,” said Jim
Touchstone, senior systems engineer at Mississippi Baptist Health Systems
(MBHS) in Jackson. The hospital worked with its vendors to put together
its own integrated data center stack. The infrastructure includes a pri-
mary IBM DS8300 SAN storage system shared among modalities and an
IBM N5200 array for disk-based near-line archiving.
“This 8000 array . . . we’ve had it on our floor for four years and, knock
on wood, never had one minute of downtime with it,” Touchstone said.
“We’ve had the N5200 for four years, and we’ve failed over before because
we had a drive go out, but we never had any downtime. Our network is
by the Food and Drug Administration (FDA). Compliance with those reg-
ulations is also a risk PACS vendors assume for customers, Valante
argues. “Once you start opening those doors, you’re accountable to
FDA filing,” he said. “We do a lot of exhaustive validation and testing.”
Compellent SAN, the [PACS system] OEM vendor was not happy. They
basically said, ‘No you can’t do that’ and I said ‘Well, I’m going to do it’
and they begrudgingly continued to support me even though there
were threats.”
Micha Ronen, PACS administrator at Phoenix-based Sun Health Corp.,
which was acquired by Banner Health in 2008, is in the process of
merging his NetApp Inc.-based PACS archiving systems into Banner
Health’s Bycast grid (Bycast Inc. was acquired by NetApp in 2010). “One
of the beauties of Bycast is that for us, migration is not an issue,” Ro-
nen said, because it can layer over heterogeneous storage repositories
worked historically; NetApp has since said it will not support third-party
storage systems going forward under its Bycast-based StorageGRID
software unless they’re fronted by its V-Series gateway and made to
look like NetApp storage.
Ronen is unconcerned with this change. “For us, I don’t foresee any
problem. They would definitely like to get a bigger share of the storage
we have and so will have to accommodate our environment.”
“I can see [external public clouds] being useful if you’re spinning off
organizations, or if you’re bringing in new external organizations, or if
you have to bring in data from external organizations that you’re being
paid on a contract basis to read,” Passe continued. “When [PACS vendors]
make the leap and start to look at REST in earnest, it’s huge . . . Web 2.0
includes the metadata with the object, so imagine not having to have
the databases . . . that eliminates one more piece of the puzzle or at
least makes it redundant so if the database crashes, you can rebuild it
from the objects themselves.”
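Passe's point about self-describing objects can be sketched in a few lines: if every stored object carries its own metadata, the central database is only a rebuildable index. (A toy illustration; the object layout and field names are invented, not any PACS or grid vendor's format.)

```python
# Toy illustration: each stored object bundles its own metadata, so the
# central index (database) is just a cache that can be rebuilt by
# scanning the objects themselves. Layout and field names are invented.

objects = {
    "obj-001": {"metadata": {"patient": "A", "study": "CT-2010-01"}, "blob": b"..."},
    "obj-002": {"metadata": {"patient": "B", "study": "MR-2010-02"}, "blob": b"..."},
}

def rebuild_index(store):
    """Reconstruct the lookup database purely from the stored objects."""
    index = {}
    for object_id, obj in store.items():
        index[object_id] = obj["metadata"]
    return index

# After a database crash, the index comes back from the objects alone.
index = rebuild_index(objects)
print(index["obj-001"]["study"])   # CT-2010-01
```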
Beth Pariseau is a former senior news writer for TechTarget’s Storage Media
Group; she is now assigned to TechTarget’s Data Center Media Group.
Cloud storage ecosystems mature
A number of vendors have emerged that provide
a bridge to cloud storage services, as well as
extended security, availability and portability
EVERY NEW TECHNOLOGY follows a predictable path from drawing boards and
beta tests to product hypes and launches. Along the way, users often
follow their own inevitable route: becoming enchanted, then unsure and,
finally, cynical.
After that, the necessary ecosystems emerge to fill in the technology
gaps and ensure that real user requirements are satisfied. Value propositions
that solve real-world user pain points emerge, and we slowly start to see
adoption. Roughly two years after cloud storage was first floated, that’s
where we find ourselves.
Most cloud storage service provider solutions on the market today offer
basic storage capacity and data protection, such as RAID or remote
mirroring, as well as RESTful APIs for porting applications. This is all
good. But challenges arise because most applications in today’s IT shops
do not speak REST

Cloud storage enablement vendors offer caching algorithms that ensure
active data is stored locally, mitigating issues with data protection
much of the messaging sounds very similar, which it should, as they solve
similar problems. But putting them all on the same playing field would be
like comparing a Data Robotics Drobo FS to a NetApp FAS2000; both are
network drives and both store data, but they have very different use cases.
On a generic level, enablement platforms offer the following:
Data portability. Cloud storage enablement vendors write to the proprietary
cloud storage service provider APIs and act as a translation layer that
offers a standards-based interface to the application. Application in-

distance from cloud providers. Though cloud storage isn’t likely to be in a
position to support OLTP applications any time soon, most applications in the
data center don’t require big iron IOPS, and cloud storage could sufficiently
meet performance requirements for those.
Data reduction technology. Compression and data deduplication can
help reduce the overall amount of data stored, as well as bandwidth
requirements and costs.
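The data portability layer described above is, at heart, the adapter pattern: the application codes against one standards-based interface while per-provider shims speak each proprietary API. A minimal sketch (the provider classes and their naming quirks are invented for illustration):

```python
# Sketch of the translation-layer pattern: the application sees one
# uniform interface; per-provider adapters handle each proprietary API.
# The provider classes here are invented stand-ins, not real services.

class CloudAdapter:
    """Uniform interface the application codes against."""
    def put(self, key, data):
        raise NotImplementedError
    def get(self, key):
        raise NotImplementedError

class ProviderA(CloudAdapter):
    def __init__(self):
        self._store = {}
    def put(self, key, data):
        self._store["a/" + key] = data    # this provider prefixes keys
    def get(self, key):
        return self._store["a/" + key]

class ProviderB(CloudAdapter):
    def __init__(self):
        self._store = {}
    def put(self, key, data):
        self._store[key.upper()] = data   # this one uppercases names
    def get(self, key):
        return self._store[key.upper()]

def copy_offsite(adapter, name, contents):
    # Identical application logic, whatever provider sits behind it.
    adapter.put(name, contents)
    return adapter.get(name)

print(copy_offsite(ProviderA(), "report.txt", b"x"))   # b'x'
print(copy_offsite(ProviderB(), "report.txt", b"x"))   # b'x'
```

Swapping providers then touches only the adapter, not the application, which is what makes migrations between clouds tractable.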
Cloud storage enablement vendors include Nasuni Corp., which offers
a Windows file server product that could be used to store user files or to
create an off-site copy of Windows file-based data, among other use cases.
TwinStrata Inc. offers a block storage interface to the subscriber that pro-
EMERGING STANDARDS
Cloud storage may not be a proprietary world forever. The Storage Networking
efits of the potential cost reduction and IT elasticity that cloud storage
brings to the table.
There’s a big difference between backup and business
continuity. Any-point-in-time technologies can extend
ware and numerous other sources. A business risks not only a potential loss
of information, but also the loss of “use” of the associated information system,
which is an irrecoverable loss of time.
Most veteran data storage managers have painful tales of the consequences
of loss of use. It may start with something as simple as a transactional
messaging system that loses messages into the ether during an outage.
And dropped messages can mean loss of revenue, loss of customers, lost deals
or bids, or legal exposure from a failure to meet compliance requirements.
So it’s a full-blown head-scratcher that we have so little ability to protect

end up with a false sense of protection; those that see the shortcomings of a
pure data protection strategy often have no better alternative than to pursue
massive, complex distributed disaster recovery (DR) projects. The cost
implications associated with this disconnect with business requirements are
astounding. DR infrastructures aren’t only expensive to implement; they’re
equally expensive to maintain. The alternatives may involve
THE PITs
a copy of important data at a daily point in time (PIT). Whether using full
backups or some combination of fulls, incrementals and differentials, backup
has always captured a frozen PIT and then sat idle until the next backup
window. This periodic cramming of data through a backup infrastructure has
created an enormous industry that’s constantly innovating with new
technologies like data deduplication and virtual tape libraries (VTLs).
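The frozen-PIT schemes named here differ in what a restore needs: an incremental chain requires the last full plus every incremental taken since it, while a differential requires only the last full plus the newest differential. A toy sketch of that logic (simplified; real backup catalogs track far more state):

```python
# Simplified sketch: which backups a restore needs under each
# frozen-PIT scheme. Real backup catalogs track far more than this.

def restore_chain(history, scheme):
    """history: ordered day labels; history[0] is the last full backup."""
    full = history[0]
    if scheme == "incremental":
        return [full] + history[1:]     # replay every incremental since the full
    if scheme == "differential":
        return [full, history[-1]]      # last full + newest differential only
    return [full]                       # a full restores on its own

week = ["sun-full", "mon", "tue", "wed"]
print(restore_chain(week, "incremental"))   # ['sun-full', 'mon', 'tue', 'wed']
print(restore_chain(week, "differential"))  # ['sun-full', 'wed']
```

The trade-off is visible at a glance: incrementals minimize nightly backup volume but lengthen the restore chain; differentials do the reverse.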
LEVERAGING PIT
So what should you do? By leveraging various combinations of often
overlooked backup technologies, companies can better align their data
protection with application requirements. Behind this service-oriented
alignment of data protection is the concept of business-vital applications
and the premise that these applications should be more deeply protected
with a higher priority, speed and frequency.

Any-point-in-time (APIT) technologies are at the core of this approach.

to APIT use protection. Granted, there isn’t a seamless failover as with active-
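At its simplest, an APIT engine journals every write with a timestamp; recovering any point in time means replaying the journal up to that moment. A toy sketch of the idea (not any product's implementation):

```python
# Toy sketch of an APIT write journal: every write is timestamped, and
# recovering a point in time replays the journal up to that moment.

journal = []  # (timestamp, block, data), appended on every write

def write(ts, block, data):
    journal.append((ts, block, data))

def recover(point_in_time):
    """Rebuild the volume image as it existed at point_in_time."""
    volume = {}
    for ts, block, data in journal:
        if ts <= point_in_time:
            volume[block] = data   # later writes to a block supersede earlier ones
    return volume

write(1, "blk0", "A")
write(2, "blk1", "B")
write(3, "blk0", "corrupted")          # e.g. the moment malware strikes

print(recover(2))   # {'blk0': 'A', 'blk1': 'B'}  -- just before corruption
print(recover(3))   # {'blk0': 'corrupted', 'blk1': 'B'}
```

Unlike a frozen PIT, nothing here is limited to a backup window: any timestamp in the journal's history is a valid recovery point.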
PROTECTING USE
The capabilities needed for a better approach to data protection—one that
encompasses protecting use—include:
backup tools and processes, ideally looking like little more than yet another
particularly easy to move when it’s free from needless redundancies and
“Two for the price of one” has always been an effective marketing strategy, and while
it’s not entirely accurate when talking about multiprotocol or unified storage, the idea
does seem to have some appeal for many data storage shops. In our latest Snapshot
survey, 53% of respondents reported that their companies use multiprotocol arrays. It’s
not surprising why: 35% of these users said using disk capacity more efficiently was
the main reason they went the multiprotocol route. But for 29%, the cost savings of
having both block and file storage in one box was the key motivator, closely followed by
those who felt it would be easier to manage the two together (28%). Current users seem
to be sold on the unified storage concept; 80% have more than one multiprotocol array
installed. Users were roughly split on how they divvy up their unified arrays: 37% allot more
capacity for files, 34% allocate more capacity for block and 29% use an even split. Thirty-
nine percent of the multiprotocol deployments were real 2-for-1 deals, with a single ar-
ray replacing separate file and block arrays. For 41%, their new multiprotocol systems
were additions to their environments. Mixing two IP-based protocols (NAS and iSCSI) was
the most popular configuration (75%). So how’s it all working out? Pretty nicely, it seems,
as 44% are “very satisfied” with their multiprotocol experiences. —Rich Castagna
[Survey charts omitted: “What’s the main reason you . . .”; “What has been
the greatest benefit . . .”; “Why haven’t you installed any multiprotocol
arrays yet?”]