
Copyright © 2010 EMC Corporation. Do not Copy - All Rights Reserved.

Welcome to SAN Solutions Design Concepts V4.


EMC provides downloadable and printable versions of the student materials for your benefit, which can be accessed from the
Supporting Materials Tab.
Copyright © 2010 EMC Corporation. All rights reserved.
These materials may not be copied without EMC's written consent.
EMC believes the information in this publication is accurate as of its publication date. The information is subject to change
without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO
REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS
PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS
FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.
EMC² , EMC, EMC ControlCenter, AdvantEdge, AlphaStor, ApplicationXtender, Avamar, Captiva, Catalog Solution, Celerra,
Centera, CentraStar, ClaimPack, ClaimsEditor, ClaimsEditor Professional, CLARalert, CLARiiON, ClientPak, CodeLink,
Connectrix, Co-StandbyServer, Dantz, Direct Matrix Architecture, DiskXtender, DiskXtender 2000, Document Sciences,
Documentum, EmailXaminer, EmailXtender, EmailXtract, enVision, eRoom, Event Explorer, FLARE, FormWare, HighRoad,
InputAccel,InputAccel Express, Invista, ISIS, Max Retriever, Navisphere, NetWorker, nLayers, OpenScale, PixTools,
Powerlink, PowerPath, Rainfinity, RepliStor, ResourcePak, Retrospect, RSA, RSA Secured, RSA Security, SecurID,
SecurWorld, Smarts, SnapShotServer, SnapView/IP, SRDF, Symmetrix, TimeFinder, VisualSAN, VSAM-Assist, WebXtender,
where information lives, xPression, xPresso, Xtender, Xtender Solutions; and EMC OnCourse, EMC Proven, EMC Snap, EMC
Storage Administrator, Acartus, Access Logix, ArchiveXtender, Authentic Problems, Automated Resource Manager, AutoStart,
AutoSwap, AVALONidm, C-Clip, Celerra Replicator, CLARevent, Codebook Correlation Technology, Common Information
Model, CopyCross, CopyPoint, DatabaseXtender, Digital Mailroom, Direct Matrix, EDM, E-Lab, eInput, Enginuity, FarPoint,
FirstPass, Fortress, Global File Virtualization, Graphic Visualization, InfoMover, Infoscape, MediaStor, MirrorView, Mozy,
MozyEnterprise, MozyHome, MozyPro, NetWin, OnAlert, PowerSnap, QuickScan, RepliCare, SafeLine, SAN Advisor, SAN
Copy, SAN Manager, SDMS, SnapImage, SnapSure, SnapView, StorageScope, SupportMate, SymmAPI, SymmEnabler,
Symmetrix DMX, UltraFlex, UltraPoint, UltraScale, Viewlets, VisualSRM are trademarks of EMC Corporation.
All other trademarks used herein are the property of their respective owners.

SAN Solutions Design Concepts V4 Part 1 - 1



The objectives for SAN Solutions Design Concepts V4 Parts 1 and 2 are shown here. Please take
a moment to read them.


The objectives for this module are shown here. Please take a moment to read them.


Let’s begin with an overview of planning and why it is crucial in the design of a SAN. This
lesson explains the general processes and procedures involved with planning and designing a
storage area network. It provides an overview of the methods for developing a SAN design that
will allow the most efficient data access in an environment.


As with any solution, understanding the requirements is crucial, and analysis is the starting point of any SAN design. Ask yourself what the purpose of implementing a SAN is. Possible goals include availability, scalability, consolidation, and disaster recovery. Identifying the driving factors behind the SAN deployment forms the foundation of your design.
The physical environment should also be examined as part of the analysis. Hardware
components, performance statistics, and distance requirements are crucial considerations in the
design. This also enables you to verify that the components are included in the EMC Support
Matrices.
After gathering information about the storage, connectivity, and server environments, the
planning and design process can begin. Using the gathered data, informed decisions can be
made in terms of the topology implemented, protocols used, storage allocation layouts and
distance extensions. This course presents an introduction to each of these concepts.
Throughout each step of the process, results should be documented. This makes the creation of
the actual design much easier. The first "official" documentation that is produced is the Draft
Design. The Draft Design is used to verify the layout and check for any inaccuracies in the
blueprint. After the review, make any necessary changes, verify accuracy, and prepare to
implement the design.


Follow these guidelines during the planning phase: from the equipment needs, gather the necessary software driver and firmware revisions. Rank server and storage configurations from most critical to least critical, and plan from there. Servers and storage with dual Fibre Channel ports are easy to migrate; other configurations take more thought. Applications in direct-attached storage environments require some downtime, unless clustering is used.
Evaluate the gathered information about customer equipment and requirements, and review what has been planned more than once. Get agreement from the customer that the plan matches what they had in mind.


The following are some of the metrics that should be gathered before putting together a SAN design: the make, model, and OS patch version of the servers; the servers' I/O rates; the block size of the data to be stored; data criticality and availability requirements; the location of components; and the type of data to be stored.
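To keep this data organized, the gathered metrics can be recorded in a simple structured inventory. The sketch below (Python, with invented server names and values) captures the fields listed above and orders the result from most critical to least critical, as the planning guidelines suggest:

```python
from dataclasses import dataclass

@dataclass
class ServerProfile:
    """One row of a pre-design data-gathering worksheet (field names invented)."""
    make_model: str        # server make and model
    os_patch_level: str    # OS and patch version
    io_rate_iops: int      # measured I/O rate
    block_size_kb: int     # dominant block size of the data to be stored
    criticality: str       # "high" / "medium" / "low"
    location: str          # physical location of the component
    data_type: str         # type of data to be stored

# Illustrative entries only
inventory = [
    ServerProfile("Sun E450", "Solaris 9", 800, 64, "low", "DC1-Rack03", "file shares"),
    ServerProfile("Dell R710", "RHEL 5.4", 4500, 8, "high", "DC1-Rack12", "OLTP database"),
    ServerProfile("HP DL380", "Windows 2003 SP2", 1500, 32, "medium", "DC2-Rack01", "email"),
]

# Plan from most critical to least critical
rank = {"high": 0, "medium": 1, "low": 2}
plan_order = sorted(inventory, key=lambda s: rank[s.criticality])
```

A worksheet like this also makes it easy to verify each entry against the EMC Support Matrices during pre-qualification.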


After gathering information about the storage, connectivity, and server environments, you can
begin the design. This is not an easy task for the designer. The customer might not be able to
provide you with the required information, so you may need to research this data. Do not focus
on the SAN components before performing a comprehensive analysis. Many factors are
involved in this analysis and a cursory review of the environment may cause an incomplete or
insufficient SAN design.
Pre-qualify the environment. Request a list of systems and verify their compatibility with your EMC SAN implementation. Determine whether interface cards are available for each system's hardware vintage and OS level. Negotiate with the customer a policy on whether to connect all systems or only some; in many environments, it is impractical to upgrade a system that is within a year of its end-of-lease. Determine whether new systems, applications, or services are driving the SAN project, and assist the customer with selecting compatible products.


SAN implementations must be approached with thorough analysis and planning. SAN planning requires careful study and documentation of the customer's environment. Usually a conceptual or
high-level design exists from either presales or planning activities. However, to successfully
deploy a complex network without any unnecessary delays, an enormous amount of detail needs
to be considered and documented.
The process starts with determining the high-level SAN fabric topology and what type of
switches go in which locations. After the physical topology is determined, the logical topology
needs to be designed, including zoning and VSANs. For Inter-Switch Links, it is important to
determine the required number of links. These architectural aspects of the design need to be
documented to create a final blueprint for the installation.


Consider the benefits to the customer of managing information over its lifecycle. Leveraging EMC products and procedures may reduce costs and maintenance effort. Review the customer's requirements and analyze where managing information across its lifecycle benefits their business needs. Prepare a presentation that outlines the proposed changes and enhancements, and explain to the customer the benefits and what these changes mean for the project in both cost and time.


There are many tools to gather data and develop a baseline for performance. EMC maintains
tools to help in the data gathering phase.


The environmental data must be compiled and used to determine the basic building blocks of the storage area
network design. As an example, you may be implementing a relatively simple infrastructure, such as one that
includes 50 servers, three FC switches, and two storage arrays. This could be considered a "traditional" SAN. A
traditional SAN environment utilizes the Fibre Channel protocol. This standard includes switch and hub
specifications and was designed as a non-routable MAN, or Metropolitan Area Network, protocol.
You have also determined that your SAN has to provide data availability and allow for growth. This will drive your
choice of connectivity device and topology. Data availability indicates that the hosts will have a minimum of two
HBAs which will increase the overall port count. The switches and directors must support the overall port density.
You want to ensure that there is no single point of failure throughout the entire fabric for HA support. This also drives
your topology decision. Even though you are starting with a traditional SAN, which could be considered a simple
design, one of your goals is growth support. You must consider how a simple design can, and probably will, grow
to support your infrastructure. With that in mind, the best design to build toward is core-to-edge. However, there are
many topologies that can be implemented. We will examine the characteristics of all topologies and their merits
later in this module.
Another consideration in the design is distance. In a traditional SAN environment, distance could be considered a
limitation. Traditional Fibre Channel spans distances measured in hundreds of meters, with extensions possible up
to about 200 kilometers using technologies such as DWDM (Dense Wave Division Multiplexing). However, there
is a growing need to deploy solutions over even greater distances, often thousands of kilometers. IP-based storage
protocols have emerged and thrived in response to these factors. TCP/IP is a mature protocol with no inherent
distance limitations. It has therefore become an attractive choice for the underlying transport mechanism in such
long-distance applications. Currently, the major protocols used for long-distance SAN extensions are FCIP and
iFCP.
Even where distance is not a problem, IP-based block storage using iSCSI can provide a relatively low-cost solution. This information could impact your design. We examine this later in this module.


This lesson discusses PowerPath deployment considerations.


In an active-active storage array, if multiple interfaces exist to a LUN, they all provide equal
access to the logical device. Active-active means all interfaces to a device are active
simultaneously. In a configuration that includes an active-active array, PowerPath can spread the workload across all zoned paths. In addition, PowerPath can fail over across any zoned path to the LUN. EMC Symmetrix, IBM ESS, Hitachi Lightning, and EMC Invista are examples of
active-active arrays.
In the active-passive array, a LUN is assigned to port 0 and port 1 on Storage Processor A. In
this system, SPA is designated as the primary or active route to the device, and therefore all I/O
is directed down the paths through SPA to the device. PowerPath load balances I/O across these
active paths as shown by the green arrows.
In the active-passive array, the LUN can also be accessed through Storage Processor B but only
after the device has been re-assigned, or trespassed, to SPB. This path is referred to as a passive
path. PowerPath does not send I/O down passive paths. Passive paths are shown by the orange
arrows.
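The active-passive behavior described above can be sketched as a small model. This is illustrative only, not PowerPath's actual implementation; the path names and the simple round-robin policy are invented:

```python
import itertools

class ActivePassiveLun:
    """Toy model of multipath I/O to a LUN on an active-passive array."""

    def __init__(self, active_paths, passive_paths):
        self.active_paths = list(active_paths)    # paths via the owning SP (SPA)
        self.passive_paths = list(passive_paths)  # paths via the other SP (SPB)
        self._rr = itertools.cycle(self.active_paths)

    def next_path(self):
        """Load-balance across active paths only; passive paths carry no I/O."""
        return next(self._rr)

    def trespass(self):
        """Re-assign LUN ownership to the other SP after a failure."""
        self.active_paths, self.passive_paths = self.passive_paths, self.active_paths
        self._rr = itertools.cycle(self.active_paths)

lun = ActivePassiveLun(["SPA-0", "SPA-1"], ["SPB-0", "SPB-1"])
assert lun.next_path() == "SPA-0"         # I/O goes down SPA paths only
lun.trespass()                            # SPA failed; LUN trespassed to SPB
assert lun.next_path().startswith("SPB")  # I/O now uses the former passive paths
```

For an active-active array, by contrast, all four paths would sit in the active set and no trespass step would be needed.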


Open Systems clustering technology manages application availability by detecting failures and
restarting high-availability applications on a surviving cluster node. Deploying PowerPath in the cluster eliminates application downtime due to a channel failure.
PowerPath will detect the channel failure and use alternate channels so that the cluster software
does not have to reconfigure the cluster to keep the applications running.
PowerPath improves the availability of the applications running in the cluster. Many clusters are
deployed to provide performance scalability. PowerPath’s load balancing can help the customer
maximize performance and get the greatest value from their cluster investment.


The features of each PowerPath application are listed on this slide. Take a few minutes to look
them over and become familiar with them.


One of the main value propositions is the ability to deploy PowerPath throughout an IT
environment. As you can see from the matrix, PowerPath supports most major operating
systems, virtual platforms and storage arrays for both EMC and non-EMC storage.
Note that PowerPath OS and storage array support is based upon qualification. It varies across
PowerPath Multipathing, Migration Enabler and Encryption with RSA. Refer to the latest
EMC Support Matrix for additional details.


This slide summarizes PowerPath's broad support for operating systems, virtual platforms, storage arrays, connectivity, and clusters. In general, PowerPath is a heterogeneous solution.

Check the E-Lab Interoperability Navigator for updated connectivity options.

Please refer to the ESM for the latest support information.


PowerPath has several licenses available. Full PowerPath licenses permit the user to take
advantage of the full set of PowerPath load balancing and path failover functionality.
A PowerPath SE license supports back-end failover only.
A PowerPath/VE license enables full PowerPath multi-pathing in a virtual environment.
Currently supported environments are listed. Check Powerlink for more up-to-date information
regarding supported environments.


This slide lists current PowerPath load balancing policies. Please take a moment to review them.


PowerPath V5.1 and later integrates ALUA (Asymmetric Logical Unit Access), a pseudo-active-active communication method used to pass I/Os between Storage Processors. The primary benefit of PowerPath with ALUA on CLARiiON is that PowerPath is optimized to work with CLARiiON arrays, providing balanced distribution of LUNs across SPs and consistent, predictable performance.
PowerPath with ALUA on CLARiiON supports concurrent use of ALUA and non-ALUA modes,
and can seamlessly switch between these modes. PowerPath uses optimized paths, but will
automatically and continuously monitor and adjust active paths between optimized and non-
optimized. Any path adjustments happen more rapidly than auto-trespass as the host does not
have to initiate I/O down the alternate path.
Other benefits of ALUA mode: it enables handling back-end failures without host failover, and its user-friendly interface displays auto-detected ALUA or non-ALUA connections with CLARiiON nice names. ALUA also allows the operator to be less concerned with connections, as host applications can send I/O for a LUN to either SP.
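The optimized/non-optimized path preference can be sketched as follows. This is a simplified illustration of ALUA-style selection, not PowerPath's actual algorithm; path names and states are invented:

```python
def pick_paths(paths):
    """Prefer optimized paths (through the owning SP); fall back to
    non-optimized paths only when no optimized path is alive.
    paths: list of (name, state) tuples, state in
    {'optimized', 'non-optimized', 'dead'}."""
    optimized = [p for p, s in paths if s == "optimized"]
    if optimized:
        return optimized
    return [p for p, s in paths if s == "non-optimized"]

paths = [("SPA-0", "optimized"), ("SPA-1", "optimized"),
         ("SPB-0", "non-optimized"), ("SPB-1", "non-optimized")]
assert pick_paths(paths) == ["SPA-0", "SPA-1"]

# If both optimized paths fail, I/O shifts to the non-optimized paths
# without waiting for a host-initiated trespass.
failed = [("SPA-0", "dead"), ("SPA-1", "dead"),
          ("SPB-0", "non-optimized"), ("SPB-1", "non-optimized")]
assert pick_paths(failed) == ["SPB-0", "SPB-1"]
```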


EMC PowerPath Migration Enabler is EMC’s solution for nondisruptive data migrations during
planned-downtime situations. This functionality gives IT professionals more flexibility in the
time it takes to perform migrations.


EMC/RSA host-based encryption integrates PowerPath path management technology with RSA
Key Manager & Encryption technology. CLARiiON and Symmetrix arrays are supported; other
non-EMC arrays are in qualification. Data is encrypted at the volume level. The PowerPath for
Data at Rest Encryption facility supports Open Systems platforms.
Implementation is immediate and non-disruptive. No application or hardware modifications are
required. Performance testing has shown little to no performance impact, although it is
configuration dependent.


PowerPath with encryption provides a solution for accessing replicated data. In this example,
PowerPath encryption is installed on the source and target host. Encrypted data is written to the
source array. The data is then replicated to a target array. Once the target host is configured both
to access the replica and to access the appropriate encryption key via the RSA Key Manager, the
target host then has access to the encrypted data. The result is a straightforward integration of a
given security model, aligned with a corporate security policy, with ongoing business continuity
operations.


Now that we have established the basics for SAN design, let’s look at some of the key aspects of
FC SAN design in more detail. We will review the Fibre Channel Protocol and examine FC
SAN topologies and data flow concepts.


The physical topology can be described as the actual hardware components in a fabric and the Fibre Channel cabling that interconnects them. A physical topology also includes the geographical locations of the switches and the distances between them.
Some examples of the components and concepts used to describe the physical topology of a
fabric are the number of switches in the fabric, the number of hops between any two switches,
the number of ports per switch, the number of ISLs between switches, and the physical distance
between any two switches.
Final identification of your physical topology, and later expansion of that topology, relies not only on your understanding of each of these factors' impact, but also on your selection of data protection schemes, logical topology, and management paradigm.


When describing a particular physical topology, we can discuss it in terms of its number of tiers.
The number of tiers in the fabric is based on the number of switches that are traversed between
the farthest two points in the fabric. It should be noted that this number is based on the
infrastructure constructed by the fabric topology and does not concern itself with how the servers and storage are connected across the switches.
Increasing the number of tiers in a fabric also increases the distance that a fabric management
message must travel to reach every switch in the fabric. Increasing that distance can affect the
time it takes to propagate and complete a fabric reconfiguration event (for example, adding a
new switch), or zone set propagation event. The diagram displays one-tier, two-tier, and three-
tier physical fabrics.
As the figure shows, a single-tier physical topology has a single switch. A two-tier topology has
up to two switches between any two endpoints in the fabric. A three-tier topology has up to three
switches between any two end points in the fabric. Currently, EMC recommends that the size of
the fabric not exceed three hops, which equates to a four-tier physical fabric topology.
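The hop count between two switches is simply a shortest path over the switch graph, and the tier count is the number of switches traversed (hops + 1). A minimal sketch, with invented switch names:

```python
from collections import deque

def hops(fabric, src, dst):
    """BFS over the switch graph; returns the number of ISLs (hops)
    traversed from src to dst, or None if unreachable."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nbr in fabric[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, dist + 1))
    return None

# Three switches in a cascade: edge1 - core - edge2 (a three-tier fabric)
fabric = {"edge1": ["core"], "core": ["edge1", "edge2"], "edge2": ["core"]}
assert hops(fabric, "edge1", "edge2") == 2   # 2 hops = 3 switches traversed
```

Under this counting, the recommended three-hop limit corresponds to a four-tier physical fabric, as stated above.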


Single-switch and mesh fabrics are the simplest SAN implementations. With the advent of newer directors with higher port density, they may be the best solution for a single-location SAN in which no distance considerations are addressed.
When using a single switch, the design aspects of ISLs and hop counts are negated, and bandwidth is not a consideration because a director-class backplane can support full connectivity. A single switch is also the simplest fabric to manage and maintain.
A full-mesh fabric is any collection of Fibre Channel switches in which each switch is connected
to every other switch in the fabric by one or more ISLs. For best host and storage accessibility,
EMC recommends that a full-mesh fabric contain no more than four switches.
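One practical reason for the four-switch recommendation: a full mesh with n switches needs at least n(n-1)/2 ISLs (one per switch pair), so ISL and port consumption grows quadratically. A quick sketch:

```python
def full_mesh_isls(n, isls_per_pair=1):
    """Minimum ISL count for a full mesh of n switches, with
    isls_per_pair ISLs between each pair of switches."""
    return n * (n - 1) // 2 * isls_per_pair

assert full_mesh_isls(4) == 6    # 4-switch full mesh: 6 ISLs
assert full_mesh_isls(8) == 28   # doubling the switches more than quadruples ISLs
```

Every ISL consumes a port on each of its two switches, so ports available for hosts and storage shrink quickly as the mesh grows.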


A fabric design that has gained wide acceptance in the industry, core/edge fabric is extremely
flexible and easy to scale. It is based on the assumption that there will always be more host ports
than storage ports, and that providing equal access to the storage from anywhere in the fabric is
a marketable benefit to fabric management and storage administrators.
A core/edge design is built by consolidating storage access into a centrally accessible pool at the
logical center of the fabric. From this core, we can attach as many edge switches as necessary to
service the hosts that require access to this storage. Each edge switch will be connected to each
core switch, maintaining the fabric's accessibility as well as its robustness.
A simple core/edge fabric is designed to provide all hosts with single-hop access to all storage in
the fabric. EMC recommends that the storage core be built out as a full mesh to perpetuate
multiple paths to the storage, multiple paths for fabric management, and shortest-path access to
all switches in the fabric.


A compound core/edge can be formed by merging two or more simple core/edge fabrics into a
single fabric environment. Initially, hosts would be assigned volume access rights to storage that
was available one hop away. Core switches that are one hop away from an edge switch in a
compound or complex core/edge fabric are known as primary core switches. A core switch that
is a primary core switch for one edge switch may also be a secondary core switch for an edge
switch that is two physical hops away.
Once storage is exhausted for an edge switch's primary core switches, these hosts would then be
assigned to storage that was two hops away on the secondary core switch. Access to the shared
storage traffic on a secondary core switch would traverse the ISLs at the back end of the fabric.
The compound core/edge model maintains a robust, highly efficient traffic model, while
reducing the required ISLs and thus increasing the available ports for both storage and host
attachments. It also offers a simple method for the expansion of two or more simple core/edge
fabrics into a single environment. By connecting the core switches from simple core/edge
fabrics into a full mesh, you can easily create a compound core topology.
Both the compound and the complex core/edge design models produce a physically larger-tiered
fabric, which could result in slightly longer fabric management propagation times over smaller,
more compact designs. Also, neither compound nor complex core/edge fabrics provide for
single-hop access to all storage.


In a complex core/edge fabric, the layout of each edge changes slightly from the compound
core/edge model. Each edge switch is attached to two of the core switches in a round-robin
fashion. This figure provides a simple diagram of how the edge switches may be attached to
core switches. The diagram omits the pairs of edge switches that can be attached to the diagonal
pairs of switches.
Once storage is exhausted for an edge switch's primary core switches, these hosts would then be
assigned to storage that was two hops away on the secondary core switch. Access to the shared
storage traffic on a secondary core switch would traverse the ISLs at the back end of the fabric.


The complex core/edge fabric inherits the benefits from both the simple core/edge and the
compound core/edge designs. The complex core/edge increases overall fabric availability by
limiting the effects on the edge switches from multiple failures on the core switches. Since the
edge switches are more evenly distributed, a failure of any two core switches would result in
fewer accessibility impacts to edge switches and attached hosts. Further availability can be
added by spreading hosts across edge switches that are not connected to the same set of core
switches. Note that switch failures are rare and unexpected, and multiple simultaneous failures
of Fibre Channel components are rarer still.
While the potential availability of the complex core fabric is increased over other designs, the
designs are more complex and may add to management and troubleshooting time if care is not
taken to document the environment.


ISLs add redundancy to the fabric to protect the network from component failures. The amount
of redundancy that needs to be added to the fabric depends on factors such as the business value
and the amount of resources that can be spared for increased availability.
When adding ISLs in a fabric, always connect each switch to at least two other switches in the
fabric. This ensures multiple paths to the edge switches if one of the intermediate switches or
paths to those switches fails.
ISL utilization should always be monitored to identify unused, under-utilized, or over-utilized ISLs.
Unused ISLs could become candidates for removal if they do not represent the only secondary
path a host would have to its storage in the event of a switch or ISL failure.


Frames are routed across the fabric via an algorithm that uses a combination of lowest-cost and shortest-path-first routing. Lowest cost refers to the speed of the links in the routes: as the speed of a link increases, the cost of the route decreases. Shortest path first (SPF) refers to the number of ISL hops between the host and its storage.
EMC strongly recommends that you construct your fabric to have multiple equal, lowest-cost,
shortest-path routes between any combination of host and storage. This means that you may
have two ISLs between every switch in the fabric or you may have single links between
switches, but multiple equal-cost/length paths that travel through different switch combinations.
Routing tables on each switch are updated and recalculated during events that change the status
of links in the system. Routes are assigned to devices for each direction of the communication
and the route one way may differ from the return route. The routes are assigned based on a
round-robin approach that is initiated as the device is logged into the fabric. These routes are
static for as long as the device is logged in or for as long as routes do not have to be recalculated
due to a fabric event.
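The lowest-cost idea can be illustrated with the conventional FSPF link costs (1 Gb/s = 1000, 2 Gb/s = 500, 4 Gb/s = 250): a route over several fast ISLs can cost less than a single slow hop. The routes below are invented for illustration:

```python
LINK_COST = {1: 1000, 2: 500, 4: 250}   # link speed in Gb/s -> FSPF cost

def route_cost(link_speeds_gbps):
    """Total FSPF cost of a route, given the speed of each ISL hop."""
    return sum(LINK_COST[s] for s in link_speeds_gbps)

# Route 1: one hop over a 1 Gb/s ISL. Route 2: two hops over 4 Gb/s ISLs.
routes = {"direct-1G": [1], "two-hop-4G": [4, 4]}
best = min(routes, key=lambda r: route_cost(routes[r]))
assert route_cost(routes["direct-1G"]) == 1000
assert route_cost(routes["two-hop-4G"]) == 500   # lower cost despite more hops
assert best == "two-hop-4G"
```

This is why the recommendation above asks for multiple equal, lowest-cost, shortest-path routes: only equal-cost routes can share load after a round-robin assignment.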


Performance is dependent on the number and size of I/O requests per second. A reasonable design guideline can be estimated using the EMC fan-out recommendations for storage ports. This can be referred to as oversubscription. ISL oversubscription is the ratio of input ports whose traffic might cross between switches to the number of ISLs over which that traffic could cross.
EMC currently recommends two Symmetrix Fibre Channel director groups per ISL for initial
fabric planning.
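The oversubscription ratio itself is a simple division; the port counts below are illustrative:

```python
def isl_oversubscription(host_ports, isls):
    """Ratio of host-facing ports whose traffic may cross between
    switches to the number of ISLs carrying that traffic."""
    return host_ports / isls

# 24 host ports on an edge switch funneling through 4 ISLs to the core:
assert isl_oversubscription(24, 4) == 6.0   # a 6:1 oversubscription ratio
```

Because hosts rarely drive their links at full rate simultaneously, some oversubscription is normal; the right ratio depends on the measured I/O rates gathered during analysis.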


Trunking involves the aggregation of several physical ISLs between any two adjacent switches
into one logical unit for the purposes of ISL load balancing. Trunks of ISLs can now distribute
their load more evenly across all participants. Each vendor handles trunking slightly differently.


Brocade ISL trunking is an optional product available for all Brocade 2 Gbit/sec Fibre Channel
fabric switches or directors. This technology is used for optimizing performance and simplifying
the management of a multi-switch SAN fabric. When two, three, or four adjacent ISLs are used
to connect two switches, the switches automatically group the ISLs into a single logical ISL or
trunk.


Port Channels provide a point-to-point connection over an ISL or EISL. Multiple links can be
combined into a Port Channel. FSPF sees a Port Channel as one logical ISL. Link failures within
a Port Channel do not cause the switch to rebuild its routing tables unless all paths in that Port
Channel are no longer available.
Port Channels also provide high availability on an ISL: if one link fails, traffic previously carried on that link is switched to the remaining links.


Usually, topologies are designed using switches from the same vendor, which presents a problem when consolidating SANs built from different vendors' switches. For such special situations, EMC supports a mode called Open Fabric that interconnects B-Series, MDS-Series, and/or M-Model switches.
This slide provides an example of possible Open Fabric or Native Mode configurations. Technically, Open Fabric is not really a topology but more of a supported configuration.

In this lesson, we discuss the design and deployment of iSCSI.

SCSI is a popular family of protocols which enable systems to communicate with I/O devices,
especially storage devices. SCSI protocols are request/response application protocols with a
common standardized architecture model and basic command set, as well as standardized
command sets for different device classes.
As system interconnects move from the classical bus structure to a network structure, SCSI has
to be mapped to network transport protocols. IP networks now meet the performance
requirements of fast system interconnects and, as such, are good candidates to "carry" SCSI.
While existing networking gear can be used if available, EMC-qualified configurations do have
specific requirements for the LAN being deployed for iSCSI usage. Please refer to the Support
Matrix for LAN requirements.

iSCSI was developed by the Internet Engineering Task Force. Since support for iSCSI is in a
rapidly evolving state, it is especially critical to qualify the configuration under proposal using
the latest available support information. This information can be found in the EMC Support
Matrix and in the release notes for current versions of all hardware and software components
involved in the design. Also refer to the appropriate sections in the latest Network Topology
Guide for supported topologies.

All hosts with NICs require an EMC-qualified version of the iSCSI initiator software, which is typically a free download for supported components. iSCSI initiator host software is also available for other operating systems, but those versions are not supported.
Currently, the only qualified iSCSI HBA is the QLogic QLA4010, and it is supported on Windows and Linux hosts only. Refer to the Support Matrix for the EMC-qualified firmware and driver versions.

It is critical that the storage array be properly sized to meet the anticipated needs from all active
hosts.
Array sizing requires two different perspectives. Do you have sufficient raw capacity, in gigabytes of usable storage? And will the system meet the I/O throughput performance requirements of all existing and newly added hosts?
With larger numbers of active hosts, array caching strategies usually become less effective, and disk spindle speed becomes more significant from a performance perspective. When sizing the number of disks for an anticipated throughput rate, you may need to provision significantly more raw capacity than is strictly required or specified.
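The two sizing perspectives can be sketched as a small calculation: size the array by capacity and by throughput, then take the larger spindle count. This is a minimal sketch; the per-drive capacity and IOPS figures below are illustrative rule-of-thumb assumptions, not EMC-published numbers.

```python
import math

# Hypothetical sizing helper: drive_gb and drive_iops are assumed
# rule-of-thumb values for a single spindle.
def spindles_required(usable_gb, host_iops, drive_gb=146, drive_iops=180):
    by_capacity = math.ceil(usable_gb / drive_gb)      # enough raw gigabytes
    by_throughput = math.ceil(host_iops / drive_iops)  # enough disk IOPS
    return max(by_capacity, by_throughput)

# 4 TB usable but a heavy 9,000 IOPS workload: throughput, not raw
# capacity, dictates the spindle count here.
print(spindles_required(4000, 9000))   # → 50
```

In this example, capacity alone would call for 28 drives, but the throughput requirement pushes the count to 50, illustrating why heavy workloads force you to provision more raw capacity than strictly needed.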

For manual discovery, the host is configured with the iSCSI address of one target and issues a SendTargets request to discover all the other available targets. This can be cumbersome, but it is guaranteed to work with any set of targets.
SLP and iSNS both require compatible software in the target array when FC array ports are
presented as iSCSI targets by bridging.
Check the array or router documentation for the availability of these features.

Refer to the latest version of the EMC Support Matrix to precisely identify those restrictions that
apply to your specific configuration and topology. These restrictions are in the footnotes at the
end of each support table.

An iSCSI SAN is ideal for users interested in implementing new networked storage
environments. Since iSCSI runs on familiar IP networks, there is no need to set up and learn a
new networking infrastructure. To build an iSCSI storage network in a data center, NICs or
iSCSI host bus adapters can be used in servers, along with iSCSI storage devices and a
combination of switches and routers. iSCSI uses the same block-level SCSI commands as direct-
attached storage. iSCSI provides compatibility with user applications such as file systems,
databases, and web serving, allowing users to realize the full benefit of a SAN.

Co-existence of Fibre Channel SANs and iSCSI SANs is possible with qualified bridges, with some restrictions on the environment. The network must be a local Layer 2 network dedicated solely to the iSCSI configuration and must be engineered with no packet loss or duplication. iSCSI sessions may need to be manually re-established, and a pre-site qualification is required for each implementation.
Network design is key to making sure iSCSI works. Real-world implementations require Gigabit
Ethernet. Consider iSCSI a local area technology; segregate iSCSI traffic from general traffic.
Layer 2 VLANs are particularly good for this type of design. Oversubscription is OK for general
use LANs, but not for iSCSI.

With few exceptions, if the underlying Ethernet network is functioning properly, iSCSI performs
remarkably fast. Generally, it is recommended to segment off the iSCSI traffic so it is not routed
or mixed with public traffic. Unless there is network saturation, there should not be any issues.
Although iSCSI technology is capable of a 12 to 1 initiator to target ratio, EMC currently
supports 8 to 1.

In this lesson, we examine the technologies that enable connection and management of remote
SANs.

IP-based storage protocols have emerged and thrived in response to a variety of factors.
Traditional Fibre Channel spans distances measured in hundreds of meters, with extensions
possible up to about 200 kilometers using extension technologies such as DWDM. However,
there is a need to deploy solutions over even greater distances, often thousands of kilometers.
TCP/IP is a mature protocol with no inherent distance limitations. It has become an attractive
choice for the underlying transport mechanism in such long-distance applications. Currently, the
major protocol used for long-distance SAN extensions is FCIP.

SAN technology was originally designed as an alternative to direct-attached storage, operating within campus or MAN distances. In light of recent events such as natural disasters, terrorism, and power concerns, geographically dispersed SANs have become a necessity.
When choosing an extension technology, several factors should be taken into consideration.
Some typical uses of FCIP for SAN extension are as follows.
Data Replication enables synchronous or asynchronous recovery between storage arrays to
support regulatory requirements and to meet SLAs. Network latency may be a factor in disk I/O
service time and application performance with synchronous replication, so factors affecting
latency must be considered, including distance and traffic delays in connectivity devices.
Remote management, monitoring, and BUR enable remote access for administration, for
example, backup for disaster recovery using tape or disk. Host initiator to remote storage
enables access to storage arrays in another site or data center.
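The latency contribution of distance mentioned above can be estimated quickly. This is a rough sketch using the common rule of thumb of about 5 microseconds of propagation delay per kilometer of fiber; the figure and the helper are illustrative, not an EMC sizing formula, and real links add device and queuing delays on top.

```python
# Estimate the round-trip propagation delay a synchronous write incurs
# purely from distance. us_per_km=5.0 is an assumed rule-of-thumb value.
def round_trip_ms(distance_km, us_per_km=5.0):
    return 2 * distance_km * us_per_km / 1000.0

# 100 km of separation adds roughly 1 ms of round-trip time to every
# synchronous write, before any equipment latency.
print(round_trip_ms(100))   # → 1.0
```

Doubling the distance doubles this floor on I/O service time, which is why synchronous replication is usually confined to shorter distances while asynchronous replication covers the longer ones.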

There are strict rules for supported topologies using multi-protocol routers. Please refer to the
Network Topology Guide for extensive coverage of supported topologies.

FC-to-FC routing connects SAN islands to enable shared access across storage resources from
any fabric, with the benefit of administration and fault isolation of separately managed fabrics.

FCIP tunneling can be accomplished with supported multi-protocol routers. At each site, all
storage ports that need to use the FCIP link must be isolated into a single VSAN. When the FCIP
link is established, the two VSANs will merge. This results in a single VSAN that spans the
distance.
Those host and storage ports that generate or service local I/O traffic only need to be in one or
more separate VSANs. The intent is to separate all DR or replication traffic from host-to-storage
local traffic.
The diagram shows a supported topology with two routers in each of two different sites: LOCAL
and REMOTE. If router redundancy is not required, it is allowable to use one router only in each
site.

Cisco switches allow one physical switch to be carved into several virtual fabrics called VSANs, or Virtual SANs. Creating a VSAN does not require multiple switches; a VSAN can be created on a single switch. VSANs offer the ability to build larger consolidated fabrics and still
maintain the required security and isolation between applications beyond what is currently
offered through zoning. This technology scales SANs beyond current limitations providing
secure, cost-effective, and manageable advantages.

A VSAN provides the ability to create separate virtual fabrics on top of the same redundant physical infrastructure. Instead of building a completely isolated physical switch or group of switches, VSANs achieve the same isolated environments while eliminating the added expense of building physically separate fabrics. Spare ports within the fabric can be quickly and non-disruptively assigned to existing VSANs. Using VSANs, the same security and isolation can be replicated virtually on the same physical infrastructure.
VSANs provide hardware-based isolation, plus a full replicated set of Fibre Channel services for
each VSAN.

The definitions of the distance extension technologies that are being used in storage area
networks are shown on the slide.

DWDM systems support standard SONET/SDH short-reach optical interfaces to which any
SONET/SDH compliant "client" device can attach. Within the DWDM system, a device called a
transponder converts the SONET/SDH compliant optical signal from the client back to an
electrical signal. This electrical signal is then used to drive a DWDM laser. Each transponder
within the system converts its client's signal to a slightly different wavelength. The wavelengths
from all of the transponders in the system are then optically multiplexed onto a single fiber.
In the receive direction of the DWDM system, the reverse process takes place. Individual
wavelengths are filtered from the multiplexed fiber and fed to individual transponders, which
convert the signal to electrical and drive a standard SONET/SDH interface to the client. The
total number of signals that can be multiplexed varies with vendor equipment, although current systems allow for up to 56 lambdas.

FCIP solutions encapsulate Fibre Channel packets and transport them via TCP/IP enabling
applications that were developed to run over Fibre Channel SANs to be supported under FCIP.
This enables organizations to leverage their current IP infrastructure and management resources
to interconnect and extend Fibre Channel SANs.
FCIP can transport existing Fibre Channel services across the IP network such that two or more
interconnected SANs can appear as a single large SAN and be managed by traditional SAN
management applications. FCIP enables SAN applications to support additional protocols
without modification. These applications might include disk mirroring between buildings in a
campus network or remote replication over the WAN.

Now we look at some applications of the concepts previously discussed.

Fibre Channel over Ethernet, or FCoE, is a new protocol being defined by the T11 standards committee. It extends Fibre Channel into the Ethernet environment. As a physical interface, it uses Converged Enhanced Ethernet NICs, FCoE HBAs, or CNAs. Essentially, FCoE encapsulates Fibre Channel frames within Ethernet frames, providing a transport protocol more efficient than TCP/IP.

FCoE is a protocol that supports the direct mapping of Fibre Channel over Ethernet. A generic
Ethernet network may lose frames due to congestion. A proper implementation of appropriate
Ethernet extensions allows a full duplex Ethernet link to provide lossless behavior.
The protocol mapping defining Fibre Channel over Ethernet uses an underlying Ethernet layer
composed only of full duplex links providing a lossless behavior when carrying FCoE frames.
The Lossless Ethernet layer provides sequential delivery of FCoE frames.

Today, each application class has its own interface—Ethernet for networking, Fibre Channel for
storage, and Infiniband for clustering. The result is three different networks, each with an
adapter for each system or server, three cables and switches, three skill sets and tools, and three
different management facilities.
FCoE uses Converged Enhanced Ethernet. The result of a converged network is fewer adapters,
cables, and switches. This results in lower costs and better utilization of resources.

Emulex Converged Network Adapters are intelligent multi-protocol adapters that provide host
LAN and Fibre Channel SAN connectivity over 10 Gbps Ethernet using FCoE and Enhanced
Ethernet functionality.
The QLogic family offers 10 Gigabit per second speed and full hardware offload for FCoE
protocol processing. Full hardware offload for FCoE protocol processing reduces system CPU
utilization for I/O operations, which leads to faster application performance and higher levels of
consolidation in virtualized systems.

A Converged Network Adapter, or CNA, appears to the host as two PCI devices: a network adapter and a Fibre Channel adapter. If a request is a network transaction, it is delivered to the lossless MAC. In the case of Fibre Channel, the frames are encapsulated as FCoE by the FCoE encapsulation engine and then sent to the lossless MAC for delivery.
Received traffic is processed by the lossless MAC, which filters FCoE frames and delivers traffic either to the Ethernet NIC, if it is a network transaction, or to the FCoE engine for decapsulation, after which the frames are forwarded to the Fibre Channel HBA device.

Converged Enhanced Ethernet eliminates Ethernet’s lossy behavior and makes it suitable for
transporting storage networking. The four main additions are Priority-based Flow Control, DCB
Capability Exchange Protocol, Congestion Notification, and Enhanced Transmission Selection.

The FCoE layer in Fibre Channel over Ethernet encapsulates the higher layer Fibre Channel
content. It allows for FCoE Nodes and FCoE Forwarders to communicate through Ethernet ports
over a Lossless Ethernet network. FCoE Virtual Links replace the physical Fibre Channel links
by encapsulating FC frames in Ethernet frames. An FCoE Virtual Link is identified by the pair
of MAC addresses of the two link end points.

FCoE frames carry only Ethernet overhead. They differ from the iSCSI, FCIP, and iFCP stacks because no IP headers encapsulate FCoE. Eliminating the IP layer increases the efficiency of every frame sent.
Jumbo frames, which have long existed in Ethernet, are frames that carry a payload of more than 1500 bytes. They were not very popular with regular Ethernet because speeds were too slow and frames were likely to be lost. With enhanced, lossless Ethernet, jumbo frames are viable since speeds are much faster. FCoE must use jumbo frames.
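The efficiency argument can be sketched numerically. The header sizes below are typical, simplified byte counts (preamble, padding, and optional headers are ignored), chosen only to illustrate the comparison rather than to match any particular implementation.

```python
# Illustrative per-frame header sizes in bytes; these are simplified
# assumptions for comparison only.
ETH = 14 + 4                            # Ethernet header + FCS
ISCSI_OVERHEAD = ETH + 20 + 20 + 48     # + IPv4 + TCP + iSCSI header
FCOE_OVERHEAD = ETH + 14 + 24 + 4       # + FCoE encap + FC header + CRC

def efficiency(payload_bytes, overhead_bytes):
    return payload_bytes / (payload_bytes + overhead_bytes)

# With a jumbo-frame payload, dropping the TCP/IP layers leaves FCoE
# with measurably less overhead per frame than iSCSI.
print(round(efficiency(2048, FCOE_OVERHEAD), 3))    # → 0.972
print(round(efficiency(2048, ISCSI_OVERHEAD), 3))   # → 0.951
```

The absolute difference per frame is small, but it applies to every frame sent, which is the point the slide makes about removing the IP layer.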

The FCP layer operates normally. The data frames are created and passed to the FCoE layer to
be encapsulated and transported over the Ethernet layer.

This section introduces information on operational aspects of FCoE.

FCoE Link End Points have virtual ports associated with them. Notice the different virtual ports
in the diagram. End ports on the hosts are VN_Ports; fabric ports are VF_Ports; and ports for
switch ISL are VE_Ports.

The MAC addresses on the NEX-5020 switches associated with the VE_Ports and VF_Ports are
universal MAC addresses from a switch pool assigned to the manufacturer by IEEE.
VN_Port addresses can be assigned to the ENode in one of two ways: Server Provided MAC Addresses (SPMA) or Fabric Provided MAC Addresses (FPMA). With SPMA, the MAC address is burned in by the CNA manufacturer or configured by an administrator; this option is not used at this time.

The Connectrix NEX-5020 uses Fabric Provided MAC Addresses or FPMA. With this option,
the switch assigns the MAC address to the attached VN_Port. The VN_Port is created and the
MAC address is assigned during the Fibre Channel Login process. FPMA addresses are not
universal, but local addresses to the SAN. These addresses have OUIs with the U/L bit set to 1.
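As a sketch of how such an address is built: per FC-BB-5, an FPMA concatenates the fabric's 24-bit FC-MAP prefix (default 0E:FC:00) with the 24-bit FC_ID assigned at fabric login. The FC_ID value used below is hypothetical.

```python
# Sketch of FPMA construction: 24-bit FC-MAP prefix plus 24-bit FC_ID.
def fpma(fc_id, fc_map=0x0EFC00):
    addr = (fc_map << 24) | fc_id
    octets = [(addr >> (8 * i)) & 0xFF for i in range(5, -1, -1)]
    return ":".join(f"{o:02X}" for o in octets)

# The first octet, 0x0E, has its U/L bit set, marking the address as
# locally administered rather than universal.
print(fpma(0x010203))   # → 0E:FC:00:01:02:03
```

Because the low 24 bits are the Fibre Channel address the fabric assigned, the MAC address is guaranteed unique within the SAN but is meaningful only locally, matching the slide's point about the U/L bit.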

The most common FCoE implementation is the integration of an FCoE switch, connected either
by ISL as an E_Port, or used as a tunnel to a Fibre Channel fabric containing storage arrays. The
FCoE switch connects to a network switch as well. The hosts have dual path connection through
CNAs to the FCoE switch.
The current EMC FCoE implementation recommendations include having the EMC storage
connected through Fibre Channel and hosts connected through FCoE. The hosts must have two
CNAs each to process FCoE frames. There is no support for Virtual Ports at this time. EMC only
supports Ethernet connectivity directly to the FCoE switch device. Additional switches and
routers are not supported at this time.

Let’s look at some applications of the concepts previously discussed.

After gathering information about the storage, connectivity, and server environments, the design
process can begin. This is not an easy task for the designer. Most of the information required
may not be known by the customer and requires research. Do not focus on the SAN components
until a comprehensive analysis is performed. Many factors are involved in this analysis. A
cursory review of the environment may result in an incomplete or insufficient SAN design.
When planning a SAN implementation, determine preliminary requirements including: distance,
environment, and performance.

Designing a fabric involves many variables that require consideration. Each variable entails a
separate design decision that must be made. Each design decision will help you create a fabric
design that is appropriate for your business information model.
When considering building redundancy into the environment, you must always weigh the
opportunity cost of how much redundancy you need or want. Opportunity cost is the cost
associated with the possible impact of not using the resources for other activities. For example,
each extra port that is used for redundant ISLs cannot be used to attach more servers and storage
to the environment.
Fabric topologies that aggregate traffic into unbalanced scenarios should be avoided. For
example, an intermediate switch that has two ISLs coming in from the server-tier switches and
only one ISL leaving toward the storage-tier switches can lead to performance degradation.
Resource consolidation includes both physical and logical consolidation. Physical consolidation
involves the physical movement of resources to a centralized location. Now that these resources
are located together, you may be able to more efficiently use facility resources, such as power
protection, personnel, and physical security. The trade-off that comes with physical
consolidation is the loss of resilience against a site failure.

The biggest impact to SAN performance is realized through a solid SAN architecture and
design. The infrastructure should be chosen carefully to ensure that the customer’s requirements
are addressed in the most efficient way possible.
First, determine the number of ports required. Then add 20% to the calculation for near-term
expansion. Underestimating the number of connections that need to be attached to the SAN
causes MANY problems. Even if these ports are not used at the outset of the installation, extra
ports prove to be invaluable when it comes time to troubleshoot connectivity problems. Extra
ports can also be utilized as ISLs when expanding a fabric with multiple chassis.
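The 20% headroom rule above is simple enough to capture in a one-line helper; this is a minimal sketch, and the port counts in the example are hypothetical.

```python
import math

# Total attach ports plus ~20% headroom for near-term growth,
# rounded up to whole ports.
def ports_to_provision(host_ports, storage_ports, growth=0.20):
    return math.ceil((host_ports + storage_ports) * (1 + growth))

# 52 host ports and 16 storage ports today:
print(ports_to_provision(52, 16))   # → 82
```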
The next step in building an effective and supportable SAN is looking at allocating and
managing the resources that are added to the SAN. LUN masking can be controlled from either
the host or server side or from the storage device itself. LUN masking is a means of slicing up
the physical storage into logical partitions that can be presented and accessed by the servers. The
applications that are implemented must be understood and examined. Application requirements
may dictate some infrastructure requirements that must be architected into the SAN design. If
the applications require 100% availability, then all devices that support this application must be
designed with redundant connections. Servers require multiple HBA cards to ensure multiple
failover paths are available. Switches that are deployed must support high availability.

For the most up-to-date information, always consult the EMC Support Matrix, available through
E-Lab Interoperability Navigator at elabnavigator.EMC.com, under the PDFs and Guides tab.
Refer to the EMC Networked Storage Topology Guide, also available through E-Lab
Interoperability Navigator. It provides a top-down view of networked storage and assists the
network designer in designing a suitable networked storage infrastructure. Documentation and
release notes can be found on Powerlink.

Enterprises may require SAN connectivity to multiple sites, including disaster recovery or
business continuance centers, user access to storage between multiple campuses, and remote and
branch offices.
Remote SAN connectivity may need to support multiple applications, including data replication, remote backup, and remote data access.
Different sites and applications might indicate a range of connectivity requirements, including
high bandwidth and low latency for synchronous data replication over short distances or lower
bandwidth and higher latency for asynchronous replication over longer distances.

Consolidation of the SAN fabric is the first step toward achieving a key goal for many
enterprises—storage consolidation. The consolidation of localized low-end storage systems into
centralized, high-performance systems enables more efficient use of storage resources. Even a
small gain in efficiency can result in a significant reduction in TCO.
A mix of high-end and low-end applications leads to a mix of high-end and low-end storage.
Follow these general guidelines for selecting storage for each application. Fibre Channel storage
arrays with Fibre Channel disks are used for high-performance, low-latency applications.
Slower, lower-cost serial drive arrays can be used for low-end applications. Rarely accessed data
can be put on lower-cost storage arrays instead of on tape. ATA drives can be used for backups instead of tape, for better access to data.
Keep in mind that you might need to build out the LAN to support iSCSI. This may mean
upgrading to Gigabit Ethernet, installing additional cabling, implementing VLANs, or
implementing multi-protocol label switching.

When calculating drive IOPS, you must remember that this figure depends on spindle speed.
There are calculators available on Powerlink to assist in determining optimal IOPS count.
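The spindle-speed dependency can be illustrated with rule-of-thumb figures. The per-spindle IOPS values below are illustrative planning assumptions, not measured values; the Powerlink calculators mentioned above should be used for real sizing.

```python
# Illustrative rule-of-thumb IOPS per spindle by rotational speed.
RULE_OF_THUMB_IOPS = {15000: 180, 10000: 140, 7200: 80}

def array_iops(spindles, rpm):
    return spindles * RULE_OF_THUMB_IOPS[rpm]

# Thirty 15k spindles versus thirty 7.2k spindles:
print(array_iops(30, 15000), array_iops(30, 7200))   # → 5400 2400
```

The same spindle count delivers less than half the IOPS at the lower rotational speed, which is why spindle speed matters as much as drive count when sizing for throughput.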

The term "best practice" describes a process rather than a series of documents or steps. When we talk about a best practice, we must remember that what is best for one situation may not be optimal in another situation. For example, deciding that a Core/Edge design is the "best" topology and applying that to a simple SAN with one host and four storage ports results in a SAN that is too large, too expensive, and unwieldy for the customer.
It is preferable to use a compilation of references to determine the best practice for a given
scenario. The two principal sources of information for designing a SAN are the Topology Guide
and the EMC Support Matrix. These documents can be found on Powerlink. This information is
updated monthly and should be referenced for the most up-to-date qualifications and designs.

Calculate the total storage port count. This is the minimum number; for performance reasons, you may wish to increase it. Good topology design minimizes the impact of an interruption caused by a failed component. To accomplish this, the design needs to consider the end-to-end channel. End-to-end analysis begins with the HBA and its link to the switch, then the components of the switch, particularly the port cards, and finally the link to the storage and the Fibre Channel Director in a Symmetrix or the port in a CLARiiON. You also need to determine the availability strategy: Maximum Availability requires dual redundant fabrics; High Availability requires a single fabric with dual connectivity.

To calculate a total port count:

1. Identify the number of Fibre Channel HBAs in each server, and calculate the total.
2. Identify the number of Fibre Channel Director ports in each Symmetrix, and calculate the total.
3. Add the totals from steps 1 and 2.
4. Divide the total from step 3 by (number of ports in the selected device – ports on one card). This allows four spare ports in each unit in case a card goes bad, and is an optimal scenario – recall that in an HP environment, moving the cable to another card only preserves connectivity but does not adjust the Volume Manager's view of the devices. Round up, as there are no half ports. This is the minimum number of directors or switches required.

Consider availability and performance requirements after determining the minimum required port count.
For full redundancy, although linked, we create two fabrics, with hardware divided between cabinets. The minimum number of rack locations is two, dividing the equipment equally. Divide first by two and then by rack density to determine the number of racks required.
Calculate the number of service processors required, or whether remote service processors are required, according to the selected platform.
Special requirements and adjustments: determine requirements for extended-distance ports, and consider cabling support if there are no options for using the proper cable. Remember that extended-link support requires a change in buffer credits (BB_Credit) to maintain link efficiency. Also determine requirements for inter-switch links.
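The arithmetic in the steps above can be sketched as follows. This is a minimal worked example; the 64-port switch and 4-port card figures are hypothetical.

```python
import math

def switches_required(hba_ports, array_ports, switch_ports, ports_per_card):
    total = hba_ports + array_ports            # steps 1-3: sum all attach ports
    usable = switch_ports - ports_per_card     # step 4: hold one card's ports spare
    return math.ceil(total / usable)           # round up -- no half switches

# 40 HBA ports + 16 Symmetrix director ports on 64-port switches with
# 4-port cards: one switch per fabric, two switches for dual fabrics.
per_fabric = switches_required(40, 16, 64, 4)
print(per_fabric, 2 * per_fabric)   # → 1 2
```

The rounding up matters: pushing the attach count past the usable-port threshold adds a whole switch per fabric, not a fraction of one.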

Switch port assignment does not affect performance in a director, but it does in a switch. Switches are bounded by application-specific integrated circuits, or ASICs, causing unequal performance when an ASIC boundary is crossed. This cost may be negligible in a low-performance application, but recognizable in a high-throughput application.
Design within ASIC boundaries when practical. Symmetrix or CLARiiON port assignment always affects performance: in a consolidation topology, the workload of multiple servers combines into a single port, so summarize the I/O workloads in terms of steady state, peak, batch, and backup.
Design for 70% of measured maximum IO/sec or MB/sec per storage port. Design switch
environments within ASIC boundaries. Overloading ports results in response time degradation.
Set system design goals based upon customers’ desired maximum utilization. This reduces the
fan-out values. Recognize that the fan-out ratio is not a function of the number of servers that
can be attached to a channel, but rather of the aggregate I/O rate and block size measured against
the theoretical throughput capabilities of Fibre Channel.
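The 70% design rule above can be checked with a small sketch: sum the server workloads sharing a storage port and compare against the utilization budget. The workload figures and the 200 MB/s measured-maximum port throughput are hypothetical.

```python
def port_fan_out_ok(server_workloads_mbps, port_max_mbps, target=0.70):
    """Check a consolidation design: the summed server workloads on one
    storage port must stay within 70% of the port's measured maximum."""
    demand = sum(server_workloads_mbps)
    budget = port_max_mbps * target
    return demand <= budget, demand, budget

# Hypothetical: four servers sharing one port with a 200 MB/s measured max.
ok, demand, budget = port_fan_out_ok([45, 30, 25, 35], 200)
print(ok, demand, budget)  # True 135 140.0
```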

SAN Solutions Design Concepts V4 Part 1 - 87



When configuring a SAN, the effort often begins with a capacity problem and ends with a short-term
resolution to that problem. Purchase decisions are based upon a capacity plan, yet the
implementation often must absorb pent-up demand, reducing the expected capabilities of the
environment. Connectivity is often an issue: each operating system has its own limits, based in
part on HBA capabilities.

SAN Solutions Design Concepts V4 Part 1 - 88



Accessibility refers to the ability of hosts to access the storage that is required to service their
applications. Accessibility can be measured by the ability to physically connect and
communicate with the individual storage arrays, as well as the ability to provide enough
bandwidth resources to meet the full-access performance requirements. A storage array that is
physically accessible, but cannot be accessed within accepted performance limits because of
over-saturated paths to the device, may be just as useless as an array that cannot be reached
physically.
Accessibility’s link to available bandwidth leads us to consider the differences in building a
statistical bandwidth infrastructure and a guaranteed bandwidth infrastructure. Guaranteed
bandwidth infrastructures provide enough bandwidth resources for the full potential of the
devices on the fabric to be used simultaneously. Statistical bandwidth fabrics are developed to
handle only a fraction of the potential bandwidth.
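The difference between guaranteed and statistical bandwidth design can be illustrated with a simple ISL-count calculation between two switches. The port counts, speeds, and the 25% statistical fraction below are assumptions for illustration.

```python
import math

def isls_needed(host_ports, host_port_gbps, isl_gbps, utilization=1.0):
    """ISL count between two switches: a guaranteed-bandwidth design sizes
    for the full potential of every port (utilization=1.0), while a
    statistical design sizes for only a fraction of that potential."""
    demand_gbps = host_ports * host_port_gbps * utilization
    return math.ceil(demand_gbps / isl_gbps)

# Hypothetical: 16 hosts at 2 Gb/s crossing 2 Gb/s ISLs.
print(isls_needed(16, 2, 2))        # guaranteed bandwidth: 16 ISLs
print(isls_needed(16, 2, 2, 0.25))  # statistical at 25%: 4 ISLs
```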
As the complexity of the fabric increases, more emphasis and reliance must be placed on the
management application and on adherence to well-defined management policies. Complexity can
also impact how an environment is secured.

SAN Solutions Design Concepts V4 Part 1 - 89



Availability is a measurement of the amount of time that data can be accessed, compared to the
amount of time the data is not accessible because of issues in the environment. Lack of
availability might be a result of failures in the environment that cause a total loss of paths to the
device, or it might be an event that caused so much bandwidth congestion that the access
performance renders the device virtually unavailable.
Availability is impacted not only by the choice of components used to build the fabric, but also
by the ability to build redundancy into the environment. The correct amount of redundancy
allows processes to gracefully failover to secondary paths and continue to operate effectively.
Too little redundancy built into the fabric can cause bandwidth congestion, performance
degradation, or in some cases a loss of availability.
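The value of redundant paths described above can be quantified with the standard parallel-availability formula: with independent paths, the device is unreachable only if every path fails at once. The 99.9% per-path figure is an assumed example, not a measured value.

```python
def path_availability(single_path_availability, n_paths):
    """Availability of n independent, redundant paths: 1 minus the
    probability that all n paths are down simultaneously."""
    return 1 - (1 - single_path_availability) ** n_paths

# Hypothetical: each fabric path is 99.9% available on its own.
print(round(path_availability(0.999, 1), 6))  # 0.999 with a single path
print(round(path_availability(0.999, 2), 6))  # 0.999999 with dual fabrics
```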

SAN Solutions Design Concepts V4 Part 1 - 90



Storage consolidation and a SAN often precede server consolidation. The best candidates for
consolidation are systems within the same family, first by workgroup or job purpose and second
by server type. Preparing for consolidation during SAN planning reduces a lot of risk, and
simplifies the transition process.
Resource consolidation includes the concepts of both physical and logical consolidation.
Physical consolidation involves the physical movement of resources to a centralized location.
Logical consolidation is associated with bringing components under a unified management
infrastructure and creating a shared resource pool, such as a SAN. Logical consolidation does
not allow you to take full advantage of the site consolidation benefits, but it does maintain site
failure resilience.

SAN Solutions Design Concepts V4 Part 1 - 91



Flexibility is a measure of how rapidly you are able to deploy, shift, and redeploy new storage
and host assets in a dynamic fashion without interrupting your currently running environment.
An example of flexibility is your ability to simply connect new storage into the fabric and then
zone it to any host in the fabric. You can do this without any interruption in the I/O to any of the
other hosts in your environment. Flexibility can also be seen in the ability of the Fibre Channel
directors to perform code loads, component replacement and insertion while the system is
running without any noticeable impact to the hosts.

SAN Solutions Design Concepts V4 Part 1 - 92



Scalability is a measure of how easily a fabric can be extended so that it can accept more
storage, more hosts, or more switches. Adding storage and servers involves not only the physical
connections required to attach these components, but also the internal and external bandwidth
required to handle the actual throughput of these devices during usage.
External bandwidth is associated with the number of ISLs a switch can support, as well as the
individual bandwidth of each ISL. Some switches now support both 2 Gb/s and 1 Gb/s ISLs.
Other switches support a feature called trunking, which allows sharing the bandwidth resources
of multiple ISLs simultaneously.
Scalability is also enhanced by a switching component's ability to allow the online insertion of
port expansion cards or additional optics. Allowing hot insertion of devices promotes the
purchase and usage of partially populated chassis that can be upgraded as the need arises.

SAN Solutions Design Concepts V4 Part 1 - 93



Security refers to the ability to protect your operations from external and internal malicious
intrusions, as well as the ability to protect yourself from accidental or unintentional data access
by unauthorized parties.
Security can range from restricting physical access to the servers, storage, and switches by
placing them in a locked room, to the logical security associated with zoning, volume access
control and masking, SID lockdown, and port binding.
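The layered logical controls named above can be modeled as a toy membership check: an initiator reaches a LUN only if fabric zoning and array-side LUN masking both permit it. All WWPNs and names here are invented for illustration and do not correspond to any real configuration.

```python
# Toy model of two logical-security layers: zoning (enforced by the
# fabric) and LUN masking (enforced by the array). Hypothetical WWPNs.
zones = {
    "host_a_sym1": {"10:00:00:00:c9:aa:bb:01", "50:06:04:8a:cc:dd:01"},
}
lun_masks = {  # array port -> initiator WWPNs allowed to see its LUNs
    "50:06:04:8a:cc:dd:01": {"10:00:00:00:c9:aa:bb:01"},
}

def can_access(initiator, target):
    """Access requires both layers to agree: some zone contains both
    WWPNs, and the target's mask lists the initiator."""
    zoned = any(initiator in z and target in z for z in zones.values())
    masked = initiator in lun_masks.get(target, set())
    return zoned and masked

print(can_access("10:00:00:00:c9:aa:bb:01", "50:06:04:8a:cc:dd:01"))  # True
print(can_access("10:00:00:00:c9:aa:bb:02", "50:06:04:8a:cc:dd:01"))  # False
```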
Increasing the level of security directly impacts the flexibility of the environment. As security is
increased, changes become more complicated. Whenever a new security policy is desired, it
should be documented and reviewed for its impact on the accessibility, flexibility, and
supportability of the environment.

SAN Solutions Design Concepts V4 Part 1 - 94



Supportability is the measure of how easy it is to effectively identify and troubleshoot issues, as
well as identify and implement a viable repair solution in the environment. The ability to
troubleshoot may be enhanced through good fabric designs, purposeful placement of servers and
storage on the fabric, and a switch's ability to identify and report issues on the switch itself or in
the fabric.
The supportability measurement takes into account the usefulness of internal error reporting,
logging, and any diagnostic utilities that are shipped with the component. A product that is
easily supported is an asset to an organization, because of its ability to be brought back online
without the time delays associated with shipping replacements or scheduling on-site service
visits. Many switches have the ability to identify issues and, through policy
management, initiate automatic fail-over and recovery procedures. Some switches, as well as the
Symmetrix, also have the ability to identify issues and initiate call-home procedures to alert
support personnel of the issue.

SAN Solutions Design Concepts V4 Part 1 - 95



Available mechanisms that promote a secure SAN include: access control, zoning, LUN
masking, port binding, management keys, protocols, encryption, and physical access control
mechanisms. These mechanisms can vary by topology, vendor, and business needs.

SAN Solutions Design Concepts V4 Part 1 - 96



Secure SAN architectures usually require that multiple security domains, or zones, be
implemented and that these security zones be formally documented and controlled to meet
regulatory auditing requirements. Security zones can exist between servers and switches,
between switches, between SAN management systems and switches, and between administrators
and access control management systems.
In this example, you can see where security enhancements are a valuable addition to the SAN
design.

SAN Solutions Design Concepts V4 Part 1 - 97



This slide presents a few points to consider when planning a SAN.

SAN Solutions Design Concepts V4 Part 1 - 98



These are the key points covered in this training. Please take a moment to review them.
This concludes the training. Please proceed to the Course Completion slide to take the
Assessment.

SAN Solutions Design Concepts V4 Part 1 - 99
