
Configuring VMware vSphere Hosts to Access Dell SCv2000 and SC4020 Arrays with SAS Front-end HBAs

Dell Storage Engineering
July 2016

A Dell Deployment and Configuration Guide


Revisions
Date            Description
October 2015    Initial release
February 2016   Updated technical support information
July 2016       Updated to include support for SC4020 storage arrays with the release of SCOS 7.1

Acknowledgments
Author: Chuck Armstrong

THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES.
THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND.
© 2015-2016 Dell Inc. All rights reserved. Dell and the Dell logo are trademarks of Dell Inc. in the United States and/or other jurisdictions. All other
marks and names mentioned herein may be trademarks of their respective companies.

Table of contents
Revisions
Acknowledgments
Executive summary
Audience
1 Introduction
1.1 Dell Storage SCv2000 Series overview
1.2 Dell Storage SC4020 overview
1.3 VMware vSphere overview
1.4 Transport options overview
2 SC Series array with SAS host path configuration options
2.1 Multi-path configuration
2.2 Single path configuration
2.3 Hybrid configuration
3 Configure VMware hosts to access SC Series storage using SAS
3.1 Prerequisite steps to prepare the environment
3.2 Install and configure SAS HBAs in the VMware hosts
3.3 Connect VMware hosts to the SC Series array with SAS cables
3.4 Create server and cluster objects on the SCv2000 or SC4020
3.4.1 Create server objects
3.4.2 Create server cluster object on the SCv2000 or SC4020
3.5 Create and map storage volumes to VMware hosts
3.6 Connect to mapped storage, create datastore, and configure Multi-path settings
3.6.1 vCenter environment
3.6.2 Standalone VMware ESXi 6.0/5.5 server
A Technical Support and resources
A.1 Referenced or recommended documentation

Executive summary
The Dell SCv2000 and SC4020 arrays provide the option of leveraging serial-attached SCSI (SAS) host
bus adapters (HBAs) to connect physical servers to external storage. The SCv2000 and SC4020 arrays with
SAS front-end ports present a transport configuration option in addition to Fibre Channel and iSCSI. This
paper provides administrators with step-by-step guidance for configuring VMware vSphere hosts with
SAS HBAs to access storage on the SCv2000 and SC4020 arrays when they are equipped with SAS front-
end ports.

Audience
This document was written for SAN and VMware administrators seeking additional guidance for
configuring vSphere hosts with SAS HBAs to access storage on SCv2000 and SC4020 arrays when
equipped with SAS front-end ports.

1 Introduction
This guide provides administrators with supplemental information on how to configure VMware vSphere
hosts with SAS HBAs to access SAN storage on the SCv2000 and SC4020 arrays when they are equipped
with SAS front-end ports. SAS front-end support was first offered with the SCv2000 array and has now
been extended to include the SC4020 array with the release of SCOS 7.1.

For most environments, either Fibre Channel or iSCSI will continue to be the preferred front-end transport
option for connecting hosts with SAN storage. These transports offer better scale and design flexibility
than direct-attached SAS, but they also require additional switch hardware and expertise. Some of the
design factors to consider when using SAS HBAs are covered in detail below.

1.1 Dell Storage SCv2000 Series overview


The SCv2000 Series arrays are entry-level SANs that offer many of the same features as other SC Series arrays at an entry-level price. The SCv2000 arrays are available with three different front-end transport options: Fibre Channel, iSCSI, or SAS. Customers decide which type of front-end connectivity is right for their environment at the time of purchase.

Figure 1 Front and rear views of an SCv2000 array with SAS front-end ports

For more information about the full line of SCv2000 arrays and the different transport options, refer to the following documentation at http://www.dell.com/support/home/us/en/04/product-support/product/storage-sc2000/manuals:

- SCv2000 Release Notes
- SCv2000 Getting Started Guide
- SCv2000 System Deployment Guide
- SCv2000 Owner's Manual

1.2 Dell Storage SC4020 overview
The SC4020 arrays are full-featured SANs that are also available with three different front-end transport options: Fibre Channel, iSCSI, or SAS. As with the SCv2000, customers decide which type of front-end connectivity is right for their environment at the time of purchase.

Figure 2 Front and rear views of the SC4020 with SAS front-end ports

For more information about the SC4020, refer to the owner's manual at http://www.dell.com/support/home/us/en/19/product-support/product/dell-compellent-sc4020/manuals.

1.3 VMware vSphere overview


This document assumes the reader is familiar with the basics of VMware vSphere and its components.
Several online sites offer detailed information on vSphere and vCenter, and as such, that information will
not be replicated here. This guide focuses specifically on how to configure vSphere hosts with SAS HBAs
to access SAN storage on SCv2000 and SC4020 arrays when configured with SAS front-end ports.

For more information on VMware platforms, the VMware Knowledge Base is a great place to start.

1.4 Transport options overview


For most environments, either Fibre Channel or iSCSI will continue to be the preferred method for
configuring front-end connectivity between the servers and SAN storage. Fibre Channel and iSCSI are
mature, proven technologies that can scale to include a large number of hosts and storage across multiple
locations. When configured with redundant fabrics, these transports offer highly resilient and reliable data
transfer between SAN storage and hosts. However, they require additional components (such as switches,
HBAs and cabling) and expertise to support the technology.

For small or entry level environments with fewer hosts, SAS front-end connectivity can provide
administrators with the same performance and resiliency as a Fibre Channel or iSCSI fabric without the
cost and complexity of additional hardware components. There are, however, a few considerations to
keep in mind when choosing SAS:

Scale: The number of physical hosts per SCv2000 or SC4020 array is limited to a maximum of four (with
path redundancy) or eight (without path redundancy).

Design: The physical hosts must be located in close proximity to the SC Series array, typically in the same
or an adjacent rack. A typical SAS cable is from one to six meters in length.

2 SC Series array with SAS host path configuration options
When the SCv2000 or SC4020 array is configured with SAS front-end ports, eight ports (four per
controller) are available to connect server hosts with SAS HBAs to SAN storage. When hosts are configured
for multi-path, the SC Series array will support a maximum of four hosts (two SAS ports per host). When
hosts are configured for single-path, the SC Series array will support a maximum of eight hosts (one SAS
port per host). Hybrid designs are also possible with some hosts configured to use single-path and some
configured to use multi-path.

Figure 3 SCv2000/SC4020 with dual integrated controllers, front-end SAS ports circled in red

Note: The customer assumes all risks associated with the design they choose to implement, and
therefore, should carefully consider the pros, cons and risks associated with configuring hosts for single-
path or multi-path. Enterprise grade hardware often allows administrators the design flexibility to enable
configurations that may not follow typical best practices. However, alternate designs might still provide
acceptable levels of protection and performance for a given environment.

2.1 Multi-path configuration


To provide redundancy and ensure uptime, verify that each host always has at least two data paths to
external storage. In this way, if a single data path fails, another data path will allow the host to continue to
access SAN storage. VMware vSphere offers native Multi-path I/O (MPIO) support that is easy to configure.

As shown in Figure 3, when an SCv2000 or SC4020 array is configured with SAS front-end ports, it
provides eight ports (four per controller). When leveraging MPIO, the SC Series has capacity for up to four
physical hosts (two SAS ports per host).

Figure 4, Figure 5 and Figure 6 illustrate examples of three possible VMware vSphere configurations when
using SAS front-end ports with MPIO.

As shown in Figure 4, an SC Series array supports up to four hosts with a multi-path configuration and a
4-node vSphere cluster. To protect against a service event that affects a single controller, connect each
host to a SAS port on each controller.

(Figure content: four Dell R630 hosts, each with a dual-port SAS PCIe card, forming a 4-node vSphere cluster; each node is cabled to both controllers of an SCv2000/SC4020 with SAS front-end ports.)

Figure 4 4-node vSphere cluster with multi-path

In Figure 5, an SC Series array is configured with two 2-node vSphere clusters.

(Figure content: four Dell R630 hosts with dual-port SAS PCIe cards arranged as two 2-node vSphere clusters; each node is cabled to both controllers of an SCv2000/SC4020 with SAS front-end ports.)

Figure 5 Two 2-node vSphere clusters with multi-path

As shown in Figure 6, administrators can configure the attached physical hosts in a variety of ways,
including stand-alone and clustered vSphere servers, up to four total hosts.

(Figure content: four Dell R630 hosts with dual-port SAS PCIe cards; three nodes form a 3-node vSphere cluster and one is a stand-alone vSphere host, each cabled to both controllers of an SCv2000/SC4020 with SAS front-end ports.)

Figure 6 Array configured with a 3-node vSphere cluster and a stand-alone vSphere host

2.2 Single path configuration


Although it is neither a best practice nor recommended to configure a host with only a single path to external storage, it is important to note that VMware offers node-level protection for vSphere clusters. With vSphere clusters, node redundancy (High Availability and Fault Tolerance) allows virtual machine (VM) resources or other workloads to continue running (or be restarted) on other nodes if a node fails. Because of this added node-level resiliency when clustering with vSphere, the impact of a single node failure due to a single-path design is reduced.

While connecting hosts with SAS HBAs to an SC Series array using a single path is not recommended, especially when supporting critical workloads with high I/O demand, there are some use cases where it might be desirable. One case is to achieve better SAN utilization in an environment that permits it, such as a test or development environment. The benefit of better SAN utilization from attaching more hosts might outweigh the increased risk of a host failure due to a single-path design.

If administrators are running critical workloads but need to attach more than four hosts, then the SC Series
array with SAS front-end is probably not the right solution. In this case, leveraging Fibre Channel or iSCSI
would be the recommended best practice.

(Figure content: eight Dell R630 hosts with dual-port SAS PCIe cards arranged as a 4-node vSphere cluster and two 2-node vSphere clusters; each node is cabled with a single SAS path to an SCv2000/SC4020 with SAS front-end ports.)

Figure 7 Array configured with eight single-path vSphere hosts in three clusters

As shown in Figure 7, when using a single path with an SC Series array, up to eight hosts can be configured
in a variety of stand-alone or cluster configurations. In the example, three separate vSphere clusters are
configured from the eight nodes. Note that the nodes in each cluster are split evenly between the two
SC Series array controllers to help protect the environment if a service event affects one of the controllers. This design allows half of the nodes in each cluster to stay online, assuming that the unaffected nodes are able to maintain access to the shared disks.

2.3 Hybrid configuration
An SC Series array also supports a mix of hosts with both single-path and multi-path configurations.

(Figure content: six Dell R630 hosts with dual-port SAS PCIe cards arranged as one 2-node multi-path vSphere cluster and two 2-node single-path vSphere clusters, cabled to an SCv2000/SC4020 with SAS front-end ports.)

Figure 8 Array with a mix of both single and multi-path vSphere clusters

In this example, the SC Series array is configured with six vSphere hosts. The top two-node cluster hosts a critical workload with high I/O demand and is therefore configured for MPIO. To maximize the storage investment, single-path is used to configure two additional 2-node clusters running less critical workloads with lower I/O demand. As noted in Section 2.2, split the nodes of each single-path cluster between the two SC Series array controllers. This helps protect the environment if a service event affects one of the two controllers.

Variations of the above configuration examples (figures 4-8) are possible based on the needs of the
environment.

3 Configure VMware hosts to access SC Series storage using SAS
This part of the document provides instructions for configuring VMware vSphere hosts with SAS HBAs to
access SAN storage on SC Series arrays with SAS front-end ports.

3.1 Prerequisite steps to prepare the environment


Before configuring the vSphere hosts with SAS HBAs to access an SC Series array, review and complete
the tasks listed in Table 1.

Note: It is outside the scope of this document to cover the basic setup of the SC Series array or vSphere
hosts.

Table 1 Prerequisite steps checklist

Tasks:

- At least one SCv2000 or SC4020 array equipped with SAS front-end ports must be configured and available.

- SCv2000 arrays require Dell Enterprise Manager (EM) Client software version 2015 R1 or newer to support SAS front-end. Use this software to discover, configure, and manage the SCv2000 array. Installing a Data Collector is supported but not required. Refer to the release notes and administrator's guide as needed.

- SC4020 arrays require Dell Storage Manager (DSM) version 2016 R2 or newer to support SAS front-end. In addition, SC4020 arrays with SAS front-end must be running Storage Center Operating System (SCOS) 7.1 or newer. Note: Enterprise Manager was rebranded as Dell Storage Manager in 2016.

- Obtain a supported SAS PCIe HBA for each vSphere host, along with one (for single-path) or two (for multi-path) SAS cables of the desired length. Each server host must have a compatible PCIe slot available for the HBA. As of the date of this document, the only supported SAS card is the Dell 12Gb SAS HBA, Dell part number 405-AAES (LSI chipset). For more information, refer to the Dell PowerEdge Controller 9 HBA User's Guide. Other 6 Gbps and 12 Gbps SAS HBAs are not currently supported for SAS front-end. Hardware requirements may change; for the latest information on supported hardware, refer to the Dell Storage Compatibility Matrix.

- Verify that your physical server hosts are supported. As of the date of this document, only Dell PowerEdge generation 13 (G13) servers are supported. Older PowerEdge servers and non-Dell servers are not supported. The servers should have an on-board disk for the OS deployment because boot-from-SAN is not supported with SAS HBAs.

- Rack-mount your server hosts. They must be deployed in close proximity to the SCv2000 or SC4020 (within reach of the SAS cables).

- Stage your server hosts with a supported VMware vSphere ESXi version and patch them to the desired level. vSphere ESXi 5.1 Update 2 or newer is required to support the Dell 12 Gbps SAS HBA drivers. vSphere 6.0 is used in the examples shown in this document. As a best practice, use the Dell Lifecycle Controller to update the internal hardware components of your hosts and to broker the OS install using the latest Dell server driver pack. A quick version check from the ESXi Shell is sketched after this checklist.
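
As a minimal verification sketch (assuming ESXi Shell or SSH access to each host), the following commands report the installed ESXi version, update level, and build so the requirement above can be confirmed before proceeding:

    # Report the ESXi version, update level, and build
    vmware -vl
    esxcli system version get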

3.2 Install and configure SAS HBAs in the VMware hosts


After the prerequisite steps are complete, install a supported SAS HBA in each host.

Figure 9 Dell server Lifecycle Controller

Note: If the Dell 12 Gbps SAS HBA is installed in the server before it is staged with the OS, the Dell Lifecycle Controller (press [F10] at boot) can be used to update the firmware and install the drivers from the latest Dell server driver pack (from ftp.dell.com; no user name or password required).

Figure 10 Install a SAS HBA in an available PCIe slot

1. Following electrostatic discharge (ESD) safety precautions, turn off the vSphere host and install a supported SAS HBA card into an available PCIe slot. In this example, the HBA is installed in a full-height PCIe slot on a Dell R630 server. Half-height PCIe slots are also supported with this particular HBA.
2. Turn on the vSphere host and follow the steps at http://www.dell.com/support/article/HOW11081 to update the SAS HBA driver. The HBA in this example is listed with a description of Avago (LSI Logic) / Symbios Logic Avago (LSI) 3008. A command-line sketch for verifying the driver installation follows this list.

Note: Even though vSphere may detect and install a native driver for the SAS HBA, it is important to
update this driver following the Dell Support article at http://www.dell.com/support/article/HOW11081.

3. Repeat steps 1 and 2 above to install and update Dell 12 Gbps SAS HBAs for additional vSphere
server hosts.
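
The following esxcli sketch shows one way to verify and update the driver from the ESXi Shell. The VIB name (lsi-msgpt3) and the offline-bundle file name and path are assumptions based on the typical native driver for the SAS3008 chip; confirm the exact names against the Dell support article and the bundle you download.

    # List installed driver VIBs; the SAS3008-based HBA typically uses the lsi-msgpt3 native driver
    esxcli software vib list | grep -i msgpt3

    # Install the updated driver from an offline bundle copied to a local datastore
    # (the path below is a placeholder for the bundle obtained from Dell/VMware)
    esxcli software vib install -d /vmfs/volumes/datastore1/lsi-msgpt3-offline-bundle.zip

    # After a reboot, confirm the HBA is claimed and presented as a vmhba adapter
    esxcli storage core adapter list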

3.3 Connect VMware hosts to the SC Series array with SAS cables
Review the vSphere host configuration options in Section 2 for multi-path, single-path, and hybrid configurations.
Many different configurations are supported based on the number of available vSphere hosts and the
needs of the environment.

(Figure content: two Dell R630 hosts with dual-port SAS PCIe cards forming a 2-node multi-path vSphere cluster, each cabled to both controllers of an SCv2000/SC4020 with SAS front-end ports.)

Figure 11 Configuration example showing an array and two R630 hosts with SAS

In this example, instructions are provided for configuring a two-node vSphere cluster using two Dell
PowerEdge R630 servers, two Dell 12Gbps SAS HBAs, and four SAS cables for multi-path.

Modify these steps to fit the design of your environment.

1. Starting with the first vSphere host, connect a SAS cable from the server to SAS Port 1 in the top
controller module of the SC Series array. The server does not need to be turned off before
connecting or removing SAS cables.
2. In this example, since multi-path will be used, connect a second SAS cable from the host to SAS
Port 1 in the bottom controller of the SC Series array. If the host will not be configured to use
multi-path, then skip this step.

Note: Cable and configure one server at a time so that it is easier to determine the SAS paths that are
associated with a specific host when completing the server configuration steps on the SC Series array. In
addition, this method makes troubleshooting easier if there is a path issue.
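
As a hedged aid for matching a cabled host to the HBA that the array discovers, the sketch below (assuming ESXi Shell access) lists each adapter with its UID, which for a SAS HBA contains the adapter's SAS address. Noting this value before cabling the next host makes it easier to pick the correct HBA object in the Dell Storage Client during the steps that follow.

    # Note the UID (SAS address) of the newly installed HBA on this host so it can be
    # matched to the HBA presented by the Dell Storage Client in the next section
    esxcli storage core adapter list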

3.4 Create server and cluster objects on the SCv2000 or SC4020


After connecting the host to the SC Series array, create server and cluster objects using the Dell Storage
Client.

3.4.1 Create server objects
1. Launch the Dell Storage Client and connect to the SCv2000 or SC4020 array. It is possible to
connect to a Data Collector or directly to the array. This example explains how to connect directly
to the array by entering its management IP address (10.211.18.170 in this example) and credentials.

2. Click the Storage tab, expand Volumes and Servers, and create folders and subfolders that
logically group your volume and server objects. In this example, a simple tree was created.

3. Right-click the appropriate Servers subfolder and select Create Server.

Note: The Create Server from VMware vSphere or vCenter option is currently not operational with front-
end SAS configuration. VMware has plans to include this functionality with front-end SAS configurations
in the next release of vSphere.

4. In the wizard, provide a name for the host server. In this example, the server is named SASESXi01. Select the correct operating system from the drop-down list. Check the box for the associated SAS HBA. Since only one vSphere host is currently connected, only one HBA object is listed. Click OK to complete the wizard.

5. The new server object is now listed under the vSphere Cluster 01 folder. Click the Connectivity tab
in the middle pane to verify the presence of dual paths: a path to both the top and bottom
SCv2000 or SC4020 controllers.

6. Repeat the steps above (starting in Section 3.3) to cable and configure additional vSphere hosts. In this example, a second server named SASESXi02 is added.

3.4.2 Create server cluster object on the SCv2000 or SC4020
Use the two new host servers, SASESXi01 and SASESXi02, to create a two-node vSphere cluster. In order
to make it easier to manage cluster volumes on the array, create a server cluster object with the desired
host servers as members of the cluster. In this example, the member servers are the two nodes SASESXi01
and SASESXi02.

1. Right-click the appropriate server folder and select Create Server Cluster.

2. Click Add Server to Cluster and add the desired vSphere hosts (in this example SASESXi01 and
SASESXi02), and then click OK.

3. The selected vSphere hosts are now listed below the server cluster object.

3.5 Create and map storage volumes to VMware hosts
Now that the server cluster object has been created on the SCv2000 or SC4020, the next step is to create
and map storage to the cluster object. In this example, two volumes (150 GB and 170 GB) are created and mapped to the server cluster object.

1. Right-click the desired volumes folder and select Create Volume.

2. Provide an intuitive name for the volume. In this example, the first volume created is a shared
datastore: SAS-DS-01-150GB. Click Next.

3. Define a capacity for the new volume. For the first volume in this case, the size is set to 150 GB.
Click Next.

4. Select the disk folder if multiple disk types are present in the array and click Next.
5. Assign a Replay (snapshot) profile. In this example, the Daily profile is selected. Create a new
Replay profile if desired; click Next.
6. In the Servers tree, select the desired server cluster object and click Next. In this example, the
object is named SAS vSphere ESXi Cluster 01.

7. Configure any desired replication settings and click Next. No replication is configured in this example.
8. Review the settings on the Summary screen and then click Finish.
9. Repeat steps 1 through 8 above to create and map at least one additional data volume to the server cluster object. In this example, a 170 GB volume named SAS-DS-02-170GB is created and mapped to the SAS vSphere ESXi Cluster 01 server cluster object. This volume will also be a shared datastore.
10. Click the server cluster object; under the Mappings tab, the two new volumes are displayed along with their mapping details. Each volume has two paths listed for each host in the cluster, for a total of four paths.

3.6 Connect to mapped storage, create datastore, and configure Multi-path settings
This section covers connecting to SAS storage presented from the SCv2000 or SC4020 array to one or
more VMware vSphere hosts. Steps are provided for both standalone environments as well as those
managed by VMware vCenter Server.

Note: Mapping volumes to VMware vSphere clusters or stand-alone VMware vSphere hosts one at a time, and creating each datastore before mapping the next volume, helps to properly correlate volumes to datastores.

3.6.1 vCenter environment


VMware vCenter is an enterprise management platform used to manage multiple VMware vSphere hosts
through a single application. The interface used to connect to and manage this type of environment is the
VMware vSphere Web Client that is connected to the vCenter server.

The following steps were used to connect to previously mapped storage and create a new datastore from
that storage when in a vCenter-managed environment. Proper configuration of the VMware integrated
MPIO is covered as well.

3.6.1.1 Connect to mapped storage


1. From the vSphere Web Client, click Hosts and Clusters > select a host > Manage tab > Storage > Storage Adapters > SAS HBA adapter. In this case, the SAS HBA is the Avago (LSI) 3008.
2. Click the Rescan Adapter icon.

3. Following the rescan of the SAS HBA, the new storage should be shown on the Devices tab under Adapter Details. An equivalent command-line rescan is sketched below.
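
As a minimal command-line alternative (assuming ESXi Shell or SSH access), the rescan and a quick check for the new SC Series devices can also be performed with esxcli; the COMPELNT string shown is the SCSI vendor ID typically reported by SC Series volumes.

    # Rescan all storage adapters for new devices and VMFS volumes
    esxcli storage core adapter rescan --all

    # List SCSI devices; SC Series volumes typically report the vendor string COMPELNT
    esxcli storage core device list | grep -i compelnt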

3.6.1.2 Create datastore


1. From the vSphere Web Client, select Hosts and Clusters > select a host > Related Objects >
Datastores to display the existing datastores.
2. Click the Add Storage icon (it has a green plus sign) to start the Add Storage Wizard.

3. Select the type of Datastore. In this example, VMFS is selected. Click Next.

4. Enter the name for the new datastore (SCv2020-SAS-DS-01-150GB) and then click Next.

Note: Best practices include giving the datastore the same name as (or one containing) the volume name
on the storage array.

5. Upon completing the New Datastore wizard, verify the information and click Finish.

6. Click the Refresh icon for the new datastore to be displayed in the Datastores list.

7. Repeat steps 1 through 6 for each additional datastore.


8. For additional VMware vSphere hosts to which the new storage is mapped, rescan the SAS HBA to refresh the datastores list. A command-line check to confirm the datastore and its backing volume is sketched below.
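
The following sketch (assuming ESXi Shell access) is one way to confirm that the new datastore is mounted and to correlate it with the SC Series volume backing it, which supports the volume-to-datastore correlation suggested in the note at the start of this section:

    # Confirm the new VMFS datastore is mounted on this host
    esxcli storage filesystem list

    # Show which device (naa ID) backs each VMFS datastore, to correlate datastores with SC Series volumes
    esxcli storage vmfs extent list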

3.6.1.3 Configure Multi-path settings


VMware's native multipathing applies a default Path Selection Policy (PSP) to each device. For best results, change the default PSP to Round Robin, as described in section 6.9 of Dell Storage Center Best Practices with VMware vSphere 6.x. A command-line sketch of this change follows.
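
The esxcli sketch below is a hedged example of the same change. The device ID is a placeholder (SC Series volumes typically appear as naa.6000d31... identifiers), and the SATP name should be confirmed from the output of the first command and from the best practices guide referenced above.

    # Show each device with the SATP that claims it and its current PSP
    esxcli storage nmp device list

    # Set Round Robin on a specific SC Series volume (replace the placeholder device ID)
    esxcli storage nmp device set --device naa.6000d3100000xxxxxxxxxxxxxxxxxxxx --psp VMW_PSP_RR

    # Optionally make Round Robin the default PSP for the SATP that claims the SC Series devices
    esxcli storage nmp satp set --satp VMW_SATP_DEFAULT_AA --default-psp VMW_PSP_RR

Note that changing the SATP default only affects devices claimed after the change; devices already presented keep their current PSP until set individually or reclaimed.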

3.6.2 Standalone VMware ESXi 6.0/5.5 server


A standalone VMware ESXi 6.0/5.5 server environment refers to the absence of vCenter management. In
this type of environment, the vSphere client is used for management of the ESXi server. Use the following
steps to connect to previously mapped storage and create a new datastore from that storage. Proper
configuration of the VMware integrated MPIO is also covered.

3.6.2.1 Connect to mapped storage
1. In the vSphere Client, select Host > Configuration tab > Storage Adapters.

2. Right-click the SAS HBA (in this case, the Avago (LSI) 3008) and click Rescan.

3. The new SAS storage is displayed in the Details section when the SAS HBA is highlighted.

3.6.2.2 Create a datastore
1. To create a new datastore from the newly mapped SAS storage from the vSphere Client, navigate
to the ESXi host > Configuration tab > Storage (under Hardware) and then click Add Storage.

2. Follow the storage creation wizard instructions.

3. Select the new LUN that is listed.

4. After completing the Add Storage wizard, the new datastore will appear in the datastores list for
the ESXi host. In this example, the datastore was named SCv2020-SAS-DS-01-150GB.

Note: Best practices include giving the datastore the same name as (or one containing) the volume name
on the storage array.

5. Repeat steps 1 through 4 to create additional datastores as needed.

3.6.2.3 Configure Multi-path settings
VMware's native multipathing applies a default Path Selection Policy (PSP) to each device. For best results, change the default PSP to Round Robin, as described in section 6.9 of Dell Storage Center Best Practices with VMware vSphere 6.x.

A Technical Support and resources
Dell.com/support is focused on meeting customer needs with proven services and support.

A.1 Referenced or recommended documentation


Dell TechCenter is an IT Community where Dell customers and employees can share knowledge, best
practices, and information about Dell products and installations.
Storage Solutions Technical Documents on Dell TechCenter provide expertise that helps to ensure
customer success on Dell Storage platforms.

Referenced or recommended Dell publications:


- Dell SCv2000 documentation library (release notes, installation guide, owner's manual, etc.):
  http://www.dell.com/support/home/us/en/04/product-support/product/storage-sc2000/manuals
- Dell SC4020 owner's manual:
  http://www.dell.com/support/home/us/en/19/product-support/product/dell-compellent-sc4020/manuals
- Dell 12 Gbps SAS PCIe HBA User's Guide:
  http://topics-cdn.dell.com/pdf/dell-sas-hba-12gbps_User's%20Guide_en-us.pdf
- Dell Storage Compatibility Matrix:
  http://en.community.dell.com/dell-groups/dtcmedia/m/mediagallery/20438558
- Dell Storage Center Best Practices with VMware vSphere 5.x:
  http://en.community.dell.com/techcenter/extras/m/white_papers/20437942
- Dell Storage Center Best Practices with VMware vSphere 6.x:
  http://en.community.dell.com/techcenter/extras/m/white_papers/20441056
- Dell SC Series storage technical content library (white papers, videos, best practices, etc.):
  http://en.community.dell.com/techcenter/storage/w/wiki/5018.compellent-technical-content

Referenced or recommended VMware publications:


- VMware Knowledge Base for VMware vSphere and vCenter:
  http://kb.vmware.com/selfservice/microsites/microsite.do

