
Here is Your Customized Document

Your Configuration is:


Action to Perform - Plan configuration
Configuration Type - Basic
Storage-System Model - CX4-120
Connection Type - Fibre Channel Switch or Boot from SAN
Server Operating System - HP-UX
Management Tool - EMC Navisphere Manager

Reporting Problems
To send comments or report errors regarding this document, please email: UserCustomizedDocs@emc.com. For issues not related to this document, contact your service provider. Refer to Document ID: 1423524

Content Creation Date 2010/10/5


CLARiiON CX4 Series

Planning Your Basic CX4-120 Storage-System Switch Configuration with an HP-UX Server

This guide introduces the CLARiiON CX4-120 storage system with UltraFlex technology in Fibre Channel switch configurations with an HP-UX server. You should read this guide:
- If you are considering the purchase of one of these storage systems and want to understand its features, or
- Before you plan the installation of one of these storage systems.

This guide has worksheets for planning:
- Hardware components
- Management port network and security login information
- File systems and storage-system disks (LUNs and thin LUNs)

For information on planning replication and/or data mobility software (MirrorView, SnapView, SAN Copy) configurations for your storage system, use the Plan Configuration link under Storage-system tasks on the CX4 support website.

These worksheets assume that you are familiar with the servers (hosts) that will use the storage systems and with the operating systems on these servers. For each storage system that you will configure, complete a separate copy of the worksheets included in this document.

For the most current, detailed, and complete CX4 series configuration rules and sample configurations, refer to the E-Lab Interoperability Navigator on the Powerlink website (http://Powerlink.EMC.com). Be sure to read the notes for the parts relevant to the configuration that you are planning.

For background information on the storage system, read the Hardware and Operational Overview and Technical Specifications for your storage system. You can generate the latest version of these documents using the customized documentation Learn about storage system link under Storage-system tasks on the storage-system support website.

Topics in this document are:
- About the storage system, page 3
- Storage-system Fibre Channel components, page 13
- Storage-system management, page 29
- Basic storage concepts, page 48
- File systems and LUNs, page 75


About the storage system


Major topics are:
- Storage-system overview, page 3
- Fibre Channel overview, page 4
- Storage-system connection limits and rules, page 8
- Types of storage-system installations, page 9

Storage-system overview
The storage system provides terabytes of disk storage capacity in flexible configurations and highly available data at a low cost. End-to-end data transfer rates are up to:
- 8 Gb/s for Fibre Channel connections in any storage system
- 10 Gb/s for iSCSI connections in a storage system with UltraFlex iSCSI I/O modules
- 10 Gb/s for Fibre Channel over Ethernet (FCoE) connections in a storage system with UltraFlex FCoE I/O modules

The storage system consists of:
- One storage processor enclosure (SPE)
- One or more separate disk-array enclosures (DAEs)
- One or two standby power supplies (SPSs)

The storage processor enclosure does not include disks, and requires at least one disk-array enclosure (DAE) with a minimum of 5 disks. A maximum of 8 separate disk-array enclosures is supported, for a total of 120 disks. A DAE connects to a back-end bus, which consists of two redundant loops: one loop associated with a back-end port on SP A and one loop associated with the corresponding back-end port on SP B. Since each SP has one back-end port, the storage system has one back-end bus. You can connect a maximum of eight DAEs to one back-end bus.

Storage processor enclosure

The storage processor enclosure (SPE) components are:

- Two storage processors (SP A and SP B) that provide the RAID (redundant array of independent disks) features of the storage system and control disk activity and host I/O.
- Four power/cooling modules, two associated with SP A and two associated with SP B.

Disk-array enclosures

The storage system's disk-array enclosures are 4 Gb/s UltraPoint (point-to-point) enclosures (DAEs) that support either high-performance Fibre Channel disk modules or economical SATA (Serial Advanced Technology Attachment, SATA II) disk modules. The DAE also supports Enterprise Flash Drive Fibre Channel modules, which are solid state disk (SSD) Fibre Channel modules, also known as Flash or SSD disk modules or disks. You can mix Flash and standard FC disk modules, but not Flash and SATA disk modules, within a DAE. You cannot mix SATA and Fibre Channel disk modules within a DAE, but you can integrate and connect FC and SATA enclosures within a storage system. The enclosures operate at either a 2 or 4 Gb/s bus speed (2 Gb/s components, including disks, cannot operate on a 4 Gb/s bus).

Fibre Channel overview


Fibre Channel is a high-performance serial protocol that allows transmission of both network and I/O data. It is a low-level protocol, independent of data types, and supports such formats as SCSI and IP. The Fibre Channel standard defines two physical protocols that the storage system supports:
- Arbitrated loop (FC-AL) for direct connection to a host (server)
- Switch fabric connection to a host (server)

A Fibre Channel arbitrated loop is a circuit consisting of nodes. Each node has a unique address, called a Fibre Channel arbitrated loop address. A Fibre Channel switch fabric is a set of point-to-point connections between nodes; each connection is made through one or more Fibre Channel switches. Each node may have its own unique address, but the path between nodes is governed by a switch. Each node is either a server adapter (initiator) or a target (storage system). Fibre Channel switches are not considered nodes. Optical cables connect nodes directly to the storage system or to switches. An optical cable can transmit data over great distances for connections that span entire enterprises and can support remote disaster recovery systems. We strongly recommend the use of OM3 50 µm cables for all optical connections. Each device in an arbitrated loop or a switch fabric is a server adapter (initiator) or a target (storage-system Fibre Channel SP data port). Figure 1 shows an initiator node and target node.
Figure 1   Fibre Channel nodes - initiator and target connections (1 of up to 6 connections to an SP shown)

In addition to one or more storage systems, a Fibre Channel storage configuration has two main components:
- A server component (host bus adapter driver with adapter and software)
- Interconnection components (cables based on Fibre Channel standards and switches)

Fibre Channel initiator components (host bus adapter and driver)

The host bus adapter is a printed-circuit board that slides into an I/O slot in the server's cabinet. Under the control of a driver, the adapter transfers data between server memory and one or more storage systems over a Fibre Channel connection.

Fibre Channel target components

Target components are the target portals that accept and respond to requests from an initiator. The Fibre Channel target portals are the front-end Fibre Channel data ports on the storage-system SP. Each SP has 2 or 6 Fibre Channel front-end ports. The Fibre Channel front-end (data) ports communicate with Fibre Channel switches or servers. The connectivity speeds supported by these front-end ports depend on the type of Fibre Channel UltraFlex I/O module that has the ports. The 4 Gb/s Fibre Channel I/O modules support 1/2/4 Gb/s front-end connectivity, and the 8 Gb/s Fibre Channel I/O modules support 2/4/8 Gb/s front-end connectivity. You cannot use 8 Gb/s Fibre Channel I/O modules in a 1 Gb/s Fibre Channel environment. You can use 4 Gb/s Fibre Channel I/O modules in an 8 Gb/s environment if the Fibre Channel interconnection components auto-adjust their speeds to 4 Gb/s (a brief sketch of these speed rules appears at the end of this overview).

Fibre Channel interconnection components

The interconnection components consist of optical cables between components and Fibre Channel switches. The maximum length of the optical cable between a storage system and a server or switch ranges from 10 to 500 meters (11 to 550 yards), depending on the type of cable and operating speed. With extenders, connections between servers, switches, and other devices can span up to 60 kilometers (36 miles) or more. This ability to span great distances is a major advantage of using optical cables. We strongly recommend the use of OM3 50 µm cables for all optical connections. Details on cable lengths and rules for using them are in Table 9.

Fibre Channel switches

A Fibre Channel switch, which is required for shared storage in a storage area network (SAN), connects all the nodes cabled to it using a fabric topology. A switch adds serviceability and scalability to any installation; it allows online insertion and removal of any device on the fabric and maintains integrity if any connected device stops participating. A switch also provides server-to-storage-system access control and point-to-point connections. Figure 2 shows Fibre Channel switch connections.
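
The front-end speed rules above (1/2/4 Gb/s for a 4 Gb module, 2/4/8 Gb/s for an 8 Gb module, with the link settling on a speed both ends support) can be illustrated with a short check. This is a minimal sketch under those assumptions; the module names, function, and example values are illustrative only, not taken from this guide.

```
# Hypothetical sketch: determine whether a CX4 FC I/O module can link with a
# switch or HBA port, using the speed rules described in this overview.

MODULE_SPEEDS_GBPS = {
    "4Gb FC I/O module": {1, 2, 4},
    "8Gb FC I/O module": {2, 4, 8},
}

def negotiated_speed(module, switch_port_speeds):
    """Return the highest speed (Gb/s) common to the module and the switch or
    HBA port, or None if the two cannot connect at any speed."""
    common = MODULE_SPEEDS_GBPS[module] & set(switch_port_speeds)
    return max(common) if common else None

# A 4 Gb module attached to an 8 Gb-capable switch port auto-adjusts to 4 Gb/s.
print(negotiated_speed("4Gb FC I/O module", {2, 4, 8}))   # -> 4
# An 8 Gb module cannot join a 1 Gb/s-only Fibre Channel environment.
print(negotiated_speed("8Gb FC I/O module", {1}))          # -> None
```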

Figure 2   Fibre Channel switch connections (1 of up to 6 connections to one SP in each storage system shown)

You can cascade switches (connect one switch port to another switch port) for additional port connections.

Fibre Channel switch zoning

Switch zoning lets an administrator define paths between connected nodes based on each node's unique World Wide Name. Each zone includes a server adapter node and/or one or more SP nodes. We recommend single-initiator zoning, which limits each zone to a single HBA port (initiator). In Figure 3, the dotted lines show the zone that gives server 1 access to one SP in storage systems 1 and 2; server 1 has no access to any other SP.
Figure 3   Sample Fibre Channel switch zone

To illustrate switch zoning, Figure 3 shows just one HBA per server and one switch. Normally, such installations include multiple HBAs per server and two or more switches. In general, a server should be zoned to 2 ports on each SP in a redundant configuration. If you do not define a zone in a switch, all adapter ports connected to the switch can communicate with all SP ports connected to the switch.

Fibre Channel switches are available with 8, 16, 32, or more ports. They are compact units that fit into a rackmount cabinet. If your servers (hosts) and storage systems will be far apart, you can place the switches closer to the servers or storage systems, as convenient. A switch is technically a repeater, not a node, in a Fibre Channel loop; however, it is bound by the same cabling distance rules as a node.
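
The single-initiator zoning recommendation above comes down to a simple rule: one HBA port per zone, plus the SP ports that HBA should reach. The following sketch is only an illustration of that rule; the WWPNs, server names, and zone-naming scheme are assumptions, not values from this guide, and real zones are created in your switch vendor's management tools.

```
# Hypothetical sketch of single-initiator zoning: each zone contains exactly
# one HBA port (initiator) plus the SP ports that the HBA should reach.

def single_initiator_zones(server, hba_wwpns, sp_ports):
    """Build one zone per HBA port. sp_ports maps an SP port label
    (for example 'SPA_0') to its WWPN."""
    zones = {}
    for i, hba in enumerate(hba_wwpns):
        zone_name = f"{server}_hba{i}"
        zones[zone_name] = [hba] + list(sp_ports.values())
    return zones

sp_ports = {"SPA_0": "50:06:01:60:xx:xx:xx:01", "SPB_0": "50:06:01:68:xx:xx:xx:01"}
zones = single_initiator_zones("server1", ["10:00:00:00:c9:xx:xx:01"], sp_ports)
for name, members in zones.items():
    print(name, members)   # one initiator per zone, as recommended above
```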

Storage-system connection limits and rules


For an initiator to communicate with a storage-system target, it must be registered with the storage system. Table 1 lists the number of initiators that can be registered with the storage system.
Table 1   Number of initiators that can be registered with a storage system

  FLARE version          Maximum initiators per SP   Maximum initiators per storage system
  FLARE 04.29 or later   256                         512
  FLARE 04.28            128                         256
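
When you plan how many server initiators will connect, the limits in Table 1 are easy to check mechanically. The sketch below is a minimal illustration; the function name and the input format are assumptions, and only the numeric limits come from the table.

```
# Hypothetical check of planned initiator counts against Table 1.

INITIATOR_LIMITS = {
    # FLARE version: (max initiators per SP, max initiators per storage system)
    "04.29 or later": (256, 512),
    "04.28": (128, 256),
}

def within_initiator_limits(flare, per_sp_counts):
    """per_sp_counts is a dict such as {'SP A': 120, 'SP B': 90}."""
    per_sp_max, per_system_max = INITIATOR_LIMITS[flare]
    if any(count > per_sp_max for count in per_sp_counts.values()):
        return False
    return sum(per_sp_counts.values()) <= per_system_max

print(within_initiator_limits("04.28", {"SP A": 120, "SP B": 90}))   # True
print(within_initiator_limits("04.28", {"SP A": 200, "SP B": 90}))   # False
```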

A CNA can run both 10 GbE iSCSI and FCoE at the same time. As a general rule, a single server cannot connect to the same storage system through both the storage system's iSCSI data ports and FCoE data ports, or through both the storage system's iSCSI data ports and Fibre Channel data ports. The same general rule applies to servers in a cluster group connected to the storage system. For example, you must not connect one server in the cluster to the storage system's iSCSI data ports and another server in the cluster to the storage system's FCoE or Fibre Channel data ports. A single server with both Fibre Channel HBAs and CNAs can connect through the same FCoE switch to the same storage system through the storage system's Fibre Channel data ports and FCoE data ports.


Servers with virtual machines or virtual systems that run different instances of the Navisphere Host Agent than the kernel system runs are the exception to this rule. The initiators that the host agent registers with the storage system for the kernel system and the initiators that it registers for the virtual machines appear to be from different servers. As a result, you can connect them to different storage groups.

You can attach a single server to a CX4 series, CX3 series, or CX series storage system and an AX4-5 series or AX series storage system at the same time only if the following conditions are met:
- The server is running the Unisphere Server Utility and/or the Unisphere Host Agent, or version 6.26.5 or later of the Navisphere Server Utility and/or the Navisphere Host Agent.
- The AX4-5 series and AX series storage systems are running Navisphere Manager software.
- The master of the domain with these storage systems is one of the following:
  - A CX4 series storage system
  - A CX3 series storage system running FLARE 03.26.xxx.5.014 or later
  - A CX series storage system running FLARE 02.24.xxx.5.018 or later
  - An AX4-5 storage system running FLARE 02.23.050.5.5xx or later
- Either a Unisphere management station or a Navisphere management station running the Navisphere UI version 6.28 or later.

Types of storage-system installations


You can use a storage system in any of several types of installation:
- Unshared direct with one server is the simplest and least costly.
- Shared or clustered direct lets multiple servers share the storage system.
- Shared switched with two or more Fibre Channel switch fabrics or network switches or routers lets multiple servers share the resources of several storage systems in a storage area network (SAN).

Shared switched or network storage systems can have multiple paths to each SP, providing multipath I/O for dynamic load sharing and greater throughput. Figure 4 shows the three types of storage-system installation.
Figure 4   Types of storage-system installation

The shared or clustered direct installation can be either shared (that is, with storage groups on the storage system enabled to control LUN access) or clustered (that is, with operating system cluster software controlling LUN access). In a clustered configuration, data access control on the storage system can be either enabled or disabled. The number of servers in the cluster varies with the operating system.

About shared switched or network storage and storage area networks

A storage area network (SAN) is one or more storage devices connected to servers through switches to provide a central location for disk storage. Centralizing disk storage among multiple servers has many advantages, including:
- Highly available data
- Flexible association between servers and storage capacity
- Centralized management for fast, effective response to users' data storage needs
- Easier file backup and recovery

A SAN is based on shared storage; that is, the SAN requires that storage-system storage groups are enabled to provide flexible access control to storage-system LUNs. Within the SAN, a network connection to each SP in the storage system lets you configure and manage the storage system. Figure 5 shows the components of a SAN.
Figure 5   Components of a SAN

In a Fibre Channel environment, the switches can control data access to storage systems through the use of switch zoning. Switch zoning cannot selectively control data access to LUNs in a storage system, because each SP appears as a single Fibre Channel device to the switch fabric. Switch zoning and restrictive authentication can prevent or allow communication with an SP, but not with specific disks or LUNs attached to an SP. For access control with LUNs, a different solution is required: storage groups.

Storage groups

Storage groups are the central component of shared storage; a storage system that is unshared (that is, dedicated to a single server) does not need to use storage groups. When you configure shared storage, you create a storage group and specify which server(s) can access it (read from and/or write to it).
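
Conceptually, a storage group is an access-control mapping between a set of LUNs and the servers allowed to use them. The sketch below is a minimal model of that idea with made-up names; it is illustrative only and is not Navisphere code.

```
# Hypothetical model of a storage group: LUNs are visible only to the servers
# listed in the group; a request from an unlisted server is refused.

class StorageGroup:
    def __init__(self, name, luns, servers):
        self.name = name
        self.luns = set(luns)          # LUN numbers presented by this group
        self.servers = set(servers)    # servers allowed to access those LUNs

    def can_access(self, server, lun):
        return server in self.servers and lun in self.luns

cluster_group = StorageGroup("Cluster", luns=[0, 1, 2], servers=["file_srv", "mail_srv"])
db_group = StorageGroup("Database", luns=[3, 4], servers=["db_srv"])

print(cluster_group.can_access("file_srv", 1))   # True  - member of the group
print(cluster_group.can_access("db_srv", 1))     # False - LUN hidden from db_srv
```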


More than one server can access the same storage group only if all the servers run cluster software. The cluster software enforces orderly access to the shared storage group LUNs. Figure 6 shows a simple shared storage configuration consisting of one storage system with two storage groups. One storage group serves a cluster of two servers running the same operating system, and the other storage group serves a database server with a different operating system. Each server is configured with two independent paths to its data, including separate host bus adapters, switches, and SPs, so there is no single point of failure for access to its data.
Figure 6   Sample shared storage configuration


Storage-system Fibre Channel components


This section helps you plan the hardware components (adapters, cables, storage-system requirements, and site requirements) for each server in your installation. Major topics are:
- Storage-system hardware components, page 13
- Hardware components worksheet, page 25
- Cache worksheet, page 28

Storage-system hardware components


The basic storage-system hardware components are:
- Storage processor enclosure (SPE): a sheet-metal housing with a front cover (bezel), midplane, and slots for the following components:
  - A pair of redundant storage processors (SP A and SP B), each with a CPU module and an I/O carrier with slots for UltraFlex I/O modules
  - Four power supply/system cooling modules (referred to as power/cooling modules), two associated with SP A and two associated with SP B
- Two separate standby power supplies (SPSs), which support write caching and provide the highest data availability.
- One or more disk-array enclosures (DAEs) with slots for 15 disks. One DAE with at least five disks is required.

Figure 7 and Figure 8 show the SPE components. If the enclosure provides slots for two identical components, the component in slot A is called component-name A; the second component is called component-name B. For increased clarity, the following figures depict the SPE outside of the rack cabinet. Your SPE may arrive installed in a rackmount cabinet.


Figure 7   SPE components (front with bezel removed)

Figure 8   SPE components (back)

Storage processor

The storage processor (SP) provides the intelligence of the storage system. Using its own proprietary software (called the FLARE Operating Environment), the SP processes the data written to or read from the disk modules, and monitors the disk modules. An SP consists of a CPU module (a printed-circuit board with two central processing units and memory modules), associated UltraFlex I/O modules, and status lights. Each SP uses UltraFlex I/O modules for Fibre Channel (FC), FCoE, and iSCSI front-end port connectivity to hosts (servers) and Fibre Channel (FC) back-end port connectivity to disks, with the standard configurations listed in Table 2.


Table 2   Standard SP port configurations

  Storage system   iSCSI server ports (see note)   FC server ports (see note)   FCoE ports (see note)   FC back-end ports
  CX4-120          2                               2                            2                       1

  Note: The standard system includes either the iSCSI server ports or the FCoE ports per SP, but not both types (see Table 4, note 1).

Each SP can have one optional UltraFlex I/O module for additional iSCSI, Fibre Channel, or FCoE server ports. Each SP also has an Ethernet connection through which the EMC Navisphere management software lets you configure and reconfigure the LUNs and storage groups in the storage system. Since each SP connects to a network, you can still reconfigure your system, if needed, should one SP fail. UltraFlex I/O modules Table 3 lists the number of I/O modules the storage system supports and the slots the I/O modules can occupy. More slots are available for optional I/O modules than the maximum number of optional I/O modules supported because some slots are occupied by required I/O modules. With the exception of slots A0 and B0, the slots occupied by the required I/O modules can vary between configurations. Figure 9 shows the I/O module slot locations and the I/O modules for the standard minimum configuration with 1 GbE iSCSI modules. The 1 GbE iSCSI modules shown in this example could be 10 GbE iSCSI or FCoE I/O modules.
Table 3   Number of supported I/O modules per SP

                   All I/O modules                          Optional I/O modules
  Storage system   Number per SP   SP A slots   SP B slots   Number per SP   SP A slots   SP B slots
  CX4-120          3               A0-A2        B0-B2        1               A1-A2        B1-B2


Figure 9   I/O module slot locations (1 GbE iSCSI and FC I/O modules for a standard minimum configuration shown)

The following types of modules are available:
- 4 or 8 Gb Fibre Channel (FC) modules with either:
  - 2 back-end (BE) ports for disk bus connections and 2 front-end (FE) ports for server I/O connections (connection to a switch or server HBA); on the CX4-120, one of the BE ports is not used (see Table 4), or
  - 4 front-end (FE) ports for server I/O connections (connection to a switch or server HBA).
  The 8 Gb FC module requires FLARE 04.28.000.5.7xx or later.
- 10 Gb Ethernet (10 GbE) FCoE module with 2 FCoE front-end (FE) ports for server I/O connections (connection to an FCoE switch and from the switch to the server CNA). The 10 GbE FCoE module requires FLARE 04.30.000.5.5xx or later.
- 1 Gb Ethernet (1 GbE) or 10 Gb Ethernet (10 GbE) iSCSI module with 2 iSCSI front-end (FE) ports for network server iSCSI I/O connections (connection to a network switch, router, server NIC, or iSCSI HBA). The 10 GbE iSCSI module requires FLARE 04.29 or later.


Table 4 lists the I/O modules available for the storage system and the number of each module that is standard and/or optional.
Table 4   I/O modules per SP

  Module                                                       Standard per SP       Optional per SP
  4 or 8 Gb FC module: 1 BE port (0), 2 FE ports (2, 3),
    port 1 not used                                            1                     0
  4 or 8 Gb FC module: 4 FE ports (0, 1, 2, 3)                 0                     1
  10 GbE FCoE module: 2 FE ports (0, 1)                        1 or 0 (see note 1)   1 (see note 2)
  1 or 10 GbE iSCSI module: 2 FE ports (0, 1)                  1 or 0 (see note 1)   1 (see note 2)

  Note 1: The standard system has either 1 FCoE module or 1 iSCSI module per SP, but not both types.
  Note 2: The maximum number of 10 GbE FCoE modules or 10 GbE iSCSI I/O modules per SP is 1.

IMPORTANT: Always install I/O modules in pairs: one module in SP A and one module in SP B. Both SPs must have the same type of I/O modules in the same slots. Slots A0 and B0 always contain a Fibre Channel I/O module with one back-end port and two front-end ports. The other available slots can contain any type of I/O module that is supported for the storage system.

The actual number of each type of optional Fibre Channel, FCoE, and iSCSI I/O modules supported for a specific storage-system configuration is limited by the available slots and the maximum number of Fibre Channel, FCoE, and iSCSI front-end ports supported for the storage system. Table 5 lists the maximum number of Fibre Channel, FCoE, and iSCSI FE ports per SP for the storage system.


Table 5   Maximum number of front-end (FE) ports per SP

  Storage system   Maximum Fibre Channel FE ports per SP   Maximum FCoE FE ports per SP   Maximum iSCSI FE ports per SP (see note)
  CX4-120          6                                       4                              4

  Note: The maximum number of 10 GbE iSCSI ports per SP is 2.
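
When you add optional I/O modules, the resulting per-SP front-end port counts must stay within the limits in Table 5. The sketch below is a minimal check of those limits; the dictionary keys and the function are assumptions for illustration, and only the numeric limits come from the table and its note.

```
# Hypothetical check of a planned per-SP front-end port count against Table 5.

FE_LIMITS_PER_SP = {"fc": 6, "fcoe": 4, "iscsi": 4}
MAX_10GBE_ISCSI_PER_SP = 2

def check_fe_ports(planned):
    """planned example: {'fc': 6, 'iscsi': 2, 'iscsi_10gbe': 2}, where 'iscsi'
    is the total iSCSI FE port count and 'iscsi_10gbe' is the subset of those
    ports running at 10 GbE."""
    problems = []
    for kind, limit in FE_LIMITS_PER_SP.items():
        if planned.get(kind, 0) > limit:
            problems.append(f"too many {kind} FE ports: {planned[kind]} > {limit}")
    if planned.get("iscsi_10gbe", 0) > MAX_10GBE_ISCSI_PER_SP:
        problems.append("more than 2 of the iSCSI FE ports are 10 GbE")
    return problems or ["ok"]

print(check_fe_ports({"fc": 6, "iscsi": 2, "iscsi_10gbe": 2}))   # ['ok']
print(check_fe_ports({"fc": 8}))                                  # FC limit exceeded
```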

Back-end (BE) port connectivity

Each FC back-end port has a connector for a copper SFP-HSSDC2 (small form factor pluggable to high speed serial data connector) cable. Back-end connectivity cannot exceed 4 Gb/s regardless of the I/O module's speed. Table 6 lists the FC modules that support the back-end bus.
Table 6   FC I/O module ports supporting the back-end bus

  Storage system and FC modules            Back-end bus (module port)
  CX4-120: FC module in slots A0 and B0    Bus 0 (port 0)

Fibre Channel (FC) front-end connectivity

Each 4 Gb or 8 Gb FC front-end port has an SFP shielded Fibre Channel connector for an optical cable. The FC front-end ports on a 4 Gb FC module support 1, 2, or 4 Gb/s connectivity, and the FC front-end ports on an 8 Gb FC module support 2, 4, or 8 Gb/s connectivity. You cannot use the FC front-end ports on an 8 Gb FC module in a 1 Gb/s Fibre Channel environment. You can use the FC front-end ports on a 4 Gb FC module in an 8 Gb/s Fibre Channel environment if the FC switch or HBA ports to which the module's FE ports connect auto-adjust their speed to 4 Gb/s.

Storage-system caching

The storage system has an SP cache consisting of dynamic random access memory (DRAM) on each storage processor (SP). A standby power supply (SPS) protects data in the cache from power loss. If line power fails, the SPS provides sufficient power to let the storage system write the cache contents to the vault disks. The vault disks are standard disk modules that store user data but have space reserved for the vault outside operating system control.

When power returns, the storage system reads the cache information from the vault disks, and then writes it to the file systems on the disks. This design ensures that all write-cached information reaches its destination. During normal operation, no I/O occurs with the vault; therefore, a disk's role as a vault disk has no effect on its performance.

Storage-system caching improves read and write performance for LUNs. Write caching, particularly, helps write performance, an inherent problem for RAID types that require writing to multiple disks. Read and write caching improve performance in two ways:
- For a read request: If a read request seeks information that is already in the SP read or write cache, the storage system can deliver it immediately, much faster than a disk access can.
- For a write request: The storage system writes updated information to SP write-cache memory instead of to disk, allowing the server to continue as if the write had actually completed. The write to disk from cache memory occurs later, at the most expedient time. If the request modifies information that is in the cache waiting to be written to disk, the storage system updates the information in the cache before writing it to disk; this requires just one disk access instead of two.

The FAST Cache is based on the locality of reference of the data set requested. A data set with high locality of reference that is most frequently accessed is a good candidate for promotion to the FAST Cache. By promoting the data set to the FAST Cache, the storage system services any subsequent requests for this data faster from the Flash disks that make up the FAST Cache, thus reducing the load on the disks in the LUNs that contain the data (the underlying disks). Applications such as File and OLTP (online transaction processing) have data sets that can benefit from the FAST Cache.

Disks

The disks, available in different capacities, fit into slots in the DAE. The storage system supports 4 Gb/s DAEs with high-performance Fibre Channel disks or economical serial ATA (SATA) disks. The 1 TB SATA disks operate on a 4 Gb/s back-end bus like the 4 Gb FC disks, but have a 3 Gb/s bandwidth. Since they have a Fibre Channel interface to the back-end loop, these disks are sometimes referred to as Fibre Channel disks. For information on the currently available disks and their usable capacities, refer to the EMC CX4 Series Storage Systems Disk and FLARE OE Matrix (P/N 300-007-437) on the EMC Powerlink website.

The 1 TB, 5.4K rpm disks are available only in a DAE that is fully populated with these disks, and they cannot be mixed with or replaced by the 1 TB, 7.2K rpm disks in a DAE. Each disk has a unique ID that you use when you create a RAID group containing the disk or when you monitor the disk's operation. The ID is derived from the Fibre Channel loop number, enclosure address, and disk slot in the enclosure (a sketch of this scheme follows this section). Enclosure 0 on bus 0 in the storage system must contain disks with IDs 000 through 004. The remaining disk slots can be empty unless they are 1 TB, 5.4K rpm disks, in which case all the disks in the DAE must be 1 TB, 5.4K rpm disks. You can mix Flash (SSD) disks and standard Fibre Channel disks, but not Flash and SATA disks, in the same enclosure. You cannot mix Fibre Channel and SATA disks in the same enclosure.

Disk power savings

Some disks have a power savings (spin-down) option, which lets you assign power savings settings to a RAID group composed of these disks in a storage system running FLARE 04.29 or later. If power savings is enabled for both the storage system and the RAID group, the disks in the RAID group transition to the low power state after being idle for at least 30 minutes. Power savings is not supported for a RAID group if any LUN in the RAID group is participating in a MirrorView/A, MirrorView/S, SnapView, or SAN Copy session. Background verification of data (sniffing) does not continue when disks are in a low power state. For the currently available disks that support power savings, refer to the EMC CX4 Series Storage Systems Disk and FLARE OE Matrix (P/N 300-007-437) on the EMC Powerlink website.

Basic requirements for shared storage and unshared configurations

For shared switched storage, you need the components described below.

Components for shared switched storage

For shared switched storage, you must use a high-availability configuration. The minimum hardware required for shared switched storage is two servers, each with two Fibre Channel HBAs, two Fibre Channel switch fabrics with one switch per fabric, and one storage system. You can use more servers, more Fibre Channel switches per fabric, and more storage systems (up to four are allowed).
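
The disk ID derivation described above (back-end bus or loop number, enclosure address, and slot within the enclosure) can be written as a small helper. The string format below is only one plausible encoding and is an assumption, not necessarily the exact form Navisphere displays; the vault check simply restates the rule that the first five slots of enclosure 0 on bus 0 hold the vault disks.

```
# Hypothetical sketch of the disk ID scheme described above.

def disk_id(bus, enclosure, slot):
    # Assumed bus_enclosure_slot encoding; confirm the exact format in Navisphere.
    return f"{bus}_{enclosure}_{slot}"

def is_vault_disk(bus, enclosure, slot):
    """Enclosure 0 on bus 0 must hold the first five disks (the vault)."""
    return bus == 0 and enclosure == 0 and slot in range(5)

print(disk_id(0, 0, 4), is_vault_disk(0, 0, 4))   # 0_0_4 True  (vault disk)
print(disk_id(0, 3, 7), is_vault_disk(0, 3, 7))   # 0_3_7 False
```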


Dimensions and weight


Table 7   CX4-120 hardware dimensions and weight

  Component   Dimensions                                Vertical size   Weight (see notes)
  SPE         Height: 8.90 cm (3.50 in)                 2 NEMA units    23.81 kg (52.5 lb)
              Width: 44.50 cm (17.50 in)
              Depth: 62.60 cm (24.25 in)
  DAE         Height: 13.34 cm (5.25 in)                3 NEMA units    30.8 kg (68 lb) with 15 disks
              Width: 45.0 cm (17.72 in)
              Depth: 35.56 cm (14.00 in)
  SPS         Height: 4.02 cm (1.58 in)                 1 NEMA unit     10.8 kg (23.5 lb) per SPS
              Mounting tray width: 42.1 cm (16.5 in)
              Depth: 60.33 cm (23.75 in)

  Notes: Weights do not include mounting rails; allow 2.3-4.5 kg (5-10 lb) for a rail set. A fully configured DAE includes 15 disk drives that typically weigh 1.0 kg to 1.1 kg (2.25 to 2.4 lb) each. The weights listed in this table do not describe enclosures with Enterprise Flash Drives (solid state disk drives with Flash memory, or SSD drives); each SSD drive module weighs 20.8 ounces (1.3 lb).

Cabinet for hardware components

The 19-inch wide cabinet, prewired with AC power strips and ready for installation, has the dimensions listed in Table 8.

Table 8   Cabinet dimensions (40U-C cabinet)

  Dimension                               40U-C cabinet
  Height (internal, usable for devices)   40U (179 cm; 70 in) from floor pan to cabinet top (fan installed)
  Height (overall)                        190 cm (75 in)
  Width (usable for devices)              NEMA 19 in standard; rail holes 45.78 cm (18.31 in) apart center-to-center
  Width (overall)                         60 cm (24 in)
  Depth (front to back rail)              60 cm (24 in)
  Depth (overall)                         98.425 cm (39.37 in) without front door; 103.75 cm (41.5 in) with optional front door
  Weight (empty)                          177 kg (390 lb) maximum without front door; 200 kg (440 lb) maximum with optional front door
  Maximum total device weight supported   945 kg (2100 lb)

This cabinet accepts combinations of:

- 2U SPE
- 3U DAE
- 1U 1200 W SPS
- 1U or 2U switch

The cabinet requires 200 to 240 volts AC at 50/60 Hz, and includes 2 to 4 power strips with compatible outlets. Plug options are L6-30P and IEC 309 30 A. Filler panels of various sizes are available.

Data cable and configuration guidelines

Each Fibre Channel data port that you use on the storage system requires an optical cable connected to either a server HBA port or a switch port. The cabling between the SPE and the DAEs, and between the DAEs, is copper. Generally, you should minimize the number of cable connections, since each connection degrades the signal slightly and shortens the maximum distance of the signal.

SP optical cabling to a switch or server

Optical cables connect the small form-factor pluggable (SFP) modules on the storage processors (SPs) to the external Fibre Channel or 10 Gb Ethernet environment. EMC strongly recommends the use of OM3 50 µm cables for all optical connections. Table 9 lists the optical cables that are available for your storage system.
Table 9   Optical cables

  Cable type   Operating speed   Length
  50 µm        1.0625 Gb         2 m (6.6 ft) minimum to 500 m (1,650 ft) maximum
               2.125 Gb          2 m (6.6 ft) minimum to 300 m (990 ft) maximum
               4 Gb              2 m (6.6 ft) minimum to 150 m (495 ft) maximum
               8 Gb              OM3: 1 m (3.3 ft) minimum to 150 m (495 ft) maximum
                                 OM2: 1 m (3.3 ft) minimum to 50 m (165 ft) maximum
               10 Gb             OM3: 1 m (3.3 ft) minimum to 300 m (990 ft) maximum
                                 OM2: 1 m (3.3 ft) minimum to 82 m (270 ft) maximum
  62.5 µm      1.0625 Gb         2 m (6.6 ft) minimum to 300 m (985 ft) maximum
               2.125 Gb          2 m (6.6 ft) minimum to 150 m (492 ft) maximum
               4 Gb              2 m (6.6 ft) minimum to 70 m (231 ft) maximum

  Notes: All cables are multimode, dual LC, with a bend radius of 3 cm (1.2 in) minimum. The maximum length for either the 62.5 µm or 50 µm cable includes two connections or splices between source and destination.

CAUTION: EMC does not recommend mixing 62.5 µm and 50 µm optical cables in the same link. In certain situations you can add a 50 µm adapter cable to the end of an already installed 62.5 µm cable plant. Contact your service representative for details.
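
The maximum lengths in Table 9 are easy to check when you lay out cable runs. The sketch below covers only the 50 µm rows of the table; the function and its defaults are assumptions for illustration, and the minimum lengths (1 m or 2 m) and splice limits noted above still apply.

```
# Hypothetical lookup of the maximum 50 um optical cable lengths from Table 9.

MAX_LENGTH_50UM_M = {
    # operating speed (Gb): maximum length in metres (split by grade where the
    # table distinguishes OM2 from OM3)
    1.0625: 500,
    2.125: 300,
    4: 150,
    8: {"OM3": 150, "OM2": 50},
    10: {"OM3": 300, "OM2": 82},
}

def cable_ok(speed_gb, planned_length_m, grade="OM3"):
    limit = MAX_LENGTH_50UM_M[speed_gb]
    if isinstance(limit, dict):
        limit = limit[grade]
    return planned_length_m <= limit

print(cable_ok(8, 120))               # True  - within the 150 m OM3 limit at 8 Gb/s
print(cable_ok(8, 120, grade="OM2"))  # False - OM2 tops out at 50 m at 8 Gb/s
```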


SP-to-DAE and DAE-to-DAE copper cabling

The expansion port interface to the DAE is copper cabling. The following copper cables are available:

  Cable type                          Lengths
  SFP-to-HSSDC2 for SP-to-DAE         2 m (6.6 ft), 5 m (16.5 ft)
  HSSDC2-to-HSSDC2 for DAE-to-DAE     2 m (6.6 ft), 5 m (16.5 ft), 8 m (26.4 ft)

The cable connector can be either a direct-attach shielded SFP (small form-factor pluggable) module or an HSSDC2 (high speed serial data connector), as detailed below:
- SP connector: Shielded, 150-ohm differential, shield bonded to the SFP plug connector shell (360 degrees); SFF-8470 150-ohm specification for the SFP transceiver.
- DAE connector: Shielded, 150-ohm differential, shield bonded to the plug connector shell (360 degrees); FC-PI-2 standard, revision 13 or later, for HSSDC2.

DAE enclosure addresses

Each disk enclosure on a Fibre Channel bus must have a unique enclosure address (also called an EA, or enclosure ID) that identifies the enclosure and determines disk module IDs. In many cases, the factory sets the enclosure address before shipment to coincide with the rest of the system; you will need to reset the selection if you install the enclosure into your rack independently. The enclosure address ranges from 0 through 7. Figure 10 shows sample back-end connections for a storage system with eight DAEs on its bus. The figure shows a configuration with DAE2P or DAE3P enclosures as the only disk-array enclosures. Environments with a mix of DAE2s and DAE2Ps and/or DAE3Ps follow the same EA, bus balancing, and cabling conventions whenever possible and practical. Each DAE supports two completely redundant loops.


Figure 10   Sample storage-system configuration with eight DAEs

Hardware components worksheet


Use the worksheet in Table 10 and the cable planning template in Figure 11 to plan the hardware components you want. Some installation types do not have switches or multiple servers.


Table 10   Hardware components worksheet

Server information
  Server name:                 Server operating system:                 Adapters in server:
  Server name:                 Server operating system:                 Adapters in server:
  Server name:                 Server operating system:                 Adapters in server:
  Server name:                 Server operating system:                 Adapters in server:

Storage-system components
  SPEs:        DAEs:        Cabinets:

Fibre Channel switch information
  32-port:        24-port:        16-port:        8-port:

Cables between server and Fibre Channel switch ports (cable A)
  Cable A1 number:        Length (m or ft):
  Cable A2 number:        Length (m or ft):
  Cable A3 number:        Length (m or ft):
  Cable A4 number:        Length (m or ft):

Cables between Fibre Channel switch ports and storage-system SP Fibre Channel data ports (cable B)
  Cable B1 (up to 6 per CX4-120 or CX4-240 SPE SP, optical) number:        Length (m or ft):
  Cable B2 (up to 6 per CX4-120 or CX4-240 SPE SP, optical) number:        Length (m or ft):

Cables between enclosures (cable C)
  Cable C1 (copper) number (2 per SPE):        Length (m or ft):
  Cable C2 (copper) number (2 per DAE):        Length (m or ft):


Figure 11   Cable planning template for a shared storage system (two front-end cables per SP shown)


Cache worksheet
Use the worksheet in Table 11 to plan your cache configuration.
Table 11   Cache worksheet

  Cache information
  Read cache size:
  Write cache size:
  Cache page size:

Cache information

You can use SP memory for read/write caching. You can use different cache settings for different times of day; for example, for user I/O during the day, use more write cache, and for sequential batch jobs at night, use more read cache. Generally, write caching improves performance far more than read caching. Read caching is nonetheless crucial for good sequential read performance, as seen in backups and table scans. The ability to specify caching on a per-LUN basis provides additional flexibility, since you can use caching for only the units that will benefit from it. Read and write caching is recommended for any type of RAID group or pool, particularly RAID 6 or RAID 5. You can enable caching for specific LUNs in a RAID group and for all the LUNs in a pool, which allows you to tailor your cache resources according to priority. The maximum cache size per SP and the maximum write cache size are 600 MB per SP.

Read cache size
If you want a read cache, enter the read cache size you want.

Write cache size
Enter the write cache size that you want. Generally, we recommend that the write cache be the maximum allowed size, which is 600 MB per SP.

Cache page size
Cache page size applies to both read and write caches. It can be 2, 4, 8, or 16 KB. As a general guideline, we suggest 8 KB. The ideal cache page size depends on the server's operating system and application.
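
A quick consistency check of the worksheet values can catch an invalid page size or an oversized write cache before you configure the system. The sketch below is a minimal illustration; the 600 MB figure is the per-SP maximum stated above (confirm it for your FLARE revision), and the function name is an assumption.

```
# Hypothetical check of planned cache settings against the rules in this section.

VALID_PAGE_SIZES_KB = {2, 4, 8, 16}
MAX_WRITE_CACHE_MB = 600   # per-SP maximum stated in this guide; verify for your system

def check_cache_plan(write_mb, page_kb):
    problems = []
    if page_kb not in VALID_PAGE_SIZES_KB:
        problems.append(f"page size {page_kb} KB is not one of {sorted(VALID_PAGE_SIZES_KB)}")
    if write_mb > MAX_WRITE_CACHE_MB:
        problems.append(f"write cache {write_mb} MB exceeds {MAX_WRITE_CACHE_MB} MB")
    return problems or ["ok"]

print(check_cache_plan(write_mb=600, page_kb=8))   # ['ok']
print(check_cache_plan(write_mb=800, page_kb=5))   # reports both problems
```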


Storage-system management
This section describes the storage-system management ports and the management applications you can use, and provides the appropriate planning worksheets. Major topics are:
- Storage-system management ports, page 29
- Storage-system management ports worksheet, page 30
- CLARalert software, page 40
- CLARalert worksheet, page 40
- Navisphere management software, page 41
- Navisphere Analyzer, page 46
- Optional Navisphere Quality of Service Manager, page 46
- Navisphere management worksheet, page 47

Storage-system management ports


The storage system has two management ports, one per SP, through which you manage the storage system. For storage-system initialization, these ports must be connected to a host on the network from which the storage system will be initialized. This host must be on the same subnet as these ports. Initialization assigns network and security characteristics to each SP. After initialization, these ports are used for storage-system management, which can be done from any host with a supported browser on the same network as these ports.

A storage system running FLARE 04.29 or later supports one virtual port with VLAN tagging for each management port. If a management port is connected to a switch, you can:
- Create a trunk port on the switch per IEEE 802.1q standards.
- Configure the trunk port to pass along network traffic with the VLAN tag for the virtual management port and for any other virtual ports that you want.

- Configure the trunk port to drop all other traffic.

Storage-system management ports worksheet


Record network information for the storage-system management ports in Table 12. Your network administrator should provide most of this information, except for the login information.
Table 12   Storage-system management port information

Storage-system information
  Storage-system serial number:

Physical network information
  IPv4 SP port information (default Internet protocol)
    SP A                 SP B
    IP address:          IP address:
    Subnet mask:         Subnet mask:
    Gateway:             Gateway:
  IPv6 SP port information (manual configuration only)
    Global prefix:
    Gateway:

Virtual port network information
  Storage processor   Virtual port   VLAN ID   IP address
  SP A
  SP B

Login information
  Username:
  Password:
  Role:   Monitor   Manager   Administrator   Security administrator   Local replication   Replication   Replication and recovery

Storage-system information

Fill out the storage-system information section of the worksheet using the information that follows.

Storage-system serial number
The hardware serial number (TLA S/N) is on a tag that is hanging from the back middle of the storage processor enclosure (Figure 12).


Figure 12   Location of the storage-system serial number on the SPE

Physical network information

Fill out the physical network information section of the worksheet using the information that follows. The management ports support both the IPv4 and IPv6 Internet Protocols concurrently. IPv4 is the default, and you must provide IPv4 addresses. If your network supports IPv6, you can choose to use IPv6 with automatic or manual configuration. For automatic configuration, you do not need to provide any information. For manual configuration, you must provide the global prefix and the gateway.

IPv4 SP port information
Fill out the IPv4 SP port information section of the worksheet using the information that follows.

IP address
Enter the static network IPv4 address (for example, 128.222.78.10) for connecting to the management port of a storage processor (SP A or SP B). Do not use IP addresses 128.221.1.248 through 128.221.1.255, 192.168.1.1, or 192.168.1.2.

Subnet mask
Enter the subnet mask for the LAN to which the storage system is connected for management, for example, 255.255.255.0.

Gateway
Enter the gateway address if the LAN to which the storage-system management port is connected has a gateway.

IPv6 SP port information
Fill out the IPv6 SP port information section of the worksheet using the information that follows.

Global prefix
Enter the IPv6 global prefix, which is 2000::/3.

Gateway
Enter the gateway address if the LAN to which the storage-system management port is connected has a gateway.

Virtual port information
The virtual port for the management port supports 802.1q VLAN tagging. FLARE 04.29 or later is required for virtual ports and VLAN tagging. Navisphere Manager always represents the management ports as virtual ports on storage systems running FLARE 04.29 or later. Fill out the virtual port information section of the worksheet using the information that follows.

Virtual port
Enter the name for the virtual port.

VLAN ID
Enter a unique number between 1 and 4095.

IP address
Enter the network IP (Internet Protocol) address (for example, 128.222.78.10) for connecting to the virtual port. Do not use IP addresses 128.221.1.248 through 128.221.1.255, 192.168.1.1, or 192.168.1.2.
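
The address and VLAN rules above are simple enough to verify as you collect the worksheet values. The sketch below is illustrative only; the helper names are assumptions, and it checks just the reserved-address and VLAN ID rules stated in this section.

```
# Hypothetical checks of management-port worksheet values against the rules above.

import ipaddress

RESERVED = (
    [ipaddress.ip_address("128.221.1.248") + i for i in range(8)]   # .248 through .255
    + [ipaddress.ip_address("192.168.1.1"), ipaddress.ip_address("192.168.1.2")]
)

def management_ip_ok(addr):
    return ipaddress.ip_address(addr) not in RESERVED

def vlan_id_ok(vlan_id, ids_in_use=()):
    # VLAN ID must be between 1 and 4095 and unique among the IDs already used.
    return 1 <= vlan_id <= 4095 and vlan_id not in ids_in_use

print(management_ip_ok("128.222.78.10"))   # True  - the example address from the text
print(management_ip_ok("128.221.1.250"))   # False - reserved
print(vlan_id_ok(100, ids_in_use={200}))   # True
```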


Login information

Fill out the login information section of the worksheet using the information that follows.

Username
Enter a username for the management interface. It must start with a letter and may contain 1 to 32 letters and numbers. The name may not contain punctuation, spaces, or special characters. You can use uppercase and lowercase characters. Usernames are case-sensitive; for example, ABrown is a different username from abrown. Your network administrator may provide the username. If not, then you need to create one.

Password
Enter a password for connecting to the management interface. It may contain 1 to 32 characters, consisting of uppercase and lowercase letters and numbers. As with the username, passwords are case-sensitive; for example, Azure23 is a different password from azure23. The password is valid only for the username you specified. Your network administrator may provide the password. If not, then you need to create one.

User roles
Four basic user roles are available: monitor, manager, administrator, and security administrator. All users, except a security administrator, can monitor the status of a storage system. Users with the manager role can also configure a storage system. Administrators can maintain user accounts, as well as configure a storage system. Security administrators can only manage security and domain settings. Select the role you want for the user. For more information on roles, refer to Authentication, page 42.
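
The username and password rules above map directly to two short patterns. The sketch below is a minimal illustration of those rules; the function names are assumptions, and it does not replace whatever validation the management software itself performs.

```
# Hypothetical validation of the username and password rules in this section.

import re

def valid_username(name):
    # 1-32 letters and digits, starting with a letter; no punctuation or spaces.
    return re.fullmatch(r"[A-Za-z][A-Za-z0-9]{0,31}", name) is not None

def valid_password(password):
    # 1-32 uppercase or lowercase letters and digits.
    return re.fullmatch(r"[A-Za-z0-9]{1,32}", password) is not None

print(valid_username("ABrown"), valid_username("1Brown"))     # True False
print(valid_password("Azure23"), valid_password("azure 23"))  # True False
print("ABrown" == "abrown")                                   # False - case-sensitive
```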

EMC Secure Remote Support IP Client for CLARiiON


ESRS IP Client for CLARiiON software does the following:
- Monitors storage systems within a domain for error events.
- Automatically and securely sends alerts (call homes) to your service provider about events that require service provider notification.
- Allows your service provider to securely connect remotely through the monitor station into your monitored storage system to help troubleshoot storage-system issues.
- Sends a notification e-mail to you (the customer) when an alert is sent to your service provider.

We recommend that you use EMC Secure Remote Support (ESRS) IP Client for CLARiiON instead of the CLARalert software if your environment meets the requirements for the ESRS IP Client for CLARiiON software. Figure 13 shows the communications infrastructure for the Call Home feature of the ESRS IP Client for CLARiiON, and Figure 14 shows the communications infrastructure for the remote access feature of the ESRS IP Client for CLARiiON.
Figure 13   ESRS IP Client for CLARiiON communications infrastructure for Call Home


Figure 14   ESRS IP Client for CLARiiON communications infrastructure for remote access

The ESRS IP Client for CLARiiON requires a monitor station, which is a host or virtual machine running a supported Windows operating system and the ESRS IP Client for CLARiiON software. The monitor station must:
- Have a 1.8 GHz or higher processor with at least 2 GB of available storage for the ESRS IP Client.
- Not be a server (host connected to storage-system data ports).
- Be connected to the same network as your storage-system management ports and connected to the Internet through a proxy server.
- Have a static or DHCP reserved IP address.

- Be connected over the network to a portal system, which is a storage system running the required FLARE version.

ESRS IP Client for CLARiiON worksheet


If you want to use ESRS IP Client for CLARiiON to monitor your storage system, record the information in Table 13.
Table 13   ESRS IP Client for CLARiiON worksheet

Monitor station network information
  Host identifier:
  IP address:

Portal system network information
  Portal identifier:
  IP address:
  Username:
  Password:

Proxy server network information
  Protocol:  HTTPS  SOCKS
  IP address or network name:
  Username:
  Password:

Customer notification information
  e-mail address:
  SMTP server name or IP address:

Powerlink credentials
  Username:
  Password:

Monitored system information
  System name or IP address:
  Port:
  Username:
  Password:

Customer contact information
  Name:
  e-mail address:
  Phone:
  Site:

Monitor station network information

Fill out the monitor station network information section of the worksheet using the information that follows.


Host identifier
Enter the optional name or description that identifies the Windows host that will be the monitor station.

IP address
Enter the network IP (Internet Protocol) address (for example, 128.222.78.10) for connecting to the monitor station. This address must be a static or DHCP reserved IP address. The portal system uses this IP address to manage the monitor station. Since the monitor station can have multiple NICs, you must specify an IP address. This IP address cannot be 128.221.1.250, 128.221.1.251, 192.168.1.1, or 192.168.1.2 because these addresses are reserved.

Portal system network information
Fill out the portal system network information section of the worksheet using the information that follows.

Portal identifier
Enter the optional identifier (such as a hostname) for the storage system that is or will be the portal system.

IP address
Enter the network IP address for connecting to the portal system. This IP address is the IP address that you assign to one of the storage system's SPs when you initialize it.

Username
Enter the username for logging into the portal.

Password
Enter the password for the username for logging into the portal.

Proxy server network information
Depending on your network configuration, you may have to connect to the Internet (or services outside the local network) through a proxy server. In this situation, the server running the ESRS IP Client for CLARiiON uses the proxy server settings to access the proxy server so it can access Powerlink and send alerts to your service provider. Fill out the proxy server network information section of the worksheet using the information that follows.

Protocol
Enter the protocol (HTTPS or SOCKS) for connecting to the proxy server.
IP address or network name
Enter the network IP address or network name for connecting to the proxy server. The IP address cannot be 128.221.1.250, 128.221.1.251, 192.168.1.1, or 192.168.1.2 because these addresses are reserved.

Username
Enter the username for accessing the proxy server if it requires authentication. The SOCKS protocol requires authentication; the HTTPS protocol does not.

Password
Enter the password for the username.

Customer notification information
Customer notification information is information about the person or group to be notified when ESRS sends alerts to the service provider. Fill out the customer notification information section of the worksheet using the information that follows.

e-mail address
Enter the e-mail address of the person or group to notify when ESRS sends an alert to your service provider.

SMTP server name or IP address
Enter the address of the server in your corporate network that sends e-mail over the Internet. You can choose to have ESRS use e-mail as your backup communication for the Call Home feature and to use e-mail to notify you when ESRS sends alerts (remote notification events) to your service provider. To use e-mail in these cases, you must provide your SMTP server name or IP address, in addition to your e-mail address.

Powerlink credentials
The ESRS IP Client for CLARiiON software is available from Powerlink. To use the ESRS IP Client for CLARiiON Installation wizard to install this software on the monitor station, you must provide your Powerlink credentials. Fill out the Powerlink credentials section of the worksheet using the information that follows.

Username
Enter the Powerlink username for the person who will install the ESRS IP Client for CLARiiON on the monitor station.


Password
Enter the Powerlink password for the user.

Monitored system information
Fill out the monitored system information section of the worksheet using the information that follows.

System name or IP address
Enter the name or network IP address for connecting to your storage system. This IP address is the IP address that you assign to one of the storage system's SPs when you initialize it.

Port
Enter the port on the SP for ESRS to access.

Username
Enter the name of the user who will monitor your storage system and will have monitoring access to your storage system.

Password
Enter the user's password.

Customer contact information
Customer contact information is the information that the service provider needs to contact the person at your storage-system site if the service provider cannot fix the problem with your storage system online. Fill out the customer contact information section of the worksheet using the information that follows.

Name
Enter the name of the person at your storage-system site to contact.

e-mail
Enter the e-mail address of the contact person.

Phone
Enter the phone number of the contact person.


Site
Enter the identifier of the site with your storage system.
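Because ESRS can use e-mail as backup communication for Call Home and for notifying your contacts, you may want to confirm ahead of time that the SMTP server recorded in the customer notification section can relay mail from the monitor station's network. The following Python sketch is illustrative only; the server name and e-mail addresses are placeholders, not values from this guide.

import smtplib
from email.message import EmailMessage

SMTP_SERVER = "smtp.example.com"               # SMTP server name or IP address from the worksheet (placeholder)
NOTIFY_ADDRESS = "storage-admins@example.com"  # e-mail address to notify (placeholder)
SENDER = "esrs-monitor@example.com"            # any sender address the relay accepts (placeholder)

msg = EmailMessage()
msg["Subject"] = "ESRS notification path test"
msg["From"] = SENDER
msg["To"] = NOTIFY_ADDRESS
msg.set_content("Test message confirming that the SMTP relay accepts mail "
                "from the monitor station network.")

# Send the test message through the corporate relay on the standard SMTP port.
with smtplib.SMTP(SMTP_SERVER, 25, timeout=30) as server:
    server.send_message(msg)
print("SMTP relay accepted the test message.")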

CLARalert software
CLARalert software monitors your storage system's operation for error events and automatically notifies your service provider of any error events. It requires:

A monitor station, which is a host running a supported Windows operating system. This monitor station cannot be a server (a host connected to storage-system data ports) and must be on the same network as your storage-system management ports.

A portal system, which is a storage system running the required FLARE version.

We recommend that you use EMC Secure Remote Support (ESRS) IP Client for CLARiiON instead of the CLARalert software if your environment meets the requirements for the ESRS IP Client for CLARiiON software.

CLARalert worksheet
Record the network information for the CLARalert monitor station and the portal system in Table 14.
Table 14   CLARalert worksheet

Monitor station network information
  Host identifier:
  IP address:

Portal system network information
  Portal identifier:
  IP address:
  Username:
  Password:

Monitor station network information
Fill out the monitor station network information section of the worksheet using the information that follows.

Host identifier
Enter the optional name or description that identifies the Windows host that will be the monitor station.


IP address
Enter the network IP (Internet Protocol) address (for example, 128.222.78.10) for connecting to the monitor station. This address must be a static or DHCP-reserved IP address. The portal system uses this IP address to manage the monitor station. Since the monitor station can have multiple NICs, you must specify an IP address. This IP address cannot be 128.221.1.250, 128.221.1.251, 192.168.1.1, or 192.168.1.2 because these addresses are reserved.

Portal system network information
Fill out the portal system network information section of the worksheet using the information that follows.

Portal identifier
Enter the optional identifier (such as a hostname) for the storage system that is or will be the portal system.

IP address
Enter the network IP address for connecting to the portal system. This IP address is the IP address that you assign to one of the storage system's SPs when you initialize it.

Username
Enter the username for logging in to the portal.

Password
Enter the password for the username for logging in to the portal.

Navisphere management software


The Navisphere management software consists of the following software products:

Navisphere Manager
Unisphere Server Utility for supported operating systems
Unisphere Host Agent for supported operating systems
Navisphere host-based command line interface (CLI) for supported operating systems

Navisphere Manager
Navisphere Manager (called manager) lets you manage multiple storage systems on multiple servers simultaneously. It includes an
event monitor that checks storage systems for fault conditions and can notify you and/or customer service if any fault condition occurs.

Some of the tasks that you can perform with Navisphere Manager are:

Manage server connections to the storage system
Create RAID groups and thin pools, and LUNs on the RAID groups and thin pools
Create storage groups
Manipulate caches
Examine storage-system status and events recorded in the storage-system event logs
Transfer control from one SP to the other

Manager features a user interface (UI) with extensive online help. All CX4 storage systems running FLARE 04.29.000.5.xxx or earlier ship with Navisphere Manager installed and enabled.

Navisphere provides the following security functions and benefits:

Authentication
Authorization
Privacy
Audit

Authentication
Manager uses password-based authentication that is implemented by the storage management server on each storage system in the domain. You assign a username and password when you create either global or local user accounts. Global user accounts apply to all storage systems in a domain, and local user accounts apply to a specific storage system. A global user account lets you manage user accounts from a single location. When you create or change a global user account or add a new storage system to a domain, Manager automatically distributes the global account information to all storage systems in the domain.

Authorization
Manager bases authorization on the role associated with the authenticated user. Four roles are available: monitor, manager, administrator, and security administrator. All users, except a security administrator, can monitor the status of a storage system. Users with the manager role can also configure a storage system. Administrators
can maintain user accounts, as well as configure a storage system. Security administrators can only manage security and domain settings.

Privacy
Manager encrypts all data that passes between the browser and the storage management server, as well as the data that passes between storage-system management servers. This encryption protects the transferred data whether it is on local networks behind corporate firewalls or on the Internet.

Audit
Manager maintains an SP event log that contains a time-stamped record for each event. This record includes information such as an event code and event description. Manager also adds time-stamped audit records to the SP event log each time a user logs in or enters a request. These records include information about the request and the requestor.

Unisphere Server Utility, Unisphere Host Agent, and Navisphere CLI
The Unisphere Server Utility and the Unisphere Host Agent are provided for different operating systems. The Unisphere Server Utility replaces the Navisphere Server Utility, and the Unisphere Host Agent replaces the Navisphere Host Agent. You should install the server utility on each server connected to the storage system. Depending on your application needs, you can also install the host agent on each server connected to a storage system to:

Monitor storage-system events and notify personnel by e-mail, page, or modem when any designated event occurs.
Retrieve LUN world wide name (WWN) and capacity information from Symmetrix storage systems.
Register the server's HBAs with the storage system.

Alternatively, you can use the Unisphere Server Utility to register the server's HBAs with the storage system. Table 15 describes the host registration differences between the host agent and the server utility.


Table 15   Host registration differences between the host agent and the server utility

Function: Pushes LUN mapping and OS information to the storage system.
  Unisphere Server Utility (CX4 series, CX3 series, and CX series storage systems): No. LUN mapping information is not sent to the storage system. Only the server's name, ID, and IP address are sent to the storage system. Note: The text "Manually Registered" appears next to the hostname icon in the Manager UI, indicating that the host agent was not used to register this server.
  Unisphere Host Agent: Yes. LUN mapping information is displayed in the Manager UI next to the LUN icon or with the CLI using the lunmapinfo command.

Function: Runs automatically to send information to the storage system.
  Unisphere Server Utility (CX4 series, CX3 series, and CX series storage systems): No. You must manually update the information by starting the utility, or you can create a script to run the utility. Since you run the server utility on demand, you have more control over how often or when the utility is executed.
  Unisphere Host Agent: Yes. No user interaction is required.

Function: Requires network connectivity to the storage system.
  Unisphere Server Utility (CX4 series, CX3 series, and CX series storage systems): No. LUN mapping information is not sent to the storage system. Note that if you are using the server utility to upload a high-availability report to the storage system, you must have network connectivity.
  Unisphere Host Agent: Yes. Network connectivity allows LUN mapping information to be available to the storage system.

The Navisphere CLI provides commands that implement the Navisphere Manager UI functions, in addition to commands that implement the functions of the UIs for the optional data replication and data mobility software. A major benefit offered by the CLI is the ability to write command scripts to automate management operations.

Using Navisphere Manager software
With Navisphere Manager you can assign storage systems on an intranet or the Internet to a storage domain. For any installation, you can create one or more domains, provided that each storage system is in only one domain. Each storage domain must have at least one member with Manager installed. Each storage system in the domain is accessible from any other in the domain. Using an Internet browser, you point at a storage system that has Manager installed. The security software then prompts you to log in. After logging in, depending on the privileges of your account, you can monitor, manage, and/or define user accounts for any storage system in the domain. You cannot view storage systems outside the domain from within the domain.


You can run the Internet browser on any supported station (often a PC or laptop) with a network controller. At least one storage system in a domain (ideally at least two, for higher availability) must have Unisphere or Navisphere Manager installed, preferably Unisphere. Figure 15 shows an Internet configuration that connects 9 storage systems. It shows 2 domains: a U.S. Division domain with 5 storage systems (4 systems on SANs) and a European Division domain with 4 storage systems. The 13 servers that use the storage systems may be connected to the same or a different network, but the intranet shown is the one used to manage the storage systems.


Figure 15   Storage domains on the Internet (Domain 1 - U.S. Division and Domain 2 - European Division, with servers, switch fabrics, and storage systems managed over an intranet and the Internet)
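As noted earlier, a major benefit of the Navisphere CLI is the ability to script management operations. The following Python sketch shows one way such a script might drive the Secure CLI; it assumes naviseccli is installed on the management station and that the getagent and getlun commands are available in your CLI revision. The SP address and credentials are placeholders.

import subprocess

SP_ADDRESS = "10.0.0.1"   # management IP of SP A (placeholder)
USER = "admin"            # Navisphere username (placeholder)
PASSWORD = "password"     # Navisphere password (placeholder)

def navicli(*args):
    """Run one naviseccli command against the SP and return its text output."""
    cmd = ["naviseccli", "-h", SP_ADDRESS, "-user", USER,
           "-password", PASSWORD, "-scope", "0"] + list(args)
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

# Example: collect basic agent information and the LUN list for a nightly report.
print(navicli("getagent"))
print(navicli("getlun"))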

Navisphere Analyzer
Navisphere Analyzer lets you measure, compare, and chart the performance of SPs, LUNs, and disks to help you anticipate and find bottlenecks in your storage system.

Optional Navisphere Quality of Service Manager


The optional Navisphere Quality of Service Manager (Navisphere QoS Manager) lets you allocate storage-system performance resources on an application-by-application basis. You can use Navisphere QoS Manager to solve performance conflicts in consolidated environments where multiple applications share the same storage system. Within
storage-system capacity, Navisphere QoS Manager lets you meet specific performance targets for applications, and create performance thresholds to prevent applications from monopolizing storage-system performance.

Navisphere management worksheet


The worksheet in Table 16 will help you plan your Navisphere storage management configuration.
Table 16   Storage management worksheet for Navisphere software

Storage-system name:                 Domain name:
Navisphere Analyzer:                 Navisphere QoS Manager:

Server name:                         Operating system:
Server name:                         Operating system:
Server name:                         Operating system:
Server name:                         Operating system:
Server name:                         Operating system:
Server name:                         Operating system:
Server name:                         Operating system:
Server name:                         Operating system:


Basic storage concepts


This section explains traditional provisioning and the Virtual Provisioning software concepts that you should understand to plan your storage-system configuration. Major topics are:

Traditional provisioning concepts, page 48
Virtual Provisioning concepts, page 49
Virtual Provisioning versus traditional provisioning, page 50
Basic RAID concepts, page 52
Supported RAID types, page 53
RAID type benefits and trade-offs, page 64
RAID type guidelines for RAID groups or thin pools, page 69
Sample applications for RAID group or thin pool types, page 70
Fully automated storage tiering (FAST), page 72

Traditional provisioning concepts


Traditional provisioning allows you to assign storage capacity that is physically available on the storage-system disks to a server (host). You allocate this physical capacity using LUNs that you create on RAID groups. LUNs on RAID groups are often called RAID group LUNs.

RAID groups
A RAID group is a set of disks of the same type on which you create LUNs. These disks can be the vault disks (000-004). A RAID group has one of the following RAID types: RAID 6, RAID 5, RAID 3, RAID 1, RAID 1/0, RAID 0, individual disk, or hot spare.

LUN (traditional LUN)
A traditional LUN is a logical unit that groups space on the disks in a RAID group into one span of disk storage space and looks like an individual disk to a server's operating system. The capacity for each LUN you create is distributed equally across the disks in the RAID group. The amount of physical space allocated to a LUN is the same as the user capacity that the server's operating system sees. The storage capacity of a LUN is set when you create the LUN; however, you can expand it using metaLUNs, that is, by adding one or more other LUNs.

If the storage system is running FLARE 04.29 or later, you can shrink a RAID group LUN. A RAID group LUN can be a hot spare.

Virtual Provisioning concepts


Virtual Provisioning, unlike traditional provisioning, allows you to assign more storage capacity to a server (host) than is physically available by using thin pools on which you create thin LUNs. Virtual Provisioning is available on storage systems running FLARE 04.28.000.5.5xx or later that have the optional Virtual Provisioning enabler installed.

Thin pools
Thin pools are supported for storage systems running FLARE 04.28.000.5.5xx or 04.29.000.5.xxx with the Virtual Provisioning enabler installed. A thin pool is a set of disks that shares its user capacity with one or more thin LUNs. We recommend that all disks within the pool have the same capacity. A thin pool has the RAID 6 or RAID 5 type; RAID 6 is the default RAID type. The storage-system software monitors storage demands on pools and adds storage capacity to them, as required, up to a specified amount.

Thin LUN
Thin LUNs are supported for storage systems with the Thin Provisioning enabler installed. A thin LUN is a logical unit of storage in a pool that looks like an individual disk to an operating system. A thin LUN competes with other thin LUNs in the pool for the pool's available storage. The capacity of the thin LUN that is visible to the server is independent of the available physical storage in the pool. To a server, a thin LUN behaves very much like a RAID group LUN. Unlike a RAID group LUN or thick LUN, however, a thin LUN can run out of disk space if the pool to which it belongs runs out of disk space. By default, the storage system issues a warning alert when 70% of the pool's space has been consumed, and a critical alert when 85% of the space has been consumed. You can customize these thresholds that determine when the alerts are issued. As thin LUNs continue consuming the pool's space, both alerts continue to report the actual percentage of consumed space. A thin LUN uses slightly more capacity than the amount of user data written to it due to the metadata required
to reference the data. Unlike a RAID group LUN or a thick LUN, a thin LUN can run out of space. A thin LUN cannot be a hot spare.
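The 70% warning and 85% critical thresholds described above can be illustrated with a small sketch. The function and variable names below are examples for planning purposes only, not storage-system APIs.

WARNING_THRESHOLD = 70.0    # percent of pool capacity consumed (default warning level)
CRITICAL_THRESHOLD = 85.0   # percent of pool capacity consumed (default critical level)

def pool_alert(consumed_gb, pool_capacity_gb,
               warn=WARNING_THRESHOLD, critical=CRITICAL_THRESHOLD):
    """Return the alert level a pool would report at this consumption level."""
    percent_used = 100.0 * consumed_gb / pool_capacity_gb
    if percent_used >= critical:
        return f"CRITICAL: {percent_used:.1f}% of pool space consumed"
    if percent_used >= warn:
        return f"WARNING: {percent_used:.1f}% of pool space consumed"
    return f"OK: {percent_used:.1f}% of pool space consumed"

# A 1000 GB pool with 720 GB consumed by its thin LUNs crosses the warning level.
print(pool_alert(720, 1000))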

Virtual Provisioning versus traditional provisioning


Table 17 lists virtual provisioning and traditional provisioning trade-offs.
Table 17   Virtual Provisioning and traditional provisioning trade-offs

RAID types
  Virtual Provisioning: RAID 6 and RAID 5 thin pools.
  Traditional provisioning: RAID 6, RAID 5, RAID 3, RAID 1/0, and RAID 1 RAID groups, or individual disk or hot spare.

MetaLUNs
  Virtual Provisioning: Not supported.
  Traditional provisioning: Fully supported.

LUN expansion
  Virtual Provisioning: Not supported.
  Traditional provisioning: Fully supported.

LUN shrinking
  Virtual Provisioning: Not supported.
  Traditional provisioning: Fully supported for Windows Server 2008 hosts connected to a storage system running FLARE 04.29 or later.

LUN migration
  Virtual Provisioning: Fully supported.
  Traditional provisioning: Fully supported.

Disk usage
  Virtual Provisioning: Disks in a thin pool can be Flash (SSD) disks only if all the disks in the pool are Flash disks. You cannot intermix Flash disks and other disks in a pool. The disks in a thin pool cannot be vault disks 000-004.
  Traditional provisioning: All the disks in a RAID group must be of the same type.

Space efficiency
  Virtual Provisioning: When you create a thin LUN, a minimum of 2 GB of space on the pool is reserved for the thin LUN. Space is assigned to a pool on an as-needed basis. Since the thin LUNs on a pool compete for the pool's space, a pool can run out of space for its thin LUNs.
  Traditional provisioning: When you create a LUN, the LUN is assigned physical space on the RAID group equal to the LUN's size. This space is always available to the LUN even if it does not actually use the space.

Hot sparing
  Virtual Provisioning: You cannot create a hot spare on a pool. Any hot spare, except one that is a Flash (SSD) disk, can be a spare for any disk in a pool. A hot spare that is a Flash disk can only be a spare for Flash disks.
  Traditional provisioning: You can create a hot spare on a RAID group. Any hot spare, except one that is a Flash (SSD) disk, can be a spare for any disk in a RAID group. A hot spare that is a Flash disk can only be a spare for Flash disks.

Performance
  Virtual Provisioning: Thin LUN performance is typically lower than LUN performance.
  Traditional provisioning: LUN performance is typically faster than thin LUN performance.

Manual administration
  Virtual Provisioning: Thin pools require less manual administration than RAID groups.
  Traditional provisioning: RAID groups require more manual administration than pools.

Use with SnapView
  Virtual Provisioning: A thin LUN can be a snapshot source LUN, a clone LUN, or a clone source LUN, but not a clone private LUN, and cannot be in the reserved LUN pool.
  Traditional provisioning: Fully supported for traditional LUNs.

Use with MirrorView/A or MirrorView/S
  Virtual Provisioning: Mirroring with thin LUNs as primary or secondary images is supported only between storage systems running FLARE 04.29 or later. For mirroring between storage systems running FLARE 04.29, the primary image, secondary image, or both images can be thin LUNs.
  Traditional provisioning: Fully supported for traditional LUNs.

Use with SAN Copy
  Virtual Provisioning: Thin LUNs are supported only for SAN Copy sessions in the following configurations: within a storage system running FLARE 04.29 or later; between systems running FLARE 04.29 or later; or between systems running FLARE 04.29 or later and systems running FLARE 04.28.005.504 or later. The source LUN must be on the storage system that owns the SAN Copy session.
  Traditional provisioning: Fully supported for traditional LUNs in all configurations.

Guidelines for using Virtual and traditional provisioning
To decide when to use Virtual or traditional provisioning, consider the following guidelines.

Use Virtual Provisioning when:

Ease of use is more important than absolute performance.
Applications have controlled capacity growth and moderate to low performance requirements. Examples:
  Unstructured data in file systems
  Archives
  Data warehousing
  Research and development

Use traditional provisioning when:

Absolute performance is most important.
Applications have high I/O activity and low latency requirements. Examples:
  Classic online transaction processing (OLTP) applications
  Backups of raw devices
  Databases that initialize every block

Basic RAID concepts


This section discusses disk striping, mirroring, pools, and LUNs.

Disk striping
Using disk stripes, the storage-system hardware can read from and write to multiple disks simultaneously and independently. By allowing several read/write heads to work on the same task at once, disk striping can enhance performance. The amount of information read from or written to each disk makes up the stripe element size. The stripe size is the stripe element size multiplied by the number of data disks (not mirror or parity disks) in a RAID group or thin pool. For example, assuming a stripe element size of 128 sectors (the default):

For a RAID 6 storage pool with 6 disks (the equivalent of 4 data disks and 2 parity disks), the stripe size is 128 x 4, or 512 sectors per stripe.
For a RAID 5 storage pool with 5 disks (the equivalent of 4 data disks and 1 parity disk), the stripe size is 128 x 4, or 512 sectors per stripe.
For a RAID 1/0 storage pool with 6 disks (3 data disks and 3 mirror disks), the stripe size is 128 x 3, or 384 sectors per stripe.

The storage system uses disk striping with most RAID types.

Mirroring
Mirroring maintains a copy of a logical disk image that provides continuous access if the original image becomes inaccessible. The system and user applications continue running on the good image without interruption. You can create a mirror by binding disks as a RAID 1 group (mirrored pair) or a RAID 1/0 group (mirrored RAID 0 group); the hardware will then mirror the disks automatically.
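To make the stripe-size arithmetic in the disk striping discussion above concrete, the following Python sketch repeats the three examples in code. The 128-sector stripe element size is the documented default; everything else is illustrative.

STRIPE_ELEMENT_SECTORS = 128   # default stripe element size

def stripe_size_sectors(total_disks, parity_or_mirror_disks):
    """Stripe size = stripe element size x number of data disks."""
    data_disks = total_disks - parity_or_mirror_disks
    return STRIPE_ELEMENT_SECTORS * data_disks

print(stripe_size_sectors(6, 2))   # 6-disk RAID 6: 4 data disks -> 512 sectors per stripe
print(stripe_size_sectors(5, 1))   # 5-disk RAID 5: 4 data disks -> 512 sectors per stripe
print(stripe_size_sectors(6, 3))   # 6-disk RAID 1/0: 3 data disks -> 384 sectors per stripe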

Pools and LUNs
You can create multiple LUNs on one RAID group or thin pool, and then allot each LUN to a different user or application on a server. For example, you could create three LUNs with 100, 400, and 573 GB of storage capacity for temporary, mail, and customer files. Note that the storage capacity of a RAID group LUN is actual physical capacity on the storage system, whereas the storage capacity of a thin LUN may not be actual physical capacity. One disadvantage of multiple LUNs on a storage pool is that I/O to each LUN may affect I/O to the others on the RAID group or thin pool; that is, if traffic to one LUN is very heavy, I/O performance with other LUNs may be degraded. The main advantage of multiple LUNs per RAID group or thin pool is the ability to divide the enormous amount of disk space that a RAID group or thin pool provides. Figure 16 shows three LUNs on one 5-disk RAID group or thin pool.

Figure 16   Multiple LUNs on a RAID group or thin pool (LUN 0 temp, LUN 1 mail, and LUN 2 customers spread across five disks)

Supported RAID types


This section discusses the RAID 6, RAID 5, RAID 3, RAID 1, RAID 1/0, and RAID 0 types, and also individual disks, hot spares, and proactive sparing.

RAID 6 (double distributed parity)
RAID 6 is supported for RAID groups and thin pools. A single RAID 6 group usually consists of 6 or 12 disks, but can have 4, 8, 10, 14, or 16 disks. On a RAID 6 group, you can create a maximum
of 256 LUNs (the maximum number of LUNs per RAID group) to allocate disk space to users and applications that are on different servers. A single RAID 6 thin pool consists of a minimum of 4 disks up to the maximum number of disks per thin pool supported by the storage system. On a RAID 6 thin pool, you can create up to the maximum number of LUNs supported by the storage system to allocate disk space to users and applications that are on different servers. Table 18 lists these maximum limits.
Table 18   Thin pool disk and thin LUN limits

FLARE version          Disks per thin pool   Disks in all thin pools per storage system   Thin pools per storage system   Thin LUNs per storage system
FLARE 04.29 or later   40                    80                                           20                              512
FLARE 04.28            20                    40                                           10                              256
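When planning thin pools, you can check a proposed layout against the limits in Table 18. The following sketch encodes those limits; the data structures and function name are illustrative only.

# Limits taken from Table 18; keys are FLARE versions.
LIMITS = {
    "04.29": {"disks_per_pool": 40, "disks_all_pools": 80, "pools": 20, "thin_luns": 512},
    "04.28": {"disks_per_pool": 20, "disks_all_pools": 40, "pools": 10, "thin_luns": 256},
}

def check_pool_plan(flare, pool_disk_counts, thin_lun_count):
    """Return a list of problems with the planned pools, or a confirmation."""
    limits = LIMITS[flare]
    problems = []
    if len(pool_disk_counts) > limits["pools"]:
        problems.append("too many thin pools")
    if any(d > limits["disks_per_pool"] for d in pool_disk_counts):
        problems.append("a pool exceeds the per-pool disk limit")
    if sum(pool_disk_counts) > limits["disks_all_pools"]:
        problems.append("total disks in all pools exceeds the limit")
    if thin_lun_count > limits["thin_luns"]:
        problems.append("too many thin LUNs")
    return problems or ["plan is within the published limits"]

# Example: two pools of 20 disks each and 100 thin LUNs on FLARE 04.29.
print(check_pool_plan("04.29", [20, 20], 100))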

A RAID 6 group or thin pool uses disk striping. In a RAID 6 group or thin pool, some space is dedicated to parity and the remaining disk space is for data. The storage system writes two independent sets of parity information (row parity and diagonal parity) that let the group or thin pool continue operating if one or two disks fail or if a hard media error occurs during a single-disk rebuild. When you replace the failed disks, the SP rebuilds, or with proactive sparing continues rebuilding, the group or thin pool by using the information stored on the working disks. Performance is degraded while the SP rebuilds the group or thin pool. This degradation is lessened with proactive sparing. During the rebuild, the storage system continues to function and gives users access to all data, including data stored on the failed disks.
Proactive sparing creates a hot spare (a proactive spare) of a disk that is becoming prone to errors by copying the contents of the disk to a hot spare. Subsequently, you can remove the disk before it fails and the proactive spare then takes its place.


A RAID 6 group or thin pool distributes parity evenly across all drives so that parity drives are not a bottleneck for write operations. Figure 17 shows user and parity data with the default stripe element size of 128 sectors (65,536 bytes) in a 6-disk RAID 6 group or thin pool. Notice that the disk block addresses in the stripe proceed sequentially from the first disk to the second, third, fourth, fifth, and sixth, then back to the first, and so on.
Figure 17   RAID 6 group or thin pool (user data, row parity, and diagonal parity striped across six disks; 8 stripe elements per disk shown)
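The rotation of disk block addresses across the stripe described for Figure 17 can be sketched as follows. This is a simplified illustration that considers only the data portion of the stripe; the storage system also rotates row and diagonal parity across the disks, which is ignored here.

STRIPE_ELEMENT_SECTORS = 128   # default stripe element size

def data_disk_for_block(lba, data_disks):
    """Return (disk index, stripe number) for a logical block address."""
    element = lba // STRIPE_ELEMENT_SECTORS        # which stripe element holds this block
    return element % data_disks, element // data_disks

# With 4 data disks (a 6-disk RAID 6 group), blocks 0-127 land on the first
# disk, 128-255 on the second, and so on, wrapping back to the first disk.
for lba in (0, 128, 256, 384, 512):
    print(lba, data_disk_for_block(lba, data_disks=4))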

A RAID 6 group or thin pool offers good read performance and good write performance. Write performance benefits greatly from storage-system caching.

RAID 5 (distributed parity)
RAID 5 is supported for RAID groups and thin pools.


A single RAID 5 group usually consists of 5 disks, but can have 3 to 16 disks. On a RAID 5 group, you can create up to 256 LUNs (the maximum number of LUNs per RAID group) to allocate disk space to users and applications that are on different servers. A single RAID 5 thin pool consists of a minimum of 3 disks up to the maximum number of disks per pool supported by the storage system. On a pool, you can create up to the maximum number of LUNs supported by the storage system to allocate disk space to users and applications that are on different servers. Table 18 lists these maximum limits.

A RAID 5 group or thin pool uses disk striping. The storage system writes parity information that lets the group or thin pool continue operating if a disk fails. When you replace the failed disk, the SP rebuilds, or with proactive sparing continues rebuilding, the group or thin pool using the information stored on the working disks. Performance is degraded while the SP rebuilds the group or thin pool. This degradation can be lessened by using the proactive sparing feature. During the rebuild, the storage system continues to function and gives users access to all data, including data stored on the failed disk. Figure 18 shows user and parity data with the default stripe element size of 128 sectors (65,536 bytes) in a 5-disk RAID 5 group or thin pool. Notice that the disk block addresses in the stripe proceed sequentially from the first disk to the second, third, fourth, and fifth, then back to the first, and so on.


Figure 18   RAID 5 group or thin pool (user data and parity striped across five disks; 8 stripe elements per disk shown)

RAID 5 groups or thin pools offer excellent read performance and good write performance. Write performance benefits greatly from storage-system caching.

RAID 3 (single disk parity)
RAID 3 is supported for RAID groups only. A single RAID 3 group consists of five or nine disks and uses disk striping. To obtain the best bandwidth performance with a RAID 3 LUN, you need to limit concurrent access to the LUN. For example, a RAID 3 group may have multiple LUNs, but the highest bandwidth is achieved with one to four threads of concurrent, large I/O. If a disk in the group fails, performance is degraded while the SP rebuilds the group. This degradation can be lessened by using the proactive sparing feature. However, the storage system continues to function and gives users access to all data, including data stored on the failed disk. Figure 19 shows user and parity data with the default stripe element size of 128 sectors (65,536 bytes) in a RAID 3 group. Notice that the disk
block addresses in the stripe proceed sequentially from the first disk to the second, third, and fourth, then back to the first, and so on.
Figure 19   RAID 3 group (user data striped across four disks, with parity data on the fifth disk)

RAID 3 differs from RAID 6 and RAID 5 in one major way. With a RAID 3 group, the parity information is stored on one disk; with a RAID 6 or RAID 5 group or thin pool, it is stored on all disks. RAID 3 can perform sequential I/O better than RAID 6 or RAID 5, but does not handle random access as well. RAID 3 is best thought of as a specialized RAID 5 for applications with large or sequential I/O. However, with the write cache enabled for RAID 3 LUNs, RAID 3 is equivalent to RAID 5, and can handle some level of concurrency. A RAID 3 group works well for applications that use I/O with blocks 64 KB and larger. By using both the read and write cache, a RAID 3 group can handle several concurrent streams of access. RAID 3 groups do not require any special buffer area. No fixed memory is required to use write cache with RAID 3. Simply allocate write cache as you would for RAID 5, and ensure that caching is turned on for the
LUNs in the RAID 3 groups. Access to RAID 3 LUNs is compatible with concurrent access to LUNs of other RAID types on the storage system.

RAID 1 (mirrored pair)
RAID 1 is supported for RAID groups only. A single RAID 1 group consists of two disks that are mirrored automatically by the storage-system hardware. With a RAID 1 group, you can create multiple RAID 1 LUNs to apportion disk space to different users, servers, and applications. RAID 1 hardware mirroring within the storage system is not the same as software mirroring, remote mirroring, or hardware mirroring for other kinds of disks. Functionally, the difference is that you cannot manually stop mirroring on a RAID 1 mirrored pair, and then access one of the images independently. If you want to use one of the disks in such a mirror separately, you must unbind the mirror (losing all data on it), rebind the disk as the type you want, and software format the newly bound LUN. With a storage system, RAID 1 hardware mirroring has the following advantages:

Automatic operation (you do not have to issue commands to initiate it)
Physical duplication of images
Rebuild period that you can select, during which the SP recreates the second image after a failure

With a RAID 1 mirrored pair, the storage system writes the same data to both disks, as shown in Figure 20.
Figure 20   RAID 1 mirrored pair (the same user data written to both disks)


RAID 1/0 (mirrored nonredundant array)
RAID 1/0 is supported for RAID groups only. A single RAID 1/0 group consists of 2, 4, 6, 8, 10, 12, 14, or 16 disks. These disks make up 2 mirror images, with each image including 2 to 8 disks. The hardware automatically mirrors the disks. A RAID 1/0 group uses disk striping. It combines the speed advantage of RAID 0 with the redundancy advantage of mirroring. With a RAID 1/0 group, you can create up to 128 RAID 1/0 LUNs to apportion disk space to different users, servers, and applications. Figure 21 shows the distribution of user data with the default stripe element size of 128 sectors (65,536 bytes) in a 6-disk RAID 1/0 group. Notice that the disk block addresses in the stripe proceed sequentially from the first mirrored disks (first and fourth disks) to the second mirrored disks (second and fifth disks), to the third mirrored disks (third and sixth disks), and then back to the first mirrored disks, and so on.


Figure 21   RAID 1/0 group (user data striped across the three disks of the primary image and mirrored on the three disks of the secondary image)

A RAID 1/0 group can survive the failure of multiple disks, provided that one disk in each image pair survives.

RAID 0 (nonredundant RAID striping)
RAID 0 is supported for RAID groups only.

CAUTION A RAID 0 group provides no protection for your data. EMC does not recommend using a RAID 0 group unless you have some way of protecting your data, such as software mirroring.


A single RAID 0 group consists of 3 to a maximum of 16 disks. A RAID 0 group uses disk striping, in which the hardware writes to or reads from multiple disks simultaneously. You can create up to 128 LUNs in any RAID 0 group. Unlike the other RAID levels, with RAID 0 the hardware does not maintain parity information on any disk; this type of group has no inherent data redundancy. As a result, if any failure (including an unrecoverable read error) occurs on a disk in the LUN, the information on the LUN is lost. RAID 0 offers enhanced performance through simultaneous I/O to different disks. A desirable alternative to RAID 0 is RAID 1/0, which does protect your data. Proactive sparing is not supported for a RAID 0 group.

Individual disk
The individual disk type is supported for RAID groups only. An individual disk unit is a disk bound to be independent of any other disk in the cabinet. An individual unit has no inherent high availability, but you can make it highly available by using software mirroring with another individual unit.

Hot spare
A hot spare is a dedicated replacement disk on which users cannot store information. A hot spare is global: if any disk in a RAID 6 group or thin pool, RAID 5 group or thin pool, RAID 3 group, RAID 1/0 group, or RAID 1 group fails, the SP automatically rebuilds the failed disk's structure on the hot spare. When the SP finishes rebuilding, the RAID group or thin pool functions as usual, using the hot spare instead of the failed disk. When you replace the failed disk, the SP copies the data from the former hot spare onto the replacement disk. When the copy is done, the RAID group or thin pool consists of disks in the original slots, and the SP automatically frees the hot spare to serve as a hot spare again. A hot spare is most useful when you need the highest data availability. It eliminates the time and effort needed for someone to notice that a disk has failed, find a suitable replacement disk, and insert the disk. When you plan to use a hot spare, make sure the disk has the capacity to serve in any RAID group or thin pool in the storage system. A RAID
group or thin pool cannot use a hot spare that is smaller than a failed disk in the RAID group or thin pool. You can have one or more hot spares per storage system. You can make any disk in the storage system a hot spare, except a disk that stores FLARE or the write cache vault; that is, a hot spare can be any disk except disk IDs 000 through 004. If you use hot spares of different sizes, the storage system will automatically use the hot spare of the proper size in place of a failed disk.
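The sizing rule above (a hot spare cannot replace a failed disk that is larger than the spare) can be expressed as a small planning check. The capacities below are example values only.

def usable_spares(hot_spare_sizes_gb, failed_disk_size_gb):
    """Return the hot spares large enough to stand in for the failed disk."""
    return [size for size in hot_spare_sizes_gb if size >= failed_disk_size_gb]

# Spares of 300 GB and 600 GB can both cover a failed 300 GB disk,
# but only the 600 GB spare can cover a failed 450 GB disk.
print(usable_spares([300, 600], 300))
print(usable_spares([300, 600], 450))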

CAUTION Do not use a SATA disk as a spare for a Fibre-Channel-based LUN, and do not use a Fibre Channel disk as a spare for a SATA-based LUN. A hot spare that is a Flash (SSD) disk can be used only as a spare for a Flash disk. If you have Flash disks in a RAID group, you should create at least one hot spare that is a Flash disk.

An example of hot spare usage follows in Figure 22.
1. RAID 5 groups consist of disks 0-4 and 5-9; mirrored pairs are disks 10-11 and 12-13; disk 14 is a hot spare.
2. Disk 3 fails.
3. The RAID 5 group becomes disks 0, 1, 2, 14, and 4; now no hot spare is available.
4. The system operator replaces failed disk 3 with a functional module.
5. Disk 14 copies its data to the new disk 3.
6. Once again, the RAID 5 group consists of disks 0-4, and the hot spare is disk 14.

Figure 22   How a hot spare works


Proactive sparing
Proactive sparing lets you proactively create a hot spare of a disk that is becoming prone to errors (a proactive candidate). The proactive sparing operation copies the contents of a disk to a hot spare before the disk fails. Subsequently, you can remove the disk from the storage system before it fails, and the hot spare then takes its place. The proactive sparing operation is initiated automatically or manually. When the storage-system software identifies certain types or frequencies of errors on a disk, it identifies the disk as a proactive candidate, and automatically begins the proactive sparing operation. The storage-system software copies the contents of the proactive candidate to the proactive hot spare. Additionally, you can manually copy all the data from a proactive candidate to a proactive spare using Navisphere Manager. When a proactive sparing copy operation completes, the proactive candidate is faulted. When you replace the faulted disk, the storage system copies the data from the proactive spare to the replacement disk. Any available hot spare can be a proactive spare, but only one hot spare can be used for proactive sparing at a time. If the storage system has only one hot spare, it can be a proactive spare. Table 19 lists the number of concurrent proactive spares supported per storage system.
Table 19   Proactive spares per RAID type

RAID type                  Number of proactive spares
RAID 6, RAID 5, RAID 3     1
RAID 1                     1 per pair
RAID 1/0                   1 per mirrored pair

Proactive sparing is not supported for RAID 0 or individual disk units.

RAID type benefits and trade-offs


This section discusses performance, storage flexibility, data availability, and disk space usage for the different RAID types.

Performance
RAID 6 and RAID 5, with individual access, provide high read throughput by allowing simultaneous reads from each disk in the RAID group or thin pool. RAID 6 and RAID 5 write performance is
excellent when the storage system uses write caching. RAID 6 group performance is better than RAID 6 thin pool performance, and RAID 5 group performance is better than RAID 5 thin pool performance. RAID 3, with parallel access, provides high throughput for sequential requests. Large block sizes (more than 64 KB) are most efficient. RAID 3 attempts to write full stripes to the disk and avoid parity update operations. Generally, the performance of a RAID 3 group increases as the size of the I/O request increases. Read performance increases incrementally with read requests up to 1 MB. Write performance increases incrementally for sequential write requests that are greater than 256 KB. RAID 1 read performance will be higher than that of an individual disk, while write performance remains approximately the same as that of an individual disk. A RAID 0 group (nonredundant RAID striping) or RAID 1/0 group (mirrored RAID 0 group) can have as many I/O operations occurring simultaneously as there are disks in the group. In general, the performance of RAID 1/0 equals the number of disk pairs times the RAID 1 performance number. If you want high throughput for a specific LUN, use a RAID 1/0 or RAID 0 group. A RAID 1/0 group requires at least two disks; a RAID 0 group requires at least three disks. If you create multiple LUNs on a group, the LUNs share the group's disks, and the I/O demands of each LUN affect the I/O service time for the other LUNs.

Storage flexibility
On a CX4-120 storage system you can create up to 1024 LUNs, and 512 of these LUNs can be thin LUNs. On a RAID group you can create up to 256 LUNs, and on a thin pool you can create up to 512 LUNs. The number of LUNs that you can create adds flexibility, particularly with large disks, since it lets you apportion LUNs of various sizes to different servers, applications, and users.


Data availability and disk space usage in RAID groups
If data availability is critical and you cannot afford to wait hours to replace a disk, rebind it, make it accessible to the operating system, and load its information from backup, then use a redundant RAID group (RAID 6, RAID 5, RAID 3, RAID 1, or RAID 1/0) or a redundant thin pool (RAID 6 or RAID 5). If data availability is not critical, or disk space usage is critical, bind an individual unit. Figure 23 illustrates disk usage in RAID group configurations and Figure 24 illustrates disk usage in thin pool configurations.


Figure 23   Disk space usage in sample RAID group configurations (RAID 6, RAID 5, RAID 3, RAID 1, RAID 1/0, and RAID 0 groups, an individual disk unit, and a hot spare)


Figure 24   Disk space usage in sample thin pool configurations (RAID 6 pool, RAID 5 pool, and a hot spare)

A RAID 1 or RAID 1/0 group provides very high data availability. It is more expensive than a RAID 6, RAID 5, or RAID 3 group, since only 50 percent of the total disk capacity is available for user data. A RAID 6, RAID 5, or RAID 3 group provides high data availability, but requires more disks than a RAID 1 group. A RAID 6 group provides the highest data availability of these three groups. Likewise, a RAID 6 thin pool provides higher data availability than a RAID 5 thin pool. In a RAID 6 group or thin pool, the disk space available for user data is the total capacity of the disks in the RAID group or thin pool minus the capacity of two disks. In a RAID 5 group or thin pool or a RAID 3 group, the disk space available for user data is the total capacity of the disks in the RAID group or thin pool minus the capacity of one disk. For example, in a 6-disk RAID 6 group or thin pool or a 5-disk RAID 5 group or thin pool, the capacity of 4 disks is available for user data, which is 67% (RAID 6) or 80% (RAID 5) of the group's or pool's total disk capacity. So RAID 6, RAID 5, and RAID 3 groups use disk space much more efficiently than a RAID 1 group. A RAID 6, RAID 5, or RAID 3 group is usually more suitable than a RAID 1 group for
applications where high data availability, good performance, and efficient disk space usage are all of relatively equal importance. A RAID 0 group (nonredundant RAID striping) provides all its disk space for user files, but does not provide any high-availability features. For high availability, you should use a RAID 1/0 group instead. A RAID 1/0 group provides the best combination of performance and availability, at the highest cost per GB of disk space. An individual unit, like a RAID 0 group, provides no high-availability features. All its disk space is available for user data.
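The user-capacity rules above can be summarized in a short worked example, assuming equal-sized disks. The function is a planning aid only; raw capacities here ignore formatting and vault overhead.

def user_capacity_gb(raid_type, disk_count, disk_size_gb):
    """Capacity available for user data under the rules described above."""
    total = disk_count * disk_size_gb
    if raid_type == "RAID 6":
        return total - 2 * disk_size_gb       # two disks' worth of parity
    if raid_type in ("RAID 5", "RAID 3"):
        return total - disk_size_gb           # one disk's worth of parity
    if raid_type in ("RAID 1", "RAID 1/0"):
        return total / 2                      # half the capacity holds mirrored data
    return total                              # RAID 0 or individual disk

# A 6-disk RAID 6 group and a 5-disk RAID 5 group of 300 GB disks both
# provide 1200 GB (the capacity of 4 disks) for user data.
print(user_capacity_gb("RAID 6", 6, 300), user_capacity_gb("RAID 5", 5, 300))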

RAID type guidelines for RAID groups or thin pools


To decide when to use a RAID 6 group or thin pool, RAID 5 group or thin pool, RAID 3 group, RAID 1 group, RAID 1/0 group, RAID 0 group, individual disk unit, or hot spare, you need to weigh these factors:

Importance of data availability
Importance of performance
Amount of data stored
Cost of disk space

Use the following guidelines to decide on RAID types.

Use a RAID 6 (double distributed parity) or RAID 5 (distributed parity) group or thin pool for applications where:

Data availability is very important. A RAID 6 group or thin pool provides higher availability than a RAID 5 group or thin pool, but uses more overhead than a RAID 5 group or thin pool. The performance of a RAID 6 group or RAID 5 group is better than the performance of a RAID 6 thin pool or a RAID 5 thin pool, respectively.
Large volumes of data will be stored.
Multitask applications use I/O transfers of different sizes.
Excellent read and good write performance is needed (write performance is very good with write caching).
You want the flexibility of multiple LUNs per RAID group or thin pool.

Use a RAID 3 (single-disk parity) group for applications where:

Data availability is very important.
Large volumes of data will be stored.
Similar access patterns are likely and random access is unlikely.
The highest possible bandwidth performance is required.

Use a RAID 1 (mirrored pair) group for applications where:

Data availability is very important.
Speed of write access is important and write activity is heavy.

Use a RAID 1/0 (mirrored nonredundant array) group for applications where:

Data availability is critically important.
Overall performance is very important.

Use a RAID 0 (nonredundant RAID striping) group for applications where:

High availability is not important. You can afford to lose access to all data stored on a LUN if a single disk fails.
Overall performance is very important.

Use an individual unit for applications where:

High availability is not important.
Speed of write access is somewhat important.

Use a hot spare where:

In any RAID 6, RAID 5, RAID 3, RAID 1/0, or RAID 1 group, high availability is so important that you want to regain data redundancy quickly without human intervention if any disk in the group fails.
Minimizing the degraded performance caused by disk failure in a RAID 6 group, RAID 5 group, or RAID 3 group is important.
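The guidelines above can be condensed into a rough decision helper. This sketch is only one way to encode them; the category names and answers are examples, and a real configuration should also weigh the amount of data stored and the cost of disk space.

def suggest_raid_type(availability, performance_profile):
    """Very rough planning aid that mirrors the selection guidelines above."""
    if availability == "critical":
        return "RAID 1/0"
    if availability == "high":
        if performance_profile == "sequential-large-io":
            return "RAID 3"
        if performance_profile == "heavy-small-writes":
            return "RAID 1"
        return "RAID 6 or RAID 5"
    # Availability not important: no redundancy, so protect the data another way.
    return "RAID 0 or individual disk (protect the data some other way)"

print(suggest_raid_type("high", "mixed"))            # RAID 6 or RAID 5
print(suggest_raid_type("critical", "mixed"))        # RAID 1/0
print(suggest_raid_type("not-important", "mixed"))   # RAID 0 or individual disk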

Sample applications for RAID group or thin pool types


This section describes some sample applications for which you would want to use the different RAID types for RAID groups or thin pools.

RAID 6 (distributed dual parity) or RAID 5 (distributed parity) group or thin pool
A RAID 6 or RAID 5 group or thin pool is useful as a database repository or a database server that uses a normal or low percentage of write operations (writes are 33 percent or less of all I/O operations). Use a RAID 6 or RAID 5 group or thin pool where multitasking applications perform I/O transfers of different sizes. Write caching can significantly enhance the write performance of a RAID 6 or RAID 5 group or thin pool. For higher data availability, use a RAID 6 group or thin pool instead of a RAID 5 group or thin pool. The performance of a LUN in a RAID 6 group is typically better than the performance of a thin LUN in a RAID 6 thin pool; likewise, the performance of a LUN in a RAID 5 group is typically better than the performance of a thin LUN in a RAID 5 thin pool. For example, a RAID 6 or RAID 5 group or thin pool is suitable for multitasking applications that require a large history database with a high read rate, such as a database of legal cases, medical records, or census information. A RAID 6 or RAID 5 group or thin pool also works well with transaction processing applications, such as an airline reservations system, where users typically read the information about several available flights before making a reservation, which requires a write operation. You could also use a RAID 6 or RAID 5 group or thin pool in a retail environment, such as a supermarket, to hold the price information accessed by the point-of-sale terminals. Even though the price information may be updated daily, requiring many write operations, it is read many more times during the day.

RAID 3 (single-disk parity) group
A RAID 3 group is ideal for high-bandwidth reads or writes, that is, applications that perform either logically sequential I/O or use large I/O sizes (stripe size or larger). Using read and write caching, several applications can read and write from a RAID 3 group. Random access in a RAID 3 group is not optimal, so the ideal applications for RAID 3 are backup to disk, real-time data capture, and storage of extremely large files. You might use a RAID 3 group for a single-task application that does large I/O transfers, like a weather tracking system, geologic charting application, medical imaging system, or video storage application.

RAID 1 (mirrored pair) group
A RAID 1 (mirrored pair) group is useful for logging or record-keeping applications because it requires fewer disks than a RAID 0
(nonredundant array) group, and provides high availability and fast write access. Or you could use it to store daily updates to a database that resides on a RAID 6 or RAID 5 group or thin pool, and then, during off-peak hours, copy the updates to the database on the RAID 6 or RAID 5 group or thin pool. Unlike a RAID 1/0 group, a RAID 1 group is not expandable to more than two disks.

RAID 0 (nonredundant RAID striping) group
Use a RAID 0 group where the best overall performance is important. A RAID 0 group is useful for applications using short-term data to which you need quick access.

RAID 1/0 group
A RAID 1/0 group provides the best balance of performance and availability. You can use it very effectively for any of the RAID 6 or RAID 5 applications. The performance of a LUN in a RAID 1/0 group is typically better than the performance of a thin LUN in a RAID 1/0 pool.

Individual unit
An individual unit is useful for print spooling, user file exchange areas, or other such applications, where high availability is not important or where the information stored is easily restorable from backup. The performance of an individual unit is slightly less than a standard disk not in a storage system. The slight degradation results from SP overhead.

Hot spare
A hot spare provides no data storage but enhances the availability of each RAID 6 group or thin pool, RAID 5 group or thin pool, RAID 3 group, RAID 1 group, and RAID 1/0 group in a storage system. Use a hot spare where you must regain high availability quickly without human intervention if any disk in such a RAID group or thin pool fails. A hot spare also minimizes the period of degraded performance after a disk failure in a RAID 6 group or thin pool, RAID 5 group or thin pool, or RAID 3 group. Proactive sparing minimizes it even more for a disk failure in any of these RAID groups or thin pools.

Fully automated storage tiering (FAST)


Storage tiering lets you assign different categories of data to different types of storage to reduce total storage costs. You can base data categories on levels of protection needed, performance requirements, frequency of use, costs, and other considerations. The purpose of tiered
storage is to retain the most frequently accessed or most important data on fast, high-performance (most expensive) disks, and move the less frequently accessed and less important data to low-performance (less expensive) disks. Within a storage pool that is not a RAID group, storage from similarly performing disks is grouped together to form a tier of storage. For example, if you have Flash (SSD) disks, Fibre Channel (FC) disks, and Serial Advanced Technology Attachment (SATA) disks in the pool, the Flash disks form a tier, the FC disks form a tier, and the SATA disks form a tier. Based on your input or internally computed usage statistics, portions of LUNs (slices) can be moved between tiers to maintain a service level close to the highest performing storage tier in the pool, even when some portion of the pool consists of lower performing (less expensive) disks. The tiers from highest to lowest are Flash, FC, and SATA. FAST is not supported for RAID groups because all the disks in a RAID group, unlike in a pool, must be of the same type (all Flash, all FC, or all SATA). The lowest performing disks in a RAID group determine a RAID group's overall performance.

Two types of tiered storage are available:

Initial tier placement
Auto-tier placement

Initial tier placement
Initial tier policies are available for storage systems running FLARE 04.30.000.5.xxx or later and do not require the FAST enabler. Initial tier placement requires that you manually specify the storage tier on which you want to initially place the LUN's data, and then either manually migrate the LUN to relocate the data to a different tier or install the FAST enabler, which, once installed, will perform the migration automatically. Table 20 describes the policies for initial tier placement.
Table 20   Initial tier settings for LUNs in a pool (FAST enabler not installed)

Initial tier placement policy              Description
Optimize for Pool Performance (default)    No tier setting specified.
Highest Tier Available                     Sets the preferred tier for initial data placement to the highest tier available.
Lowest Tier Available                      Sets the preferred tier for initial data placement to the lowest tier available.


FAST policies
FAST policies are available for storage systems running FLARE 04.30.000.5.xxx or later with the FAST enabler installed. The FAST feature automatically migrates data between storage tiers to provide the lowest total cost of ownership. Pools are configured with different types of disks (Flash/SSD, FC, and SATA), and the storage-system software continually tracks the usage of the data stored on the LUNs in the pools. Using these LUN statistics, the FAST feature relocates data blocks (slices) of each LUN to the storage tier that is best suited for the data, based on the policies described in Table 21.
Table 21   FAST policies for LUNs in a pool (FAST enabler installed)

FAST policy               Description
Auto Tier                 Moves data to a tier based on the LUN performance statistics (LUN usage rate).
Highest Tier Available    Moves data to the highest possible tier.
Lowest Tier Available     Moves data to the lowest possible tier.
No Data Movement          Moves no data between tiers, and retains the current tier placement.

If you install the FAST enabler on a storage system with an initial tier placement setting specified, the storage-system software bases the FAST policies on the initial tier policies, as shown in Table 22.
Table 22   Interaction between initial tier placement settings and FAST policies

Initial tier placement before FAST enabler installed    Default FAST policy after FAST enabler installed
Optimize for Pool Performance                           Auto-Tier
Highest Available Tier                                  Highest Available Tier
Lowest Available Tier                                   Lowest Available Tier
n/a                                                     No Data Movement (retains the initial tier placement settings)
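The policy behavior summarized in Tables 20 through 22 can be illustrated with a short sketch. The Python snippet below is illustrative only: the tier names come from the text above, but the choose_tier function, its parameters, and the 0.5 usage cutoff are assumptions for this example, not FLARE internals.

    # Illustrative sketch of the FAST policies described above; the function
    # name, parameters, and usage cutoff are assumptions, not FLARE internals.

    TIERS_HIGH_TO_LOW = ["Flash", "FC", "SATA"]   # highest to lowest performing

    def choose_tier(policy, available_tiers, slice_usage_rate=None, current_tier=None):
        """Return the tier a LUN slice would land on under a given policy."""
        # Order the pool's tiers from highest to lowest performing.
        ordered = [t for t in TIERS_HIGH_TO_LOW if t in available_tiers]

        if policy == "Highest Tier Available":
            return ordered[0]
        if policy == "Lowest Tier Available":
            return ordered[-1]
        if policy == "No Data Movement":
            return current_tier                    # retain current placement
        if policy == "Auto Tier":
            # Busier slices are promoted; idle slices drift down (assumed 0.5 cutoff).
            return ordered[0] if slice_usage_rate >= 0.5 else ordered[-1]
        return None   # "Optimize for Pool Performance": no tier preference specified

    # A busy slice in a pool with FC and SATA tiers under the Auto Tier policy:
    print(choose_tier("Auto Tier", ["FC", "SATA"], slice_usage_rate=0.8))   # FC

The point of the sketch is only that the non-default policies pin data to one end of the tier range, while Auto Tier bases the decision on per-slice usage statistics.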


File systems and LUNs


This section will help you plan your storage use: the applications to run, the LUNs that will hold the applications, and the storage group that will belong to each server. It provides background information, shows sample installations with switched and direct storage, and provides worksheets for planning your storage installation. Unless stated otherwise, the term LUN applies to all LUNs (RAID group LUNs and thin LUNs). Major topics are:
• Multiple paths to LUNs, page 75
• DAE requirements, page 76
• Disk IDs and locations in DAEs, page 76
• Disk configuration rules and recommendations, page 76
• Sample shared switched or network configuration, page 78
• Application and LUN worksheet, page 79
• RAID group and thin pool worksheet, page 82
• LUN and storage group worksheet, page 84
• LUN details worksheet, page 86

Multiple paths to LUNs


A shared storage-system configuration includes two or more servers and one or more storage systems. Often shared storage installations include two or more switches or routers. In properly configured shared storage (switched or direct), each server has at least two paths to each LUN in the storage system. The storage-system FLARE Operating Environment (OE) detects all paths and, using optional failover software (such as EMC PowerPath), can automatically switch to the other path, without disrupting applications, if a device such as a host bus adapter or cable fails. With two adapters and two or more ports per SP zoned to them, PowerPath can send I/O to each available path in a user-selectable sequence (multipath I/O) for load sharing and greater throughput. An unshared storage configuration has one server and one storage system. If the server has two adapters it can have two paths to each
LUN. With two adapters, PowerPath performs the same function as with shared systems: it automatically switches to the other path if a device such as a host bus adapter or cable fails.

DAE requirements
The storage system must have a minimum of one DAE with five disks. A maximum of 8 DAEs is supported, for a total of 120 disks. Each back-end bus can support eight DAEs.

Disk IDs and locations in DAEs


Disk IDs have the form bed, where:
b   is the back-end loop (also referred to as a back-end bus) number (0)
e   is the enclosure number, set on the enclosure rear panel (0 for the first DAE)
d   is the disk position in the enclosure (left is 0, right is 14)

Navisphere Manager displays disk IDs as b-e-d, and Navisphere CLI recognizes disk IDs as b_e_d. Figure 25 shows the IDs for disks in DAEs.
[Figure 25 shows the SPE and the Bus 0 DAEs with their disk IDs: 0-0-0 through 0-0-14 in the first DAE (which contains the vault disks), 0-1-0 through 0-1-14, 0-2-0 through 0-2-14, and 0-3-0 through 0-3-14.]

Note: Unisphere or Navisphere Manager displays disk IDs as n-n-n. CLI recognizes disk IDs as n_n_n.

Figure 25   IDs for disks on the single bus in a storage system
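If you keep planning notes in a script or spreadsheet, a small helper that converts between the Manager form (b-e-d) and the CLI form (b_e_d) noted above can be handy. The Python sketch below is a convenience example, not part of any EMC tool.

    # Sketch of the disk ID conventions described above; not an EMC utility.

    def disk_id(bus, enclosure, disk, cli=False):
        """Format a disk ID in Manager (b-e-d) or CLI (b_e_d) style."""
        sep = "_" if cli else "-"
        return f"{bus}{sep}{enclosure}{sep}{disk}"

    def parse_disk_id(text):
        """Parse '0-1-3' or '0_1_3' into (bus, enclosure, disk)."""
        bus, enclosure, disk = text.replace("_", "-").split("-")
        return int(bus), int(enclosure), int(disk)

    print(disk_id(0, 1, 3))             # 0-1-3: fourth disk in enclosure 1 on bus 0
    print(disk_id(0, 1, 3, cli=True))   # 0_1_3
    print(parse_disk_id("0-0-14"))      # (0, 0, 14): rightmost disk of the first DAE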

Disk configuration rules and recommendations


The following rules and recommendations apply to the storage system:


• You cannot use a Flash (SSD) disk as:
  - A vault disk (disks 0-0-0 through 0-0-4, that is, enclosure 0, bus 0, disks 0-4)
  - A hot spare for any disk except another Flash disk
  For more information on Flash disk usage, refer to the Best Practices documentation on the Powerlink website (http://Powerlink.EMC.com).
• You cannot use disks 0-0-0 through 0-0-4 (enclosure 0, bus 0, disks 0-4) as hot spares.
• Do not use:
  - A SATA disk as a hot spare for a Fibre-Channel-based LUN
  - A Fibre Channel disk as a hot spare for a SATA-based LUN
• The hardware reserves about 62 GB on each of disks 0-0-0 through 0-0-4 (the vault disks) for the cache vault and internal tables. To improve SP performance and conserve disk space, avoid binding other disks into a RAID group that includes any vault disk. Any disk you include in a RAID group with a vault disk is bound to match the lower unreserved capacity, resulting in lost storage of several gigabytes per disk. The extra space on the vault disks is a good place for an archive.
• To fully use disk space, all disks in a RAID group should have the same capacity, because all disks in a group are bound to match the smallest-capacity disk. The first five drives (0-0-0 through 0-0-4) should always be the same size.
• If a storage system uses both SATA and Fibre Channel disks, do not mix SATA and Fibre Channel disks within a DAE.
• If a storage system uses disks of different capacities and/or speeds (for example, 300 GB and 400 GB, or 10K and 15K rpm) within a DAE, we recommend that you place them in a logical order to avoid mixing disks with different speeds in the same RAID group. One possible order is the following: place disks with the highest capacity in the first (leftmost) slots, followed by disks with lower capacities; within any specific capacity, place disks with the highest speed first, followed by disks with lower speeds. If possible, do not mix disks with different speeds in a RAID group.
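To make the rules above concrete, the following Python sketch runs a few of them as pre-checks against a planned RAID group. The disk-record layout, the check_raid_group function, and the warning wording are assumptions for this example; only the vault-disk IDs (0-0-0 through 0-0-4) come from the text above.

    # Illustrative pre-checks for a planned RAID group, based on the rules above.
    # Disk records are assumed to look like:
    #   {"id": (0, 0, 5), "type": "FC", "capacity_gb": 300, "rpm": 15000}

    VAULT_DISKS = {(0, 0, d) for d in range(5)}    # disks 0-0-0 through 0-0-4

    def check_raid_group(disks):
        warnings = []
        if any(d["id"] in VAULT_DISKS for d in disks):
            warnings.append("Group includes vault disks; other members lose usable space.")
        if len({d["capacity_gb"] for d in disks}) > 1:
            warnings.append("Mixed capacities; all disks bind to the smallest capacity.")
        if len({d["rpm"] for d in disks}) > 1:
            warnings.append("Mixed speeds; avoid mixing rpm within a RAID group.")
        if len({d["type"] for d in disks}) > 1:
            warnings.append("Mixed disk types (for example, FC and SATA) in one group.")
        return warnings

    group = [
        {"id": (0, 0, 4), "type": "FC", "capacity_gb": 300, "rpm": 15000},   # vault disk
        {"id": (0, 0, 5), "type": "FC", "capacity_gb": 400, "rpm": 15000},
    ]
    for warning in check_raid_group(group):
        print("Warning:", warning)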


Sample shared switched or network configuration


Figure 26 shows a sample shared storage system connected to three servers: two servers in a cluster and one server running a database management program. Note that each server has a completely independent connection to SP A and SP B. The storage system in the figure is a single-cabinet storage system. You can also configure a storage system with multiple cabinets.


[Figure 26 shows a File Server (operating system A) and a Mail Server (operating system B) in a highly available cluster, plus a Database Server (operating system B), each connected through two switch fabrics or networks to SP A and SP B of the storage system. The Cluster Storage Group holds RAID 5 LUNs for the file server's applications, specs, and user directories and for the mail server's applications, specs, and ISP mail and user data; the Database Server Storage Group holds RAID 5 LUNs for database users and three databases.]

Figure 26   Sample shared switched storage configuration

Application and LUN worksheet


Use the Application and LUN worksheet in Table 23 to list the applications you will run, and the RAID type and size of the LUN that will hold them. For each application, write the application name, file system (if any), RAID type, LUN ID (ascending integers, starting with 0), disk space required, and finally the name of the servers and operating systems that will use the LUN.
Table 23   Application worksheet

Columns: Application | File system, partition, or drive | LUN or thin LUN RAID type | LUN or thin LUN ID | Disk space required (GB) | Server hostname and operating system

Application
Enter the application name or type.

File system, partition, or drive
Enter the planned file system, partition, or drive name.

LUN RAID type
Enter the RAID type for the RAID group or thin pool for this file system, partition, or drive. You can create one or more LUNs on the RAID group or one or more thin LUNs on the thin pool.

LUN ID
Enter the number for the LUN ID. The LUN ID is assigned when you create the LUN. By default, the ID of the first LUN bound is 0, the second 1, and so on. Each LUN ID must be unique within the storage system, regardless of its storage group, RAID group, or thin pool. The maximum number of LUNs supported on one host bus adapter depends on the operating system.


Disk space required (GB)
Enter the largest amount of disk space this application will need, plus a factor for growth. If you find in the future that you need more space for this application, you can expand the capacity of the LUN. For more information, refer to the Navisphere Manager online help, which is available on the Powerlink website (http://Powerlink.EMC.com).

Server hostname and operating system
Enter the server hostname (or, if you do not know the name, a short description that identifies the server) and the operating system name, if you know it.


RAID group and thin pool worksheet


Use the worksheet in Table 24 to select the disks that will make up the RAID groups and thin pools that the storage system will use. Complete as many of the RAID group and thin pool sections as needed for the storage system.
Table 24   RAID group and thin pool worksheet

Storage-system number or name: ________________

For each RAID group or thin pool, complete one entry:

  RAID group  /  Thin pool (check one)
  RAID group ID or thin pool name: ________________
  RAID type: ________________
  Disk selection:  Automatic   Manual
    Use power saving eligible disks (automatic selection only)
  Thin pool space alert threshold:  Default   Threshold value (manual selection only): ______

(Repeat this entry block for as many RAID groups and thin pools as the storage system needs.)

Storage-system number or name
Enter the number or name that identifies the storage system.

RAID group or thin pool
Select the appropriate box.

RAID group ID or thin pool name
Enter either the number to use for the RAID group ID or the name to use for the thin pool. When you create a RAID group, you can either assign an ID to it or have one assigned automatically. If the storage system assigns the RAID group IDs automatically, the ID of the first RAID group is 0, the second 1, and so on. Each RAID group ID must be unique within the storage system. When you create a thin pool, you must assign a name to it, even though an ID is assigned to it automatically. Each thin pool name must be unique within the storage system. The ID of the first thin pool is always 0, the second 1, and so on.

RAID type
Enter the RAID type for the RAID group or thin pool.

Disk selection
Select either Automatic if you want Navisphere Manager to select the disks for the RAID group or thin pool, or Manual if you want to select the disks yourself. If you checked the Manual box, enter the IDs of the disks that will make up the RAID group or thin pool. The capacity of the RAID group or thin pool is the result of the capacity and number of the disks selected, less the overhead of the RAID type selected.

If you selected automatic disk selection for a RAID group, you can choose to use power saving eligible disks for the RAID group and to assign power saving settings to these disks, so that the disks transition to a low power state when the following conditions are met:
• The storage system is running FLARE 04.29 or later.
• Power saving is enabled for both the RAID group containing the disks and the storage system.
• All disks in the RAID group support disk power savings and have been idle for at least 30 minutes.
• No LUNs in the RAID group are participating in replication and/or data mobility software (SnapView, MirrorView/A, MirrorView/S, SAN Copy) sessions.
• The RAID group does not include metaLUNs.

For information on the currently available disks that support power savings, refer to the EMC CX4 Series Storage Systems Disk and FLARE OE Matrix (P/N 300-007-437) on the EMC Powerlink website. To use power saving disks in the RAID group, select the Use power saving eligible disks box.

Thin pool space alert threshold
By default, the storage system issues a warning alert when 70% of the thin pool's space has been consumed and a critical alert when 85% of the space has been consumed. As thin LUNs continue consuming the thin pool's space, both alerts continue to report the actual percentage of consumed space. However, you can set the threshold for the warning alert. If you selected manual disk selection, enter either Default (70%) or the threshold value that you want to trigger the warning alert that the thin pool space is filling up. We recommend that you set the value somewhere between 50% and 75%.
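As a quick illustration of how these thresholds behave, the Python sketch below compares consumed thin pool space against the warning and critical levels. The function and message strings are invented for this example; the 70% and 85% defaults come from the text above.

    # Sketch of the thin pool space alerts described above; thresholds in percent.

    def pool_space_alert(consumed_gb, pool_capacity_gb, warning_pct=70, critical_pct=85):
        """Return the alert level for a thin pool, given consumed and total capacity."""
        pct = 100.0 * consumed_gb / pool_capacity_gb
        if pct >= critical_pct:
            return f"CRITICAL: {pct:.0f}% of thin pool space consumed"
        if pct >= warning_pct:
            return f"WARNING: {pct:.0f}% of thin pool space consumed"
        return f"OK: {pct:.0f}% of thin pool space consumed"

    # A pool with a custom 60% warning threshold (within the recommended 50-75% range):
    print(pool_space_alert(consumed_gb=650, pool_capacity_gb=1000, warning_pct=60))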

LUN and storage group worksheet


Use the worksheet in Table 25 to plan the LUNs that will make up the storage groups in each storage system. Complete a worksheet for each storage group in your configuration.
Table 25   LUN and storage group worksheet

Storage-system number or name: ________________
Storage group ID or name: ________________
Server hostname: ________________

For each LUN in the storage group, complete one entry:

  LUN ID or name: ________________
  RAID type:  RAID 6   RAID 5   RAID 3   RAID 1   RAID 0   RAID 1/0   Individual disk   Hot spare
  LUN capacity: ________________

(Repeat this LUN entry block for as many LUNs as the storage group needs.)

Storage-system number or name
Enter the number or name that identifies the storage system.

Storage group ID or name
Enter the ID or name that identifies the storage group. The storage system assigns the storage group ID number when you create the storage group. By default, the ID of the first storage group is 0, the second 1, and so on. Each storage group ID must be unique within the storage system.

Server hostname
Enter the name or IP address for the server.

LUN ID or name
Enter either the name for the LUN or the number for the LUN ID. The storage system assigns the LUN ID when you create the LUN. By default, the ID of the first LUN is 0, the second 1, and so on. Each LUN ID must be unique within the storage system.

RAID type
Select the RAID type of the LUN, which is the RAID type of the RAID group or thin pool on which it was created. Only RAID 6 and RAID 5 types are available for thin pools.


LUN capacity
Enter the user capacity for the LUN.

LUN details worksheet


Use the worksheet in Table 26 to plan the individual LUNs. Complete as many blank worksheets as needed for all LUNs in the storage system.
Table 26   LUN details worksheet

Storage-system information
  Storage-system number or name: ________________

SP A information
  IP address or hostname: ________________    Memory (MB): ______

SP B information
  IP address or hostname: ________________    Memory (MB): ______

LUN or thin LUN information (complete one entry for each LUN or thin LUN)
  LUN or thin LUN ID: ______    SP owner:  SP A   SP B    SP back-end bus: ______
  RAID group ID or thin pool name: ________________
  RAID group or thin pool size (GB): ______    LUN or thin LUN size (GB): ______
  Disk IDs: ________________
  RAID type:  RAID 6   RAID 5   RAID 3   RAID 1   RAID 0   RAID 1/0   Individual disk   Hot spare
  SP caching (RAID group LUN only):  Read and write   Write only   Read only   None
  Servers that can access this LUN's or thin LUN's storage group: ________________

  Operating system information
    Device name: ________________
    File system, partition, or drive: ________________

(Repeat the LUN or thin LUN information entry for as many LUNs and thin LUNs as needed.)

Storage-system information
Fill in the storage-system information section of the worksheet using the information that follows.

Storage-system number or name
Enter the number or name that identifies the storage system.

SP A and SP B information
Fill out the SP A and SP B information sections of the worksheet using the information that follows.

IP address or hostname
Enter the IP address or hostname for the SP. The IP address is required for connecting to the SP. You do not need to complete it now, but you will need it when the storage system is installed so that you can set up communication with the SP.

Memory
Enter the amount of memory on the SP.

LUN information
Fill in the LUN information section of the worksheet using the information that follows.

LUN ID
Enter the number for the LUN ID. The LUN ID is a number assigned when you create a LUN on a RAID group or a thin LUN on a thin pool. By default, the ID of the first LUN bound is 0, the second 1, and so on. Each LUN ID must be unique within the storage system, regardless of its storage group or RAID group. The maximum number of LUNs supported on one host bus adapter depends on the operating system.

SP owner
Select the SP that you want to own the LUN. You can let the management program automatically select the SP to balance the workload between SPs; to do so, leave this entry blank.

SP back-end buses
A back-end bus consists of a physical back-end loop on one SP that is paired with its counterpart physical back-end loop on the other SP to create a redundant bus. Each SP supports one physical back-end loop that is paired with its counterpart on the other SP to create a redundant bus (bus 0). The bus designation appears in the disk ID (in the form bed, where b is the back-end bus number, e is the enclosure number, and d is the disk position in the enclosure). For example, 013 indicates the fourth disk on bus 0, in enclosure 1 (numbering 0, 1, 2, 3... from the left) in the storage system. Navisphere Manager displays disk IDs as b-e-d, and Navisphere CLI recognizes disk IDs as b_e_d.

RAID group ID or thin pool name
Enter the number for the RAID group ID or the name for the thin pool. When you create a RAID group, you can either assign an ID to it or have the storage system assign one automatically. If the storage system assigns the RAID group IDs automatically, the ID of the first RAID group is 0, the second 1, and so on. Each RAID group ID must be unique within the storage system. When you create a thin pool, you must assign a name to it, even though the storage system assigns an ID to it automatically. Each thin pool name must be unique within the storage system. The ID of the first thin pool is always 0, the second 1, and so on.

RAID group or thin pool size
Enter the user-available capacity in gigabytes (GB) of the whole RAID group or thin pool. For a RAID group, the user capacity is assigned to physical storage on the disks in the group when you create the group. For a thin LUN, the user capacity is assigned to physical storage on a capacity-on-demand basis from a shared thin pool of disks. The storage system monitors and adds storage capacity, as required, to each thin pool until its physical capacity is reached. You can determine the RAID group capacity using Table 27.

Table 27   RAID group capacity

RAID group                   Disk capacity (GB)
RAID 6 group                 disk-size x (number-of-disks - 2)
RAID 5 or RAID 3 group       disk-size x (number-of-disks - 1)
RAID 1/0 or RAID 1 group     (disk-size x number-of-disks) / 2
RAID 0 group                 disk-size x number-of-disks
Individual unit              disk-size
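The Table 27 arithmetic is easy to script when you are sizing several groups at once. The Python sketch below simply encodes the formulas in the table; the function and parameter names are ours, not part of any EMC tool.

    # Usable capacity formulas from Table 27. disk_size_gb is the usable size of
    # one disk; number_of_disks is the count of equal-size disks in the group.

    def raid_group_capacity(raid_type, disk_size_gb, number_of_disks):
        if raid_type == "RAID 6":
            return disk_size_gb * (number_of_disks - 2)
        if raid_type in ("RAID 5", "RAID 3"):
            return disk_size_gb * (number_of_disks - 1)
        if raid_type in ("RAID 1/0", "RAID 1"):
            return (disk_size_gb * number_of_disks) / 2
        if raid_type == "RAID 0":
            return disk_size_gb * number_of_disks
        if raid_type == "Individual unit":
            return disk_size_gb
        raise ValueError(f"Unknown RAID type: {raid_type}")

    # A 5-disk RAID 5 group of 400 GB disks yields 1600 GB of user capacity.
    print(raid_group_capacity("RAID 5", 400, 5))

You can compare the result with the total of the planned LUN sizes for the group, since that total cannot exceed the group's user capacity.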

The 1 TB SATA disks operate on a 4 Gb/s back-end bus like the 4 Gb FC disks, but have a 3 Gb/s bandwidth. Since they have a Fibre Channel interface to the back-end loop, these disks are sometimes referred to as Fibre Channel disks. The currently available disks and their usable disk space are listed in the EMC CX4 Series Storage Systems Disk and FLARE OE Matrix (P/N 300-007-437) on the EMC Powerlink website. The vault disks must all have the same capacity and the same speed. The 1 TB, 5.4K rpm SATA disks are available only in a DAE that is fully populated with these disks. Do not mix 1 TB, 5.4K rpm SATA disks with 1 TB, 7.2K rpm SATA disks in the same DAE, and do not replace a 1 TB, 5.4K rpm SATA disk with a 1 TB, 7.2K rpm SATA disk or a 1 TB, 7.2K rpm SATA disk with a 1 TB, 5.4K rpm SATA disk.

LUN size
Enter the user-available capacity in gigabytes (GB) of the LUN. The total user capacity of all the LUNs on a RAID group cannot exceed the total user capacity of the RAID group. You can make the LUN size the same as or smaller than the user capacity of the RAID group. You might make a LUN smaller than the RAID group size if you want a RAID 5 group with a large capacity and you want to place many smaller-capacity LUNs on it, for example, to specify a LUN for each user. You can make the thin LUN size greater than the user capacity of the thin pool because thin provisioning assigns storage to the server on a capacity-on-demand basis from a shared thin pool. The storage system monitors and adds storage capacity, as required, to each thin pool. If you want multiple LUNs per RAID group or multiple thin LUNs per thin pool, use a RAID group/LUN series or thin pool/thin LUN series of entries for each LUN or thin LUN. The LUNs in a RAID group or thin pool share the same total performance capability of the disk drives, so plan carefully.

Disk IDs
Enter the IDs of all disks that will make up the LUN or hot spare. These are the same disk IDs you specified on the previous worksheet. For example, for a RAID 5 group or thin pool in a DAE on bus 0 (disks 10 through 14), enter 0010, 0011, 0012, 0013, and 0014.

RAID type
Copy the RAID type from the previous worksheet, for example, RAID 5 or hot spare. For a hot spare (strictly speaking, not a LUN at all), skip the rest of this LUN entry and continue to the next LUN entry (if any).

SP caching
Select the type of SP caching you want for this LUN: read and write, write, read, or none.

FAST (tiering) policy (pool LUN only)
If the storage system has two or more types of disks (FC, SATA, Flash) and the FAST enabler installed, select the policy for placing data on this storage as it is written: Auto-Tier, Highest Available Tier, Lowest Available Tier, or No Data Movement.

Servers that can access this LUN's storage group
For switched shared storage or shared or clustered direct storage, enter the name of each server (copied from the LUN and storage group worksheet). For unshared direct storage, this entry does not apply.

Operating system information
Fill out the operating system information section of the worksheet using the information that follows.

Device name
Enter the operating system device name, if this is important and if you know it. Depending on your operating system, you may not be able to complete this field now.

File system, partition, or drive
Write the name of the file system, partition, or drive letter for this LUN. This is the same name you wrote on the application worksheet. On the following line, write any pertinent notes, for example, the file system mount-point or graft-point directory pathname (from the root directory). If any of this storage system's LUNs will be shared with another server, and the other server is the primary owner of this LUN, write secondary. (As mentioned earlier, if the storage system will be used by two servers, complete one of these worksheets for each server.)


Copyright 2010 EMC Corporation. All Rights Reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date regulatory document for your product line, go to the Technical Documentation and Advisories section on EMC Powerlink. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All other trademarks mentioned herein are the property of their respective owners.
