
Front cover

Implementing the IBM System Storage SAN Volume Controller V6.1


Install, use, and troubleshoot the SAN Volume Controller
Become familiar with the exciting new GUI
Learn how to use the Easy Tier function

Jon Tate
Angelo Bernasconi
Alexandre Chabrol
Peter Crowhurst
Frank Enders
Ian MacQuarrie

ibm.com/redbooks

International Technical Support Organization
Implementing the IBM System Storage SAN Volume Controller V6.1
May 2011

SG24-7933-00

Note: Before using this information and the product it supports, read the information in "Notices" on page xvii.

First Edition (May 2011)
This edition applies to Version 6, Release 1, Modification 0 of the IBM System Storage SAN Volume Controller.

© Copyright International Business Machines Corporation 2011. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents
Notices
Trademarks
Preface
The team who wrote this book
Now you can become a published author, too!
Comments welcome
Stay connected to IBM Redbooks

Chapter 1. Introduction to storage virtualization
1.1 Storage virtualization terminology
1.2 User requirements driving storage virtualization
1.2.1 Benefits of using the SVC
1.3 What is new in SVC V6.1.0
1.4 Summary

Chapter 2. IBM System Storage SAN Volume Controller
2.1 Brief history of the SAN Volume Controller
2.2 SVC architectural overview
2.2.1 SAN Volume Controller topology
2.3 SVC terminology
2.4 SAN Volume Controller components
2.4.1 Nodes
2.4.2 I/O Groups
2.4.3 Cluster
2.4.4 MDisks
2.4.5 Quorum disk
2.4.6 Disk tier
2.4.7 Storage pool
2.4.8 Volumes
2.4.9 Easy Tier performance function
2.4.10 Hosts
2.4.11 Maximum supported configurations
2.5 Volume overview
2.5.1 Image mode volumes
2.5.2 Managed mode volumes
2.5.3 Cache mode and cache-disabled volumes
2.5.4 Mirrored volumes
2.5.5 Thin-provisioned volumes
2.5.6 Volume I/O governing
2.6 iSCSI overview
2.6.1 Use of IP addresses and Ethernet ports
2.6.2 iSCSI volume discovery
2.6.3 iSCSI authentication
2.6.4 iSCSI multipathing
2.7 Advanced Copy Services overview
2.7.1 Synchronous/Asynchronous remote copy
2.7.2 FlashCopy
2.8 SVC cluster overview

2.8.1 Quorum disks
2.8.2 Split I/O groups or split cluster
2.8.3 Cache
2.8.4 Cluster management
2.8.5 IBM System Storage Productivity Center
2.8.6 User authentication
2.8.7 SVC roles and user groups
2.8.8 SVC local authentication
2.8.9 SVC remote authentication and single sign-on
2.9 SVC hardware overview
2.9.1 Fibre Channel interfaces
2.9.2 LAN interfaces
2.10 Solid-state drives
2.10.1 Storage bottleneck problem
2.10.2 Solid-state drive solution
2.10.3 Solid-state drive market
2.10.4 Solid-state drives and SVC V6.1
2.11 Easy Tier
2.11.1 Evaluation mode
2.11.2 Automatic data placement mode
2.12 What is new with SVC 6.1
2.12.1 SVC 6.1 supported hardware list, device driver, and firmware levels
2.12.2 SVC 6.1.0 new features
2.13 Useful SVC web links

Chapter 3. Planning and configuration
3.1 General planning rules
3.2 Physical planning
3.2.1 Preparing your uninterruptible power supply unit environment
3.2.2 Physical rules
3.2.3 Cable connections
3.3 Logical planning
3.3.1 Management IP addressing plan
3.3.2 SAN zoning and SAN connections
3.3.3 iSCSI IP addressing plan
3.3.4 Back-end storage subsystem configuration
3.3.5 SVC cluster configuration
3.3.6 Split-cluster configuration
3.3.7 Storage Pool configuration
3.3.8 Virtual disk configuration
3.3.9 Host mapping (LUN masking)
3.3.10 Advanced Copy Services
3.3.11 SAN boot support
3.3.12 Data migration from a non-virtualized storage subsystem
3.3.13 SVC configuration backup procedure
3.4 Performance considerations
3.4.1 SAN
3.4.2 Disk subsystems
3.4.3 SVC
3.4.4 Performance monitoring


Chapter 4. SAN Volume Controller initial configuration
4.1 Managing the cluster


4.1.1 TCP/IP requirements for SAN Volume Controller
4.2 System Storage Productivity Center overview
4.2.1 IBM System Storage Productivity Center hardware
4.2.2 SVC installation planning information for System Storage Productivity Center
4.3 Setting up the SVC cluster
4.3.1 Introducing the service panels
4.3.2 Prerequisites
4.3.3 Initiating cluster creation from the front panel
4.4 Configuring the GUI
4.4.1 Completing the Create Cluster Wizard
4.4.2 Changing the default superuser password
4.4.3 Configuring the Service IP Addresses
4.4.4 Postrequisites
4.5 Secure Shell overview
4.5.1 Generating public and private SSH key pairs using PuTTY
4.5.2 Uploading the SSH public key to the SVC cluster
4.5.3 Configuring the PuTTY session for the CLI
4.5.4 Starting the PuTTY CLI session
4.5.5 Configuring SSH for AIX clients
4.6 Using IPv6
4.6.1 Migrating a cluster from IPv4 to IPv6
4.6.2 Migrating a cluster from IPv6 to IPv4

Chapter 5. Host configuration
5.1 Host attachment overview for IBM System Storage SAN Volume Controller
5.2 SVC setup
5.2.1 Fibre Channel and SAN setup overview
5.2.2 Port mask
5.3 iSCSI
5.3.1 Initiators and targets
5.3.2 Nodes
5.3.3 IQN
5.3.4 Setting up the host server
5.3.5 Volume discovery
5.3.6 Authentication
5.4 AIX-specific information
5.4.1 Configuring the AIX host
5.4.2 Operating system versions and maintenance levels
5.4.3 HBAs for IBM System p hosts
5.4.4 Configuring fast fail and dynamic tracking
5.4.5 Installing the 2145 host attachment support package
5.4.6 Subsystem Device Driver Path Control Module
5.4.7 Configuring assigned volume using SDDPCM
5.4.8 Using SDDPCM
5.4.9 Creating and preparing volumes for use with AIX V6.1 and SDDPCM
5.4.10 Expanding an AIX volume
5.4.11 Running SVC commands from an AIX host system
5.5 Windows-specific information
5.5.1 Configuring Windows Server 2000, 2003, 2008 hosts
5.5.2 Configuring Windows
5.5.3 Hardware lists, device driver, HBAs, and firmware levels
5.5.4 Host adapter installation and configuration
5.5.5 Changing the disk timeout on Microsoft Windows Server


5.5.6 Installing the SDD driver on Windows
5.5.7 Installing the SDDDSM driver on Windows
5.6 Discovering assigned volumes in Windows Server 2000 and Windows Server 2003
5.6.1 Extending a Windows Server 2000 or Windows Server 2003 volume
5.7 Example configuration - attaching an SVC to Windows Server 2008 host
5.7.1 Installing SDDDSM on a Windows Server 2008 host
5.7.2 Installing SDDDSM
5.7.3 Attaching SVC volumes to Windows Server 2008
5.7.4 Extending a Windows Server 2008 volume
5.7.5 Removing a disk on Windows
5.8 Using the SVC CLI from a Windows host
5.9 Microsoft Volume Shadow Copy
5.9.1 Installation overview
5.9.2 System requirements for the IBM System Storage hardware provider
5.9.3 Installing the IBM System Storage hardware provider
5.9.4 Verifying the installation
5.9.5 Creating the free and reserved pools of volumes
5.9.6 Changing the configuration parameters
5.10 Specific Linux (on Intel) information
5.10.1 Configuring the Linux host
5.10.2 Configuration information
5.10.3 Disabling automatic Linux system updates
5.10.4 Setting queue depth with QLogic HBAs
5.10.5 Multipathing in Linux
5.10.6 Creating and preparing the SDD volumes for use
5.10.7 Using the operating system MPIO
5.10.8 Creating and preparing MPIO volumes for use
5.11 VMware configuration information
5.11.1 Configuring VMware hosts
5.11.2 Operating system versions and maintenance levels
5.11.3 HBAs for hosts running VMware
5.11.4 VMware storage and zoning guidance
5.11.5 Setting the HBA timeout for failover in VMware
5.11.6 Multipathing in ESX
5.11.7 Attaching VMware to volumes
5.11.8 Volume naming in VMware
5.11.9 Setting the Microsoft guest operating system timeout
5.11.10 Extending a VMFS volume
5.11.11 Removing a datastore from an ESX host
5.12 Sun Solaris support information
5.12.1 Operating system versions and maintenance levels
5.12.2 SDD dynamic pathing
5.13 Hewlett-Packard UNIX configuration information
5.13.1 Operating system versions and maintenance levels
5.13.2 Multipath solutions supported
5.13.3 Coexistence of SDD and PV Links
5.13.4 Using an SVC volume as a cluster lock disk
5.13.5 Support for HP-UX with greater than eight LUNs
5.14 Using SDDDSM, SDDPCM, and SDD web interface
5.15 Calculating the queue depth
5.16 Further sources of information
5.16.1 Publications containing SVC storage subsystem attachment guidelines




Chapter 6. Data migration
6.1 Migration overview
6.2 Migration operations
6.2.1 Migrating multiple extents (within a storage pool)
6.2.2 Migrating extents off an MDisk that is being deleted
6.2.3 Migrating a volume between storage pools
6.2.4 Migrating the volume to image mode
6.2.5 Migrating a volume between I/O Groups
6.2.6 Monitoring the migration progress
6.3 Functional overview of migration
6.3.1 Parallelism
6.3.2 Error handling
6.3.3 Migration algorithm
6.4 Migrating data from an image mode volume
6.4.1 Image mode volume migration concept
6.4.2 Migration tips
6.5 Data migration for Windows using the SVC GUI
6.5.1 Windows Server 2008 host system connected directly to the DS4700
6.5.2 Adding the SVC between the host system and the DS4700
6.5.3 Importing the migrated disks into an online Windows Server 2008 host
6.5.4 Adding the SVC between the host and DS4700 using the CLI
6.5.5 Migrating a volume from managed mode to image mode
6.5.6 Migrating the volume from image mode to image mode
6.5.7 Removing image mode data from the SVC
6.5.8 Map the free disks onto the Windows Server 2008
6.6 Migrating Linux SAN disks to SVC disks
6.6.1 Connecting the SVC to your SAN fabric
6.6.2 Preparing your SVC to virtualize disks
6.6.3 Moving the LUNs to the SVC
6.6.4 Migrating the image mode volumes to managed MDisks
6.6.5 Preparing to migrate from the SVC
6.6.6 Migrating the volumes to image mode volumes
6.6.7 Removing the LUNs from the SVC
6.7 Migrating ESX SAN disks to SVC disks
6.7.1 Connecting the SVC to your SAN fabric
6.7.2 Preparing your SVC to virtualize disks
6.7.3 Moving the LUNs to the SVC
6.7.4 Migrating the image mode volumes
6.7.5 Preparing to migrate from the SVC
6.7.6 Migrating the managed volumes to image mode volumes
6.7.7 Removing the LUNs from the SVC
6.8 Migrating AIX SAN disks to SVC volumes
6.8.1 Connecting the SVC to your SAN fabric
6.8.2 Preparing your SVC to virtualize disks
6.8.3 Moving the LUNs to the SVC
6.8.4 Migrating image mode volumes to volumes
6.8.5 Preparing to migrate from the SVC
6.8.6 Migrating the managed volumes
6.8.7 Removing the LUNs from the SVC
6.9 Using SVC for storage migration
6.10 Using volume mirroring and thin-provisioned volumes together
6.10.1 Zero detect feature
6.10.2 Volume mirroring with thin-provisioned volumes



Chapter 7. Easy Tier
7.1 Overview of Easy Tier
7.2 Easy Tier concepts
7.2.1 SSD arrays and MDisks
7.2.2 Disk tiers
7.2.3 Single tier storage pools
7.2.4 Multiple tier storage pools
7.2.5 Easy Tier process
7.2.6 Easy Tier operating modes
7.2.7 Easy Tier activation
7.3 Easy Tier implementation considerations
7.3.1 Prerequisites
7.3.2 Implementation rules
7.3.3 Limitations
7.4 Measuring and activating Easy Tier
7.4.1 Measuring by using the Storage Advisor Tool
7.5 Using Easy Tier with the SVC CLI
7.5.1 Initial cluster status
7.5.2 Turning on Easy Tier evaluation mode
7.5.3 Creating a multitier storage pool
7.5.4 Setting the disk tier
7.5.5 Checking a volume's Easy Tier mode
7.5.6 Final cluster status
7.6 Using Easy Tier with the SVC GUI
7.6.1 Setting the disk tier on MDisks
7.6.2 Checking Easy Tier status

Chapter 8. Advanced Copy Services
8.1 FlashCopy
8.1.1 Business requirement
8.1.2 Backup
8.1.3 Restore
8.1.4 Moving and migrating data
8.1.5 Application testing
8.1.6 Host considerations to ensure FlashCopy integrity
8.1.7 FlashCopy attributes
8.2 Reverse FlashCopy
8.2.1 FlashCopy and Tivoli Storage Manager
8.3 FlashCopy functional overview
8.4 Implementing SVC FlashCopy
8.4.1 FlashCopy mappings
8.4.2 Multiple Target FlashCopy
8.4.3 Consistency Groups
8.4.4 FlashCopy indirection layer
8.4.5 Grains and the FlashCopy bitmap
8.4.6 Interaction and dependency between Multiple Target FlashCopy mappings
8.4.7 Summary of the FlashCopy indirection layer algorithm
8.4.8 Interaction with the cache
8.4.9 FlashCopy and image mode disks
8.4.10 FlashCopy mapping events
8.4.11 FlashCopy mapping states
8.4.12 Thin-provisioned FlashCopy
8.4.13 Background copy



8.4.14 Synthesis
8.4.15 Serialization of I/O by FlashCopy
8.4.16 Event handling
8.4.17 Asynchronous notifications
8.4.18 Interoperation with Metro Mirror and Global Mirror
8.4.19 FlashCopy presets
8.5 Metro Mirror
8.5.1 Metro Mirror overview
8.5.2 Remote copy techniques
8.5.3 Metro Mirror features
8.5.4 Multiple Cluster Mirroring
8.5.5 Importance of write ordering
8.5.6 Remote copy intercluster communication
8.5.7 Metro Mirror attributes
8.5.8 Methods of synchronization
8.5.9 Metro Mirror states and events
8.5.10 Practical use of Metro Mirror
8.5.11 Valid combinations of FlashCopy, Metro Mirror, and Global Mirror
8.5.12 Metro Mirror configuration limits
8.6 Metro Mirror commands
8.6.1 Listing available SVC cluster partners
8.6.2 Creating the SVC cluster partnership
8.6.3 Creating a Metro Mirror Consistency Group
8.6.4 Creating a Metro Mirror relationship
8.6.5 Changing a Metro Mirror relationship
8.6.6 Changing a Metro Mirror Consistency Group
8.6.7 Starting a Metro Mirror relationship
8.6.8 Stopping a Metro Mirror relationship
8.6.9 Starting a Metro Mirror Consistency Group
8.6.10 Stopping a Metro Mirror Consistency Group
8.6.11 Deleting a Metro Mirror relationship
8.6.12 Deleting a Metro Mirror Consistency Group
8.6.13 Reversing a Metro Mirror relationship
8.6.14 Reversing a Metro Mirror Consistency Group
8.6.15 Background copy
8.7 Global Mirror
8.7.1 Intracluster Global Mirror
8.7.2 Intercluster Global Mirror
8.7.3 Asynchronous remote copy
8.7.4 SVC Global Mirror features
8.7.5 Global Mirror relationship between master and auxiliary volumes
8.7.6 Importance of write ordering
8.7.7 Global Mirror Consistency Groups
8.7.8 Distribution of work among nodes
8.7.9 Background copy performance
8.7.10 Thin-provisioned background copy
8.8 Global Mirror process
8.8.1 Methods of synchronization
8.8.2 Global Mirror states and events
8.8.3 Practical use of Global Mirror
8.8.4 Valid combinations of FlashCopy, Metro Mirror, and Global Mirror
8.8.5 Global Mirror configuration limits
8.9 Global Mirror commands

8.9.1 Listing the available SVC cluster partners
8.9.2 Creating an SVC cluster partnership
8.9.3 Creating a Global Mirror Consistency Group
8.9.4 Creating a Global Mirror relationship
8.9.5 Changing a Global Mirror relationship
8.9.6 Changing a Global Mirror Consistency Group
8.9.7 Starting a Global Mirror relationship
8.9.8 Stopping a Global Mirror relationship
8.9.9 Starting a Global Mirror Consistency Group
8.9.10 Stopping a Global Mirror Consistency Group
8.9.11 Deleting a Global Mirror relationship
8.9.12 Deleting a Global Mirror Consistency Group
8.9.13 Reversing a Global Mirror relationship
8.9.14 Reversing a Global Mirror Consistency Group

Chapter 9. SAN Volume Controller operations using the command-line interface
9.1 Normal operations using CLI
9.1.1 Command syntax and online help
9.2 Working with managed disks and disk controller systems
9.2.1 Viewing disk controller details
9.2.2 Renaming a controller
9.2.3 Discovery status
9.2.4 Discovering MDisks
9.2.5 Viewing MDisk information
9.2.6 Renaming an MDisk
9.2.7 Including an MDisk
9.2.8 Adding MDisks to a storage pool
9.2.9 Showing MDisks in a storage pool
9.2.10 Working with a storage pool
9.2.11 Creating a storage pool
9.2.12 Viewing storage pool information
9.2.13 Renaming a storage pool
9.2.14 Deleting a storage pool
9.2.15 Removing MDisks from a storage pool
9.3 Working with hosts
9.3.1 Creating a Fibre Channel-attached host
9.3.2 Creating an iSCSI-attached host
9.3.3 Modifying a host
9.3.4 Deleting a host
9.3.5 Adding ports to a defined host
9.3.6 Deleting ports
9.4 Working with the Ethernet port for iSCSI
9.5 Working with volumes
9.5.1 Creating a volume
9.5.2 Volume information
9.5.3 Creating a thin-provisioned volume
9.5.4 Creating a volume in image mode
9.5.5 Adding a mirrored volume copy
9.5.6 Splitting a mirrored volume
9.5.7 Modifying a volume
9.5.8 I/O governing
9.5.9 Deleting a volume
9.5.10 Expanding a volume


9.5.11 Assigning a volume to a host
9.5.12 Showing volumes to host mapping
9.5.13 Deleting a volume to host mapping
9.5.14 Migrating a volume
9.5.15 Migrating a fully managed volume to an image mode volume
9.5.16 Shrinking a volume
9.5.17 Showing a volume on an MDisk
9.5.18 Showing which volumes are using a storage pool
9.5.19 Showing which MDisks are used by a specific volume
9.5.20 Showing from which storage pool a volume has its extents
9.5.21 Showing the host to which the volume is mapped
9.5.22 Showing the volume to which the host is mapped
9.5.23 Tracing a volume from a host back to its physical disk
9.6 Scripting under the CLI for SVC task automation
9.6.1 Scripting structure
9.7 SVC advanced operations using the CLI
9.7.1 Command syntax
9.7.2 Organizing on window content
9.8 Managing the cluster using the CLI
9.8.1 Viewing cluster properties
9.8.2 Changing cluster settings
9.8.3 Performing cluster authentication
9.8.4 iSCSI configuration
9.8.5 Modifying IP addresses
9.8.6 Supported IP address formats
9.8.7 Setting the cluster time zone and time
9.8.8 Starting statistics collection
9.8.9 Stopping statistics collection
9.8.10 Determining the status of a copy operation
9.8.11 Shutting down a cluster
9.9 Nodes
9.9.1 Viewing node details
9.9.2 Adding a node
9.9.3 Renaming a node
9.9.4 Deleting a node
9.9.5 Shutting down a node
9.10 I/O Groups
9.10.1 Viewing I/O Group details
9.10.2 Renaming an I/O Group
9.10.3 Adding and removing hostiogrp
9.10.4 Listing I/O Groups
9.11 Managing authentication
9.11.1 Managing users using the CLI
9.11.2 Managing user roles and groups
9.11.3 Changing a user
9.11.4 Audit log command
9.12 Managing Copy Services
9.12.1 FlashCopy operations
9.12.2 Setting up FlashCopy
9.12.3 Creating a FlashCopy Consistency Group
9.12.4 Creating a FlashCopy mapping
9.12.5 Preparing (pre-triggering) the FlashCopy mapping
9.12.6 Preparing (pre-triggering) the FlashCopy Consistency Group

9.12.7 Starting (triggering) FlashCopy mappings
9.12.8 Starting (triggering) FlashCopy Consistency Group
9.12.9 Monitoring the FlashCopy progress
9.12.10 Stopping the FlashCopy mapping
9.12.11 Stopping the FlashCopy Consistency Group
9.12.12 Deleting the FlashCopy mapping
9.12.13 Deleting the FlashCopy Consistency Group
9.12.14 Migrating a volume to a thin-provisioned volume
9.12.15 Reverse FlashCopy
9.12.16 Split-stopping of FlashCopy maps
9.13 Metro Mirror operation
9.13.1 Setting up Metro Mirror
9.13.2 Creating an SVC partnership between ITSO-CLS1 and ITSO-CLS4
9.13.3 Creating a Metro Mirror Consistency Group
9.13.4 Creating the Metro Mirror relationships
9.13.5 Creating a stand-alone Metro Mirror relationship for MM_App_Pri
9.13.6 Starting Metro Mirror
9.13.7 Starting a Metro Mirror Consistency Group
9.13.8 Monitoring the background copy progress
9.13.9 Stopping and restarting Metro Mirror
9.13.10 Stopping a stand-alone Metro Mirror relationship
9.13.11 Stopping a Metro Mirror Consistency Group
9.13.12 Restarting a Metro Mirror relationship in the Idling state
9.13.13 Restarting a Metro Mirror Consistency Group in the Idling state
9.13.14 Changing copy direction for Metro Mirror
9.13.15 Switching copy direction for a Metro Mirror relationship
9.13.16 Switching copy direction for a Metro Mirror Consistency Group
9.13.17 Creating an SVC partnership among many clusters
9.13.18 Star configuration partnership
9.14 Global Mirror operation
9.14.1 Setting up Global Mirror
9.14.2 Creating an SVC partnership between ITSO-CLS1 and ITSO-CLS4
9.14.3 Changing link tolerance and cluster delay simulation
9.14.4 Creating a Global Mirror Consistency Group
9.14.5 Creating Global Mirror relationships
9.14.6 Creating the stand-alone Global Mirror relationship for GM_App_Pri
9.14.7 Starting Global Mirror
9.14.8 Starting a stand-alone Global Mirror relationship
9.14.9 Starting a Global Mirror Consistency Group
9.14.10 Monitoring background copy progress
9.14.11 Stopping and restarting Global Mirror
9.14.12 Stopping a stand-alone Global Mirror relationship
9.14.13 Stopping a Global Mirror Consistency Group
9.14.14 Restarting a Global Mirror relationship in the Idling state
9.14.15 Restarting a Global Mirror Consistency Group in the Idling state
9.14.16 Changing direction for Global Mirror
9.14.17 Switching copy direction for a Global Mirror relationship
9.14.18 Switching copy direction for a Global Mirror Consistency Group
9.15 Service and maintenance
9.15.1 Upgrading software
9.15.2 Running maintenance procedures
9.15.3 Setting up SNMP notification
9.15.4 Set syslog event notification

9.15.5 Configuring error notification using an email server . . . . . . . . . . . . . . . . . . . . . 9.15.6 Analyzing the event log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.15.7 License settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.15.8 Listing dumps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.16 Backing up the SVC cluster configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.16.1 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.17 Restoring the SVC cluster configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.17.1 Deleting configuration backup. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.18 Working with the SVC Quorum MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.18.1 Listing the SVC Quorum MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.18.2 Changing the SVC Quorum Disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.19 Working with the Service Assistant menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.19.1 SVC CLI Service Assistant menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.20 SAN troubleshooting and data collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.21 T3 recovery process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chapter 10. SAN Volume Controller operations using the GUI. . . . . . . . . . . . . . . . . . 10.1 SVC normal operations using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.1.1 Introduction to SVC normal operations using the GUI . . . . . . . . . . . . . . . . . . . 10.1.2 Organizing on window content . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.1.3 Help. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.2 Working with External Disk Controllers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.2.1 Viewing Disk Controller details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.2.2 Renaming a disk controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.2.3 Discovering MDisks from the External panel . . . . . . . . . . . . . . . . . . . . . . . . . . 10.3 Working with Storage Pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.3.1 Viewing Storage Pool information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.3.2 Discovering MDisks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.3.3 Creating Storage Pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.3.4 Renaming a Storage Pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.3.5 Deleting a Storage Pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.3.6 Adding or removing MDisks from a Storage Pool . . . . . . . . . . . . . . . . . . . . . . . 10.3.7 Showing the volumes that are associated with a Storage Pool . . . . . . . . . . . 
. 10.4 Working with managed disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.4.1 MDisk information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.4.2 Renaming an MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.4.3 Discovering MDisks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.4.4 Adding MDisks to a Storage Pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.4.5 Removing MDisks from a Storage Pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.4.6 Including an excluded MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.4.7 Activating EasyTier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.5 Migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.6 Working with hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.6.1 Host information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.6.2 Creating a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.6.3 Renaming a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.6.4 Modifying a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.6.5 Deleting a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.6.6 Adding ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.6.7 Deleting ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.6.8 Creating or modifying the host mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.6.9 Deleting a host mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.6.10 Deleting all host mappings for a given host . . . . . . . . . . . . . . . . . . . . . . . . . .


10.7 Working with volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.7.1 Volume information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.7.2 Creating a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.7.3 Renaming a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.7.4 Modifying a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.7.5 Modifying thin-provisioning volume properties . . . . . . . . . . . . . . . . . . . . . . . . . 10.7.6 Deleting a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.7.7 Creating or modifying the host mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.7.8 Deleting a host mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.7.9 Deleting all host mappings for a given volume . . . . . . . . . . . . . . . . . . . . . . . . . 10.7.10 Shrinking a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.7.11 Expanding a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.7.12 Shrinking the real capacity of a thin-provisioned volume . . . . . . . . . . . . . . . . 10.7.13 Expanding the real capacity of a thin provisioned volume . . . . . . . . . . . . . . . 10.7.14 Migrating a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.7.15 Adding a mirrored copy to an existing volume . . . . . . . . . . . . . . . . . . . . . . . . 10.7.16 Deleting a mirrored copy from a volume mirror. . . . . . . . . . . . . . . . . . . . . . . . 10.7.17 Splitting a volume copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.7.18 Validating volume copies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.7.19 Migrating to a thin-provisioned volume using volume mirroring . . . . . . . . . . . 10.7.20 Creating a volume in image mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.7.21 Migrating a volume to an image mode volume . . . . . . . . . . . . . . . . . . . . . . . . 10.7.22 Creating an image mode mirrored volume . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.8 Copy Services: managing FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.8.1 Creating a FlashCopy Mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.8.2 Creating and starting a snapshot preset with a single click . . . . . . . . . . . . . . . 10.8.3 Creating and starting a clone preset with a single click . . . . . . . . . . . . . . . . . . 10.8.4 Creating and starting a backup preset with a single click . . . . . . . . . . . . . . . . . 10.8.5 Creating a FlashCopy Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.8.6 Creating FlashCopy mappings in a Consistency Group . . . . . . . . . . . . . . . . . . 10.8.7 Show Dependent Mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.8.8 Moving a FlashCopy mapping to a Consistency Group . . . . . . . . . . . . . . . . . . 10.8.9 Removing a FlashCopy mapping from a Consistency Group . . . . . . . . . . . . . . 10.8.10 Modifying a FlashCopy mapping. . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.8.11 Renaming a FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.8.12 Renaming a Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.8.13 Deleting a FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.8.14 Deleting a FlashCopy Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.8.15 Starting FlashCopy mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.8.16 Starting a FlashCopy Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.8.17 Stopping the FlashCopy Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . 10.8.18 Stopping the FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.8.19 Migrating between a fully allocated volume and a Space-Efficient volume. . . 10.8.20 Reversing and splitting a FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . 10.9 Copy Services: managing Remote Copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.9.1 Cluster partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.9.2 Creating the SVC partnership between two remote SVC Clusters . . . . . . . . . . 10.9.3 Creating stand-alone remote copy relationships. . . . . . . . . . . . . . . . . . . . . . . . 10.9.4 Creating a Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.9.5 Renaming a Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.9.6 Renaming a Remote Copy relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.9.7 Moving a stand-alone Remote Copy relationship to a Consistency Group. . . . 10.9.8 Removing stand-alone Remote Copy relationship from a Consistency Group . xiv

10.9.9 Starting a Remote Copy relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.9.10 Starting a Remote Copy Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . 10.9.11 Switching the copy direction for a Remote Copy relationship . . . . . . . . . . . . . 10.9.12 Switching the copy direction for a Consistency Group . . . . . . . . . . . . . . . . . . 10.9.13 Stopping a Remote Copy relationship. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.9.14 Stopping a Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.9.15 Deleting stand-alone Remote Copy relationships . . . . . . . . . . . . . . . . . . . . . . 10.9.16 Deleting a Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.10 Managing the cluster using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.10.1 System Status information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.10.2 View I/O groups and their associated nodes. . . . . . . . . . . . . . . . . . . . . . . . . . 10.10.3 View cluster properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.10.4 Renaming an SVC cluster. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.10.5 Shutting down a cluster. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.10.6 Upgrading software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.11 Managing I/O Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.11.1 View I/O group properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.11.2 Modifying I/O group properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.12 Managing nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.12.1 View node properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.12.2 Renaming a node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.12.3 Adding a node to the cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.12.4 Removing a node from the cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.13 Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.13.1 Recommended Actions panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.13.2 Event Log panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.13.3 Run fix procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.13.4 Support panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.14 User Management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.14.1 Creating a user . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.14.2 Modifying user properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.14.3 Removing a user password. . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . 10.14.4 Removing a user SSH Public Key. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.14.5 Deleting a user . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.14.6 Creating a user group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.14.7 Modifying user group properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.14.8 Deleting a user group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.14.9 Audit log information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.15 Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.15.1 Configuring Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.15.2 Configuring the Service IP addresses. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.15.3 iSCSI configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.15.4 Fibre Channel information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.15.5 Event notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.15.6 Email notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.15.7 SNMP notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.15.8 Using the Advanced panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.15.9 Date and Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.15.10 Licensing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.15.11 Upgrading software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.15.12 Setting GUI Preferences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.16 Upgrading SVC software. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.16.1 Precautions before upgrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

10.16.2 SVC software upgrade test utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.16.3 Upgrade procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.17 Service Assistant with the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.17.1 Placing an SVC node into Service State. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.17.2 Exiting an SVC node from Service State . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.17.3 Rebooting an SVC node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.17.4 Collect Logs page . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.17.5 Manage Cluster page . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.17.6 Recover Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.17.7 Reinstall software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.17.8 Upgrade Manually . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.17.9 Modify WWNN. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.17.10 Change Service IP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.17.11 Configure CLI access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.17.12 Restart Service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Appendix A. Performance data and statistics gathering. . . . . . . . . . . . . . . . . . . . . . . SVC performance overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Performance considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . SVC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Performance monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Collecting performance statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Performance data collection and Tivoli Storage Productivity Center for Disk . . . . . . . .


Appendix B. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 829
Commonly encountered terms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 830
Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 837
IBM Redbooks publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 837
Other publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 837
Online resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 838
How to get IBM Redbooks publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 839
Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 839

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 841


Notices
This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service. IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A. The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you. This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice. Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk. IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you. Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. 
You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.


Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:
AIX 5L AIX BladeCenter DB2 developerWorks DS4000 DS6000 DS8000 FlashCopy GPFS IBM Systems Director Active Energy Manager IBM Power Systems Redbooks Redpaper Redbooks (logo) Solid System i System p System Storage DS System Storage System x Tivoli TotalStorage WebSphere XIV z/OS zSeries

The following terms are trademarks of other companies: Snapshot, and the NetApp logo are trademarks or registered trademarks of NetApp, Inc. in the U.S. and other countries. Java, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Intel Xeon, Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. UNIX is a registered trademark of The Open Group in the United States and other countries. Linux is a trademark of Linus Torvalds in the United States, other countries, or both. Other company, product, or service names may be trademarks or service marks of others.


Preface
This IBM Redbooks publication is a detailed technical guide to the IBM System Storage SAN Volume Controller (SVC) Version 6.1.0. SAN Volume Controller is a virtualization appliance solution which maps virtualized volumes that are visible to hosts and applications to physical volumes on storage devices. Each server within the storage area network (SAN) has its own set of virtual storage addresses that are mapped to physical addresses. If the physical addresses change, the server continues running using the same virtual addresses that it had before. Therefore, volumes or storage can be added or moved while the server is still running. The IBM virtualization technology improves the management of information at the block level in a network, thus enabling applications and servers to share storage devices on a network. This book is intended for readers who need to implement the SVC at a 6.1.0 release level with a minimum of effort.

The team who wrote this book


This book was produced by a team of specialists from around the world working at Brocade Communications Systems, San Jose, and the International Technical Support Organization, San Jose Center. Jon Tate is a Project Manager for IBM System Storage SAN Solutions at the International Technical Support Organization, San Jose Center. Before joining the ITSO in 1999, he worked in the IBM Technical Support Center, providing Level 2 and 3 support for IBM storage products. Jon has 25 years of experience in storage software and management, services, and support, and is both an IBM Certified IT Specialist and an IBM SAN Certified Specialist. He is also the UK Chairman of the Storage Networking Industry Association. Angelo Bernasconi is a Certified ITS Senior Storage and SAN Software Specialist with IBM Italy. He has 24 years of experience in the delivery of maintenance and professional services for IBM Enterprise clients in z/OS and open systems. He holds a degree in Electronics and his areas of expertise include storage hardware, SAN, storage virtualization, deduplication, and disaster recovery solutions. He has written extensively about SAN and virtualization products in IBM Redbooks and Redpaper publications, and he is the Technical Leader of the Italian Open System Storage Professional Services Community. Angelo works for the Italian Solution design Center of Excellence (SdCoE). Alexandre Chabrol is an IT Specialist with IBM France. His areas of expertise include Storage Virtualization solutions, storage hardware, SAN, x86 and BladeCenter servers, Windows, Linux and VMware operating systems. He holds an MS degree in Computer Science from Paris Dauphine, France, and has worked in the IT industry for seven years. He works on customer performance benchmarks in the Products and Solutions Support Center in Montpellier (PSSC), and is part of the EMEA benchmarks center. Alexandre coauthored the IBM Redbooks publication Tuning IBM System x Servers for Performance, SG24-5287-05.


Peter Crowhurst is a Consulting IT Specialist in the Systems and Technology Group, IBM Australia. He has 33 years of experience in the IT industry, including eight years working in a customer organization as an applications and systems programmer and in network planning and design. He joined IBM 25 years ago and has worked mainly in a large systems technical pre-sales support role for zSeries and storage products. Peter has coauthored three IBM Redbooks publications about ESS and SVC products. Frank Enders has worked for the last four years for EMEA IBM System Storage SAN Volume Controller Level 2 support in Mainz, Germany, providing pre-sales and post-sales support. He has been with IBM Germany for 16 years, starting as a disk production technician with IBM Mainz and later working in the magnetic head production area. In 2001, IBM ceased disk production in Mainz and Frank joined ESCC Mainz as a member of the Installation Readiness team for the DS8000, DS6000, and IBM System Storage SAN Volume Controller. During that time he also studied for four years to earn a diploma in Electrical Engineering. Ian MacQuarrie is a Senior Technical Staff Member in the IBM Systems and Technology Group, San Jose, California. He has 26 years of experience in Enterprise Storage Systems, working in a variety of test and support roles. He is currently a member of the STG Field Assist Team (FAST), which supports clients through critical account engagements, availability assessments, and technical advocacy. His areas of expertise include Storage Area Network, Open Systems storage solutions, and performance analysis. Ian has coauthored a previous IBM Redbooks publication about SVC Best Practices and Performance Guidelines.

Figure 1 Jon, Peter, Angelo, Alexandre, Ian, and Frank

We extend our thanks to the following people for their contributions to this project, including the development and PFE teams in Hursley. Matt Smith was also instrumental in moving any issues and ensuring that they maintained a high profile. In particular, we thank the previous authors of this book: Matt Amanat Pall Beck Angelo Bernasconi Steve Cody Sean Crawford Sameer Dhulekar Werner Eggli Katja Gebuhr Deon George

Amarnath Hiriyannappa Thorsten Hoss Juerg Hossli Philippe Jachimczyk Kamalakkannan J Jayaraman Dan Koeck Bent Lerager Craig McKenna Andy McManus Joao Marcos Leite Barry Mellish Suad Musovich Massimo Rosati Fred Scholten Robert Symons Marcus Thordal Xiao Peng Zhao Thanks also to the following people for their contributions to previous editions, and to those who contributed to this edition: Chris Canto Peter Eccles Carlos Fuente Alex Howell Colin Jewell Geoff Lane Andrew Martin Paul Merrison Steve Randle Lucy Harris (nee Raw) Bill Scales Dave Sinclair Matt Smith Steve White Barry Whyte Evelyn Wick IBM Hursley Marc Bruni IBM Houston Larry Chiu Paul Muench IBM Almaden Bill Wiegand IBM Advanced Technical Support Sharon Wang IBM Chicago Chris Saul IBM San Jose


Lisa Dorr IBM Colorado Tina Sampson IBM Tucson Rita Roque IBM Rochester Yan H. Chu IBM San Jose Sangam Racherla IBM ITSO Special thanks to the Brocade staff for their unparalleled support of this residency in terms of equipment and support in many areas: Jim Baldyga Mansi Botadra Yong Choi Silviano Gaona Brian Steffler Marcus Thordal Steven Tong Brocade Communications Systems

Now you can become a published author, too!


Here's an opportunity to spotlight your skills, grow your career, and become a published author, all at the same time! Join an ITSO residency project and help write a book in your area of expertise, while honing your experience using leading-edge technologies. Your efforts will help to increase product acceptance and customer satisfaction, as you expand your network of technical contacts and relationships. Residencies run from two to six weeks in length, and you can participate either in person or as a remote resident working from your home base. Find out more about the residency program, browse the residency index, and apply online at: ibm.com/redbooks/residencies.html

Comments welcome
Your comments are important to us! We want our books to be as helpful as possible. Send us your comments about this book or other IBM Redbooks publications in one of the following ways: Use the online Contact us review Redbooks form found at: ibm.com/redbooks Send your comments in an email to: redbooks@us.ibm.com


Mail your comments to: IBM Corporation, International Technical Support Organization Dept. HYTD Mail Station P099 2455 South Road Poughkeepsie, NY 12601-5400

Stay connected to IBM Redbooks


Find us on Facebook: http://www.facebook.com/IBMRedbooks Follow us on Twitter: http://twitter.com/ibmredbooks Look for us on LinkedIn: http://www.linkedin.com/groups?home=&gid=2130806 Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks weekly newsletter: https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm Stay current on recent Redbooks publications with RSS Feeds: http://www.redbooks.ibm.com/rss.html


Chapter 1. Introduction to storage virtualization


In this chapter we define the concept of storage virtualization, and then present an overview explaining how you can apply virtualization to help address today's challenging storage requirements.


1.1 Storage virtualization terminology


Although storage virtualization is a term that is used extensively throughout the storage industry, it can be applied to a wide range of technologies and underlying capabilities. In reality, most storage devices can technically claim to be virtualized in one form or another. Therefore, we must start by defining the concept of storage virtualization as used in this book. This is how IBM defines storage virtualization: Storage virtualization is a technology that makes one set of resources look and feel like another set of resources, preferably with more desirable characteristics. It is a logical representation of resources that is not constrained by physical limitations:
- It hides part of the complexity.
- It adds or integrates new function with existing services.
- It can be nested or applied to multiple layers of a system.
When discussing storage virtualization, it is important to understand that virtualization can be implemented at various layers within the I/O stack. We have to clearly distinguish between virtualization at the disk layer and virtualization at the file system layer. The focus of this book is virtualization at the disk layer, which is more specifically referred to as block-level virtualization, or the block aggregation layer. A discussion of file system virtualization is beyond the scope of this book. However, if you are interested in file system virtualization, refer to IBM General Parallel File System (GPFS) or IBM Scale Out Network Attached Storage (SONAS), which is based on GPFS. To obtain more information and an overview of GPFS, visit the following website:
http://www-03.ibm.com/systems/software/gpfs/
To obtain more information about SONAS, visit the following website:
http://www-03.ibm.com/systems/storage/network/sonas/
The Storage Networking Industry Association's (SNIA) block aggregation model (Figure 1-1 on page 3) provides a useful overview of the storage domain and its layers. The figure shows the three layers of a storage domain: the file, the block aggregation, and the block subsystem layers. The model splits the block aggregation layer into three sublayers. Block aggregation can be realized within hosts (servers), in the storage network (storage routers and storage controllers), or in storage devices (intelligent disk arrays). The IBM implementation of a block aggregation solution is the IBM System Storage SAN Volume Controller (SVC). The SVC is implemented as a clustered appliance in the storage network layer. Chapter 2, IBM System Storage SAN Volume Controller on page 7 explains the reasons why IBM chose to implement its IBM System Storage SAN Volume Controller in the storage network layer.


Figure 1-1 SNIA block aggregation model

The key concept of virtualization is to decouple the storage from the storage functions required in today's storage area network (SAN) environment.

Decoupling means abstracting the physical location of data from the logical representation of the data. The virtualization engine presents logical entities to the user and internally manages the process of mapping these entities to the actual location of the physical storage.
The actual mapping performed is dependent upon the specific implementation, as is the granularity of the mapping, which can range from a small fraction of a physical disk, up to the full capacity of a physical disk. A single block of information in this environment is identified by its logical unit number (LUN), which is the physical disk, and an offset within that LUN, which is known as a logical block address (LBA). Note that the term physical disk is used in this context to describe a piece of storage that might be carved out of a RAID array in the underlying disk subsystem. In the SVC implementation, the logical entity whose address space is mapped in this way is referred to as a volume, and the physical disks are referred to as managed disks (MDisks). Figure 1-2 on page 4 shows an overview of block-level virtualization.


Figure 1-2 Block-level virtualization overview

The server and application are only aware of the logical entities, and access these entities using a consistent interface that is provided by the virtualization layer. The functionality of a volume that is presented to a server, such as expanding or reducing the size of a volume, mirroring a volume, creating a FlashCopy, thin provisioning, and so on, is implemented in the virtualization layer. It does not rely in any way on the functionality that is provided by the underlying disk subsystem. Data that is stored in a virtualized environment is stored in a location-independent way, which allows a user to move or migrate data between physical locations, referred to as storage pools. We refer to these block-level storage virtualization capabilities as the cornerstones of virtualization. They are the core benefits that a product such as the SVC can provide over traditional directly attached or SAN storage. The SVC provides the following benefits:
- The SVC provides online volume migration while applications are running, which is possibly the greatest single benefit of storage virtualization. This capability allows data to be migrated on and between the underlying storage subsystems without any impact to the servers and applications. In fact, this migration is performed without the servers and applications even being aware that it occurred. A CLI sketch of this capability follows this list.
- The SVC simplifies storage management by providing a single image for multiple controllers and a consistent user interface for provisioning heterogeneous storage.
- The SVC provides enterprise-level copy services functions. Performing the copy services functions within the SVC removes dependencies on the storage subsystems, thereby enabling the source and target copies to be on other storage subsystem types.
- Storage utilization can be increased by pooling storage across the SAN.
- System performance is often improved with SVC as a result of volume striping across multiple arrays or controllers and the additional cache it provides.
The SVC delivers these functions in a homogeneous way on a scalable and highly available platform, over any attached storage, and to any attached server.
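As a simple illustration of the online volume migration mentioned above, the following CLI sketch moves a volume between two storage pools while the host continues to read and write it. The volume and pool names are examples only; the full migration procedures are described in detail in later chapters.
1. Start a background migration of all extents of the volume into the target pool:
   svctask migratevdisk -vdisk Volume_A -mdiskgrp Pool_B
2. Monitor the progress of running migrations:
   svcinfo lsmigrate
When the migration completes, Volume_A is backed entirely by MDisks in Pool_B; at no point is host I/O to the volume interrupted.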

1.2 User requirements driving storage virtualization


In today's environment there is an emphasis on a smarter planet and dynamic infrastructure. Thus, there is a need for a storage environment that is as flexible as the applications and servers that use it. Business demands change quickly. These key client concerns drive storage virtualization:
- Growth in data center costs
- Inability of IT organizations to respond quickly to business demands
- Poor asset utilization
- Poor availability or service levels
- Lack of skilled staff for storage administration
You can see the importance of addressing the complexity of managing storage networks by applying the total cost of ownership (TCO) metric to storage networks. Industry analyses show that storage acquisition costs are only about 20% of the TCO. Most of the remaining costs are related to managing the storage system. How much of the management of multiple systems, with separate interfaces, can be handled as a single entity? In a non-virtualized storage environment, every system is an island that needs to be managed separately.

1.2.1 Benefits of using the SVC


The SVC can reduce the number of separate environments that need to be managed down to a single environment. It provides a single interface for storage management. After the initial configuration of the storage subsystems, all of the day-to-day storage management operations are performed from the SVC. Because SVC provides advanced functions such as mirroring and FlashCopy, there is no need to purchase them again for each new disk subsystem. Today, it is typical that open systems run at significantly less than 50% of the usable capacity provided by the RAID disk subsystems. Measured against the installed raw capacity in the disk subsystems, and depending on the RAID level that is used, utilization is often less than 35%. A block-level virtualization solution, such as the SVC, can allow capacity utilization to increase to approximately 75 to 80%. With SVC, free space does not need to be maintained and managed within each storage subsystem, which further increases capacity utilization.


1.3 What is new in SVC V6.1.0


IBM System Storage SAN Volume Controller (SVC) V6.1.0 has been designed to provide significant new capabilities to assist storage administrators in their responsibilities, thereby enabling them to manage even broader and more complex storage infrastructures and achieve maximum performance out of their storage. V6.1.0 delivers enhancements to SVC in the areas of performance, volume management, system scalability, and user interface. SVC delivers IBM System Storage Easy Tier, which automates data placement throughout the SVC storage pool onto two tiers of storage to intelligently align the system with current workload requirements. Although it allows for manual control, Easy Tier includes the ability to automatically and non-disruptively relocate data (at the extent level) from one tier to the other in either direction to achieve the best storage performance available. Clients benefit from a newly designed user interface that not only delivers many more functional enhancements and greater ease of use, but can be accessed from anywhere on the network through a web browser. Enhancements to the user interface include greater flexibility of views, an increased number of characters allowed for naming objects, display of the underlying CLI commands being executed, and improved user customization. Customers using Tivoli Storage Productivity Center and IBM Systems Director will also have greater integration points and launch in-context capabilities with the new SVC Console. SVC increases the flexibility of the storage it manages by raising the supported managed disk (MDisk) size to 1 PB. A new, larger extent size of 8 GB is also introduced, which increases the maximum managed storage per SVC cluster up to 32 PB.
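To illustrate the new extent size, the following CLI sketch creates a storage pool that uses 8 GB extents. The pool and MDisk names are examples only, and the -ext value is specified in MB:
   svctask mkmdiskgrp -name Pool_LargeExtent -ext 8192 -mdisk mdisk0
   svcinfo lsmdiskgrp Pool_LargeExtent
Because a cluster manages a bounded number of extents (approximately four million), the larger the chosen extent size, the more capacity the cluster can address; with 8 GB extents this corresponds to the 32 PB maximum mentioned above, at the cost of a coarser allocation and migration granularity.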

1.4 Summary
Storage virtualization is no longer merely a concept or an unproven technology. All major storage vendors offer storage virtualization products. Making use of storage virtualization as the foundation for a flexible and reliable storage solution helps enterprises to better align business and IT by optimizing the storage infrastructure and storage management to meet business demands. The IBM System Storage SAN Volume Controller is a mature, sixth-generation virtualization solution that uses open standards and is consistent with the Storage Networking Industry Association (SNIA) storage model. The SVC is an appliance-based in-band block virtualization process, in which intelligence, including advanced storage functions, is migrated from individual storage devices to the storage network. The IBM System Storage SAN Volume Controller can improve the utilization of your storage resources, simplify your storage management, and improve the availability of your applications.


Chapter 2. IBM System Storage SAN Volume Controller


In this chapter we explain the major concepts underlying the IBM System Storage SAN Volume Controller (SVC). We begin by presenting a brief history of the SVC product, and then provide you with an architectural overview. After defining SVC terminology, we describe software and hardware concepts and the additional functionalities that will be available with the newest release. Finally, we provide links to websites where you can find further information about SVC.


2.1 Brief history of the SAN Volume Controller


The IBM implementation of block-level storage virtualization, the IBM System Storage SAN Volume Controller (SVC), is based on an IBM project that was initiated in the second half of 1999 at the IBM Almaden Research Center. The project was called COMmodity PArts Storage System, or COMPASS. One goal of this project was to create a system almost exclusively composed of off-the-shelf standard parts. As with any enterprise-level storage control system, it had to deliver a level of performance and availability comparable to the highly optimized storage controllers of previous generations. The idea of building a storage control system on a scalable cluster of lower performance servers, instead of a monolithic two-node architecture, remains compelling. COMPASS also had to address a major challenge for the heterogeneous open systems environment, namely to reduce the complexity of managing storage on block devices. The first documentation covering this project was released to the public in 2003 in the form of the IBM Systems Journal, Vol. 42, No. 2, 2003, The software architecture of a SAN storage control system, by J. S. Glider, C. F. Fuente, and W. J. Scales, which you can read at this website: http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5386853 The results of the COMPASS project defined the fundamentals for the product architecture. The announcement of the first release of the IBM System Storage SAN Volume Controller took place in July 2003. Each of the following releases brought new and more powerful hardware nodes, which approximately doubled the I/O performance and throughput of its predecessors, provided new functionality, and offered additional interoperability with new elements in host environments, disk subsystems, and the storage area network (SAN). The most recently released hardware node, the 2145-CF8, is based on IBM System x server technology with an Intel Xeon 5500 2.4 GHz quad-core processor (Nehalem), 24 GB of cache, four 8 Gbps Fibre Channel ports, and two 1 Gbps Ethernet ports. It is capable of supporting up to four internal Solid State Drives (SSDs). To date, IBM has shipped 21,500 SVC engines running in more than 6,900 SVC systems worldwide. With the new V6.1.0 release of SVC introduced in this book, there is a new concept of tiered disks and storage pools capable of containing multiple disk tiers. A major new function called IBM System Storage Easy Tier provides sub-LUN hot spot management with automatic data migration between disk tiers, and the ability to create disk RAID arrays on internally connected storage. Additionally, the SVC user interface (GUI) has been completely redesigned to simplify administrative tasks and improve the overall user experience.

2.2 SVC architectural overview


The IBM System Storage SAN Volume Controller is a SAN block aggregation virtualization appliance that is designed for attachment to a variety of host computer systems.


There are two major approaches in use today for implementing block-level aggregation and virtualization:
- Symmetric: in-band appliance. The device is a SAN appliance that sits in the data path, and all I/O flows through the device. This kind of implementation is also referred to as symmetric virtualization or in-band. The device is both target and initiator: it is the target of I/O requests from the host perspective, and the initiator of I/O requests from the storage perspective. The redirection is performed by issuing new I/O requests to the storage. The SVC uses symmetric virtualization.
- Asymmetric: out-of-band or controller-based. The device is usually a storage controller that provides an internal switch for external storage attachment. In this approach, the storage controller intercepts and redirects I/O requests to the external storage as it does for internal storage; the actual I/O requests are themselves redirected. This kind of implementation is also referred to as asymmetric virtualization or out-of-band.
Figure 2-1 shows variations of the two virtualization approaches.

Figure 2-1 Overview of block-level virtualization architectures

Although these approaches provide essentially the same cornerstones of virtualization, there can be interesting side effects, as discussed here.


The controller-based approach has high functionality, but it falls short in terms of scalability and upgradability. Because of the nature of its design, there is no true decoupling with this approach, which becomes an issue over the life cycle of the solution, for example, when the controller must be replaced. You will be challenged with data migration issues and questions, such as how to reconnect the servers to the new controller, and how to reconnect them online without any impact to your applications. Be aware that with this approach, you not only replace a controller but also implicitly replace your entire virtualization solution. In addition to replacing the hardware, it can also be necessary to update or repurchase the licenses for the virtualization feature, advanced copy functions, and so on. With a SAN or fabric-based appliance solution that is based on a scale-out cluster architecture, life cycle management tasks, such as adding or replacing disk subsystems or migrating data between them, are extremely simple. Servers and applications remain online, data migration takes place transparently on the virtualization platform, and licenses for virtualization and copy services require no update; that is, they require no additional costs when disk subsystems are replaced. Only the fabric-based appliance solution provides an independent and scalable virtualization platform that can provide enterprise-class copy services; is open for future interfaces and protocols; allows you to choose the disk subsystems that best fit your requirements; and does not lock you into specific SAN hardware. For these reasons, IBM has chosen the SAN or fabric-based appliance approach for the implementation of the IBM System Storage SAN Volume Controller (SVC). The SVC possesses the following key characteristics:
- It is highly scalable, providing an easy growth path to 2n nodes (growth occurs in pairs of nodes).
- It is SAN interface-independent. It currently supports FC and iSCSI, but is also open for future enhancements.
- It is host-independent, for fixed block-based Open Systems environments.
- It is external storage RAID controller-independent, providing a continual and ongoing process to qualify additional types of controllers.
- It is able to utilize disks internally located within the nodes (solid state disks).
- It is able to utilize disks locally attached to the nodes (SAS drives).
On the SAN storage that is provided by the disk subsystems, the SVC can offer the following services:
- It can create and manage a single pool of storage attached to the SAN.
- It can manage multiple tiers of storage.
- It provides block-level virtualization (logical unit virtualization).
- It provides automatic block-level, or sub-LUN-level, data migration between storage tiers.
- It provides advanced functions to the entire SAN, such as a large scalable cache and Advanced Copy Services: FlashCopy (point-in-time copy), and Metro Mirror and Global Mirror (remote copy, synchronous and asynchronous).
- It provides nondisruptive and concurrent data migration.



This list of features will grow with each future release, because the layered architecture of the SVC can easily implement new storage features.

2.2.1 SAN Volume Controller topology


SAN-based storage is managed by the SVC in one or more pairs of SVC hardware nodes, referred to as a cluster. These nodes are attached to the SAN fabric, along with RAID controllers and host systems. The SAN fabric is zoned to allow the SVC to see the RAID controllers, and for the hosts to see the SVC. The hosts are not allowed to see or operate on the same physical storage (LUN) from the RAID controller that has been assigned to the SVC. Storage controllers can be shared between the SVC and direct host access as long as the same LUNs are not shared. The zoning capabilities of the SAN switch must be used to create distinct zones to ensure that this rule is enforced. SAN fabrics may include standard FC, iSCSI over Gigabit Ethernet, or possible future types such as FC over Ethernet.

Figure 2-2 on page 12 shows a conceptual diagram of a storage system utilizing the SVC. It shows a number of hosts that are connected to a SAN fabric or LAN. In practical implementations that have high availability requirements (the majority of the target clients for SVC), the SAN fabric cloud represents a redundant SAN. A redundant SAN consists of a fault-tolerant arrangement of two or more counterpart SANs, thereby providing alternate paths for each SAN-attached device. Both scenarios (using a single network and using two physically separate networks) are supported for iSCSI-based and LAN-based access networks to the SVC. Redundant paths to volumes can be provided in both scenarios.

For simplicity, Figure 2-2 on page 12 shows only one SAN fabric and two zones, namely host and storage. In a real environment, it is a best practice to use two redundant SAN fabrics. The SVC can be connected to up to four fabrics. Zoning details are described in 3.3.2, SAN zoning and SAN connections on page 65.


Figure 2-2 SVC conceptual and topology overview

A cluster of SVC nodes is connected to the same fabric and presents logical disks (virtual disks) or volumes to the hosts. These volumes are created from managed LUNs or MDisks that are presented by the RAID disk subsystems. There are two distinct zones shown in the fabric:
- A host zone, in which the hosts can see and address the SVC nodes
- A storage zone, in which the SVC nodes can see and address the MDisks/logical unit numbers (LUNs) presented by the RAID subsystems

Hosts are not permitted to operate on the RAID LUNs directly; all data transfer happens through the SVC nodes. This design is commonly described as symmetric virtualization.

For iSCSI-based access, using two networks and separating iSCSI traffic within the networks by using a dedicated virtual local area network (VLAN) path for storage traffic will prevent any IP interface, switch, or target port failure from compromising the host server's access to the volumes (LUNs).

2.3 SVC terminology


To provide a higher level of consistency between IBM storage products, the terminology used with SVC V6.1, and therefore through the rest of this book, has changed when compared to previous SVC releases. Table 2-1 on page 13 summarizes the main changes.


Table 2-1 New SVC terminology mapping

6.1.0 SVC term: event
Previous term:  error
Description:    An occurrence of significance to a task or system. Events can include completion or failure of an operation, a user action, or the change in state of a process.

6.1.0 SVC term: host mapping
Previous term:  VDisk-to-host mapping
Description:    The process of controlling which hosts have access to specific volumes within a cluster.

6.1.0 SVC term: storage pool
Previous term:  managed disk (MDisk) group
Description:    A collection of storage capacity that provides the capacity requirements for a volume.

6.1.0 SVC term: thin provisioning (or thin-provisioned)
Previous term:  space-efficient
Description:    The ability to define a storage unit (full system, storage pool, volume) with a logical capacity size that is larger than the physical capacity assigned to that storage unit.

6.1.0 SVC term: volume
Previous term:  virtual disk (VDisk)
Description:    A discrete unit of storage on disk, tape, or other data recording medium that supports a form of identifier and parameter list, such as a volume label or input/output control.

For a detailed glossary containing the terms and definitions used with the SAN Volume Controller, see Appendix B, Terminology on page 829.

2.4 SAN Volume Controller components


The SVC product provides block-level aggregation and volume management for attached disk storage. In simpler terms, the SVC manages a number of back-end storage controllers or locally attached disks and maps the physical storage within those controllers or disk arrays into logical disk images, or volumes, that can be seen by application servers and workstations in the SAN. The SAN is zoned so that the application servers cannot see the back-end physical storage, which prevents any possible conflict between the SVC and the application servers both trying to manage the back-end storage. The SVC is based on the following components, which are discussed in more detail in later sections of this chapter.

2.4.1 Nodes
Each SAN Volume Controller hardware unit is called a node. The node provides the virtualization for a set of volumes, cache, and copy services functions. SVC nodes are deployed in pairs, and multiple pairs make up a cluster. A cluster can consist of between one and four SVC node pairs.

One of the nodes within the cluster will be known as the configuration node. The configuration node manages the configuration activity for the cluster. If this node fails, the cluster will choose a new node to become the configuration node. Because the nodes are installed in pairs, each node provides a failover function to its partner node in the event of a node failure.

2.4.2 I/O Groups


Each pair of SVC nodes is also referred to as an I/O Group. An SVC cluster can have from one to four I/O Groups.

A specific volume is always presented to a host server by a single I/O Group of the cluster. When a host server performs I/O to one of its volumes, all the I/Os for a specific volume are directed to one specific I/O Group in the cluster. Also, under normal conditions, the I/Os for that specific volume are always processed by the same node within the I/O Group. This node is referred to as the preferred node for this specific volume. Both nodes of an I/O Group act as the preferred node for their own specific subset of the total number of volumes that the I/O Group presents to the host servers. A maximum of 2048 volumes per I/O Group is allowed.

However, both nodes also act as failover nodes for their respective partner node within the I/O Group. Therefore, a node will take over the I/O workload from its partner node, if required. Thus, in an SVC-based environment, the I/O handling for a volume can switch between the two nodes of the I/O Group. For this reason, it is mandatory for servers that are connected through FC to use multipath drivers to be able to handle these failover situations.

The SVC I/O Groups are connected to the SAN so that all application servers accessing volumes from this I/O Group have access to this group. Up to 256 host server objects can be defined per I/O Group. The host server objects can access volumes that are provided by this specific I/O Group.

If required, host servers can be mapped to more than one I/O Group within the SVC cluster; therefore, they can access volumes from separate I/O Groups. You can move volumes between I/O Groups to redistribute the load between the I/O Groups; however, moving volumes between I/O Groups cannot be done concurrently with host I/O and will require a brief interruption to remap the host.

2.4.3 Cluster
The cluster consists of between one and four I/O Groups. Certain configuration limitations are then set for the individual cluster. For example, the maximum number of volumes supported per cluster is 8192, and the maximum managed storage capacity supported is 32 PB per cluster.

All configuration, monitoring, and service tasks are performed at the cluster level. Configuration settings are replicated to all nodes in the cluster. To facilitate these tasks, a management IP address is set for the cluster.

A process is provided to back up the cluster configuration data onto disk so that the cluster can be restored in the event of a disaster. Note that this method does not back up application data; only SVC cluster configuration information is backed up.

For the purposes of remote data mirroring, two or more clusters must form a partnership prior to creating relationships between mirrored volumes.


For details about the Maximum Configurations applicable to the Cluster, I/O Group and nodes, select the restrictions hot link in the section corresponding to your SVC code level: http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html

2.4.4 MDisks
The SVC cluster and its I/O Groups view the storage that is presented to the SAN by the back-end controllers as a number of disks or LUNs, known as managed disks or MDisks. Because the SVC does not attempt to provide recovery from physical disk failures within the back-end controllers, an MDisk is usually provisioned from a RAID array. The application servers, however, do not see the MDisks at all. Instead, they see a number of logical disks, known as virtual disks or volumes, which are presented by the SVC I/O Groups through the SAN (FC) or LAN (iSCSI) to the servers.

The MDisks are placed into storage pools where they are divided up into a number of extents, which can range in size from 16 MB to 8192 MB, as defined by the SVC administrator. A volume is host-accessible storage that has been provisioned out of one storage pool, or, if it is a mirrored volume, out of two storage pools. The maximum size of an MDisk is 1 PB. An SVC cluster supports up to 4096 MDisks.

At any point in time, an MDisk is in one of the following three modes:

- Unmanaged MDisk
  An MDisk is reported as unmanaged when it is not a member of any storage pool. An unmanaged MDisk is not associated with any volumes and has no metadata stored on it. SVC will not write to an MDisk that is in unmanaged mode, except when it attempts to change the mode of the MDisk to one of the other modes. SVC can see the resource, but it is not assigned to a storage pool.

- Managed MDisk
  Managed mode MDisks are always members of a storage pool, and they contribute extents to the storage pool. Volumes (if not operated in image mode) are created from these extents. MDisks operating in managed mode might have metadata extents allocated from them and can be used as quorum disks. This is the most common and normal mode of an MDisk.

- Image mode MDisk
  Image mode provides a direct block-for-block translation from the MDisk to the volume by using virtualization. This mode is provided to satisfy three major usage scenarios:
  - Image mode allows virtualization of MDisks that already contain data that was written directly, not through an SVC; rather, it was created by a direct-connected host. This mode allows a client to insert the SVC into the data path of an existing storage volume or LUN with minimal downtime. Chapter 6, Data migration on page 227, provides details of the data migration process.
  - Image mode allows a volume that is managed by the SVC to be used with the native copy services function provided by the underlying RAID controller. To avoid the loss of data integrity when the SVC is used in this way, it is important that you disable the SVC cache for the volume.
  - SVC provides the ability to migrate to image mode, which allows the SVC to export volumes and access them directly from a host without the SVC in the path.

Each MDisk presented from an external disk controller has an online path count, which is the number of nodes having access to that MDisk. The maximum count is the maximum number of paths detected at any point in time by the cluster. The current count is what the cluster sees at this point in time. A current value less than the maximum can indicate that SAN fabric paths have been lost. See 2.5.1, Image mode volumes on page 21 for more details.

Starting with SVC 6.1, internal SSD drives do not appear as MDisks. Internal SSDs are used and appear as disk drives, and therefore additional RAID protection is required.

Note: Users of internal solid-state devices (SSDs) on the SAN Volume Controller 2145-CF8 cannot install SVC 6.1.0 at this time.

2.4.5 Quorum disk


A quorum disk is a managed disk (MDisk) that contains a reserved area for use exclusively by the cluster. The cluster uses quorum disks to break a tie when exactly half the nodes in the cluster remain after a SAN failure; this is referred to as split brain. There are three candidate quorum disks. However, only one quorum disk is active at any time. Quorum disks are discussed in more detail in 2.8.1, Quorum disks on page 36.

2.4.6 Disk tier


It is likely that the MDisks (LUNs) presented to the SVC cluster will have various performance attributes due to the type of disk or RAID array on which they reside. The MDisks may be on 15K RPM Fibre Channel or SAS disks, Nearline SAS or SATA disks, or even solid-state disks (SSDs). Therefore, a storage tier attribute is assigned to each MDisk, with the default being generic_hdd. With SVC V6.1, a new tier 0 (zero) level disk attribute is available for SSDs, and it is known as generic_ssd.

2.4.7 Storage pool


A storage pool is a collection of up to 128 MDisks that provides the pool of storage from which volumes are provisioned. A single cluster can manage up to 128 storage pools. The size of these pools can be changed (expanded or shrunk) at run time by adding or removing MDisks, without taking the storage pool or the volumes offline. At any point in time, an MDisk can only be a member in one storage pool, with the exception of image mode volumes; see 2.5.1, Image mode volumes on page 21 for more information about this topic. Figure 2-3 on page 17 illustrates the relationships of the SVC entities to each other.


Figure 2-3 Overview of SVC cluster with I/O Group

Each MDisk in the storage pool is divided into a number of extents. The size of the extent will be selected by the administrator at the creation time of the storage pool and cannot be changed later. The size of the extent ranges from 16 MB up to 8 GB.

It is a best practice to use the same extent size for all storage pools in a cluster; this is a prerequisite for supporting volume migration between two storage pools. If the storage pool extent sizes are not the same, then you must use volume mirroring (see 2.5.4, Mirrored volumes on page 24) to copy volumes between pools.

SVC limits the number of extents in a cluster to 2^22 (approximately 4 million). Because the number of addressable extents is limited, the total capacity of an SVC cluster depends on the extent size that is chosen by the SVC administrator. The capacity numbers that are specified in Table 2-2 for an SVC cluster assume that all defined storage pools have been created with the same extent size.
Table 2-2 Extent size-to-addressability matrix

Extent size | Maximum cluster capacity
16 MB       | 64 TB
32 MB       | 128 TB
64 MB       | 256 TB
128 MB      | 512 TB
256 MB      | 1 PB
512 MB      | 2 PB
1024 MB     | 4 PB
2048 MB     | 8 PB
4096 MB     | 16 PB
8192 MB     | 32 PB

For most clusters, a capacity of 1 to 2 PB is sufficient. A best practice is to use 256 MB or, for larger clusters, 512 MB as the standard extent size.
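The relationship between extent size and maximum cluster capacity follows directly from the limit of 2^22 addressable extents per cluster. The following short Python sketch (illustrative only, and not part of the SVC software) reproduces the values in Table 2-2:

# Maximum cluster capacity = number of addressable extents x extent size.
MAX_EXTENTS = 2 ** 22          # about 4 million extents per cluster

def max_cluster_capacity_tb(extent_size_mb):
    """Return the maximum addressable capacity (in TB) for a given extent size (in MB)."""
    return MAX_EXTENTS * extent_size_mb / (1024 * 1024)

for extent_mb in (16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192):
    print(extent_mb, "MB extents ->", max_cluster_capacity_tb(extent_mb), "TB")

# For example, 256 MB extents give 4,194,304 x 256 MB = 1 PB of addressable capacity.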


Single-tiered storage pool


MDisks used in a single-tiered storage pool should have the following characteristics to avoid inducing performance problems and other issues:

- They have the same hardware characteristics, for example, the same RAID type, RAID array size, disk type, and disk revolutions per minute (RPMs).
- The disk subsystems providing the MDisks must have similar characteristics, for example, maximum input/output operations per second (IOPS), response time, cache, and throughput.
- The MDisks used are of the same size and are therefore MDisks that provide the same number of extents. If that is not feasible, you will need to check the distribution of the volumes' extents in that storage pool.

For further details, see SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521, which is available at this website:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open

Multitiered storage pool


A multitiered storage pool will have a mix of MDisks with more than one type of disk tier attribute. For example, a storage pool containing a mix of generic_hdd AND generic_ssd MDisks. A multitiered storage pool will therefore contain MDisks with various characteristics, as opposed to a single-tier storage pool. However, it is a best practice for each tier to have MDisks of the same size and MDisks that provide the same number of extents. Multi-tiered storage pools are used to enable the automatic migration of extents between disk tiers using the SVC Easy Tier function. These storage pools are described in more detail in Chapter 7, Easy Tier on page 345.

2.4.8 Volumes
Volumes are logical disks presented to the host or application servers by the SVC. The hosts cannot see the MDisks; they can only see the logical volumes created from combining extents from a storage pool.
There are three types of volumes: striped, sequential, and image. These types are determined by the way in which the extents are allocated from the storage pool:

- A volume created in striped mode has extents allocated from each MDisk in the storage pool in a round-robin fashion.
- With a sequential mode volume, extents are allocated sequentially from an MDisk.
- Image mode is a one-to-one mapped extent mode volume.

Using striped mode is the best method to use for most cases. However, sequential extent allocation mode can slightly increase the sequential performance for certain workloads.

Figure 2-4 on page 19 shows striped volume mode and sequential volume mode, and illustrates how the extent allocation from the storage pool differs.


Figure 2-4 Storage Pool extents overview

You can allocate the extents for a volume in many ways. The process is under full user control at volume creation time and can be changed at any time by migrating single extents of a volume to another MDisk within the storage pool. Chapter 6, Data migration on page 227, Chapter 10, SAN Volume Controller operations using the GUI on page 579, and Chapter 9, SAN Volume Controller operations using the command-line interface on page 439, provide detailed explanations about how to create volumes and migrate extents by using the GUI or CLI.

2.4.9 Easy Tier performance function


Easy Tier is a performance function that automatically migrates, or moves, extents of a volume from one MDisk storage tier to another MDisk storage tier. Easy Tier monitors the host I/O activity and latency on the extents of all volumes with the Easy Tier function turned on in a multitier storage pool over a 24-hour period. Next, it creates an extent migration plan based on this activity and then dynamically moves high-activity, or hot, extents to a higher disk tier within the storage pool. It also moves extents whose activity has dropped off, or cooled, from the high-tier MDisks back to a lower-tiered MDisk.

Note: The Easy Tier function can be turned on or off at the storage pool level and at the volume level.

To experience the potential benefits of using Easy Tier in your environment before actually installing expensive solid-state disks (SSDs), you can turn on the Easy Tier function for a single-tier storage pool. Next, turn on the Easy Tier function for the volumes within that pool. Easy Tier will then start monitoring activity on the volume extents in the pool.


Easy Tier will create a migration report every 24 hours on the number of extents that would be moved if the pool were a multitiered storage pool. So even though Easy Tier extent migration is not possible within a single-tier pool, the Easy Tier statistical measurement function is available. The Easy Tier function can make it more appropriate to use smaller storage pool extent sizes.

The usage statistics file can be offloaded from the SVC nodes. Then you can use the IBM Storage Advisor Tool to create a summary report. Contact your IBM representative or IBM Business Partner for more information about the Storage Advisor Tool.

For more detailed information about Easy Tier functionality, see Chapter 7, Easy Tier on page 345.

2.4.10 Hosts
Volumes can be mapped to a host to allow access for a specific server to a set of volumes. A host within the SVC is a collection of HBA worldwide port names (WWPNs) or iSCSI qualified names (IQNs) that are defined on the specific server. Note that iSCSI names are internally identified by fake WWPNs, that is, WWPNs that are generated by the SVC. Volumes can be mapped to multiple hosts, for example, a volume that is accessed by multiple hosts of a server cluster.

iSCSI is an alternative means of attaching hosts. However, all communication with back-end storage subsystems, and with other SVC clusters, is still through FC. Node failover can be handled without having a multipath driver installed on the iSCSI server. An iSCSI-attached server can simply reconnect after a node failover to the original target IP address, which is then presented by the partner node. However, to protect the server against link failures in the network or host bus adapter (HBA) failures, using a multipath driver is mandatory.

Volumes are LUN masked to the host's HBA WWPNs by a process called host mapping. Mapping a volume to the host makes it accessible to the WWPNs or iSCSI names (IQNs) that are configured on the host object. For a SCSI over Ethernet connection, the IQN identifies the iSCSI target (destination) adapter. Host objects can have both IQNs and WWPNs.

2.4.11 Maximum supported configurations


For details about the maximum configurations applicable to the cluster, I/O Group, and nodes, select the restrictions hot link in the section of the SVC support site that corresponds to your SVC code level:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html

Several limits have changed with SVC 6.1. The following list includes several of the more important limits. For details, consult the SVC support site.
- 256 WWNNs per storage controller
- 1 PB MDisks
- 8 GB extents
- Long object names: up to 63 characters

See 2.12, What is new with SVC 6.1 on page 52 for a more detailed explanation of the new features.


2.5 Volume overview


The maximum size of a single volume is 256 TB. A single SVC cluster supports up to 8192 volumes. Volumes have the following characteristics or attributes:

- Volumes can be created and deleted.
- Volumes can be resized (expanded or shrunk).
- Volume extents can be migrated at run time to another MDisk or storage pool.
- Volumes can be created as fully allocated or thin-provisioned. A conversion from a fully allocated to a thin-provisioned volume, and vice versa, can be done at run time.
- Volumes can be stored in multiple storage pools (mirrored) to make them resistant to disk subsystem failures or to improve the read performance.
- Volumes can be mirrored synchronously or asynchronously for longer distances. An SVC cluster can run active volume mirrors to a maximum of three other SVC clusters, but not from the same volume.
- Volumes can be copied using FlashCopy. Multiple snapshots and quick restore from snapshots (reverse FlashCopy) are supported.

Volumes have two major modes: image mode and managed mode. Managed mode volumes have two policies: the sequential policy and the striped policy. Policies define how the extents of a volume are allocated from a storage pool.

2.5.1 Image mode volumes


Image mode volumes are used to migrate LUNs that were previously mapped directly to host servers over to the control of the SVC. Image mode provides a one-to-one mapping of the logical block addresses (LBAs) between a volume and an MDisk. Image mode volumes have a minimum size of one block (512 bytes) and always occupy at least one extent.

An image mode MDisk is mapped to one, and only one, image mode volume. The volume capacity that is specified must be equal to the size of the image mode MDisk. When you create an image mode volume, the specified MDisk must be in unmanaged mode and must not be a member of a storage pool. The MDisk is made a member of the specified storage pool (Storage Pool_IMG_xxx) as a result of the creation of the image mode volume.

The SVC also supports the reverse process, in which a managed mode volume can be migrated to an image mode volume. If a volume is migrated to another MDisk, it is represented as being in managed mode during the migration, and is only represented as an image mode volume after it has reached the state where it is a straight-through mapping.

An image mode MDisk is associated with exactly one volume. If the (image mode) MDisk is not a multiple of the storage pool's extent size, the last extent is partial (not filled). An image mode volume is a pass-through one-to-one map of its MDisk. It cannot be a quorum disk and will not have any SVC metadata extents assigned to it. Managed or image mode MDisks are always members of a storage pool.

It is a best practice to put image mode MDisks in a dedicated storage pool and use a special name for it (for example, Storage Pool_IMG_xxx). Remember that the extent size chosen for this specific storage pool must be the same as the extent size of the storage pool to which you plan to migrate the data. All of the SVC copy services functions can be applied to image mode disks.

Figure 2-5 Image mode volume versus striped volume

2.5.2 Managed mode volumes


Volumes operating in managed mode provide a full set of virtualization functions. Within a storage pool, SVC supports an arbitrary relationship between extents on (managed mode) volumes and extents on MDisks. Each volume extent maps to exactly one MDisk extent. Figure 2-6 on page 23 represents this diagrammatically. It shows a volume that is made up of a number of extents shown as V0 to V7. Each of these extents is mapped to an extent on one of the MDisks: A, B, or C. The mapping table stores the details of this indirection. Notice that several of the MDisk extents are unused. There is no volume extent that maps to them. These unused extents are available for use in creating new volumes, migration, expansion, and so on.


Figure 2-6 Simple view of block virtualization

The allocation of a specific number of extents from a specific set of MDisks is performed by the following algorithm: if the set of MDisks from which to allocate extents contains more than one disk, extents are allocated from MDisks in a round-robin fashion. If an MDisk has no free extents when its turn arrives, its turn is missed and the round-robin moves to the next MDisk in the set that has a free extent. When creating a new volume, the first MDisk from which to allocate an extent is chosen in a pseudo-random way rather than simply choosing the next disk in a round-robin fashion. The pseudo-random algorithm avoids the situation whereby the striping effect inherent in a round-robin algorithm places the first extent for a large number of volumes on the same MDisk. Placing the first extent of a number of volumes on the same MDisk can lead to poor performance for workloads that place a large I/O load on the first extent of each volume, or that create multiple sequential streams.
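To make this allocation behavior concrete, the following Python sketch mimics round-robin allocation with a pseudo-randomly chosen starting MDisk, as described above. It is purely illustrative; the function and variable names are invented for this example and do not correspond to actual SVC internals.

import random

def allocate_extents(mdisk_free_extents, num_extents):
    """Allocate num_extents from a set of MDisks in round-robin order.

    mdisk_free_extents: dict mapping MDisk name -> number of free extents.
    Returns a list of MDisk names, one entry per allocated extent.
    """
    mdisks = list(mdisk_free_extents)
    # The first MDisk is chosen pseudo-randomly so that the first extents of
    # many volumes do not all land on the same MDisk.
    position = random.randrange(len(mdisks))
    allocation = []
    while len(allocation) < num_extents:
        if all(free == 0 for free in mdisk_free_extents.values()):
            raise RuntimeError("storage pool is out of free extents")
        mdisk = mdisks[position % len(mdisks)]
        if mdisk_free_extents[mdisk] > 0:
            mdisk_free_extents[mdisk] -= 1
            allocation.append(mdisk)
        # An MDisk with no free extents simply misses its turn.
        position += 1
    return allocation

# Example: a striped volume of 8 extents over three MDisks.
print(allocate_extents({"mdisk0": 100, "mdisk1": 100, "mdisk2": 100}, 8))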

2.5.3 Cache mode and cache-disabled volumes


Under nominal conditions, a volume's read and write data is held in the cache of its preferred node, with a mirrored copy of the write data held in the partner node of the same I/O Group. However, it is possible to create a volume with cache disabled, which means that the I/Os are passed directly through to the back-end storage controller rather than being held in the node's cache.

Having cache-disabled volumes makes it possible to use the native copy services in the underlying RAID array controller for MDisks (LUNs) that are used as SVC image mode volumes. Using SVC copy services rather than the underlying disk controller copy services gives better results.


2.5.4 Mirrored volumes


The mirrored volume feature provides a simple RAID-1 function; thus, a volume has two physical copies of its data. This allows the volume to remain online and accessible even if one of the MDisks sustains a failure that causes it to become inaccessible. The two copies of the volume are typically allocated from separate storage pools or by using image-mode copies. The volume can participate in FlashCopy and Remote Copy relationships, it is serviced by an I/O Group, and it has a preferred node.

Each copy is not a separate object and cannot be created or manipulated except in the context of the volume. Copies are identified through the configuration interface with a copy ID of their parent volume. This copy ID can be either 0 or 1.

The feature provides a point-in-time copy functionality that is achieved by splitting a copy from the volume. Note, however, that the mirrored volume feature does not address other forms of mirroring based on Remote Copy (sometimes called Hyperswap), which mirrors volumes across I/O Groups or clusters. It is also not intended to manage mirroring or remote copy functions in back-end controllers.

Figure 2-7 provides an overview of volume mirroring.

Figure 2-7 Volume mirroring overview

A second copy can be added to a volume with a single copy, or removed from a volume with two copies. Checks prevent the accidental removal of the only remaining copy of a volume. A newly created, unformatted volume with two copies will initially have the two copies in an out-of-synchronization state. The primary copy will be defined as fresh and the secondary copy as stale. The synchronization process will update the secondary copy until it is fully synchronized. This is done at the default synchronization rate or at a rate defined when creating the volume or modifying it. The synchronization status for mirrored volumes is recorded on the quorum disk.


If a two-copy mirrored volume is created with the format parameter, both copies are formatted in parallel, and the volume comes online when both operations are complete with the copies in sync. If mirrored volumes are expanded or shrunk, all of their copies are also expanded or shrunk.

If it is known that the MDisk space that will be used for creating copies is already formatted, or if the user does not require read stability, a no synchronization option can be selected, which declares the copies as synchronized (even when they are not).

To minimize the time required to resynchronize a copy that has become out of sync, only the 256 KB grains that have been written to since the synchronization was lost are copied. This approach is known as an incremental synchronization. Only the changed grains need to be copied to restore synchronization.

Important: An unmirrored volume can be migrated from one location to another by simply adding a second copy to the desired destination, waiting for the two copies to synchronize, and then removing the original copy 0. This operation can be stopped at any time. The two copies can be in separate storage pools with separate extent sizes.

Where there are two copies of a volume, one copy is known as the primary copy. If the primary is available and synchronized, reads from the volume are directed to it. The user can select the primary when creating the volume, or can change it later. Placing the primary copy on a high-performance controller will maximize the read performance of the volume.

The write performance will be constrained if one copy is on a lower-performance controller, because writes must complete to both copies before the volume can provide acknowledgment to the host that the write completed successfully. Remember that writes to both copies must complete to be considered successfully written, even if volume mirroring has one copy in a solid-state drive storage pool and the second copy in a storage pool containing resources from a disk subsystem.

A volume with copies can be checked to see whether all of the copies are identical or consistent. If a medium error is encountered while reading from one copy, it is repaired using data from the other copy. This consistency check is performed asynchronously with host I/O.

Important: Mirrored volumes can be taken offline if there is no quorum disk available. This behavior occurs because the synchronization status for mirrored volumes is recorded on the quorum disk.

Mirrored volumes consume bitmap space at a rate of 1 bit per 256 KB grain, which translates to 1 MB of bitmap space supporting 2 TB of mirrored volumes. The default allocation of bitmap space is 20 MB, which supports 40 TB of mirrored volumes. If all 512 MB of variable bitmap space is allocated to mirrored volumes, 1 PB of mirrored volumes can be supported.
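The bitmap figures quoted above follow from simple arithmetic, as the following Python sketch shows. It is only an illustration of the calculation; the function name is made up for this example.

GRAIN_SIZE = 256 * 1024          # 256 KB of mirrored volume tracked per bitmap bit

def mirrored_capacity_tb(bitmap_mb):
    """Return how many TB of mirrored volumes a given amount of bitmap space covers."""
    bits = bitmap_mb * 1024 * 1024 * 8          # bitmap bits available
    volume_bytes = bits * GRAIN_SIZE            # each bit covers one 256 KB grain
    return volume_bytes / 1024 ** 4

print(mirrored_capacity_tb(1))      # 1 MB of bitmap   -> 2 TB of mirrored volumes
print(mirrored_capacity_tb(20))     # 20 MB (default)  -> 40 TB
print(mirrored_capacity_tb(512))    # 512 MB (maximum) -> 1024 TB, that is, 1 PB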

2.5.5 Thin-provisioned volumes


Volumes can be configured to be either thin-provisioned or fully allocated. A thin-provisioned volume will behave with respect to application reads and writes as though it were fully allocated. When creating a thin-provisioned volume, the user will specify two capacities: the real physical capacity allocated to the volume from the storage pool, and its virtual capacity available to the host. In a fully allocated volume, these two values will be the same.

Thus, the real capacity will determine the quantity of MDisk extents that will be initially allocated to the volume. The virtual capacity will be the capacity of the volume reported to all other SVC components (for example, FlashCopy, cache, and Remote Copy) and to the host servers. The real capacity is used to store both the user data and the metadata for the thin-provisioned volume. The real capacity can be specified as an absolute value or as a percentage of the virtual capacity.

Thin-provisioned volumes can be used as volumes assigned to the host, by FlashCopy to implement thin-provisioned FlashCopy targets, and also with the mirrored volumes feature.

When a thin-provisioned volume is initially created, a small amount of the real capacity will be used for initial metadata. Write I/Os to grains of the thin volume that have not previously been written to will cause grains of the real capacity to be used to store metadata and the actual user data. Write I/Os to grains that have previously been written to will update the grain where data was previously written. The grain size is defined when the volume is created and can be 32 KB, 64 KB, 128 KB, or 256 KB.

Figure 2-8 illustrates the thin-provisioning concept.

Figure 2-8 Conceptual diagram of thin-provisioned volume

Thin-provisioned volumes store both user data and metadata. Each grain of data requires metadata to be stored. This means that the I/O rates obtained from thin-provisioned volumes will be lower than those of fully allocated volumes. The metadata storage overhead will never be greater than 0.1% of the user data. The overhead is independent of the virtual capacity of the volume.

If you are using thin-provisioned volumes in a FlashCopy map, then for best performance use the same grain size as the map grain size. If you are using the thin-provisioned volume directly with a host system, then use a small grain size.

Thin-provisioned volume format: Thin-provisioned volumes do not need formatting. A read I/O, which requests data from unallocated data space, will return zeroes. When a write I/O causes space to be allocated, the grain will be zeroed prior to use. However, if the node is a CF8, space is not allocated for a host write that contains all zeros. The formatting flag will be ignored when a thin volume is created or when the real capacity is expanded; the virtualization component will never format the real capacity of a thin-provisioned volume.

The real capacity of a thin volume can be changed if the volume is not in image mode. Increasing the real capacity allows a larger amount of data and metadata to be stored on the volume. Thin-provisioned volumes use the real capacity provided in ascending order as new data is written to the volume. If the user initially assigns too much real capacity to the volume, the real capacity can be reduced to free storage for other uses.

A thin-provisioned volume can be configured to autoexpand. This feature causes the SVC to automatically add a fixed amount of additional real capacity to the thin volume as required. Autoexpand therefore attempts to maintain a fixed amount of unused real capacity for the volume. This amount is known as the contingency capacity.

The contingency capacity is initially set to the real capacity that is assigned when the volume is created. If the user modifies the real capacity, the contingency capacity is reset to be the difference between the used capacity and the real capacity. A volume that is created without the autoexpand feature, and thus has a zero contingency capacity, will go offline as soon as the real capacity is used and needs to expand.

Autoexpand will not cause the real capacity to grow much beyond the virtual capacity. The real capacity can be manually expanded to more than the maximum that is required by the current virtual capacity, and the contingency capacity will be recalculated.

To support the autoexpansion of thin-provisioned volumes, the storage pools from which they are allocated have a configurable capacity warning. When the used capacity of the pool exceeds the warning capacity, a warning event is logged. For example, if a warning of 80% has been specified, the event will be logged when only 20% of the pool capacity remains free.

Thin-provisioned volume performance: Thin-provisioned volumes require additional I/O operations to read and write metadata to the back-end storage, which also generates additional load on the SVC nodes. Therefore, avoid using thin-provisioned volumes for high-performance applications, or for any workloads with a high write I/O component.

A thin-provisioned volume can be converted nondisruptively to a fully allocated volume, or vice versa, by using the volume mirroring function. For example, you can add a thin-provisioned copy to a fully allocated primary volume and then remove the fully allocated copy from the volume after they are synchronized. The fully allocated to thin-provisioned migration procedure uses a zero-detection algorithm so that grains containing all zeros do not cause any real capacity to be used.

Note: Consider using thin-provisioned volumes as targets in FlashCopy relationships. Using them as a target in Metro Mirror or Global Mirror relationships makes no sense because, during the initial synchronization, the target will become fully allocated.
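The following Python sketch illustrates the autoexpand and capacity-warning behavior just described. It is a simplified model for illustration only; the class and attribute names are invented for this example and do not correspond to SVC internals.

class ThinVolume:
    """Simplified model of a thin-provisioned volume with autoexpand."""

    def __init__(self, virtual_gb, real_gb, autoexpand=True):
        self.virtual_gb = virtual_gb
        self.real_gb = real_gb
        self.used_gb = 0
        self.autoexpand = autoexpand
        # The contingency capacity starts out as the initially assigned real capacity.
        self.contingency_gb = real_gb

    def write(self, new_data_gb):
        self.used_gb += new_data_gb
        if self.used_gb > self.real_gb:
            if not self.autoexpand:
                raise RuntimeError("volume goes offline: real capacity exhausted")
            # Autoexpand tops the real capacity back up so that the unused real
            # capacity returns to the contingency amount, capped at the virtual capacity.
            self.real_gb = min(self.used_gb + self.contingency_gb, self.virtual_gb)

def pool_warning(pool_capacity_gb, used_gb, warning_percent=80):
    """Log a warning event when pool usage crosses the warning threshold."""
    if used_gb / pool_capacity_gb * 100 >= warning_percent:
        print("warning event: pool usage has exceeded", warning_percent, "percent")

vol = ThinVolume(virtual_gb=100, real_gb=10)
vol.write(25)                       # exceeds the real capacity and triggers autoexpand
pool_warning(1000, 850)             # 85% of the pool used: a warning event is logged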


2.5.6 Volume I/O governing


It is possible to constrain I/O operations so that the maximum amount of I/O activity that a host can perform on a volume is limited over a specific period of time. This governing feature can be used to satisfy a quality of service requirement or a contractual obligation (for example, if a client agrees to pay for I/Os performed, but will not pay for I/Os beyond a certain rate). Only Read, Write, and Verify commands that access the physical medium are subject to I/O governing.

The governing rate can be set in I/Os per second or MB per second. It can be altered by changing the throttle value through the svctask chvdisk command and specifying the -rate parameter.

I/O governing: I/O governing on Metro Mirror or Global Mirror secondary volumes does not affect the data copy rate from the primary. Governing has no effect on FlashCopy or data migration I/O rates.

An I/O budget is expressed as a number of I/Os, or MBs, over a minute. The budget is evenly divided between all SVC nodes that service that volume, that is, between the nodes that form the I/O Group of which that volume is a member.

The algorithm operates two levels of policing. While a volume on each SVC node receives I/O at a rate lower than the governed level, no governing is performed. However, when the I/O rate exceeds the defined threshold, adjustments to the policy are made. A check is made every minute to confirm that each node is continuing to receive I/O below the threshold level. Whenever this check shows that the host has exceeded its limit on one or more nodes, policing begins for new I/Os. The following conditions exist while policing is in force:

- A budget allowance is calculated for a one-second period.
- I/Os are counted over a period of one second.
- If I/Os are received in excess of the one-second budget on any node in the I/O Group, those I/Os and later I/Os are pended.
- When the second expires, a new budget is established, and any pended I/Os are redriven under the new budget.

This algorithm might cause I/O to back up in the front end, which might eventually cause a Queue Full Condition to be reported to hosts that continue to flood the system with I/O. If a host stays within its one-second budget on all nodes in the I/O Group for a period of one minute, the policing is relaxed and monitoring takes place over the one-minute period as before.
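As an illustration of the per-second policing described above, the following Python sketch models a single node's share of a volume's I/O budget. It is a simplified model only; the names and structure are invented for this example and do not reflect the actual SVC implementation.

class NodeGovernor:
    """Polices I/O for one node's share of a volume's per-minute budget."""

    def __init__(self, iops_per_minute, nodes_in_io_group=2):
        # The per-minute budget is divided evenly between the nodes of the
        # I/O Group, and then broken down into one-second allowances.
        self.per_second_budget = iops_per_minute / nodes_in_io_group / 60
        self.issued_this_second = 0
        self.pending = []

    def submit(self, io):
        if self.issued_this_second < self.per_second_budget:
            self.issued_this_second += 1
            return "issued"
        # I/Os beyond the one-second allowance are pended until the next second.
        self.pending.append(io)
        return "pended"

    def next_second(self):
        """Start a new one-second budget and redrive any pended I/Os."""
        self.issued_this_second = 0
        redriven, self.pending = self.pending, []
        for io in redriven:
            self.submit(io)

gov = NodeGovernor(iops_per_minute=6000)   # 50 I/Os per second per node
print([gov.submit(i) for i in range(52)].count("pended"))   # 2 I/Os are pended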

2.6 iSCSI overview


iSCSI is an alternative means of attaching hosts to the SVC. All communications with back-end storage subsystems and with other SVC clusters only occur through FC. The iSCSI function is a software function that is provided by the SVC code, not hardware. In the simplest terms, iSCSI allows the transport of SCSI commands and data over a TCP/IP network, based on IP routers and Ethernet switches. iSCSI is a block-level protocol that encapsulates SCSI commands into TCP/IP packets and thereby leverages an existing IP network, instead of requiring expensive FC HBAs and a SAN fabric infrastructure.


A pure SCSI architecture is based on the client/server model. A client (for example, a server or workstation) initiates read or write requests for data from a target server (for example, a data storage system). Commands, which are sent by the client and processed by the server, are put into the Command Descriptor Block (CDB). The server executes a command, and completion is indicated by a special signal alert. The major functions of iSCSI include encapsulation and the reliable delivery of CDB transactions between initiators and targets through the TCP/IP network, especially over a potentially unreliable IP network.

The concepts of names and addresses have been carefully separated in iSCSI:

- An iSCSI name is a location-independent, permanent identifier for an iSCSI node. An iSCSI node has one iSCSI name, which stays constant for the life of the node. The terms initiator name and target name also refer to an iSCSI name.
- An iSCSI address specifies not only the iSCSI name of an iSCSI node, but also a location of that node. The address consists of a host name or IP address, a TCP port number (for the target), and the iSCSI name of the node. An iSCSI node can have any number of addresses, which can change at any time, particularly if they are assigned by way of Dynamic Host Configuration Protocol (DHCP). An SVC node represents an iSCSI node and provides statically allocated IP addresses.

Each iSCSI node, that is, an initiator or target, has a unique iSCSI Qualified Name (IQN), which can have a size of up to 255 bytes. The IQN is formed according to the rules adopted for Internet nodes. The iSCSI qualified name format is defined in RFC3720 and contains (in order) these elements:

- The string iqn.
- A date code specifying the year and month in which the organization registered the domain or subdomain name used as the naming authority string.
- The organizational naming authority string, which consists of a valid, reversed domain or subdomain name.
- Optionally, a colon (:), followed by a string of the assigning organization's choosing, which must make each assigned iSCSI name unique.

For the SVC, the IQN for its iSCSI target is specified as:
iqn.1986-03.com.ibm:2145.<clustername>.<nodename>

On a Windows server, the IQN, that is, the name for the iSCSI initiator, can be defined as:
iqn.1991-05.com.microsoft:<computer name>

The IQNs can be abbreviated by using a descriptive name, known as an alias. An alias can be assigned to an initiator or a target. The alias is independent of the name and does not have to be unique. Because it is not unique, the alias must be used in a purely informational way. It cannot be used to specify a target at login or used during authentication. Both targets and initiators can have aliases.

An iSCSI name provides the correct identification of an iSCSI device irrespective of its physical location. Remember, the IQN is an identifier, not an address.
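Based on the target IQN format shown above, a small Python sketch can assemble the IQN that an SVC node presents. The cluster and node names used here (ITSO_SVC1 and node1) are purely hypothetical examples, and the lowercase conversion is an assumption made for illustration.

def svc_target_iqn(cluster_name, node_name):
    """Build an SVC iSCSI target IQN from the cluster and node names.

    Follows the documented format iqn.1986-03.com.ibm:2145.<clustername>.<nodename>;
    lowercasing the names is an assumption for this example.
    """
    return "iqn.1986-03.com.ibm:2145.{}.{}".format(cluster_name.lower(), node_name.lower())

print(svc_target_iqn("ITSO_SVC1", "node1"))
# iqn.1986-03.com.ibm:2145.itso_svc1.node1

Because the cluster and node names are embedded in the IQN, renaming either of them changes the target IQN, which is why the warning in the next paragraph applies.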


Be careful: Before changing the cluster or node names for an SVC cluster that has servers connected to it by way of iSCSI, be aware that because the cluster and node names are part of the SVC's IQN, you can lose access to your data by changing these names. The SVC GUI displays a specific warning, but the CLI does not.

The iSCSI session, which consists of a login phase and a full feature phase, is completed with a special command. The login phase of iSCSI is identical to the FC port login process (PLOGI). It is used to adjust various parameters between two network entities and to confirm the access rights of an initiator. If the iSCSI login phase is completed successfully, the target confirms the login for the initiator; otherwise, the login is not confirmed and the TCP connection breaks.

As soon as the login is confirmed, the iSCSI session enters the full feature phase. If more than one TCP connection was established, iSCSI requires that each command and response pair go through one TCP connection. Thus, each separate read or write command is carried out without the necessity to trace each request across separate flows. However, separate transactions can be delivered through separate TCP connections within one session.

Figure 2-9 illustrates an overview of the various block-level storage protocols and shows where the iSCSI layer is positioned.

Figure 2-9 Overview of block-level protocol stacks

2.6.1 Use of IP addresses and Ethernet ports


The SVC node hardware has two Ethernet ports. The configuration details of the two Ethernet ports can be displayed by the GUI, the CLI, or the panel on the front of the node. There are two kinds of IP addresses:

- Cluster management IP address
  This address is used for access to the SVC CLI, the SVC GUI, and the Common Information Model Object Manager (CIMOM) that runs on the SVC configuration node. Only one node, the configuration node, presents a cluster management IP address at any one time. There can be two cluster management IP addresses, one for each of the two Ethernet ports. Configuration node failover is also supported.

- Port IP address
  This address is used to perform iSCSI I/O to the cluster. Each node can have a port IP address for each of its ports.

Figure 2-10 shows an overview of the IP addresses on an SVC node port and illustrates how these IP addresses are moved between the nodes of an I/O Group. The management IP addresses and the iSCSI target IP addresses will fail over to the partner node N2 if node N1 fails (and vice versa). The iSCSI target IPs will fail back to their corresponding ports on node N1 when node N1 is running again.

Figure 2-10 SVC 6.1 IP address overview

It is a best practice to keep all of the eth0 ports on all of the nodes in the cluster on the same subnet. The same applies to the eth1 ports; however, it can be a separate subnet from the one used by the eth0 ports. In an SVC cluster running V6.1 code, there is a maximum of 256 iSCSI sessions per SAN Volume Controller iSCSI target.

You can find detailed examples of the SVC port configuration in Chapter 9, SAN Volume Controller operations using the command-line interface on page 439 and in Chapter 10, SAN Volume Controller operations using the GUI on page 579.

2.6.2 iSCSI volume discovery


The iSCSI target implementation on the SVC nodes uses the hardware offload features that are provided by the node's hardware. This implementation results in minimal impact on the node's CPU load for handling iSCSI traffic, and simultaneously delivers excellent throughput (up to 95 MBps of user data) on each of the two 1 Gbps LAN ports. The use of jumbo frames (maximum transmission unit (MTU) sizes greater than 1,500 bytes) is a best practice.

Hosts can discover volumes through one of the following mechanisms:

- Internet Storage Name Service (iSNS)
  The SVC can register itself with an iSNS name server; you set the IP address of this server by using the svctask chcluster command. A host can then query the iSNS server for available iSCSI targets.

- Service Location Protocol (SLP)
  The SVC node runs an SLP daemon, which responds to host requests. This daemon reports the available services on the node, such as the CIMOM service that runs on the configuration node; the iSCSI I/O service can now also be reported.

- iSCSI Send Target request
  The host can also send a Send Target request using the iSCSI protocol to the iSCSI TCP/IP port (port 3260).

2.6.3 iSCSI authentication


Authentication of the host server from the SVC cluster is optional and is disabled by default. The user can choose to enable Challenge Handshake Authentication Protocol (CHAP) authentication, which involves sharing a CHAP secret between the SVC cluster and the host. The SVC, as authenticator, sends a challenge message to the specific server (peer). The server responds with a value that is checked by the SVC. If there is a match, the SVC acknowledges the authentication. If not, the SVC terminates the connection and does not allow any I/O to volumes.

A CHAP secret can be assigned to each SVC host object. The host must then use CHAP authentication to begin a communications session with a node in the cluster. A CHAP secret can also be assigned to the cluster.

Volumes are mapped to hosts, and LUN masking is applied, using the same methods that are used for FC LUNs.

Because iSCSI can be used in networks where data security is a concern, the specification allows for separate security methods. You can set up security, for example, through a method such as IPSec, which is transparent for higher levels such as iSCSI because it is implemented at the IP level. Details regarding securing iSCSI can be found in RFC3723, Securing Block Storage Protocols over IP, which is available at this website:
http://tools.ietf.org/html/rfc3723
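The challenge/response exchange described above follows the standard CHAP scheme (RFC 1994), in which the response is an MD5 hash over the message identifier, the shared secret, and the challenge. The following Python sketch shows that calculation in general terms; it is not taken from the SVC code, and the secret and challenge values are invented for the example.

import hashlib
import os

def chap_response(identifier, secret, challenge):
    """Standard CHAP response: MD5 over the identifier byte, shared secret, and challenge."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).hexdigest()

secret = b"my-chap-secret"            # shared between the SVC cluster and the host
challenge = os.urandom(16)            # sent by the authenticator (the SVC)
identifier = 1

# The host (peer) computes the response; the SVC computes the same value and
# compares it. A match completes authentication; a mismatch drops the session.
print(chap_response(identifier, secret, challenge))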

2.6.4 iSCSI multipathing


A multipathing driver allows the host to send commands down multiple paths to the SVC to reach the same volume. A fundamental multipathing difference exists between FC and iSCSI environments.
If FC-attached hosts lose sight of their FC target and its volumes go offline, for example, due to a problem in the target node, its ports, or the network, the host has to use a separate SAN path to continue I/O. A multipathing driver is therefore always required on the host.

iSCSI-attached hosts see a pause in I/O when a (target) node is reset, but (this action is the key difference) the host is reconnected to the same IP target, which reappears after a short period of time, and its volumes continue to be available for I/O. iSCSI therefore allows failover without host multipathing. To achieve this, the partner node in the I/O Group takes over the port IP addresses and iSCSI names of a failed node.

Be aware: With the iSCSI implementation in SVC, an IP address failover/failback between partner nodes of an I/O Group only takes place in cases of a planned or unplanned node restart (node offline). When the partner node returns to online status, there is a delay of 5 minutes before failback of the IP addresses and iSCSI names occurs.

A host multipathing driver for iSCSI is required if you want these capabilities:
- Protecting a server from network link failures
- Protecting a server from network failures, if the server is connected through two separate networks
- Providing load balancing on the server's network links

2.7 Advanced Copy Services overview


The SVC supports the following copy services:
- Synchronous remote copy
- Asynchronous remote copy
- FlashCopy with a full target
- Block virtualization and data migration

Copy services functions are implemented within a single SVC cluster or between multiple SVC clusters. The copy services layer sits above, and operates independently of, the function or characteristics of the underlying disk subsystems used to provide storage resources to an SVC cluster.

2.7.1 Synchronous/Asynchronous remote copy


The general application of remote copy seeks to maintain two copies of data. Often the two copies will be separated by distance, but not necessarily. The remote copy can be maintained in one of two modes: synchronous or asynchronous. With the SVC, Metro Mirror and Global Mirror are the IBM branded terms for synchronous remote copy and asynchronous remote copy, respectively.

Synchronous remote copy ensures that updates are committed at both the primary and the secondary before the application considers the updates complete; therefore, the secondary is fully up-to-date if it is needed in a failover. However, the application is fully exposed to the latency and bandwidth limitations of the communication link to the secondary. In a truly remote situation, this extra latency can have a significant adverse effect on application performance.
Special configuration guidelines exist for SAN fabrics that are used for data replication. It is necessary to consider the distance and the available bandwidth of the intersite links. The SVC Support Portal contains details regarding these guidelines:
http://www-947.ibm.com/support/entry/portal/Overview/Hardware/System_Storage/Storage_software/Storage_virtualization/SAN_Volume_Controller_%282145%29

Refer to 8.5, Metro Mirror on page 388 for more details about the SVC's synchronous mirroring.

In asynchronous remote copy, the application is provided acknowledgement that the write is complete prior to the write being committed at the secondary. Thus, on a failover, certain updates (data) might be missing at the secondary. The application must have an external mechanism for recovering the missing updates, if possible. This mechanism can involve user intervention. Recovery on the secondary site involves bringing up the application on this recent backup and then rolling forward or backward to the most recent commit point.

The asynchronous remote copy must present at the secondary a view to the application that might not contain the latest updates, but is always consistent. If consistency has to be guaranteed at the secondary, then applying updates in an arbitrary order is not an option. At the primary side, the application is enforcing an ordering implicitly by not scheduling an I/O until a previous dependent I/O has completed. By applying I/Os at the secondary in the order in which they were completed at the primary, the secondary will always reflect a state that would have been seen at the primary if we had frozen I/O there.

The SVC Global Mirror protocol operates to identify small groups of I/Os that are known to be active concurrently in the primary cluster. The process to identify these groups of I/Os does not significantly contribute to the latency of these I/Os when they execute at the primary. These groups are applied at the secondary in the order in which they were executed at the primary.

The secondary data copy is not accessible for application I/O. However, the SVC allows read-only access to the secondary storage when it contains a consistent image. This capability is only intended to allow boot-time operating system discovery to complete without error, so that any hosts at the secondary site can be ready to start the applications with minimum delay, if required. For example, many operating systems need to read logical block address (LBA) 0 to configure a logical unit. The underlying storage at the primary or secondary of a remote copy will normally be RAID storage, but it can be any storage that can be managed by the SVC.

Refer to 8.7, Global Mirror on page 413 for more details about the SVC's asynchronous mirroring.

Most clients will aim to automate failover or recovery of the remote copy through failover management software. SVC provides Simple Network Management Protocol (SNMP) traps and interfaces to enable this automation. IBM support for automation is provided by IBM Tivoli Storage Productivity Center for Replication. The Tivoli documentation can also be accessed online at the IBM Tivoli Storage Productivity Center information center:
http://publib.boulder.ibm.com/infocenter/tivihelp/v4r1/index.jsp

2.7.2 FlashCopy
FlashCopy makes a copy of a source volume on a target volume. The original content of the target volume is lost. After the copy operation has started, the target volume has the contents of the source volume as it existed at a single point in time. Although the copy operation takes time, the resulting data at the target appears as though the copy was made instantaneously. FlashCopy is sometimes described as an instance of a Time-Zero copy (T0) or a Point in Time (PiT) copy technology. Although the FlashCopy operation takes a finite time, this time is several orders of magnitude less than the time that is required to copy the data using conventional techniques.


FlashCopy can be performed on multiple source and target volumes. FlashCopy permits the management operations to be coordinated so that a common single point in time is chosen for copying target volumes from their respective source volumes. This capability allows a consistent copy of data that spans multiple volumes.
SVC also permits multiple target volumes to be FlashCopied from the same source volume. This capability can be used to create images from separate points in time for the source volume, and to create multiple images from a source volume at a common point in time. Source and target volumes can be thin-provisioned volumes.
Reverse FlashCopy enables target volumes to become restore points for the source volume without breaking the FlashCopy relationship and without having to wait for the original copy operation to complete. SVC supports multiple targets and thus multiple rollback points.
Most clients aim to integrate the FlashCopy feature for point in time copies and quick recovery of their applications and databases. IBM support is provided by Tivoli Storage FlashCopy Manager:
http://www-01.ibm.com/software/tivoli/products/storage-flashcopy-mgr/
You can read a detailed description of FlashCopy copy services in Chapter 8, "Advanced Copy Services" on page 363.
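The following hedged sketch shows how a single FlashCopy mapping is typically created and started from the SVC CLI (the volume and mapping names are examples only; Chapter 8 covers consistency groups and the full set of options):

   # Create a point-in-time mapping from a source volume to an existing target volume
   svctask mkfcmap -source DB_VOL01 -target DB_VOL01_T0 -name FC_DB01

   # Prepare the mapping (flushes the cache) and then trigger the copy
   svctask prestartfcmap FC_DB01
   svctask startfcmap FC_DB01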

2.8 SVC cluster overview


In simple terms, a cluster is a collection of servers that together provide a set of resources to a client. The key point is that the client has no knowledge of the underlying physical hardware of the cluster. The client is isolated and protected from changes to the physical hardware. This arrangement offers many benefits including, most significantly, high availability. Resources on clustered servers act as highly available versions of unclustered resources. If a node (an individual computer) in the cluster is unavailable or too busy to respond to a request for a resource, the request is transparently passed to another node that is capable of processing it. The clients are unaware of the exact locations of the resources they are using.
The SVC is a collection of up to eight cluster nodes, which are added in pairs. These nodes are managed as a set (cluster), and they present a single point of control to the administrator for configuration and service activity. The eight-node limit for an SVC cluster is a limitation imposed by the microcode and is not a limit of the underlying architecture. Larger cluster configurations might be available in the future. The SVC demonstrated its ability to scale during a 2008 project:
http://www-03.ibm.com/press/us/en/pressrelease/24996.wss
Based on a 14-node cluster, coupled with solid-state drive controllers, the project achieved a data rate of over one million IOPS with a response time of under 1 millisecond (ms).
Although the SVC code is based on a purpose-optimized Linux kernel, the clustering feature is not based on Linux clustering code. The cluster software used within SVC, that is, the event manager cluster framework, is based on the outcome of the COMPASS research project. It is the key element that isolates the SVC application from the underlying hardware nodes. The cluster software makes the code portable and provides the means to keep the single instances of the SVC code running on separate cluster nodes in sync. Node restarts (during a code upgrade), the addition of new nodes, the removal of old nodes from a cluster, or node failures therefore do not impact the SVC's availability.
All active nodes of a cluster must know that they are members of the cluster. Especially in situations such as the split-brain scenario, where single nodes lose contact with other nodes, a solid mechanism is required to decide which nodes form the active cluster. A worst case scenario is a cluster that splits into two separate clusters.
Within an SVC cluster, the voting set and a quorum disk are responsible for the integrity of the cluster. If nodes are added to a cluster, they are added to the voting set. If nodes are removed, they are also quickly removed from the voting set. Over time the voting set, and thus the nodes in the cluster, can completely change, so that the cluster migrates onto a completely separate set of nodes from the set on which it started. The SVC cluster implements a dynamic quorum: following a loss of nodes, if the cluster can continue operation, it adjusts the quorum requirement so that further node failures can be tolerated.
The node with the lowest Node Unique ID in a cluster becomes the boss node for the group of nodes, and it determines (from the quorum rules) whether the nodes can operate as the cluster. This node also presents the maximum of two cluster IP addresses on one or both of its Ethernet ports to allow access for cluster management.

2.8.1 Quorum disks


The cluster uses the quorum disk for two purposes: as a tie breaker in the event of a SAN fault, when exactly half of the nodes that were previously members of the cluster are present; and to hold a copy of important cluster configuration data. Just over 256 MB is reserved for this purpose on each quorum disk candidate.
There is only one active quorum disk in a cluster; however, the cluster uses three MDisks as quorum disk candidates. The cluster automatically selects the actual active quorum disk from the pool of assigned quorum disk candidates. If a tiebreaker condition occurs, the half of the cluster nodes that is able to reserve the quorum disk after the split has occurred locks the disk and continues to operate. The other half stops its operation. This design prevents both sides from becoming inconsistent with each other.
When MDisks are added to the SVC cluster, the SVC cluster checks each MDisk to see whether it can be used as a quorum disk. If the MDisk fulfills the requirements, the SVC assigns the first three MDisks added to the cluster as quorum candidates. One of them is selected as the active quorum disk.
Note: To be considered eligible as a quorum disk, a LUN must meet the following criteria:
- It must be presented by a disk subsystem that is supported to provide SVC quorum disks.
- It has been manually allowed to be a quorum disk candidate using the svctask chcontroller -allow_quorum yes command.
- It must be in managed mode (no image mode disks).
- It must have sufficient free extents to hold the cluster state information, plus the stored configuration metadata.
- It must be visible to all of the nodes in the cluster.


If possible, the SVC places the quorum candidates on separate disk subsystems. After the quorum disk has been selected, however, no attempt is made to ensure that the other quorum candidates are presented through separate disk subsystems.
Important: Verifying quorum disk placement, and adjusting it so that the candidates reside on separate storage systems (if possible), reduces the dependency on a single storage system and can increase quorum disk availability significantly.
Quorum disk candidates and the active quorum disk in a cluster can be listed by the svcinfo lsquorum command. When the set of quorum disk candidates has been chosen, it is fixed. However, a new quorum disk candidate can be chosen in one of these conditions:
- When the administrator requests that a specific MDisk becomes a quorum disk by using the svctask setquorum command
- When an MDisk that is a quorum disk is deleted from a storage pool
- When an MDisk that is a quorum disk changes to image mode
An offline MDisk will not be replaced as a quorum disk candidate.
For disaster recovery purposes, a cluster needs to be regarded as a single entity, so the cluster and the quorum disk need to be colocated. There are special considerations concerning the placement of the active quorum disk for a stretched or split cluster and split I/O Group configurations. Details are available at this website:
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003311
Important: Running an SVC cluster without a quorum disk can seriously affect your operation. A lack of available quorum disks for storing metadata prevents any migration operation (including a forced MDisk delete). Mirrored volumes can be taken offline if there is no quorum disk available, because the synchronization status for mirrored volumes is recorded on the quorum disk.
During the normal operation of the cluster, the nodes communicate with each other. If a node is idle for a few seconds, a heartbeat signal is sent to ensure connectivity with the cluster. If a node fails for any reason, the workload that is intended for it is taken over by another node until the failed node has been restarted and readmitted to the cluster (which happens automatically). If the microcode on a node becomes corrupted, resulting in a failure, the workload is transferred to another node. The code on the failed node is repaired, and the node is readmitted to the cluster (again, all automatically).
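The following hedged example shows the quorum-related CLI commands mentioned in this section in context (the controller and MDisk names are illustrative only):

   # List the current quorum disk candidates and the active quorum disk
   svcinfo lsquorum

   # Allow a specific storage controller to provide quorum disk candidates
   svctask chcontroller -allow_quorum yes controller0

   # Request that a specific MDisk becomes quorum candidate index 0
   svctask setquorum -quorum 0 mdisk8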

2.8.2 Split I/O groups or split cluster


An I/O group is formed by a pair of SVC nodes. These nodes act as failover nodes for each other, and hold mirrored copies of cached volume writes. See 2.4.2, I/O Groups on page 14 for more information. Normally these nodes are physically located within the same rack, in the same computer room. To provide protection against failures that affect an entire location (for example, a power failure), you can split a single cluster between two physical locations, up to 10 km apart. In this configuration, special attention must be given to the quorum disks to ensure successful cluster failover.


Generally, when the nodes in a cluster have been split across sites, the SVC cluster must be configured as listed here:
- Site 1 contains half of the SAN Volume Controller cluster nodes plus one quorum disk candidate.
- Site 2 contains half of the SAN Volume Controller cluster nodes plus one quorum disk candidate.
- Site 3 contains the active quorum disk.
This configuration ensures that a quorum disk is always available, even after a single site failure.
All internode communication between SVC node ports in the same cluster must not cross ISLs. The same is also true for SVC to back-end disk controller traffic. This means that the FC path between sites cannot use an ISL hop: the remote node must have a direct path to the switch to which its partner and the other cluster nodes are connected. To reach the 10 km maximum distance, longwave SFPs must be used in the nodes.
Other SVC configuration rules also continue to apply; for example, the Ethernet port eth0 on every SVC node, at the local or remote site, must still be connected to the same subnet or subnets. For more details about split cluster configuration, see 3.3.6, "Split-cluster configuration" on page 77.

2.8.3 Cache
The primary benefit of storage cache is to improve I/O response time. Reads and writes to a magnetic disk drive suffer from both seek and latency time at the drive level, which can result in 1 ms to 10 ms of response time (for an enterprise-class disk).
The 2145-CF8 nodes combined with SVC 6.1 provide 24 GB of memory per node, or 48 GB per I/O Group, or 192 GB per eight-node SVC cluster. The SVC provides a flexible cache model, and the node's memory can be used as read or write cache. The size of the write cache is limited to a maximum of 12 GB of the node's memory. Depending on the current I/O conditions on a node, the entire 24 GB of memory can be fully used as read cache.
Cache is allocated in 4 KB segments. A segment holds part of one track. A track is the unit of locking and destage granularity in the cache. The cache virtual track size is 32 KB (eight segments). A track might only be partially populated with valid pages. The SVC coalesces writes up to the 32 KB track size if the writes reside in the same track prior to destage; for example, if 4 KB is written into a track and another 4 KB is written to another location in the same track, they are destaged together. Therefore, the blocks written from the SVC to the disk subsystem can be any size between 512 bytes and 32 KB.
When data is written by the host, the preferred node within the I/O Group saves the data in its cache. Before the cache returns completion to the host, the write must be mirrored to, that is, copied into the cache of, its partner node for availability reasons. After a copy of the written data exists on both nodes, the cache returns completion to the host. A volume that has not received a write update during the last two minutes will automatically have all of its modified data destaged to disk.
If one node of an I/O Group is missing, due to a restart or a hardware failure, the remaining node empties all of its write cache and proceeds in an operation mode that is referred to as write-through mode. A node operating in write-through mode writes data directly to the disk subsystem before sending an I/O complete status message back to the host. Running in this mode can degrade the performance of the specific I/O Group.


Write cache is partitioned by storage pool. This feature restricts the maximum amount of write cache that a single storage pool can allocate in a cluster. Table 2-3 shows the upper limit of write cache data that a single storage pool in a cluster can occupy.
Table 2-3 Upper limit of write cache per storage pool

   Number of storage pools        Upper limit of write cache
   One storage pool               100%
   Two storage pools              66%
   Three storage pools            40%
   Four storage pools             33%
   More than four storage pools   25%

For in-depth information about SVC cache partitioning, see IBM SAN Volume Controller 4.2.1 Cache Partitioning, REDP-4426, which is available at this website:
http://www.redbooks.ibm.com/abstracts/redp4426.html?Open
An SVC node treats part of its physical memory as non-volatile, meaning that its contents are preserved across power losses and resets. Bitmaps for FlashCopy and remote mirroring relationships, the virtualization table, and the write cache are held in this non-volatile memory. In the event of a disruption or external power loss, the physical memory is copied to a file in the file system on the node's internal disk drive, so that the contents can be recovered when external power is restored. The uninterruptible power supply units, which are delivered with each node's hardware, ensure that there is sufficient internal power to keep a node operational to perform this dump when external power is removed. After dumping the content of the non-volatile part of the memory to disk, the SVC node shuts down.

2.8.4 Cluster management


The SVC can be managed by one of the following interfaces:
- A textual command-line interface (CLI) accessed through a Secure Shell (SSH) connection, for example PuTTY
- A web browser-based graphical user interface (GUI)
- Tivoli Storage Productivity Center (TPC) Basic Edition or Standard Edition; the Basic Edition is supplied with the SVC System Storage Productivity Center console
Starting with SVC 6.1, the GUI and a web server are installed in the SVC cluster nodes. This means that any browser, if pointed at the cluster IP address, is able to access the management GUI.

Management console
The management console for SVC is referred to as the IBM System Storage Productivity Center (SSPC). SSPC is a hardware and software solution that includes a suite of storage infrastructure management software that can centralize, automate, and simplify the management of complex and heterogeneous storage environments.

2.8.5 IBM System Storage Productivity Center


IBM System Storage Productivity Center is based on server hardware (IBM System x-based) and a set of preinstalled and optional software modules. Several of these preinstalled modules provide base functionality only. Modules providing enhanced functionality can be activated by installing separate licenses.


IBM System Storage Productivity Center contains the functions listed here:
- Tivoli Integrated Portal
  IBM Tivoli Integrated Portal is a standards-based architecture for web administration. The installation of Tivoli Integrated Portal is required to enable single sign-on (SSO) for Tivoli Storage Productivity Center. Tivoli Storage Productivity Center now installs Tivoli Integrated Portal along with Tivoli Storage Productivity Center.
- Tivoli Storage Productivity Center
  IBM Tivoli Storage Productivity Center Basic Edition is preinstalled on the IBM System Storage Productivity Center server. There are several other commercially available packages of Tivoli Storage Productivity Center that provide additional functionality beyond Tivoli Storage Productivity Center Basic Edition. You can activate these packages by adding the specific licenses to the preinstalled Basic Edition:
  - Tivoli Storage Productivity Center for Disk allows you to monitor storage systems for performance.
  - Tivoli Storage Productivity Center for Data allows you to collect and monitor file systems and databases.
  - Tivoli Storage Productivity Center Standard Edition is a bundle that includes all of the other packages, along with SAN planning tools that make use of information that is collected from the Tivoli Storage Productivity Center components.
- Tivoli Storage Productivity Center for Replication
  The functions of Tivoli Storage Productivity Center for Replication provide the management of the IBM FlashCopy, Metro Mirror, and Global Mirror capabilities for the DS8000, IBM SAN Volume Controller, and others. This package can also be activated by installing the specific licenses.
- Web browser to access the GUI
- SSH client (PuTTY)
- DS CIM agents
- Windows Server 2008 Enterprise Edition
- Several base software packages that are required for Tivoli Storage Productivity Center
Optional software packages, such as anti-virus software or DS3000/4000/5000 Storage Manager, can be installed on the IBM System Storage Productivity Center server by the client. Using Tivoli Storage Productivity Center or IBM Systems Director provides greater integration points and launch in-context capabilities.
Figure 2-11 on page 41 provides an overview of the SVC management components. We describe the details in Chapter 4, "SAN Volume Controller initial configuration" on page 95. You can obtain further details about the IBM System Storage Productivity Center in IBM System Storage Productivity Center User's Guide Version 1 Release 4, SC27-2336, and in IBM System Storage Productivity Center Introduction and Planning Guide, SC23-8824.


Figure 2-11 SVC management overview

2.8.6 User authentication


SVC 6.1 has two authentication methods, local and remote:
- Local authentication: Authentication is performed on the SVC cluster. Local users must provide a password, a Secure Shell (SSH) key, or both.
- Remote authentication: Authentication uses a remote authentication server, which for SVC is the Tivoli Embedded Security Services, to validate passwords. The Tivoli Embedded Security Services is part of the Tivoli Integrated Portal, which is one of the three components that come with Tivoli Storage Productivity Center (Tivoli Storage Productivity Center, Tivoli Storage Productivity Center for Replication, and Tivoli Integrated Portal) and that are preinstalled on the IBM System Storage Productivity Center.
Each SVC cluster can have multiple users defined. The cluster maintains an audit log of successfully executed commands, indicating which users performed which actions at what times.

SVC user names


User names must be unique and can contain up to 256 printable ASCII characters:
- Forbidden characters are the single quotation mark ('), colon (:), percent symbol (%), asterisk (*), comma (,), and double quotation mark (").
- A user name cannot begin or end with a blank.
Passwords for local users can be up to 64 printable ASCII characters. There are no forbidden characters, but passwords cannot begin or end with blanks.
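As a hedged illustration, a local user that follows these naming rules is typically created from the CLI as shown here (the user name, group, and password are examples only, and the exact set of optional flags depends on the code level):

   svctask mkuser -name operator1 -usergrp Monitor -password Passw0rd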


SVC superuser
There is a special local user called the superuser that always exists on every cluster and cannot be deleted. Its password is set by the user during cluster initialization. The superuser password can be reset from the node's front panel; this reset function can be disabled, although doing so makes the cluster inaccessible if all of the users forget their passwords or lose their SSH keys.
To register an SSH key for the superuser to provide command-line access, use the Service Assistant (Configure CLI Access) to assign a temporary key. However, that key is lost during a node restart, so the more permanent way is to add the key through the normal GUI, that is, through the User Management, superuser, Properties panels. The superuser is always a member of user group 0, which has the most privileged role within the SVC.

SVC Service Assistant Tool


SVC V6.1 introduces a new method for performing service tasks on the system. As well as being able to perform various service tasks from the front panel, you can also service a node through an Ethernet connection using a web browser to access a GUI interface. The function is called the Service Assistant Tool and requires you to enter the superuser password during login.

2.8.7 SVC roles and user groups


Each user group is associated with a single role. The role for a user group cannot be changed, but additional new user groups (with one of the defined roles) can be created. User groups are used for local and remote authentication. Because SVC knows of five roles, there are, by default, five user groups defined in an SVC cluster; see Table 2-4.
Table 2-4 User groups

   User group ID   User group      Role
   0               SecurityAdmin   SecurityAdmin
   1               Administrator   Administrator
   2               CopyOperator    CopyOperator
   3               Service         Service
   4               Monitor         Monitor

The access rights for a user belonging to a specific user group are defined by the role that is assigned to the user group; the role defines what a user can or cannot do on an SVC cluster. Table 2-5 on page 43 lists the roles, ordered from the least privileged (Monitor) role at the top down to the most privileged (SecurityAdmin) role.


Table 2-5 Commands permitted for each role

Role: Monitor
Allowed commands: All svcinfo (informational) commands, plus: svctask finderr, dumperrlog, dumpinternallog, chcurrentuser, ping, svcconfig backup

Role: Service
Allowed commands: All commands allowed for the Monitor role, plus: applysoftware, setlocale, addnode, rmnode, cherrstate, writesernum, detectmdisk, includemdisk, clearerrlog, cleardumps, settimezone, stopcluster, startstats, stopstats, and settime

Role: CopyOperator
Allowed commands: All commands allowed for the Monitor role, plus: prestartfcconsistgrp, startfcconsistgrp, stopfcconsistgrp, chfcconsistgrp, prestartfcmap, startfcmap, stopfcmap, chfcmap, startrcconsistgrp, stoprcconsistgrp, switchrcconsistgrp, chrcconsistgrp, startrcrelationship, stoprcrelationship, switchrcrelationship, chrcrelationship, and chpartnership

Role: Administrator
Allowed commands: All commands, except: chauthservice, mkuser, rmuser, chuser, mkusergrp, rmusergrp, chusergrp, and setpwdreset

Role: SecurityAdmin
Allowed commands: All commands.

2.8.8 SVC local authentication


Local users are users that are managed entirely on the cluster without the intervention of a remote authentication service. Local users must have a password, an SSH public key, or both. The password is used for authentication and the SSH key is used for command-line or file transfer (SecureCopy) access. Therefore, for local users, if no SSH key has been specified, the user can only access the SVC cluster through the GUI.
Local users: Be aware that local users are created per SVC cluster. Each user has a name, which must be unique across all users in one cluster. If you want to allow access for a user on multiple clusters, you have to define the user in each cluster with the same name and the same privileges. A local user always belongs to only one user group.
Figure 2-12 on page 44 shows an overview of local authentication within the SVC.


Figure 2-12 Simplified overview of SVC local authentication

2.8.9 SVC remote authentication and single sign-on


You can configure an SVC cluster to use a remote authentication service. Remote users are users that are managed by the remote authentication service; they only have to be defined in the SVC cluster if command-line or file-transfer access is required. No local user definition is required for GUI-only remote access.
For command-line access, the remote authentication flag has to be set for the user, and an SSH key and a password have to be defined for the user. Remember that for users requiring CLI access with remote authentication, the password must also be defined locally for those users. Remote users cannot belong to any local user group, because the remote authentication service, for example, a Lightweight Directory Access Protocol (LDAP) directory server such as IBM Tivoli Directory Server or Microsoft Active Directory, delivers the user group information.
Figure 2-13 on page 45 gives an overview of SVC remote authentication.


Figure 2-13 Simplified overview of SVC 6.1 remote authentication

The authentication service supported by SVC is the Tivoli Embedded Security Services server component, level 6.2. The Tivoli Embedded Security Services server provides the following key features:
- Tivoli Embedded Security Services isolates the SVC from the actual directory protocol in use, which means that the SVC communicates only with Tivoli Embedded Security Services to get its authentication information. The type of protocol that is used to access the central directory, or the kind of directory system that is used, is transparent to SVC.
- Tivoli Embedded Security Services provides a secure token facility that is used to enable single sign-on (SSO). SSO means that users do not have to log in multiple times when using what appears to them to be a single system. It is used within Tivoli Storage Productivity Center. When SVC access is launched from within Tivoli Storage Productivity Center, the user will not have to log on to the SVC, because the user has already logged in to Tivoli Storage Productivity Center.

Using a remote authentication service


Follow these steps to use SVC with a remote authentication service:
1. Configure the cluster with the location of the remote authentication server.
   Change settings using the following command:
   svctask chauthservice.......
   View current settings using the following command:
   svcinfo lscluster.......
   SVC supports either an HTTP or HTTPS connection to the Tivoli Embedded Security Services server. If the HTTP option is used, the user and password information is transmitted in clear text over the IP network.

2. Configure user groups on the cluster matching those user groups that are used by the authentication service. For each group of interest that is known to the authentication service, there must be an SVC user group with the same name and the remote setting enabled. For example, you can have a group called sysadmins, whose members require the SVC Administrator role. Configure this group using the following command:
   svctask mkusergrp -name sysadmins -remote -role Administrator
   If none of a user's groups match any of the SVC user groups, the user is not permitted to access the cluster.
3. Configure users that do not require SSH access. Any SVC users that are to be used with the remote authentication service and do not require SSH access need to be deleted from the system. The superuser cannot be deleted; it is a local user and cannot use the remote authentication service.
4. Configure users that do require SSH access. Any SVC users that are to be used with the remote authentication service and do require SSH access must have their remote setting enabled and the same password set on the cluster and the authentication service. The remote setting instructs SVC to consult the authentication service for group information after the SSH key authentication step, to determine the user's role. The need to configure the user's password on the cluster, in addition to the authentication service, is due to a limitation in the Tivoli Embedded Security Services server software.
5. Configure the system time. For correct operation, both the SVC cluster and the system running the Tivoli Embedded Security Services server must have the exact same view of the current time; the easiest way is to have them both use the same Network Time Protocol (NTP) server. Failure to follow this step can lead to poor interactive performance of the SVC user interface or incorrect user-role assignments.
Also, Tivoli Storage Productivity Center leverages the Tivoli Integrated Portal infrastructure and its underlying WebSphere Application Server capabilities to make use of an LDAP registry and enable single sign-on (SSO). You can obtain more information about implementing SSO within Tivoli Storage Productivity Center 4.1 in the chapter about LDAP authentication support and single sign-on in IBM Tivoli Storage Productivity Center V4.1 Release Guide, SG24-7725, which is available at this website:
http://www.redbooks.ibm.com/redpieces/abstracts/sg247725.html?Open

2.9 SVC hardware overview


The current SVC 6.1 hardware nodes, as defined in the underlying COMPASS architecture, are based on Intel processors with standard PCI-X adapters to interface with the SAN and the LAN.


Note: Since the writing of this book, IBM has announced that the IBM System Storage SAN Volume Controller Storage Engine offers 10 Gigabit Ethernet connectivity. For more information about this topic, see:
http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=an&subtype=ca&appname=gpateam&supplier=897&letternum=ENUS111-083

The new SVC Storage Engine adds 10 Gigabit Ethernet connectivity to help improve throughput. This solution includes a Common Information Model (CIM) agent to enable unified storage management based on open standards for units that comply with CIM agent standards.
The SVC 2145-CF8 Storage Engine has the following key hardware features:
- Intel Core i7 Xeon 5500 2.4 GHz quad-core processor (Nehalem)
- 24 GB of memory, with future growth possibilities
- Four 8 Gbps FC ports
- Up to four solid-state drives, enabling scale-out high performance solid-state drive support with SVC V5.1
- Two redundant power supplies
- Double the bandwidth of its predecessor node (2145-8G4)
- Up to double the IOPS of its predecessor node (2145-8G4)
- A 19-inch rack-mounted enclosure
- IBM Systems Director Active Energy Manager-enabled
The 2145-CF8 nodes can be easily integrated within existing SVC clusters. The nodes can be intermixed in pairs within existing SVC clusters. Mixing node types in a cluster results in volume performance characteristics that depend on the node type in the volume's I/O Group. The standard nondisruptive cluster upgrade process can be used to replace older engines with new 2145-CF8 engines; see IBM SAN Volume Controller Software Installation and Configuration Guide, GC27-2286, for more information about this topic. Integration into existing clusters requires that the cluster runs at least the SVC 5.1 level of code; the 2145-CF8 only runs SVC V5.1 or above.
The nodes are 1U high, fit into 19-inch racks, and use the same uninterruptible power supply unit models as previous models. Figure 2-14 shows the front-side view of the SVC 2145-CF8 node.


Figure 2-14 SVC 2145-CF8 storage engine

Remember that several SVC features, such as iSCSI, are software features and are therefore available on all node types running SVC V5.1 or above.

2.9.1 Fibre Channel interfaces


The IBM SAN Volume Controller provides link speeds of 2/4/8 Gbps on SVC 2145-CF8 nodes. The nodes come with a 4-port HBA. The FC ports on these node types auto-negotiate the link speed that is used with the FC switch. The ports normally operate at the maximum speed that is supported by both the SVC port and the switch. However, if a large number of link errors occur, the ports might operate at a lower speed than what is supported.
The actual port speed for each of the four ports can be displayed through the GUI, the CLI, the node's front panel, and also by light-emitting diodes (LEDs) that are placed at the rear of the node. For details, consult the node-specific SVC hardware installation guides:
- IBM System Storage SAN Volume Controller Model 2145-CF8 Hardware Installation Guide, GC52-1356
- IBM System Storage SAN Volume Controller Model 2145-8G4 Hardware Installation Guide, GC27-2220
The SVC imposes no limit on the FC optical distance between SVC nodes and host servers. FC standards, along with small form-factor pluggable optics (SFP) capabilities and cable type, dictate the maximum FC distances that are supported. If longwave SFPs are used in the SVC nodes, the longest supported FC link between the SVC and switch is 10 km (6.21 miles). Table 2-6 shows the actual cable lengths that are supported with shortwave SFPs.
Table 2-6 Overview of supported cable length

   FC link speed          OM1 (M6) standard     OM2 (M5) standard     OM3 (M5E) optimized
                          62.5/125 micron       50/125 micron         50/125 micron
   2 Gbps FC              150 m                 300 m                 500 m
   4 Gbps FC              70 m                  150 m                 380 m
   8 Gbps FC (limiting)   21 m                  50 m                  150 m

Table 2-7 shows the rules that apply with respect to the number of interswitch link (ISL) hops allowed in a SAN fabric between SVC nodes or the cluster.
Table 2-7 Number of supported ISL hops

   Between nodes in an I/O Group:           0 (connect to the same switch)
   Between nodes in separate I/O Groups:    0 (connect to the same switch)
   Between nodes and the disk subsystem:    1 (recommended: 0, connect to the same switch)
   Between nodes and the host server:       maximum 3

2.9.2 LAN interfaces


The 2145-CF8 node has two 1 Gbps LAN ports available. The cluster configuration node can be accessed on either eth0 or eth1. The cluster can have two IPv4 and two IPv6 addresses that are used for configuration purposes (CLI or CIMOM access). The cluster can therefore be managed by SSH clients or GUIs on System Storage Productivity Centers on separate physical IP networks. This capability provides redundancy in the event of a failure of one of these IP networks. Support for iSCSI introduces one additional IPv4 and one additional IPv6 address for each SVC node port; these IP addresses are independent of the cluster configuration IP addresses. See Figure 2-10 on page 31 for an IP address overview.
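As a hedged sketch of how the two cluster management addresses are typically assigned from the CLI (the IP addresses here are examples only, and the exact parameters should be verified against the Command-Line Interface User's Guide for your code level):

   # Set the cluster management IP address reachable through port 1 (eth0)
   svctask chclusterip -port 1 -clusterip 10.10.10.20 -gw 10.10.10.1 -mask 255.255.255.0

   # Set the second cluster management IP address reachable through port 2 (eth1)
   svctask chclusterip -port 2 -clusterip 192.168.50.20 -gw 192.168.50.1 -mask 255.255.255.0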

2.10 Solid-state drives


Solid-state drives, or more specifically single-level cell (SLC) or multi-level cell (MLC) NAND flash-based disks (for the sake of simplicity, they are referred to as solid-state drives elsewhere in this book), can be used to overcome a growing problem that is known as the memory/storage bottleneck. At the time of writing, internal SSD drives are not supported with SVC 6.1.

2.10.1 Storage bottleneck problem


The memory/storage bottleneck describes the steadily growing gap between the time required for a CPU to access data located in its cache/memory (typically in nanoseconds) and data located on external storage (typically in milliseconds). Although CPUs and cache/memory devices continually improve their performance, this is not true in general for mechanical disks that are used as external storage. Figure 2-15 illustrates these access time differences.


Figure 2-15 The memory/storage bottleneck

The actual times shown are not that important, but note the dramatic difference between accessing data that is located in cache and data that is located on external disk. We have added a second scale to Figure 2-15, which gives you an idea of how long it takes to access the data in a scenario where a single CPU cycle takes 1 second. This scale illustrates the importance of future storage technologies closing, or at least reducing, the gap between access times for data stored in cache/memory and access times for data stored on an external medium.
Since magnetic disks were first introduced by IBM in 1956 (RAMAC), they have shown remarkable progress regarding capacity growth, form factor/size reduction, price decrease ($/GB), and reliability. However, the number of I/Os that a disk can handle and the response time that it takes to process a single I/O have not improved at the same rate, although they have certainly improved. In actual environments, we can expect from today's enterprise-class FC or serial-attached SCSI (SAS) disks up to 200 IOPS per disk, with an average response time (latency) of approximately 6 ms per I/O.
To summarize, today's rotating disks continue to advance in capacity (several TBs), form factor/footprint (3.5 inches, 2.5 inches, and 1.8 inches), and price ($/GB), but they are not getting much faster. The limiting factor is the number of revolutions per minute (RPM) that a disk can perform (say 15,000), which defines the time that is required to access a specific data block on a rotating device. There will likely be small improvements in the future, but a big step, such as doubling the RPM, if technically even possible, inevitably has an associated increase in power consumption and a price that will be an inhibitor.

2.10.2 Solid-state drive solution


Solid-state drives can provide a solution for this dilemma. No rotating parts means improved robustness and lower power consumption. A remarkable improvement in I/O performance and a massive reduction in the average I/O response time (latency) are the compelling reasons to use solid-state drives in today's storage subsystems.


Enterprise-class solid-state drives typically deliver 50,000 read and 20,000 write IOPS, with latencies of typically 50 microseconds for reads and 800 microseconds for writes. Their form factors (2.5 inches/3.5 inches) and their interfaces (FC/SAS/Serial Advanced Technology Attachment (SATA)) make them easy to integrate into existing disk shelves.

2.10.3 Solid-state drive market


The solid-state drive storage market is rapidly evolving. The key differentiator among today's solid-state drive products that are available on the market is not the storage medium, but the logic in the disk internal controllers. The top priorities in today's controller development are optimally handling what is referred to as wear leveling, which defines the controller's capability to ensure a device's durability, and closing the remarkable gap between read and write I/O performance.
Today's solid-state drive technology is only a first step into the world of high performance persistent semiconductor storage. A group of the approximately 10 most promising technologies is collectively referred to as Storage Class Memory (SCM).

Storage Class Memory


SCM promises a massive improvement in performance (IOPS), areal density, cost, and energy efficiency compared to today's solid-state drive technology. IBM Research is actively engaged in these new technologies.
You can obtain details of nanoscale devices at this website:
http://www.almaden.ibm.com/st/nanoscale_st/nano_devices/
You can obtain details of Storage Class Memory at this website:
http://tinyurl.com/plk7as
You can read a comprehensive and worthwhile overview of solid-state drive technology in a subset of the well-known Spring 2010 and 2009 SNIA Technical Tutorials, which are available on the SNIA website:
http://www.snia.org/education/tutorials/2010/spring#solid
When these technologies become a reality, they will fundamentally change the architecture of today's storage infrastructures.

2.10.4 Solid-state drives and SVC V6.1


Users of internal solid-state devices (SSDs) on the SAN Volume Controller 2145-CF8 cannot install SVC 6.1.0 at this time.

External SSD drives


The SVC is able to manage solid-state drives in externally attached storage controllers or enclosures. The solid-state drives would be configured as an array with a LUN and be presented to the SVC as a normal MDisk. The solid-state MDisk tier then needs to be set by the svctask chmdisk -tier generic_ssd command or the GUI. The SSD MDisks can then be placed into a single SSD tier storage pool and high workload volumes manually selected and placed into the pool to gain the performance benefits of SSDs.
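The following hedged example shows how an MDisk presented by an SSD-based controller might be assigned to the SSD tier from the CLI (the MDisk name is an example only):

   # Display the discovered MDisks and their current tier
   svcinfo lsmdisk -delim :

   # Mark the MDisk that is backed by solid-state drives as the SSD tier
   svctask chmdisk -tier generic_ssd mdisk12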


For a more effective use of SSDs, place the SSD MDisks into a multitiered storage pool combined with HDD MDisks (generic_hdd tier). Then, with Easy Tier turned on, it will automatically detect and migrate high workload extents onto the solid-state MDisks.

2.11 Easy Tier


Determining the amount of data activity in an SVC extent, and when to move the extent to an appropriate storage performance tier, is usually too complex a task to manage manually. Easy Tier is a performance optimization function that overcomes this issue. It automatically migrates, or moves, extents belonging to a volume from one MDisk storage tier to another MDisk storage tier.
Easy Tier monitors the host I/O activity and latency on the extents of all volumes with the Easy Tier function turned on in a multitier storage pool over a 24-hour period. It then creates an extent migration plan based on this activity, and dynamically moves high activity or hot extents to a higher tier within the storage pool. It also moves extents whose activity has dropped off, or cooled, from the high tier MDisks back to a lower tier MDisk. Because this migration works at the extent level and not at the volume level, it is often referred to as sub-LUN migration. The Easy Tier function can be turned on or off at the storage pool and volume level.

2.11.1 Evaluation mode


To experience the potential benefits of using Easy Tier in your environment before actually installing expensive solid-state disks (SSDs), you can turn on the Easy Tier function for a single tier storage pool. Next, turn on the Easy Tier function for the volumes within that pool. Easy Tier then starts monitoring activity on the volume extents in the pool.
Easy Tier creates a migration report every 24 hours on the number of extents that would be moved if the pool were a multitiered storage pool. So, even though Easy Tier extent migration is not possible within a single tier pool, the Easy Tier statistical measurement function is available. The usage statistics file can be offloaded from the SVC configuration node using the GUI (Troubleshooting, Support, Download). You can then use the Storage Advisor Tool to create the statistics report; a web browser is used to view the output of the Storage Advisor report. Contact your IBM representative or IBM Business Partner for more information about the Storage Advisor Tool.

2.11.2 Automatic data placement mode


For Easy Tier to provide automatic extent migration, you need to have a storage pool that contains MDisks with separate disk tiers, thus a multitiered storage pool. Then you need to set the -easytier parameter to on or auto for the storage pool and on for the volumes. The volumes must be either striped or mirrored for Easy Tier to migrate extents. See Chapter 7, Easy Tier on page 345 for more details about Easy Tier operation and management.
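As a hedged sketch of the commands involved (the pool and volume names are examples only; Chapter 7 documents the full procedure), Easy Tier is typically enabled on a multitiered pool and its volumes as follows:

   # Enable Easy Tier on a storage pool that contains both HDD and SSD MDisks
   svctask chmdiskgrp -easytier on MultiTier_Pool

   # Enable Easy Tier on a striped volume in that pool
   svctask chvdisk -easytier on DB_VOL01

   # Verify the Easy Tier status of the pool
   svcinfo lsmdiskgrp MultiTier_Pool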

2.12 What is new with SVC 6.1


This section highlights the new features that SVC 6.1 brings.


Note: Since the writing of this book, IBM has announced IBM System Storage SAN Volume Controller Version 6.2. For more information about this topic, see:
http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=AN&subtype=CA&htmlfid=897/ENUS211-175&appname=USN

IBM System Storage SAN Volume Controller (SVC) V6.2.0 is designed to provide improved tracking by introducing real-time performance monitoring. Immediate performance information, including CPU utilization and I/O rates, can be received to monitor environmental changes and to troubleshoot; when this information is paired with historical detailed data from Tivoli Storage Productivity Center, you are better positioned to develop the best performance solutions.
VMware virtual environments can be improved with SVC by using the vStorage API for Array Integration (VAAI). This new API delegates certain VMware functions to SVC to enhance performance. In the vSphere 4.1 release, this offload capability to SVC supports full copy, block zeroing, and hardware-assisted locking.
The introduction of 10 Gigabit Ethernet (GbE) hardware for the SAN Volume Controller allows clients to continue to focus on cost efficiency with higher network performance by offering 10 Gigabit iSCSI host attachment. SVC is scalable to manage up to 32 PB of storage by allowing managed disks to be as large as 256 TB on select storage systems.
IBM System Storage Easy Tier is designed to automate data placement throughout the SVC managed disk group onto two tiers of storage to intelligently align the system with current workload requirements. It is now available for use with solid-state devices installed on SVC 2145 models CF8 and CG8.
SVC interoperability now supports additional storage products, including HP StorageWorks P9500 Disk Array, Hitachi Data Systems Virtual Storage Platform, Texas Memory Systems RamSan-620, and EMC VNX models.

2.12.1 SVC 6.1 supported hardware list, device driver, and firmware levels
With the SVC 6.1 release, as in every release, IBM offers functional enhancements and new hardware that can be integrated into existing or new SVC clusters and also interoperability enhancements or new support for servers, SAN switches, and disk subsystems. See the most current information at this website: http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html

2.12.2 SVC 6.1.0 new features


The following list summarizes the new V6.1 features, most of which have been described previously:
- Terminology
  There have been changes to the SVC terminology to bring it in line with other IBM storage products. See 2.3, "SVC terminology" on page 12 for a full list of these changes.


- New cluster capacities
  SVC increases the flexibility of the storage it manages by raising the supported managed disk (MDisk) size to 1 PB. A new, larger extent size of 8 GB is also introduced, which increases the maximum managed storage per SVC cluster up to 32 PB.
- Increased WWNN support
  The number of storage WWNNs that can attach to SVC is now 256 WWNNs per cluster. This is especially important when attaching to storage controllers that are designed to claim one WWNN per Fibre Channel port (WWPN).
- Long object names (maximum 63 characters)
  All objects in a cluster (hosts, volumes, MDisks) have user-defined or system-generated names. When creating an object, you can now define a more meaningful name, because the maximum length has been increased to 63 characters. SVC V4.3 or V5.1 clusters will show truncated volume names when partnered for copy services functions with a V6.1 cluster.
- Tiered storage
  Deploying tiered storage is an important strategy for controlling storage cost, where various types of storage with various performance and cost characteristics are used to match separate business requirements. To meet these requirements, the SAN Volume Controller now supports multiple tiers of storage, or MDisks, and multitiered storage pools.
- IBM System Storage Easy Tier
  Easy Tier automates data placement throughout the SVC multitiered storage pool to intelligently align the placement of a volume's extents with current workload requirements. Easy Tier includes the ability to automatically and nondisruptively relocate data (at the extent level) from one tier to another, in either direction, to achieve the best storage performance available.
- New GUI user interface
  A newly designed user interface that delivers many more functional enhancements and greater ease of use is provided. Enhancements to the user interface include greater flexibility of views, an increased number of characters allowed for naming objects, display of the command lines being executed, and improved user customization. Clients using Tivoli Storage Productivity Center and IBM Systems Director will also have greater integration points and launch in-context capabilities. The new GUI interface and its associated web server now run in the SVC cluster rather than on the SSPC console; thus, it can be accessed directly from any web browser.
- IBM Storwize V7000 array support
  When SVC V6.1 is running on an IBM Storwize V7000, users of solid-state devices (SSDs) are now able to safeguard their data beyond volume mirroring, because the SVC now provides RAID (0, 1, 5, 6, and 10) control. Therefore, you can create arrays on internally attached SAS disk or SSD.
- New CLI commands
  There are new functions enabled through the CLI, for example, the ability to view and update the license values of the cluster. There is also an entire set of new CLI commands to support drives, arrays, and enclosures of the Storwize V7000 product.
- Events
  Events are errors, warnings, and informational messages generated by SVC. If you are familiar with specific SVC error codes from previous releases of SVC, note that several numbers have changed in the V6.1 release.


- Service Assistant Tool
  SVC V6.1 introduces a new method for performing service tasks on the system. In addition to being able to perform service tasks from the front panel, you can also service a node through an Ethernet connection using either a web browser (GUI) or the command-line interface. This new function is called the Service Assistant Tool. The service assistant GUI interface is available through a new service assistant IP address that you assign on each node.
- iSCSI enhancements
  The enhancements include multisession iSCSI for improved failover performance, and persistent reserve support for MSCS over iSCSI. VMware iSCSI support has also been enhanced.
- Additional support for back-end controllers
  Several additional storage controllers are now supported, including EMC VMAX. To see the full list of supported controllers, visit the interoperability website listed in 2.13, "Useful SVC web links" on page 55.

2.13 Useful SVC web links


The SVC Support Page is at the following website:
http://www-947.ibm.com/systems/support/supportsite.wss/selectproduct?taskind=4&brandind=5000033&familyind=5329743&typeind=0&modelind=0&osind=0&psid=sr&continue.x=1
The SVC Home Page is at the following website:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/
The SVC Interoperability Page is at the following website:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html
SVC online documentation is at the following website:
http://publib.boulder.ibm.com/infocenter/svcic/v3r1m0/index.jsp
IBM Redbooks publications about SVC are available at the following website:
http://www.redbooks.ibm.com/cgi-bin/searchsite.cgi?query=SVC



Chapter 3. Planning and configuration


In this chapter we describe the steps that are required when you plan the installation of an IBM System Storage SAN Volume Controller (SVC) in your storage network. We look at the implications for your storage network and also discuss performance considerations.


3.1 General planning rules


Important: At the time of writing, the statements we make are correct, but they may change over time. Always verify any statements made in this book with the SAN Volume Controller Supported Hardware List, Device Driver, Firmware and Recommended Software Levels at:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003697

To achieve the most benefit from the SVC, preinstallation planning must include several important steps. These steps will ensure that the SVC provides the best possible performance, reliability, and ease of management for your application needs. Proper configuration also helps minimize downtime by avoiding changes to the SVC and the storage area network (SAN) environment to meet future growth needs.

Tip: For comprehensive information about the topics discussed here, see IBM System Storage SAN Volume Controller: Planning Guide, GA32-0551. We also go into much more depth about these topics in SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521, which is available at this website:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open

Follow these steps when planning for the SVC:
1. Collect and document the number of hosts (application servers) to attach to the SVC, the traffic profile activity (read or write, sequential or random), and the performance requirements (I/Os per second (IOPS)).
2. Collect and document the storage requirements and capacities:
   - The total back-end storage already present in the environment to be provisioned on the SVC
   - The total back-end new storage to be provisioned on the SVC
   - The required virtual storage capacity that is used as a fully managed virtual disk (volume) and used as a Space-Efficient volume
   - The required storage capacity for local mirror copy (volume mirroring)
   - The required storage capacity for point-in-time copy (FlashCopy)
   - The required storage capacity for remote copy (Metro and Global Mirror)
   - Per host: storage capacity, the host logical unit number (LUN) quantity, and sizes
3. Define the local and remote SAN fabrics and clusters, if a remote copy or a secondary site is needed.
4. Define the number of clusters and the number of pairs of nodes (between 1 and 4) for each cluster. Each pair of nodes (an I/O Group) is the container for the volumes. The number of necessary I/O Groups depends on the overall performance requirements.
5. Design the SAN according to the requirements for high availability and best performance. Consider the total number of ports and the bandwidth needed between the host and the SVC, the SVC and the disk subsystem, between the SVC nodes, and for the inter-switch link (ISL) between the local and remote fabric.
6. Design the iSCSI network according to the requirements for high availability and best performance. Consider the total number of ports and the bandwidth needed between the host and the SVC.
7. Determine the SVC service IP address and the IBM System Storage Productivity Center (SVC Console) address.
8. Determine the IP addresses for the SVC cluster and for the hosts that are connected through iSCSI connections.
9. Define a naming convention for the SVC nodes, the hosts, and the storage subsystem.
10. Define the managed disks (MDisks) in the disk subsystem.
11. Define the Storage Pools. The Storage Pools depend on the disk subsystem in place and the data migration requirements.
12. Plan the logical configuration of the volumes within the I/O Groups and the Storage Pools in such a way as to optimize the I/O load between the hosts and the SVC.
13. Plan for the physical location of the equipment in the rack.

SVC planning can be categorized into two types:
- Physical planning
- Logical planning
We describe these planning types in more detail in the following sections.


7. Determine the SVC service IP address and the IBM System Storage Productivity Center (SVC console). 8. Determine the IP addresses for the SVC cluster and for the host that is connected through iSCSI connections. 9. Define a naming convention for the SVC nodes, the host, and the storage subsystem. 10.Define the managed disks (MDisks) in the disk subsystem. 11.Define the Storage Pools. The Storage Pools depend on the disk subsystem in place and the data migration requirements. 12.Plan the logical configuration of the volume within the I/O Groups and the Storage Pools in such a way as to optimize the I/O load between the hosts and the SVC. 13.Plan for the physical location of the equipment in the rack. SVC planning can be categorized into two types: Physical planning Logical planning We describe these planning types in more detail in the following sections.

3.2 Physical planning


There are several key factors for you to consider when performing the physical planning of an SVC installation. The physical site must have the following characteristics:
- Power, cooling, and location requirements are present for the SVC and the uninterruptible power supply units.
- SVC nodes and their uninterruptible power supply units must be in the same rack.
- Place SVC nodes belonging to the same I/O Group in separate racks.
- Plan for two separate power sources if you have ordered a redundant AC power switch (available as an optional feature).
- An SVC node is one Electronic Industries Association (EIA) unit high.
- Each uninterruptible power supply unit that comes with SVC V6.1 is one EIA unit high. The uninterruptible power supply unit shipped with the earlier version of the SVC is two EIA units high.
- The IBM System Storage Productivity Center (SVC Console) is two EIA units high: one unit for the server and one unit for the keyboard and monitor.
- Other hardware devices can be in the same SVC rack, such as IBM System Storage DS4000, SAN switches, an Ethernet switch, and other devices.
- Consider the maximum power rating of the rack; it must not be exceeded.


3.2.1 Preparing your uninterruptible power supply unit environment


Ensure that your physical site meets the installation requirements for the uninterruptible power supply unit. Uninterruptible power supply unit: The 2145 UPS-1U is a Powerware 5115.

2145 UPS-1U
The 2145 Uninterruptible Power Supply-1U (2145 UPS-1U) is one EIA unit high and is shipped with, and can only operate on, the following node types:
- SAN Volume Controller 2145-CF8
- SAN Volume Controller 2145-8A4
- SAN Volume Controller 2145-8G4
- SAN Volume Controller 2145-8F4
When configuring the 2145 UPS-1U, the voltage that is supplied to it must be 200 - 240 V, single phase.
Tip: The 2145 UPS-1U has an integrated circuit breaker and does not require external protection.

3.2.2 Physical rules


The SVC must be installed in pairs to provide high availability, and each node in the cluster must be connected to a separate uninterruptible power supply unit. Be aware of the following considerations:
- Each SVC node of an I/O Group must be connected to a separate uninterruptible power supply unit.
- Each uninterruptible power supply unit pair that supports a pair of nodes must be connected to a separate power domain (if possible) to reduce the chances of input power loss.
- The uninterruptible power supply units, for safety reasons, must be installed in the lowest positions in the rack. If necessary, move lighter units toward the top of the rack to make way for the uninterruptible power supply units.
- The power and serial connection from a node must be connected to the same uninterruptible power supply unit; otherwise, the node will not start.
- The 2145-CF8, 2145-8A4, 2145-8G4, and 2145-8F4 hardware models must be connected to a 5115 uninterruptible power supply unit. They will not start with a 5125 uninterruptible power supply unit.
Important: Do not share the SVC uninterruptible power supply unit with any other devices.
Figure 3-1 on page 61 shows a power cabling example for the 2145-CF8.


Figure 3-1 2145-CF8 power cabling

There are guidelines to follow for Fibre Channel (FC) cable connections. Occasionally, the introduction of a new SVC hardware model means that there are internal changes. One example is the worldwide port name (WWPN) mapping in the port mapping. The 2145-8G4 and 2145-CF8 have the same mapping. Figure 3-2 on page 62 shows the WWPN mapping.


Figure 3-2 WWPN mapping

Figure 3-3 on page 63 shows a sample layout where nodes within each I/O Group have been split between separate racks. This protects against power failures and other events that only affect a single rack.


Figure 3-3 Sample rack layout

3.2.3 Cable connections


Create a cable connection table or documentation following your environment's documentation procedure to track all of the connections that are required for the setup:
- Nodes
- Uninterruptible power supply units
- Ethernet
- iSCSI connections
- FC ports
- IBM System Storage Productivity Center (SVC Console)

3.3 Logical planning


For logical planning, we cover these topics:
- Management IP addressing plan
- SAN zoning and SAN connections
- iSCSI IP addressing plan
- Back-end storage subsystem configuration
- SVC cluster configuration
- Split-cluster configuration
- Storage Pool configuration
- Volume configuration
- Host mapping (LUN masking)
- Advanced copy functions
- SAN boot support
- Data migration from non-virtualized storage subsystems
- SVC configuration backup procedure

3.3.1 Management IP addressing plan


For management, remember these rules:
- In addition to an FC connection, each node has an Ethernet connection for configuration and error reporting.
- Each SVC cluster needs at least one IP address for management and one IP address per node to be used for service, with the new Service Assistant feature available with SVC 6.1. The service IP address is usable only from the no config node or when the SVC cluster is in service mode, and remember that service mode is a disruptive operation.
- Both IP addresses must be in the same IP subnet; see Example 3-1.
Example 3-1 Management IP address sample

management IP add. 10.11.12.120
service IP add. 10.11.12.121

Each node in an SVC cluster needs to have at least one Ethernet connection. With SVC 6.1, the cluster management is now performed through an embedded GUI running on the nodes. A separate console such as the traditional SVC Hardware Management Console (HMC) or IBM System Storage Productivity Center (SSPC) is no longer required to access the management interface. To access the management GUI, you direct a web browser at the system management IP address.
The cluster must first be created specifying either an IPv4 or an IPv6 cluster address for port 1. After the cluster is created, additional IP addresses can be created on port 1 and port 2 until both ports have an IPv4 and an IPv6 address defined. This allows the cluster to be managed on separate networks, which provides redundancy in the event of a network failure. Figure 3-4 on page 65 shows the IP configuration possibilities.


Figure 3-4 IP configuration possibilities

Support for iSCSI provides one additional IPv4 and one additional IPv6 address for each Ethernet port on every node. These IP addresses are independent of the cluster configuration IP addresses. When accessing the SVC through the GUI or Secure Shell (SSH), choose one of the available IP addresses to connect to. There is no automatic failover capability so if one network is down, use an IP address on the alternate network. Clients may be able to use intelligence in domain name servers (DNS) to provide partial failover.
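To provide the additional management network described previously, a second cluster IP address can be defined on Ethernet port 2 from the CLI. The following line is a minimal sketch only; the addresses are placeholders (not values from this book's lab setup), and the exact parameters should be verified against your code level:
svctask chclusterip -port 2 -clusterip 10.11.12.124 -gw 10.11.12.1 -mask 255.255.255.0
After both ports are configured, the management GUI and SSH can be reached on either address.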

3.3.2 SAN zoning and SAN connections


SAN storage systems using the SVC can be configured with two, or up to eight, SVC nodes, arranged in an SVC cluster. These SVC nodes are attached to the SAN fabric, along with disk subsystems and host systems. The SAN fabric is zoned to allow the SVC nodes to see each other and the disk subsystems, and for the hosts to see the SVC nodes. The hosts are not able to directly see or operate LUNs on the disk subsystems that are assigned to the SVC cluster. The SVC nodes within an SVC cluster must be able to see each other and all of the storage that is assigned to the SVC cluster.


The zoning capabilities of the SAN switch are used to create three distinct zones. SVC 6.1 supports 2 Gbps, 4 Gbps, or 8 Gbps FC fabric, depending on the hardware platform and on the switch where the SVC is connected. In an environment where you have a fabric with multiple speed switches, best practice is to connect the SVC and the disk subsystem to the switch operating at the highest speed.
All SVC nodes in the SVC cluster are connected to the same SANs, and they present volumes to the hosts. These volumes are created from Storage Pools that are composed of MDisks presented by the disk subsystems. There must be three distinct zones in the fabric:
- SVC cluster zone: Create one zone per fabric with all of the SVC ports cabled to this fabric to allow SVC internode communication.
- Host zones: Create an SVC host zone for each server accessing storage from the SVC cluster.
- Storage zone: Create one SVC storage zone for each storage subsystem that is virtualized by the SVC.
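As an illustration only, the following Brocade-style commands sketch how these three zone types might be defined on one fabric. The alias, zone, and configuration names, and the WWPNs, are hypothetical and are not taken from the lab configuration used elsewhere in this book; use the equivalent syntax of your own switch vendor:
alicreate "SVC_N1_P1", "50:05:07:68:01:40:12:34"
alicreate "SVC_N2_P1", "50:05:07:68:01:40:56:78"
zonecreate "SVC_CLUSTER_ZONE", "SVC_N1_P1; SVC_N2_P1"
zonecreate "STG_DS4700_ZONE", "SVC_N1_P1; SVC_N2_P1; DS4700_CTRL_A; DS4700_CTRL_B"
zonecreate "HOST_AIX01_ZONE", "AIX01_FCS0; SVC_N1_P1; SVC_N2_P1"
cfgcreate "SAN_A_CFG", "SVC_CLUSTER_ZONE"
cfgadd "SAN_A_CFG", "STG_DS4700_ZONE; HOST_AIX01_ZONE"
cfgenable "SAN_A_CFG"
The same pattern is repeated on the second fabric so that each host and storage controller retains redundant paths.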

Zoning considerations for Metro Mirror and Global Mirror


Ensure that you are familiar with the constraints for zoning a switch to support Metro Mirror and Global Mirror partnerships. SAN configurations that use intracluster Metro Mirror and Global Mirror relationships do not require additional switch zones. SAN configurations that use intercluster Metro Mirror and Global Mirror relationships require the following additional switch zoning considerations:
- For each node in a cluster, zone exactly two Fibre Channel ports to exactly two Fibre Channel ports from each node in the partner cluster.
- If dual-redundant ISLs are available, then split the two ports from each node evenly between the two ISLs. That is, exactly one port from each node should be zoned across each ISL.
- Local cluster zoning continues to follow the standard requirement for all ports on all nodes in a cluster to be zoned to one another.
Attention: Failure to follow these configuration rules will expose the cluster to the following condition and can result in loss of host access to volumes. If an intercluster link becomes severely and abruptly overloaded, the local Fibre Channel fabric can become congested to the extent that no Fibre Channel ports on the local SVC nodes are able to perform local intracluster heartbeat communication. This can, in turn, result in the nodes experiencing lease expiry events, in which a node will reboot to attempt to re-establish communication with the other nodes in the cluster. If all nodes lease expire simultaneously, this can lead to a loss of host access to volumes for the duration of the reboot events.

Configure your SAN so that FC traffic can be passed between the two clusters. To configure the SAN this way, you can connect the clusters to the same SAN, merge the SANs, or use routing technologies. Configure zoning to allow all of the nodes in the local fabric to communicate with all of the nodes in the remote fabric.


Optionally, modify the zoning so that the hosts that are visible to the local cluster can recognize the remote cluster. This capability allows a host to have access to data in both the local and remote clusters. Verify that cluster A cannot recognize any of the back-end storage that is owned by cluster B. A cluster cannot access logical units (LUs) that a host or another cluster can also access. Figure 3-5 shows the SVC zoning topology.

Figure 3-5 SVC zoning topology

Figure 3-6 on page 68 shows an example of SVC, host, and storage subsystem connections.


Figure 3-6 Example of SVC, host, and storage subsystem connections

You must also observe the following additional guidelines:
- LUNs (MDisks) must have exclusive access to a single SVC cluster and cannot be shared between other SVC clusters or hosts.
- A storage controller can present LUNs to both the SVC (as MDisks) and to other hosts in the SAN. However, in this case it is better to avoid having the SVC and hosts share the same storage ports.
- Mixed port speeds are not permitted for intracluster communication. All node ports within a cluster must be running at the same speed.
- ISLs are not to be used for intracluster node communication or node-to-storage controller access.
- The switch configuration in an SVC fabric must comply with the switch manufacturer's configuration rules, which can impose restrictions on the switch configuration. For example, a switch manufacturer might limit the number of supported switches in a SAN. Operation outside of the switch manufacturer's rules is not supported.
- Host bus adapters (HBAs) in dissimilar hosts or dissimilar HBAs in the same host need to be in separate zones. For example, if you have AIX and Microsoft hosts, they need to be in separate zones. In this case, dissimilar means that the hosts are running separate operating systems or are using separate hardware platforms. Therefore, various levels of the same operating system are regarded as similar. Note that this requirement is a SAN interoperability issue, rather than an SVC requirement.
- Host zones are to contain only one initiator (HBA) each, and as many SVC node ports as you need, depending on the high availability and performance that you want from your configuration.


Attention: Be aware of the following considerations:
- The use of ISLs for intracluster node communication can negatively impact the availability of the system due to the high dependency on the quality of these links to maintain heartbeat and other cluster management services. Therefore it is strongly advised that they only be used as part of an interim configuration to facilitate SAN migrations, and not be part of the architected solution.
- The use of ISLs for SVC node to storage controller access can lead to port congestion, which can negatively impact the performance and resiliency of the SAN. Therefore it is strongly advised that they only be used as part of an interim configuration to facilitate SAN migrations, and not be part of the architected solution.
- The use of mixed port speeds for intercluster communication can lead to port congestion, which can negatively impact the performance and resiliency of the SAN and is therefore not supported.

You can use the svcinfo lsfabric command to generate a report that displays the connectivity between nodes and other controllers and hosts. This report is particularly helpful in diagnosing SAN problems.
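For example, the following invocations list all logins seen by the cluster and then restrict the report to a single node. This is a sketch of typical usage; the node name is a placeholder and the available filter parameters may vary slightly by code level:
svcinfo lsfabric -delim ,
svcinfo lsfabric -node node1 -delim ,
Comparing this output with your zoning configuration quickly shows whether every expected node, controller, and host port is actually logged in on both fabrics.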

Zoning examples
Figure 3-7 shows an SVC cluster zoning example.

Figure 3-7 SVC cluster zoning example

Figure 3-8 on page 70 shows a storage subsystem zoning example.


Figure 3-8 Storage subsystem zoning example

Figure 3-9 shows a host zoning example.

Figure 3-9 Host zoning example


3.3.3 iSCSI IP addressing plan


SVC 6.1 supports host access through iSCSI (as an alternative to FC), and the following considerations apply:
- SVC uses the built-in Ethernet ports for iSCSI traffic. All node types that can run SVC 6.1 can use the iSCSI feature.
- SVC supports the Challenge Handshake Authentication Protocol (CHAP) authentication method for iSCSI.
- iSCSI IP addresses can fail over to the partner node in the I/O Group if a node fails. This design reduces the need for multipathing support in the iSCSI host.
- iSCSI IP addresses can be configured for one or more nodes.
- iSCSI Simple Name Server (iSNS) addresses can be configured in the SVC.
- The iSCSI qualified name (IQN) for an SVC node will be iqn.1986-03.com.ibm:2145.<cluster_name>.<node_name>. Because the IQN contains the cluster name and the node name, it is important not to change these names after iSCSI is deployed.
- Each node can be given an iSCSI alias, as an alternative to the IQN.
- The IQN of the host is added to an SVC host object in the same way that you add FC WWPNs. Host objects can have both WWPNs and IQNs.
- Standard iSCSI host connection procedures can be used to discover and configure SVC as an iSCSI target.
Next, we explain several ways in which you can configure SVC 6.1. Figure 3-10 shows the use of IPv4 management and iSCSI addresses in the same subnet.

Figure 3-10 Use of IPv4 addresses

You can set up the equivalent configuration with only IPv6 addresses.

Figure 3-11 shows the use of IPv4 management and iSCSI addresses in two separate subnets.

Figure 3-11 IPv4 address plan with two subnets

Figure 3-12 shows the use of redundant networks.

Figure 3-12 Redundant networks

Figure 3-13 on page 73 shows the use of a redundant network and a third subnet for management.


Figure 3-13 Redundant network with third subnet for management

Figure 3-14 shows the use of a redundant network for both iSCSI data and management.

Figure 3-14 Redundant network for iSCSI and management

Be aware of these considerations:
- All of the examples are valid using IPv4 and IPv6 addresses.
- It is valid to use IPv4 addresses on one port and IPv6 addresses on the other port.
- It is valid to have separate subnet configurations for IPv4 and IPv6 addresses.
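To tie these addressing plans to the CLI, the following sketch shows how an iSCSI address might be assigned to Ethernet port 1 of a node and how an iSCSI host object might be defined. The addresses, names, and IQN are placeholders rather than values from this book's lab environment, and the parameters should be verified against your code level:
svctask cfgportip -node 1 -ip 10.11.13.121 -mask 255.255.255.0 -gw 10.11.13.1 1
svctask mkhost -name LINUX01 -iscsiname iqn.1994-05.com.redhat:linux01
The trailing 1 on the cfgportip command identifies the Ethernet port number on the node.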


3.3.4 Back-end storage subsystem configuration


Back-end storage subsystem configuration planning must be applied to all storage controllers attached to the SVC. Refer to the following website for a list of currently supported storage subsystems:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html
Apply the following general guidelines for back-end storage subsystem configuration planning:
- In the SAN, storage controllers that are used by the SVC cluster must be connected through SAN switches. Direct connection between the SVC and the storage controller is not supported.
- Multiple connections are allowed from the redundant controllers in the disk subsystem to improve data bandwidth performance. It is not mandatory to have a connection from each redundant controller in the disk subsystem to each counterpart SAN, but it is a best practice. Therefore, controller A in the DS4000 can be connected to SAN A only, or to SAN A and SAN B, and controller B in the DS4000 can be connected to SAN B only, or to SAN B and SAN A.
- Split controller configurations are supported with certain rules and configuration guidelines. See 3.3.6, Split-cluster configuration on page 77 for more information.
- All SVC nodes in an SVC cluster must be able to see the same set of ports from each storage subsystem controller. Violating this guideline will cause paths to become degraded. This degradation can occur as a result of applying inappropriate zoning and LUN masking. This guideline has important implications for disk subsystems such as DS3000, DS4000, or DS5000, which impose exclusivity rules regarding which HBA WWPNs a storage partition can be mapped to.
Notes: SVC 6.1 provides for better load distribution across paths within storage pools. In previous code levels, the path to MDisk assignment was made in a round-robin fashion across all MDisks configured to the cluster. With that method, no attention is paid to how MDisks within storage pools are distributed across paths, and therefore it is possible and even likely that certain paths are more heavily loaded than others. This condition became even more likely to occur as the number of MDisks contained in the storage pool reduced.
SVC 6.1 contains logic that considers MDisks within storage pools and more effectively distributes their active paths based on the storage controller ports available. The detect MDisk command needs to be run following the creation or modification (add or remove MDisk) of storage pools for paths to be redistributed; see the sketch after these notes.
To ensure sufficient bandwidth to the storage controller and an even balance across storage controller ports, the number of MDisks per storage pool is to be a multiple of the number of storage ports available. For example, if a storage pool has 8 storage controller ports available to it, then it is to contain either 8, 16, 24, 32, or 40 MDisks. Exceeding 40 MDisks per storage pool is not advisable.
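On the CLI, the rediscovery that redistributes MDisk paths is triggered with the following commands. This is a sketch of typical usage rather than a full procedure:
svctask detectmdisk
svcinfo lsmdisk -delim ,
The second command can be used to confirm that all MDisks report an online status after the rediscovery completes.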


In general, configure disk subsystems as though there is no SVC. However, we suggest the following specific guidelines:
- Disk drives:
  Exercise caution with large disk drives so that you do not have too few spindles to handle the load. Using RAID-5 is suggested for the vast majority of workloads.
- Array sizes:
  8+P or 4+P is suggested for the DS4000 and DS5000 families, if possible. Use the DS4000 segment size of 128 KB or larger to help the sequential performance. Upgrade to EXP810 drawers, if possible. Create LUN sizes that are equal to the RAID array and rank size. When adding more disks to a subsystem, consider adding the new MDisks to existing Storage Pools versus creating additional small Storage Pools. Scripts are available to restripe volume extents evenly across all MDisks in the Storage Pools if required. Go to the website http://www.ibm.com/alphaworks and search for svctools.
- Maximum of 256 worldwide node names (WWNNs):
  EMC DMX/SYMM, all HDS, and SUN/HP HDS clones use one WWNN per port. Each WWNN appears as a separate controller to the SVC. IBM, EMC Clariion, and HP use one WWNN per subsystem. Each WWNN appears as a single controller with multiple ports/WWPNs, for a maximum of 16 ports/WWPNs per WWNN, using one out of the maximum of 256.
- DS8000 using four or eight 4-port HA cards:
  Use ports 1 and 3 or 2 and 4 on each card. This setup provides 8 or 16 ports for SVC use. Use 8 ports minimum up to 40 ranks. Use 16 ports, which is the maximum, for 40 or more ranks.
- DS4000/DS5000 and EMC CLARiiON/CX:
  Both systems have the preferred controller architecture, and SVC supports this configuration. Use a minimum of 4 ports, and preferably 8 or more ports up to a maximum of 16 ports, so that more ports equate to more concurrent I/O that is driven by the SVC. Ports can be mapped with controller A ports to Fabric A and controller B ports to Fabric B, or cross-connected to both fabrics from both controllers. The cross-connecting approach is preferred to avoid AVT/Trespass occurring if a fabric or all paths to a fabric fail.
- DS3400: Use a minimum of 4 ports.
- XIV requirements and restrictions:
  The use of XIV extended functions, including snaps, thin-provisioning, synchronous replication, and LUN expansion, on LUNs presented to the SVC is not supported. A maximum of 511 LUNs from one XIV system can be mapped to an SVC cluster.


- Full 15 module XIV recommendations (161 TB usable):
  Use two interface host ports from each of the six interface modules. Use ports 1 and 3 from each interface module and zone these 12 ports with all SVC node ports. Create 48 LUNs of equal size, each of which is a multiple of 17 GB, and you will get approximately 1632 GB if using the entire full frame XIV with the SVC. Map LUNs to the SVC as 48 MDisks, and add all of them to the one XIV Storage Pool so that the SVC will drive the I/O to four MDisks/LUNs for each of the 12 XIV FC ports. This design provides a good queue depth on the SVC to drive XIV adequately.
- Six module XIV recommendations (55 TB usable):
  Use two interface host ports from each of the two active interface modules. Use ports 1 and 3 from interface modules 4 and 5. (Interface module 6 is inactive.) Zone these four ports with all SVC node ports. Create 16 LUNs of equal size, each of which is a multiple of 17 GB, and you will get approximately 1632 GB if using the entire XIV with the SVC. Map LUNs to the SVC as 16 MDisks, and add all of them to the one XIV Storage Pool so that the SVC will drive I/O to four MDisks/LUNs per each of the four XIV FC ports. This design provides a good queue depth on the SVC to drive XIV adequately.
- Nine module XIV recommendations (87 TB usable):
  Use two interface host ports from each of the four active interface modules. Use ports 1 and 3 from interface modules 4, 5, 7, and 8. (Interface modules 6 and 9 are inactive.) Also, zone these eight ports with all of the SVC node ports. Create 26 LUNs of equal size, each of which is a multiple of 17 GB, and you will get approximately 1632 GB if using the entire XIV with the SVC. Map LUNs to the SVC as 26 MDisks, and add all of them to the one XIV Storage Pool, so that the SVC will drive I/O to three MDisks/LUNs on each of six ports and four MDisks/LUNs on the other two XIV FC ports. This design provides a useful queue depth on the SVC to drive XIV adequately.
- Configure XIV host connectivity for the SVC cluster:
  Create one host definition on XIV, and include all SVC node WWPNs. You can create clustered host definitions (one per I/O Group), but the preceding method is easier. Map all LUNs to all SVC node WWPNs.

3.3.5 SVC cluster configuration


To ensure high availability in SVC installations, consider the following guidelines when you design a SAN with the SVC:
- All nodes in a cluster must be in the same LAN segment, because the nodes in the cluster must be able to assume the same cluster, or service IP, address. Make sure that the network configuration will allow any of the nodes to use these IP addresses. Note that if you plan to use the second Ethernet port on each node, it is possible to have two LAN segments. However, port 1 of every node must be in one LAN segment, and port 2 of every node must be in the other LAN segment.
- To maintain application uptime in the unlikely event of an individual SVC node failing, SVC nodes are always deployed in pairs (I/O Groups). If a node fails or is removed from the configuration, the remaining node operates in a degraded mode, but it is still a valid configuration. The remaining node operates in write-through mode, meaning that the data is written directly to the disk subsystem (the cache is disabled for the write).
- The uninterruptible power supply unit must be in the same rack as the node to which it provides power, and each uninterruptible power supply unit can only have one node connected.
- The FC SAN connections between the SVC node and the switches are optical fiber. These connections can run at either 2 Gbps, 4 Gbps, or 8 Gbps, depending on your SVC and switch hardware. The 2145-CF8, 2145-8A4, 2145-8G4, and 2145-8F4 SVC nodes auto-negotiate the connection speed with the switch.
- The SVC node ports must be connected to the FC fabric only. Direct connections between the SVC and the host, or the disk subsystem, are unsupported.
- Two SVC clusters cannot have access to the same LUNs within a disk subsystem. Configuring zoning such that two SVC clusters have access to the same LUNs (MDisks) can, and will likely, result in data corruption.
- The two nodes within an I/O Group can be co-located (within the same set of racks) or can be located in separate racks and separate rooms. See 3.3.6, Split-cluster configuration on page 77 for more information about this topic.
- The SVC uses three MDisks as quorum disks for the cluster. A best practice for redundancy is to have each quorum disk located in a separate storage subsystem where possible. The current locations of the quorum disks can be displayed using the svcinfo lsquorum command and relocated using the svctask chquorum command.
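A minimal command sketch follows; the MDisk name and quorum index are placeholders, and the exact chquorum syntax should be checked against your code level:
svcinfo lsquorum
svctask chquorum -mdisk mdisk9 2
The first command lists the three quorum disk candidates and flags the active one; the second moves quorum index 2 onto the MDisk named mdisk9.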

3.3.6 Split-cluster configuration


A split-cluster configuration (also referred to as a split I/O Group) can be implemented as a high availability option. The maximum distance for a split-cluster configuration is 10 km between nodes in an I/O Group. Use the split-cluster configuration in conjunction with the volume mirroring option to realize an availability benefit. After volume mirroring has been configured, use the svcinfo lscontrollerdependentvdisks command to validate that the volume mirrors reside on separate storage controllers (see the command sketch at the end of this section). This will ensure that access to volumes is maintained in the event of the loss of a storage controller.
When implementing a split-cluster configuration, two of the three quorum disks can be co-located in the same rooms where the SVC nodes are located. However, the active quorum disk must reside in a separate room. This configuration ensures that a quorum disk is always available, even after a single site failure. For a split-cluster configuration, configure the SVC as follows:
- Site 1: Half of the SVC cluster nodes + one quorum disk candidate
- Site 2: Half of the SVC cluster nodes + one quorum disk candidate
- Site 3: Active quorum disk
Note: ISLs between nodes in the same I/O Group are not allowed. This restriction holds regardless of whether nodes are in co-located or split-cluster configurations.
Figure 3-15 on page 78 illustrates the split-cluster configuration. When used in conjunction with volume mirroring, this configuration provides a high availability solution that is tolerant of a failure at a single site. If either the primary or secondary site fails, the remaining sites can continue performing I/O operations.


In this configuration, the SAN Volume Controller nodes in the cluster are more than 100 meters apart, and therefore the connections between them must be longwave Fibre Channel connections.

Figure 3-15 Split-cluster configuration

In Figure 3-15, the storage system that hosts the third-site quorum disk is attached directly to a switch at both the primary and secondary sites using longwave Fibre Channel connections. If either the primary site or the secondary site fails, you must ensure that the remaining site has retained direct access to the storage system that hosts the quorum disks. Restriction: Do not connect a storage system in one site directly to a switch fabric in the other site. An alternative configuration can use an additional Fibre Channel switch at the third site with connections from that switch to the primary site and to the secondary site. A split-site configuration is supported only when the storage system that hosts the quorum disks supports extended quorum. Although SAN Volume Controller can use other types of storage systems for providing quorum disks, access to these quorum disks is always through a single path. For quorum disk configuration requirements, see the Guidance for Identifying and Changing Managed Disks Assigned as Quorum Disk Candidates technote at the following website: http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003311
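As referenced earlier in this section, the following sketch shows how the controller dependency check might be run after volume mirroring is configured. The controller name is a placeholder, and the exact command name and parameters should be verified for your code level:
svcinfo lscontrollerdependentvdisks controller0
Any volume listed in the output would go offline if that controller became unavailable, which indicates that both of its copies depend on the same back-end storage.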


3.3.7 Storage Pool configuration


The Storage Pool is at the center of the many-to-many relationship between the MDisks and the volumes. It acts as a container from which managed disks contribute chunks of physical disk capacity known as extents, and from which volumes are created.
MDisks in the SVC are LUNs assigned from the underlying disk subsystems to the SVC and can be either managed or unmanaged. A managed MDisk is an MDisk that is assigned to a Storage Pool:
- A Storage Pool is a collection of MDisks. An MDisk can only be contained within a single Storage Pool.
- An SVC supports up to 128 Storage Pools.
- There is no limit to the number of volumes that can be allocated from a single Storage Pool; however, there is an I/O Group limit of 2048 volumes and a cluster limit of 8192 volumes.
- Volumes are associated with a single Storage Pool, except in cases where a volume is being migrated or mirrored between Storage Pools.
SVC supports extent sizes of 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, and 8192 MB. Note that support for extent sizes 4096 and 8192 MB was added in SVC 6.1. The extent size is a property of the Storage Pool and is set when the Storage Pool is created. All MDisks in the Storage Pool have the same extent size, as do all volumes allocated from the Storage Pool. The extent size of a Storage Pool cannot be changed. If a different extent size is desired, the Storage Pool must be deleted and a new Storage Pool configured. Table 3-1 lists all of the extent sizes that are available in an SVC.
Table 3-1 Extent size and maximum cluster capacities

Extent size    Maximum cluster capacity
16 MB          64 TB
32 MB          128 TB
64 MB          256 TB
128 MB         512 TB
256 MB         1 PB
512 MB         2 PB
1,024 MB       4 PB
2,048 MB       8 PB
4,096 MB       16 PB
8,192 MB       32 PB

There are several additional Storage Pool considerations:
- Maximum cluster capacity is related to the extent size: a 16 MB extent = 64 TB, and the capacity doubles for each increment in extent size; for example, 32 MB = 128 TB. We strongly advise a minimum extent size of 128 or 256 MB. The IBM Storage Performance Council (SPC) benchmarks used a 256 MB extent.
- Pick one extent size and use that size for all Storage Pools.
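Because the extent size is fixed at pool creation, it is specified on the mkmdiskgrp command. The following sketch uses placeholder pool and MDisk names rather than the configuration built later in this book:
svctask mkmdiskgrp -name STGPool_DS4K_1 -ext 256 -mdisk mdisk0:mdisk1:mdisk2:mdisk3
The -ext value is given in MB, and the colon-separated list adds four MDisks to the new pool at creation time.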


You cannot migrate volumes between Storage Pools with different extent sizes. However, you can use volume mirroring to create copies between Storage Pools with different extent sizes. Storage Pool reliability, availability, and serviceability (RAS) considerations. It might make sense to create multiple Storage Pools if you ensure a host only gets its volumes built from one of the Storage Pools. If the Storage Pool goes offline, it impacts only a subset of all of the hosts using the SVC. However, creating multiple Storage Pools can cause a high number of Storage Pools, approaching the SVC limits. If you do not isolate hosts to Storage Pools, create one large Storage Pool. Creating one large Storage Pool assumes that the physical disks are all the same size, speed, and RAID level. The Storage Pool goes offline if an MDisk is unavailable, even if the MDisk has no data on it. Do not put MDisks into a Storage Pool until needed. Create at least one separate Storage Pool for all the image mode volumes. Make sure that the LUNs that are given to the SVC have any host persistent reserves removed. Storage Pool performance considerations. It might make sense to create multiple Storage Pools if you are attempting to isolate workloads to separate disk spindles. Storage Pools with too few MDisks cause an MDisk overload, so it is better to have more spindle counts in a Storage Pool to meet workload requirements. The Storage Pool and SVC cache relationship. SVC employs cache partitioning to limit the potentially negative effect that a poorly performing storage controller can have on the cluster. The partition allocation size is defined based on the number of Storage Pools configured. This design protects against individual controller overloading and failures from consuming write cache and degrading performance of other Storage Pools in the cluster. More details are discussed in 2.8.3, Cache on page 38. Table 3-2 shows the limit of the write cache data.
Table 3-2 Limit of the cache data

Number of Storage Pools    Upper limit
1                          100%
2                          66%
3                          40%
4                          30%
5 or more                  25%

Consider the rule to be that no single partition can occupy more than its upper limit of cache capacity with write data. These limits are upper limits, and they are the points at which the SVC cache will start to limit incoming I/O rates for volumes created from the Storage Pool. If a particular partition reaches this upper limit, the net result is the same as a global cache resource that is full. That is, the host writes will be serviced on a one-out-one-in basis, because the cache destages writes to the back-end disks. However, only writes targeted at the full partition are limited. All I/O destined for other (non-limited) Storage Pools will continue as normal. Read I/O requests for the limited partition will also continue as normal. However, because the SVC is destaging write data
at a rate that is obviously greater than the controller can sustain (otherwise the partition does not reach the upper limit), read response times are also likely to be impacted.

3.3.8 Virtual disk configuration


An individual virtual disk (volume) is a member of one Storage Pool and one I/O Group. When creating a volume, you first identify the desired performance, availability, and cost requirements for that volume, and then select the Storage Pool accordingly. The Storage Pool defines which MDisks provided by the disk subsystem make up the volume. The I/O Group (two nodes make an I/O Group) defines which SVC nodes provide I/O access to the volume.
Note: There is no fixed relationship between I/O Groups and Storage Pools.
Perform volume allocation based on the following considerations:
- Optimize performance between the hosts and the SVC by attempting to distribute volumes evenly across available I/O Groups and nodes within the cluster.
- Reach the level of performance, reliability, and capacity you require by using the Storage Pool that corresponds to your needs (you can access any Storage Pool from any node). That is, choose the Storage Pool that fulfills the demands for your volumes with respect to performance, reliability, and capacity.

I/O Group considerations
When you create a volume, it is associated with one node of an I/O Group. By default, every time that you create a new volume, it is associated with the next node using a round-robin algorithm. You can specify a preferred access node, which is the node through which you send I/O to the volume instead of using the round-robin algorithm. A volume is defined for an I/O Group. Even if you have eight paths for each volume, all I/O traffic flows only toward one node (the preferred node). Therefore, only four paths are really used by the IBM Subsystem Device Driver (SDD). The other four paths are used only in the case of a failure of the preferred node or when a concurrent code upgrade is running.

Creating image mode volumes
Use image mode volumes when an MDisk already has data on it, from a non-virtualized disk subsystem. When an image mode volume is created, it directly corresponds to the MDisk from which it is created. Therefore, volume logical block address (LBA) x = MDisk LBA x. The capacity of image mode volumes defaults to the capacity of the supplied MDisk.
When you create an image mode disk, the MDisk must have a mode of unmanaged and therefore does not belong to any Storage Pool. A capacity of 0 is not allowed. Image mode volumes can be created in sizes with a minimum granularity of 512 bytes, and they must be at least one block (512 bytes) in size.

Creating managed mode volumes with sequential or striped policy
When creating a managed mode volume with sequential or striped policy, you must use a number of MDisks containing extents that are free and of a size that is equal to or greater than the size of the volume that you want to create. There might be sufficient extents available on the MDisk, but there might not be a contiguous block large enough to satisfy the request.
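As a sketch of the two cases just described (the names and sizes are placeholders and do not refer to the environment configured later in this book), a striped managed mode volume and an image mode volume might be created as follows:
svctask mkvdisk -mdiskgrp STGPool_DS4K_1 -iogrp 0 -size 100 -unit gb -name APPVOL01
svctask mkvdisk -mdiskgrp STGPool_IMAGE -iogrp 0 -vtype image -mdisk mdisk12 -name LEGACYVOL01
In the second command the capacity is taken from the supplied unmanaged MDisk, so no -size value is given.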


Thin-Provisioned volume considerations When creating the Thin-Provisioned volume, you need to understand the utilization patterns by the applications or group users accessing this volume. You must take into consideration items such as the actual size of the data, the rate of creation of new data, modifying or deleting existing data, and so on. There are two operating modes for Thin-Provisioned volumes

Autoexpand volumes allocate storage from a Storage Pool on demand with minimal
user intervention required. However, a misbehaving application can cause a volume to expand until it has consumed all of the storage in a Storage Pool.

Non-autoexpand volumes have a fixed amount of storage assigned. In this case, the
user must monitor the volume and assign additional capacity when required. A misbehaving application can only cause the volume that it is using to fill up.

Depending on the initial size for the real capacity, the grain size and a warning level can be set. If a volume goes offline, either through a lack of available physical storage for autoexpand, or because a volume marked as non-expand had not been expanded in time, there is a danger of data being left in the cache until storage is made available. This situation is not a data integrity or data loss issue, but you must not rely on the SVC cache as a backup storage mechanism. Important: Keep a warning level on the used capacity so that it provides adequate time to respond and provision more physical capacity. Warnings must not be ignored by an administrator. Use the autoexpand feature of the Thin-Provisioned volumes. The grain size allocation unit for the real capacity in the volume can be set as 32 KB, 64 KB, 128 KB, or 256 KB. A smaller grain size utilizes space more effectively, but it results in a larger directory map, which can reduce performance. Thin-Provisioned volumes require more I/Os because of directory accesses. For truly random workloads with 70% read and 30% write, a Thin-Provisioned volume will require approximately one directory I/O for every user I/O. As a result, performance can be up to 50% less than that of a normal volume for writes. The directory is two-way write-back-cached (just like the SVC fastwrite cache), so certain applications will perform better. Thin-Provisioned volumes require more CPU processing, so the performance per I/O Group can also be reduced. A Thin-Provisioned volume feature called zero detect provides clients with the ability to reclaim unused allocated disk space (zeros) when converting a fully allocated volume to a Thin-Provisioned volume using volume mirroring. Volume mirroring guidelines Create or identify two separate Storage Pools to allocate space for your mirrored volume. Allocate the Storage Pools containing the mirrors from separate storage controllers. If possible, use a Storage Pool with MDisks that share the same characteristics. Otherwise, the volume performance can be affected by the poorer performing MDisk.
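A hedged example of a Thin-Provisioned, autoexpanding volume follows; the pool name, sizes, and thresholds are illustrative placeholders only:
svctask mkvdisk -mdiskgrp STGPool_DS4K_1 -iogrp 0 -size 100 -unit gb -rsize 20% -autoexpand -grainsize 32 -warning 80% -name THINVOL01
Here -size is the virtual capacity presented to the host, -rsize is the initial real capacity, -grainsize selects the allocation unit in KB, and -warning generates an event when the used capacity reaches the specified threshold.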


3.3.9 Host mapping (LUN masking)


For the host and application servers, the following guidelines apply:
- Each SVC node presents a volume to the SAN through four ports. Because two nodes are used in normal operations to provide redundant paths to the same storage, a host with two HBAs can see multiple paths to each LUN that is presented by the SVC. Use zoning to limit the pathing from a minimum of two paths to the maximum available of eight paths, depending on the kind of high availability and performance that you want to have in your configuration. It is best to use zoning to limit the pathing to four paths.
- The hosts must run a multipathing device driver to resolve this back to a single device. The multipathing driver supported and delivered by SVC is the IBM Subsystem Device Driver (SDD). Native multipath I/O (MPIO) drivers on selected hosts are supported. For operating system-specific information about MPIO support, see this website:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html
- The number of paths to a volume from a host to the nodes in the I/O Group that owns the volume must not exceed eight, even if eight is not the maximum number of paths supported by the multipath driver (SDD supports up to 32). To restrict the number of paths to a host volume, the fabrics must be zoned so that each host FC port is zoned to no more than two ports from each SVC node in the I/O Group that owns the volume.
Notes: Following is a list of the suggested number of paths per volume (n+1 redundancy):
- With 2 HBA ports: zone HBA ports to SVC ports 1 to 2 for a total of 4 paths
- With 4 HBA ports: zone HBA ports to SVC ports 1 to 1 for a total of 4 paths
Optional (n+2 redundancy):
- With 4 HBA ports: zone HBA ports to SVC ports 1 to 2 for a total of 8 paths
The term HBA port is used to describe the SCSI initiator. The term SVC port is used to describe the SCSI target. The maximum number of host paths per volume is not to exceed 8.
- If a host has multiple HBA ports, each port must be zoned to a separate set of SVC ports to maximize high availability and performance.
- To configure greater than 256 hosts, you will need to configure the host to I/O Group mappings on the SVC. Each I/O Group can contain a maximum of 256 hosts, so it is possible to create 1024 host objects on an eight-node SVC cluster. Volumes can only be mapped to a host that is associated with the I/O Group to which the volume belongs.
- Port masking: You can use a port mask to control the node target ports that a host can access, which satisfies two requirements:
  - As part of a security policy, to limit the set of WWPNs that are able to obtain access to any volumes through a given SVC port
  - As part of a scheme to limit the number of logins with mapped volumes visible to a host multipathing driver (such as SDD) and thus limit the number of host objects configured without resorting to switch zoning


The port mask is an optional parameter of the svctask mkhost and chhost commands. The port mask is four binary bits. Valid mask values range from 0000 (no ports enabled) to 1111 (all ports enabled). For example, a mask of 0011 enables port 1 and port 2. The default value is 1111 (all ports enabled). The SVC supports connection to the Cisco MDS family and Brocade family. See the following website for the latest support information: http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html
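For illustration, a host object restricted to SVC ports 1 and 2 might be defined as follows; the host name and WWPNs are placeholders and are not part of this book's lab setup:
svctask mkhost -name AIXHOST01 -hbawwpn 10000000C9123456:10000000C9123457 -mask 0011
The same -mask value can later be changed with the svctask chhost command if the access policy changes.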

3.3.10 Advanced Copy Services


The SVC offers these Advanced Copy Services:
- FlashCopy
- Metro Mirror
- Global Mirror
Apply the following guidelines when planning for the SVC Advanced Copy Services.

FlashCopy guidelines
Consider these FlashCopy guidelines:
- Identify each application that must have a FlashCopy function implemented for its volume.
- FlashCopy is a relationship between volumes. Those volumes can belong to separate Storage Pools and separate storage subsystems.
- You can use FlashCopy for backup purposes by interacting with the Tivoli Storage Manager Agent, or for cloning a particular environment.
- Define which FlashCopy best fits your requirements: No copy, Full copy, Thin-Provisioned, or Incremental.
- Define which FlashCopy rate best fits your requirement in terms of performance and time to get the FlashCopy completed. The relationship of the background copy rate value to the attempted number of grains to be split per second is shown in Table 3-3; a command sketch follows the table.
- Define the grain size that you want to use. A grain is the unit of data represented by a single bit in the FlashCopy bitmap table. Larger grain sizes can cause a longer FlashCopy elapsed time and a higher space usage in the FlashCopy target volume. Smaller grain sizes can have the opposite effect. Remember that the data structure and the source data location can modify those effects. In an actual environment, check the results of your FlashCopy procedure in terms of the data copied at every run and in terms of elapsed time, comparing them to the new SVC FlashCopy results. Eventually, adapt the grain/second and the copy rate parameter to fit your environment's requirements.
Table 3-3 Grain splits per second

User percentage    Data copied per second    256 KB grain per second    64 KB grain per second
1 - 10             128 KB                    0.5                        2
11 - 20            256 KB                    1                          4
21 - 30            512 KB                    2                          8
31 - 40            1 MB                      4                          16
41 - 50            2 MB                      8                          32
51 - 60            4 MB                      16                         64
61 - 70            8 MB                      32                         128
71 - 80            16 MB                     64                         256
81 - 90            32 MB                     128                        512
91 - 100           64 MB                     256                        1024
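As a minimal sketch (the volume names are placeholders, and the mapping must still be prepared and started afterward), a FlashCopy mapping with a 50% background copy rate might be created as follows:
svctask mkfcmap -source APPVOL01 -target APPVOL01_FC -copyrate 50
With the values in Table 3-3, a copy rate of 50 attempts to copy 2 MB per second for this mapping.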

Metro Mirror and Global Mirror guidelines


SVC supports both intracluster and intercluster Metro Mirror and Global Mirror. From the intracluster point of view, any single cluster is a reasonable candidate for a Metro Mirror or Global Mirror operation. Intercluster operation, however, needs at least two clusters that are separated by a number of moderately high bandwidth links. Figure 3-16 shows a schematic of Metro Mirror connections.

Figure 3-16 Metro Mirror connections

Figure 3-16 contains two redundant fabrics. Part of each fabric exists at the local cluster and at the remote cluster. There is no direct connection between the two fabrics. Technologies for extending the distance between two SVC clusters can be broadly divided into two categories: FC extenders SAN multiprotocol routers Due to the more complex interactions involved, IBM explicitly tests products of this class for interoperability with the SVC. The current list of supported SAN routers can be found in the supported hardware list on the SVC support website: http://www.ibm.com/storage/support/2145

IBM has tested a number of FC extenders and SAN router technologies with the SVC, which must be planned, installed, and tested so that the following requirements are met: The round-trip latency between sites must not exceed 80 ms (40 ms one-way). For Global Mirror, this limit allows a distance between the primary and secondary sites of up to 8000 km (4970.96 miles) using a planning assumption of 100 km (62.13 miles) per 1 ms of round-trip link latency. The latency of long distance links depends upon the technology that is used to implement them. A point-to-point dark fiber-based link will typically provide a round-trip latency of 1ms per 100 km (62.13 miles) or better. Other technologies will provide longer round-trip latencies, which will affect the maximum supported distance. The configuration must be tested with the expected peak workloads. When Metro Mirror or Global Mirror is used, a certain amount of bandwidth will be required for SVC intercluster heartbeat traffic. The amount of traffic depends on how many nodes are in each of the two clusters. Figure 3-17 shows the amount of heartbeat traffic, in megabits per second, that is generated by various sizes of clusters.

Figure 3-17 Amount of heartbeat traffic

These numbers represent the total traffic between the two clusters when no I/O is taking place to mirrored volumes. Half of the data is sent by one cluster, and half of the data is sent by the other cluster. The traffic will be divided evenly over all available intercluster links. Therefore, if you have two redundant links, half of this traffic will be sent over each link during fault-free operation.
- The bandwidth between sites must, at the least, be sized to meet the peak workload requirements in addition to maintaining the maximum latency specified previously. The peak workload requirement must be evaluated by considering the average write workload over a period of one minute or less, plus the required synchronization copy bandwidth. With no synchronization copies active and no write I/O to disks in Metro Mirror or Global Mirror relationships, the SVC protocols will operate with the bandwidth indicated in Figure 3-17. However, the true bandwidth required for the link can only be determined by considering the peak write bandwidth to volumes participating in Metro Mirror or Global Mirror relationships and adding to it the peak synchronization copy bandwidth.
- If the link between the sites is configured with redundancy so that it can tolerate single failures, the link must be sized so that the bandwidth and latency statements continue to be true even during single failure conditions.
- The configuration is tested to simulate the failure of the primary site (to test the recovery capabilities and procedures), including eventual failback to the primary site from the secondary.
- The configuration must be tested to confirm that any failover mechanisms in the intercluster links interoperate satisfactorily with the SVC.


The FC extender must be treated as a normal link. The bandwidth and latency measurements must be made by, or on behalf of, the client. They are not part of the standard installation of the SVC by IBM. Make these measurements during installation, and record the measurements. Testing must be repeated following any significant changes to the equipment providing the intercluster link.

Global Mirror guidelines


Consider these guidelines:
- When using SVC Global Mirror, all components in the SAN must be capable of sustaining the workload generated by application hosts and the Global Mirror background copy workload. Otherwise, Global Mirror can automatically stop your relationships to protect your application hosts from increased response times. Therefore, it is important to configure each component correctly. Use a SAN performance monitoring tool, such as IBM System Storage Productivity Center, which will allow you to continuously monitor the SAN components for error conditions and performance problems. This tool will help you detect potential issues before they impact your disaster recovery solution.
- The long-distance link between the two clusters must be provisioned to allow for the peak application write workload to the Global Mirror source volumes, plus the client-defined level of background copy.
- The peak application write workload should ideally be determined by analyzing the SVC performance statistics. Statistics must be gathered over a typical application I/O workload cycle, which might be days, weeks, or months depending on the environment on which the SVC is used. These statistics must be used to find the peak write workload that the link must be able to support.
- Characteristics of the link can change with use; for example, latency can increase as the link is used to carry an increased bandwidth. The user must be aware of the link's behavior in such situations and ensure that the link remains within the specified limits. If the characteristics are not known, testing must be performed to gain confidence of the link's suitability.
- Users of Global Mirror must consider how to optimize the performance of the long-distance link, which will depend upon the technology that is used to implement the link. For example, when transmitting FC traffic over an IP link, it can be desirable to enable jumbo frames to improve efficiency.
- Using Global Mirror and Metro Mirror between the same two clusters is supported.
- It is supported for cache-disabled volumes to participate in a Global Mirror relationship; however, it is not a best practice to do so.
- The gmlinktolerance parameter of the remote copy partnership must be set to an appropriate value. The default value is 300 seconds (5 minutes), which will be appropriate for most clients.
- During SAN maintenance, the user must choose to: reduce the application I/O workload for the duration of the maintenance (so that the degraded SAN components are capable of the new workload); disable the gmlinktolerance feature; increase the gmlinktolerance value (meaning that application hosts might see extended response times from Global Mirror volumes); or stop the Global Mirror relationships. If the gmlinktolerance value is increased for maintenance lasting x minutes, it must only be reset to the normal value x minutes after the end of the maintenance activity.


If gmlinktolerance is disabled for the duration of the maintenance, it must be re-enabled after the maintenance is complete. Global Mirror volumes must have their preferred nodes evenly distributed between the nodes of the clusters. Each volume within an I/O Group has a preferred node property that can be used to balance the I/O load between nodes in that group. Figure 3-18 shows the correct relationship between volumes in a Metro Mirror or Global Mirror solution.

Figure 3-18 Correct volume relationship

The capabilities of the storage controllers at the secondary cluster must be provisioned to allow for the peak application workload to the Global Mirror volumes, plus the client-defined level of background copy, plus any other I/O being performed at the secondary site. The performance of applications at the primary cluster can be limited by the performance of the back-end storage controllers at the secondary cluster to maximize the amount of I/O that applications can perform to Global Mirror volumes. Do a complete review before using SATA for Metro Mirror or Global Mirror secondary volumes. Using a slower disk subsystem for the secondary volumes for high performance primary volumes can mean that the SVC cache might not be able to buffer all the writes, and flushing cache writes to SATA might slow I/O at the production site. Global Mirror volumes at the secondary cluster must be in dedicated storage pools (which contain no non-Global Mirror volumes). Storage controllers must be configured to support the Global Mirror workload that is required of them. You can: dedicate storage controllers to only Global Mirror volumes; configure the controller to guarantee sufficient quality of service for the disks being used by Global Mirror; or ensure that physical disks are not shared between Global Mirror volumes and other I/O (for example, by not splitting an individual RAID array). MDisks within a Global Mirror storage pool must be similar in their characteristics (for example, RAID level, physical disk count, and disk speed). This requirement is true of all storage pools, but it is particularly important to maintain performance when using Global Mirror. When a consistent relationship is stopped, for example, by a persistent I/O error on the intercluster link, the relationship enters the consistent_stopped state. I/O at the primary site continues, but the updates are not mirrored to the secondary site. Restarting the relationship will begin the process of synchronizing new data to the secondary disk. While this synchronization is in progress, the relationship will be in the inconsistent_copying
state. Therefore, the Global Mirror secondary volume will not be in a usable state until the copy has completed and the relationship has returned to a Consistent state. For this reason it is highly advisable to create a FlashCopy of the secondary volume before restarting the relationship. When started, the FlashCopy will provide a consistent copy of the data, even while the Global Mirror relationship is copying. If the Global Mirror relationship does not reach the Synchronized state (if, for example, the intercluster link experiences further persistent I/O errors), the FlashCopy target can be used at the secondary site for disaster recovery purposes. If you are planning to use an FCIP intercluster link, it is extremely important to design and size the pipe correctly. Example 3-2 shows a best-guess bandwidth sizing formula.
Example 3-2 WAN link calculation example

Amount of write data within 24 hours times 4 to allow for peaks
Translate into MB/s to determine WAN link needed
Example: 250 GB a day
250 GB * 4 = 1 TB
24 hours * 3600 secs/hr = 86400 secs
1,000,000,000,000 / 86400 = approximately 12 MB/s
Which means OC3 or higher is needed (155 Mbps or higher)

If compression is available on routers or WAN communication devices, smaller pipelines might be adequate. Note that the workload is probably not evenly spread across 24 hours. If there are extended periods of high data change rates, consider suspending Global Mirror during that time frame. If the network bandwidth is too small to handle the traffic, the application write I/O response times might be elongated. For the SVC, Global Mirror must support short-term peak write bandwidth requirements. Remember that SVC Global Mirror is much more sensitive to a lack of bandwidth than the DS8000. You also need to consider the initial synchronization and resynchronization workload. The Global Mirror partnership's background copy rate must be set to a value that is appropriate to the link and to the secondary back-end storage. The more bandwidth that you give to the sync and resync operation, the less workload can be delivered by the SVC for the regular data traffic. The Metro Mirror or Global Mirror background copy rate is predefined: the per-volume limit is 25 MBps, and the maximum per I/O Group is roughly 200 MBps. Be careful using Thin-Provisioned secondary volumes at the disaster recovery site, because a Thin-Provisioned volume can have performance of up to 50% less than that of a normal volume, which can affect the performance of the volumes at the primary site. Do not propose Global Mirror if the data change rate will exceed the communication bandwidth or if the round-trip latency exceeds 80 - 120 ms. Greater than 80 ms round-trip latency requires SCORE/RPQ submission.
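The same arithmetic can be scripted as a quick sanity check. This is only a minimal sketch of the sizing formula in Example 3-2; the daily write volume and peak factor are assumptions that you replace with values measured in your own environment.

#!/bin/bash
# Rough Global Mirror WAN link sizing, following the formula in Example 3-2
DAILY_WRITE_GB=250     # measured write data per 24 hours (assumed value)
PEAK_FACTOR=4          # allowance for peak periods (assumed value)

BYTES=$((DAILY_WRITE_GB * PEAK_FACTOR * 1000 * 1000 * 1000))
SECS=$((24 * 3600))
MBPS=$(( (BYTES / SECS + 500000) / 1000000 ))   # rounded to the nearest MB/s

echo "Required sustained WAN bandwidth: approximately ${MBPS} MB/s"
# 250 GB/day with a peak factor of 4 works out to roughly 12 MB/s,
# which points to an OC3 (155 Mbps) link or faster.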

3.3.11 SAN boot support


The SVC supports SAN boot or startup for AIX, Windows Server 2003, and other operating systems. SAN boot support can change from time to time, so check the following website regularly: http://www.ibm.com/systems/storage/software/virtualization/svc/interop.html


3.3.12 Data migration from a non-virtualized storage subsystem


Data migration is an extremely important part of an SVC implementation. Therefore, a data migration plan must be accurately prepared. You might need to migrate your data for one of these reasons:
- To redistribute workload within a cluster across the disk subsystem
- To move workload onto newly installed storage
- To move workload off old or failing storage, ahead of decommissioning it
- To move workload to rebalance a changed workload
- To migrate data from an older disk subsystem to SVC-managed storage
- To migrate data from one disk subsystem to another disk subsystem
Because there are multiple data migration methods, choose the method that best fits your environment, your operating system platform, your kind of data, and your application's service level agreement. We can define data migration as belonging to three groups:
- Based on operating system Logical Volume Manager (LVM) or commands
- Based on special data migration software
- Based on the SVC data migration feature
With data migration, apply the following guidelines:
- Choose which data migration method best fits your operating system platform, your kind of data, and your service level agreement.
- Check the interoperability matrix for the storage subsystem to which your data is being migrated: http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html
- Choose where you want to place your data after migration in terms of the storage pools related to a specific storage subsystem tier.
- Check whether a sufficient amount of free space or extents are available in the target storage pool.
- Decide whether your data is critical and must be protected by a volume mirroring option, or whether it must be replicated to a remote site for disaster recovery.
- Prepare offline all of the zoning and LUN masking/host mappings that you might need, to minimize downtime during the migration.
- Prepare a detailed operation plan so that you do not overlook anything at data migration time.
- Execute a data backup before you start any data migration. Data backup must be part of the regular data management process.
You might want to use the SVC as a data mover to migrate data from a non-virtualized storage subsystem to another non-virtualized storage subsystem. In this case, you might have to add additional checks that are related to the specific storage subsystem to which you want to migrate. Be careful using slower disk subsystems for the secondary volumes of high performance primary volumes, because the SVC cache might not be able to buffer all the writes, and flushing cache writes to SATA might slow I/O at the production site.

3.3.13 SVC configuration backup procedure


Save the configuration externally when changes, such as adding new nodes, disk subsystems, and so on, have been performed on the cluster. Configuration saving is a crucial part of the SVC management, and various methods can be applied to back up your SVC
configuration. Best practice is to implement an automatic configuration backup by applying the configuration backup command. We describe this command for the CLI and the GUI in Chapter 9, SAN Volume Controller operations using the command-line interface on page 439 and in Chapter 10, SAN Volume Controller operations using the GUI on page 579.
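As an illustration, the following commands show one way this backup can be driven from a management workstation over SSH. The svcconfig backup command and the /tmp/svc.config.backup.xml file location are stated here as assumptions to confirm against your code level and Chapter 9, and admin@svcclusterip is only a placeholder for your own user and cluster address.

# Run the configuration backup on the cluster (assumed command syntax)
ssh admin@svcclusterip svcconfig backup

# Copy the resulting backup file off the configuration node (assumed file location)
scp admin@svcclusterip:/tmp/svc.config.backup.xml ./svc.config.backup.xml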

3.4 Performance considerations


Although storage virtualization with the SVC improves flexibility and provides simpler management of a storage infrastructure, it can also provide a substantial performance advantage for a variety of workloads. The SVC's caching capability and its ability to stripe volumes across multiple disk arrays are the reasons why the performance improvement is significant when implemented with midrange disk subsystems, because this technology is often only provided with high-end enterprise disk subsystems.

Tip: Technically, almost all storage controllers provide both striping (RAID-1 or RAID-10) and a form of caching. The real benefit is the degree to which you can stripe the data across all MDisks in a storage pool and therefore have the maximum number of spindles active at one time. The caching is secondary. The SVC provides additional caching to what midrange controllers provide (usually a couple of GB), whereas enterprise systems have much larger caches.

To ensure the desired performance and capacity of your storage infrastructure, it is best to undertake a performance and capacity analysis to reveal the business requirements of your storage environment. When this analysis is done, you can use the guidelines in this chapter to design a solution that meets the business requirements.

When discussing performance for a system, it always comes down to identifying the bottleneck, and thereby the limiting factor of a given system. Remember that the limiting component can differ from one workload to another: the component that limits one workload might not be the limiting factor for other workloads.

When designing a storage infrastructure using the SVC, or implementing the SVC in an existing storage infrastructure, you must therefore take into consideration the performance and capacity of the SAN, the disk subsystems, the SVC, and the known or expected workload.

3.4.1 SAN
The SVC now has many models: 2145-8F4, 2145-8G4, 2145-8A4, and 2145-CF8. All of them can connect to 2 Gbps, 4 Gbps, or 8 Gbps switches. From a performance point of view, it is better to connect the SVC to 8 Gbps switches. Correct zoning on the SAN switches provides both security and performance. Implement a dual HBA approach at the host to access the SVC.

3.4.2 Disk subsystems


From a performance perspective, there are a few guidelines in connecting to an SVC. Connect all storage ports to the switch, and zone them to all of the SVC ports. Zone all ports on the disk back-end storage to all ports on the SVC nodes in a cluster. Also ensure that you configure the storage subsystem LUN masking settings to map all LUNs to all the SVC WWPNs in the cluster. The SVC is designed to handle large quantities of multiple paths from the back-end storage.


Using as many 15,000 RPM disks as possible improves performance considerably. Creating one LUN per array helps in a sequential workload environment. In most cases, the SVC is able to improve performance, especially on middle- to low-end disk subsystems, older disk subsystems with slow controllers, or uncached disk systems, for these reasons:
- The SVC has the capability to stripe across disk arrays, and it can do so across the entire set of supported physical disk resources.
- The SVC has a 4 GB, 8 GB, or 24 GB cache (24 GB in the latest 2145-CF8 model) and an advanced caching mechanism.
- The SVC is capable of providing automated performance optimization of hot spots through the use of Solid State Drives (SSDs) and Easy Tier.
The SVC's large cache and advanced cache management algorithms also allow it to improve upon the performance of many types of underlying disk technologies. The SVC's capability to manage, in the background, the destaging operations incurred by writes (in addition to still supporting full data integrity) has the potential to be particularly important in achieving good database performance.
Depending upon the size, age, and technology level of the disk storage system, the total cache available in the SVC can be larger, smaller, or about the same as that associated with the disk storage. Because hits to the cache can occur in either the upper (SVC) or the lower (disk controller) level of the overall system, the system as a whole can take advantage of the larger amount of cache wherever it is located. Thus, if the storage control level of cache has the greater capacity, expect hits to this cache to occur, in addition to hits in the SVC cache. Also, regardless of their relative capacities, both levels of cache tend to play an important role in allowing sequentially organized data to flow smoothly through the system. The SVC cannot increase the throughput potential of the underlying disks in all cases, because this depends upon both the underlying storage technology and the degree to which the workload exhibits hot spots or sensitivity to cache size or cache algorithms.
IBM SAN Volume Controller 4.2.1 Cache Partitioning, REDP-4426, explains the SVC's cache partitioning capability: http://www.redbooks.ibm.com/abstracts/redp4426.html?Open

3.4.3 SVC
The SVC cluster is scalable up to eight nodes, and the performance is nearly linear when adding more nodes into an SVC cluster, until it becomes limited by other components in the storage infrastructure. Although virtualization with the SVC provides a great deal of flexibility, it does not diminish the necessity of having a SAN and disk subsystems that can deliver the desired performance. Essentially, SVC performance improvements are gained by having as many MDisks as possible, therefore creating a greater level of concurrent I/O to the back-end without overloading a single disk or array.
Assuming that there are no bottlenecks in the SAN or on the disk subsystem, remember that specific guidelines must be followed when you are performing these tasks:
- Creating a storage pool
- Creating volumes
- Connecting to or configuring hosts that must receive disk space from an SVC cluster


You can obtain more detailed information about performance and best practices for the SVC in SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521: http://www.redbooks.ibm.com/abstracts/sg247521.html?Open

3.4.4 Performance monitoring


Performance monitoring must be an integral part of the overall IT environment. For the SVC, as for the other IBM storage subsystems, the official IBM tool to collect performance statistics and supply a performance report is the TotalStorage Productivity Center. You can obtain more information about using the TotalStorage Productivity Center to monitor your storage subsystem in Monitoring Your Storage Subsystems with TotalStorage Productivity Center, SG24-7364: http://www.redbooks.ibm.com/abstracts/sg247364.html?Open See Chapter 10, SAN Volume Controller operations using the GUI on page 579, for detailed information about collecting performance statistics.


Chapter 4. SAN Volume Controller initial configuration


In this chapter we discuss the following topics:
- Managing the cluster
- System Storage Productivity Center overview
- SAN Volume Controller (SVC) Hardware Management Console
- SVC initial configuration steps


4.1 Managing the cluster


There are many ways to manage the SVC. The most commonly used are the following:
- Using the SVC Management GUI
- Using a PuTTY-based SVC command-line interface
- Using the System Storage Productivity Center (SSPC)
Figure 4-1 shows the various ways to manage an SVC cluster.

Figure 4-1 SVC cluster management

Note that you have full management control of the SVC regardless of which method you choose. IBM System Storage Productivity Center is supplied by default when you purchase your SVC cluster. If you already have a previously installed SVC cluster in your environment, it is possible that you are using the SVC Console (Hardware Management Console (HMC)). You can still use it together with IBM System Storage Productivity Center, but you can only log in to your SVC from one of them at a time. If you decide to manage your SVC cluster with the SVC CLI, it does not matter whether you are using the SVC Console or IBM System Storage Productivity Center, because the SVC CLI is located on the cluster and is accessed through Secure Shell (SSH), and an SSH client can be installed on any workstation.

4.1.1 TCP/IP requirements for SAN Volume Controller


To plan your installation, consider the TCP/IP address requirements of the SAN Volume Controller cluster and the requirements for the SAN Volume Controller cluster to access other services. You must also plan the address allocation and the Ethernet router, gateway, and firewall configuration to provide the required access and network security.


Figure 4-2 shows the TCP/IP ports and services that are used by the SVC.

Figure 4-2 TCP/IP ports
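As a quick check of the router and firewall configuration, you can verify from a management workstation that the cluster management address accepts connections on the ports that the management interfaces use. The sketch below assumes the standard SSH (22) and HTTPS (443) ports and that the nc utility is available; confirm the complete list of required ports against Figure 4-2 and the planning documentation.

# svcclusterip is a placeholder for your cluster management address
nc -zv svcclusterip 22     # SSH, used by the command-line interface
nc -zv svcclusterip 443    # HTTPS, used by the web-based management GUI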

For more information about TCP/IP prerequisites, see Chapter 3, Planning and configuration on page 57 and also the IBM System Storage Productivity Center: Introduction and Planning Guide, SC23-8824. To assist you in starting an SVC initial configuration, Figure 4-3 shows a common flowchart that covers all of the types of management.


Figure 4-3 SVC initial configuration flowchart

In the next sections, we describe each of the steps shown in Figure 4-3.

4.2 System Storage Productivity Center overview


The IBM System Storage Productivity Center (SSPC) is an integrated hardware and software solution that provides a single point of entry for managing SAN Volume Controller clusters, IBM System Storage DS8000 systems, and other components of your data storage infrastructure. SSPC simplifies storage management in the following ways:
- It centralizes the management of storage network resources with IBM storage management software.
- It provides greater synergy between storage management software and IBM storage devices.
- It reduces the number of servers that are required to manage your software infrastructure.
- It provides simple migration from basic device management to storage management applications that provide higher-level functions.
The current release of System Storage Productivity Center (1.5) consists of the following components:
- IBM Tivoli Storage Productivity Center Basic Edition 4.2.1 is pre-installed on the System Storage Productivity Center server.


- Tivoli Storage Productivity Center for Replication is pre-installed. An additional license is required.
- IBM System Storage DS Storage Manager 10.70 is available for you to optionally install on the System Storage Productivity Center server, or on a remote server. The DS Storage Manager 10.70 can manage the IBM DS3000, IBM DS4000, and IBM DS5000. With DS Storage Manager 10.70, when you use Tivoli Storage Productivity Center to add and discover a DS CIM Agent, you can launch the DS Storage Manager from the topology viewer, the Configuration Utility, or the Disk Manager of the Tivoli Storage Productivity Center.
- IBM Java 1.6 is pre-installed and supports DS Storage Manager 10.70. You do not need to download Java from Sun Microsystems.
- DS CIM Agent management commands. The DS CIM Agent management commands (DSCIMCLI) for 5.5.0.3 are pre-installed on the System Storage Productivity Center.
- IBM DB2 Enterprise Server Edition
- PuTTY (SSH client software)
SSPC supports SVC 6.1 and the new Storwize V7000, and also supports a manual install of the 5.1 GUI (the SVC Console needed for SVC 5.1 or previous SVC releases is also available on the IBM website). With SVC 6.1, the GUI console is embedded in the SVC cluster, so there is no longer a need to install any SVC software directly on the SSPC.
Figure 4-4 shows the product stack in the IBM System Storage Productivity Center Console 1.5.

Figure 4-4 Overview of the IBM System Storage Productivity Center


IBM System Storage Productivity Center has all of the software components pre-installed and tested on a System x machine, model IBM System Storage Productivity Center 2805-MC5, with Windows installed on it. All the software components installed on the IBM System Storage Productivity Center can be ordered and installed on hardware that meets or exceeds the minimum requirements. For a detailed guide to the IBM System Storage Productivity Center, refer to IBM System Storage Productivity Center Software Installation and User's Guide, SC23-8823. For information pertaining to physical connectivity to the SVC, see Chapter 3, Planning and configuration on page 57.

4.2.1 IBM System Storage Productivity Center hardware


The hardware used by the IBM System Storage Productivity Center solution is the IBM System Storage Productivity Center 2805-MC5. It is a 1U rack-mounted server with the following initial configuration:
- One Intel Xeon E5630 quad-core processor, 2.53 GHz, with a 12 MB L3 cache
- 8 GB of PC3-10600 1333 MHz RAM
- Two 2.5" SAS open bay hard disk drives
- Two Broadcom 5709C Ethernet cards
- One CD/DVD bay with read and write-read capability
- Microsoft Windows 2008 Enterprise Edition
- Optional secondary power supply
It is designed to perform System Storage Productivity Center functions. If you plan to upgrade System Storage Productivity Center for more functions, you can purchase the Performance Upgrade Kit to add more capacity to your hardware.

4.2.2 SVC installation planning information for System Storage Productivity Center
Consider the following steps when planning the System Storage Productivity Center installation:
- Verify that the hardware and software prerequisites have been met.
- Determine the location of the rack where the System Storage Productivity Center is to be installed.
- Verify that the System Storage Productivity Center will be installed in line of sight to the SVC nodes.
- Verify that you have a keyboard, mouse, and monitor available to use.
- Determine the cabling required.
- Determine the network IP address.
- Determine the System Storage Productivity Center host name.


For detailed installation guidance, see IBM System Storage Productivity Center: Introduction and Planning Guide, SC23-8824:
https://www-304.ibm.com/systems/support/supportsite.wss/supportresources?brandind=5000033&familyind=5356448
Also see IBM Tivoli Storage Productivity Center / IBM Tivoli Storage Productivity Center for Replication Installation and Configuration Guide, SC27-2337:
http://www-01.ibm.com/support/docview.wss?rs=1181&uid=ssg1S7002597
Figure 4-5 shows the front view of the System Storage Productivity Center Console based on the 2805-MC5 hardware.

Figure 4-5 System Storage Productivity Center 2805-MC5 front view

Figure 4-6 shows a rear view of System Storage Productivity Center Console based on the 2805-MC5 hardware.

Figure 4-6 System Storage Productivity Center 2805-MC5 rear view

4.3 Setting up the SVC cluster


This section provides the step-by-step instructions that are needed to create the cluster. You must create a cluster to use SAN Volume Controller virtualized storage. The first phase to create a cluster is performed from the front panel of the SAN Volume Controller (see 4.3.3, Initiating cluster creation from the front panel on page 105). The second phase is performed from a web browser accessing the management GUI (see 4.4, Configuring the GUI on page 107).

4.3.1 Introducing the service panels


This section gives you an overview of the service panels you have available, depending on your SVC nodes. Use Figure 4-7 as a reference for the SVC 2145-8F2 and 2145-8F4 node model buttons to be pressed in the steps that follow.


Figure 4-7 SVC 8F2 node and SVC 8F4 node front and operator panel

Use Figure 4-8 for the SVC Node 2145-8G4 and 2145-8A4 models.


Figure 4-8 SVC 8G4 node front and operator panel

Use Figure 4-9 as a reference for the SVC Node 2145-CF8 model; the figure shows the CF8 model front panel.


Figure 4-9 CF8 front panel

SVC V6.1 introduces a new method for performing service tasks. In addition to being able to perform service tasks from the front panel, you can also service a node through an Ethernet connection using either a web browser or the command-line interface. An additional service IP address for each node is required. For more details see 4.4.3, Configuring the Service IP Addresses on page 119 and 10.17, Service Assistant with the GUI on page 809.

4.3.2 Prerequisites
Ensure that the SVC nodes are physically installed and that Ethernet and Fibre Channel connectivity has been correctly configured. For information about physical connectivity to the SVC, see Chapter 3, Planning and configuration on page 57. Prior to configuring the cluster, ensure that the following information is available:
- License: The license indicates whether the client is permitted to use FlashCopy, Metro Mirror, or both. It also indicates how much capacity the client is licensed to virtualize.
- For IPv4 addressing:
  - Cluster IPv4 addresses: These addresses include one address for the cluster and another address for the service address.
  - IPv4 subnet mask
  - Gateway IPv4 address


- For IPv6 addressing:
  - Cluster IPv6 addresses: These addresses include one address for the cluster and another address for the service address.
  - IPv6 prefix
  - Gateway IPv6 address
You must create a cluster to use the SAN Volume Controller virtualized storage. The first phase to create a cluster is performed from the front panel of the SAN Volume Controller. The second phase is performed from a web browser accessing the management GUI.

4.3.3 Initiating cluster creation from the front panel


After the hardware is physically installed into racks, complete the following steps to initially configure the cluster through the physical service panel; see 4.3.1, Introducing the service panels on page 101.
1. Choose any node that is to become a member of the cluster being created.
Note: To add additional nodes to your cluster, use a separate process after you have successfully created and initialized the cluster on the selected node.
2. Press and release the up or down button until Actions is displayed.
Important: If a time-out occurs when you enter the input for the fields during these steps, you must begin again from step 2. All of the changes are lost, so be sure to have all of the information available before beginning again.
3. Press and release the select button.
4. Depending on whether you are creating a cluster with an IPv4 address or an IPv6 address, press and release the up or down button until either New Cluster IPv4? or New Cluster IPv6? is displayed. Figure 4-10 shows the various options for the cluster creation.

Figure 4-10 Cluster IPv4? and Cluster IPv6? options on the front panel display


If the New Cluster IPv4? or New Cluster IPv6? actions are displayed, move directly to step 5. If the New Cluster IPv4? or New Cluster IPv6? actions are not displayed, this node is already a member of a cluster:
a. Press and release the up or down button until Actions is displayed.
b. Press and release the select button to return to the Main Options menu.
c. Press and release the up or down button until Cluster: is displayed. The name of the cluster that the node belongs to is displayed on line 2 of the panel.
In this case there are two options:
a. If you want to delete this node from the cluster:
i. Press and release the up or down button until Actions is displayed.
ii. Press and release the select button.
iii. Press and release the up or down button until Remove Cluster? is displayed.
iv. Press and hold the up button.
v. Press and release the select button.
vi. Press and release the up or down button until Confirm remove? is displayed.
vii. Press and release the select button.
viii. Release the up button, which deletes the cluster information from the node. Go back to step 1 on page 105 and start again.
b. If you do not want this node to be removed from an existing cluster, review the situation and determine the correct nodes to include in the new cluster.
5. Press and release the select button to create the new cluster.
6. Press and release the select button again to modify the IP address.
7. Use the up or down navigation buttons to change the value of the first field of the IP address to the value that has been chosen.
Notes: For IPv4, pressing and holding the up or down buttons will increment or decrease the IP address field by units of 10. The field value rotates from 0 to 255 with the down button, and from 255 to 0 with the up button. For IPv6, the address and the gateway address consist of eight 4-digit hexadecimal values. Enter the full address by working across a series of four panels to update each of the 4-digit hexadecimal values that make up the IPv6 addresses. The panels consist of eight fields, where each field is a 4-digit hexadecimal value.
8. Use the right navigation button to move to the next field. Use the up or down navigation buttons to change the value of this field.
9. Repeat step 7 for each of the remaining fields of the IP address.
10. When the last field of the IP address has been changed, press the select button.
11. Press the right arrow button:
a. For IPv4, IPv4 Subnet: is displayed.
b. For IPv6, IPv6 Prefix: is displayed.
12. Press the select button.
13. Change the fields for IPv4 Subnet in the same way that the IPv4 IP address fields were changed. There is only a single field for IPv6 Prefix.
14. When the last field of IPv4 Subnet/IPv6 Prefix has been changed, press the select button.
15. Press the right navigation button:
a. For IPv4, IPv4 Gateway: is displayed.
b. For IPv6, IPv6 Gateway: is displayed.
16. Press the select button.
17. Change the fields for the appropriate Gateway in the same way that the IPv4/IPv6 address fields were changed.
18. When the changes to all of the Gateway fields have been made, press the select button.
19. To review the settings before creating the cluster, use the right and left buttons. Make any necessary changes, then use the right and left buttons to reach Confirm Created?, and press the select button.
20. After you complete this task, the following information is displayed on the service display panel:
- Cluster: is displayed on line 1.
- A temporary, system-assigned cluster name that is based on the IP address is displayed on line 2.
If the cluster is not created, Create Failed: is displayed on line 1 of the service display. Line 2 contains an error code. Refer to the error codes that are documented in IBM System Storage SAN Volume Controller: Service Guide, GC26-7901, to identify the reason why the cluster creation failed and the corrective action to take.
After you have created the cluster on the front panel with the correct IP address format, you can finish the cluster configuration by accessing the management GUI, completing the Create Cluster wizard, and adding nodes to the cluster.
Important: At this time, do not repeat this procedure to add other nodes to the cluster. To add nodes to the cluster, follow the steps described in 9.9.2, Adding a node on page 495 and in 10.12.3, Adding a node to the cluster on page 750.

4.4 Configuring the GUI


After you have performed the activities in 4.3, Setting up the SVC cluster on page 101, complete the cluster setup by using the SVC Console. Follow the steps detailed in 4.4.1, Completing the Create Cluster Wizard on page 108, to create the cluster and complete the configuration. Important: Make sure that the SVC cluster IP address (svcclusterip) can be reached successfully by entering a ping command from the network.


4.4.1 Completing the Create Cluster Wizard


You can easily access the management GUI by opening any supported web browser. 1. Open the web GUI from the SSPC Console or from any supported web browser on any workstation that can communicate with the cluster. Open a supported web browser and point to the IP address that you entered in step 7 on page 106: http://svcclusteripaddress/ Figure 4-11 shows the SVC 6.1 Welcome window.

Figure 4-11 Welcome window

2. Enter the default superuser password: passw0rd (with a zero) and click Continue, as shown in Figure 4-12.

Figure 4-12 Login window

3. On the next page, read the license agreement carefully. To agree with it, select I agree with the terms in the license agreement and click Next as shown in Figure 4-13.


Figure 4-13 License Agreement window

4. At the Name, Date, and Time window (Figure 4-14), fill in the following details:
- A Cluster Name (System Name): This name is case sensitive and can consist of A to Z, a to z, 0 to 9, and the underscore (_). It cannot start with a number. It has a minimum of one character and a maximum of 60 characters.
- A Time Zone: You can select the time zone for the cluster here.
- A Date and a Time: Here you can change the date and the time of your cluster. If you are using a Network Time Protocol (NTP) server, you can enter the IP address of the NTP server by selecting Set NTP Server IP Address.
Click Next to confirm your changes.

Figure 4-14 Name, Date and Time window


5. The Change Date and Time Settings window appears to complete updates on the cluster; see Figure 4-15. When the task is completed, click Close.

Figure 4-15 Change Date and Time Settings window

6. Next, the System License window is displayed, as shown in Figure 4-16. To continue, fill out the fields for Virtualization Limit, FlashCopy Limit, and Global and Metro Mirror Limit for the number of Terabytes that are licensed. If you do not have a license for any of these features, leave the value at 0. Click Next.

Figure 4-16 System License Settings

7. The Configure Email Event Notification window is displayed as shown in Figure 4-17.


Figure 4-17 Configure Email Event Notification window

If you do not want to configure it or if you want to do it later, click Next and go to step 8 on page 114. To ensure your system continues to run smoothly, you can enable email event notifications. Email event notifications send messages about error, warning, or informational events and inventory reports to an email address of local or remote support personnel. Ensure that all the information is valid, or email notification is disabled. If you want to configure it, click Configure Email Event Notifications and a wizard appears. a. On the first page, shown in Figure 4-18, fill in the information required to enable IBM Support personnel to contact this person to assist with problem resolution (Contact Name, Email Reply Address, Machine Location and Phone). Ensure that all contact information is valid. Then, click Next.

Figure 4-18 Define Company Contact information

b. On the next page, shown in Figure 4-19, configure at least one email server that is used by your site and optionally, enable inventory reporting. Enter a valid IP address and a server port for each server added. Ensure that the email servers are valid. Inventory reports allow IBM service personnel to proactively notify you of any known issues with your system. To activate it, enable inventory reporting and choose a Reporting Interval in this window.


Figure 4-19 Configure Email Servers and Inventory Reporting window

c. Next, as shown on Figure 4-20, you can configure email addresses to receive notifications. It is a best practice to have one of the email addresses be a support user with the error event notification type enabled to notify IBM service personnel if an error condition occurs on your system. Ensure that all email addresses are valid.

Figure 4-20 Configure Email Addresses window

d. The last window, shown in Figure 4-21, is a summary of your Email Event Notification wizard. Click Finish to complete the setup.


Figure 4-21 Email event Notification Summary window

e. The wizard is now closed and additional information has been added, as shown in Figure 4-22. You can edit or discard your changes from this window. Then, click Next.

Figure 4-22 Configure Email Event Notification window with configuration information


8. Next, you can add available nodes to your cluster; see Figure 4-23.

Figure 4-23 Hardware window

To complete this operation, click an empty node position to view the candidate nodes. Important: Keep in mind that you need at least two nodes per I/O Group. Add your available nodes in sequence. For an empty slot, select the node you want to add to your cluster using the drop-down list. Then change its name and click Add Node, as shown in Figure 4-24.

Figure 4-24 Add a node to the cluster


A pop-up window appears to inform you about the time required to add a node to the cluster; see Figure 4-25. If you want to add it, click the OK button.

Figure 4-25 Warning message

The Add New Node window appears to complete the update on the cluster, as shown on Figure 4-26. When the task is completed, click Close.

Figure 4-26 Add New Node window

After your node has been successfully added to the cluster, you have an updated view of the window shown in Figure 4-23, as shown in Figure 4-27.


Figure 4-27 Hardware window with a second node added to the cluster

When all your nodes have been added to your cluster, click Finish. 9. Several operations will be done to update the cluster configuration, as shown in Figure 4-28. When the task is completed, click Close.

Figure 4-28 Final cluster update window

10.Your cluster is now successfully created. However, there are several remaining tasks to be completed before you use the cluster, such as changing the default superuser password or defining an IP address for service. We guide you through these tasks in the following sections.


4.4.2 Changing the default superuser password


1. Log into the cluster using your web browser, and enter the user superuser and its default password: passw0rd (with a zero) as shown in Figure 4-29. Then click Login.

Figure 4-29 Login window

2. From the GUI, select User Management → Users as shown in Figure 4-30.

Figure 4-30 Users windows


3. Right-click the superuser user and select Modify as shown in Figure 4-31.

Figure 4-31 Edit superuser settings window

4. Click Change, as shown in Figure 4-32.

Figure 4-32 User Properties windows

5. Enter the new password twice and validate your change by clicking OK, as shown in Figure 4-33.


Figure 4-33 Modifying password

4.4.3 Configuring the Service IP Addresses


Configuring this IP address is important because it lets you access the Service Assistant Tool. If there is an issue with a node, the tool allows you to view a detailed status and error summary, and to manage service actions on the node. 1. To configure the Service IP Addresses, select Configuration → Network as shown in Figure 4-34.


Figure 4-34 Network window

2. Select Service IP addresses as shown in Figure 4-35.

Figure 4-35 Service IP Addresses window

3. Select one node, then click the port you want to assign a service IP address; see Figure 4-36.


Figure 4-36 Configure Service IP window

4. Depending on whether you have installed an IPv4 or an IPv6 cluster, there is different information to enter.
For IPv4:
- Type an IPv4 address in the IP Address field.
- Type an IPv4 gateway in the Gateway field.
- Type an IPv4 subnet mask.
For IPv6:
- Select the Show IPv6 button.
- Type an IPv6 prefix in the IPv6 Network Prefix field. The Prefix field can have a value of 0 to 127.
- Type an IPv6 address in the IP Address field.
- Type an IPv6 gateway in the Gateway field.
After the information has been entered, click OK to confirm the modification, as shown in Figure 4-37.

Figure 4-37 Service IP window.

5. Repeat steps 3 and 4 for each node in your cluster.

4.4.4 Postrequisites
Perform the following steps to complete the SVC cluster configuration. We explain all of these steps in greater detail in Chapter 9, SAN Volume Controller operations using the


command-line interface on page 439, and in Chapter 10, SAN Volume Controller operations using the GUI on page 579.
a. Configure SSH keys for the command-line user, as shown in 4.5, Secure Shell overview on page 122.
b. Configure user authentication and authorization.
c. Set up event notifications and inventory reporting.
d. Create the storage pools.
e. Add an MDisk to the storage pool.
f. Identify and create volumes.
g. Create host objects and map volumes to them.
h. Identify and configure FlashCopy mappings and Metro Mirror relationships.
i. Back up configuration data.
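For orientation, the following CLI sketch shows roughly what steps d through g in the list above look like when performed from the command line. The object names, sizes, and WWPN are hypothetical placeholders, and the exact parameters should be verified against Chapter 9; this is not a complete procedure.

# Create a storage pool (MDisk group) with a 256 MB extent size (names and values are placeholders)
svctask mkmdiskgrp -name ITSO_Pool1 -ext 256

# Add a candidate MDisk to the pool
svctask addmdisk -mdisk mdisk0 ITSO_Pool1

# Create a 10 GB volume in I/O Group 0 from that pool
svctask mkvdisk -mdiskgrp ITSO_Pool1 -iogrp 0 -size 10 -unit gb -name ITSO_vol1

# Define a host object by its HBA WWPN and map the volume to it
svctask mkhost -name ITSO_host1 -hbawwpn 210000E08B054CAA
svctask mkvdiskhostmap -host ITSO_host1 ITSO_vol1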

4.5 Secure Shell overview


Since SVC 5.1, SSH key authentication is no longer needed for the GUI, but it is still needed for the SVC command-line interface. The connection is secured by means of a private key and a public key pair:
1. A public key and a private key are generated together as a pair.
2. The public key is uploaded to the SSH server (the SVC cluster).
3. The private key identifies the client and is checked against the public key during the connection. The private key must be protected.
4. The SSH server must also identify itself with a specific host key.
5. If the client does not yet have that host key, it is added to the client's list of known hosts.
Secure Shell is the communication vehicle between the management system (the System Storage Productivity Center or any workstation) and the SVC cluster. The SSH client provides a secure environment from which to connect to a remote machine. It uses the principles of public and private keys for authentication.
SSH keys are generated by the SSH client software. The SSH keys include a public key, which is uploaded to and maintained by the cluster, and a private key that is kept private to the workstation that is running the SSH client. These keys authorize specific users to access the administration and service functions on the cluster. Each key pair is associated with a user-defined ID string that can consist of up to 40 characters. Up to 100 keys can be stored on the cluster. New IDs and keys can be added, and unwanted IDs and keys can be deleted.
To use the CLI, an SSH client must be installed on that system, the SSH key pair must be generated on the client system, and the client's SSH public key must be stored on the SVC clusters.
The System Storage Productivity Center or any other workstation must have the freeware implementation of SSH-2 for Windows called PuTTY pre-installed. This software provides the SSH client function for users logged into the SVC Console who want to invoke the CLI to manage the SVC cluster.


4.5.1 Generating public and private SSH key pairs using PuTTY
Perform the following steps to generate SSH keys on the SSH client system: Start the PuTTY Key Generator to generate public and private SSH keys. From the client desktop, select Start → Programs → PuTTY → PuTTYgen.
6. On the PuTTY Key Generator GUI window (Figure 4-38), generate the keys:
a. Select SSH2 RSA.
b. Leave the number of bits in a generated key value at 1024.
c. Click Generate.

Figure 4-38 PuTTY key generator GUI

7. Move the cursor onto the blank area to generate the keys. To generate keys: The blank area indicated by the message is the large blank rectangle on the GUI inside the section of the GUI labeled Key. Continue to move the mouse pointer over the blank area until the progress bar reaches the far right. This action generates random characters to create a unique key pair. 8. After the keys are generated, save them for later use: a. Click Save public key, as shown in Figure 4-39.


Figure 4-39 Saving the public key

b. You are prompted for a name (for example, pubkey) and a location for the public key (for example, C:\Support Utils\PuTTY). Click Save. If another name or location is chosen, ensure that a record of the name or location is kept, because the name and location of this SSH public key must be specified in the steps that are documented in 4.5.2, Uploading the SSH public key to the SVC cluster on page 125. Tip: The PuTTY Key Generator saves the public key with no extension, by default. Use the string pub in naming the public key, for example, pubkey, to easily differentiate the SSH public key from the SSH private key. c. In the PuTTY Key Generator window, click Save private key. d. You are prompted with a warning message, as shown in Figure 4-40. Click Yes to save the private key without a passphrase.

Figure 4-40 Saving the private key without a passphrase


e. When prompted, enter a name (for example, icat) and location for the private key (for example, C:\Support Utils\PuTTY). Click Save. We suggest that you use the default name icat.ppk, because in SVC clusters running on versions prior to SVC 5.1, this key has been used for icat application authentication and must have this default name. Private key extension: The PuTTY Key Generator saves the private key with the PPK extension. 9. Close the PuTTY Key Generator GUI. 10.Navigate to the directory where the private key was saved (for example, C:\Support Utils\PuTTY).

4.5.2 Uploading the SSH public key to the SVC cluster


After you have created your SSH key pair, you need to upload your SSH private key onto the SVC cluster: 1. From your browser: http://svcclusteripaddress/ From the GUI interface, go to the User management interface as shown in Figure 4-30. Select Users, and then on the next window, select Create a User from the list as shown in Figure 4-41, and then click Go.

Figure 4-41 Create a user

2. From the Create a User window, insert the user ID name that you want to create and the password. Also select the access level that you want to assign to your user (remember that the Security Administrator is the maximum level) and choose the location where you want to upload the SSH pub key file you have created for this user, as shown in Figure 4-42. Click Ok.

Figure 4-42 Create user and password

3. You have completed your user creation process and uploaded the user's SSH public key, which will be paired later with the user's private .ppk key, as described in 4.5.3, Configuring the PuTTY session for the CLI on page 126. Figure 4-43 shows the successful upload of the SSH admin key.

Figure 4-43 Adding the SSH admin key successfully

You have now completed the basic setup requirements for the SVC cluster using the SVC cluster web interface.

4.5.3 Configuring the PuTTY session for the CLI


Before the CLI can be used, the PuTTY session must be configured using the SSH keys that were generated earlier in 4.5.1, Generating public and private SSH key pairs using PuTTY on page 123.


Perform these steps to configure the PuTTY session on the SSH client system:
1. From the System Storage Productivity Center Windows desktop, select Start → Programs → PuTTY → PuTTY to open the PuTTY Configuration GUI window.
2. In the PuTTY Configuration window (Figure 4-44), from the Category pane on the left, click Session, if it is not selected.
Tip: The items selected in the Category pane affect the content that appears in the right pane.

Figure 4-44 PuTTY Configuration window

3. In the right pane, under the Specify the destination you want to connect to section, select SSH. Under the Close window on exit section, select Only on clean exit, which ensures that if there are any connection errors, they will be displayed in the user's window. 4. From the Category pane on the left side of the PuTTY Configuration window, click Connection → SSH to display the PuTTY SSH Configuration window, as shown in Figure 4-45.


Figure 4-45 PuTTY SSH connection configuration window

5. In the right pane, in the Preferred SSH protocol version section, select 2. 6. From the Category pane on the left side of the PuTTY Configuration window, select Connection → SSH → Auth. 7. As shown in Figure 4-46, in the right pane, in the Private key file for authentication field under the Authentication Parameters section, either browse to or type the fully qualified directory path and file name of the SSH client private key file created earlier (for example, C:\Support Utils\PuTTY\icat.PPK).


Figure 4-46 PuTTY Configuration: Private key location

8. From the Category pane on the left side of the PuTTY Configuration window, click Session. 9. In the right pane, follow these steps, as shown in Figure 4-47: a. Under the Load, save, or delete a stored session section, select Default Settings, and click Save. b. For the Host Name (or IP address), type the IP address of the SVC cluster. c. In the Saved Sessions field, type a name (for example, SVC) to associate with this session. d. Click Save.


Figure 4-47 PuTTY Configuration: Saving a session

You can now either close the PuTTY Configuration window or leave it open to continue. Tips: When you enter the Host Name or IP address in PuTTY, insert your SVC user name followed by @ before the Host Name or IP address, as shown previously. This way, you do not have to enter your user name each time you access your SVC cluster. Normally, output that comes from the SVC is wider than the default PuTTY window size. Change your PuTTY window appearance to use a font with a character size of 8. To change it, click the Appearance item in the Category tree, as shown in Figure 4-47, and then click Font. Choose a font with a character size of 8.

4.5.4 Starting the PuTTY CLI session


The PuTTY application is required for all CLI tasks. If it was closed for any reason, restart the session as detailed here:
1. From the SVC Console desktop, open the PuTTY application by selecting Start → Programs → PuTTY.
2. On the PuTTY Configuration window (Figure 4-48), select the session saved earlier (in our example, ITSO-SVC1), and click Load.
3. Click Open.


Figure 4-48 Open PuTTY command-line session

4. If this is the first time that the PuTTY application has been used since you generated and uploaded the SSH key pair, a PuTTY Security Alert window opens, prompting you to confirm the identity of the cluster because its host key is not yet cached, as shown in Figure 4-49. Click Yes, which invokes the CLI.

Figure 4-49 PuTTY Security Alert

5. As shown in Example 4-1, the private key used in this PuTTY session is now authenticated against the public key that was uploaded to the SVC cluster.
Example 4-1 Authenticating

Using username "admin".
Authenticating with public key "rsa-key-20100909"
IBM_2145:ITSO-CLS1:admin>

You have now completed the tasks that are required to configure the CLI for SVC administration from the SVC Console. You can close the PuTTY session.


4.5.5 Configuring SSH for AIX clients


To configure SSH for AIX clients, follow these steps:
1. The SVC cluster IP address must be reachable with the ping command from the AIX workstation from which cluster access is desired.
2. OpenSSL must be installed for OpenSSH to work. Install OpenSSH on the AIX client:
a. The installation images can be found at these websites: https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?source=aixbp and http://sourceforge.net/projects/openssh-aix
b. Follow the instructions carefully, because OpenSSL must be installed before using SSH.
3. Generate an SSH key pair (a worked example follows this list):
a. Run the cd command to go to the /.ssh directory.
b. Run the ssh-keygen -t rsa command.
c. The following message is displayed: Generating public/private rsa key pair. Enter file in which to save the key (//.ssh/id_rsa)
d. Pressing Enter uses the default file that is shown in parentheses; otherwise, enter a file name (for example, aixkey), and press Enter.
e. The following prompt is displayed: Enter a passphrase (empty for no passphrase). When the CLI will be used interactively, enter a passphrase, because there is no other authentication when connecting through the CLI. After typing the passphrase, press Enter.
f. The following prompt is displayed: Enter same passphrase again: Type the passphrase again, and press Enter.
g. A message is displayed indicating that the key pair has been created. The private key file has the name entered previously (for example, aixkey). The public key file has the name entered previously with an extension of .pub (for example, aixkey.pub).
Using a passphrase: If you are generating an SSH key pair so that you can use the CLI interactively, use a passphrase; you then authenticate every time that you connect to the cluster. It is possible to have a passphrase-protected key for scripted usage, but you will have to use the expect command or a similar command to have the passphrase parsed into the ssh command.
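The following sketch shows the key being generated and then used to open a CLI session from an AIX or Linux workstation. The user name admin, the key file name aixkey, and the svcinfo lscluster command are assumptions used only for illustration; the SVC user and its uploaded public key must already exist, as described in 4.5.2, and svcclusterip is a placeholder for your cluster address.

# Generate the key pair on the workstation (a passphrase is recommended for interactive use)
ssh-keygen -t rsa -f ~/.ssh/aixkey

# Upload ~/.ssh/aixkey.pub for the chosen SVC user through the GUI (see 4.5.2), then connect:
ssh -i ~/.ssh/aixkey admin@svcclusterip

# Or run a single read-only command; svcinfo lscluster is used here only as an example
ssh -i ~/.ssh/aixkey admin@svcclusterip svcinfo lscluster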

4.6 Using IPv6


You can use IPv4, IPv6, or both in a dual-stack configuration. Migrating to (or from) IPv6 can be done remotely and is nondisruptive. Using IPv6: To remotely access SVC clusters running IPv6, you are required to run a supported web browser and have IPv6 configured on your local workstation.


4.6.1 Migrating a cluster from IPv4 to IPv6


As a prerequisite, have IPv6 already enabled and configured on your local workstation. In our case, we have configured an interface with IPv4 and IPv6 addresses on the System Storage Productivity Center, as shown in Example 4-2.
Example 4-2 Output of ipconfig on System Storage Productivity Center

C:\Documents and Settings\Administrator>ipconfig

Windows IP Configuration

Ethernet adapter IPv6:

   Connection-specific DNS Suffix  . :
   IP Address. . . . . . . . . . . . : 10.0.1.115
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   IP Address. . . . . . . . . . . . : 2001:610::115
   IP Address. . . . . . . . . . . . : fe80::214:5eff:fecd:9352%5
   Default Gateway . . . . . . . . . :

To update a cluster, follow these steps: 1. Select Configuration → Network, as shown in Figure 4-50.

Figure 4-50 Network window


2. Select Management IP Addresses, then click port 1 of one of the nodes as shown in Figure 4-51.

Figure 4-51 Management IP Addresses

3. In the window that is shown in Figure 4-52, follow these steps: a. Select Show IPv6. b. Type an IPv6 prefix in the IPv6 Network Prefix field. The Prefix field can have a value of 0 to 127. c. Type an IPv6 address in the IP Address field. d. Type an IPv6 gateway in the Gateway field. e. Click OK.

Figure 4-52 Modify IP Addresses: Adding IPv6 addresses

4. A confirmation window displays (Figure 4-53). Click Apply Changes.


Figure 4-53 Confirm changes window

5. The Change Management task is launched on the server as shown in Figure 4-54. Click Close when the task is completed.

Figure 4-54 Change Management IP window

6. Test the IPv6 connectivity using the ping command from a cmd.exe session on your local workstation (as shown in Example 4-3).
Example 4-3 Testing IPv6 connectivity to the SVC cluster

C:\Documents and Settings\Administrator>ping 2001:0610:0000:0000:0000:0000:0000:119

Pinging 2001:610::119 from 2001:610::115 with 32 bytes of data:

Reply from 2001:610::119: time=3ms
Reply from 2001:610::119: time<1ms
Reply from 2001:610::119: time<1ms
Reply from 2001:610::119: time<1ms

Ping statistics for 2001:610::119:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),


Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 3ms, Average = 0ms

7. Test the IPv6 connectivity to the cluster using a compatible IPv6 and SVC web browser on your local workstation; see Figure 4-55.

Figure 4-55 Testing IPv6 SVC GUI access using a compatible web browser

Tip: To access an IPv6 address in a web browser, you need to enclose the address in square brackets, as shown at the top of Figure 4-55. 8. Finally, remove the IPv4 address in the SVC GUI using the same window as shown in Figure 4-52, and validate this change by clicking OK.

4.6.2 Migrating a cluster from IPv6 to IPv4


The process of migrating a cluster from IPv6 to IPv4 is identical to the process described in 4.6.1, Migrating a cluster from IPv4 to IPv6 on page 133, except that you add IPv4 addresses and remove the IPv6 addresses.


Chapter 5. Host configuration
In this chapter we describe the basic host configuration procedures that are required to connect supported hosts to the IBM System Storage SAN Volume Controller (SVC).


5.1 Host attachment overview for IBM System Storage SAN Volume Controller
The IBM System Storage SAN Volume Controller supports a wide range of host types (both IBM and non-IBM), thereby making it possible to consolidate storage in an open systems environment into a common pool of storage. The storage pool can then be utilized and managed more efficiently as a single entity from a central point on the SAN. The ability to consolidate storage for open systems attached hosts provides the following benefits:
- Storage is easier to manage.
- The utilization of capacity is increased.
- Support is provided for applying advanced Copy Services functions across storage systems from many vendors.

5.2 SVC setup


In the majority of IBM SAN Volume Controller (SVC) environments, where high performance and high availability requirements exist, hosts are attached through Fibre Channel in a Storage Area Network (SAN). In the majority of these implementations, the SAN is implemented as two independent fabrics, providing hosts with fully redundant paths to protect against path failures. SVC 6.1 also supports iSCSI as an alternative protocol for attaching hosts through a LAN to the SVC. Note that within the SVC, all communications with back-end storage subsystems, and with other SVC clusters, take place through Fibre Channel (FC). For iSCSI/LAN-based access to the SVC, using a single network or two physically separated networks is supported. The iSCSI feature is a software feature that is provided by the SVC code. It is available on the existing nodes that support the SVC 6.1 release. The existing SVC node hardware has two 1 Gbps Ethernet ports that can be used for cluster configuration and, at the same time, for iSCSI host connectivity. Redundant paths to volumes can be provided for both SAN and iSCSI attached hosts. Figure 5-1 on page 139 shows the attachments that are supported with the SVC 6.1 release.


Figure 5-1 SVC host attachment overview

5.2.1 Fibre Channel and SAN setup overview


Hosts using Fibre Channel (FC) as the connection to an SVC must always be connected to a SAN switch and never directly to the SVC nodes. For SVC configurations, it is a best practice to use two redundant SAN fabrics. Therefore, each server is equipped with a minimum of two host bus adapters (HBAs), with each HBA connected to a SAN switch in one of the two fabrics (assuming one port per HBA).

The SVC imposes no special limit on the Fibre Channel optical distance between SVC nodes and host servers. A server can therefore be attached to an edge switch in a core-edge configuration, with the SVC cluster at the core. The SVC supports up to three interswitch link (ISL) hops in the fabric, which means that the server and the SVC can be separated by up to five Fibre Channel links, four of which can be 10 km (6.2 miles) long if longwave small form-factor pluggables (SFPs) are used. The SVC nodes themselves contain shortwave SFPs and must therefore be within 300 m (0.186 miles) of the switch to which they are attached. The configuration shown in Figure 5-2 on page 140 is therefore supported.


Figure 5-2 Example of host connectivity

In this figure, the optical distance between SVC Node 1 and Host 2 is just over 40 km. For high performance servers, the rule is to avoid ISL hops, that is, to connect the servers to the same switch to which the SVC is connected, if possible.

Remember these limits when connecting host servers to an SVC:
- Up to 256 hosts per I/O Group, which results in a total of 1,024 hosts per cluster. Note that if the same host is connected to multiple I/O Groups of a cluster, it counts as a host in each of these groups.
- A total of 512 distinct configured host worldwide port names (WWPNs) are supported per I/O Group. This limit is the sum of the FC host ports and the host iSCSI names (an internal WWPN is generated for each iSCSI name) associated with all of the hosts that are associated with a single I/O Group.

Access from a server to an SVC cluster through the SAN fabric is defined by the use of switch zoning. Consider these rules for zoning hosts with the SVC:

Homogeneous HBA port zones
Switch zones containing HBAs must contain HBAs from similar host types and similar HBAs in the same host. For example, AIX and NT hosts must be in separate zones, and QLogic and Emulex adapters must also be in separate zones.

Important: A configuration that breaches this rule is unsupported because it can introduce instability into the environment.


HBA to SVC port zones
Place each host HBA in a separate zone along with two SVC ports, one from each node in the I/O Group. Do not place more than two SVC ports in a zone with an HBA, because this will produce more than the recommended number of paths as seen from the host multipath driver.

Recommended number of paths per volume (n+1 redundancy):
- With 2 HBA ports: zone HBA ports to SVC ports 1 to 2 for a total of 4 paths
- With 4 HBA ports: zone HBA ports to SVC ports 1 to 1 for a total of 4 paths
Optional (n+2 redundancy):
- With 4 HBA ports: zone HBA ports to SVC ports 1 to 2 for a total of 8 paths

Note: Here the term HBA port is used to describe the SCSI initiator, and SVC port is used to describe the SCSI target.

Maximum host paths per LU
For any volume, the number of paths through the SAN from the SVC nodes to a host must not exceed eight. For most configurations, four paths to an I/O Group (four paths to each volume that is provided by this I/O Group) are sufficient.

Note: The maximum number of host paths per LU is not to exceed 8.

Balanced Host Load across HBA ports
To obtain the best performance from a host with multiple ports, ensure that each host port is zoned with a separate group of SVC ports.

Balanced Host Load across SVC ports
To obtain the best overall performance of the subsystem and to prevent overloading, the workload to each SVC port must be equal. You can achieve this balance by zoning approximately the same number of host ports to each SVC port.

Figure 5-3 on page 142 shows an overview of a configuration where servers contain two single-port HBAs each. Attempt to distribute the attached hosts equally between two logical sets per I/O Group. Connect hosts from each set to the same group of SVC ports. This port group includes exactly one port from each SVC node in the I/O Group. The zoning defines the correct connections.

The port groups are defined as follows:
- Hosts in host set one of an I/O Group are always zoned to the P1 and P4 ports on both nodes, for example, N1/N2 of I/O Group zero.
- Hosts in host set two of an I/O Group are always zoned to the P2 and P3 ports on both nodes of an I/O Group.

You can create aliases for these port groups (per I/O Group):
Fabric A: IOGRP0_PG1 N1_P1;N2_P1, IOGRP0_PG2 N1_P3;N2_P3
Fabric B: IOGRP0_PG1 N1_P4;N2_P4, IOGRP0_PG2 N1_P2;N2_P2

Create host zones by always using the host port WWPN plus the PG1 alias for hosts in the first host set. Always use the host port WWPN plus the PG2 alias for hosts from the second host set. If a host has to be zoned to multiple I/O Groups, simply add the PG1 or PG2 aliases from the specific I/O Groups to the host zone. Using this schema provides four paths to one I/O Group for each host and helps to maintain an equal distribution of host connections on the SVC ports. Figure 5-3 shows an overview of this host zoning schema.

Figure 5-3 Overview of four-path host zoning
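To make the aliasing concept more concrete, the following fragment sketches how the Fabric A port-group aliases and one host zone could be defined on a Brocade FOS-based switch. The switch commands are standard Brocade zoning commands, but the fabric type, alias names, WWPNs, and configuration name are assumptions for illustration only and are not taken from the lab environment used in this book.

alicreate "IOGRP0_PG1", "50:05:07:68:01:40:11:01; 50:05:07:68:01:40:22:01"
alicreate "IOGRP0_PG2", "50:05:07:68:01:40:11:03; 50:05:07:68:01:40:22:03"
zonecreate "z_host1_IOGRP0_PG1", "10:00:00:00:c9:11:22:33; IOGRP0_PG1"
cfgadd "FABRIC_A_CFG", "z_host1_IOGRP0_PG1"
cfgenable "FABRIC_A_CFG"

Hosts in the second host set would reference the IOGRP0_PG2 alias instead, which keeps the host connections balanced across the SVC ports.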

When possible, use the minimum number of paths necessary to achieve a sufficient level of redundancy. For the SVC environment, no more than four paths per I/O Group are required to accomplish this. Remember that all paths must be managed by the multipath driver on the host side. If we assume a server is connected through four ports to the SVC, each volume is seen through eight paths. With 125 volumes mapped to this server, the multipath driver has to support handling up to 1000 active paths (8 x 125).

You can find configuration and operational details about the IBM Subsystem Device Driver (SDD) in the Multipath Subsystem Device Driver User's Guide, GC52-1309, at the following website:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S7000303&aid=1

For hosts using four HBAs/ports with eight connections to an I/O Group, use the zoning schema that is shown in Figure 5-4 on page 143. You can combine this schema with the previous four-path zoning schema.


Figure 5-4 Overview of eight-path host zoning

5.2.2 Port mask


The port mask feature is available in SVC. By default, port masking is set such that all attached hosts can see the same set of SCSI logical unit numbers (LUNs) from each of the four FC ports on each node in the respective I/O Group. The port mask is associated with a host object. The port mask controls which SVC (target) ports a particular host can access. The port mask applies to logins from any of the host (initiator) ports associated with the host object in the configuration model. The port mask consists of four binary bits, represented in the command-line interface (CLI) as 0 or 1. The rightmost bit is associated with FC port 1 on each node. The leftmost bit is associated with port 4. A 1 in any particular bit position allows access to that port and a zero denies access. The default port mask is 1111, preserving the behavior of the product prior to the introduction of this feature. From the GUI, you can use the port mask feature as shown on Figure 5-5 on page 144.


Figure 5-5 Port Mask feature

For each login between an HBA port and an SVC node port, SVC allows access based on the port mask defined within the host object to which the HBA belongs. If access is denied, SVC responds to SCSI commands as though the HBA port is unknown to the SVC.
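As a minimal CLI sketch, assuming that your SVC code level exposes the -mask parameter on the svctask mkhost and svctask chhost commands, a host object that is only allowed to log in to ports 1 and 2 of each node could be defined as follows. The host name and WWPNs are hypothetical:

svctask mkhost -name testhost01 -hbawwpn 2100000000000001:2100000000000002 -mask 0011
svctask chhost -mask 1111 testhost01

The second command restores the default mask of 1111, which allows the host to use all four FC ports again.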

5.3 iSCSI
iSCSI is a block-level protocol that encapsulates SCSI commands into TCP/IP packets and, thereby, leverages an existing IP network instead of requiring FC HBAs and SAN fabric infrastructure. The iSCSI standard is defined by RFC 3720. For SAN Volume Controller, only connections from iSCSI-attached hosts to the nodes are supported. The network interface controller (NIC) ports that carry the iSCSI traffic are the same ports that carry the cluster configuration (management) traffic.

Important: iSCSI connections from SAN Volume Controller nodes to storage systems are not supported.

5.3.1 Initiators and targets


An iSCSI client, which is known as an (iSCSI) initiator, sends SCSI commands over an IP network to an iSCSI target. We refer to a single iSCSI initiator or iSCSI target as an iSCSI node. An iSCSI target refers to a storage resource that is located on an iSCSI server, or, to be more precise, to one of potentially many instances of iSCSI nodes running on that server as a target. For maximum performance, use a gigabit Ethernet adapter that transmits 1000 megabits per second (Mbps) for the connection between the iSCSI host and the iSCSI target.


5.3.2 Nodes
There are one or more iSCSI nodes within a network entity. The iSCSI node is accessible through one or more network portals. A network portal is a component of a network entity that has a TCP/IP network address and that can be used by an iSCSI node. An iSCSI node is identified by its unique iSCSI name, which is referred to as an IQN. Remember that this name serves only for the identification of the node; it is not the node's address, and in iSCSI, the name is separated from the addresses. This separation allows multiple iSCSI nodes to use the same addresses or, as implemented in the SVC, the same iSCSI node to use multiple addresses.

5.3.3 IQN
An SVC cluster can provide up to eight iSCSI targets, one per node. Each SVC node has its own IQN, which by default is in this form:

iqn.1986-03.com.ibm:2145.<clustername>.<nodename>

An iSCSI host in the SVC is defined by specifying its iSCSI initiator names. This is an example of an IQN of a Windows Server host:

iqn.1991-05.com.microsoft:itsoserver01

During the configuration of an iSCSI host in the SVC, you must specify the host's initiator IQNs. You can read about host creation in detail in Chapter 9, SAN Volume Controller operations using the command-line interface on page 439, and in Chapter 10, SAN Volume Controller operations using the GUI on page 579.

An alias string can also be associated with an iSCSI node. The alias allows an organization to associate a user-friendly string with the iSCSI name. However, the alias string is not a substitute for the iSCSI name. Figure 5-6 on page 146 shows an overview of the iSCSI implementation in the SVC.


Figure 5-6 SVC iSCSI overview
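As an illustration of defining such a host on the cluster, the following command sketch creates a host object for the Windows server IQN shown previously; the host object name is arbitrary:

svctask mkhost -name itsoserver01 -iscsiname iqn.1991-05.com.microsoft:itsoserver01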

A host that is using iSCSI as the communication protocol to access its volumes on an SVC cluster uses its single or multiple Ethernet adapters to connect to an IP LAN. The nodes of the SVC cluster are connected to the LAN by the existing 1 Gbps Ethernet ports on the node. For iSCSI, both ports can be used. Note that Ethernet link aggregation (port trunking) or channel bonding for the SVC nodes Ethernet ports is not supported for the 1 Gbps ports in this release. For each SVC node, that is, for each instance of an iSCSI target node in the SVC node, two IPv4 and two IPv6 addresses or iSCSI network portals can be defined. Figure 2-10 on page 31 shows one IPv4 and one IPv6 address per Ethernet port.

5.3.4 Setting up the host server


The following basic procedure must be performed when setting up a host server for use as an iSCSI initiator with SAN Volume Controller volumes. The specific steps vary depending on the particular host type and operating system that is involved.

To configure a host, first select a software-based iSCSI initiator or a hardware-based iSCSI initiator. For example, the software-based iSCSI initiator can be a Microsoft Windows iSCSI software initiator, and the hardware-based iSCSI initiator can be an iSCSI host bus adapter inside the host server.

To set up your host server for use as an iSCSI software-based initiator with SAN Volume Controller volumes, perform the following steps:
1. Set up your SAN Volume Controller cluster for iSCSI:
   a. Select a set of IPv4 or IPv6 addresses for the clustered Ethernet ports on the nodes that are in the I/O Groups that will use the iSCSI volumes.


   b. Configure the node Ethernet ports on each node in the cluster with the svctask cfgportip command.
   c. Verify that you have configured the node and the clustered Ethernet ports correctly by reviewing the output of the svcinfo lsportip and svcinfo lsclusterip commands.
   d. Use the svctask mkvdisk command to create volumes on the SAN Volume Controller cluster.
   e. Use the svctask mkhost command to create a host object on the SAN Volume Controller that describes the iSCSI server initiator to which the volumes are to be mapped.
   f. Use the svctask mkvdiskhostmap command to map the volume to the host object in the SAN Volume Controller (a combined sketch of these cluster-side commands follows this procedure).
2. Set up your host server:
   a. Ensure that you have configured your IP interfaces on the server.
   b. Install the software for the iSCSI software-based initiator on the server.
   c. On the host server, run the configuration methods for iSCSI so that the host server iSCSI initiator logs in to the SAN Volume Controller cluster and discovers the SAN Volume Controller volumes. The host then creates host devices for the volumes.
3. After the host devices are created, you can use them with your host applications.
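The cluster-side part of this procedure (step 1) might look similar to the following sequence. The IP addresses, object names, pool name, and volume size are purely illustrative:

svctask cfgportip -node 1 -ip 10.10.10.11 -mask 255.255.255.0 -gw 10.10.10.1 1
svctask cfgportip -node 2 -ip 10.10.10.12 -mask 255.255.255.0 -gw 10.10.10.1 1
svcinfo lsportip
svcinfo lsclusterip
svctask mkvdisk -mdiskgrp MDG_0_DS45 -iogrp 0 -size 10 -unit gb -name iscsi_vol01
svctask mkhost -name iscsihost01 -iscsiname iqn.1991-05.com.microsoft:itsoserver01
svctask mkvdiskhostmap -host iscsihost01 iscsi_vol01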

5.3.5 Volume discovery


Hosts can discover volumes through one of the following three mechanisms:
- Internet Storage Name Service (iSNS): SVC can register itself with an iSNS name server; the IP address of this server is set using the svctask chcluster command. A host can then query the iSNS server for available iSCSI targets.
- Service Location Protocol (SLP): The SVC node runs an SLP daemon, which responds to host requests. This daemon reports the available services on the node. One service is the CIMOM, which runs on the configuration node; the iSCSI I/O service can now also be reported.
- SCSI Send Target request: The host can also send a Send Target request using the iSCSI protocol to the iSCSI TCP/IP port (port 3260). You must define the network portal IP addresses of the iSCSI targets before a discovery can be started (a host-side sketch follows this list).
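As an example of the third mechanism, a Windows host running the Microsoft iSCSI software initiator can trigger a SendTargets discovery from the command line against one of the node Ethernet port addresses; the address shown below is hypothetical:

C:\> iscsicli AddTargetPortal 10.10.10.11 3260
C:\> iscsicli ListTargets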

5.3.6 Authentication
Authentication of hosts is optional; by default, it is disabled. The user can choose to enable Challenge Handshake Authentication Protocol (CHAP) authentication, which involves sharing a CHAP secret between the cluster and the host. If the correct key is not provided by the host, the SVC does not allow it to perform I/O to volumes. The cluster can also be assigned a CHAP secret.

A new feature with iSCSI is that the IP addresses that are used to address an iSCSI target on an SVC node can move between the nodes of an I/O Group. IP addresses are only moved from one node to its partner node if a node goes through a planned or unplanned restart. If the Ethernet link to the SVC cluster fails due to a cause outside of the SVC (such as the cable being disconnected, the Ethernet router failing, and so on), the SVC makes no attempt to fail over an IP address to restore IP access to the cluster. To enable validation of


the Ethernet access to the nodes, each node responds to ping at the standard one-per-second rate without frame loss.

There is a concept, used for handling the iSCSI IP address failover, that is called a clustered Ethernet port. A clustered Ethernet port consists of one physical Ethernet port on each node in the cluster. The clustered Ethernet port contains configuration settings that are shared by all of these ports. These clustered ports are referred to as Port 1 and Port 2 in the CLI or GUI on each node of an SVC cluster. Clustered Ethernet ports can be used for iSCSI or management ports.

Figure 5-7 shows an example of an iSCSI target node failover. It gives a simplified overview of what happens during a planned or unplanned node restart in an SVC I/O Group:
1. During normal operation, one iSCSI target node instance is running on each SVC node. All of the IP addresses (IPv4/IPv6) belonging to this iSCSI target, including the management addresses if the node acts as the configuration node, are presented on the two ports (P1/P2) of a node.
2. During a restart of an SVC node (N1), the iSCSI target node, including all of its network portal (IPv4/IPv6) IP addresses defined on Port1/Port2 and the management (IPv4/IPv6) IP addresses (if N1 acted as the configuration node), fails over to Port1/Port2 of the partner node within the I/O Group, that is, node N2. An iSCSI initiator running on a server executes a reconnect to its iSCSI target, that is, the same IP addresses now presented by the other node of the SVC cluster.
3. As soon as the node (N1) has finished its restart, the iSCSI target node (including its IP addresses) running on N2 fails back to N1. Again, the iSCSI initiator running on a server executes a reconnect to its iSCSI target. The management addresses do not fail back; N2 remains in the role of the configuration node for this cluster.

Figure 5-7 iSCSI node failover scenario

From a server perspective, it is not required to have a multipathing driver (MPIO) in place to be able to handle an SVC node failover. In the case of a node restart, the server simply reconnects to the IP addresses of the iSCSI target node that will reappear after several seconds on the ports of the partner node.

A host multipathing driver for iSCSI is required in these situations:
- To protect a server from network link failures, including port failures on the SVC nodes
- To protect a server from a server HBA failure (if two HBAs are in use)
- To protect a server from network failures, if the server is connected through two HBAs to two separate networks
- To provide load balancing on the server's HBAs and the network links

The commands for the configuration of the iSCSI IP addresses have been separated from the configuration of the cluster IP addresses. The following commands are new commands for managing iSCSI IP addresses:
- The svcinfo lsportip command lists the iSCSI IP addresses assigned for each port on each node in the cluster.
- The svctask cfgportip command assigns an IP address to each node's Ethernet port for iSCSI I/O.

The following commands are new commands for managing the cluster IP addresses:
- The svcinfo lsclusterip command returns a list of the cluster management IP addresses configured for each port.
- The svctask chclusterip command modifies the IP configuration parameters for the cluster.

For a detailed description about how to use these commands, see Chapter 9, SAN Volume Controller operations using the command-line interface on page 439. The parameters for remote services (ssh and Web services) remain associated with the cluster object. During a software upgrade, the configuration settings for the cluster are used to configure clustered Ethernet Port 1.

For iSCSI-based access, using two separate networks and separating iSCSI traffic within the networks by using a dedicated VLAN path for storage traffic prevents any IP interface, switch, or target port failure from compromising the host server's access to the volumes.

5.4 AIX-specific information


The following section details specific information that relates to the connection of AIX-based hosts in an SVC environment. AIX-specific information: In this section, the IBM System p information applies to all AIX hosts that are listed on the SVC interoperability support website, including IBM System i partitions and IBM JS blades.

5.4.1 Configuring the AIX host


The following list outlines the steps required to attach SVC volumes to an AIX host:
1. Install the HBAs in the AIX host system.
2. Ensure that you have installed the correct operating systems and version levels on your host, including any updates and Authorized Program Analysis Reports (APARs) for the operating system.


3. Connect the AIX host system to the FC switches.
4. Configure the FC switch zoning.
5. Install the 2145 host attachment support package.
6. Install and configure the Subsystem Device Driver Path Control Module (SDDPCM).
7. Perform the logical configuration on the SAN Volume Controller to define the host, volumes, and host mapping.
8. Run cfgmgr to discover and configure the SVC volumes.

The following sections detail the current support information. It is vital that you check the websites that are listed regularly for any updates.

5.4.2 Operating system versions and maintenance levels


At the time of writing, the following AIX levels are supported:
- AIX V4.3.3
- AIX 5L V5.1
- AIX 5L V5.2
- AIX 5L V5.3
- AIX V6.1.3

For the latest information, and device driver support, always refer to the following website:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html

5.4.3 HBAs for IBM System p hosts


Ensure that your IBM System p AIX hosts contain supported host bus adapters (HBAs). Refer to the following website to obtain current interoperability information: http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html Note: The maximum number of FC ports that are supported in a single host (or logical partition) is four. These ports can be four single-port adapters or two dual-port adapters or a combination, as long as the maximum number of ports that are attached to the SAN Volume Controller does not exceed four.

5.4.4 Configuring fast fail and dynamic tracking


For hosts running AIX 5L V5.2 or later operating systems, enable both fast fail and dynamic tracking. Perform the following steps to configure your host system to use the fast fail and dynamic tracking attributes: 1. Issue the following command to set the FC SCSI I/O Controller Protocol Device to each adapter: chdev -l fscsi0 -a fc_err_recov=fast_fail The preceding command was for adapter fscsi0. Example 5-1 on page 151 shows the command for both adapters on our test system running AIX 5L V5.3.


Example 5-1 Enable fast fail

#chdev -l fscsi0 -a fc_err_recov=fast_fail
fscsi0 changed
#chdev -l fscsi1 -a fc_err_recov=fast_fail
fscsi1 changed

2. Issue the following command to enable dynamic tracking for each FC device: chdev -l fscsi0 -a dyntrk=yes The preceding example command was for adapter fscsi0. Example 5-2 shows the command for both adapters on our test system running AIX 5L V5.3.
Example 5-2 Enable dynamic tracking

#chdev -l fscsi0 -a dyntrk=yes
fscsi0 changed
#chdev -l fscsi1 -a dyntrk=yes
fscsi1 changed

Note: The fast fail and dynamic tracking attributes do not persist through an adapter delete and reconfigure. Thus, if the adapters are deleted and then configured back into the system, these attributes will be lost and will need to be reapplied.
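To verify that both attributes are currently active on an adapter (for example, after it has been deleted and reconfigured), you can list them with lsattr; this is a simple sketch using standard AIX commands:

# lsattr -El fscsi0 | egrep "fc_err_recov|dyntrk"
# lsattr -El fscsi1 | egrep "fc_err_recov|dyntrk"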

Host adapter configuration settings


You can display the availability of installed host adapters by using the command shown in Example 5-3.
Example 5-3 FC host adapter availability

#lsdev -Cc adapter |grep fcs
fcs0 Available 1Z-08 FC Adapter
fcs1 Available 1D-08 FC Adapter

You can display the worldwide port number (WWPN), along with other attributes including firmware level, by using the command shown in Example 5-4. Note that the WWPN is represented as Network Address.
Example 5-4 FC host adapter settings and WWPN

#lscfg -vpl fcs0
  fcs0             U0.1-P2-I4/Q1   FC Adapter

        Part Number.................00P4494
        EC Level....................A
        Serial Number...............1E3120A68D
        Manufacturer................001E
        Device Specific.(CC)........2765
        FRU Number..................00P4495
        Network Address.............10000000C932A7FB
        ROS Level and ID............02C03951
        Device Specific.(Z0)........2002606D
        Device Specific.(Z1)........00000000
        Device Specific.(Z2)........00000000
        Device Specific.(Z3)........03000909
        Device Specific.(Z4)........FF401210
        Device Specific.(Z5)........02C03951
        Device Specific.(Z6)........06433951
        Device Specific.(Z7)........07433951
        Device Specific.(Z8)........20000000C932A7FB
        Device Specific.(Z9)........CS3.91A1
        Device Specific.(ZA)........C1D3.91A1
        Device Specific.(ZB)........C2D3.91A1
        Device Specific.(YL)........U0.1-P2-I4/Q1

  PLATFORM SPECIFIC

  Name:  fibre-channel
    Model:  LP9002
    Node:  fibre-channel@1
    Device Type:  fcp
    Physical Location: U0.1-P2-I4/Q1

5.4.5 Installing the 2145 host attachment support package


To configure SVC volumes to an AIX host with the proper device type of 2145, you must have the 2145 host attachment support fileset installed prior to running cfgmgr. Running cfgmgr prior to installing the host attachment support fileset results in the LUNs being configured as Other SCSI Disk Drives, and they are not recognized by SDDPCM. To correct the device type, the hdisks must be deleted using rmdev -dl hdiskX and then cfgmgr must be rerun.

Perform the following steps to install the host attachment support package:
1. Access the following website:
   http://www.ibm.com/servers/storage/support/software/sdd/downloading.html
2. Select Host Attachment Scripts for AIX.
3. Select the Host Attachment Script for SDDPCM from the available options.
4. Download the AIX host attachment fileset for your multipath device driver (package devices.fcp.disk.ibm.mpio.rte).
5. Follow the instructions that are provided on the website or any readme files to install the script.
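Assuming that the downloaded fileset has been placed in the current directory, the installation and subsequent device configuration might look like the following sketch (the exact fileset level and directory are assumptions):

# inutoc .
# installp -acXd . devices.fcp.disk.ibm.mpio.rte
# lslpp -l devices.fcp.disk.ibm.mpio.rte
# cfgmgr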

5.4.6 Subsystem Device Driver Path Control Module


The Subsystem Device Driver Path Control Module (SDDPCM) is a loadable path control module for supported storage devices that supplies path management functions and error recovery algorithms. When the supported storage devices are configured as Multipath I/O (MPIO) devices, SDDPCM is loaded as part of the AIX MPIO FCP (Fibre Channel Protocol) or AIX MPIO SAS (serial-attached SCSI) device driver during the configuration. The AIX MPIO device driver automatically discovers, configures, and makes available all storage device paths. SDDPCM then manages these paths to provide:
- High availability and load balancing of storage I/O
- Automatic path-failover protection
- Concurrent download of supported storage devices licensed machine code
- Prevention of a single-point failure

The AIX MPIO device driver along with SDDPCM enhances the data availability and I/O load balancing of SVC volumes.

Note: For AIX hosts, use the Subsystem Device Driver Path Control Module (SDDPCM) as the multipath software over the legacy Subsystem Device Driver (SDD). Although still supported, a discussion of SDD is beyond the scope of this publication. For information regarding SDD, see Multipath Subsystem Device Driver User's Guide, GC52-1309.

SDDPCM installation
Download the appropriate version of SDDPCM and install using the standard AIX installation procedure. The latest SDDPCM software versions are available at the following website: http://www-304.ibm.com/systems/support/supportsite.wss/supportresources?brandind=5 000033&familyind=5329528&taskind=2 Check the driver readme file and make sure your AIX system meets all prerequisites. Example 5-5 shows the appropriate version of SDDPCM downloaded into the /tmp/sddpcm directory. From here, we extract it and initiate the inutoc command, which generates a dot.toc (.toc) file that is needed by the installp command prior to installing SDDPCM. Finally, we initiate the installp command, which installs SDDPCM onto this AIX host.
Example 5-5 Installing SDDPCM on AIX

# ls -l
total 3232
-rw-r-----   1 root    system   1648640 Jul 15 13:24 devices.sddpcm.61.rte.tar
# tar -tvf devices.sddpcm.61.rte.tar
-rw-r-----  271001 449628 1638400 Oct 31 12:16:23 2007 devices.sddpcm.61.rte
# tar -xvf devices.sddpcm.61.rte.tar
x devices.sddpcm.61.rte, 1638400 bytes, 3200 media blocks.
# inutoc .
# ls -l
total 6432
-rw-r--r--   1 root    system       531 Jul 15 13:25 .toc
-rw-r-----   1 271001  449628   1638400 Oct 31  2007 devices.sddpcm.61.rte
-rw-r-----   1 root    system   1648640 Jul 15 13:24 devices.sddpcm.61.rte.tar
# installp -ac -d . all

Example 5-6 shows the lslpp command that can be used to check the version of SDDPCM currently installed.
Example 5-6 Checking SDDPCM device driver

# lslpp -l | grep sddpcm
  devices.sddpcm.61.rte      2.2.0.0  COMMITTED  IBM SDD PCM for AIX V61
  devices.sddpcm.61.rte      2.2.0.0  COMMITTED  IBM SDD PCM for AIX V61

Enabling the SDDPCM web interface is described in 5.14, Using SDDDSM, SDDPCM, and SDD web interface on page 224.


5.4.7 Configuring assigned volume using SDDPCM


We use an AIX host with host name Atlanta to demonstrate attaching SVC volumes to an AIX host. Example 5-7 shows host configuration prior to configuring SVC volumes. The lspv output shows existing hdisks and lsvg output shows existing Volume Group.
Example 5-7 Status of AIX host system Atlanta

# lspv
hdisk0          0009cdcaeb48d3a3    rootvg    active
hdisk1          0009cdcac26dbb7c    rootvg    active
hdisk2          0009cdcab5657239    rootvg    active
# lsvg
rootvg

Identify WWPNs of host adapter ports


Example 5-8 shows how the lscfg command can be used to list the WWPNs for all installed adapters. The WWPNs will be used later for mapping the SVC volumes.
Example 5-8 HBA information for host Atlantic

# lscfg -vl fcs* | egrep "fcs|Network"
  fcs1             U0.1-P2-I4/Q1   FC Adapter
        Network Address.............10000000C932A865
        Physical Location: U0.1-P2-I4/Q1
  fcs2             U0.1-P2-I5/Q1   FC Adapter
        Network Address.............10000000C94C8C1C

Display SVC configuration


The SVC CLI can be used to display the host configuration on the SVC and validate physical access from the host to the SVC. Example 5-9 shows the use of the lshost and lshostvdiskmap commands to obtain the following information:
1. Confirmation that a host definition has been properly defined for host Atlantic.
2. The WWPNs listed in Example 5-8 are logged in, with two logins each.
3. Atlantic has three volumes assigned to it, and the volume serial numbers are listed.
Example 5-9 SVC definitions for host system Atlantic

IBM_2145:ITSO-CLS2:admin>svcinfo lshost Atlantic
id 8
name Atlantic
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 10000000C94C8C1C
node_logged_in_count 2
state active
WWPN 10000000C932A865
node_logged_in_count 2
state active

IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Atlantic
id  name      SCSI_id  vdisk_id  vdisk_name    wwpn              vdisk_UID
8   Atlantic  0        14        Atlantic0001  10000000C94C8C1C  6005076801A180E90800000000000060
8   Atlantic  1        22        Atlantic0002  10000000C94C8C1C  6005076801A180E90800000000000061
8   Atlantic  2        23        Atlantic0003  10000000C94C8C1C  6005076801A180E90800000000000062
IBM_2145:ITSO-CLS2:admin>

Discover and configure LUNs


The cfgmgr command performs the discovery of the new LUNs and configures them into AIX. The following commands probe devices on the adapters individually:
# cfgmgr -l fcs1
# cfgmgr -l fcs2

The following command probes devices sequentially across all installed adapters:
# cfgmgr -vS

The lsdev command lists the three newly configured hdisks, represented as MPIO FC 2145 devices, as shown in Example 5-10.
Example 5-10 Volumes from SVC

# lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0   16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0   16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0  16 Bit LVD SCSI Disk Drive
hdisk3 Available 1D-08-02       MPIO FC 2145
hdisk4 Available 1D-08-02       MPIO FC 2145
hdisk5 Available 1D-08-02       MPIO FC 2145

The mkvg command can now be used to create a Volume Group with the three newly configured hdisks, as shown in Example 5-11.
Example 5-11 Running the mkvg command

# mkvg -y itsoaixvg hdisk3
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg
# mkvg -y itsoaixvg1 hdisk4
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg1
# mkvg -y itsoaixvg2 hdisk5
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg2

The lspv output now shows the new Volume Group label on each of the hdisks that were included in the Volume Groups, as seen in Example 5-12.
Example 5-12 Showing the vpath assignment into the Volume Group

# lspv
hdisk0          0009cdcaeb48d3a3    rootvg        active
hdisk1          0009cdcac26dbb7c    rootvg        active
hdisk2          0009cdcab5657239    rootvg        active
hdisk3          0009cdca28b589f5    itsoaixvg     active
hdisk4          0009cdca28b87866    itsoaixvg1    active
hdisk5          0009cdca28b8ad5b    itsoaixvg2    active

5.4.8 Using SDDPCM


SDDPCM is administered using the pcmpath command. This command is used to perform all administrative functions, such as displaying and changing the path state. The pcmpath query adapter command displays the current state of the adapters. In Example 5-13, we can see that both adapters are showing an optimal status, with State=NORMAL and Mode=ACTIVE.
Example 5-13 SDDPCM commands that are used to check the availability of the adapters

# pcmpath query adapter

Active Adapters :2

Adpt#   Name     State    Mode     Select   Errors   Paths   Active
    0   fscsi1   NORMAL   ACTIVE      407        0       6        6
    1   fscsi2   NORMAL   ACTIVE      425        0       6        6

The pcmpath query device command displays the current state of the devices. In Example 5-14, we can see the path State and Mode for each of the defined hdisks; all paths are showing an optimal status, with State=OPEN and Mode=NORMAL. Additionally, an asterisk (*) displayed next to a path indicates an inactive path that is configured to the non-preferred SVC node in the I/O Group.
Example 5-14 SDDPCM commands that are used to check the availability of the devices

# pcmpath query device

Total Devices : 3

DEV#:   3  DEVICE NAME: hdisk3  TYPE: 2145  ALGORITHM: Load Balance
SERIAL: 6005076801A180E90800000000000060
==========================================================================
Path#      Adapter/Path Name      State     Mode     Select     Errors
    0           fscsi1/path0       OPEN   NORMAL        152          0
    1*          fscsi1/path1       OPEN   NORMAL         48          0
    2*          fscsi2/path2       OPEN   NORMAL         48          0
    3           fscsi2/path3       OPEN   NORMAL        160          0

DEV#:   4  DEVICE NAME: hdisk4  TYPE: 2145  ALGORITHM: Load Balance
SERIAL: 6005076801A180E90800000000000061
==========================================================================
Path#      Adapter/Path Name      State     Mode     Select     Errors
    0*          fscsi1/path0       OPEN   NORMAL         37          0
    1           fscsi1/path1       OPEN   NORMAL         66          0
    2           fscsi2/path2       OPEN   NORMAL         71          0
    3*          fscsi2/path3       OPEN   NORMAL         38          0

DEV#:   5  DEVICE NAME: hdisk5  TYPE: 2145  ALGORITHM: Load Balance
SERIAL: 6005076801A180E90800000000000062
==========================================================================
Path#      Adapter/Path Name      State     Mode     Select     Errors
    0           fscsi1/path0       OPEN   NORMAL         66          0
    1*          fscsi1/path1       OPEN   NORMAL         38          0
    2*          fscsi2/path2       OPEN   NORMAL         38          0
    3           fscsi2/path3       OPEN   NORMAL         70          0
#

5.4.9 Creating and preparing volumes for use with AIX V6.1 and SDDPCM
The itsoaixvg Volume Group is created using hdisk3. A logical volume and a JFS2 file system are then created in this Volume Group, and the file system is mounted on the /itsoaixvg mount point, as shown in Example 5-15.
Example 5-15 Host system new Volume Group and file system configuration

# lsvg -o
itsoaixvg2
itsoaixvg1
itsoaixvg
rootvg
# crfs -v jfs2 -g itsoaixvg -a size=3G -m /itsoaixvg -p rw -a agblksize=4096
File system created successfully.
3145428 kilobytes total disk space.
New File System size is 6291456
# lsvg -l itsoaixvg
itsoaixvg:
LV NAME     TYPE       LPs   PPs   PVs   LV STATE       MOUNT POINT
loglv00     jfs2log    1     1     1     closed/syncd   N/A
fslv00      jfs2       384   384   1     closed/syncd   /itsoaixvg
#

5.4.10 Expanding an AIX volume


AIX supports dynamic volume expansion starting at AIX 5L Version 5.2. This capability allows a volume's capacity to be increased by the storage subsystem while the volume is actively in use by the host and applications. The following restrictions exist:
- The volume cannot belong to a concurrent-capable Volume Group.
- The volume cannot belong to a FlashCopy, Metro Mirror, or Global Mirror relationship.

The following steps outline how to expand a volume on an AIX host, where the volume is a volume from the SVC (a combined sketch follows these steps):
1. Display the current size of the SVC volume using the SVC CLI command svcinfo lsvdisk <VDisk_name>. The capacity of the volume as seen from the host is contained in the capacity field of the lsvdisk output, in GBs.
2. Identify the corresponding AIX hdisk by matching the vdisk_UID from the lsvdisk output with the SERIAL field of the pcmpath query device output.


3. Display the current AIX configured capacity using the lspv hdisk command. The capacity is shown in the TOTAL PPs field, in MBs.
4. To expand the capacity of the SVC volume, use the svctask expandvdisksize command.
5. After the capacity of the volume has been expanded, AIX needs to update its configured capacity. To initiate the capacity update on AIX, use the chvg -g vg_name command, where vg_name is the Volume Group in which the expanded volume resides. If AIX does not return any messages, the command was successful and the volume changes in this Volume Group have been saved. If AIX cannot see any changes in the volumes, it returns an explanatory message.
6. Display the new AIX configured capacity using the lspv hdisk command; again, the capacity is shown in the TOTAL PPs field, in MBs.
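Put together, expanding one of the volumes used earlier in this chapter might look like the following sketch; the 5 GB size increase and the use of hdisk3/itsoaixvg are illustrative only:

IBM_2145:ITSO-CLS2:admin>svctask expandvdisksize -size 5 -unit gb Atlantic0001

# lspv hdisk3 | grep "TOTAL PPs"
# chvg -g itsoaixvg
# lspv hdisk3 | grep "TOTAL PPs"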

5.4.11 Running SVC commands from an AIX host system


To issue CLI commands, you must install and prepare the SSH client system on the AIX host system. For AIX 5L V5.1 and later, you can get OpenSSH from the Bonus Packs. You also need its prerequisite, OpenSSL, from the AIX toolbox for Linux applications for Power Systems. For AIX V4.3.3, the software is available from the AIX toolbox for Linux applications. The AIX installation images from IBM developerWorks are available at this website: http://sourceforge.net/projects/openssh-aix Perform the following steps: 1. To generate the key files on AIX, issue the following command: ssh-keygen -t rsa -f filename The -t parameter specifies the type of key to generate: rsa1, rsa2, or dsa. The value for rsa2 is only rsa. For rsa1, the type must be rsa1. When creating the key to the SVC, use type rsa2. The -f parameter specifies the file names of the private and public keys on the AIX server (the public key gets the extension .pub after the file name). 2. Next, install the public key on the SVC by using the Master Console. Copy the public key to the Master Console and install the key to the SVC, as described in Chapter 4, SAN Volume Controller initial configuration on page 95. 3. On the AIX server, make sure that the private key and the public key are in the .ssh directory and in the home directory of the user. 4. To connect to the SVC and use a CLI session from the AIX host, issue the following command: ssh -l admin -i filename svc 5. You can also issue the commands directly on the AIX host, which is useful when making scripts. To do this, add the SVC commands to the previous command. For example, to list the hosts that are defined on the SVC, enter the following command: ssh -l admin -i filename svc svcinfo lshost In this command, -l admin is the user on the SVC to which we will connect, -i filename is the filename of the private key generated, and svc is the name or IP address of the SVC.
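As a concrete instance of these commands, with a hypothetical key file name and cluster address, the flow could look like this:

# ssh-keygen -t rsa -f /home/admin/.ssh/svc_rsa
# ssh -l admin -i /home/admin/.ssh/svc_rsa 10.10.10.100 svcinfo lshost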


5.5 Windows-specific information


In the following sections, we detail specific information about the connection of Windows-based hosts to the SVC environment.

5.5.1 Configuring Windows Server 2000, 2003, 2008 hosts


This section provides an overview of the requirements for attaching the SVC to a host running Windows Server 2000, Windows Server 2003, or Windows Server 2008. Before you attach the SVC to your host, make sure that all of the following requirements are fulfilled:
- For the Windows Server 2003 x64 Edition operating system, you must install the Hotfix from KB 908980. If you do not install it before operation, preferred pathing is not available. You can find the Hotfix at this website:
  http://support.microsoft.com/kb/908980
- Check the LUN limitations for your host system. Ensure that there are enough FC adapters installed in the server to handle the total number of LUNs that you want to attach.

5.5.2 Configuring Windows


To configure the Windows hosts, follow these steps: 1. Make sure that the latest OS service pack and Hotfixes are applied to your Microsoft server. 2. Use the latest firmware and driver levels on your host system. 3. Install the HBA or HBAs on the Windows server, as shown in 5.5.4, Host adapter installation and configuration on page 160. 4. Connect the Windows 2000/2003/2008 server FC host adapters to the switches. 5. Configure the switches (zoning). 6. Install the FC host adapter driver, as described in 5.5.3, Hardware lists, device driver, HBAs, and firmware levels on page 160. 7. Configure the HBA for hosts running Windows, as described in 5.5.4, Host adapter installation and configuration on page 160. 8. Check the HBA driver readme file for the required Windows registry settings, as described in 5.5.3, Hardware lists, device driver, HBAs, and firmware levels on page 160. 9. Check the disk timeout on Microsoft Windows Server, as described in 5.5.5, Changing the disk timeout on Microsoft Windows Server on page 162. 10.Install and configure SDD/Subsystem Device Driver Device Specific Module (SDDDSM). 11.Restart the Windows 2000/2003/2008 host system. 12.Configure the host, volumes, and host mapping in the SVC. 13.Use Rescan disk in Computer Management of the Windows server to discover the volumes that were created on the SAN Volume Controller.


5.5.3 Hardware lists, device driver, HBAs, and firmware levels


The latest information about supported hardware, device drivers, and firmware is available at this website:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html

On this page, browse to the V6.1.x section, select the Supported hardware list link, and then search for Windows. At this website, you will also find the hardware list for supported HBAs and the driver levels for Windows. Check the supported firmware and driver level for your HBA, and follow the manufacturer's instructions to upgrade the firmware and driver levels for each type of HBA. In most manufacturers' driver readme files, you will find instructions for the Windows registry parameters that have to be set for the HBA driver:
- For the Emulex HBA driver, SDD requires the port driver, not the miniport driver.
- For the QLogic HBA driver, SDDDSM requires the StorPort version of the miniport driver.
- For the QLogic HBA driver, SDD requires the scsiport version of the miniport driver.

5.5.4 Host adapter installation and configuration


Install the host adapters into your system. Refer to the manufacturers instructions for installation and configuration of the HBAs. In IBM System x servers, the HBA must always be installed in the first slots. If you install, for example, two HBAs and two network cards, the HBAs must be installed in slot 1 and slot 2, and the network cards can be installed in the remaining slots.

Configure the QLogic HBA for hosts running Windows


After you have installed the HBA in the server, and have applied the HBA firmware and device driver, you have to configure the HBA. Perform the following steps: 1. Restart the server. 2. When you see the QLogic banner, press the Ctrl+Q keys to open the FAST!UTIL menu panel. 3. From the Select Host Adapter menu, select the Adapter Type QLA2xxx. 4. From the Fast!UTIL Options menu, select Configuration Settings. 5. From the Configuration Settings menu, click Host Adapter Settings. 6. From the Host Adapter Settings menu, select the following values: a. Host Adapter BIOS: Disabled b. Frame size: 2048 c. Loop Reset Delay: 5 (minimum) d. Adapter Hard Loop ID: Disabled e. Hard Loop ID: 0 f. Spinup Delay: Disabled g. Connection Options: 1 - point to point only h. Fibre Channel Tape Support: Disabled i. Data Rate: 2 7. Press the Esc key to return to the Configuration Settings menu.


8. From the Configuration Settings menu, select Advanced Adapter Settings. 9. From the Advanced Adapter Settings menu, set the following parameters: a. Execution throttle: 100 b. Luns per Target: 0 c. Enable LIP Reset: No d. Enable LIP Full Login: Yes e. Enable Target Reset: No Note: If you are using a subsystem device driver (SDD) lower than 1.6, set Enable Target Reset to Yes. f. Login Retry Count: 30 g. Port Down Retry Count: 15 h. Link Down Timeout: 30 i. Extended error logging: Disabled (might be enabled for debugging) j. RIO Operation Mode: 0 k. Interrupt Delay Timer: 0 10.Press Esc to return to the Configuration Settings menu. 11.Press Esc. 12.From the Configuration settings modified window, select Save changes. 13.From the Fast!UTIL Options menu, select Select Host Adapter if more than one QLogic adapter were installed in your system. 14.Select the other Host Adapter and repeat all steps from step 4 to 12. 15.You must repeat this process for all installed QLogic adapters in your system. When you are done, press Esc to exit the QLogic BIOS and restart the server.

Configuring the Emulex HBA for hosts running Windows


After you have installed the Emulex HBA and driver, you must configure your HBA. For the Emulex HBA StorPort driver, first accept the default settings and then set the topology to 1 (1 = F Port Fabric). For the Emulex HBA FC Port driver, use the default settings and change the parameters to the parameters that are listed in Table 5-1.
Table 5-1 FC port driver changes

Parameter                                                     Recommended setting
Query name server for all N-ports (BrokenRSCN)                Enabled
LUN mapping (MapLuns)                                         Enabled (1)
Automatic LUN mapping (MapLuns)                               Enabled (1)
Allow multiple paths to SCSI target (MultipleSCSIClaims)      Enabled
Scan in device ID order (ScanDeviceIDOrder)                   Disabled
Translate queue full to busy (TransleteQueueFull)             Enabled
Retry timer (RetryTimer)                                      2000 milliseconds
Maximum number of LUNs (MaximumLun)                           Equal to or greater than the number of the SVC LUNs that are available to the HBA


Note: The parameters that are shown in Table 5-1 correspond to the parameters in HBAnywhere.

5.5.5 Changing the disk timeout on Microsoft Windows Server


This section describes how to change the disk I/O timeout value on Windows Server 2000, Windows Server 2003, and Windows Server 2008 operating systems. On your Windows server hosts, change the disk I/O timeout value to 60 in the Windows registry:
1. In Windows, click Start, and select Run.
2. In the dialog text box, type regedit and press Enter.
3. In the registry browsing tool, locate the HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Disk\TimeOutValue key.
4. Confirm that the value for the key is 60 (decimal value), and, if necessary, change the value to 60, as shown in Figure 5-8.

Figure 5-8 Regedit
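If you prefer to script this change instead of using the registry editor, the same value can be set with the Windows reg utility; this is a sketch, and you should verify the value afterward:

C:\> reg add HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue /t REG_DWORD /d 60 /f
C:\> reg query HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue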

5.5.6 Installing the SDD driver on Windows


At the time of writing, the SDD levels listed in Table 5-2 are supported.
Table 5-2 Currently supported SDD levels

Windows operating system                                                                     SDD level
NT 4                                                                                         1.5.1.1
Windows Server 2000 and Windows Server 2003 service pack (SP2) (32-bit)/2003 SP2 (IA-64)     1.6.4.0-1
Windows Server 2000 with Microsoft Cluster Server (MSCS) and Veritas Volume Manager/
Windows Server 2003 SP2 (32-bit) with MSCS and Veritas Volume Manager                        N/A

See the following website for the latest information about SDD for Windows: http://www-1.ibm.com/support/docview.wss?rs=540&context=ST52G7&dc=DA400&uid=ssg1S7 001350&loc=en_US&cs=utf-8&lang=en


Important: Use SDD only on existing systems where you do not want to change from SDD to SDDDSM. New operating systems will only be supported with SDDDSM. Before installing the SDD driver, the HBA driver has to be installed on your system. SDD requires the HBA SCSI port driver. After downloading the appropriate version of SDD from the website, extract the file and run setup.exe to install SDD. A command line will appear. Answer Y (Figure 5-9) to install the driver.

Figure 5-9 Confirm SDD installation

After the setup has completed, answer Y again to reboot your system (Figure 5-10).

Figure 5-10 Reboot system after installation

To check whether your SDD installation is complete, open the Windows Device Manager, expand SCSI and RAID Controllers, right-click Subsystem Device Driver Management, and click Properties (see Figure 5-11 on page 164).


Figure 5-11 Subsystem Device Driver Management

The Subsystem Device Driver Management Properties window opens. Select the Driver tab, and make sure that you have installed the correct driver version (see Figure 5-12).

Figure 5-12 Subsystem Device Driver Management Properties Driver tab

5.5.7 Installing the SDDDSM driver on Windows


The following sections show how to install the SDDDSM driver on Windows.


Windows Server 2003, Windows Server 2008, and MPIO


Microsoft Multi Path Input Output (MPIO) solutions are designed to work in conjunction with device-specific modules (DSMs) written by vendors, but the MPIO driver package does not, by itself, form a complete solution. This joint solution allows the storage vendors to design device-specific solutions that are tightly integrated with the Windows operating system. MPIO is not shipped with the Windows operating system. Instead, storage vendors must pack the MPIO drivers with their own DSM. IBM Subsystem Device Driver DSM (SDDDSM) is the IBM multipath I/O solution that is based on Microsoft MPIO technology. It is a device-specific module specifically designed to support IBM storage devices on Windows Server 2003 and Windows Server 2008 servers. The intention of MPIO is to achieve better integration of multipath storage with the operating system, and it allows the use of multipaths in the SAN infrastructure during the boot process for SAN boot hosts.

Subsystem Device Driver Device Specific Module for SVC


Subsystem Device Driver Device Specific Module (SDDDSM) installation is a package for the SVC device for the Windows Server 2003 and Windows Server 2008 operating systems. SDDDSM is the IBM multipath I/O solution that is based on Microsoft MPIO technology, and it is a device-specific module that is specifically designed to support IBM storage devices. Together with MPIO, it is designed to support the multipath configuration environments in the IBM System Storage SAN Volume Controller. It resides in a host system with the native disk device driver and provides the following functions:
- Enhanced data availability
- Dynamic I/O load-balancing across multiple paths
- Automatic path failover protection
- Concurrent download of licensed internal code
- Path-selection policies for the host system

Note the following points:
- There is no SDDDSM support for Windows Server 2000.
- For the HBA driver, SDDDSM requires the StorPort version of the HBA miniport driver.

Table 5-3 lists the SDDDSM driver levels that are supported at the time of writing.
Table 5-3 Currently supported SDDDSM driver levels

Windows operating system                                              SDD level
Windows Server 2003 SP2 (32-bit)/Windows Server 2003 SP2 (x64)        2.2.0.0-11
Windows Server 2008 (32-bit)/Windows Server 2008 (x64)                2.2.0.0-11

To check which levels are available, go to the website: http://www-1.ibm.com/support/docview.wss?rs=540&context=ST52G7&dc=DA400&uid=ssg1S7 001350&loc=en_US&cs=utf-8&langw=en#WindowsSDDDSM To download SDDDSM, go to the website: http://www-1.ibm.com/support/docview.wss?rs=540&context=ST52G7&dc=D430&uid=ssg1S40 00350&loc=en_US&cs=utf-8&lang=en The installation procedure for SDDDSM and SDD are the same, but remember that you have to use the StorPort HBA driver instead of the SCSI driver. We describe the SDD installation in 5.5.6, Installing the SDD driver on Windows on page 162. After completing the installation, you will see the Microsoft MPIO in Device Manager (Figure 5-13 on page 166).


Figure 5-13 Windows Device Manager: MPIO

We describe the SDDDSM installation for Windows Server 2008 in 5.7, Example configuration - attaching an SVC to Windows Server 2008 host on page 176.

5.6 Discovering assigned volumes in Windows Server 2000 and Windows Server 2003
In this section, we describe how to discover assigned volumes in Windows Server 2000 and Windows Server 2003. The figures show a Windows Server 2003 host with SDDDSM installed. Discovering the disks in Windows Server 2000 or with SDD is the same procedure. Before adding a new volume from the SVC, the Windows Server 2003 host system had the configuration that is shown in Figure 5-14 on page 167, with only local disks.


Figure 5-14 Windows Server 2003 host system before adding a new volume from SVC

We can check that the WWPN is logged into the SVC for the host named Senegal by entering the following command (Example 5-16): svcinfo lshost Senegal
Example 5-16 Host information for Senegal

IBM_2145:ITSO-CLS2:admin>svcinfo lshost Senegal
id 1
name Senegal
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B89B9C0
node_logged_in_count 2
state active
WWPN 210000E08B89CCC2
node_logged_in_count 2
state active

The configuration of the Senegal host, the Senegal_bas0001 volume, and the mapping between the host and the volume are defined in the SVC, as shown in Example 5-17. In our example, the Senegal_bas0002 and Senegal_bas0003 volumes have the same configuration as the Senegal_bas0001 volume.
Example 5-17 Host mapping: Senegal

IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Senegal
id  name     SCSI_id  vdisk_id  vdisk_name       wwpn              vdisk_UID
1   Senegal  0        7         Senegal_bas0001  210000E08B89B9C0  6005076801A180E9080000000000000F
1   Senegal  1        8         Senegal_bas0002  210000E08B89B9C0  6005076801A180E90800000000000010
1   Senegal  2        9         Senegal_bas0003  210000E08B89B9C0  6005076801A180E90800000000000011

IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk Senegal_bas0001
id 7
name Senegal_bas0001
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_0_DS45
capacity 10.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801A180E9080000000000000F
throttling 0
preferred_node_id 3
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_0_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize

We can also obtain the serial number of the volumes by entering the following command (Example 5-18): svcinfo lsvdiskhostmap Senegal_bas0001
Example 5-18 Volume serial number: Senegal_bas0001

IBM_2145:ITSO-CLS2:admin>svcinfo lsvdiskhostmap Senegal_bas0001
id  name             SCSI_id  host_id  host_name  wwpn              vdisk_UID
7   Senegal_bas0001  0        1        Senegal    210000E08B89B9C0  6005076801A180E9080000000000000F
7   Senegal_bas0001  0        1        Senegal    210000E08B89CCC2  6005076801A180E9080000000000000F

After installing the necessary drivers and the rescan disks operation completes, the new disks are found in the Computer Management window, as shown in Figure 5-15.

Figure 5-15 Windows Server 2003 host system with three new volumes from SVC

In Windows Device Manager, the disks are shown as IBM 2145 SCSI Disk Device (Figure 5-16 on page 170). The number of IBM 2145 SCSI Disk Devices that you see is equal to: (number of volumes) x (number of paths per I/O Group per HBA) x (number of HBAs) The IBM 2145 Multi-Path Disk Devices are the devices that are created by the multipath driver (Figure 5-16 on page 170). The number of these devices is equal to the number of volumes that are presented to the host.

Chapter 5. Host configuration

169

Figure 5-16 Windows Server 2003 Device Manager with assigned volumes

When following the SAN zoning recommendation, this calculation gives us, for one volume and a host with two HBAs:
(number of volumes) x (number of paths per I/O Group per HBA) x (number of HBAs) = 1 x 2 x 2 = 4 paths

You can check whether all of the paths are available by selecting Start → All Programs → Subsystem Device Driver (DSM) → Subsystem Device Driver (DSM). The SDD (DSM) command-line interface appears. Enter the following command to see which paths are available to your system (Example 5-19).
Example 5-19 Datapath query device

Microsoft Windows [Version 5.2.3790]
(C) Copyright 1985-2003 Microsoft Corp.

C:\Program Files\IBM\SDDDSM>datapath query device

Total Devices : 3

DEV#: 0  DEVICE NAME: Disk1 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002A
============================================================================
Path#    Adapter/Hard Disk            State   Mode     Select  Errors
    0    Scsi Port2 Bus0/Disk1 Part0  OPEN    NORMAL       47       0
    1    Scsi Port2 Bus0/Disk1 Part0  OPEN    NORMAL        0       0
    2    Scsi Port3 Bus0/Disk1 Part0  OPEN    NORMAL        0       0
    3    Scsi Port3 Bus0/Disk1 Part0  OPEN    NORMAL       28       0

DEV#: 1  DEVICE NAME: Disk2 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000010
============================================================================
Path#    Adapter/Hard Disk            State   Mode     Select  Errors
    0    Scsi Port2 Bus0/Disk2 Part0  OPEN    NORMAL        0       0
    1    Scsi Port2 Bus0/Disk2 Part0  OPEN    NORMAL      162       0
    2    Scsi Port3 Bus0/Disk2 Part0  OPEN    NORMAL      155       0
    3    Scsi Port3 Bus0/Disk2 Part0  OPEN    NORMAL        0       0

DEV#: 2  DEVICE NAME: Disk3 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000011
============================================================================
Path#    Adapter/Hard Disk            State   Mode     Select  Errors
    0    Scsi Port2 Bus0/Disk3 Part0  OPEN    NORMAL       51       0
    1    Scsi Port2 Bus0/Disk3 Part0  OPEN    NORMAL        0       0
    2    Scsi Port3 Bus0/Disk3 Part0  OPEN    NORMAL        0       0
    3    Scsi Port3 Bus0/Disk3 Part0  OPEN    NORMAL       25       0

C:\Program Files\IBM\SDDDSM>

Note: All path states have to be OPEN. The path state can be OPEN or CLOSE. If one path state is CLOSE, it means that the system is missing a path that it saw during startup. If you restart your system, the CLOSE paths are removed from this view.

5.6.1 Extending a Windows Server 2000 or Windows Server 2003 volume


You can expand a volume in the SVC cluster, even if it is mapped to a host. Certain operating systems, such as Windows Server 2000 and Windows Server 2003, can handle volumes being expanded even while the host has applications running.

A volume that is defined in a FlashCopy, Metro Mirror, or Global Mirror mapping on the SVC cannot be expanded unless the mapping is removed, which means that the FlashCopy, Metro Mirror, or Global Mirror relationship on that volume has to be stopped before it is possible to expand the volume.

Important: For volume expansion to work on Windows Server 2000, apply Windows Server 2000 Hotfix Q327020, which is available from the Microsoft Knowledge Base at this website:
http://support.microsoft.com/kb/327020
If you want to expand a logical drive in an extended partition in Windows Server 2003, apply the hotfix from KB 841650, which is available from the Microsoft Knowledge Base at this website:
http://support.microsoft.com/kb/841650/en-us
Use the updated Diskpart version for Windows Server 2003, which is available from the Microsoft Knowledge Base at this website:
http://support.microsoft.com/kb/923076/en-us

If the volume is part of a Microsoft Cluster (MSCS), Microsoft recommends that you shut down all nodes except one, and that you stop the applications in the resource that uses the volume to be expanded before expanding the volume. Applications running in other resources can continue. After expanding the volume, start the application and the resource, and then restart the other nodes in the MSCS.
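Before expanding a volume, it is worth confirming that it is not part of a FlashCopy, Metro Mirror, or Global Mirror relationship. A minimal check from the SVC CLI (the volume name is the example used in this chapter):

svcinfo lsvdisk Senegal_bas0001

In the output (compare Example 5-17 on page 167), the FC_id, FC_name, RC_id, and RC_name fields must be blank before the expansion is attempted.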


To expand a volume in use on Windows Server 2000 and Windows Server 2003, we used Diskpart. The Diskpart tool is part of Windows Server 2003; for other Windows versions, you can download it free of charge from Microsoft. Diskpart is a tool that was developed by Microsoft to ease the administration of storage. It is a command-line interface that you can use to manage disks, partitions, and volumes by using scripts or direct input on the command line. You can list disks and volumes, select them, and, after selecting them, get more detailed information, create partitions, extend volumes, and more. For more information, see the Microsoft website:
http://www.microsoft.com
Or see the following website:
http://support.microsoft.com/default.aspx?scid=kb;en-us;304736&sd=tech

The following discussion shows an example of how to expand a volume on a Windows Server 2003 host, where the volume is provided by the SVC.

To list the volume size, use the svcinfo lsvdisk <VDisk_name> command. For Senegal_bas0001, before expanding the volume, this command shows that the capacity is 10 GB and also what the vdisk_UID is (Example 5-17 on page 167). To find which vpath this volume uses on the Windows Server 2003 host, we use the datapath query device SDD command on the Windows host (Figure 5-17). We can see that the serial number 6005076801A180E9080000000000000F of Disk1 on the Windows host (Figure 5-17) matches the vdisk_UID of Senegal_bas0001 (Example 5-17 on page 167). To see the size of the volume on the Windows host, we use Disk Management, as shown in Figure 5-17.

Figure 5-17 Windows Server 2003: Disk Management


This window shows that the volume size is 10 GB. To expand the volume on the SVC, we use the svctask expandvdisksize command to increase the capacity on the volume. In this example, we expand the volume by 1 GB (Example 5-20).
Example 5-20 svctask expandvdisksize command

IBM_2145:ITSO-CLS2:admin>svctask expandvdisksize -size 1 -unit gb Senegal_bas0001
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk Senegal_bas0001
id 7
name Senegal_bas0001
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_0_DS45
capacity 11.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801A180E9080000000000000F
throttling 0
preferred_node_id 3
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_0_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 11.00GB
real_capacity 11.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize

To check that the volume has been expanded, we use the svcinfo lsvdisk command. In Example 5-20, we can see that the Senegal_bas0001 volume has been expanded to 11 GB in capacity.

After performing a Disk Rescan in Windows, you will see the new unallocated space in Windows Disk Management, as shown in Figure 5-18.

Figure 5-18 Expanded volume in Disk Manager

This window shows that Disk1 now has 1 GB of new, unallocated capacity. To make this capacity available to the file system, use the following commands, as shown in Example 5-21:

diskpart        Starts DiskPart in a DOS prompt
list volume     Shows you all available volumes
select volume   Selects the volume to expand
detail volume   Displays details for the selected volume, including the unallocated capacity
extend          Extends the volume to the available unallocated space

Example 5-21 Using diskpart

C:\>diskpart

Microsoft DiskPart version 5.2.3790.3959
Copyright (C) 1999-2001 Microsoft Corporation.
On computer: SENEGAL

DISKPART> list volume

  Volume ###  Ltr  Label        Fs    Type       Size   Status   Info
  ----------  ---  -----------  ----  ---------  -----  -------  ------
  Volume 0     C                NTFS  Partition  75 GB  Healthy  System
  Volume 1     S   SVC_Senegal  NTFS  Partition  10 GB  Healthy
  Volume 2     D                      DVD-ROM     0 B   Healthy

DISKPART> select volume 1

Volume 1 is the selected volume.

DISKPART> detail volume

  Disk ###  Status   Size   Free     Dyn  Gpt
  --------  -------  -----  -------  ---  ---
* Disk 1    Online   11 GB  1020 MB

Readonly                : No
Hidden                  : No
No Default Drive Letter : No
Shadow Copy             : No

DISKPART> extend

DiskPart successfully extended the volume.

DISKPART> detail volume

  Disk ###  Status   Size   Free     Dyn  Gpt
  --------  -------  -----  -------  ---  ---
* Disk 1    Online   11 GB      0 B

Readonly                : No
Hidden                  : No
No Default Drive Letter : No
Shadow Copy             : No

After extending the volume, the detail volume command shows that there is no free capacity on the volume anymore. The list volume command shows the file system size. The Disk Management window also shows the new disk size; see Figure 5-19.

Figure 5-19 Disk Management after extending disk

The example here is referred to as a Windows Basic Disk. Dynamic disks can be expanded by expanding the underlying SVC volume. The new space will appear as unallocated space at the end of the disk.


In this case, you do not need to use the DiskPart tool. Instead, you can use Windows Disk Management functions to allocate the new space. Expansion works irrespective of the volume type (simple, spanned, mirrored, and so on) on the disk. Dynamic disks can be expanded without stopping I/O in most cases.

Important: Never try to upgrade your Basic Disk to Dynamic Disk, or vice versa, without backing up your data. This operation is disruptive to the data because the position of the logical block addresses (LBAs) on the disks changes.
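As an alternative to the interactive session that is shown in Example 5-21, DiskPart can also be driven from a script file with the /s option. The following sketch assumes that the volume to extend is volume 1 and that the script file name extend_svc.txt is a placeholder; adjust both to your environment:

C:\>echo select volume 1 > extend_svc.txt
C:\>echo extend >> extend_svc.txt
C:\>diskpart /s extend_svc.txt

This approach is useful when the same extension must be repeated on several servers.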

5.7 Example configuration - attaching an SVC to Windows Server 2008 host


This section describes an example configuration that shows the attachment of a Windows Server 2008 host system to the SVC. For more detailed information about Windows Server 2008 and the SVC see 5.5, Windows-specific information on page 159.

5.7.1 Installing SDDDSM on a Windows Server 2008 host


Download the HBA driver and the SDDDSM package and copy them to your host system. We describe information about the SDDDSM package to use in 5.5.7, Installing the SDDDSM driver on Windows on page 164. We list the HBA driver details in 5.5.3, Hardware lists, device driver, HBAs, and firmware levels on page 160. We perform the steps that are described in 5.5.2, Configuring Windows on page 159 to achieve this task. As a prerequisite for this example, we have already performed steps 1 to 5 for the hardware installation, SAN configuration is done, and the hotfixes are applied. The Disk timeout value is set to 60 seconds (see 5.5.5, Changing the disk timeout on Microsoft Windows Server on page 162), and we will start with the driver installation.

Installing the HBA driver


Perform these steps to install the HBA driver:
1. Extract the QLogic driver package to your hard drive.
2. Select Start → Run.
3. Enter the devmgmt.msc command, and click OK. The Device Manager appears.
4. Expand Storage Controllers.


5. Right-click the HBA, and select Update driver Software (Figure 5-20).

Figure 5-20 Windows Server 2008 driver update

6. Click Browse my computer for driver software (Figure 5-21).

Figure 5-21 Windows Server 2008 driver update

7. Enter the path to the extracted QLogic driver, and click Next (Figure 5-22 on page 178).


Figure 5-22 Windows Server 2008 driver update

8. Windows installs the driver (Figure 5-23).

Figure 5-23 Windows Server 2008 driver installation


9. When the driver update is complete, click Close to exit the wizard (Figure 5-24).

Figure 5-24 Windows Server 2008 driver installation

10.Repeat steps 1 to 8 for all of the HBAs that are installed in the system.

5.7.2 Installing SDDDSM


To install the SDDDSM driver on your system, perform the following steps: 1. Extract the SDDDSM driver package to a folder on your hard drive. 2. Open the folder with the extracted files. 3. Run the setup.exe command, and a DOS command prompt will appear. 4. Type Y and press Enter to install SDDDSM (Figure 5-25).

Figure 5-25 Installing SDDDSM

5. After the SDDDSM Setup is finished, type Y and press Enter to restart your system. After the reboot, the SDDDSM installation is complete. You can verify the installation completion in Device Manager, because the SDDDSM device will appear (Figure 5-26 on page 180), and the SDDDSM tools will have been installed (Figure 5-27 on page 180).


Figure 5-26 SDDDSM installation

The SDDDSM tools have been installed (Figure 5-27).

Figure 5-27 SDDDSM installation


5.7.3 Attaching SVC volumes to Windows Server 2008


Create the volumes on the SVC and map them to the Windows Server 2008 host. In this example, we have mapped three SVC disks to the Windows Server 2008 host named Diomede; see Example 5-22.
Example 5-22 SVC host mapping to host Diomede

IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Diomede
id name    SCSI_id vdisk_id vdisk_name   wwpn             vdisk_UID
0  Diomede 0       20       Diomede_0001 210000E08B0541BC 6005076801A180E9080000000000002B
0  Diomede 1       21       Diomede_0002 210000E08B0541BC 6005076801A180E9080000000000002C
0  Diomede 2       22       Diomede_0003 210000E08B0541BC 6005076801A180E9080000000000002D

Perform the following steps to use the devices on your Windows Server 2008 host: 1. Click Start, and click Run. 2. Enter the diskmgmt.msc command, and click OK. The Disk Management window opens. 3. Select Action, and click Rescan Disks (Figure 5-28).

Figure 5-28 Windows Server 2008: Rescan disks

4. The SVC disks will now appear in the Disk Management window (Figure 5-29 on page 182).


Figure 5-29 Windows Server 2008 Disk Management window

After you have assigned the SVC disks, they are also available in Device Manager. The three assigned drives are represented by SDDDSM/MPIO as IBM-2145 Multipath disk devices in the Device Manager (Figure 5-30).

Figure 5-30 Windows Server 2008 Device Manager


5. To check that the disks are available, select Start → All Programs → Subsystem Device Driver DSM, and click Subsystem Device Driver DSM (Figure 5-31). The SDDDSM Command Line Utility appears.

Figure 5-31 Windows Server 2008 Subsystem Device Driver DSM utility

6. Enter the datapath query device command and press Enter (Example 5-23). This command will display all of the disks and the available paths, including their states.
Example 5-23 Windows Server 2008 SDDDSM command-line utility

Microsoft Windows [Version 6.0.6001]
Copyright (c) 2006 Microsoft Corporation. All rights reserved.

C:\Program Files\IBM\SDDDSM>datapath query device

Total Devices : 3

DEV#: 0  DEVICE NAME: Disk1 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002B
============================================================================
Path#    Adapter/Hard Disk            State   Mode     Select  Errors
    0    Scsi Port2 Bus0/Disk1 Part0  OPEN    NORMAL        0       0
    1    Scsi Port2 Bus0/Disk1 Part0  OPEN    NORMAL     1429       0
    2    Scsi Port3 Bus0/Disk1 Part0  OPEN    NORMAL     1456       0
    3    Scsi Port3 Bus0/Disk1 Part0  OPEN    NORMAL        0       0

DEV#: 1  DEVICE NAME: Disk2 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002C
============================================================================
Path#    Adapter/Hard Disk            State   Mode     Select  Errors
    0    Scsi Port2 Bus0/Disk2 Part0  OPEN    NORMAL     1520       0
    1    Scsi Port2 Bus0/Disk2 Part0  OPEN    NORMAL        0       0
    2    Scsi Port3 Bus0/Disk2 Part0  OPEN    NORMAL        0       0
    3    Scsi Port3 Bus0/Disk2 Part0  OPEN    NORMAL     1517       0

DEV#: 2  DEVICE NAME: Disk3 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002D
============================================================================
Path#    Adapter/Hard Disk            State   Mode     Select  Errors
    0    Scsi Port2 Bus0/Disk3 Part0  OPEN    NORMAL       27       0
    1    Scsi Port2 Bus0/Disk3 Part0  OPEN    NORMAL     1396       0
    2    Scsi Port3 Bus0/Disk3 Part0  OPEN    NORMAL     1459       0
    3    Scsi Port3 Bus0/Disk3 Part0  OPEN    NORMAL        0       0

C:\Program Files\IBM\SDDDSM>

SAN zoning: When following the SAN zoning guidance, with one volume and a host with two HBAs, we get this result: (number of volumes) x (number of paths per I/O Group per HBA) x (number of HBAs) = 1 x 2 x 2 = four paths.

7. Right-click the disk in Disk Management, and select Online to place the disk online (Figure 5-32).

Figure 5-32 Windows Server 2008: Place disk online

8. Repeat step 7 for all of your attached SVC disks. 9. Right-click one disk again, and select Initialize Disk (Figure 5-33).

Figure 5-33 Windows Server 2008: Initialize Disk


10.Mark all of the disks that you want to initialize, and click OK (Figure 5-34).

Figure 5-34 Windows Server 2008: Initialize Disk

11.Right-click the unallocated disk space, and select New Simple Volume (Figure 5-35).

Figure 5-35 Windows Server 2008: New Simple Volume

12.The New Simple Volume Wizard window opens. Click Next. 13.Enter a disk size, and click Next (Figure 5-36).

Figure 5-36 Windows Server 2008: New Simple Volume


14.Assign a drive letter, and click Next (Figure 5-37).

Figure 5-37 Windows Server 2008: New Simple Volume

15.Enter a volume label, and click Next (Figure 5-38).

Figure 5-38 Windows Server 2008: New Simple Volume


16.Click Finish, and repeat this step for every SVC disk on your host system (Figure 5-39).

Figure 5-39 Windows Server 2008: Disk Management
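Steps 7 to 16 can also be scripted with the DiskPart utility, which can be useful when many SVC disks must be brought online, initialized, and formatted. The following sketch is based on the Windows Server 2008 DiskPart command set; the script file name, disk number, volume label, and drive letter are examples only and must be adapted to your host:

rem prepare_svc_disk.txt - run with: diskpart /s prepare_svc_disk.txt
select disk 1
attributes disk clear readonly
online disk
create partition primary
format fs=ntfs quick label="SVC_Disk1"
assign letter=S

Because the script formats the selected disk, verify the disk number first with the DiskPart list disk command, or by matching the serial numbers with datapath query device.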

5.7.4 Extending a Windows Server 2008 volume


Using SVC and Windows Server 2008 gives you the ability to extend volumes while they are in use. We describe the steps to extend a volume in 5.6.1, Extending a Windows Server 2000 or Windows Server 2003 volume on page 171. Windows Server 2008 also uses the DiskPart utility to extend volumes. To start it, select Start → Run, and enter DiskPart. The DiskPart utility appears. The procedure is exactly the same as the procedure in Windows Server 2003. Follow the Windows Server 2003 description to extend your volume.

5.7.5 Removing a disk on Windows


To remove a disk from Windows when the disk is an SVC volume, we follow the standard Windows procedure to make sure that there is no data on the disk that we want to preserve, that no applications are using the disk, and that no I/O is going to the disk. After completing this procedure, we remove the host mapping on the SVC. We must make sure that we are removing the correct volume. To verify this, we use SDD to find the serial number for the disk, and on the SVC, we use lshostvdiskmap to find the volume name and number. We also check that the SDD serial number on the host matches the UID on the SVC for the volume.

When the host mapping is removed, we perform a rescan for the disk. Disk Management on the server removes the disk, and the vpath goes into the CLOSE state on the server. We can verify these actions by using the datapath query device SDD command, but the closed vpath is not removed from this view until the server is rebooted.

In the following sequence of examples, we show how to remove an SVC volume from a Windows server. We show it on a Windows Server 2003 operating system, but the steps also apply to Windows Server 2000 and Windows Server 2008.


Figure 5-17 on page 172 shows the Disk Manager before removing the disk. We will remove Disk 1. To find the correct volume information, we find the Serial/UID number using SDD (Example 5-24).
Example 5-24 Removing SVC disk from the Windows server

C:\Program Files\IBM\SDDDSM>datapath query device Total Devices : 3

DEV#: 0  DEVICE NAME: Disk1 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000000F
============================================================================
Path#    Adapter/Hard Disk            State   Mode     Select  Errors
    0    Scsi Port2 Bus0/Disk1 Part0  OPEN    NORMAL     1471       0
    1    Scsi Port2 Bus0/Disk1 Part0  OPEN    NORMAL        0       0
    2    Scsi Port3 Bus0/Disk1 Part0  OPEN    NORMAL        0       0
    3    Scsi Port3 Bus0/Disk1 Part0  OPEN    NORMAL     1324       0

DEV#: 1  DEVICE NAME: Disk2 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000010
============================================================================
Path#    Adapter/Hard Disk            State   Mode     Select  Errors
    0    Scsi Port2 Bus0/Disk2 Part0  OPEN    NORMAL       20       0
    1    Scsi Port2 Bus0/Disk2 Part0  OPEN    NORMAL       94       0
    2    Scsi Port3 Bus0/Disk2 Part0  OPEN    NORMAL       55       0
    3    Scsi Port3 Bus0/Disk2 Part0  OPEN    NORMAL        0       0

DEV#: 2  DEVICE NAME: Disk3 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000011
============================================================================
Path#    Adapter/Hard Disk            State   Mode     Select  Errors
    0    Scsi Port2 Bus0/Disk3 Part0  OPEN    NORMAL      100       0
    1    Scsi Port2 Bus0/Disk3 Part0  OPEN    NORMAL        0       0
    2    Scsi Port3 Bus0/Disk3 Part0  OPEN    NORMAL        0       0
    3    Scsi Port3 Bus0/Disk3 Part0  OPEN    NORMAL       69       0

Knowing the Serial/UID of the volume and the host name Senegal, we find the host mapping to remove by using the lshostvdiskmap command on the SVC, and then we remove the actual host mapping (Example 5-25).
Example 5-25 Finding and removing the host mapping

IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Senegal
id name    SCSI_id vdisk_id vdisk_name      wwpn             vdisk_UID
1  Senegal 0       7        Senegal_bas0001 210000E08B89B9C0 6005076801A180E9080000000000000F
1  Senegal 1       8        Senegal_bas0002 210000E08B89B9C0 6005076801A180E90800000000000010
1  Senegal 2       9        Senegal_bas0003 210000E08B89B9C0 6005076801A180E90800000000000011

IBM_2145:ITSO-CLS2:admin>svctask rmvdiskhostmap -host Senegal Senegal_bas0001

IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Senegal
id name    SCSI_id vdisk_id vdisk_name      wwpn             vdisk_UID
1  Senegal 1       8        Senegal_bas0002 210000E08B89B9C0 6005076801A180E90800000000000010
1  Senegal 2       9        Senegal_bas0003 210000E08B89B9C0 6005076801A180E90800000000000011

Here, we can see that the volume is removed from the server. On the server, we then perform a disk rescan in Disk Management, and we now see that the correct disk (Disk1) has been removed, as shown in Figure 5-40.

Figure 5-40 Disk Management: Disk has been removed

SDD also shows us that the status for all paths to Disk1 has changed to CLOSE, because the disk is not available (Example 5-26 on page 190).


Example 5-26 SDD: Closed path

C:\Program Files\IBM\SDDDSM>datapath query device Total Devices : 3

DEV#: 0  DEVICE NAME: Disk1 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000000F
============================================================================
Path#    Adapter/Hard Disk            State   Mode     Select  Errors
    0    Scsi Port2 Bus0/Disk1 Part0  CLOSE   NORMAL     1471       0
    1    Scsi Port2 Bus0/Disk1 Part0  CLOSE   NORMAL        0       0
    2    Scsi Port3 Bus0/Disk1 Part0  CLOSE   NORMAL        0       0
    3    Scsi Port3 Bus0/Disk1 Part0  CLOSE   NORMAL     1324       0

DEV#: 1  DEVICE NAME: Disk2 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000010
============================================================================
Path#    Adapter/Hard Disk            State   Mode     Select  Errors
    0    Scsi Port2 Bus0/Disk2 Part0  OPEN    NORMAL       20       0
    1    Scsi Port2 Bus0/Disk2 Part0  OPEN    NORMAL      124       0
    2    Scsi Port3 Bus0/Disk2 Part0  OPEN    NORMAL       72       0
    3    Scsi Port3 Bus0/Disk2 Part0  OPEN    NORMAL        0       0

DEV#: 2  DEVICE NAME: Disk3 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000011
============================================================================
Path#    Adapter/Hard Disk            State   Mode     Select  Errors
    0    Scsi Port2 Bus0/Disk3 Part0  OPEN    NORMAL      134       0
    1    Scsi Port2 Bus0/Disk3 Part0  OPEN    NORMAL        0       0
    2    Scsi Port3 Bus0/Disk3 Part0  OPEN    NORMAL        0       0
    3    Scsi Port3 Bus0/Disk3 Part0  OPEN    NORMAL       82       0

The disk (Disk1) is now removed from the server. However, to remove the SDD information of the disk, we need to reboot the server, but we can wait until a more suitable time.

5.8 Using the SVC CLI from a Windows host


To issue CLI commands, we must install and prepare the SSH client system on the Windows host system. We can install the PuTTY SSH client software on a Windows host by using the PuTTY installation program. This program is in the SSHClient\PuTTY directory of the SAN Volume Controller Console CD-ROM, or PuTTY can be downloaded from the following website: http://www.chiark.greenend.org.uk/~sgtatham/putty/ The following website offers SSH client alternatives for Windows: http://www.openssh.com/windows.html Cygwin software has an option to install an OpenSSH client. Cygwin can be downloaded from the following website: http://www.cygwin.com/
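With PuTTY installed, a single CLI command can also be issued non-interactively by using the plink utility that ships with PuTTY. A minimal sketch, in which the private key file name and the cluster IP address are placeholders for your own environment:

C:\>plink -ssh -i C:\keys\icat.ppk admin@9.43.86.120 svcinfo lsvdisk

This approach is convenient for collecting information from scheduled scripts without opening an interactive PuTTY session.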


For more information about the CLI, see Chapter 9, SAN Volume Controller operations using the command-line interface on page 439.

5.9 Microsoft Volume Shadow Copy


The SVC provides support for the Microsoft Volume Shadow Copy Service. The Microsoft Volume Shadow Copy Service can provide a point-in-time (shadow) copy of a Windows host volume while the volume is mounted and the files are in use. In this section, we discuss how to install the Microsoft Volume Shadow Copy Service.

The following operating system versions are supported:
- Windows Server 2003 Standard Server Edition, 32-bit and 64-bit (x64) versions
- Windows Server 2003 Enterprise Edition, 32-bit and 64-bit (x64) versions
- Windows Server 2003 Standard Server R2 Edition, 32-bit and 64-bit (x64) versions
- Windows Server 2003 Enterprise R2 Edition, 32-bit and 64-bit (x64) versions
- Windows Server 2008 Standard
- Windows Server 2008 Enterprise

The following components are used to provide support for the service:
- SAN Volume Controller
- SAN Volume Controller Master Console
- IBM System Storage hardware provider, known as the IBM System Storage Support for Microsoft Volume Shadow Copy Service
- Microsoft Volume Shadow Copy Service

The IBM System Storage provider is installed on the Windows host.

To provide the point-in-time shadow copy, the components complete the following process:
1. A backup application on the Windows host initiates a snapshot backup.
2. The Volume Shadow Copy Service notifies the IBM System Storage hardware provider that a copy is needed.
3. The SAN Volume Controller prepares the volume for a snapshot.
4. The Volume Shadow Copy Service quiesces the software applications that are writing data on the host and flushes file system buffers to prepare for a copy.
5. The SAN Volume Controller creates the shadow copy using the FlashCopy Service.
6. The Volume Shadow Copy Service notifies the writing applications that I/O operations can resume and notifies the backup application that the backup was successful.

The Volume Shadow Copy Service maintains a free pool of volumes for use as a FlashCopy target and a reserved pool of volumes. These pools are implemented as virtual host systems on the SAN Volume Controller.

5.9.1 Installation overview


The steps for implementing the IBM System Storage Support for Microsoft Volume Shadow Copy Service must be completed in the correct sequence.


Before you begin, you must have experience with, or knowledge of, administering both a Windows operating system and a SAN Volume Controller.

You will need to complete the following tasks:
- Verify that the system requirements are met.
- Install the SAN Volume Controller Console if it is not already installed.
- Install the IBM System Storage hardware provider.
- Verify the installation.
- Create a free pool of volumes and a reserved pool of volumes on the SAN Volume Controller.

5.9.2 System requirements for the IBM System Storage hardware provider
Ensure that your system satisfies the following requirements before you install the IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software on the Windows operating system:
- SAN Volume Controller and Master Console Version 2.1.0 or later with FlashCopy enabled. You must install the SAN Volume Controller Console before you install the IBM System Storage hardware provider.
- IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software Version 3.1 or later.

5.9.3 Installing the IBM System Storage hardware provider


This section includes the steps to install the IBM System Storage hardware provider on a Windows server. You must satisfy all of the system requirements before starting the installation.

During the installation, you will be prompted to enter information about the SAN Volume Controller Master Console, including the location of the truststore file. The truststore file is generated during the installation of the Master Console. You must copy this file to a location that is accessible to the IBM System Storage hardware provider on the Windows server.

When the installation is complete, the installation program might prompt you to restart the system. Complete the following steps to install the IBM System Storage hardware provider on the Windows server:
1. Download the installation program files from the IBM website, and place a copy on the Windows server where you will install the IBM System Storage hardware provider:
http://www-1.ibm.com/support/docview.wss?rs=591&context=STCCCXR&context=STCCCYH&dc=D400&uid=ssg1S4000663&loc=en_US&cs=utf-8&lang=en
2. Log on to the Windows server as an administrator, and navigate to the directory where the installation program is located.
3. Run the installation program by double-clicking IBMVSS.exe.
4. The Welcome window opens, as shown in Figure 5-41 on page 193. Click Next to continue with the installation. You can click Cancel at any time to exit the installation. To move back to previous windows while using the wizard, click Back.


Figure 5-41 IBM System Storage Support for Microsoft Volume Shadow Copy installation

5. The License Agreement window opens (Figure 5-42). Read the license agreement information. Select whether you accept the terms of the license agreement, and click Next. If you do not accept the terms, you cannot continue with the installation.

Figure 5-42 IBM System Storage Support for Microsoft Volume Shadow Copy installation


6. The Choose Destination Location window opens (Figure 5-43). Click Next to accept the default directory where the setup program will install the files, or click Change to select another directory. Click Next.

Figure 5-43 IBM System Storage Support for Microsoft Volume Shadow Copy installation

7. Click Install to begin the installation (Figure 5-44).

Figure 5-44 IBM System Storage Support for Microsoft Volume Shadow Copy installation


8. From the next window, select the required CIM server, or select Enter the CIM Server address manually, and click Next (Figure 5-45).

Figure 5-45 IBM System Storage Support for Microsoft Volume Shadow Copy installation

9. The Enter CIM Server Details window opens. Enter the following information in the fields (Figure 5-46): a. In the CIM Server Address field, type the name of the server where the SAN Volume Controller Console is installed. b. In the CIM User field, type the user name that the IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software will use to gain access to the server where the SAN Volume Controller Console is installed. c. In the CIM Password field, type the password for the user name that the IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software will use to gain access to the SAN Volume Controller Console. d. Click Next.

Figure 5-46 IBM System Storage Support for Microsoft Volume Shadow Copy installation

10.In the next window, click Finish. If necessary, the InstallShield Wizard prompts you to restart the system (Figure 5-47 on page 196).


Figure 5-47 IBM System Storage Support for Microsoft Volume Shadow Copy installation

Additional information: If these settings change after installation, you can use the ibmvcfg.exe tool to update the Microsoft Volume Shadow Copy and Virtual Disk Services software with the new settings. If you do not have the CIM Agent server, port, or user information, contact your CIM Agent administrator.

5.9.4 Verifying the installation


Perform the following steps to verify the installation:
1. Select Start → All Programs → Administrative Tools → Services from the Windows server task bar.
2. Ensure that the service named IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software appears, that Status is set to Started, and that Startup Type is set to Automatic.
3. Open a command prompt window, and issue the following command:
vssadmin list providers
This command verifies that the IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software is listed as a provider; see Example 5-27.
Example 5-27 Microsoft Software Shadow copy provider

C:\Documents and Settings\Administrator>vssadmin list providers
vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
(C) Copyright 2001 Microsoft Corp.

Provider name: 'Microsoft Software Shadow Copy provider 1.0'
   Provider type: System
   Provider Id: {b5946137-7b9f-4925-af80-51abd60b20d5}
   Version: 1.0.0.7

Provider name: 'IBM System Storage Volume Shadow Copy Service Hardware Provider'
   Provider type: Hardware
   Provider Id: {d90dd826-87cf-42ce-a88d-b32caa82025b}
   Version: 3.1.0.1108

If you are able to successfully perform all of these verification tasks, the IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software was successfully installed on the Windows server.

5.9.5 Creating the free and reserved pools of volumes


The IBM System Storage hardware provider maintains a free pool of volumes and a reserved pool of volumes. Because these objects do not exist on the SAN Volume Controller, the free pool of volumes and the reserved pool of volumes are implemented as virtual host systems. You must define these two virtual host systems on the SAN Volume Controller. When a shadow copy is created, the IBM System Storage hardware provider selects a volume in the free pool, assigns it to the reserved pool, and then removes it from the free pool. This process protects the volume from being overwritten by other Volume Shadow Copy Service users. To successfully perform a Volume Shadow Copy Service operation, there must be enough volumes mapped to the free pool. The volumes must be the same size as the source volumes. Use the SAN Volume Controller Console or the SAN Volume Controller command-line interface (CLI) to perform the following steps: 1. Create a host for the free pool of volumes. You can use the default name VSS_FREE or specify another name. Associate the host with the worldwide port name (WWPN) 5000000000000000 (15 zeroes); see Example 5-28.
Example 5-28 Creating an mkhost for the free pool

IBM_2145:ITSO-CLS2:admin>svctask mkhost -name VSS_FREE -hbawwpn 5000000000000000 -force
Host, id [2], successfully created

2. Create a virtual host for the reserved pool of volumes. You can use the default name VSS_RESERVED or specify another name. Associate the host with the WWPN 5000000000000001 (14 zeroes); see Example 5-29.
Example 5-29 Creating an mkhost for the reserved pool

IBM_2145:ITSO-CLS2:admin>svctask mkhost -name VSS_RESERVED -hbawwpn 5000000000000001 -force
Host, id [3], successfully created

3. Map the logical units (volumes) to the free pool of volumes. The volumes cannot be mapped to any other hosts. If you already have volumes created for the free pool of volumes, you must assign the volumes to the free pool.
4. Create host mappings between the volumes selected in step 3 and the VSS_FREE host to add the volumes to the free pool. Alternatively, you can use the ibmvcfg add command to add volumes to the free pool; see Example 5-30 on page 198.


Example 5-30 Host mappings

IBM_2145:ITSO-CLS2:admin>svctask mkvdiskhostmap -host VSS_FREE msvc0001
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO-CLS2:admin>svctask mkvdiskhostmap -host VSS_FREE msvc0002
Virtual Disk to Host map, id [1], successfully created

5. Verify that the volumes have been mapped. If you do not use the default WWPNs 5000000000000000 and 5000000000000001, you must configure the IBM System Storage hardware provider with the WWPNs; see Example 5-31.
Example 5-31 Verify hosts

IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap VSS_FREE
id name     SCSI_id vdisk_id vdisk_name wwpn             vdisk_UID
2  VSS_FREE 0       10       msvc0001   5000000000000000 6005076801A180E90800000000000012
2  VSS_FREE 1       11       msvc0002   5000000000000000 6005076801A180E90800000000000013

5.9.6 Changing the configuration parameters


You can change the parameters that you defined when you installed the IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software. To do so, use the ibmvcfg.exe utility, a command-line utility that is located in the C:\Program Files\IBM\Hardware Provider for VSS-VDS directory; see Example 5-32.
Example 5-32 Using ibmvcfg.exe utility help

C:\Program Files\IBM\Hardware Provider for VSS-VDS>ibmvcfg.exe
IBM System Storage VSS Provider Configuration Tool Commands
----------------------------------------
ibmvcfg.exe <command> <command arguments>
Commands:
        /h | /help | -? | /?
        showcfg
        listvols <all|free|unassigned>
        add <volume serial number list> (separated by spaces)
        rem <volume serial number list> (separated by spaces)
Configuration:
        set user <CIMOM user name>
        set password <CIMOM password>
        set trace [0-7]
        set trustpassword <trustpassword>
        set truststore <truststore location>
        set usingSSL <YES | NO>
        set vssFreeInitiator <WWPN>
        set vssReservedInitiator <WWPN>
        set FlashCopyVer <1 | 2> (only applies to ESS)
        set cimomPort <PORTNUM>
        set cimomHost <Hostname>
        set namespace <Namespace>
        set targetSVC <svc_cluster_ip>
        set backgroundCopy <0-100>

Table 5-4 lists the available commands.
Table 5-4   Available ibmvcfg.util commands

ibmvcfg showcfg
   Description: This lists the current settings.
   Example: ibmvcfg showcfg

ibmvcfg set username <username>
   Description: This sets the user name to access the SAN Volume Controller Console.
   Example: ibmvcfg set username Dan

ibmvcfg set password <password>
   Description: This sets the password of the user name that will access the SAN Volume Controller Console.
   Example: ibmvcfg set password mypassword

ibmvcfg set targetSVC <ipaddress>
   Description: This specifies the IP address of the SAN Volume Controller on which the volumes are located when volumes are moved to and from the free pool with the ibmvcfg add and ibmvcfg rem commands. The IP address is overridden if you use the -s flag with the ibmvcfg add and ibmvcfg rem commands.
   Example: set targetSVC 9.43.86.120

set backgroundCopy
   Description: This sets the background copy rate for FlashCopy.
   Example: set backgroundCopy 80

ibmvcfg set usingSSL
   Description: This specifies whether to use Secure Sockets Layer protocol to connect to the SAN Volume Controller Console.
   Example: ibmvcfg set usingSSL yes

ibmvcfg set cimomPort <portnum>
   Description: This specifies the SAN Volume Controller Console port number. The default value is 5999.
   Example: ibmvcfg set cimomPort 5999

ibmvcfg set cimomHost <server name>
   Description: This sets the name of the server where the SAN Volume Controller Console is installed.
   Example: ibmvcfg set cimomHost cimomserver

ibmvcfg set namespace <namespace>
   Description: This specifies the namespace value that the Master Console is using. The default value is \root\ibm.
   Example: ibmvcfg set namespace \root\ibm

ibmvcfg set vssFreeInitiator <WWPN>
   Description: This specifies the WWPN of the host. The default value is 5000000000000000. Modify this value only if there is a host already in your environment with a WWPN of 5000000000000000.
   Example: ibmvcfg set vssFreeInitiator 5000000000000000

ibmvcfg set vssReservedInitiator <WWPN>
   Description: This specifies the WWPN of the host. The default value is 5000000000000001. Modify this value only if there is a host already in your environment with a WWPN of 5000000000000001.
   Example: ibmvcfg set vssReservedInitiator 5000000000000001

ibmvcfg listvols
   Description: This lists all volumes, including information about the size, location, and host mappings.
   Example: ibmvcfg listvols

ibmvcfg listvols all
   Description: This lists all volumes, including information about the size, location, and host mappings.
   Example: ibmvcfg listvols all

ibmvcfg listvols free
   Description: This lists the volumes that are currently in the free pool.
   Example: ibmvcfg listvols free

ibmvcfg listvols unassigned
   Description: This lists the volumes that are currently not mapped to any hosts.
   Example: ibmvcfg listvols unassigned

ibmvcfg add -s ipaddress
   Description: This adds one or more volumes to the free pool of volumes. Use the -s parameter to specify the IP address of the SAN Volume Controller where the volumes are located. The -s parameter overrides the default IP address that is set with the ibmvcfg set targetSVC command.
   Example: ibmvcfg add vdisk12
   Example: ibmvcfg add 600507680187000350000000000000BA -s 66.150.210.141

ibmvcfg rem -s ipaddress
   Description: This removes one or more volumes from the free pool of volumes. Use the -s parameter to specify the IP address of the SAN Volume Controller where the volumes are located. The -s parameter overrides the default IP address that is set with the ibmvcfg set targetSVC command.
   Example: ibmvcfg rem vdisk12
   Example: ibmvcfg rem 600507680187000350000000000000BA -s 66.150.210.141
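As an illustration only, the following sequence shows how these commands are typically combined when the provider is pointed at a SAN Volume Controller Console and a volume is added to the free pool. The host name, credentials, and volume name are placeholders for your own environment:

ibmvcfg set cimomHost cimomserver
ibmvcfg set username admin
ibmvcfg set password mypassword
ibmvcfg listvols free
ibmvcfg add vdisk12
ibmvcfg showcfg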

5.10 Specific Linux (on Intel) information


The following sections describe specific information pertaining to the connection of Linux on Intel-based hosts to the SVC environment.

5.10.1 Configuring the Linux host


Follow these steps to configure the Linux host:
1. Use the latest firmware levels on your host system.
2. Install the HBA or HBAs on the Linux server, as described in 5.5.4, Host adapter installation and configuration on page 160.


3. Install the supported HBA driver/firmware and upgrade the kernel if required, as described in 5.10.2, Configuration information on page 201.
4. Connect the Linux server FC host adapters to the switches.
5. Configure the switches (zoning) if needed.
6. Install SDD for Linux, as described in 5.10.5, Multipathing in Linux on page 202.
7. Configure the host, volumes, and host mapping in the SAN Volume Controller.
8. Rescan for LUNs on the Linux server to discover the volumes that were created on the SVC (a rescan sketch follows this list).
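One common way to trigger the rescan in step 8 on a 2.6-kernel distribution is to write to the sysfs scan file of each FC host adapter. This is a sketch only; the host numbers (host0, host1) depend on your system:

# rescan each HBA SCSI host for newly mapped SVC volumes
echo "- - -" > /sys/class/scsi_host/host0/scan
echo "- - -" > /sys/class/scsi_host/host1/scan
cat /proc/scsi/scsi    # verify that the new IBM 2145 devices are visible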

5.10.2 Configuration information


The SAN Volume Controller supports hosts that run the following Linux distributions:
- Red Hat Enterprise Linux
- SUSE Linux Enterprise Server

For the latest information, always refer to the following website:
http://www.ibm.com/storage/support/2145

This website provides the hardware list for supported HBAs and device driver levels for Linux. Check the supported firmware and driver level for your HBA, and follow the manufacturer's instructions to upgrade the firmware and driver levels for each type of HBA.

5.10.3 Disabling automatic Linux system updates


Many Linux distributions give you the ability to configure your systems for automatic system updates. Red Hat provides this ability in the form of a program called up2date. Novell SUSE provides the YaST Online Update utility. These features periodically query for updates that are available for each host and they can be configured to automatically install any new updates that they find. Often, the automatic update process also upgrades the system to the latest kernel level. Hosts running SDD must turn off the automatic update of kernel levels, because certain drivers that are supplied by IBM, such as SDD, are dependent on a specific kernel and will cease to function on a new kernel. Similarly, HBA drivers need to be compiled against specific kernels to function optimally. By allowing automatic updates of the kernel, you risk affecting your host systems unexpectedly.

5.10.4 Setting queue depth with QLogic HBAs


The queue depth is the number of I/O operations that can be run in parallel on a device. Configure your host running the Linux operating system by using the formula that is specified in 5.15, Calculating the queue depth on page 225.

Perform the following steps to set the maximum queue depth:
1. Add the following line to the /etc/modules.conf file:
   For the 2.4 kernel (SUSE Linux Enterprise Server 8 or Red Hat Enterprise Linux):
   options qla2300 ql2xfailover=0 ql2xmaxqdepth=new_queue_depth
   For the 2.6 kernel (SUSE Linux Enterprise Server 9, or later, or Red Hat Enterprise Linux 4, or later):
   options qla2xxx ql2xfailover=0 ql2xmaxqdepth=new_queue_depth


2. Rebuild the RAM disk that is associated with the kernel being used, by using one of the following commands:
   - If you are running on a SUSE Linux Enterprise Server operating system, run the mk_initrd command.
   - If you are running on a Red Hat Enterprise Linux operating system, run the mkinitrd command, and then restart.
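As a combined illustration of steps 1 and 2, the following sketch shows the sequence for a Red Hat Enterprise Linux host with a 2.6 kernel and QLogic HBAs. The queue depth value of 32 is an example only and must be derived from the formula in 5.15, Calculating the queue depth on page 225:

# append the queue depth setting (example value) to the configuration file named in step 1
echo "options qla2xxx ql2xfailover=0 ql2xmaxqdepth=32" >> /etc/modules.conf
# rebuild the RAM disk for the running kernel, then restart
mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)
reboot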

5.10.5 Multipathing in Linux


Red Hat Enterprise Linux 5 and later and SUSE Linux Enterprise Server 10 and later provide their own multipath support by the operating system. On older systems, it is necessary to install the IBM SDD multipath driver.

Installing SDD
This section describes how to install SDD for older distributions. Before performing these steps, always check for the currently supported levels, as described in 5.10.2, Configuration information on page 201. The cat /proc/scsi/scsi command displayed in Example 5-33 shows the devices that the SCSI driver has probed. In our configuration, we have two HBAs installed in our server, and we configured the zoning to access our volume from four paths.
Example 5-33 cat /proc/scsi/scsi command example

[root@diomede sdd]# cat /proc/scsi/scsi
Attached devices:
Host: scsi4 Channel: 00 Id: 00 Lun: 00
  Vendor: IBM      Model: 2145             Rev: 0000
  Type:   Unknown                          ANSI SCSI revision: 04
Host: scsi5 Channel: 00 Id: 00 Lun: 00
  Vendor: IBM      Model: 2145             Rev: 0000
  Type:   Unknown                          ANSI SCSI revision: 04
[root@diomede sdd]#

The rpm -ivh IBMsdd-1.6.3.0-5.i686.rhel4.rpm command installs the package, as shown in Example 5-34.
Example 5-34 rpm command example

[root@Palau sdd]# rpm -ivh IBMsdd-1.6.3.0-5.i686.rhel4.rpm
Preparing...                ########################################### [100%]
   1:IBMsdd                 ########################################### [100%]
Added following line to /etc/inittab:
srv:345:respawn:/opt/IBMsdd/bin/sddsrv > /dev/null 2>&1
[root@Palau sdd]#

To manually load and configure SDD on Linux, use the service sdd start command (SUSE Linux users can use the sdd start command). If you are not running a supported kernel, you will get an error message. If your kernel is supported, you see an OK success message, as shown in Example 5-35 on page 203.


Example 5-35 Supported kernel for SDD

[root@Palau sdd]# sdd start
Starting IBMsdd driver load:                               [  OK  ]
Issuing killall sddsrv to trigger respawn...
Starting IBMsdd configuration:                             [  OK  ]

Issue the cfgvpath query command to view the name and serial number of the volume that is configured in the SAN Volume Controller, as shown in Example 5-36.
Example 5-36 cfgvpath query example

[root@Palau ~]# cfgvpath query
RTPG command = a3 0a 00 00 00 00 00 00 08 20 00 00 total datalen=52 datalen_str=0x00 00 00 30
RTPG succeeded: sd_name=/dev/sda df_ctlr=0
/dev/sda ( 8, 0) host=0 ch=0 id=0 lun=0 vid=IBM pid=2145
serial=60050768018201bee000000000000035 lun_id=60050768018201bee000000000000035 ctlr_flag=1 ctlr_nbr=1 df_ctlr=0
RTPG command = a3 0a 00 00 00 00 00 00 08 20 00 00 total datalen=52 datalen_str=0x00 00 00 30
RTPG succeeded: sd_name=/dev/sdb df_ctlr=0
/dev/sdb ( 8, 16) host=0 ch=0 id=1 lun=0 vid=IBM pid=2145
serial=60050768018201bee000000000000035 lun_id=60050768018201bee000000000000035 ctlr_flag=1 ctlr_nbr=0 df_ctlr=0
RTPG command = a3 0a 00 00 00 00 00 00 08 20 00 00 total datalen=52 datalen_str=0x00 00 00 30
RTPG succeeded: sd_name=/dev/sdc df_ctlr=0
/dev/sdc ( 8, 32) host=1 ch=0 id=0 lun=0 vid=IBM pid=2145
serial=60050768018201bee000000000000035 lun_id=60050768018201bee000000000000035 ctlr_flag=1 ctlr_nbr=0 df_ctlr=0
RTPG command = a3 0a 00 00 00 00 00 00 08 20 00 00 total datalen=52 datalen_str=0x00 00 00 30
RTPG succeeded: sd_name=/dev/sdd df_ctlr=0
/dev/sdd ( 8, 48) host=1 ch=0 id=1 lun=0 vid=IBM pid=2145
serial=60050768018201bee000000000000035 lun_id=60050768018201bee000000000000035 ctlr_flag=1 ctlr_nbr=1 df_ctlr=0
[root@Palau ~]#

The cfgvpath command configures the SDD vpath devices, as shown in Example 5-37.
Example 5-37 cfgvpath command example

[root@Palau ~]# cfgvpath
c--------- 1 root root 253, 0 Jun 5 09:04 /dev/IBMsdd
WARNING: vpatha path sda has already been configured.
WARNING: vpatha path sdb has already been configured.
WARNING: vpatha path sdc has already been configured.
WARNING: vpatha path sdd has already been configured.
Writing out new configuration to file /etc/vpath.conf
[root@Palau ~]#

The configuration information is saved by default in the /etc/vpath.conf file. You can save the configuration information to a specified file name by entering the following command: cfgvpath -f file_name.cfg

Issue the chkconfig command to enable SDD to run at system startup:
chkconfig sdd on
To verify the setting, enter the following command:
chkconfig --list sdd
This verification is shown in Example 5-38.
Example 5-38 sdd run level example

[root@Palau sdd]# chkconfig --list sdd
sdd    0:off  1:off  2:on  3:on  4:on  5:on  6:off
[root@Palau sdd]#

If necessary, you can disable the startup option by entering this command:
chkconfig sdd off

Run the datapath query commands to display the online adapters and the paths to the adapters. Notice that the preferred paths are used from one of the nodes, that is, path 0 and path 2. Path 1 and path 3 connect to the other node and are used as alternate or backup paths for high availability, as shown in Example 5-39.
Example 5-39 datapath query command example

[root@Palau ~]# datapath query adapter
Active Adapters :2
Adpt#    Name            State    Mode     Select  Errors  Paths  Active
    0    Host0Channel0   NORMAL   ACTIVE        1       0      2       0
    1    Host1Channel0   NORMAL   ACTIVE        0       0      2       0
[root@Palau ~]#
[root@Palau ~]# datapath query device
Total Devices : 1

DEV#: 0  DEVICE NAME: vpatha  TYPE: 2145  POLICY: Optimized Sequential
SERIAL: 60050768018201bee000000000000035
============================================================================
Path#    Adapter/Hard Disk    State   Mode     Select  Errors
    0    Host0Channel0/sda    CLOSE   NORMAL        1       0
    1    Host0Channel0/sdb    CLOSE   NORMAL        0       0
    2    Host1Channel0/sdc    CLOSE   NORMAL        0       0
    3    Host1Channel0/sdd    CLOSE   NORMAL        0       0
[root@Palau ~]#

SDD has three path-selection policy algorithms:
- Failover only (fo): All I/O operations for the device are sent to the same (preferred) path unless the path fails because of I/O errors. Then, an alternate path is chosen for subsequent I/O operations.
- Load balancing (lb): The path to use for an I/O operation is chosen by estimating the load on the adapter to which each path is attached. The load is a function of the number of I/O operations currently in process. If multiple paths have the same load, a path is chosen at random from those paths. Load-balancing mode also incorporates failover protection. The load-balancing policy is also known as the optimized policy.
- Round-robin (rr): The path to use for each I/O operation is chosen at random from paths that were not used for the last I/O operation. If a device has only two paths, SDD alternates between the two paths.

You can dynamically change the SDD path-selection policy algorithm by using the datapath set device policy SDD command. You can see the SDD path-selection policy algorithm that is active on the device when you use the datapath query device command. Example 5-39 on page 204 shows that the active policy is optimized, which means that the SDD path-selection policy algorithm that is active is Optimized Sequential.

Example 5-40 shows the volume information from the SVC command-line interface.
Example 5-40 svcinfo redhat1

IBM_2145:ITSOSVC42A:admin>svcinfo lshost linux2
id 6
name linux2
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B89C1CD
node_logged_in_count 2
state active
WWPN 210000E08B054CAA
node_logged_in_count 2
state active
IBM_2145:ITSOSVC42A:admin>
IBM_2145:ITSOSVC42A:admin>svcinfo lshostvdiskmap linux2
id name   SCSI_id vdisk_id vdisk_name wwpn             vdisk_UID
6  linux2 0       33       linux_vd1  210000E08B89C1CD 60050768018201BEE000000000000035
IBM_2145:ITSOSVC42A:admin>
IBM_2145:ITSOSVC42A:admin>svcinfo lsvdisk linux_vd1
id 33
name linux_vd1
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG0
capacity 1.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018201BEE000000000000035
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
IBM_2145:ITSOSVC42A:admin>
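To change the path-selection policy for a device, the datapath set device command that was mentioned previously can be used. A minimal sketch, in which device 0 is the vpath shown in Example 5-39 and rr selects the round-robin policy; verify the exact syntax against the SDD documentation for your level:

datapath set device 0 policy rr
datapath query device    # confirm that the POLICY field has changed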

5.10.6 Creating and preparing the SDD volumes for use


Follow these steps to create and prepare the volumes: 1. Create a partition on the vpath device, as shown in Example 5-41.
Example 5-41 fdisk example

[root@Palau ~]# fdisk /dev/vpatha
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): m
Command action
   a   toggle a bootable flag
   b   edit bsd disklabel
   c   toggle the dos compatibility flag
   d   delete a partition
   l   list known partition types
   m   print this menu
   n   add a new partition
   o   create a new empty DOS partition table
   p   print the partition table
   q   quit without saving changes
   s   create a new empty Sun disklabel
   t   change a partition's system id
   u   change display/entry units
   v   verify the partition table
   w   write table to disk and exit
   x   extra functionality (experts only)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
e
Partition number (1-4): 1
First cylinder (1-1011, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1011, default 1011):
Using default value 1011

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@Palau ~]#

2. Create a file system on the vpath, as shown in Example 5-42.
Example 5-42 mkfs command example

[root@Palau ~]# mkfs -t ext3 /dev/vpatha
mke2fs 1.35 (28-Feb-2004)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
131072 inodes, 262144 blocks
13107 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 27 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@Palau ~]#

3. Create the mount point, and mount the vpath drive, as shown in Example 5-43.
Example 5-43 Mount point

[root@Palau ~]# mkdir /itsosvc
[root@Palau ~]# mount -t ext3 /dev/vpatha /itsosvc

4. The drive is now ready for use. The df command shows us the mounted disk /itsosvc, and the datapath query command shows that four paths are available; see Example 5-44.
Example 5-44 Display mounted drives

[root@Palau ~]# df
Filesystem                       1K-blocks     Used  Available  Use%  Mounted on
/dev/mapper/VolGroup00-LogVol00   74699952  2564388   68341032    4%  /
/dev/hda1                           101086    13472      82395   15%  /boot
none                               1033136        0    1033136    0%  /dev/shm
/dev/vpatha                        1032088    34092     945568    4%  /itsosvc
[root@Palau ~]#

[root@Palau ~]# datapath query device


Total Devices : 1

DEV#: 0  DEVICE NAME: vpatha  TYPE: 2145  POLICY: Optimized Sequential
SERIAL: 60050768018201bee000000000000035
============================================================================
Path#    Adapter/Hard Disk    State   Mode     Select  Errors
    0    Host0Channel0/sda    OPEN    NORMAL        1       0
    1    Host0Channel0/sdb    OPEN    NORMAL     6296       0
    2    Host1Channel0/sdc    OPEN    NORMAL     6178       0
    3    Host1Channel0/sdd    OPEN    NORMAL        0       0
[root@Palau ~]#

5.10.7 Using the operating system MPIO


Red Hat Enterprise Linux 5 and later and SUSE Linux Enterprise Server 10 and later provide their own multipath support for the operating system. Therefore, you do not have to install an additional device driver. Always check whether your operating system includes one of the supported multipath drivers. You will find this information in the links that are provided in 5.10.2, Configuration information on page 201. In SLES10, the multipath drivers and tools are installed by default. For RHEL5, though, the user has to explicitly choose the multipath components during the operating system installation to install them. Each of the attached SAN Volume Controller LUNs has a special device file in the Linux /dev directory. Hosts that use 2.6 kernel Linux operating systems can have as many FC disks as the SVC allows. The following website provides the most current information about the maximum configuration for the SAN Volume Controller: http://www.ibm.com/storage/support/2145

5.10.8 Creating and preparing MPIO volumes for use


First, you have to start the MPIO daemon on your system. Run the following commands on your host system:
1. Enable MPIO for SLES10 by running the following commands:
   /etc/init.d/boot.multipath {start|stop}
   /etc/init.d/multipathd {start|stop|status|try-restart|restart|force-reload|reload|probe}

Tip: Run insserv boot.multipath multipathd to automatically load the multipath driver and multipathd daemon during startup.

2. Enable MPIO for RHEL5 by running the following commands:
   modprobe dm-multipath
   modprobe dm-round-robin
   service multipathd start
   chkconfig multipathd on


Example 5-45 shows the commands issued on a Red Hat Enterprise Linux 5.1 operating system.
Example 5-45 Starting MPIO daemon on Red Hat Enterprise Linux

[root@palau ~]# modprobe dm-round-robin
[root@palau ~]# multipathd start
[root@palau ~]# chkconfig multipathd on
[root@palau ~]#

3. Open the multipath.conf file and follow the instructions to enable multipathing for IBM devices. The file is located in the /etc directory. Example 5-46 shows editing using vi.
Example 5-46 Editing the multipath.conf file

[root@palau etc]# vi multipath.conf

4. Add the following entry to the multipath.conf file:
   device {
           vendor "IBM"
           product "2145"
           path_grouping_policy group_by_prio
           prio_callout "/sbin/mpath_prio_alua /dev/%n"
   }
5. Restart the multipath daemon; see Example 5-47.
Example 5-47 Stopping and starting the multipath daemon

[root@palau ~]# service multipathd stop
Stopping multipathd daemon:                                [  OK  ]
[root@palau ~]# service multipathd start
Starting multipathd daemon:                                [  OK  ]

6. Type the multipath -dl command to display the MPIO configuration. You will see two groups with two paths each. All paths must have the state [active][ready], and one group will be [enabled].


7. Use the fdisk command to create a partition on the SVC disk, as shown in Example 5-48.
Example 5-48 fdisk

[root@palau scsi]# fdisk -l

Disk /dev/hda: 80.0 GB, 80032038912 bytes
255 heads, 63 sectors/track, 9730 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1          13      104391   83  Linux
/dev/hda2              14        9730    78051802+  8e  Linux LVM

Disk /dev/sda: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes
Disk /dev/sda doesn't contain a valid partition table

Disk /dev/sdb: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes
Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/sdc: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes
Disk /dev/sdc doesn't contain a valid partition table

Disk /dev/sdd: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes
Disk /dev/sdd doesn't contain a valid partition table

Disk /dev/sde: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes
Disk /dev/sde doesn't contain a valid partition table

Disk /dev/sdf: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes
Disk /dev/sdf doesn't contain a valid partition table

Disk /dev/sdg: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes
Disk /dev/sdg doesn't contain a valid partition table

Disk /dev/sdh: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes
Disk /dev/sdh doesn't contain a valid partition table

Disk /dev/dm-2: 4244 MB, 4244635648 bytes
255 heads, 63 sectors/track, 516 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/dm-2 doesn't contain a valid partition table

Disk /dev/dm-3: 4244 MB, 4244635648 bytes
255 heads, 63 sectors/track, 516 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/dm-3 doesn't contain a valid partition table

[root@palau scsi]# fdisk /dev/dm-2
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
e
Partition number (1-4): 1
First cylinder (1-516, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-516, default 516):
Using default value 516

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 22: Invalid argument.
The kernel still uses the old table.
The new table will be used at the next reboot.
[root@palau scsi]# shutdown -r now


8. Create a file system using the mkfs command (Example 5-49).


Example 5-49 mkfs command

[root@palau ~]# mkfs -t ext3 /dev/dm-2
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
518144 inodes, 1036288 blocks
51814 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1061158912
32 block groups
32768 blocks per group, 32768 fragments per group
16192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@palau ~]#

9. Create a mount point, and mount the drive, as shown in Example 5-50.
Example 5-50 Mount point

[root@palau ~]# mkdir /svcdisk_0
[root@palau ~]# cd /svcdisk_0/
[root@palau svcdisk_0]# mount -t ext3 /dev/dm-2 /svcdisk_0
[root@palau svcdisk_0]# df
Filesystem                      1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00  73608360   1970000  67838912   3% /
/dev/hda1                          101086     15082     80785  16% /boot
tmpfs                              967984         0    967984   0% /dev/shm
/dev/dm-2                         4080064     73696   3799112   2% /svcdisk_0

5.11 VMware configuration information


This section explains the requirements and additional information for attaching the SAN Volume Controller to a variety of guest host operating systems running on the VMware operating system.


5.11.1 Configuring VMware hosts


To configure the VMware hosts, follow these steps:
1. Install the HBAs in your host system, as described in 5.11.3, HBAs for hosts running VMware on page 213.
2. Connect the server FC host adapters to the switches.
3. Configure the switches (zoning), as described in 5.11.4, VMware storage and zoning guidance on page 214.
4. Install the VMware operating system (if not already done) and check the HBA timeouts, as described in 5.11.5, Setting the HBA timeout for failover in VMware on page 214.
5. Configure the host, volumes, and host mapping in the SVC, as described in 5.11.7, Attaching VMware to volumes on page 215.

5.11.2 Operating system versions and maintenance levels


For the latest information about VMware support, refer to this website:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html
At the time of writing, the following versions are supported:
- ESX 4.x
- ESX V3.5

5.11.3 HBAs for hosts running VMware


Ensure that your hosts that are running on VMware operating systems use the correct HBAs and firmware levels. Install the host adapters in your system, referring to the manufacturer's instructions for installation and configuration of the HBAs. For older ESX versions, you will find the supported HBAs at the IBM website:
http://www.ibm.com/storage/support/2145
In most cases, the supported HBA device drivers are already included in the ESX server build, but for some newer storage adapters you might need to load additional ESX drivers. Check the VMware HCL to determine whether you need to load a custom driver for your adapter:
http://www.vmware.com/resources/compatibility/search.php
After installing, load the default configuration of your FC HBAs. Use the same model of HBA with the same firmware in one server. Having Emulex and QLogic HBAs that access the same target in one server is not supported.

SAN boot support


SAN boot of any guest operating system is supported under VMware. In fact, the nature of VMware means that SAN boot is effectively a requirement for guest operating systems: the guest operating system must reside on a SAN disk. If you are unfamiliar with VMware environments and the advantages of storing virtual machines and application data on a SAN, it is useful to get an overview of VMware products before continuing. VMware documentation is available at this website:
http://www.vmware.com/support/pubs/

5.11.4 VMware storage and zoning guidance


The VMware ESX server can use a Virtual Machine File System (VMFS). VMFS is a file system that is optimized to run multiple virtual machines as one workload to minimize disk I/O. It is also able to handle concurrent access from multiple physical machines, because it enforces the appropriate access controls. Therefore, multiple ESX hosts can share the same set of LUNs.

Theoretically, you can run all of your virtual machines on one LUN. However, for performance reasons in more complex scenarios, it can be better to load balance virtual machines over separate HBAs, storage systems, or arrays. If you run an ESX host with several virtual machines, for example, it makes sense to use one slow array for guest operating systems without high I/O (such as Print and Active Directory Services) and another fast array for database guest operating systems.

Using fewer volumes has the following advantages:
- More flexibility to create virtual machines without creating new space on the SVC
- More possibilities for taking VMware snapshots
- Fewer volumes to manage

Using more and smaller volumes has the following advantages:
- Separate I/O characteristics of the guest operating systems
- More flexibility (the multipathing policy and disk shares are set per volume)
- Microsoft Cluster Service requires its own volume for each cluster disk resource

More documentation about designing your VMware infrastructure is provided at these websites:
http://www.vmware.com/vmtn/resources/
http://www.vmware.com/resources/techresources/1059

Guidelines:
- ESX Server hosts that use shared storage for virtual machine failover or load balancing must be in the same zone.
- You can have only one VMFS volume per volume.

5.11.5 Setting the HBA timeout for failover in VMware


The timeout for failover for ESX hosts must be set to 30 seconds:
- For QLogic HBAs, the timeout depends on the PortDownRetryCount parameter. The timeout value is 2 x PortDownRetryCount + 5 seconds. Set the qlport_down_retry parameter to 14.
- For Emulex HBAs, the lpfc_linkdown_tmo and the lpfc_nodev_tmo parameters must be set to 30 seconds.
To make these changes on your system, perform the following steps; see Example 5-51 on page 215:
1. Back up the /etc/vmware/esx.conf file.
2. Open the /etc/vmware/esx.conf file for editing.
3. The file includes a section for every installed SCSI device.
4. Locate your SCSI adapters, and edit the previously described parameters.
5. Repeat this process for every installed HBA.


Example 5-51 Setting the HBA timeout

[root@nile svc]# cp /etc/vmware/esx.conf /etc/vmware/esx.confbackup
[root@nile svc]# vi /etc/vmware/esx.conf

5.11.6 Multipathing in ESX


The ESX Server performs multipathing. You do not need to install a multipathing driver, such as SDD, either on the ESX server or on the guest operating systems.

5.11.7 Attaching VMware to volumes


First, we make sure that the VMware host is logged into the SAN Volume Controller. In our examples, we use the VMware ESX server V3.5 and the host name Nile. Enter the following command to check the status of the host:
svcinfo lshost <hostname>
Example 5-52 shows that the host Nile is logged into the SVC with two HBAs.
Example 5-52 lshost Nile

IBM_2145:ITSO-CLS1:admin>svcinfo lshost Nile
id 1
name Nile
port_count 2
type generic
mask 1111
iogrp_count 2
WWPN 210000E08B892BCD
node_logged_in_count 4
state active
WWPN 210000E08B89B8C0
node_logged_in_count 4
state active

Then, we have to set the SCSI Controller Type in VMware. By default, ESX Server disables SCSI bus sharing and does not allow multiple virtual machines to access the same VMFS file at the same time; see Figure 5-48 on page 216. But in many configurations, such as those configurations for high availability, the virtual machines have to share the same VMFS file to share a disk.

To set the SCSI Controller Type in VMware:
1. Log on to your Infrastructure Client, shut down the virtual machine, right-click it, and select Edit settings.
2. Highlight the SCSI Controller, and select one of the three available settings, depending on your configuration:
   - None: Disks cannot be shared by other virtual machines.
   - Virtual: Disks can be shared by virtual machines on the same server.
   - Physical: Disks can be shared by virtual machines on any server.
   Click OK to apply the setting.


Figure 5-48 Changing SCSI bus settings

3. Create your volumes on the SVC, then map them to the ESX hosts.

Tips:
- If you want to use features, such as VMotion, the volumes that own the VMFS file have to be visible to every ESX host that will be able to host the virtual machine. In the SVC, select Allow the virtual disks to be mapped even if they are already mapped to a host.
- The volume must have the same SCSI ID on each ESX host.

For this configuration, we created one volume and mapped it to our ESX host, as shown in Example 5-53.
Example 5-53 Mapped volume to ESX host Nile

IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap Nile
id  name  SCSI_id  vdisk_id  vdisk_name  wwpn              vdisk_UID
1   Nile  0        12        VMW_pool    210000E08B892BCD  60050768018301BF2800000000000010

ESX does not automatically scan for SAN changes (except when rebooting the entire ESX server). If you have made any changes to your SVC or SAN configuration, perform the following steps:
1. Open your VMware Infrastructure Client.
2. Select the host.
3. In the Hardware window, choose Storage Adapters.
4. Click Rescan.


To configure a storage device to use it in VMware, perform the following steps:
1. Open your VMware Infrastructure Client.
2. Select the host for which you want to see the assigned volumes, and click the Configuration tab.
3. In the Hardware window on the left side, click Storage.
4. To create a new storage pool, select click here to create a datastore or Add storage if the yellow field does not appear (Figure 5-49).

Figure 5-49 VMWare add datastore

5. The Add storage wizard appears.
6. Select Create Disk/LUN, and click Next.
7. Select the SVC volume that you want to use for the datastore, and click Next.
8. Review the disk layout, and click Next.
9. Enter a datastore name, and click Next.
10. Select a block size, enter the size of the new partition, and then click Next.
11. Review your selections, and click Finish.

Now, the created VMFS datastore appears in the Storage window (Figure 5-50). You will see the details for the highlighted datastore. Check whether all of the paths are available and that the Path Selection is set to Round Robin.

Figure 5-50 VMWare storage configuration


If not all of the paths are available, check your SAN and storage configuration. After fixing the problem, select Refresh to perform a path rescan. The view will be updated to the new configuration.

Best practice is to use the Round Robin multipath policy for the SVC. If you have to edit this policy, perform the following steps:
1. Highlight the datastore.
2. Click Properties.
3. Click Managed Paths.
4. Click Change (see Figure 5-50 on page 217).
5. Select Round Robin.
6. Click OK.
7. Click Close.
Now, your VMFS datastore has been created, and you can start using it for your guest operating systems. Round Robin distributes the I/O load across all available paths. If you want to use a fixed path, the Fixed policy setting is supported as well.

5.11.8 Volume naming in VMware


In the Virtual Infrastructure Client, a volume is displayed as a sequence of three or four numbers, separated by colons (Figure 5-51):
<SCSI HBA>:<SCSI target>:<SCSI volume>:<disk partition>
where:
- SCSI HBA: The number of the SCSI HBA (can change).
- SCSI target: The number of the SCSI target (can change).
- SCSI volume: The number of the volume (never changes).
- disk partition: The number of the disk partition (never changes).
If the last number is not displayed, the name refers to the entire volume.

Figure 5-51 Volume naming in VMware


5.11.9 Setting the Microsoft guest operating system timeout


For a Microsoft Windows 2000 Server or Windows Server 2003 installed as a VMware guest operating system, the disk timeout value must be set to 60 seconds. We provide the instructions to perform this task in 5.5.5, Changing the disk timeout on Microsoft Windows Server on page 162.
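If you want to verify or set this value directly inside the guest, the following sketch assumes the standard Windows disk timeout registry value (TimeOutValue); the procedure in 5.5.5 remains the authoritative reference:

reg add HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue /t REG_DWORD /d 60 /f

Reboot the guest operating system for the change to take effect.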

5.11.10 Extending a VMFS volume


It is possible to extend VMFS volumes while virtual machines are running. First, you have to extend the volume on the SVC, and then you can extend the VMFS volume. Before performing these steps, back up your data. Perform the following steps to extend a volume:
1. Expand the volume with the svctask expandvdisksize -size 1 -unit gb <VDiskname> command; see Example 5-54.
Example 5-54 Expanding a volume in SVC

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk VMW_pool
id 12
name VMW_pool
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 60.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000010
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 60.00GB
real_capacity 60.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize

IBM_2145:ITSO-CLS1:admin>svctask expandvdisksize -size 5 -unit gb VMW_pool

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk VMW_pool
id 12
name VMW_pool
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 65.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000010
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 65.00GB
real_capacity 65.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
IBM_2145:ITSO-CLS1:admin>


2. Open the Virtual Infrastructure Client.
3. Select the host.
4. Select Configuration.
5. Select Storage Adapters.
6. Click Rescan.
7. Make sure that the Scan for new Storage Devices check box is marked, and click OK. After the scan has completed, the new capacity is displayed in the Details section.
8. Click Storage.
9. Right-click the VMFS volume, and click Properties.
10. Click Add Extend.
11. Select the new free space, and click Next.
12. Click Next.
13. Click Finish.
The VMFS volume has now been extended, and the new space is ready for use.

5.11.11 Removing a datastore from an ESX host


Before you remove a datastore from an ESX host, you have to migrate or delete all of the virtual machines that reside on this datastore. To remove it, perform the following steps:
1. Back up the data.
2. Open the Virtual Infrastructure Client.
3. Select the host.
4. Select Configuration.
5. Select Storage.
6. Highlight the datastore that you want to remove.
7. Click Remove.
8. Read the warning, and if you are sure that you want to remove the datastore and delete all of the data on it, click Yes.
9. Remove the host mapping on the SVC, or delete the volume (as shown in Example 5-55).
10. In the VI Client, select Storage Adapters.
11. Click Rescan.
12. Make sure that the Scan for new Storage Devices check box is marked, and click OK.
13. After the scan completes, the disk disappears from the view.
Your datastore has been successfully removed from the system.
Example 5-55 Remove host mapping: Delete volume

IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Nile VMW_pool
IBM_2145:ITSO-CLS1:admin>svctask rmvdisk VMW_pool


5.12 Sun Solaris support information


For the latest information about supported software and driver levels, always refer to this website: http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html

5.12.1 Operating system versions and maintenance levels


At the time of writing, Sun Solaris 8, Sun Solaris 9, and Sun Solaris 10 are supported in 64-bit only.

5.12.2 SDD dynamic pathing


Solaris supports dynamic pathing when you either add more paths to an existing volume, or present a new volume to a host. No user intervention is required. SDD is aware of the preferred paths that SVC sets per volume. SDD will use a round-robin algorithm when failing over paths. That is, it will try the next known preferred path. If this method fails and all preferred paths have been tried, it will use a round-robin algorithm on the non-preferred paths until it finds a path that is available. If all paths are unavailable, the volume will go offline. Therefore, it can take time to perform path failover when multiple paths go offline. SDD under Solaris performs load balancing across the preferred paths where appropriate.

Veritas Volume Manager with dynamic multipathing


Veritas Volume Manager (VM) with dynamic multipathing (DMP) automatically selects the next available I/O path for I/O requests without action from the administrator. VM with DMP is also informed when you repair or restore a connection, and when you add or remove devices after the system has been fully booted (provided that the operating system recognizes the devices correctly). The new Java Native Interface (JNI) drivers support the host mapping of new volumes without rebooting the Solaris host. Note the following support characteristics: Veritas VM with DMP supports load balancing across multiple paths with SVC. Veritas VM with DMP does not support preferred pathing with SVC.

Coexistence with SDD and Veritas VM with DMP


Veritas Volume Manager with DMP will coexist in pass-through mode with SDD. DMP will use the vpath devices that are provided by SDD.

OS cluster support
Solaris with Symantec Cluster V4.1, Symantec SFHA and SFRAC V4.1/5.0, and Solaris with Sun Cluster V3.1/3.2 are supported at the time of writing.

SAN boot support


Note the following support characteristics: Boot from SAN is supported under Solaris 9 running Symantec Volume Manager. Boot from SAN is not supported when SDD is used as the multipathing software.


5.13 Hewlett-Packard UNIX configuration information


For the latest information about Hewlett-Packard UNIX (HP-UX) support, refer to this website: http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html

5.13.1 Operating system versions and maintenance levels


At the time of writing, HP-UX V11.0 and V11i v1/v2/v3 are supported (64-bit only).

5.13.2 Multipath solutions supported


At the time of writing, SDD V1.6.3.0 for HP-UX is supported. Multipathing Software PV Link and Cluster Software Service Guard V11.14/11.16/11.17/11.18 are also supported, but in a cluster environment, we suggest that you use SDD.

SDD dynamic pathing


HP-UX supports dynamic pathing when you either add more paths to an existing volume, or present a new volume to a host. SDD is aware of the preferred paths that SVC sets per volume. SDD will use a round-robin algorithm when failing over paths. That is, it will try the next known preferred path. If this method fails and all preferred paths have been tried, it will use a round-robin algorithm on the non-preferred paths until it finds a path that is available. If all paths are unavailable, the volume will go offline. It can take time, therefore, to perform path failover when multiple paths go offline. SDD under HP-UX performs load balancing across the preferred paths where appropriate.

Physical volume links (PVLinks) dynamic pathing


Unlike SDD, PVLinks does not load balance, and it is unaware of the preferred paths that SVC sets per volume. Therefore, it is strongly suggested that you use SDD, except when in a clustering environment or when using an SVC volume as your boot disk. When creating a Volume Group, specify the primary path that you want HP-UX to use when accessing the Physical Volume that is presented by SVC. This path, and only this path, will be used to access the PV as long as it is available, no matter what the SVC's preferred path to that volume is. Therefore, be careful when creating Volume Groups so that the primary links to the PVs (and the load) are balanced over both HBAs, FC switches, SVC nodes, and so on. When extending a Volume Group to add alternate paths to the PVs, the order in which you add these paths is HP-UX's order of preference if the primary path becomes unavailable. Therefore, when extending a Volume Group, the first alternate path that you add must be from the same SVC node as the primary path, to avoid unnecessary node failover due to an HBA, FC link, or FC switch failure.

5.13.3 Coexistence of SDD and PV Links


If you want to multipath a volume with PVLinks while SDD is installed, you need to make sure that SDD does not configure a vpath for that volume. To do this, put the serial number of any volumes that you want SDD to ignore in the /etc/vpathmanualexcl.cfg file. In the case of SAN boot, if you are booting from an SVC volume, when you install SDD (from Version 1.6 onward), SDD automatically ignores the boot volume.
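As an illustrative sketch only (the serial number shown is an example from this chapter; use the vdisk_UID of your own volume, and one serial number per line is assumed):

echo 60050768018301BF2800000000000010 >> /etc/vpathmanualexcl.cfg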


SAN boot support


SAN boot is supported on HP-UX by using PVLinks as the multipathing software on the boot device. You can use PVLinks or SDD to provide the multipathing support for the other devices that are attached to the system.

5.13.4 Using an SVC volume as a cluster lock disk


ServiceGuard does not provide a way to specify alternate links to a cluster lock disk. When using an SVC volume as your lock disk, if the path to FIRST_CLUSTER_LOCK_PV becomes unavailable, the HP node will not be able to access the lock disk if a 50-50 split in quorum occurs. To ensure redundancy, when editing your Cluster Configuration ASCII file, make sure that the variable FIRST_CLUSTER_LOCK_PV has a separate path to the lock disk for each HP node in your cluster. For example, when configuring a two-node HP cluster, make sure that FIRST_CLUSTER_LOCK_PV on HP server A is on a separate SVC node and through a separate FC switch than the FIRST_CLUSTER_LOCK_PV on HP server B.

5.13.5 Support for HP-UX with greater than eight LUNs


HP-UX will not recognize more than eight LUNs per port using the generic SCSI behavior. To accommodate this behavior, SVC supports a type associated with a host. This type can be set using the svctask mkhost command and modified using the svctask chhost command; the default type is generic, and an HP-UX-specific type is available for these hosts. When an initiator port, which is a member of a host of type HP-UX, accesses an SVC, the SVC behaves in the following way:

- Flat Space Addressing mode is used rather than Peripheral Device Addressing mode.
- When an inquiry command for any page is sent to LUN 0 using Peripheral Device Addressing, it is reported as Peripheral Device Type 0Ch (controller).
- When any command other than an inquiry is sent to LUN 0 using Peripheral Device Addressing, SVC responds as an unmapped LUN 0 normally responds.
- When an inquiry is sent to LUN 0 using Flat Space Addressing, it is reported as Peripheral Device Type 00h (Direct Access Device) if a LUN is mapped at LUN 0, or as 1Fh (Unknown Device Type) otherwise.
- When an inquiry is sent to an unmapped LUN that is not LUN 0 using Peripheral Device Addressing, the Peripheral qualifier returned is 001b and the Peripheral Device Type is 1Fh (unknown or no device type). This response is in contrast to the behavior for generic hosts, where Peripheral Device Type 00h is returned.
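To set the host type from the SVC CLI, a hedged sketch follows (the host name is hypothetical, and the hpux type value is an assumption here; verify the valid -type values for your code level in the CLI guide):

svctask chhost -type hpux HPUX_host1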

5.14 Using SDDDSM, SDDPCM, and SDD web interface


After installing the SDDDSM or SDD driver, specific commands are available. To open a command window for SDDDSM or SDD, from the desktop, select Start -> Programs -> Subsystem Device Driver -> Subsystem Device Driver Management. The command documentation for the various operating systems is available in the Multipath Subsystem Device Driver User's Guides:
http://www-1.ibm.com/support/docview.wss?rs=540&context=ST52G7&dc=DA400&uid=ssg1S7000303&loc=en_US&cs=utf-8&lang=en


It is also possible to configure the multipath driver so that it offers a web interface to run the commands. Before this configuration can work, we need to configure the web interface. Sddsrv does not bind to any TCP/IP port by default, but it allows port binding to be dynamically enabled or disabled.

For all platforms except Linux, the multipath driver package ships an sddsrv.conf template file named the sample_sddsrv.conf file. On all UNIX platforms except Linux, the sample_sddsrv.conf file is located in the /etc directory. On Windows platforms, the sample_sddsrv.conf file is in the directory where SDD is installed. You must use the sample_sddsrv.conf file to create the sddsrv.conf file in the same directory as the sample_sddsrv.conf file by simply copying it and naming the copied file sddsrv.conf. You can then dynamically change port binding by modifying the parameters in the sddsrv.conf file and changing the values of Enableport and Loopbackbind to True.

Figure 5-52 shows the start window of the multipath driver web interface.

Figure 5-52 SDD web interface
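For example, on a UNIX host the steps described above might look like the following sketch (the path and the parameter names are taken from this section; verify them against your installed driver level):

cp /etc/sample_sddsrv.conf /etc/sddsrv.conf
vi /etc/sddsrv.conf        # set Enableport = True and Loopbackbind = True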

5.15 Calculating the queue depth


The queue depth is the number of I/O operations that can be run in parallel on a device. It is usually possible to set a limit on the queue depth on the SDD paths (or equivalent) or the HBA. Ensure that you configure the servers to limit the queue depth on all of the paths to the SAN Volume Controller disks in configurations that contain a large number of servers or volumes. You might have a number of servers in the configuration that are idle, or do not initiate the calculated quantity of I/O operations. In that case, you might not need to limit the queue depth.

5.16 Further sources of information


For more information about host attachment and configuration to the SVC, refer to IBM System Storage SAN Volume Controller: Host Attachment Guide, SC26-7563. For more information about SDDDSM or SDD configuration, refer to IBM TotalStorage Multipath Subsystem Device Driver User's Guide, SC30-4096. When looking for information about certain storage subsystems, this link is usually helpful:
http://publib.boulder.ibm.com/infocenter/svcic/v3r1m0/index.jsp

5.16.1 Publications containing SVC storage subsystem attachment guidelines


It is beyond the scope of this document to describe the attachment to each subsystem that the SVC supports. Here is a short list of the publications that we found especially useful in the writing of this book, and in the field:
- SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521, describes in detail how you can tune your back-end storage to maximize your performance on the SVC.
  http://www.redbooks.ibm.com/redbooks/pdfs/sg247521.pdf
- DS8000 Performance Monitoring and Tuning, SG24-7146, describes the guidelines and procedures to make the most of the performance that is available from your DS8000 storage subsystem when attached to the IBM SAN Volume Controller.
  http://www.redbooks.ibm.com/redbooks/pdfs/sg247146.pdf
- DS4000 Best Practices and Performance Tuning Guide, SG24-6363, explains how to connect and configure your storage for optimized performance on the SVC.
  http://www.redbooks.ibm.com/redbooks/pdfs/sg2476363.pdf
- IBM XIV Storage System: Architecture, Implementation and Usage, SG24-7659, discusses specific considerations for attaching the XIV Storage System to a SAN Volume Controller.
  http://www.redbooks.ibm.com/redpieces/pdfs/sg247659.pdf


Chapter 6.

Data migration
In this chapter we explain how to migrate from a conventional storage infrastructure to a virtualized storage infrastructure by using the IBM System Storage SAN Volume Controller (SVC). We also explain how the SVC can be phased out of a virtualized storage infrastructure, for example, after a trial period or after using the SVC as a data migration tool. Next, we describe how to migrate from a fully allocated volume to a thin-provisioned volume by using the volume mirroring feature and the thin-provisioned volume together. Finally, we provide examples of using intracluster Metro Mirror to migrate data.


6.1 Migration overview


The SVC allows you to change the mapping of volume extents to managed disk (MDisk) extents, without interrupting host access to the volume. This functionality is utilized when performing volume migrations, and it applies to any volume that is defined on the SVC. This functionality can be used for these tasks:
- Migrating data from older back-end storage to SVC-managed storage
- Migrating data from one back-end controller to another back-end controller using the SVC as a data block mover, and afterwards removing the SVC from the SAN
- Migrating data from managed mode back into image mode prior to removing the SVC from a SAN
- Redistributing volumes and, therefore, the workload within an SVC cluster across back-end storage:
  - Moving workload onto newly installed storage
  - Moving workload off of old or failing storage, ahead of decommissioning it
  - Moving workload to rebalance a changed workload
- Migrating data from one SVC cluster to another SVC cluster

6.2 Migration operations


You can perform migration at either the volume or the extent level, depending on the purpose of the migration. The following migration activities are supported:
- Migrating extents within a storage pool, redistributing the extents of a given volume on the MDisks within the same storage pool
- Migrating extents off an MDisk, which is removed from the storage pool, to other MDisks in the same storage pool
- Migrating a volume from one storage pool to another storage pool
- Migrating a volume to change the virtualization type of the volume to image
- Migrating a volume between I/O Groups

6.2.1 Migrating multiple extents (within a storage pool)


You can migrate a number of volume extents at one time by using the migrateexts command. For detailed information about the migrateexts command parameters, use the SVC command-line interface help by typing this command:
svctask migrateexts -h
Or see IBM System Storage SAN Volume Controller Command-Line Interface User's Guide, GC27-2287.
When executed, this command migrates a given number of extents from the source MDisk, where the extents of the specified volume reside, to a defined target MDisk that must be part of the same storage pool. You can specify a number of migration threads to be used in parallel (from 1 to 4).
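An illustrative invocation follows (the volume and MDisk names are hypothetical; confirm the exact parameters with svctask migrateexts -h on your code level):

svctask migrateexts -vdisk VDISK1 -source mdisk2 -target mdisk5 -exts 64 -threads 2

This sketch would move 64 extents of VDISK1 from mdisk2 to mdisk5 using two parallel threads.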


If the type of the volume is image, then the volume type transitions to striped when the first extent is migrated. The MDisk access mode transitions from image to managed.

6.2.2 Migrating extents off an MDisk that is being deleted


When an MDisk is deleted from a storage pool using the rmmdisk -force command, any extents on the MDisk being used by a volume are first migrated off the MDisk and onto other MDisks in the storage pool prior to its deletion. In this case, the extents that need to be migrated are moved onto the set of MDisks that are not being deleted. This statement holds true if multiple MDisks are being removed from the storage pool at the same time. If a volume uses one or more extents that need to be moved as a result of an rmmdisk command, the virtualization type for that volume is set to striped (if it was previously sequential or image). If the MDisk is operating in image mode, the MDisk transitions to managed mode while the extents are being migrated. Upon deletion, it transitions to unmanaged mode.

Using the -force flag: If the -force flag is not used and if volumes occupy extents on one or more of the MDisks that are specified, the command fails. When the -force flag is used and if volumes occupy extents on one or more of the MDisks that are specified, all extents on the MDisks will be migrated to the other MDisks in the storage pool if there are enough free extents in the storage pool. The deletion of the MDisks is postponed until all extents are migrated, which can take time. In the case where there are insufficient free extents in the storage pool, the command fails.
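For example, the following sketch removes an MDisk from a storage pool and forces any in-use extents to be migrated to the remaining MDisks in that pool (the object names are hypothetical):

svctask rmmdisk -mdisk mdisk3 -force STGPool_DS4700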

6.2.3 Migrating a volume between storage pools


An entire volume can be migrated from one storage pool to another storage pool by using the migratevdisk command. A volume can be migrated between storage pools regardless of the virtualization type (image, striped, or sequential), although it transitions to the virtualization type of striped. The command varies, depending on the type of migration, as shown in Table 6-1.
Table 6-1 Migration types and associated commands

Storage pool-to-storage pool type    Command
Managed to managed                   migratevdisk
Image to managed                     migratevdisk
Managed to image                     migratetoimage
Image to image                       migratetoimage

Rule: For the migration to be acceptable, the source and destination storage pool must have the same extent size. Note that volume mirroring can also be used to migrate a volume between storage pools. This method can be used if the extent sizes of the two pools are not the same.
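An illustrative migratevdisk invocation follows (the volume and target storage pool names are hypothetical):

svctask migratevdisk -mdiskgrp Pool3 -threads 4 -vdisk VDISK_V3

Using a lower thread count reduces the background load that the migration places on the system.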


Figure 6-1 Managed volume migration to another storage pool

In Figure 6-1, we illustrate volume V3 migrating from Pool 2 to Pool 3. Extents are allocated to the migrating volume from the set of MDisks in the target storage pool, using the extent allocation algorithm. The process can be prioritized by specifying the number of threads that will be used in parallel (from 1 to 4) while migrating; using only one thread will put the least background load on the system. The offline rules apply to both storage pools. Therefore, referring back to Figure 6-1, if any of the M4, M5, M6, or M7 MDisks go offline, then the V3 volume goes offline. If the M4 MDisk goes offline, then V3 and V5 go offline, but V1, V2, V4, and V6 remain online. If the type of the volume is image, then the volume type transitions to striped when the first extent is migrated. The MDisk access mode transitions from image to managed. For the duration of the move, the volume is listed as being a member of the original storage pool. For the purposes of configuration, the volume moves to the new storage pool instantaneously at the end of the migration.

6.2.4 Migrating the volume to image mode


The facility to migrate a volume to an image mode volume can be combined with the ability to migrate between storage pools. The source for the migration can be a managed mode or an image mode volume. This leads to four possibilities:
- Migrate image mode-to-image mode within a storage pool.
- Migrate managed mode-to-image mode within a storage pool.


- Migrate image mode-to-image mode between storage pools.
- Migrate managed mode-to-image mode between storage pools.
These conditions must apply to be able to migrate:
- The destination MDisk must be greater than or equal to the size of the volume.
- The MDisk that is specified as the target must be in an unmanaged state at the time that the command is run.
- If the migration is interrupted by a cluster recovery, the migration will resume after the recovery completes.
- If the migration involves moving between storage pools, the volume behaves as described in 6.2.3, Migrating a volume between storage pools on page 229.
Regardless of the mode in which the volume starts, it is reported as being in managed mode during the migration. Also, both of the MDisks involved are reported as being in image mode during the migration. Upon completion of the command, the volume is classified as an image mode volume.
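An illustrative migratetoimage invocation follows (the object names are hypothetical; the target MDisk must be unmanaged and at least as large as the volume):

svctask migratetoimage -vdisk VDISK_V3 -mdisk mdisk8 -mdiskgrp Pool_Image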

6.2.5 Migrating a volume between I/O Groups


A volume can be migrated between I/O Groups by using the svctask chvdisk command. This command is only supported if the volume is not in a FlashCopy mapping or Remote Copy relationship. To move a volume between I/O Groups, the cache must first be flushed. The SVC will attempt to destage all write data for the volume from the cache during the I/O Group move. This flush will fail if data has been pinned in the cache for any reason (such as a storage pool being offline). By default, this failed flush will cause the migration between I/O Groups to fail, but this behavior can be overridden using the -force flag. If the -force flag is used and if the SVC is unable to destage all write data from the cache, the contents of the volume are corrupted by the loss of the cached data. During the flush, the volume operates in cache write-through mode.

Important: Do not move a volume to an offline I/O Group under any circumstance. You must ensure that the I/O Group is online before you move the volumes to avoid any data loss.

You must quiesce host I/O before the migration for two reasons:
- If there is significant data in cache that takes a long time to destage, the command line will time out.
- Subsystem Device Driver (SDD) vpaths that are associated with the volume are deleted before the volume move takes place to avoid data corruption. So, data corruption can occur if I/O is still occurring for a particular logical unit number (LUN) ID.

When migrating a volume between I/O Groups, you can specify the preferred node, if desired, or you can let the SVC assign the preferred node. A volume that is a member of a FlashCopy mapping or a Remote Copy relationship cannot be moved to another I/O Group, and you cannot override this restriction by using the -force flag on the CLI command used to migrate the volume (chvdisk). You must delete the mapping or relationship before the volume can be migrated between I/O Groups.
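An illustrative invocation follows (the volume and I/O Group names are hypothetical; quiesce host I/O first, as described above):

svctask chvdisk -iogrp io_grp1 VDISK1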


6.2.6 Monitoring the migration progress


To monitor the progress of ongoing migrations, use the following CLI command:
svcinfo lsmigrate
To determine the extent allocation of MDisks and volumes, use the following commands:
- To list the volume IDs and the corresponding number of extents that the volumes occupy on the queried MDisk, use this CLI command:
  svcinfo lsmdiskextent <mdiskname | mdisk_id>
- To list the MDisk IDs and the corresponding number of extents that the queried volumes occupy on the listed MDisks, use this CLI command:
  svcinfo lsvdiskextent <vdiskname | vdisk_id>
- To list the number of available free extents on an MDisk, use this CLI command:
  svcinfo lsfreeextents <mdiskname | mdisk_id>

Important: After a migration has been started, there is no way for you to stop the migration. The migration runs to completion unless it is stopped or suspended by an error condition, or if the volume being migrated is deleted. If you want the ability to start, suspend, or cancel a migration or control the rate of migration, consider using the volume mirroring function or migrating volumes between storage pools.

6.3 Functional overview of migration


This section describes the functional view of data migration.

6.3.1 Parallelism
You can perform several of the following activities in parallel.

Per cluster
An SVC cluster supports up to 32 active concurrent instances of members of the set of migration activities:
- Migrate multiple extents
- Migrate between storage pools
- Migrate off of a deleted MDisk
- Migrate to image mode
These high-level migration tasks operate by scheduling single extent migrations:
- Up to 256 single extent migrations can run concurrently. This number is made up of single extent migrates, which result from the operations previously listed.
- The Migrate Multiple Extents and Migrate Between Storage Pools commands support a flag that allows you to specify the number of parallel threads to use, between 1 and 4. This parameter affects the number of extents that will be concurrently migrated for that migration operation. Thus, if the thread value is set to 4, up to four extents can be migrated concurrently for that operation, subject to other resource constraints.


Per MDisk
The SVC supports up to four concurrent single extent migrates per MDisk. This limit does not take into account whether the MDisk is the source or the destination. If more than four single extent migrates are scheduled for a particular MDisk, further migrations are queued pending the completion of one of the currently running migrations.

6.3.2 Error handling


If a medium error occurs on a read from the source, if the destination's medium error table is full, if an I/O error occurs on a read from the source repeatedly, or if the MDisks go offline repeatedly, then the migration is suspended or stopped.

The migration is suspended if any of the following conditions exist; otherwise, it is stopped:
- The migration is between storage pools and has progressed beyond the first extent. These migrations are always suspended rather than stopped, because stopping a migration in progress leaves a volume spanning storage pools, which is not a valid configuration other than during a migration.
- The migration is a Migrate to Image Mode (even if it is processing the first extent). These migrations are always suspended rather than stopped, because stopping a migration in progress leaves the volume in an inconsistent state.
- A migration is waiting for a metadata checkpoint that has failed.

If a migration is stopped, and if any migrations are queued awaiting the use of the MDisk for migration, these migrations now commence. However, if a migration is suspended, the migration continues to use resources, and so, another migration is not started. The SVC attempts to resume the migration if the error log entry is marked as fixed using the CLI or the GUI. If the error condition no longer exists, the migration will proceed. The migration might resume on a node other than the node that started the migration.

6.3.3 Migration algorithm


This section describes the effect of the migration algorithm.

Chunks
Regardless of the extent size for the storage pool, data is migrated in units of 16 MB. In this description, this unit is referred to as a chunk. The following algorithm is used to migrate an extent:
1. Pause (pause means to queue all new I/O requests in the virtualization layer in SVC and to wait for all outstanding requests to complete) all I/O on the source MDisk on all nodes in the SVC cluster. The I/O to other extents is unaffected.
2. Unpause (resume) I/O on all of the source MDisk extents apart from writes to the specific chunk that is being migrated. Writes to the extent are mirrored to the source and destination.
3. On the node that is performing the migration, for each 256 KB section of the chunk:
   a. Synchronously read 256 KB from the source.
   b. Synchronously write 256 KB to the target.
4. After the entire chunk has been copied to the destination, repeat the process for the next chunk within the extent.

5. After the entire extent has been migrated, pause all I/O to the extent being migrated, perform a checkpoint on the extent move to on-disk metadata, redirect all further reads to the destination, and stop mirroring writes (writes go only to the destination).
6. If the checkpoint fails, the I/O is unpaused.

During the migration, the extent can be divided into three regions, as shown in Figure 6-2. Region B is the chunk that is being copied. Writes to Region B are queued (paused) in the virtualization layer waiting for the chunk to be copied. Reads to Region A are directed to the destination, because this data has already been copied. Writes to Region A are written to both the source and the destination extent to maintain the integrity of the source extent. Reads and writes to Region C are directed to the source, because this region has yet to be migrated.

The migration of a chunk requires 64 synchronous reads and 64 synchronous writes. During this time, all writes to the chunk from higher layers in the software stack (such as cache destages) are held back. If the back-end storage is operating with significant latency, it is possible that this operation might take time (minutes) to complete, which can have an adverse effect on the overall performance of the SVC. To avoid this situation, if the migration of a particular chunk is still active after one minute, the migration is paused for 30 seconds. During this time, writes to the chunk are allowed to proceed. After 30 seconds, the migration of the chunk is resumed. This algorithm is repeated as many times as necessary to complete the migration of the chunk.

Figure 6-2 Migrating an extent (Region A, already copied: reads and writes go to the destination; Region B, the 16 MB chunk being copied: reads and writes are paused; Region C, yet to be copied: reads and writes go to the source)
SVC guarantees read stability during data migrations, even if the data migration is stopped by a node reset or a cluster shutdown. This read stability is possible because SVC disallows writes on all nodes to the area being copied, and upon a failure, the extent migration is restarted from the beginning. At the conclusion of the operation, we have these results:
- Extents are migrated in 16 MB chunks, one chunk at a time.
- Chunks are either copied, in progress, or not copied.
- When the extent is finished, its new location is saved.


Figure 6-3 shows the data migration and write operation relationship.

Figure 6-3 Migration and write operation relationship

6.4 Migrating data from an image mode volume


This section describes migrating data from an image mode volume to a fully managed volume. This is the type of migration used to take an existing host LUN and move it into the virtualization environment that is provided by the SVC.

6.4.1 Image mode volume migration concept


First, we describe the concepts associated with this operation.

MDisk modes
There are three MDisk modes:
- Unmanaged MDisk: An MDisk is reported as unmanaged when it is not a member of any storage pool. An unmanaged MDisk is not associated with any volumes and has no metadata stored on it. The SVC will not write to an MDisk that is in unmanaged mode, except when it attempts to change the mode of the MDisk to one of the other modes.
- Image mode MDisk: Image mode provides a direct block-for-block translation from the MDisk to the volume with no virtualization. Image mode volumes have a minimum size of one block (512 bytes) and always occupy at least one extent. An image mode MDisk is associated with exactly one volume.
- Managed mode MDisk: Managed mode MDisks contribute extents to the pool of available extents in the storage pool. Zero or more managed mode volumes might use these extents.

Transitions between the modes


The following state transitions can occur to an MDisk (see Figure 6-4 on page 236):
- Unmanaged mode to managed mode: This transition occurs when an MDisk is added to a storage pool, which makes the MDisk eligible for the allocation of data and metadata extents.


- Managed mode to unmanaged mode: This transition occurs when an MDisk is removed from a storage pool.
- Unmanaged mode to image mode: This transition occurs when an image mode MDisk is created on an MDisk that was previously unmanaged. It also occurs when an MDisk is used as the target for a migration to image mode.
- Image mode to unmanaged mode: There are two distinct ways in which this transition can happen:
  - When an image mode volume is deleted. The MDisk that supported the volume becomes unmanaged.
  - When an image mode volume is migrated in image mode to another MDisk, the MDisk that is being migrated from remains in image mode until all data has been moved off of it. It then transitions to unmanaged mode.
- Image mode to managed mode: This transition occurs when the image mode volume that is using the MDisk is migrated into managed mode.
- Managed mode to image mode is impossible: There is no operation that will take an MDisk directly from managed mode to image mode. You can achieve this transition by performing operations that convert the MDisk to unmanaged mode and then to image mode.
Figure 6-4 Various states of a volume (transitions among not in group, managed mode, migrating to image mode, and image mode)

Image mode volumes have the special property that the last extent in the volume can be a partial extent. Managed mode disks do not have this property. To perform any type of migration activity on an image mode volume, the image mode disk must first be converted into a managed mode disk. If the image mode disk has a partial last


extent, this last extent in the image mode volume must be the first extent to be migrated. This migration is handled as a special case. After this special migration operation has occurred, the volume becomes a managed mode volume and is treated in the same way as any other managed mode volume. If the image mode disk does not have a partial last extent, no special processing is performed. The image mode volume is simply changed into a managed mode volume and is treated in the same way as any other managed mode volume. After data is migrated off a partial extent, there is no way to migrate data back onto the partial extent.

6.4.2 Migration tips


Several methods are available to migrate an image mode volume to a managed mode volume. If your image mode volume is in the same storage pool as the MDisks on which you want to migrate the extents, you can perform one of these migrations:
- Migrate a single extent. You have to migrate the last extent of the image mode volume (number N-1).
- Migrate multiple extents.
- Migrate all of the in-use extents from an MDisk.
- Migrate extents off an MDisk that is being deleted.
If you have two storage pools, one storage pool for the image mode volume and one storage pool for the managed mode volumes, you can migrate a volume from one storage pool to another storage pool. Have one storage pool for all the image mode volumes, and other storage pools for the managed mode volumes, and use the migrate volume facility. Be sure to verify that enough extents are available in the target storage pool.

6.5 Data migration for Windows using the SVC GUI


In this section, we move two LUNs from a Windows Server 2008 server that is currently attached to a DS4700 storage subsystem over to the SVC. The migration examples include:
- Moving a Microsoft server's SAN LUNs from a storage subsystem and virtualizing those same LUNs through the SVC
  Perform this activity when introducing the SVC into your environment. This section shows that your host downtime is only a few minutes while you remap and remask disks using your storage subsystem LUN management tool. We describe this step in detail in 6.5.2, Adding the SVC between the host system and the DS4700 on page 240.
- Migrating your image mode volume to a volume while your host is still running and servicing your business application
  Perform this activity if you are removing a storage subsystem from your SAN environment, or if you want to move the data onto LUNs that are more appropriate for the type of data stored on those LUNs, taking into account availability, performance, and redundancy. We describe this step in 6.5.6, Migrating the volume from image mode to image mode on page 264.


- Migrating your volume to an image mode volume. Perform this activity if you are removing the SVC from your SAN environment after a trial period. We describe this step in detail in 6.5.5, Migrating a volume from managed mode to image mode on page 260.
- Moving an image mode volume to another image mode volume. Use this procedure to migrate data from one storage subsystem to another storage subsystem. We describe this step in detail in 6.6.6, Migrating the volumes to image mode volumes on page 294.
You can use these activities individually or together to migrate your server's LUNs from one storage subsystem to another storage subsystem using the SVC as your migration tool. The only downtime that is required for these activities is the time that it takes you to remask and remap the LUNs between the storage subsystems and your SVC.

6.5.1 Windows Server 2008 host system connected directly to the DS4700
In our example configuration, we use a Windows Server 2008 host, a DS4700, and a DS4500. The host has two LUNs (drives X and Y), which are part of one DS4700 array. Before the migration, LUN masking is defined in the DS4700 to give the Windows Server 2008 host system access to the volumes from the DS4700 that are labeled X and Y (see Figure 6-6 on page 239). Figure 6-5 shows the starting zoning scenario.

Figure 6-5 Starting zoning scenario

Figure 6-6 on page 239 shows the two LUNs (drive X and Y).


Figure 6-6 Drives X and Y

Figure 6-7 shows the properties of one of the DS4700 disks using the Subsystem Device Driver DSM (SDDDSM). The disk appears as an IBM 1814 Fast Multipath Device.

Figure 6-7 Disk properties


6.5.2 Adding the SVC between the host system and the DS4700
Figure 6-8 shows the new environment with the SVC and a second storage subsystem attached to the SAN. The second storage subsystem is not required to migrate to the SVC, but in the following examples, we show that it is possible to move data across storage subsystems without any host downtime.

Figure 6-8 Add SVC and second storage subsystem

To add the SVC between the host system and the DS4700 storage subsystem, perform the following steps:
1. Check that you have installed supported device drivers on your host system.
2. Check that your SAN environment fulfills the supported zoning configurations.
3. Shut down the host.
4. Change the LUN masking in the DS4700. Mask the LUNs to the SVC, and remove the masking for the host. Figure 6-9 on page 241 shows the two LUNs with LUN IDs 16 and 33 remapped to SVC ITSO-CLS1.


Figure 6-9 LUNs remapped

Attention: To avoid potential data loss, back up all the data stored on your external storage before using the wizard.
5. Log on to your SVC Console and open Physical Storage > Migration; see Figure 6-10.

Figure 6-10 Physical Storage and Migration

6. Click Start New Migration; this will start a wizard as shown in Figure 6-11 on page 242.

Figure 6-11 Start New Migration

7. Follow the Storage Migration Wizard as shown in Figure 6-12, then click Next.

Figure 6-12 Migration Wizard - Step 1 of 8

8. Figure 6-13 on page 243 shows the Prepare Environment for Migration information; click Next.


Figure 6-13 Migration Wizard - Step 2 of 8 - preparing the environment for migration

9. Click Next to complete Step 3; see Figure 6-14.

Figure 6-14 Migration Wizard - Step 3 of 8 - mapping storage

10.Figure 6-15 on page 244 shows device discovery; click Close.


Figure 6-15 Discovering devices

11.Figure 6-16 shows the available MDisks for Migration; click Next.

Figure 6-16 Migration Wizard - Step 4 of 8

12.Mark both MDisks for migrating as shown in Figure 6-17 on page 245, and then click Next.


Figure 6-17 Migration Wizard - selecting disks for migration

13.Figure 6-18 shows the MDisk import process. During the import process, a new storage pool is automatically created, in our case Migrationpool_8192. You can see that the command issued by the wizard creates an image mode volume with a one-to-one mapping to mdisk5. Click Close to continue.

Figure 6-18 Migration Wizard - Step 5 of 8 - MDisk import process
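For reference, the image mode volume that the wizard creates in this step corresponds roughly to the following CLI command. This is a sketch only; the volume name import_vol_1 is a placeholder, because the wizard assigns its own name:
svctask mkvdisk -mdiskgrp Migrationpool_8192 -iogrp io_grp0 -vtype image -mdisk mdisk5 -name import_vol_1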

14.Now we create a new host object that we will later map the volume to. Click New Host as shown in Figure 6-19 on page 246.


Figure 6-19 Migration Wizard - creating a new host

15.Figure 6-20 shows the empty fields that we need to complete to match our host requirements.

Figure 6-20 Migration Wizard - host information fields

16.Here you type the name you want to use for the Host, add the Fibre Channel port, and then select a Host Type. In our case, the name is W2k8_Server. Click Create Host as shown in Figure 6-21 on page 247.


Figure 6-21 Migration Wizard - completed host information

17.Figure 6-22 shows the progress of creating a host. Click Close.

Figure 6-22 Progress status - creating a host

18.Figure 6-23 on page 248 shows that the host was created successfully. Click Next to continue.


Figure 6-23 Migration Wizard - host creation was successful

19.Figure 6-24 shows all the available volumes to map to a host. Click Next to continue.

Figure 6-24 Migration Wizard - Step 6 of 8 - volumes available for mapping

20.Mark both volumes and click Map to Host as shown in Figure 6-25 on page 249.


Figure 6-25 Migration Wizard - mapping volumes to host

21.Modify Mapping by choosing the host using the drop-down menu as shown in Figure 6-26, and then click Next.

Figure 6-26 Migration Wizard - modifying mappings

22.The rightmost side of Figure 6-27 on page 250 shows the volumes that can be marked to map to your host. Mark both volumes and click OK.


Figure 6-27 Migration Wizard - volume mapping to host

23.Figure 6-28 shows the progress of the volume mapping to host. Click Close when finished.

Figure 6-28 Modify Mappings - task completed

24.After the volume to host mapping task is completed, notice that the Host Mapping column now shows Yes for the mapped volumes; see Figure 6-29 on page 251. Click Next.


Figure 6-29 Migration Wizard - Map Volumes to Hosts

25.Select the storage pool you want to use for migration, in our case DS4700_2 as shown in Figure 6-30, and click Next.

Figure 6-30 Migration Wizard - Step 7 - selecting a storage pool to use for migration

26.Migration starts automatically by doing a volume copy, as shown in Figure 6-31 on page 252.


Figure 6-31 Start Migration - task completed

27.Figure 6-32 then appears, advising that migration has begun. Click Finish.

Figure 6-32 Migration Wizard - Step 8 of 8 - data migration has begun

28.The window in Figure 6-33 on page 253 will appear automatically to show the progress of the migration.


Figure 6-33 Progress of migration process

29.Go to Volumes Volumes by host as shown in Figure 6-34 to see all the volumes served by the newly created host for this migration step.

Figure 6-34 Selecting to view volumes by host

30.Figure 6-35 on page 254 shows all the volumes (copy0* and copy1) served by the created host.


Figure 6-35 Volumes served by host

You can see in Figure 6-35 that the migrated volume is actually a mirrored volume with one copy on the image mode pool and another copy in a managed mode storage pool. The administrator can choose to leave the volume like this or split the initial copy from the mirror.
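If you prefer to check or act on this state from the CLI, a minimal sketch is to monitor the copy synchronization and, after it reaches 100%, optionally split off the new copy as an independent volume. The volume and new copy names here are placeholders; verify the actual copy IDs with svcinfo lsvdiskcopy first:
svcinfo lsvdisksyncprogress
svctask splitvdiskcopy -copy 1 -name migrated_copy_split <volume_name>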

6.5.3 Importing the migrated disks into an online Windows Server 2008 host
To import the migrated disks into an online Windows 2008 Server host, perform these steps: 1. Start the Windows Server 2008 host system again, and expand Computer Management to see the new disk properties changed to a 2145 Multi-Path Disk Device (Figure 6-36 on page 255).


Figure 6-36 Device management - see the new disk properties


2. Figure 6-37 shows the Disk Management window.

Figure 6-37 Migrated disks are available

3. Select Start > All Programs > Subsystem Device Driver DSM > Subsystem Device Driver DSM to open the SDDDSM command-line utility; see Figure 6-38.

Figure 6-38 Subsystem Device Driver DSM CLI


4. Enter the datapath query device command to check whether all paths are available, as planned in your SAN environment; see Example 6-1.
Example 6-1 The datapath query device command

C:\Program Files\IBM\SDDDSM>datapath query device

Total Devices : 2

DEV#: 0  DEVICE NAME: Disk0 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A680E90800000000000007
============================================================================
Path#             Adapter/Hard Disk     State     Mode     Select    Errors
    0    Scsi Port2 Bus0/Disk0 Part0     OPEN    NORMAL       180         0
    1    Scsi Port2 Bus0/Disk0 Part0     OPEN    NORMAL         0         0
    2    Scsi Port2 Bus0/Disk0 Part0     OPEN    NORMAL       145         0
    3    Scsi Port2 Bus0/Disk0 Part0     OPEN    NORMAL         0         0

DEV#: 1  DEVICE NAME: Disk1 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A680E90800000000000005
============================================================================
Path#             Adapter/Hard Disk     State     Mode     Select    Errors
    0    Scsi Port2 Bus0/Disk1 Part0     OPEN    NORMAL        25         0
    1    Scsi Port2 Bus0/Disk1 Part0     OPEN    NORMAL       164         0
    2    Scsi Port2 Bus0/Disk1 Part0     OPEN    NORMAL         0         0
    3    Scsi Port2 Bus0/Disk1 Part0     OPEN    NORMAL       136         0

C:\Program Files\IBM\SDDDSM>

6.5.4 Adding the SVC between the host and DS4700 using the CLI
In this section, we use only CLI commands to add the directly attached storage to the SVC's managed storage. To read about our preparation of the environment, see 6.5.1, Windows Server 2008 host system connected directly to the DS4700 on page 238.

Verifying the currently used storage pools


Verify the currently used storage pool on the SVC, as shown in Example 6-2, to get an idea of the storage pools free capacity. In our case there are only two storage pools, DS4700_1 and DS4700_2.
Example 6-2 Verifying the storage pools

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning easy_tier easy_tier_status 0 DS4700_1 online 2 0 49.50GB 256 49.50GB 0.00MB 0.00MB 0.00MB 0 80 auto inactive 1 DS4700_2 online 2 0 50.00GB 256 50.00GB 0.00MB 0.00MB 0.00MB 0 80 auto inactive


Creating a storage pool


When we move the two LUNs to the SVC, we use them initially in image mode. Therefore, we need a storage pool to hold those disks. First, we add a new empty storage pool for the import of the LUNs, in our case imagepool, as shown in Example 6-3. It is better to use a separate pool for the import so that, if something goes wrong during the import, the other storage pools are not affected.
Example 6-3 Adding a storage pool

IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name imagepool -tier generic_hdd -ext 256 MDisk Group, id [2], successfully created IBM_2145:ITSO-CLS1:admin>

Verifying the new storage pool has been created


Now we verify whether the new storage pool has been added correctly, as shown in Example 6-4.
Example 6-4 Verifying the new storage pool

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning easy_tier easy_tier_status 0 DS4700_1 online 2 0 49.50GB 256 49.50GB 0.00MB 0.00MB 0.00MB 0 80 auto inactive 1 DS4700_2 online 2 0 50.00GB 256 50.00GB 0.00MB 0.00MB 0.00MB 0 80 auto inactive 2 imagepool online 0 0 0 256 0 0.00MB 0.00MB 0.00MB 0 0 auto inactive IBM_2145:ITSO-CLS1:admin>

Creating the image volume


As shown in Example 6-5 we need to create two image volumes (image1 and image2) within our storage pool imagepool, one for each MDisk to import LUNs from the storage controller to within the SVC.
Example 6-5 Creating the image volume

IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -name image1 -iogrp io_grp0 -mdiskgrp imagepool -vtype image -mdisk mdisk4 -syncrate 80
Virtual Disk, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -name image2 -iogrp io_grp0 -mdiskgrp imagepool -vtype image -mdisk mdisk5 -syncrate 80
Virtual Disk, id [1], successfully created
IBM_2145:ITSO-CLS1:admin>


Verifying the image volumes


Now we check again whether the volumes are created within the storage pool imagepool, as shown in Example 6-6.
Example 6-6 Verifying the image volumes

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_wri te_state se_copy_count 0 image1 0 io_grp0 online 2 imagepool 5.00GB image 6005076801910281A000000000000022 0 1 empty 0 1 image2 0 io_grp0 online 2 imagepool 5.00GB image 6005076801910281A000000000000023 0 1 empty 0 IBM_2145:ITSO-CLS1:admin>

Creating the host


We check whether our host exists or if we need to create it, as shown in Example 6-7. In our case the server has already been created.
Example 6-7 Listing the host

IBM_2145:ITSO-CLS1:admin>svcinfo lshost id name port_count iogrp_count 0 W2K8_Server 1 IBM_2145:ITSO-CLS1:admin>

Mapping the image volumes to the host


Next, we map the image volumes to host W2K8_Server as shown in Example 6-8; this is also known as LUN masking.
Example 6-8 Mapping the volumes

IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host W2K8_Server -scsi 0 -force image1 Virtual Disk to Host map, id [0], successfully created IBM_2145:ITSO-CLS1:admin> IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host W2K8_Server -scsi 1 -force image2 Virtual Disk to Host map, id [1], successfully created IBM_2145:ITSO-CLS1:admin>

Adding the image volumes to a storage pool


Add a copy of each image volume in storage pool DS4700_2, as shown in Example 6-9, so that each volume also has a fully allocated copy that is managed by the SVC.
Example 6-9 Adding volumes to storage pool

IBM_2145:ITSO-CLS1:admin>svctask addvdiskcopy -mdiskgrp DS4700_2 image1 Vdisk [0] copy [1] successfully created IBM_2145:ITSO-CLS1:admin> IBM_2145:ITSO-CLS1:admin>svctask addvdiskcopy -mdiskgrp DS4700_2 image2 Vdisk [1] copy [1] successfully created IBM_2145:ITSO-CLS1:admin>


Checking the status of the volumes


Both volumes now have a second copy (shown as type many in Example 6-10) and are available to be used by the host.
Example 6-10 Status check

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state se_copy_count 0 image1 0 io_grp0 online many many 5.00GB many 6005076801910281A000000000000022 0 2 empty 0 1 image2 0 io_grp0 online many many 5.00GB many 6005076801910281A000000000000023 0 2 empty 0 IBM_2145:ITSO-CLS1:admin>
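After the new copies are fully synchronized, you can optionally remove the original image mode copies so that only the SVC-managed copies remain, which also frees the original MDisks. This is a sketch only; confirm the copy IDs with svcinfo lsvdiskcopy before removing anything (in our example, copy 0 is the image mode copy):
svcinfo lsvdisksyncprogress
svctask rmvdiskcopy -copy 0 image1
svctask rmvdiskcopy -copy 0 image2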

6.5.5 Migrating a volume from managed mode to image mode


In this section, we migrate a managed volume to an image mode volume by performing these steps:
1. We create an empty storage pool for each volume that we want to migrate to image mode. These storage pools will host the target MDisks that we will map to our server at the end of the migration.
2. We go to Physical Storage > Pools and create a new pool, as shown in Figure 6-39.

Figure 6-39 Selecting Pools

3. To create an empty storage pool for migration, perform Step 1 and Step 2 as shown in Figure 6-40 on page 261 and Figure 6-41 on page 261.


Figure 6-40 Create Storage Pool - Step 1 of 2

Next, click Finish; see Figure 6-41.

Figure 6-41 Create Storage Pool - Step 2

4. Figure 6-42 reminds you that an empty storage pool has been created. Click OK.

Figure 6-42 Reminder

5. Figure 6-43 on page 262 shows the progress status of creating a storage pool for migration. Click Close to continue.


Figure 6-43 Create storage pool - progress status

6. From the Volumes > All Volumes panel, select the volume that you want to migrate to image mode and select Export to Image Mode from the drop-down menu as shown in Figure 6-44.

Figure 6-44 Select volume

7. Select the MDisk to migrate the volume onto, as shown in Figure 6-45 on page 263, and then click Next.


Figure 6-45 Migrate to an Image Mode

8. Select a storage pool in which the image mode volume will be placed after the migration is completed, in our case the For Migration pool, and click Finish; see Figure 6-46.

Figure 6-46 Select storage pool


9. The volume is exported to image mode and placed in the For Migration pool; see Figure 6-47. Click Close.

Figure 6-47 Export Volume to image process

10.Navigate to the Physical Storage > MDisks section; notice that MDisk5 is now an image mode MDisk, as shown in Figure 6-48.

Figure 6-48 MDisk is in image mode

11.Repeat these steps for every volume that you want to migrate to an image mode volume. 12.Delete the image mode data from the SVC by using the procedure described in 6.5.7, Removing image mode data from the SVC on page 274.

6.5.6 Migrating the volume from image mode to image mode


Use the volume migration from image mode to image mode process to move image mode volumes from one storage subsystem to another storage subsystem without going through


the SVC fully managed mode. The data stays available for the applications during this migration. This procedure is nearly the same as the procedure described in 6.5.5, Migrating a volume from managed mode to image mode on page 260. In our example, we migrate the Windows server W2k8_Log volume to another disk subsystem as an image mode volume. The second storage subsystem is a DS4500; a new LUN is configured on that storage subsystem and mapped to the SVC cluster. The LUN is available to the SVC as the unmanaged MDisk5, as shown in Figure 6-49.

Figure 6-49 Unmanaged disk on a DS4500 storage subsystem


To migrate the image mode volume to another image mode volume, perform the following steps:
1. Select the unmanaged MDisk5, then click Actions (or right-click) and select Import from the list, as shown in Figure 6-50.

Figure 6-50 Import the unmanaged Mdisk into SVC

2. The Introduction window opens describing the process of importing the MDisk and mapping an image mode volume to it, as shown in Figure 6-51. Click Next.

Figure 6-51 Import Wizard - Step1 of 2

3. Do not select a target pool because you do not want to migrate into an SVC managed volume pool. Instead, simply click Finish; see Figure 6-52 on page 267.


Figure 6-52 Import Wizard - Step 2

4. Figure 6-53 shows a warning message indicating a storage pool has not been selected and the volume will remain in the temporary pool. Click OK to continue.

Figure 6-53 Warning message - storage pool not selected


5. The import process starts, as shown in Figure 6-54, by creating a temporary storage pool Migrationpool_8192 (8 GB) and an image volume. Click Close to continue.

Figure 6-54 Import of MDisk and creation of temporary storage pool Migrationpool_8192

6. As shown in Figure 6-55, there is now an image mode mdisk5 with the import controller name and SCSI ID as its name.

Figure 6-55 Imported mdisk5 within the created storage pool


7. Now create a new storage pool, Migration_out, with the same extent size (8 GB) as the automatically created storage pool Migrationpool_8192, for transferring the image mode disk. Go to Physical Storage > Pools, as shown in Figure 6-56.

Figure 6-56 Pools

8. Click New Pool to create an empty storage pool, as shown in Figure 6-57.

Figure 6-57 Create a new storage pool


9. Give your new storage pool the meaningful name Migration_out and click the Advanced Settings drop-down menu. Choose 8 GB as the extent size for your new storage pool, as shown in Figure 6-58.

Figure 6-58 Step 1 of 2 - create an empty storage pool with extent size 8 GB

10.Figure 6-59 shows a storage pool window without any disks. Click Finish to continue to create an empty storage pool.

Figure 6-59 Step 2 no disks

11.The warning in Figure 6-60 on page 271 pops up to remind you that an empty storage pool will be created. Click OK to continue.


Figure 6-60 Warning message - creating an empty storage pool

12.Figure 6-61 shows the progress of creating the storage pool Migration_out. Click Close to continue.

Figure 6-61 Progress of storage pool creation

13.The empty storage pool for image to image migration has been created. Go to Volumes > Volumes by Pool, as shown in Figure 6-62.

Figure 6-62 Storage pool created


14.Select the storage pool of the imported disk, Migrationpool_8192 in the left panel. Then mark the image disk you want to migrate out and select Actions. From the drop-down menu select Export to Image Mode, as shown in Figure 6-63.

Figure 6-63 Export to Image Mode

15.Select the target MDisk on the new disk controller that you want to migrate to. Click Next, as shown in Figure 6-64.

Figure 6-64 Step 1 of 2 - select target MDisk


16.Select the empty Migration_out storage pool as the migration target, as shown in Figure 6-65. Click Finish.

Figure 6-65 Step 2 - select target storage pool

17.Figure 6-66 shows the progress status of the Export Volume to Image process. Click Close to continue.

Figure 6-66 Export Volume to Image progress status

18.Figure 6-67 on page 274 shows that the MDisk location has changed as expected to the new storage pool Migration_out.


Figure 6-67 Image disk migrated to new storage pool

19.Repeat these steps for all image mode volumes that you want to migrate. 20.If you want to delete the data from the SVC, use the procedure described in 6.5.7, Removing image mode data from the SVC on page 274.

6.5.7 Removing image mode data from the SVC


If your data resides in an image mode volume inside the SVC, you can remove the volume from the SVC, which frees up the original LUN for reuse. The previous sections illustrated how to migrate data to an image mode volume. Depending on your environment, you might have to follow these procedures before deleting the image mode volume:
- 6.5.5, Migrating a volume from managed mode to image mode on page 260
- 6.5.6, Migrating the volume from image mode to image mode on page 264
To remove the image mode volume from the SVC, we delete the volume (on the CLI, this is the svctask rmvdisk command).


If the command succeeds on an image mode volume, the underlying back-end storage controller will be consistent with the data that a host might previously have read from the image mode volume; that is, all fast write data will have been flushed to the underlying LUN. Deleting an image mode volume causes the MDisk that is associated with the volume to be ejected from the storage pool. The mode of the MDisk will be returned to unmanaged.

Note: This situation only applies to image mode volumes. If you delete a normal volume, all of the data will also be deleted.

As shown in Example 6-1 on page 257, the SAN disks currently reside on the SVC 2145 device. Check that you have installed the supported device drivers on your host system. To switch back to the storage subsystem, perform the following steps:
1. Shut down your host system.
2. Edit the LUN masking on your storage subsystem. Remove the SVC from the LUN masking, and add the host to the masking.
3. Open the Volumes by Host view to see which volumes are currently mapped to your host, as shown in Figure 6-68.

Figure 6-68 Volume by host mapping

4. Select your host and then the volume. Right-click to open the menu and select Unmap all Hosts, as shown in Figure 6-69 on page 276.


Figure 6-69 Unmap volume from host

5. Verify your unmap process, as shown in Figure 6-70, and click Unmap.

Figure 6-70 Verify your unmapping process

6. Figure 6-71 shows that the volume is no longer mapped to the host.

Figure 6-71 Volume has been removed from host

7. Repeat steps 3 to 5 for every image mode volume that you want to remove from the SVC.


8. Power on your host system.

6.5.8 Map the free disks onto the Windows Server 2008
To detect and map the disks that have been freed from SVC management, go to the Windows Server 2008 host:
1. Using your DS4500 Storage Manager interface, remap the two LUNs that were MDisks back to your Windows Server 2008 server.
2. Open your Computer Management window. Figure 6-72 shows that the LUNs are now back to an IBM 1814 type.

Figure 6-72 IBM 1814 type

3. Open your Disk Management window and notice that the disks have appeared. You might need to reactivate your disk by using the right-click option on each disk.


Figure 6-73 Windows Server 2008 Disk Management

6.6 Migrating Linux SAN disks to SVC disks


In this section, we move the two LUNs from a Linux server that is currently booting directly off of our DS4000 storage subsystem over to the SVC. We then manage those LUNs with the SVC, move them between other managed disks, and finally move them back to image mode disks so that those LUNs can be masked and mapped back to the Linux server directly. Using this example can help you to perform any of the following activities in your environment:
- Move a Linux server's SAN LUNs from a storage subsystem and virtualize those same LUNs through the SVC. Perform this activity first when introducing the SVC into your environment. This section shows that your host downtime is only a few minutes while you remap and remask the disks using your storage subsystem LUN management tool. We describe this step in detail in 6.6.2, Preparing your SVC to virtualize disks on page 281.
- Move data between storage subsystems while your Linux server is still running and servicing your business application. Perform this activity if you are removing a storage subsystem from your SAN environment, or if you want to move the data onto LUNs that are more appropriate for the type of data that is stored on those LUNs, taking availability, performance, and redundancy into account. We describe this step in 6.6.4, Migrating the image mode volumes to managed MDisks on page 288.
- Move your Linux server's LUNs back to image mode volumes so that they can be remapped and remasked directly back to the Linux server. We describe this step in 6.6.5, Preparing to migrate from the SVC on page 291.


You can use these three activities individually, or together, to migrate your Linux server's LUNs from one storage subsystem to another storage subsystem using the SVC as your migration tool. If you do not use all three activities, you can use only the applicable ones to introduce the SVC into, or remove it from, your environment. The only downtime required for these activities is the time that it takes to remask and remap the LUNs between the storage subsystems and your SVC. In Figure 6-74, we show our Linux environment.

Figure 6-74 Linux SAN environment

Figure 6-74 shows our Linux server connected to our SAN infrastructure. It has two LUNs that are masked directly to it from our storage subsystem:
- The LUN with SCSI ID 0 holds the host operating system (our host runs Red Hat Enterprise Linux V5.1), and this LUN is used to boot the system directly from the storage subsystem. The operating system identifies it as /dev/mapper/VolGroup00-LogVol00, and Linux sees this LUN as our /dev/sda disk.
SCSI LUN ID 0: To successfully boot a host off of the SAN, you must have assigned the LUN as SCSI LUN ID 0.
- We have also mapped a second disk (SCSI LUN ID 1) to the host. It is 5 GB in size, and it is mounted in the /data folder on the /dev/dm-2 disk.
Example 6-11 on page 280 shows the disks that are directly attached to the Linux host.


Example 6-11 Directly attached disks

[root@Palau data]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      10093752   1971344   7601400  21% /
/dev/sda1               101086     12054     83813  13% /boot
tmpfs                  1033496         0   1033496   0% /dev/shm
/dev/dm-2              5160576    158160   4740272   4% /data
[root@Palau data]#

Our Linux server represents a typical SAN environment with a host directly using LUNs that were created on a SAN storage subsystem, as shown in Figure 6-74 on page 279:
- The Linux server's host bus adapter (HBA) cards are zoned so that they are in the Green Zone with our storage subsystem.
- The two LUNs that have been defined on the storage subsystem, using LUN masking, are directly available to our Linux server.

6.6.1 Connecting the SVC to your SAN fabric


This section describes the basic steps that you take to introduce the SVC into your SAN environment. Although this section only summarizes these activities, you can introduce the SVC into your SAN environment without any downtime to any host or application that also uses your storage area network. If you have an SVC that is already connected, skip to 6.6.2, Preparing your SVC to virtualize disks on page 281. Connecting the SVC to your SAN fabric requires that you perform these tasks:
1. Assemble your SVC components (nodes, uninterruptible power supply unit, and SSPC), cable the SVC correctly, power the SVC on, and verify that the SVC is visible on your SAN. We describe these tasks in much greater detail in Chapter 3, Planning and configuration on page 57.
2. Create and configure your SVC cluster.
3. Create these additional zones:
   - An SVC node zone (our Black Zone in Figure 6-75 on page 281)
   - A storage zone (our Red Zone)
   - A host zone (our Blue Zone)
For more detailed information about how to configure the zones correctly, see Chapter 3, Planning and configuration on page 57. Figure 6-75 on page 281 shows our environment.


Figure 6-75 SAN environment with SVC attached

6.6.2 Preparing your SVC to virtualize disks


This section describes the preparation tasks that we performed before taking our Linux server offline. These activities are all nondisruptive. They do not affect your SAN fabric or your existing SVC configuration (if you already have a production SVC in place).

Creating a storage pool


When we move the two Linux LUNs to the SVC, we use them initially in image mode. Therefore, we need storage pools to hold those disks. First, we create an empty storage pool for each of the disks, using the commands in Example 6-12. We name the storage pool that holds our boot LUN Palau_Pool1, and we name the storage pool that holds the data LUN Palau_Pool2.
Example 6-12 Create an empty storage pool

IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name Palau_Pool1 -ext 512
MDisk Group, id [2], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning easy_tier easy_tier_status
2 Palau_Pool1 online 0 0 0 512 0 0.00MB 0.00MB 0.00MB 0 0 auto inactive
3 Palau_Pool2 online 0 0 0 512 0 0.00MB 0.00MB 0.00MB 0 0 auto inactive
IBM_2145:ITSO-CLS1:admin>

Creating your host definition


If you have prepared your zones correctly, the SVC can see the Linux server's HBA adapters on the fabric (our host only had one HBA). The svcinfo lshbaportcandidate command on the SVC lists all of the WWNs, which have not yet been allocated to a host, that the SVC can see on the SAN fabric. Example 6-13 shows the output of the nodes that it found on our SAN fabric. (If the port did not show up, it indicates a zone configuration problem.)
Example 6-13 Display HBA port candidates

IBM_2145:ITSO-CLS1:admin>svcinfo lshbaportcandidate
id
210000E08B89C1CD
210000E08B054CAA
210000E08B0548BC
210000E08B0541BC
210000E08B89CCC2
IBM_2145:ITSO-CLS1:admin>

If you do not know the WWN of your Linux server, you can look at which WWNs are currently configured on your storage subsystem for this host. Figure 6-76 shows our configured ports on an IBM DS4700 storage subsystem.

Figure 6-76 Display port WWNs


After verifying that the SVC can see our host (linux2), we create the host entry and assign the WWN to this entry. Example 6-14 shows these commands.
Example 6-14 Create the host entry

IBM_2145:ITSO-CLS1:admin>svctask mkhost -name Palau -hbawwpn 210000E08B054CAA:210000E08B89C1CD
Host, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lshost Palau
id 0
name Palau
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B89C1CD
node_logged_in_count 4
state inactive
WWPN 210000E08B054CAA
node_logged_in_count 4
state inactive
IBM_2145:ITSO-CLS1:admin>

Verifying that we can see our storage subsystem


If we set up our zoning correctly, the SVC can see the storage subsystem with the svcinfo lscontroller command (Example 6-15).
Example 6-15 Discover storage controller

IBM_2145:ITSO-CLS1:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0 DS4500 IBM 1742-900
1 DS4700 IBM 1814 FAStT
IBM_2145:ITSO-CLS1:admin>

You can rename the storage subsystem to a more meaningful name (if we had multiple storage subsystems that were connected to our SAN fabric, renaming them makes it considerably easier to identify them) with the svctask chcontroller -name command.

Getting the disk serial numbers


To help avoid the risk of creating the wrong volumes from all of the available, unmanaged MDisks (in case the SVC sees many available, unmanaged MDisks), we get the LUN serial numbers from our storage subsystem administration tool (Storage Manager). When we discover these MDisks, we confirm that we have the correct serial numbers before we create the image mode volumes. If you also use a DS4000 family storage subsystem, Storage Manager provides the LUN serial numbers. Right-click your logical drive and choose Properties. Our serial numbers are shown in the following figures. Figure 6-77 on page 284 shows the disk serial number SAN_Boot_palau.


Figure 6-77 Obtaining the disk serial number - SAN_Boot_palau

Figure 6-78 shows the disk serial number Palau_data.

Figure 6-78 Obtaining the disk serial number - Palau_data


Before we move the LUNs to the SVC, we must configure the host multipath settings for the SVC. Edit your multipath.conf file and restart the multipath daemon, as shown in Example 6-16, adding the content of Example 6-17 to the file.
Example 6-16 Edit the multipath.conf file

[root@Palau ~]# vi /etc/multipath.conf
[root@Palau ~]# service multipathd stop
Stopping multipathd daemon:                                [  OK  ]
[root@Palau ~]# service multipathd start
Starting multipathd daemon:                                [  OK  ]
[root@Palau ~]#
Example 6-17 Data to add to the multipath.conf file

# SVC
device {
        vendor "IBM"
        product "2145CF8"
        path_grouping_policy group_by_serial
}

We are now ready to move the ownership of the disks to the SVC, discover them as MDisks, and give them back to the host as volumes.
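After the volumes are later mapped to the host through the SVC, you can confirm from the Linux side that the multipath devices and their paths are present. This is a generic check with the standard device-mapper-multipath tools, not an SVC command; the output (not shown here) should list the SVC (2145) devices and their paths:
[root@Palau ~]# multipath -ll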

6.6.3 Moving the LUNs to the SVC


In this step, we move the LUNs that are assigned to the Linux server and reassign them to the SVC. Our Linux server has two LUNs: one LUN is for our boot disk and operating system file systems, and the other LUN holds our application and data files. Moving both LUNs at one time requires shutting down the host. If we only wanted to move the LUN that holds our application and data files, we do not have to reboot the host. The only requirement is that we unmount the file system and vary off the volume group (VG) to ensure data integrity during the reassignment.

The following steps are required, because we intend to move both LUNs at the same time:
1. Confirm that the multipath.conf file is configured for the SVC.
2. Shut down the host.
If you are only moving the LUNs that contain the application and data, follow this procedure instead:
a. Stop the applications that are using the LUNs.
b. Unmount those file systems with the umount MOUNT_POINT command.
c. If the file systems are a logical volume manager (LVM) volume, deactivate that VG with the vgchange -a n VOLUMEGROUP_NAME command.
d. If possible, also unload your HBA driver using the rmmod DRIVER_MODULE command. This command removes the SCSI definitions from the kernel (we will reload this module and rediscover the disks later). It is possible to tell the Linux SCSI subsystem to rescan for new disks without requiring you to unload the HBA driver; however, we do not provide those details here.


3. Using Storage Manager (our storage subsystem management tool), we can unmap and unmask the disks from the Linux server and remap and remask the disks to the SVC.

LUN IDs: Even though we are using boot from SAN, you can map the boot disk with any LUN number to the SVC. It does not have to be 0 until later, when we configure the mapping in the SVC to the host.

4. From the SVC, discover the new disks with the svctask detectmdisk command. The disks will be discovered and named mdiskN, where N is the next available MDisk number (starting from 0). Example 6-18 shows the commands that we used to discover our MDisks and to verify that we have the correct MDisks.
Example 6-18 Discover the new MDisks

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID tier 26 mdisk26 online unmanaged 12.0GB 0000000000000008 DS4700 600a0b800026b2820000428f48739bca00000000000000000000000000000000 generic_hdd 27 mdisk27 online unmanaged 5.0GB 0000000000000009 DS4700 600a0b800026b282000042f84873c7e100000000000000000000000000000000 generic_hdd IBM_2145:ITSO-CLS1:admin> Important: Match your discovered MDisk serial numbers (UID on the svcinfo lsmdisk task display) with the serial number that you recorded earlier (in Figure 6-77 and Figure 6-78 on page 284). 5. After we have verified that we have the correct MDisks, we rename them to avoid confusion in the future when we perform other MDisk-related tasks (Example 6-19).
Example 6-19 Rename the MDisks

IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name md_palauS mdisk26 IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name md_palauD mdisk27 IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID tier 26 md_palauS online unmanaged 12.0GB 0000000000000008 DS4700 600a0b800026b2820000428f48739bca00000000000000000000000000000000 generic_hdd 27 md_palauD online unmanaged 5.0GB 0000000000000009 DS4700 600a0b800026b282000042f84873c7e100000000000000000000000000000000 generic_hdd IBM_2145:ITSO-CLS1:admin>


6. We create our image mode volumes with the svctask mkvdisk command and the -vtype image option (Example 6-20). This command virtualizes the disks in the exact same layout as though they were not virtualized.
Example 6-20 Create the image mode volumes

IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp Palau_Pool1 -iogrp 0 -vtype image -mdisk md_palauS -name palau_SANB Virtual Disk, id [29], successfully created IBM_2145:ITSO-CLS1:admin> IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp Palau_Pool2 -iogrp 0 -vtype image -mdisk md_palauD -name palau_Data Virtual Disk, id [30], successfully create IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID tier 26 md_palauS online image 2 Palau_Pool1 12.0GB 0000000000000008 DS4700 600a0b800026b2820000428f48739bca00000000000000000000000000000000 generic_hdd 27 md_palauD online image 3 Palau_Pool2 5.0GB 0000000000000009 DS4700 600a0b800026b282000042f84873c7e100000000000000000000000000000000 generic_hdd IBM_2145:ITSO-CLS1:admin> IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_wri te_state se_copy_count 29 palau_SANB 0 io_grp0 online 4 Palau_Pool1 12.0GB image 60050768018301BF280000000000002B 0 1 empty 0 30 palau_Data 0 io_grp0 online 4 Palau_Pool2 5.0GB image 60050768018301BF280000000000002C 0 1 empty 0 7. Map the new image mode volumes to the host (Example 6-21). Important: Make sure that you map the boot volume with SCSI ID 0 to your host. The host must be able to identify the boot volume during the boot process.
Example 6-21 Map the volumes to the host

IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Palau -scsi 0 palau_SANB Virtual Disk to Host map, id [0], successfully created IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Palau -scsi 1 palau_Data Virtual Disk to Host map, id [1], successfully created IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap Palau id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID 0 Palau 0 29 palau_SANB 210000E08B89C1CD 60050768018301BF280000000000002B

0 Palau 1 30 palau_Data 210000E08B89C1CD 60050768018301BF280000000000002C
IBM_2145:ITSO-CLS1:admin>

FlashCopy: While the application is in a quiescent state, you can choose to use FlashCopy to copy the new image volumes onto other volumes. You do not need to wait until the FlashCopy process has completed before starting your application.

8. Power on your host server and enter your Fibre Channel (FC) HBA adapter BIOS before booting the operating system, and make sure that you change the boot configuration so that it points to the SVC. In our example, we performed the following steps on a QLogic HBA:
a. Press Ctrl+Q to enter the HBA BIOS.
b. Open Configuration Settings.
c. Open Selectable Boot Settings.
d. Change the entry from your storage subsystem to the SVC 2145 LUN with SCSI ID 0.
e. Exit the menu and save your changes.
9. Boot up your Linux operating system. If you only moved the application LUN to the SVC and left your Linux server running, you only need to follow these steps to see the new volume:
a. Load your HBA driver with the modprobe DRIVER_NAME command. If you did not (and cannot) unload your HBA driver, you can issue commands to the kernel to rescan the SCSI bus to see the new volumes (these details are beyond the scope of this book).
b. Check your syslog, and verify that the kernel found the new volumes. On Red Hat Enterprise Linux, the syslog is written to the /var/log/messages file.
c. If your application and data are on an LVM volume, rediscover the VG, and then run the vgchange -a y VOLUME_GROUP command to activate the VG.
10.Mount your file systems with the mount /MOUNT_POINT command (Example 6-22). The df output shows us that all of the disks are available again.
Example 6-22 Mount data disk

[root@Palau data]# mount /dev/dm-2 /data
[root@Palau data]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      10093752   1938056   7634688  21% /
/dev/sda1               101086     12054     83813  13% /boot
tmpfs                  1033496         0   1033496   0% /dev/shm
/dev/dm-2              5160576    158160   4740272   4% /data
[root@Palau data]#

11.You are now ready to start your application.

6.6.4 Migrating the image mode volumes to managed MDisks


While the Linux server is still running, and while our file systems are in use, we migrate the image mode volumes onto striped volumes, with the extents being spread over three other MDisks. In our example, the three new LUNs are located on a DS4500 storage subsystem, so we also move to another storage subsystem in this example.


Preparing MDisks for striped mode volumes


From our second storage subsystem, we have performed these tasks:
- Created and allocated three new LUNs to the SVC
- Discovered them as MDisks
- Renamed these LUNs to more meaningful names
- Created a new storage pool
- Placed all of these MDisks into this storage pool
You can see the output of our commands in Example 6-23.
Example 6-23 Create a new storage pool

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MD_palauVD -ext 512 MDisk Group, id [8], successfully created IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID tier 26 md_palauS online image 2 Palau_Pool1 12.0GB 0000000000000008 DS4700 600a0b800026b2820000428f48739bca00000000000000000000000000000000 generic_hdd 27 md_palauD online image 3 Palau_Pool2 5.0GB 0000000000000009 DS4700 600a0b800026b282000042f84873c7e100000000000000000000000000000000 generic_hdd 28 mdisk28 online unmanaged 8.0GB 0000000000000010 DS4500 600a0b8000174233000000b9487778ab00000000000000000000000000000000 generic_hdd 29 mdisk29 online unmanaged 8.0GB 0000000000000011 DS4500 600a0b80001744310000010f48776bae00000000000000000000000000000000 generic_hdd 30 mdisk30 online unmanaged 8.0GB 0000000000000012 DS4500 600a0b8000174233000000bb487778d900000000000000000000000000000000 generic_hdd IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name palau-md1 mdisk28 IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name palau-md2 mdisk29 IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name palau-md3 mdisk30 IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk palau-md1 MD_palauVD IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk palau-md2 MD_palauVD IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk palau-md3 MD_palauVD IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID tier 26 md_palauS online image 2 Palau_Pool1 12.0GB 0000000000000008 DS4700 600a0b800026b2820000428f48739bca00000000000000000000000000000000 generic_hdd 27 md_palauD online image 3 Palau_Pool2 5.0GB 0000000000000009 DS4700 600a0b800026b282000042f84873c7e100000000000000000000000000000000 generic_hdd 28 palau-md1 online unmanaged 8 MD_palauVD 8.0GB 0000000000000010 DS4500 600a0b8000174233000000b9487778ab00000000000000000000000000000000 generic_hdd


29 palau-md2 online unmanaged 8 MD_palauVD 8.0GB 0000000000000011 DS4500 600a0b80001744310000010f48776bae00000000000000000000000000000000 generic_hdd 30 palau-md3 online unmanaged 8 MD_palauVD 8.0GB 0000000000000012 DS4500 600a0b8000174233000000bb487778d900000000000000000000000000000000 generic_hdd IBM_2145:ITSO-CLS1:admin>

Migrating the volumes


We are now ready to migrate the image mode volumes onto striped volumes in the MD_palauVD storage pool with the svctask migratevdisk command (Example 6-24). While the migration is running, our Linux server is still running. To check the overall progress of the migration, we use the svcinfo lsmigrate command as shown in Example 6-24. Listing the storage pool with the svcinfo lsmdiskgrp command shows that the free capacity on the old storage pools is slowly increasing as those extents are moved to the new storage pool.
Example 6-24 Migrating image mode volumes to striped volumes

IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -vdisk palau_SANB -mdiskgrp MD_palauVD IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -vdisk palau_Data -mdiskgrp MD_palauVD IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate migrate_type MDisk_Group_Migration progress 25 migrate_source_vdisk_index 29 migrate_target_mdisk_grp 8 max_thread_count 4 migrate_source_vdisk_copy_id 0 migrate_type MDisk_Group_Migration progress 70 migrate_source_vdisk_index 30 migrate_target_mdisk_grp 8 max_thread_count 4 migrate_source_vdisk_copy_id 0 IBM_2145:ITSO-CLS1:admin> After this task has completed, Example 6-25 shows that the volumes are now spread over three MDisks.
Example 6-25 Migration complete

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp MD_palauVD
id 8
name MD_palauVD
status online
mdisk_count 3
vdisk_count 2
capacity 24.0GB
extent_size 512
free_capacity 7.0GB
virtual_capacity 17.00GB
used_capacity 17.00GB
real_capacity 17.00GB
overallocation 70
warning 0
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskmember palau_SANB
id
28
29
30
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskmember palau_Data
id
28
29
30
IBM_2145:ITSO-CLS1:admin>

Our migration to striped volumes on another storage subsystem (DS4500) is now complete. The original MDisks (md_palauS and md_palauD) can now be removed from the SVC, and these LUNs can be removed from the storage subsystem. If these LUNs are the last LUNs that were used on our DS4700 storage subsystem, we can remove it from our SAN fabric.

6.6.5 Preparing to migrate from the SVC


Before we move the Linux server's LUNs from being accessed by the SVC as volumes to being directly accessed from the storage subsystem, we must convert the volumes into image mode volumes. You might want to perform this activity for any one of these reasons:
- You purchased a new storage subsystem, and you were using the SVC as a tool to migrate from your old storage subsystem to this new storage subsystem.
- You used the SVC to FlashCopy or Metro Mirror a volume onto another volume, and you no longer need that host connected to the SVC.
- You want to move a host and its data that is currently connected to the SVC to a site where there is no SVC.
- Changes to your environment no longer require this host to use the SVC.
There are also other preparation activities that we can perform before we have to shut down the host and reconfigure the LUN masking and mapping. This section covers those activities. If you are moving the data to a new storage subsystem, it is assumed that the storage subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches. Your environment must look similar to our environment, which is shown in Figure 6-79 on page 292.


Figure 6-79 Environment with SVC

Making fabric zone changes


The first step is to set up the SAN configuration so that all of the zones are created. You must add the new storage subsystem to the Red Zone so that the SVC can talk to it directly. We also need a Green Zone for our host to use when we are ready for it to directly access the disk after it has been removed from the SVC. It is assumed that you have created the necessary zones, and after your zone configuration is set up correctly, the SVC sees the new storage subsystem controller using the svcinfo lscontroller command as in Example 6-26.
Example 6-26 Check controller name

IBM_2145:ITSO-CLS1:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0 controller0 IBM 1814 FAStT
IBM_2145:ITSO-CLS1:admin>

It is also a good idea to rename the new storage subsystem's controller to a more useful name, which can be done with the svctask chcontroller -name command, as in Example 6-27 on page 293.


Example 6-27 Rename controller

IBM_2145:ITSO-CLS1:admin>svctask chcontroller -name ITSO-4700 0 IBM_2145:ITSO-CLS1:admin> Also verify that controller name was changed as you wanted, as shown in Example 6-28.
Example 6-28 Recheck controller name

IBM_2145:ITSO-CLS1:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0 ITSO-4700 IBM 1814 FAStT
IBM_2145:ITSO-CLS1:admin>

Creating new LUNs


On our storage subsystem, we created two LUNs and masked the LUNs so that the SVC can see them. Eventually, we will give these two LUNs directly to the host, removing the volumes that the host currently has. To check that the SVC can use these two LUNs, issue the svctask detectmdisk command, as shown in Example 6-29.
Example 6-29 Discover the new MDisks

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID 0 mdisk0 online managed 600a0b800026b282000042f84873c7e100000000000000000000000000000000 28 palau-md1 online managed 8 MD_palauVD 8.0GB 0000000000000010 DS4500 600a0b8000174233000000b9487778ab00000000000000000000000000000000 29 palau-md2 online managed 8 MD_palauVD 8.0GB 0000000000000011 DS4500 600a0b80001744310000010f48776bae00000000000000000000000000000000 30 palau-md3 online managed 8 MD_palauVD 8.0GB 0000000000000012 DS4500 600a0b8000174233000000bb487778d900000000000000000000000000000000 31 mdisk31 online unmanaged 6.0GB 0000000000000013 DS4500 600a0b8000174233000000bd4877890f00000000000000000000000000000000 32 mdisk32 online unmanaged 12.5GB 0000000000000014 DS4500 600a0b80001744310000011048777bda00000000000000000000000000000000 IBM_2145:ITSO-CLS1:admin> Even though the MDisks will not stay in the SVC for long, we suggest that you rename them to more meaningful names so that they do not get confused with other MDisks that are used by other activities. Also, we create the storage pools to hold our new MDisks, which is shown in Example 6-30 on page 294.


Example 6-30 Rename the MDisks

IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name mdpalau_ivd mdisk32 IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_Palauivd -ext 512 MDisk Group, id [9], successfully created IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_Palauivd -ext 512 CMMVC5758E Object name already exists. IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning easy_tier easy_tier_status 8 MD_palauVD online 3 2 24.0GB 512 7.0GB 17.00GB 17.00GB 17.00GB 70 0 auto inactive 9 MDG_Palauivd online 0 0 0 512 0 0.00MB 0.00MB 0.00MB 0 0 auto inactive IBM_2145:ITSO-CLS1:admin>

Our SVC environment is now ready for the volume migration to image mode volumes.

6.6.6 Migrating the volumes to image mode volumes


While our Linux server is still running, we migrate the managed volumes onto the new MDisks using image mode volumes. The command to perform this action is the svctask migratetoimage command, which is shown in Example 6-31.
Example 6-31 Migrate the volumes to image mode volumes

IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk palau_SANB -mdisk mdpalau_ivd -mdiskgrp MD_palauVD IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk palau_Data -mdisk mdpalau_ivd1 -mdiskgrp MD_palauVD IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID 28 palau-md1 online managed 8 MD_palauVD 8.0GB 0000000000000010 DS4500 600a0b8000174233000000b9487778ab00000000000000000000000000000000 29 palau-md2 online managed 8 MD_palauVD 8.0GB 0000000000000011 DS4500 600a0b80001744310000010f48776bae00000000000000000000000000000000 30 palau-md3 online managed 8 MD_palauVD 8.0GB 0000000000000012 DS4500 600a0b8000174233000000bb487778d900000000000000000000000000000000 31 mdpalau_ivd1 online image 8 MD_palauVD 6.0GB 0000000000000013 DS4500 600a0b8000174233000000bd4877890f00000000000000000000000000000000 32 mdpalau_ivd online image 8 MD_palauVD 12.5GB 0000000000000014 DS4500 600a0b80001744310000011048777bda00000000000000000000000000000000 IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate migrate_type Migrate_to_Image progress 4


migrate_source_vdisk_index 29
migrate_target_mdisk_index 32
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type Migrate_to_Image
progress 30
migrate_source_vdisk_index 30
migrate_target_mdisk_index 31
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>

During the migration, our Linux server is unaware that its data is being physically moved between storage subsystems. After the migration has completed, the image mode volumes are ready to be removed from the Linux server, and the real LUNs can be mapped and masked directly to the host by using the storage subsystem's tool.
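You can confirm that the migrations have finished by rerunning the svcinfo lsmigrate command. When it no longer returns any entries, as in the following minimal sketch, all migrations are complete and it is safe to continue:

IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
IBM_2145:ITSO-CLS1:admin>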

6.6.7 Removing the LUNs from the SVC


The next step requires downtime on the Linux server, because we will remap and remask the disks so that the host sees them directly through the Green Zone, as shown in Figure 6-79 on page 292.

Our Linux server has two LUNs: one LUN holds our boot disk and operating system file systems, and the other LUN holds our application and data files. Moving both LUNs at one time requires shutting down the host. If we only want to move the LUN that holds our application and data files, we can move that LUN without rebooting the host. The only requirement is that we unmount the file system and deactivate the volume group (VG) to ensure data integrity during the reassignment.

Before you start: Moving LUNs to another storage subsystem might need an additional entry in the multipath.conf file. Check with the storage subsystem vendor to see which content you must add to the file. You might be able to install and modify the file ahead of time.

When you intend to move both LUNs at the same time, use these required steps:
1. Confirm that your operating system is configured for the new storage.
2. Shut down the host.

If you are only moving the LUN that contains the application and data, you can follow this procedure instead:
a. Stop the applications that are using the LUNs.
b. Unmount those file systems with the umount MOUNT_POINT command.
c. If the file systems are an LVM volume, deactivate that VG with the vgchange -a n VOLUMEGROUP_NAME command.
d. If you can, unload your HBA driver using the rmmod DRIVER_MODULE command. This command removes the SCSI definitions from the kernel (we will reload this module and rediscover the disks later).


It is possible to tell the Linux SCSI subsystem to rescan for new disks without requiring you to unload the HBA driver; a brief sketch of this approach is shown at the end of this procedure.
3. Remove the volumes from the host by using the svctask rmvdiskhostmap command (Example 6-32). To double-check that you have removed the volumes, use the svcinfo lshostvdiskmap command, which shows that these disks are no longer mapped to the Linux server.
Example 6-32 Remove the volumes from the host

IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Palau palau_SANB IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Palau palau_Data IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap Palau IBM_2145:ITSO-CLS1:admin>

4. Remove the volumes from the SVC by using the svctask rmvdisk command. This step makes the underlying MDisks unmanaged, as seen in Example 6-33.

Cached data: When you run the svctask rmvdisk command, the SVC first double-checks that there is no outstanding dirty cached data for the volume that is being removed. If there is still uncommitted cached data, the command fails with the following error message:
CMMVC6212E The command failed because data in the cache has not been committed to disk
You have to wait for this cached data to be committed to the underlying storage subsystem before you can remove the volume. The SVC automatically destages uncommitted cached data two minutes after the last write activity for the volume. How much data there is to destage, and how busy the I/O subsystem is, determine how long the command takes to complete.
You can check whether the volume has uncommitted data in the cache by using the svcinfo lsvdisk <VDISKNAME> command and checking the fast_write_state attribute. This attribute has the following meanings:
empty       No modified data exists in the cache.
not_empty   Modified data might exist in the cache.
corrupt     Modified data might have existed in the cache, but any data has been lost.

Example 6-33 Remove the volumes from the SVC

IBM_2145:ITSO-CLS1:admin>svctask rmvdisk palau_SANB IBM_2145:ITSO-CLS1:admin>svctask rmvdisk palau_Data IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID 31 mdpalau_ivd1 online unmanaged 6.0GB 0000000000000013 DS4500 600a0b8000174233000000bd4877890f00000000000000000000000000000000 32 mdpalau_ivd online unmanaged 12.5GB 0000000000000014 DS4500 600a0b80001744310000011048777bda00000000000000000000000000000000 IBM_2145:ITSO-CLS1:admin>
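If the svctask rmvdisk command fails with the CMMVC6212E message, you can watch the cache state of the volume before retrying. The following minimal sketch uses one of our volume names; substitute your own volume and look for the fast_write_state field in the output:

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk palau_Data

When the fast_write_state field reports empty, the cached data has been destaged and you can reissue the svctask rmvdisk command.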


5. Using Storage Manager (our storage subsystem management tool), unmap and unmask the disks from the SVC back to the Linux server.
Important: If one of the disks is used to boot your Linux server, you must make sure that it is presented back to the host as SCSI ID 0 so that the FC adapter BIOS finds that disk during its initialization.
6. Power on your host server and enter your FC HBA BIOS before booting the OS. Make sure that you change the boot configuration so that it no longer points to the SVC but to your storage subsystem LUN. In our example, we performed the following steps on a QLogic HBA:
a. Pressed Ctrl+Q to enter the HBA BIOS.
b. Opened Configuration Settings.
c. Opened Selectable Boot Settings.
d. Changed the entry from the SVC to the storage subsystem LUN with SCSI ID 0.
e. Exited the menu and saved the changes.
Important: This is the last step that you can perform and still safely back out of everything that you have done so far. Up to this point, you can reverse all of the actions that you have performed to get the server back online without data loss:
Remap and remask the LUNs back to the SVC.
Run the svctask detectmdisk command to rediscover the MDisks.
Recreate the volumes with the svctask mkvdisk command.
Remap the volumes back to the server with the svctask mkvdiskhostmap command.
After you start the next step, you might not be able to turn back without the risk of data loss.
7. We now restart the Linux server. If all of the zoning and LUN masking and mapping were done successfully, the Linux server boots as though nothing has happened. However, if you only moved the application LUN and left your Linux server running, you must follow these steps to see the new volume:
a. Load your HBA driver with the modprobe DRIVER_NAME command. If you did not (and cannot) unload your HBA driver, you can issue commands to the kernel to rescan the SCSI bus to see the new volumes (a brief sketch follows this procedure).
b. Check your syslog and verify that the kernel found the new volumes. On Red Hat Enterprise Linux, the syslog is the /var/log/messages file.
c. If your application and data are on an LVM volume, run the vgscan command to rediscover the VG, and then run the vgchange -a y VOLUME_GROUP command to activate the VG.
8. Mount your file systems with the mount /MOUNT_POINT command (Example 6-34 on page 298). The df output shows that all of the disks are available again.


Example 6-34 File system after migration

[root@Palau ~]# mount /dev/dm-2 /data
[root@Palau ~]# df
Filesystem                      1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00  10093752 1938124   7634620  21% /
/dev/sda1                          101086   12054     83813  13% /boot
tmpfs                             1033496       0   1033496   0% /dev/shm
/dev/dm-2                         5160576  158160   4740272   4% /data
[root@Palau ~]#

9. You are ready to start your application.
10. Finally, to make sure that the MDisks are removed from the SVC, run the svctask detectmdisk command. The MDisks will first be discovered as offline, and then they will automatically be removed when the SVC determines that there are no volumes associated with these MDisks.
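As noted in the procedure, you can ask the kernel to rescan the SCSI bus instead of unloading and reloading the HBA driver. The following minimal sketch shows one common approach on Red Hat Enterprise Linux; the SCSI host numbers (host0 and host1) are examples only and depend on how your HBAs were enumerated:

[root@Palau ~]# echo "- - -" > /sys/class/scsi_host/host0/scan
[root@Palau ~]# echo "- - -" > /sys/class/scsi_host/host1/scan
[root@Palau ~]# tail /var/log/messages

After the rescan, check the syslog to verify that the kernel found the new disks, and then continue with the LVM and mount steps that are described in this procedure.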

6.7 Migrating ESX SAN disks to SVC disks


In this section, we move the two LUNs from our VMware ESX server to the SVC. The ESX operating system is installed locally on the host, but the two SAN disks are connected, and the virtual machines are stored there. We then manage those LUNs with the SVC, move them between other managed disks, and finally move them back to image mode disks so that those LUNs can then be masked and mapped back to the VMware ESX server directly. This example can help you perform any one of the following activities in your environment: Move your ESX servers data LUNs (that are your VMware vmfs file systems where you might have your virtual machines stored), which are directly accessed from a storage subsystem, to virtualized disks under the control of the SVC. Move LUNs between storage subsystems while your VMware virtual machines are still running. You can perform this activity to move the data onto LUNs that are more appropriate for the type of data that is stored on those LUNs, taking into account availability, performance, and redundancy. We describe this step in 6.7.4, Migrating the image mode volumes on page 307. Move your VMware ESX servers LUNs back to image mode volumes so that they can be remapped and remasked directly back to the server. This step starts in 6.7.5, Preparing to migrate from the SVC on page 310. You can use these activities individually, or together, to migrate your VMware ESX servers LUNs from one storage subsystem to another storage subsystem, using the SVC as your migration tool. If you do not use all three activities, you can introduce the SVC in your environment, or move the data between your storage subsystems. The only downtime that is required for these activities is the time that it takes you to remask and remap the LUNs between the storage subsystems and your SVC. In Figure 6-80 on page 299, we show our starting SAN environment.


Figure 6-80 ESX environment before migration

Figure 6-80 shows our ESX server connected to the SAN infrastructure. It has two LUNs that are masked directly to it from our storage subsystem. Our ESX server represents a typical SAN environment with a host directly using LUNs that were created on a SAN storage subsystem, as shown in Figure 6-80: The ESX Servers HBA cards are zoned so that they are in the Green Zone with our storage subsystem. The two LUNs that have been defined on the storage subsystem and that use LUN masking are directly available to our ESX server.

6.7.1 Connecting the SVC to your SAN fabric


This section describes the steps needed to introduce the SVC into your SAN environment. Although we only summarize these activities here, you can introduce the SVC into your SAN environment without any downtime to any host or application that also uses your storage area network. If you have an SVC already connected, skip to the instructions that are given in 6.7.2, Preparing your SVC to virtualize disks on page 301.


Attention: Be extremely careful when connecting the SVC to your storage area network, because this requires you to connect cables to your SAN switches and to alter your switch zone configuration. Performing these activities incorrectly can render your SAN inoperable, so make sure that you fully understand the effect of your actions.

You must perform these tasks to connect the SVC to your SAN fabric:
Assemble your SVC components (nodes, uninterruptible power supply unit, and SSPC), cable the SVC correctly, power the SVC on, and verify that the SVC is visible on your SAN.
Create and configure your SVC cluster.
Create these additional zones:
An SVC node zone (the Black Zone in our picture on Example 6-57 on page 322).
A storage zone (our Red Zone).
A host zone (our Blue Zone).

For more detailed information about how to configure the zones in the correct way, see Chapter 3, Planning and configuration on page 57. Figure 6-81 shows the environment that we set up.

Figure 6-81 SAN environment with SVC attached


6.7.2 Preparing your SVC to virtualize disks


This section describes the preparatory tasks that we perform before taking our ESX server or virtual machines offline. These tasks are all nondisruptive activities, which do not affect your SAN fabric or your existing SVC configuration (if you already have a production SVC in place).

Creating a storage pool


When we move the two ESX LUNs to the SVC, they are first used in image mode, and therefore, we need a storage pool to hold those disks. We create an empty storage pool for these disks by using the command shown in Example 6-35. Our MDG_Nile_VM storage pool holds the boot LUN and our data LUN.
Example 6-35 Creating an empty storage pool

IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_Nile_VM -ext 512 MDisk Group, id [3], successfully created

Creating the host definition


If you prepared the zones correctly, the SVC can see the ESX servers HBA adapters on the fabric (our host only had one HBA). First, we get the WWN for our ESX servers HBA because we have many hosts connected to our SAN fabric and in the Blue Zone. We want to make sure that we have the correct WWN to reduce our ESX servers downtime. Log in to your VMware management console as root, then navigate to Configuration and select Storage Adapter. The Storage Adapters are shown on the right side of this window and display all of the necessary information. Figure 6-82 shows our WWNs, which are 210000E08B89B8C0 and 210000E08B892BCD.

Figure 6-82 Obtain your WWN using the VMware Management Console

Use the svcinfo lshbaportcandidate command on the SVC to list all of the WWNs, which have not yet been allocated to a host, that the SVC can see on the SAN fabric. Example 6-36 on page 302 shows the output of the nodes that it found on our SAN fabric. (If the port did not show up, it indicates a zone configuration problem.)


Example 6-36 Add the host to the SVC

IBM_2145:ITSO-CLS1:admin>svcinfo lshbaportcandidate id 210000E08B89B8C0 210000E08B892BCD 210000E08B0548BC 210000E08B0541BC 210000E08B89CCC2 IBM_2145:ITSO-CLS1:admin> After verifying that the SVC can see our host, we create the host entry and assign the WWN to this entry. Example 6-37 shows these commands.
Example 6-37 Create the host entry

IBM_2145:ITSO-CLS1:admin>svctask mkhost -name Nile -hbawwpn 210000E08B89B8C0:210000E08B892BCD Host, id [1], successfully created IBM_2145:ITSO-CLS1:admin>svcinfo lshost Nile id 1 name Nile port_count 2 type generic mask 1111 iogrp_count 4 WWPN 210000E08B892BCD node_logged_in_count 4 state active WWPN 210000E08B89B8C0 node_logged_in_count 4 state active IBM_2145:ITSO-CLS1:admin>

Verifying that you can see your storage subsystem


If our zoning has been performed correctly, the SVC can also see the storage subsystem with the svcinfo lscontroller command (Example 6-38).
Example 6-38 Available storage controllers

IBM_2145:ITSO-CLS1:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0  DS4500                    IBM       1742-900
1  DS4700                    IBM       1814           FAStT

Getting your disk serial numbers


To help avoid the risk of creating the wrong volumes from all of the available unmanaged MDisks (in case the SVC sees many available unmanaged MDisks), we get the LUN serial numbers from our storage subsystem administration tool (Storage Manager). When we discover these MDisks, we confirm that we have the correct serial numbers before we create the image mode volumes.

If you also use a DS4000 family storage subsystem, Storage Manager provides the LUN serial numbers. Right-click your logical drive, and choose Properties. The following figures show our serial numbers. Figure 6-83 shows disk serial number VM_W2k3.

Figure 6-83 Obtaining the disk serial number - VM_W2k3

Figure 6-84 shows disk serial number VM_SLES

Figure 6-84 Obtaining the disk serial number - VM_SLES


We are ready to move the ownership of the disks to the SVC, discover them as MDisks, and give them back to the host as volumes.

6.7.3 Moving the LUNs to the SVC


In this step, we move the LUNs that are assigned to the ESX server and reassign them to the SVC. Our ESX server has two LUNs, as shown in Figure 6-85.

Figure 6-85 VMWare LUNs

The virtual machines are located on these LUNs. Therefore, to move these LUNs under the control of the SVC, we do not need to reboot the entire ESX server, but we do have to stop and suspend all VMware guests that are using these LUNs.

Moving VMware guest LUNs


To move the VMware LUNs to the SVC, perform the following steps: 1. Using Storage Manager, we have identified the LUN number that has been presented to the ESX Server. Record which LUN had which LUN number; see Figure 6-86.

Figure 6-86 Identify LUN numbers in IBM DS4000 Storage Manager

2. Identify all of the VMware guests that are using this LUN and shut them down. One way to identify them is to highlight the virtual machine and open the Summary Tab. The datapool that is used is displayed under Datastore. Figure 6-87 on page 305 shows a Linux virtual machine using the datastore named SLES_Costa_Rica.


Figure 6-87 Identify the LUNs that are used by virtual machines

3. If you have several ESX hosts, also check the other ESX hosts to make sure that there is no guest operating system that is running and using this datastore.
4. Repeat steps 1 to 3 for every datastore that you want to migrate.
5. After the guests are suspended, we use Storage Manager (our storage subsystem management tool) to unmap and unmask the disks from the ESX server and to remap and remask the disks to the SVC.
6. From the SVC, discover the new disks with the svctask detectmdisk command. The disks will be discovered and named as mdiskN, where N is the next available MDisk number (starting from 0). Example 6-39 shows the commands that we used to discover our MDisks and to verify that we have the correct MDisks.
Example 6-39 Discover the new MDisks

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID 21 mdisk21 online unmanaged 60.0GB 0000000000000008 DS4700 600a0b800026b282000041ca486d14a500000000000000000000000000000000 22 mdisk22 online unmanaged 70.0GB 0000000000000009 DS4700 600a0b80002904de0000447a486d14cd00000000000000000000000000000000 IBM_2145:ITSO-CLS1:admin>


Important: Match your discovered MDisk serial numbers (UID on the svcinfo lsmdisk command task display) with the serial number that you obtained earlier (in Figure 6-83 and Figure 6-84 on page 303).
7. After we have verified that we have the correct MDisks, we rename them to avoid confusion in the future when we perform other MDisk-related tasks; see Example 6-40.
Example 6-40 Rename the MDisks

IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name ESX_W2k3 mdisk22 IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name ESX_SLES mdisk21 IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk 21 ESX_SLES online unmanaged 60.0GB 0000000000000008 DS4700 600a0b800026b282000041ca486d14a500000000000000000000000000000000 22 ESX_W2k3 online unmanaged 70.0GB 0000000000000009 DS4700 600a0b80002904de0000447a486d14cd00000000000000000000000000000000 IBM_2145:ITSO-CLS1:admin> 8. We create our image mode volumes with the svctask mkvdisk command; see Example 6-41. Using the parameter -vtype image ensures that it will create image mode volumes, which means that the virtualized disks will have the exact same layout as though they were not virtualized.
Example 6-41 Create the image mode volumes

IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MDG_Nile_VM -iogrp 0 -vtype image -mdisk ESX_W2k3 -name ESX_W2k3_IVD Virtual Disk, id [29], successfully created IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MDG_Nile_VM -iogrp 0 -vtype image -mdisk ESX_SLES -name ESX_SLES_IVD Virtual Disk, id [30], successfully created IBM_2145:ITSO-CLS1:admin> 9. Finally, we can map the new image mode volumes to the host. Use the same SCSI LUN IDs as on the storage subsystem for the mapping; see Example 6-42.
Example 6-42 Map the volumes to the host

IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Nile -scsi 0 ESX_SLES_IVD Virtual Disk to Host map, id [0], successfully created IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Nile -scsi 1 ESX_W2k3_IVD Virtual Disk to Host map, id [1], successfully created IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID 1 Nile 0 30 ESX_SLES_IVD 210000E08B892BCD 60050768018301BF280000000000002A 1 Nile 1 29 ESX_W2k3_IVD 210000E08B892BCD 60050768018301BF2800000000000029


10. Using the VMware management console, rescan to discover the new volume. Open the configuration tab, select Storage Adapters, and click Rescan. During the rescan, you can receive geometry errors when ESX discovers that the old disk has disappeared. Your volume will appear with the new vmhba devices.
11. We are ready to restart the VMware guests again. At this point, you have migrated the VMware LUNs successfully to the SVC.

6.7.4 Migrating the image mode volumes


While the VMware server and its virtual machines are still running, we migrate the image mode volumes onto striped volumes, with the extents being spread over three other MDisks.

Preparing MDisks for striped mode volumes


In this example, we migrate the image mode volumes to volumes and move the data to another storage subsystem in one step.

Adding a new storage subsystem to SVC


If you are moving the data to a new storage subsystem, it is assumed that this storage subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches. Your environment must look similar to our environment, which is shown in Figure 6-88.

Figure 6-88 ESX SVC SAN environment

Make fabric zone changes


The first step is to set up the SAN configuration so that all of the zones are created. Add the new storage subsystem to the Red Zone so that the SVC can talk to it directly.


We also need a Green Zone for our host to use when we are ready for it to directly access the disk, after it has been removed from the SVC. We assume that you have created the necessary zones. In our environment, we have performed these tasks: Created three LUNs on another storage subsystem and mapped it to the SVC Discovered them as MDisks Created a new storage pool Renamed these LUNs to more meaningful names Put all these MDisks into this storage pool You can see the output of our commands in Example 6-43. Example 6-43 Create a new storage pool IBM_2145:ITSO-CLS1:admin>svctask detectmdisk IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID 21 ESX_SLES online image 3 MDG_Nile_VM 60.0GB 0000000000000008 DS4700 600a0b800026b282000041ca486d14a500000000000000000000000000000000 22 ESX_W2k3 online image 3 MDG_Nile_VM 70.0GB 0000000000000009 DS4700 600a0b80002904de0000447a486d14cd00000000000000000000000000000000 23 mdisk23 online unmanaged 55.0GB 000000000000000D DS4500 600a0b8000174233000000b4486d250300000000000000000000000000000000 24 mdisk24 online unmanaged 55.0GB 000000000000000E DS4500 600a0b800017443100000108486d182c00000000000000000000000000000000 25 mdisk25 online unmanaged 55.0GB 000000000000000F DS4500 600a0b8000174233000000b5486d255b00000000000000000000000000000000 IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_ESX_VD -ext 512 IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name IBMESX-MD1 mdisk23 IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name IBMESX-MD2 mdisk24 IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name IBMESX-MD3 mdisk25 IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk IBMESX-MD1 MDG_ESX_VD IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk IBMESX-MD2 MDG_ESX_VD IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk IBMESX-MD3 MDG_ESX_VD IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID 21 ESX_SLES online image 3 MDG_Nile_VM 60.0GB 0000000000000008 DS4700 600a0b800026b282000041ca486d14a500000000000000000000000000000000 22 ESX_W2k3 online image 3 MDG_Nile_VM 70.0GB 0000000000000009 DS4700 600a0b80002904de0000447a486d14cd00000000000000000000000000000000 23 IBMESX-MD1 online managed 4 MDG_ESX_VD 55.0GB 000000000000000D DS4500 600a0b8000174233000000b4486d250300000000000000000000000000000000 308

24 IBMESX-MD2 online managed MDG_ESX_VD 55.0GB 000000000000000E DS4500 600a0b800017443100000108486d182c00000000000000000000000000000000 25 IBMESX-MD3 online managed MDG_ESX_VD 55.0GB 000000000000000F DS4500 600a0b8000174233000000b5486d255b00000000000000000000000000000000 IBM_2145:ITSO-CLS1:admin>

Migrating the volumes


At this point we are ready to migrate the image mode volumes onto striped volumes in the new storage pool (MDG_ESX_VD) with the svctask migratevdisk command (Example 6-44). While the migration is running, our VMware ESX server and our VMware guests will remain running. To check the overall progress of the migration, we use the svcinfo lsmigrate command as shown in Example 6-44. Listing the storage pool with the svcinfo lsmdiskgrp command shows that the free capacity on the old storage pool is slowly increasing as those extents are moved to the new storage pool.
Example 6-44 Migrating image mode volumes to striped volumes

IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -vdisk ESX_SLES_IVD -mdiskgrp MDG_ESX_VD IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -vdisk ESX_W2k3_IVD -mdiskgrp MDG_ESX_VD IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate migrate_type MDisk_Group_Migration progress 0 migrate_source_vdisk_index 30 migrate_target_mdisk_grp 4 max_thread_count 4 migrate_source_vdisk_copy_id 0 migrate_type MDisk_Group_Migration progress 0 migrate_source_vdisk_index 29 migrate_target_mdisk_grp 4 max_thread_count 4 migrate_source_vdisk_copy_id 0 IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate migrate_type MDisk_Group_Migration progress 1 migrate_source_vdisk_index 30 migrate_target_mdisk_grp 4 max_thread_count 4 migrate_source_vdisk_copy_id 0 migrate_type MDisk_Group_Migration progress 0 migrate_source_vdisk_index 29 migrate_target_mdisk_grp 4 max_thread_count 4 migrate_source_vdisk_copy_id 0 IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning


3 MDG_Nile_VM online 2 2 130.0GB 512 1.0GB 130.00GB 130.00GB 130.00GB 100 0
4 MDG_ESX_VD online 3 0 165.0GB 512 35.0GB 0.00MB 0.00MB 0.00MB 0 0
IBM_2145:ITSO-CLS1:admin>

If you compare the svcinfo lsmdiskgrp output after the migration, as shown in Example 6-45, you can see that all of the virtual capacity has now been moved from the old storage pool (MDG_Nile_VM) to the new storage pool (MDG_ESX_VD). The mdisk_count column shows that the capacity is now spread over three MDisks.
Example 6-45 List MDisk group

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning
3 MDG_Nile_VM online 2 0 130.0GB 512 130.0GB 0.00MB 0.00MB 0.00MB 0 0
4 MDG_ESX_VD online 3 2 165.0GB 512 35.0GB 130.00GB 130.00GB 130.00GB 78 0
IBM_2145:ITSO-CLS1:admin>

The migration to the SVC is complete. You can remove the original MDisks from the SVC and remove these LUNs from the storage subsystem. If these LUNs are the last LUNs that were used on our storage subsystem, we can remove it from our SAN fabric.
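A minimal sketch of that cleanup follows; it assumes that the migration has completed and that no extents remain on the original MDisks (ESX_SLES and ESX_W2k3) in the MDG_Nile_VM storage pool:

IBM_2145:ITSO-CLS1:admin>svctask rmmdisk -mdisk ESX_SLES MDG_Nile_VM
IBM_2145:ITSO-CLS1:admin>svctask rmmdisk -mdisk ESX_W2k3 MDG_Nile_VM

After the MDisks are unmanaged, unmap the LUNs from the SVC with Storage Manager and run the svctask detectmdisk command so that the SVC removes the MDisk entries.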

6.7.5 Preparing to migrate from the SVC


Before we move the ESX servers LUNs from being accessible by the SVC as volumes to becoming directly accessed from the storage subsystem, we need to convert the volumes into image mode volumes. You might want to perform this activity for any one of these reasons: You purchased a new storage subsystem, and you were using SVC as a tool to migrate from your old storage subsystem to this new storage subsystem. You used SVC to FlashCopy or Metro Mirror a volume onto another volume, and you no longer need that host connected to the SVC. You want move a host and its data that currently is connected to the SVC to a site where there is no SVC. Changes to your environment no longer require this host to use the SVC.


There are also other preparatory activities that we can perform before we shut down the host and reconfigure the LUN masking and mapping. This section describes those activities. In our example, we will move volumes that are located on a DS4500 to image mode volumes that are located on a DS4700. If you are moving the data to a new storage subsystem, it is assumed that this storage subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches. Your environment must look similar to our environment, as described in Adding a new storage subsystem to SVC on page 307 and Make fabric zone changes on page 307.

Creating new LUNs


On our storage subsystem, we create two LUNs and mask the LUNs so that the SVC can see them. These two LUNs will eventually be given directly to the host, removing the volumes that it currently has. To check that the SVC can use them, issue the svctask detectmdisk command as shown in Example 6-46.
Example 6-46 Discover the new MDisks

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID 23 IBMESX-MD1 online managed MDG_ESX_VD 55.0GB 000000000000000D DS4500 600a0b8000174233000000b4486d250300000000000000000000000000000000 24 IBMESX-MD2 online managed MDG_ESX_VD 55.0GB 000000000000000E DS4500 600a0b800017443100000108486d182c00000000000000000000000000000000 25 IBMESX-MD3 online managed MDG_ESX_VD 55.0GB 000000000000000F DS4500 600a0b8000174233000000b5486d255b00000000000000000000000000000000 26 mdisk26 online unmanaged 120.0GB 000000000000000A DS4700 600a0b800026b282000041f0486e210100000000000000000000000000000000 27 mdisk27 online unmanaged 100.0GB 000000000000000B DS4700 600a0b800026b282000041e3486e20cf00000000000000000000000000000000 4

Even though the MDisks will not stay in the SVC for long, we suggest that you rename them to more meaningful names so that they are not confused with other MDisks being used by other activities. We also create the storage pool that will hold our new MDisks. Example 6-47 shows these tasks.
Example 6-47 Rename the MDisks

IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name ESX_IVD_SLES mdisk26 IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name ESX_IVD_W2K3 mdisk27 IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_IVD_ESX -ext 512 MDisk Group, id [5], successfully created IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning


4 MDG_ESX_VD online 3 165.0GB 512 35.0GB 130.00GB 130.00GB 78 0 5 MDG_IVD_ESX online 0 512 0 0.00MB 0.00MB 0 IBM_2145:ITSO-CLS1:admin>

2 130.00GB 0 0.00MB 0 0

Our SVC environment is ready for the volume migration to image mode volumes.

6.7.6 Migrating the managed volumes to image mode volumes


While our ESX server is still running, we migrate the managed volumes onto the new MDisks using image mode volumes. The command to perform this action is the svctask migratetoimage command, which is shown in Example 6-48.
Example 6-48 Migrate the volumes to image mode volumes

IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk ESX_SLES_IVD -mdisk ESX_IVD_SLES -mdiskgrp MDG_IVD_ESX IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk ESX_W2k3_IVD -mdisk ESX_IVD_W2K3 -mdiskgrp MDG_IVD_ESX IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID 23 IBMESX-MD1 online managed 4 MDG_ESX_VD 55.0GB 000000000000000D DS4500 600a0b8000174233000000b4486d250300000000000000000000000000000000 24 IBMESX-MD2 online managed 4 MDG_ESX_VD 55.0GB 000000000000000E DS4500 600a0b800017443100000108486d182c00000000000000000000000000000000 25 IBMESX-MD3 online managed 4 MDG_ESX_VD 55.0GB 000000000000000F DS4500 600a0b8000174233000000b5486d255b00000000000000000000000000000000 26 ESX_IVD_SLES online image 5 MDG_IVD_ESX 120.0GB 000000000000000A DS4700 600a0b800026b282000041f0486e210100000000000000000000000000000000 27 ESX_IVD_W2K3 online image 5 MDG_IVD_ESX 100.0GB 000000000000000B DS4700 600a0b800026b282000041e3486e20cf00000000000000000000000000000000 IBM_2145:ITSO-CLS1:admin>

During the migration, our ESX server is unaware that its data is being physically moved between storage subsystems. We can continue to run and continue to use the virtual machines that are running on the server. You can check the migration status with the svcinfo lsmigrate command, as shown in Example 6-49 on page 313.


Example 6-49 The svcinfo lsmigrate command and output

IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate migrate_type Migrate_to_Image progress 2 migrate_source_vdisk_index 29 migrate_target_mdisk_index 27 migrate_target_mdisk_grp 5 max_thread_count 4 migrate_source_vdisk_copy_id 0 migrate_type Migrate_to_Image progress 12 migrate_source_vdisk_index 30 migrate_target_mdisk_index 26 migrate_target_mdisk_grp 5 max_thread_count 4 migrate_source_vdisk_copy_id 0 IBM_2145:ITSO-CLS1:admin> After the migration has completed, the image mode volumes are ready to be removed from the ESX server, and the real LUNs can be mapped and masked directly to the host using the storage subsystems tool.

6.7.7 Removing the LUNs from the SVC


Your ESX servers configuration determines in what order your LUNs are removed from the control of the SVC, and whether you need to reboot the ESX server and suspend the VMware guests. In our example we have moved the virtual machine disks. Therefore, to remove these LUNs from the control of the SVC, we must stop and suspend all of the VMware guests that are using this LUN. The following steps must be performed: 1. Check which SCSI LUN IDs are assigned to the migrated disks by using the svcinfo lshostvdiskmap command, as shown in Example 6-50. Compare the volume UID and sort out the information.
Example 6-50 Note the SCSI LUN IDs

IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
1 Nile 0 30 ESX_SLES_IVD 210000E08B892BCD 60050768018301BF280000000000002A
1 Nile 1 29 ESX_W2k3_IVD 210000E08B892BCD 60050768018301BF2800000000000029
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count
0 vdisk_A 0 io_grp0 online 2 MDG_Image 36.0GB image
29 ESX_W2k3_IVD 0 io_grp0 online 4 MDG_ESX_VD 70.0GB striped 60050768018301BF2800000000000029 0 1
30 ESX_SLES_IVD 0 io_grp0 online 4 MDG_ESX_VD 60.0GB striped 60050768018301BF280000000000002A 0 1
IBM_2145:ITSO-CLS1:admin>

2. Shut down and suspend all guests using the LUNs. You can use the same method that is used in Moving VMware guest LUNs on page 304 to identify the guests that are using this LUN. 3. Remove the volumes from the host by using the svctask rmvdiskhostmap command (Example 6-51). To double-check that the volumes have been removed use the svcinfo lshostvdiskmap command, which shows that these volumes are no longer mapped to the ESX server.
Example 6-51 Remove the volumes from the host

IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Nile ESX_W2k3_IVD IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Nile ESX_SLES_IVD 4. Remove the volumes from the SVC by using the svctask rmvdisk command, which makes the MDisks unmanaged, as shown in Example 6-52. Cached data: When you run the svctask rmvdisk command, the SVC first double-checks that there is no outstanding dirty cached data for the volume that is being removed. If there is still uncommitted cached data, the command fails with this error message: CMMVC6212E The command failed because data in the cache has not been committed to disk You have to wait for this cached data to be committed to the underlying storage subsystem before you can remove the volume. The SVC will automatically destage uncommitted cached data two minutes after the last write activity for the volume. How much data there is to destage, and how busy the I/O subsystem is, determine how long this command takes to complete. You can check if the volume has uncommitted data in the cache by using the svcinfo lsvdisk <VDISKNAME> command and checking the fast_write_state attribute. This attribute has the following meanings: empty not_empty corrupt No modified data exists in the cache. Modified data might exist in the cache. Modified data might have existed in the cache, but the data has been lost.

Example 6-52 Remove the volumes from the SVC

IBM_2145:ITSO-CLS1:admin>svctask rmvdisk ESX_W2k3_IVD
IBM_2145:ITSO-CLS1:admin>svctask rmvdisk ESX_SLES_IVD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
26 ESX_IVD_SLES online unmanaged 120.0GB 000000000000000A DS4700 600a0b800026b282000041f0486e210100000000000000000000000000000000
27 ESX_IVD_W2K3 online unmanaged 100.0GB 000000000000000B DS4700 600a0b800026b282000041e3486e20cf00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>

5. Using Storage Manager (our storage subsystem management tool), unmap and unmask the disks from the SVC back to the ESX server. Remember that in Example 6-50 on page 313, we recorded the SCSI LUN IDs. To map your LUNs on the storage subsystem, use the same SCSI LUN IDs that you used in the SVC.
Important: This is the last step that you can perform and still safely back out of everything you have done so far. Up to this point, you can reverse all of the actions that you have performed so far to get the server back online without data loss:
Remap and remask the LUNs back to the SVC.
Run the svctask detectmdisk command to rediscover the MDisks.
Recreate the volumes with the svctask mkvdisk command.
Remap the volumes back to the server with the svctask mkvdiskhostmap command.
After you start the next step, you might not be able to turn back without the risk of data loss.
6. Using the VMware management console, rescan to discover the new volume. Figure 6-89 shows the view before the rescan. Figure 6-90 on page 316 shows the view after the rescan. Note that the size of the LUN has changed, because we have moved to another LUN on another storage subsystem.

Figure 6-89 Before adapter rescan


Figure 6-90 After adapter rescan

During the rescan, you can receive geometry errors when ESX discovers that the old disk has disappeared. Your volume will appear with a new vmhba address, and VMware will recognize it as our VMWARE-GUESTS disk.
7. We are now ready to restart the VMware guests.
8. Finally, to make sure that the MDisks are removed from the SVC, run the svctask detectmdisk command. The MDisks are discovered as offline and then automatically removed when the SVC determines that there are no volumes associated with these MDisks.

6.8 Migrating AIX SAN disks to SVC volumes


In this section we describe how to move the two LUNs from an AIX server, which is directly off our DS4000 storage subsystem, over to the SVC. We manage those LUNs with the SVC, move them between other managed disks, and then finally move them back to image mode disks so that those LUNs can then be masked and mapped back to the AIX server directly. Using this example can help you to perform any of the following activities in your environment: Move an AIX servers SAN LUNs from a storage subsystem and virtualize those same LUNs through the SVC, which is the first activity that you perform when introducing the SVC into your environment. This section shows that your host downtime is only a few minutes while you remap and remask disks using your storage subsystem LUN management tool. This step starts in 6.8.2, Preparing your SVC to virtualize disks on page 319. Move data between storage subsystems while your AIX server is still running and servicing your business application. You can perform this activity if you are removing a storage subsystem from your SAN environment and you want to move the data onto LUNs that are more appropriate for the


type of data that is stored on those LUNs, taking into account availability, performance, and redundancy. We describe this step in 6.8.4, Migrating image mode volumes to volumes on page 326. Move your AIX servers LUNs back to image mode volumes, so that they can be remapped and remasked directly back to the AIX server. This step starts in 6.8.5, Preparing to migrate from the SVC on page 328. Use these activities individually or together to migrate your AIX servers LUNs from one storage subsystem to another storage subsystem by using the SVC as your migration tool. If you do not use all three activities, you can introduce or remove the SVC from your environment. The only downtime that is required for these activities is the time that it takes you to remask and remap the LUNs between the storage subsystems and your SVC. We show our AIX environment in Figure 6-91.


Figure 6-91 AIX SAN environment

Figure 6-91 shows our AIX server connected to our SAN infrastructure. It has two LUNs (hdisk3 and hdisk4) that are masked directly to it from our storage subsystem. The hdisk3 disk makes up the itsoaixvg LVM group, and the hdisk4 disk makes up the itsoaixvg1 LVM group, as shown in Example 6-53 on page 318.


Example 6-53 AIX SAN configuration

#lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0  16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0  16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Available 1D-08-02      1814 DS4700 Disk Array Device
hdisk4 Available 1D-08-02      1814 DS4700 Disk Array Device
#lspv
hdisk0 0009cddaea97bf61 rootvg     active
hdisk1 0009cdda43c9dfd5 rootvg     active
hdisk2 0009cddabaef1d99 rootvg     active
hdisk3 0009cdda0a4c0dd5 itsoaixvg  active
hdisk4 0009cdda0a4d1a64 itsoaixvg1 active
#

Our AIX server represents a typical SAN environment with a host directly using LUNs that were created on a SAN storage subsystem, as shown in Figure 6-91 on page 317: The AIX servers HBA cards are zoned so that they are in the Green (dotted line) Zone with our storage subsystem. The two LUNs, hdisk3 and hdisk4, have been defined on the storage subsystem. Using LUN masking, they are directly available to our AIX server.

6.8.1 Connecting the SVC to your SAN fabric


This section describes the steps to take to introduce the SVC into your SAN environment. Although this section only summarizes these activities, you can accomplish this task without any downtime to any host or application that also uses your storage area network. If you have an SVC already connected, skip to 6.8.2, Preparing your SVC to virtualize disks on page 319. Attention: Be extremely careful when connecting the SVC to your storage area network, because this requires you to connect cables to your SAN switches and to alter your switch zone configuration. Performing these activities incorrectly can render your SAN inoperable, so make sure that you fully understand the effect of your actions. Connecting the SVC to your SAN fabric will require you to perform these tasks: Assemble your SVC components (nodes, uninterruptible power supply unit, and Master Console), cable the SVC correctly, power the SVC on, and verify that the SVC is visible on your SAN. Create and configure your SVC cluster. Create these additional zones: An SVC node zone (our Black Zone in Example 6-66 on page 328). A storage zone (our Red Zone). A host zone (our Blue Zone). Figure 6-92 on page 319 shows our environment.



Figure 6-92 SAN environment with SVC attached

6.8.2 Preparing your SVC to virtualize disks


This section describes the preparatory tasks that we perform before taking our AIX server offline. These tasks are all nondisruptive activities and do not affect your SAN fabric or your existing SVC configuration (if you already have a production SVC in place).

Creating a storage pool


When we move the two AIX LUNs to the SVC, they are first used in image mode; therefore, we must create a storage pool to hold those disks. We must create an empty storage pool for these disks, using the commands in Example 6-54 on page 320. We name the storage pool to hold our LUNs aix_imgmdg.


Example 6-54 Create empty mdiskgroup

IBM_2145:ITSO-CLS2:admin>svctask mkmdiskgrp -name aix_imgmdg -ext 512 MDisk Group, id [7], successfully created IBM_2145:ITSO-CLS2:admin>svcinfo lsmdiskgrp id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning 7 aix_imgmdg online 512 0 0.00MB 0 IBM_2145:ITSO-CLS2:admin> 0 0.00MB 0 0.00MB 0 0

Creating our host definition


If you have prepared the zones correctly, the SVC can see the AIX servers HBA adapters on the fabric (our host only had one HBA). First, we get the WWN for our AIX servers HBA, because we have many hosts that are connected to our SAN fabric and in the Blue Zone. We want to make sure we have the correct WWN to reduce our AIX servers downtime. Example 6-55 shows the commands to get the WWN; our host has a WWN of 10000000C932A7FB.
Example 6-55 Discover your WWN

#lsdev -Ccadapter|grep fcs fcs0 Available 1Z-08 FC Adapter fcs1 Available 1D-08 FC Adapter #lscfg -vpl fcs0 fcs0 U0.1-P2-I4/Q1 FC Adapter Part Number.................00P4494 EC Level....................A Serial Number...............1E3120A68D Manufacturer................001E Device Specific.(CC)........2765 FRU Number.................. 00P4495 Network Address.............10000000C932A7FB ROS Level and ID............02C03951 Device Specific.(Z0)........2002606D Device Specific.(Z1)........00000000 Device Specific.(Z2)........00000000 Device Specific.(Z3)........03000909 Device Specific.(Z4)........FF401210 Device Specific.(Z5)........02C03951 Device Specific.(Z6)........06433951 Device Specific.(Z7)........07433951 Device Specific.(Z8)........20000000C932A7FB Device Specific.(Z9)........CS3.91A1 Device Specific.(ZA)........C1D3.91A1 Device Specific.(ZB)........C2D3.91A1 Device Specific.(YL)........U0.1-P2-I4/Q1

PLATFORM SPECIFIC

320

Implementing the IBM System Storage SAN Volume Controller V6.1

Name: fibre-channel Model: LP9002 Node: fibre-channel@1 Device Type: fcp Physical Location: U0.1-P2-I4/Q1 #lscfg -vpl fcs1 fcs1 U0.1-P2-I5/Q1 FC Adapter Part Number.................00P4494 EC Level....................A Serial Number...............1E3120A67B Manufacturer................001E Device Specific.(CC)........2765 FRU Number.................. 00P4495 Network Address.............10000000C932A800 ROS Level and ID............02C03891 Device Specific.(Z0)........2002606D Device Specific.(Z1)........00000000 Device Specific.(Z2)........00000000 Device Specific.(Z3)........02000909 Device Specific.(Z4)........FF401050 Device Specific.(Z5)........02C03891 Device Specific.(Z6)........06433891 Device Specific.(Z7)........07433891 Device Specific.(Z8)........20000000C932A800 Device Specific.(Z9)........CS3.82A1 Device Specific.(ZA)........C1D3.82A1 Device Specific.(ZB)........C2D3.82A1 Device Specific.(YL)........U0.1-P2-I5/Q1

PLATFORM SPECIFIC Name: fibre-channel Model: LP9000 Node: fibre-channel@1 Device Type: fcp Physical Location: U0.1-P2-I5/Q1 ## The svcinfo lshbaportcandidate command on the SVC lists all of the WWNs, which have not yet been allocated to a host, that the SVC can see on the SAN fabric. Example 6-56 shows the output of the nodes that it found in our SAN fabric. (If the port did not show up, it indicates a zone configuration problem.)
Example 6-56 Add the host to the SVC

IBM_2145:ITSO-CLS2:admin>svcinfo lshbaportcandidate id 10000000C932A7FB 10000000C932A800 210000E08B89B8C0 IBM_2145:ITSO-CLS2:admin>


After verifying that the SVC can see our host (Kanaga), we create the host entry and assign the WWN to this entry, as shown with the commands in Example 6-57.
Example 6-57 Create the host entry

IBM_2145:ITSO-CLS2:admin>svctask mkhost -name Kanaga -hbawwpn 10000000C932A7FB:10000000C932A800 Host, id [5], successfully created IBM_2145:ITSO-CLS2:admin>svcinfo lshost Kanaga id 5 name Kanaga port_count 2 type generic mask 1111 iogrp_count 4 WWPN 10000000C932A800 node_logged_in_count 2 state inactive WWPN 10000000C932A7FB node_logged_in_count 2 state inactive IBM_2145:ITSO-CLS2:admin>

Verifying that we can see our storage subsystem


If we performed the zoning correctly, the SVC can see the storage subsystem with the svcinfo lscontroller command (Example 6-58).
Example 6-58 Discover the storage controller

IBM_2145:ITSO-CLS2:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0  DS4500                    IBM       1742-900
1  DS4700                    IBM       1814
IBM_2145:ITSO-CLS2:admin>

Names: The svctask chcontroller command enables you to change the discovered storage subsystem name in SVC. In complex SANs, we suggest that you rename your storage subsystem to a more meaningful name.
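For example, a rename might look like the following sketch; the new name is arbitrary, and the controller ID (1) is taken from the svcinfo lscontroller output above:

IBM_2145:ITSO-CLS2:admin>svctask chcontroller -name ITSO_DS4700 1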

Getting the disk serial numbers


To help avoid the risk of creating the wrong volumes from all of the available unmanaged MDisks (in case there are many available unmanaged MDisks that are seen by the SVC), we obtain the LUN serial numbers from our storage subsystem administration tool (Storage Manager). When we discover these MDisks, we confirm that we have the correct serial numbers before we create the image mode volumes. If you also use a DS4000 family storage subsystem, Storage Manager will provide the LUN serial numbers. Right-click your logical drive and choose Properties. The following figures show our serial numbers. Figure 6-93 on page 323 shows disk serial number kanage_lun0.


Figure 6-93 Obtaining disk serial number - kanage_lun0

Figure 6-94 shows disk serial number kanage_Lun1.

Figure 6-94 Obtaining disk serial number - kanga_Lun1

We are ready to move the ownership of the disks to the SVC, discover them as MDisks, and give them back to the host as volumes.


6.8.3 Moving the LUNs to the SVC


In this step, we move the LUNs that are assigned to the AIX server and reassign them to the SVC. Because we only want to move the LUN that holds our application and data files, we move that LUN without rebooting the host. The only requirement is that we unmount the file system and vary off the VG to ensure data integrity after the reassignment. Before you start: Moving LUNs to the SVC requires that the Subsystem Device Driver (SDD) device driver is installed on the AIX server. You can install the SDD ahead of time; however, it might require an outage of your host to do so. The following steps are required because we intend to move both LUNs at the same time. 1. Confirm that the SDD is installed. 2. Unmount and vary off the VGs: a. Stop the applications that are using the LUNs. b. Unmount those file systems with the umount MOUNT_POINT command. c. If the file systems are an LVM volume, deactivate that VG with the varyoffvg VOLUMEGROUP_NAME command. Example 6-59 shows the commands that we ran on Kanaga.
Example 6-59 AIX command sequence

#varyoffvg itsoaixvg
#varyoffvg itsoaixvg1
#lsvg
rootvg
itsoaixvg
itsoaixvg1
#lsvg -o
rootvg

3. Using Storage Manager (our storage subsystem management tool), we can unmap and unmask the disks from the AIX server and remap and remask the disks to the SVC.
4. From the SVC, discover the new disks with the svctask detectmdisk command. The disks will be discovered and named mdiskN, where N is the next available MDisk number (starting from 0). Example 6-60 shows the commands that we used to discover our MDisks and to verify that we have the correct MDisks.
Example 6-60 Discover the new MDisks

IBM_2145:ITSO-CLS2:admin>svctask detectmdisk
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
24 mdisk24 online unmanaged 5.0GB 0000000000000008 DS4700 600a0b800026b282000043224874f41900000000000000000000000000000000
25 mdisk25 online unmanaged 8.0GB 0000000000000009 DS4700 600a0b800026b2820000432f4874f57c00000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>

Important: Match your discovered MDisk serial numbers (UID on the svcinfo lsmdisk command task display) with the serial number that you discovered earlier (in Figure 6-93 and Figure 6-94 on page 323).
5. After we have verified that we have the correct MDisks, we rename them to avoid confusion in the future when we perform other MDisk-related tasks (Example 6-61).
Example 6-61 Rename the MDisks

IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name Kanaga_AIX mdisk24
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name Kanaga_AIX1 mdisk25
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
24 Kanaga_AIX online unmanaged 5.0GB 0000000000000008 DS4700 600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 online unmanaged 8.0GB 0000000000000009 DS4700 600a0b800026b2820000432f4874f57c00000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>
Example 6-62 Create the image mode volumes

IBM_2145:ITSO-CLS2:admin>svctask mkvdisk -mdiskgrp aix_imgmdg -iogrp 0 -vtype image -mdisk Kanaga_AIX -name IVD_Kanaga Virtual Disk, id [8], successfully created IBM_2145:ITSO-CLS2:admin>svctask mkvdisk -mdiskgrp aix_imgmdg -iogrp 0 -vtype image -mdisk Kanaga_AIX1 -name IVD_Kanaga1 Virtual Disk, id [9], successfully created IBM_2145:ITSO-CLS2:admin> 7. Finally, we can map the new image mode volumes to the host (Example 6-63).
Example 6-63 Map the volumes to the host

IBM_2145:ITSO-CLS2:admin>svctask mkvdiskhostmap -host Kanaga IVD_Kanaga
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO-CLS2:admin>svctask mkvdiskhostmap -host Kanaga IVD_Kanaga1
Virtual Disk to Host map, id [1], successfully created
IBM_2145:ITSO-CLS2:admin>

FlashCopy: While the application is in a quiescent state, you can choose to use FlashCopy to copy the new image volumes onto other volumes. You do not need to wait until the FlashCopy process has completed before starting your application.
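If you decide to take such a copy, the following sketch outlines the steps; the target volume (IVD_Kanaga_copy) and the mapping name are examples only, and the target volume must already exist and be the same size as the source:

IBM_2145:ITSO-CLS2:admin>svctask mkfcmap -source IVD_Kanaga -target IVD_Kanaga_copy -name kanaga_map
IBM_2145:ITSO-CLS2:admin>svctask prestartfcmap kanaga_map
IBM_2145:ITSO-CLS2:admin>svctask startfcmap kanaga_map

Wait for the mapping to reach the prepared state (check with the svcinfo lsfcmap command) between the prepare and start steps.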


Now, we are ready to perform the following steps to put the image mode volumes online:
1. Remove the old disk definitions, if you have not done so already.
2. Run the cfgmgr -vs command to rediscover the available LUNs.
3. If your application and data are on an LVM volume, rediscover the VG, and then run the varyonvg VOLUME_GROUP command to activate the VG.
4. Mount your file systems with the mount /MOUNT_POINT command.
5. You are ready to start your application.
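A minimal sketch of that sequence on our host follows; the mount points are examples only (use the mount points that are defined for your own file systems):

#cfgmgr -vs
#varyonvg itsoaixvg
#varyonvg itsoaixvg1
#mount /itsoaixfs
#mount /itsoaixfs1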

6.8.4 Migrating image mode volumes to volumes


While the AIX server is still running and our file systems are in use, we migrate the image mode volumes onto striped volumes, with the extents being spread over three other MDisks.

Preparing MDisks for striped mode volumes


From our storage subsystem, we performed these tasks: Created and allocated three LUNs to the SVC Discovered them as MDisks Renamed these LUNs to more meaningful names Created a new storage pool Put all these MDisks into this storage pool You can see the output of our commands in Example 6-64.
Example 6-64 Create a new storage pool

IBM_2145:ITSO-CLS2:admin>svctask mkmdiskgrp -name aix_vd -ext 512
IBM_2145:ITSO-CLS2:admin>svctask detectmdisk
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
24 Kanaga_AIX online image 7 aix_imgmdg 5.0GB 0000000000000008 DS4700 600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 online image 7 aix_imgmdg 8.0GB 0000000000000009 DS4700 600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 mdisk26 online unmanaged 6.0GB 000000000000000A DS4700 600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 mdisk27 online unmanaged 6.0GB 000000000000000B DS4700 600a0b800026b2820000438448751da900000000000000000000000000000000
28 mdisk28 online unmanaged 6.0GB 000000000000000C DS4700 600a0b800026b2820000439048751dc200000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name aix_vd0 mdisk26
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name aix_vd1 mdisk27
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name aix_vd2 mdisk28
IBM_2145:ITSO-CLS2:admin>svctask addmdisk -mdisk aix_vd0 aix_vd
IBM_2145:ITSO-CLS2:admin>svctask addmdisk -mdisk aix_vd1 aix_vd
IBM_2145:ITSO-CLS2:admin>svctask addmdisk -mdisk aix_vd2 aix_vd
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
24 Kanaga_AIX online image 7 aix_imgmdg 5.0GB 0000000000000008 DS4700 600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 online image 7 aix_imgmdg 8.0GB 0000000000000009 DS4700 600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 aix_vd0 online managed 6 aix_vd 6.0GB 000000000000000A DS4700 600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 aix_vd1 online managed 6 aix_vd 6.0GB 000000000000000B DS4700 600a0b800026b2820000438448751da900000000000000000000000000000000
28 aix_vd2 online managed 6 aix_vd 6.0GB 000000000000000C DS4700 600a0b800026b2820000439048751dc200000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>

Migrating the volumes


We are ready to migrate the image mode volumes onto striped volumes with the svctask migratevdisk command (Example 6-24 on page 290). While the migration is running, our AIX server is still running, and we can continue accessing the files. To check the overall progress of the migration, we use the svcinfo lsmigrate command, as shown in Example 6-65. Listing the storage pool with the svcinfo lsmdiskgrp command shows that the free capacity on the old storage pool slowly increases as the extents are moved to the new storage pool.
Example 6-65 Migrating image mode volumes to striped volumes

IBM_2145:ITSO-CLS2:admin>svctask migratevdisk -vdisk IVD_Kanaga -mdiskgrp aix_vd
IBM_2145:ITSO-CLS2:admin>svctask migratevdisk -vdisk IVD_Kanaga1 -mdiskgrp aix_vd
IBM_2145:ITSO-CLS2:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 10
migrate_source_vdisk_index 8
migrate_target_mdisk_grp 6
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 9
migrate_target_mdisk_grp 6
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS2:admin>

After this task has completed, Example 6-66 on page 328 shows that the volumes are spread over three MDisks in the aix_vd storage pool. The old storage pool is empty.

Example 6-66 Migration complete

IBM_2145:ITSO-CLS2:admin>svcinfo lsmdiskgrp aix_vd
id 6
name aix_vd
status online
mdisk_count 3
vdisk_count 2
capacity 18.0GB
extent_size 512
free_capacity 5.0GB
virtual_capacity 13.00GB
used_capacity 13.00GB
real_capacity 13.00GB
overallocation 72
warning 0
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdiskgrp aix_imgmdg
id 7
name aix_imgmdg
status online
mdisk_count 2
vdisk_count 0
capacity 13.0GB
extent_size 512
free_capacity 13.0GB
virtual_capacity 0.00MB
used_capacity 0.00MB
real_capacity 0.00MB
overallocation 0
warning 0
IBM_2145:ITSO-CLS2:admin>

Our migration to the SVC is complete. You can now remove the original MDisks from the SVC and remove the corresponding LUNs from the storage subsystem. If these were the last LUNs in use on that storage subsystem, you can also remove the subsystem from the SAN fabric.
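If you decide to remove the original image mode MDisks, a sequence similar to the following sketch can be used. Run it only after you have confirmed that the old storage pool no longer contains any volumes; the MDisk and pool names are the ones used in this scenario.

svctask rmmdisk -mdisk Kanaga_AIX aix_imgmdg
svctask rmmdisk -mdisk Kanaga_AIX1 aix_imgmdg
svcinfo lsmdisk

After these commands, the MDisks show as unmanaged. When the LUNs have also been unmapped from the SVC at the storage subsystem, run svctask detectmdisk so that the SVC removes the MDisk definitions.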

6.8.5 Preparing to migrate from the SVC


Before we change the AIX server's LUNs from being accessed by the SVC as volumes to being accessed directly from the storage subsystem, we need to convert the volumes into image mode volumes. You might perform this activity for one of these reasons:
- You purchased a new storage subsystem, and you were using the SVC as a tool to migrate from your old storage subsystem to this new storage subsystem.
- You used the SVC to FlashCopy or Metro Mirror a volume onto another volume, and you no longer need that host connected to the SVC.
- You want to move a host and its data that is currently connected to the SVC to a site where there is no SVC.
- Changes to your environment no longer require this host to use the SVC.


There are other preparatory activities to be performed before we shut down the host and reconfigure the LUN masking and mapping. This section covers those activities. If you are moving the data to a new storage subsystem, it is assumed that this storage subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches. Your environment must look similar to our environment, as shown in Figure 6-95.

Figure 6-95 Environment with SVC (zoning for migration scenarios: the AIX host, the SVC I/O group, and the IBM or OEM storage subsystems connected through the Green, Red, Blue, and Black zones)

Making fabric zone changes


The first step is to set up the SAN configuration so that all of the zones are created. Add the new storage subsystem to the Red Zone so that the SVC can communicate with it directly. Create a Green Zone for our host to use when we are ready for it to directly access the disk, after it has been removed from the SVC. It is assumed that you have created the necessary zones.

After your zone configuration is set up correctly, the SVC sees the new storage subsystem's controller by using the svcinfo lscontroller command, as shown in Example 6-67 on page 330. It is also useful to rename the controller to a more meaningful name by using the svctask chcontroller -name command.
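Renaming the newly discovered controller might look like the following sketch. It assumes that the new subsystem was discovered with the default name controller0; check the svcinfo lscontroller output first and substitute the ID or name that applies in your environment.

svctask chcontroller -name DS4500 controller0
svcinfo lscontroller

The second command simply verifies that the new controller name is now in effect.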


Example 6-67 Discovering the new storage subsystem

IBM_2145:ITSO-CLS2:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0 DS4500 IBM 1742-900
1 DS4700 IBM 1814 FAStT
IBM_2145:ITSO-CLS2:admin>

Creating new LUNs


On our storage subsystem, we created two LUNs and masked them so that the SVC can see them. We will eventually give these LUNs directly to the host, removing the volumes that it currently has. To check that the SVC can use the LUNs, issue the svctask detectmdisk command, as shown in Example 6-68. In our example, we use two 10 GB LUNs that are located on the DS4500 subsystem; thus, we migrate back to image mode volumes and move to another storage subsystem in a single step. We have already deleted the old LUNs on the DS4700 storage subsystem, which is why they appear offline here.
Example 6-68 Discover the new MDisks

IBM_2145:ITSO-CLS2:admin>svctask detectmdisk
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
24 Kanaga_AIX offline managed 7 aix_imgmdg 5.0GB 0000000000000008 DS4700 600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 offline managed 7 aix_imgmdg 8.0GB 0000000000000009 DS4700 600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 aix_vd0 online managed 6 aix_vd 6.0GB 000000000000000A DS4700 600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 aix_vd1 online managed 6 aix_vd 6.0GB 000000000000000B DS4700 600a0b800026b2820000438448751da900000000000000000000000000000000
28 aix_vd2 online managed 6 aix_vd 6.0GB 000000000000000C DS4700 600a0b800026b2820000439048751dc200000000000000000000000000000000
29 mdisk29 online unmanaged 10.0GB 0000000000000010 DS4500 600a0b8000174233000000b84876512f00000000000000000000000000000000
30 mdisk30 online unmanaged 10.0GB 0000000000000011 DS4500 600a0b80001744310000010e4876444600000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>

Even though the MDisks will not stay in the SVC for long, we suggest that you rename them to more meaningful names so that they do not get confused with other MDisks that are used by other activities. We also create a storage pool to hold our new MDisks, as shown in Example 6-69 on page 331.


Example 6-69 Rename the MDisks

IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name AIX_MIG mdisk29
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name AIX_MIG1 mdisk30
IBM_2145:ITSO-CLS2:admin>svctask mkmdiskgrp -name KANAGA_AIXMIG -ext 512
MDisk Group, id [3], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning
3 KANAGA_AIXMIG online 0 0 0 512 0 0.00MB 0.00MB 0.00MB 0 0
6 aix_vd online 3 2 18.0GB 512 5.0GB 13.00GB 13.00GB 13.00GB 72 0
7 aix_imgmdg offline 2 0 13.0GB 512 13.0GB 0.00MB 0.00MB 0.00MB 0 0
IBM_2145:ITSO-CLS2:admin>

At this point, our SVC environment is ready for the volume migration to image mode volumes.

6.8.6 Migrating the managed volumes


While our AIX server is still running, we migrate the managed volumes onto the new MDisks as image mode volumes. The command to perform this action is the svctask migratetoimage command, which is shown in Example 6-70.
Example 6-70 Migrate the volumes to image mode volumes

IBM_2145:ITSO-CLS2:admin>svctask migratetoimage -vdisk IVD_Kanaga -mdisk AIX_MIG -mdiskgrp KANAGA_AIXMIG
IBM_2145:ITSO-CLS2:admin>svctask migratetoimage -vdisk IVD_Kanaga1 -mdisk AIX_MIG1 -mdiskgrp KANAGA_AIXMIG
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
24 Kanaga_AIX offline managed 7 aix_imgmdg 5.0GB 0000000000000008 DS4700 600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 offline managed 7 aix_imgmdg 8.0GB 0000000000000009 DS4700 600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 aix_vd0 online managed 6 aix_vd 6.0GB 000000000000000A DS4700 600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 aix_vd1 online managed 6 aix_vd 6.0GB 000000000000000B DS4700 600a0b800026b2820000438448751da900000000000000000000000000000000
28 aix_vd2 online managed 6 aix_vd 6.0GB 000000000000000C DS4700 600a0b800026b2820000439048751dc200000000000000000000000000000000
29 AIX_MIG online image 3 KANAGA_AIXMIG 10.0GB 0000000000000010 DS4500 600a0b8000174233000000b84876512f00000000000000000000000000000000
30 AIX_MIG1 online image 3 KANAGA_AIXMIG 10.0GB 0000000000000011 DS4500 600a0b80001744310000010e4876444600000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>svcinfo lsmigrate
migrate_type Migrate_to_Image
progress 50
migrate_source_vdisk_index 9
migrate_target_mdisk_index 30
migrate_target_mdisk_grp 3
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type Migrate_to_Image
progress 50
migrate_source_vdisk_index 8
migrate_target_mdisk_index 29
migrate_target_mdisk_grp 3
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS2:admin>

During the migration, our AIX server is unaware that its data is being physically moved between storage subsystems. After the migration is complete, the image mode volumes are ready to be removed from the AIX server, and the real LUNs can be mapped and masked directly to the host by using the storage subsystem's management tool.

6.8.7 Removing the LUNs from the SVC


The next step requires downtime while we remap and remask the disks so that the host sees them directly through the Green Zone. Because our LUNs hold only data files, and because we use a dedicated VG, we can remap and remask the disks without rebooting the host. The only requirement is that we unmount the file systems and vary off the VG to ensure data integrity after the reassignment.

Before you start: Moving LUNs to another storage subsystem might require a driver other than SDD. Check with the storage subsystem's vendor to see which driver you will need. You might be able to install this driver ahead of time.

Follow these required steps to remove the SVC:
1. Confirm that the correct device driver for the new storage subsystem is loaded. Because we are moving to a DS4500, we can continue to use the SDD.
2. Shut down any applications and unmount the file systems (a combined sketch of these commands follows):
   a. Stop the applications that are using the LUNs.
   b. Unmount those file systems with the umount MOUNT_POINT command.
   c. If the file systems are an LVM volume, deactivate that VG with the varyoffvg VOLUMEGROUP_NAME command.
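As an illustration only, the quiesce sequence on the AIX host might look like this sketch; the mount point (/data) and volume group name (itso_vg) are placeholders, and the application must already be stopped.

umount /data          # unmount the file system (placeholder mount point)
varyoffvg itso_vg     # deactivate the volume group (placeholder name)

With the VG varied off, no further writes can reach the LUNs, so the remapping in the following steps is safe.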


3. Remove the volumes from the host by using the svctask rmvdiskhostmap command (Example 6-71). To double-check that you have removed the volumes, use the svcinfo lshostvdiskmap command, which shows that these disks are no longer mapped to the AIX server.
Example 6-71 Remove the volumes from the host

IBM_2145:ITSO-CLS2:admin>svctask rmvdiskhostmap -host Kanaga IVD_Kanaga
IBM_2145:ITSO-CLS2:admin>svctask rmvdiskhostmap -host Kanaga IVD_Kanaga1
IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Kanaga
IBM_2145:ITSO-CLS2:admin>

4. Remove the volumes from the SVC by using the svctask rmvdisk command, which makes the MDisks unmanaged, as shown in Example 6-72.

Cached data: When you run the svctask rmvdisk command, the SVC first double-checks that there is no outstanding dirty cached data for the volume being removed. If uncommitted cached data still exists, the command fails with the following error message:
CMMVC6212E The command failed because data in the cache has not been committed to disk
You will have to wait for this cached data to be committed to the underlying storage subsystem before you can remove the volume. The SVC automatically destages uncommitted cached data two minutes after the last write activity for the volume. How much data there is to destage, and how busy the I/O subsystem is, determine how long this command takes to complete.
You can check whether the volume has uncommitted data in the cache by using the svcinfo lsvdisk <VDISKNAME> command and checking the fast_write_state attribute. This attribute has the following meanings:
- empty: No modified data exists in the cache.
- not_empty: Modified data might exist in the cache.
- corrupt: Modified data might have existed in the cache, but any modified data has been lost.
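As a quick illustration of the check described in the note, you can display the volume details and look at the fast_write_state line in the output; for one of the volumes in this scenario:

svcinfo lsvdisk IVD_Kanaga

When the fast_write_state attribute shows empty, the svctask rmvdisk command completes without the CMMVC6212E error.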

Example 6-72 Remove the volumes from the SVC

IBM_2145:ITSO-CLS2:admin>svctask rmvdisk IVD_Kanaga
IBM_2145:ITSO-CLS2:admin>svctask rmvdisk IVD_Kanaga1
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
29 AIX_MIG online unmanaged 10.0GB 0000000000000010 DS4500 600a0b8000174233000000b84876512f00000000000000000000000000000000
30 AIX_MIG1 online unmanaged 10.0GB 0000000000000011 DS4500 600a0b80001744310000010e4876444600000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>


5. Using Storage Manager (our storage subsystem management tool), unmap and unmask the disks from the SVC back to the AIX server.

Important: This step is the last step that you can perform and still safely back out of everything you have done so far. Up to this point, you can reverse all of the actions that you have performed to get the server back online without data loss:
- Remap and remask the LUNs back to the SVC.
- Run the svctask detectmdisk command to rediscover the MDisks.
- Recreate the volumes with the svctask mkvdisk command.
- Remap the volumes back to the server with the svctask mkvdiskhostmap command.
After you start the next step, you might not be able to turn back without the risk of data loss.

We are now ready to access the LUNs from the AIX server. If all of the zoning and LUN masking and mapping were done successfully, our AIX server boots as though nothing has happened:
1. Run the cfgmgr -S command to discover the storage subsystem.
2. Use the lsdev -Cc disk command to verify the discovery of the new disks.
3. Remove the references to all of the old disks. Example 6-73 shows the removal using SDD, and Example 6-74 on page 335 shows the removal using SDDPCM.
Example 6-73 Remove references to old paths using SDD

#lsdev -Cc disk
hdisk0  Available 1S-08-00-8,0  16 Bit LVD SCSI Disk Drive
hdisk1  Available 1S-08-00-9,0  16 Bit LVD SCSI Disk Drive
hdisk2  Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3  Available 1Z-08-02      1742-900 (900) Disk Array Device
hdisk4  Available 1Z-08-02      1742-900 (900) Disk Array Device
hdisk5  Defined   1Z-08-02      SAN volume Controller Device
hdisk6  Defined   1Z-08-02      SAN volume Controller Device
hdisk7  Defined   1D-08-02      SAN volume Controller Device
hdisk8  Defined   1D-08-02      SAN volume Controller Device
hdisk10 Defined   1Z-08-02      SAN volume Controller Device
hdisk11 Defined   1Z-08-02      SAN volume Controller Device
hdisk12 Defined   1D-08-02      SAN volume Controller Device
hdisk13 Defined   1D-08-02      SAN volume Controller Device
vpath0  Defined   Data Path Optimizer Pseudo Device Driver
vpath1  Defined   Data Path Optimizer Pseudo Device Driver
vpath2  Defined   Data Path Optimizer Pseudo Device Driver
# for i in 5 6 7 8 10 11 12 13; do rmdev -dl hdisk$i -R;done
hdisk5 deleted
hdisk6 deleted
hdisk7 deleted
hdisk8 deleted
hdisk10 deleted
hdisk11 deleted
hdisk12 deleted
hdisk13 deleted
#for i in 0 1 2; do rmdev -dl vpath$i -R;done
vpath0 deleted
vpath1 deleted
vpath2 deleted
#lsdev -Cc disk
hdisk0  Available 1S-08-00-8,0  16 Bit LVD SCSI Disk Drive
hdisk1  Available 1S-08-00-9,0  16 Bit LVD SCSI Disk Drive
hdisk2  Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3  Available 1Z-08-02      1742-900 (900) Disk Array Device
hdisk4  Available 1Z-08-02      1742-900 (900) Disk Array Device
#

Example 6-74 Remove references to old paths using SDDPCM

# lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0  16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0  16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Defined   1D-08-02      MPIO FC 2145
hdisk4 Defined   1D-08-02      MPIO FC 2145
hdisk5 Available 1D-08-02      MPIO FC 2145
# for i in 3 4; do rmdev -dl hdisk$i -R;done
hdisk3 deleted
hdisk4 deleted
# lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0  16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0  16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk5 Available 1D-08-02      MPIO FC 2145

4. If your application and data are on an LVM volume, rediscover the VG, and then run the varyonvg VOLUME_GROUP command to activate the VG.
5. Mount your file systems with the mount /MOUNT_POINT command.
6. You are ready to start your application.

Finally, to make sure that the MDisks are removed from the SVC, run the svctask detectmdisk command. The MDisks will first be discovered as offline, and then they will automatically be removed after the SVC determines that there are no volumes associated with these MDisks.

6.9 Using SVC for storage migration


The primary use of the SVC is not as a storage migration tool. However, the advanced capabilities of the SVC enable us to use it as one; you can add the SVC temporarily to your SAN environment to copy the data from one storage subsystem to another storage subsystem. The SVC enables you to copy image mode volumes directly from one subsystem to another subsystem while host I/O is running. The only downtime that is required is when the SVC is added to, and removed from, your SAN environment.

To use the SVC for migration purposes only, perform the following steps:
1. Add the SVC to your SAN environment.
2. Prepare the SVC.
3. Depending on your operating system, unmount the selected LUNs or shut down the host.
4. Add the SVC between your storage and the host.
5. Mount the LUNs or start the host again.
6. Start the migration.
7. After the migration process is complete, unmount the selected LUNs or shut down the host.
8. Remove the SVC from your SAN.
9. Mount the LUNs, or start the host again.
10. The migration is complete.

As you can see, extremely little downtime is required. If you prepare everything correctly, you can reduce your downtime to a few minutes. The copy process is handled entirely by the SVC, so the host is not involved in moving the data while the migration progresses.

To use the SVC for storage migrations, perform the steps that are described in the following sections:
- 6.5.2, Adding the SVC between the host system and the DS4700 on page 240
- 6.5.6, Migrating the volume from image mode to image mode on page 264
- 6.5.7, Removing image mode data from the SVC on page 274

6.10 Using volume mirroring and thin-provisioned volumes together


In this section, we show that you can use the volume mirroring feature and thin-provisioned volumes together to move data from a fully allocated volume to a thin-provisioned volume.

6.10.1 Zero detect feature


The zero detect feature for thin-provisioned volumes enables clients to reclaim unused allocated disk space (zeros) when converting a fully allocated volume to a thin-provisioned volume by using volume mirroring. To migrate from a fully allocated volume to a thin-provisioned volume, perform these steps (a brief command sketch follows the list):
1. Add the target thin-provisioned copy.
2. Wait for synchronization to complete.
3. Remove the source fully allocated copy.

By using this feature, clients can easily free up managed disk space and make better use of their storage, without needing to purchase any additional function for the SVC. Volume mirroring and thin-provisioned volume functions are included in the base virtualization license. Clients with thin-provisioned storage on an existing storage system can migrate their data under SVC management using thin-provisioned volumes without having to allocate additional storage space.

Zero detect only works if the disk actually contains zeros; an uninitialized disk can contain anything, unless the disk has been formatted (for example, by using the -fmtdisk flag on the mkvdisk command).
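In outline, the conversion uses commands like the following sketch. The volume name, volume ID, target pool, and thin-provisioning parameters shown here are taken from the scenario in 6.10.2, which walks through a complete example; substitute your own values.

svctask addvdiskcopy -mdiskgrp 1 -vtype striped -rsize 2% -autoexpand -grainsize 32 -unit gb VD_Full
svcinfo lsvdisksyncprogress 2
svctask rmvdiskcopy -copy 0 VD_Full

Wait until lsvdisksyncprogress reports 100 for the new copy before removing the original fully allocated copy.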


Figure 6-96 shows the thin-provisioned volume zero detect concept.

Figure 6-96 The thin-provisioned volume zero detect feature

Figure 6-97 shows the thin-provisioned volume organization.

Figure 6-97 The thin-provisioned volume organization

As shown in Figure 6-97, a thin-provisioned volume has these components:
- Used capacity: The portion of the real capacity that is being used to store data. For non-thin-provisioned copies, this value is the same as the volume capacity. If the volume copy is thin-provisioned, the value increases from zero to the real capacity value as more of the volume is written to.
- Real capacity: The space that is actually allocated from the storage pool. In a thin-provisioned volume, this value can differ from the total (virtual) capacity.
- Free capacity: The difference between the real capacity and the used capacity. The SVC tries to keep this amount of space available as a contingency. If the free capacity is used up and the volume has been configured with the -autoexpand option, the SVC automatically expands the real capacity allocated to the volume to restore the contingency.
- Grains: The smallest unit into which the allocated space can be divided.
- Metadata: Space allocated within the real capacity that tracks the used capacity, real capacity, and free capacity.
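As a quick check of these relationships, the values reported for the thin-provisioned copy in Example 6-76 later in this section fit the free capacity definition:

free_capacity = real_capacity - used_capacity = 323.57 MB - 0.41 MB, which matches the reported free capacity of approximately 323.17 MB (the displayed values are rounded).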

6.10.2 Volume mirroring with thin-provisioned volumes


In this section, we show an example of using the volume mirror feature with thin-provisioned volumes: 1. We create a fully allocated volume of 15 GB named VD_Full, as shown in Example 6-75.
Example 6-75 VD_Full creation example

IBM_2145:ITSO-CLS2:admin>svctask mkvdisk -mdiskgrp 0 -iogrp 0 -mdisk 0:1:2:3:4:5 -node 1 -vtype striped -size 15 -unit gb -fmtdisk -name VD_Full
Virtual Disk, id [2], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_Full
id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status offline
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
capacity 15.00GB
type striped
formatted yes
.
.
vdisk_UID 60050768018401BF280000000000000B
mdisk_grp_name MDG_DS47
used_capacity 15.00GB
real_capacity 15.00GB
free_capacity 0.00MB
overallocation 100

2. We then add a thin-provisioned volume copy with the volume mirroring option by using the addvdiskcopy command and the -autoexpand parameter, as shown in Example 6-76 on page 339.


Example 6-76 addvdiskcopy command

IBM_2145:ITSO-CLS2:admin>svctask addvdiskcopy -mdiskgrp 1 -mdisk 6:7:8:9 -vtype striped -rsize 2% -autoexpand -grainsize 32 -unit gb VD_Full
VDisk [2] copy [1] successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_Full
id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id many
mdisk_grp_name many
capacity 15.00GB
type many
formatted yes
mdisk_id many
mdisk_name many
vdisk_UID 60050768018401BF280000000000000B
sync_rate 50
copy_count 2
copy_id 0
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
used_capacity 15.00GB
real_capacity 15.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
copy_id 1
status online
sync no
primary no
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32

As you can see in Example 6-76, VD_Full has a copy_id 1 where the used_capacity is 0.41 MB, which is equal to the metadata, because only zeros exist on the disk.


The real_capacity is 323.57 MB, which corresponds to the -rsize 2% value that is specified on the addvdiskcopy command. The free_capacity is 323.17 MB, which is the real capacity minus the used capacity. Because writing zeros to the disk does not consume space on a thin-provisioned copy, Example 6-77 shows that the thin-provisioned copy still consumes almost no space even when the two copies are in sync.
Example 6-77 Thin-provisioned volume display

IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisksyncprogress 2
vdisk_id vdisk_name copy_id progress estimated_completion_time
2 VD_Full 0 100
2 VD_Full 1 100
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_Full
id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id many
mdisk_grp_name many
capacity 15.00GB
type many
formatted yes
mdisk_id many
mdisk_name many
vdisk_UID 60050768018401BF280000000000000B
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 2
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 15.00GB
real_capacity 15.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
copy_id 1
status online
sync yes
primary no
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32

3. We can split the volume mirror or remove one of the copies, keeping the thin-provisioned copy as our valid copy, by using the splitvdiskcopy command or the rmvdiskcopy command:
- If you need your copy as a thin-provisioned clone, we suggest that you use the splitvdiskcopy command, because that command generates a new volume that you can map to any server that you want.
- If you need your copy because you are migrating from a fully allocated volume to a thin-provisioned volume without any effect on server operations, we suggest that you use the rmvdiskcopy command. In this case, the original volume name is kept, and the volume remains mapped to the same server.
Example 6-78 shows the splitvdiskcopy command.
Example 6-78 splitvdiskcopy command

IBM_2145:ITSO-CLS2:admin>svctask splitvdiskcopy -copy 1 -name VD_SEV VD_Full
Virtual Disk, id [7], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk -filtervalue name=VD*
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state
2 VD_Full 0 io_grp0 online 0 MDG_DS47 15.00GB striped 60050768018401BF280000000000000B 0 1 empty
7 VD_SEV 0 io_grp0 online 1 MDG_DS83 15.00GB striped 60050768018401BF280000000000000D 0 1 empty
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_SEV
id 7
name VD_SEV
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
capacity 15.00GB
type striped
formatted no
vdisk_UID 60050768018401BF280000000000000D
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32

Example 6-79 shows the rmvdiskcopy command.
Example 6-79 rmvdiskcopy command

IBM_2145:ITSO-CLS2:admin>svctask rmvdiskcopy -copy 0 VD_Full
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk -filtervalue name=VD*
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state
2 VD_Full 0 io_grp0 online 1 MDG_DS83 15.00GB striped 60050768018401BF280000000000000B 0 1 empty
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk 2
id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
capacity 15.00GB
type striped
formatted no
vdisk_UID 60050768018401BF280000000000000B
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 1
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32


Chapter 7.

Easy Tier
In this chapter we describe the function provided by the Easy Tier disk performance optimization feature of the SAN Volume Controller. We also explain how to activate the Easy Tier process, both for evaluation purposes and for automatic extent migration.


7.1 Overview of Easy Tier


Determining the amount of I/O activity that occurs on an SVC extent, and when to move the extent to an appropriate storage performance tier, is usually too complex a task to manage manually. Easy Tier is a performance optimization function that overcomes this issue because it automatically migrates, or moves, extents that belong to a volume between MDisk storage tiers.

Easy Tier monitors the I/O activity and latency of the extents on all volumes with the Easy Tier function turned on in a multitier storage pool over a 24-hour period. It then creates an extent migration plan based on this activity and dynamically moves high activity, or hot, extents to a higher disk tier within the storage pool. It also moves extents whose activity has dropped off, or cooled, from the high-tier MDisks back to a lower-tier MDisk. Because this migration works at the extent level, it is often referred to as sub-LUN migration.

The Easy Tier function can be turned on or off at the storage pool level and at the volume level. To experience the potential benefits of using Easy Tier in your environment before installing expensive solid-state disks (SSDs), you can turn on the Easy Tier function for a single tier storage pool. Next, also turn on the Easy Tier function for the volumes within that pool. This starts monitoring activity on the volume extents in the pool. Easy Tier creates a migration report every 24 hours on the number of extents that would be moved if the pool were a multitier storage pool. So, even though Easy Tier extent migration is not possible within a single tier pool, the Easy Tier statistical measurement function is available.

Note: Image mode and sequential volumes are not candidates for Easy Tier automatic data placement.

7.2 Easy Tier concepts


This section explains the concepts underpinning Easy Tier functionality.

7.2.1 SSD arrays and MDisks


The SSD drives are treated no differently by the SVC than HDDs with respect to RAID arrays or MDisks. The individual SSD drives in the storage managed by the SVC are combined into an array, usually in RAID10 or RAID5 format. It is unlikely that RAID6 SSD arrays will be used, due to the double parity overhead, with two SSD logical drives used for parity only. A LUN is created on the array, which is then presented to the SVC as a normal managed disk (MDisk).

As is the case for HDDs, the SSD RAID array format helps protect against individual SSD failures. Depending on your requirements, additional high availability protection, above the RAID level, can be achieved by using volume mirroring. In the example disk tier pool shown in Figure 7-2 on page 348, you can see the SSD MDisks presented from the SSD disk arrays.

346

Implementing the IBM System Storage SAN Volume Controller V6.1

7.2.2 Disk tiers


It is likely that the MDisks (LUNs) presented to the SVC cluster will have different performance attributes because of the type of disk or RAID array on which they reside. The MDisks can be on 15 K RPM Fibre Channel or SAS disk, Nearline SAS or SATA, or even solid-state disks (SSDs). Thus, a storage tier attribute is assigned to each MDisk; the default is generic_hdd. With SVC 6.1, a new disk tier attribute, generic_ssd, is available for SSDs. Note that the SVC does not automatically detect SSD MDisks. Instead, all external MDisks are initially put into the generic_hdd tier by default. The administrator then has to manually change the tier of the SSD MDisks to generic_ssd by using the CLI or GUI.
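On the CLI, the change is a single command per MDisk; for example, using one of the MDisk names from the lab configuration that is used later in this chapter:

svctask chmdisk -tier generic_ssd SSD_Array_RAID5_1

The equivalent GUI procedure is shown in 7.6.1.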

7.2.3 Single tier storage pools


Figure 7-1 shows a scenario in which a single storage pool will be populated with MDisks presented by an external storage controller. In this solution the striped or mirrored volume can be measured by Easy Tier, but no action to optimize the performance will occur.

Figure 7-1 Single tier storage pool with striped volume

MDisks that are used in a single tier storage pool should have the same hardware characteristics, for example, the same RAID type, RAID array size, disk type, disk revolutions per minute (RPM), and controller performance characteristics.

7.2.4 Multiple tier storage pools


A multiple tiered storage pool will have a mix of MDisks with more than one type of disk tier attribute, for example, a storage pool containing a mix of generic_hdd and generic_ssd MDisks.

Chapter 7. Easy Tier

347

Figure 7-2 shows a scenario in which a storage pool is populated with two different MDisk types: one belonging to an SSD array, and one belonging to an HDD array. Although this example shows RAID5 arrays, other RAID types can be used.

Figure 7-2 Multitier storage pool with striped volume

Adding SSDs to the pool also makes additional space available for new volumes or for volume expansion.

7.2.5 Easy Tier process


The Easy Tier function has four main processes:
1. I/O Monitoring
This process operates continuously and monitors volumes for host I/O activity. It collects performance statistics for each extent and derives averages for a rolling 24-hour period of I/O activity. Easy Tier makes allowances for large block I/Os and thus only considers I/Os of up to 64 KB as migration candidates. This is an efficient process and adds negligible processing overhead to the SVC nodes.
2. Data Placement Advisor
The Data Placement Advisor uses workload statistics to make a cost benefit decision as to which extents are to be candidates for migration to a higher performance (SSD) tier. This process also identifies extents that need to be migrated back to a lower (HDD) tier.
3. Data Migration Planner
Using the extents previously identified, the Data Migration Planner step builds the extent migration plan for the storage pool.
4. Data Migrator
The Data Migrator step involves the actual movement, or migration, of the volume's extents up to, or down from, the high disk tier. The extent migration rate is capped so that a maximum of up to 30 MBps is migrated, which equates to around 3 TB a day migrated between disk tiers.

When relocating volume extents, Easy Tier performs these actions:
- It attempts to migrate the most active volume extents up to SSD first.
- To ensure that a free extent is available, a less frequently accessed extent might first need to be migrated back to HDD.
- A previous migration plan and any queued extents that are not yet relocated are abandoned.

7.2.6 Easy Tier operating modes


There are three main operating modes for Easy Tier: Off mode, Evaluation or measurement only mode, and Automatic Data Placement or extent migration mode.

Easy Tier - Off mode


With Easy Tier turned off, there are no statistics recorded and no extent migration.

Evaluation or measurement only mode


Easy Tier Evaluation or measurement only mode collects usage statistics for each extent in a single tier storage pool where the Easy Tier value is set to on for both the volume and the pool. This is typically done for a single tier pool that contains only HDDs, so that the benefits of adding SSDs to the pool can be evaluated before any major hardware acquisition. A statistics summary file, named dpa_heat.nodeid.yymmdd.hhmmss.data, is created in the /dumps directory of the SVC nodes. This file can be offloaded from the SVC nodes with PSCP -load or by using the GUI, as shown in 7.4.1, Measuring by using the Storage Advisor Tool on page 352. A web browser is used to view the report created by the tool.

Auto Data Placement or extent migration mode


In Automatic Data Placement or extent migration operating mode, the storage pool parameter -easytier must be set to on or auto, and the volumes in the pool must have -easytier on. The storage pool must also contain MDisks with different disk tiers; that is, it must be a multitier storage pool. Dynamic data movement is transparent to the host server and to application users of the data, other than providing improved performance. Extents are automatically migrated according to the rules described in 7.3.2, Implementation rules on page 350. The statistics summary file is also created in this mode. This file can be offloaded for input to the advisor tool, which produces a report on the extents moved to SSD and a prediction of the performance improvement that could be gained if more SSD capacity were available.
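Enabling automatic data placement therefore amounts to setting the attribute on the pool and on its volumes; in sketch form, using the object names from the lab configuration in 7.5:

svctask chmdiskgrp -easytier on Multi_Tier_Storage_Pool
svctask chvdisk -easytier on ITSO_Volume_10

The pool can also be left at its default value of auto, and new volumes default to on, so in many cases no explicit commands are needed.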


7.2.7 Easy Tier activation


To activate Easy Tier, set the Easy Tier value on the pool and volumes as shown in Table 7-1. The defaults are set in favor of Easy Tier: if you create a new storage pool, the -easytier value is auto; if you create a new volume, the value is on.
Table 7-1 Easy Tier parameter settings

Examples of the use of these parameters are shown in 7.5, Using Easy Tier with the SVC CLI on page 353 and 7.6, Using Easy Tier with the SVC GUI on page 359.

7.3 Easy Tier implementation considerations


In this section we describe considerations to keep in mind before implementing Easy Tier.

7.3.1 Prerequisites
No Easy Tier license is required for the SVC; the function comes as part of the V6.1 code. For Easy Tier to migrate extents, you need to have disk storage available that has different tiers, for example, a mix of SSD and HDD.

7.3.2 Implementation rules


Keep the following implementation and operation rules in mind when you use the IBM System Storage Easy Tier function on the SAN Volume Controller:
- Easy Tier automatic data placement is not supported on image mode or sequential volumes. I/O monitoring for such volumes is supported, but you cannot migrate extents on such volumes unless you convert image or sequential volume copies to striped volumes.
- Automatic data placement and extent I/O activity monitors are supported on each copy of a mirrored volume. Easy Tier works with each copy independently of the other copy.
Note: Volume mirroring can have different workload characteristics on each copy of the data because reads are normally directed to the primary copy and writes occur to both copies. Thus, the number of extents that Easy Tier migrates to the SSD tier will probably be different for each copy.
- If possible, the SAN Volume Controller creates new volumes or volume expansions by using extents from MDisks from the HDD tier. However, it uses extents from MDisks from the SSD tier if necessary.
- When a volume is migrated out of a storage pool that is managed with Easy Tier, Easy Tier automatic data placement mode is no longer active on that volume. Automatic data placement is also turned off while a volume is being migrated, even if it is between pools that both have Easy Tier automatic data placement enabled. Automatic data placement for the volume is re-enabled when the migration is complete.

7.3.3 Limitations
Limitations exist when using IBM System Storage Easy Tier on the SAN Volume Controller:
- Limitations when removing an MDisk by using the -force parameter
When an MDisk is deleted from a storage pool with the -force parameter, extents in use are migrated to MDisks in the same tier as the MDisk being removed, if possible. If insufficient extents exist in that tier, extents from the other tier are used.
- Limitations when migrating extents
When Easy Tier automatic data placement is enabled for a volume, the svctask migrateexts command-line interface (CLI) command cannot be used on that volume.
- Limitations when migrating a volume to another storage pool
When the SAN Volume Controller migrates a volume to a new storage pool, Easy Tier automatic data placement between the two tiers is temporarily suspended. After the volume is migrated to its new storage pool, Easy Tier automatic data placement between the generic SSD tier and the generic HDD tier resumes for the moved volume, if appropriate.
When the SAN Volume Controller migrates a volume from one storage pool to another, it attempts to migrate each extent to an extent in the new storage pool from the same tier as the original extent. In several cases, such as when a target tier is unavailable, the other tier is used. For example, the generic SSD tier might be unavailable in the new storage pool.
- Limitations when migrating a volume to image mode
Easy Tier automatic data placement does not support image mode. When a volume with Easy Tier automatic data placement mode active is migrated to image mode, Easy Tier automatic data placement mode is no longer active on that volume. Image mode and sequential volumes cannot be candidates for automatic data placement; however, Easy Tier does support evaluation mode for image mode volumes.

Best practices
Always set the storage pool -easytier value to on rather than to the default value of auto. This makes it easier to turn on evaluation mode for existing single tier pools, and no further changes will be needed when you move to multitier pools. See Easy Tier activation on page 350 for more information about the mix of pool and volume settings.

Using Easy Tier can make it more appropriate to use smaller storage pool extent sizes.

7.4 Measuring and activating Easy Tier


In the following sections we describe how to measure using Easy Tier and how to activate it.

7.4.1 Measuring by using the Storage Advisor Tool


The IBM Storage Advisor Tool (STAT) is a command-line tool that runs on Windows systems. It takes as input the dpa_heat files created on the SVC nodes and produces a Hypertext Markup Language (HTML) report of the measured activity. For more information, visit the following website:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S4000935
Contact your IBM Representative or IBM Business Partner for further details about the Storage Advisor Tool.

Offloading statistics
To extract the summary performance data, use one of these methods.

Using the command-line interface (CLI)


Find the most recent dpa_heat.node_name.date.time.data file in the cluster by entering the following CLI command:
svcinfo lsdumps node_id | node_name
where node_id | node_name is the node ID or name for which to list the available dpa_heat data files. Next, perform the normal PSCP -load download process:
pscp -unsafe -load saved_putty_configuration admin@cluster_ip_address:/dumps/dpa_heat.node_name.date.time.data your_local_directory

Using the GUI


If you prefer using the GUI, then navigate to the Troubleshooting Support page, as shown in Figure 7-3.

Figure 7-3 dpa_heat File Download


Running the tool


You run the tool from a command line or terminal session by specifying up to two input dpa_heat file names and directory paths; for example:
C:\Program Files\IBM\STAT>STAT dpa_heat.nodenumber.yymmdd.hhmmss.data
A file called index.html is then created in the STAT base directory. When opened with your browser, it displays a summary page as shown in Figure 7-4.

Figure 7-4 Example of STAT Summary

The distribution of hot data and cold data for each volume is shown in the volume heat distribution report. The report displays the portion of the capacity of each volume on SSD (red), and HDD (blue), as shown in Figure 7-5.

Figure 7-5 STAT Volume Heatmap Distribution sample

7.5 Using Easy Tier with the SVC CLI


This section describes the basic steps for activating Easy Tier by using the SVC command-line interface (CLI). Our example is based on the storage pool configurations shown in Figure 7-1 on page 347 and Figure 7-2 on page 348. Our environment is an SVC cluster with the following resources available:
- 1 x I/O group with two 2145-CF8 nodes
- 8 x external 73 GB SSD drives (4 x SSD per RAID5 array)
- 1 x external storage subsystem with HDDs


Deleted lines: Many lines that are not related to Easy Tier have been deleted from the command output in the examples in the following sections, to enable you to focus on the Easy Tier-related information only.

7.5.1 Initial cluster status


Example 7-1 displays the SVC cluster characteristics before multitiered storage (SSD with HDD) is added and the Easy Tier process is started. The example shows the two different tiers that are available in our SVC cluster, generic_ssd and generic_hdd. At this time, no disk capacity is allocated to the generic_ssd tier, so it shows 0.00 MB capacity.
Example 7-1 SVC cluster

IBM_2145:ITSO-CLS5:admin>svcinfo lscluster
id name location partnership bandwidth id_alias
0000020060800004 ITSO-CLS5 local 0000020060800004
IBM_2145:ITSO-CLS5:admin>svcinfo lscluster 0000020060800004
id 0000020060800004
name ITSO-CLS5
.
tier generic_ssd
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier generic_hdd
tier_capacity 18.85TB
tier_free_capacity 18.43TB

7.5.2 Turning on Easy Tier evaluation mode


Figure 7-1 on page 347 shows an existing single tier storage pool. To turn on Easy Tier evaluation mode, we need to set -easytier on for both the storage pool and the volumes in the pool. Refer to Table 7-1 on page 350 to check the required mix of parameters needed to set the volume Easy Tier status to measured. As shown in Example 7-2, we turn Easy Tier on for both the pool and volume so that the extent workload measurement is enabled. We first check and then change the pool. Then we repeat the steps for the volume.
Example 7-2 Turning on Easy Tier evaluation mode

IBM_2145:ITSO-CLS5:admin>svcinfo lsmdiskgrp -filtervalue "name=Single*"
id name status mdisk_count vdisk_count easy_tier easy_tier_status
27 Single_Tier_Storage_Pool online 3 1 off inactive
IBM_2145:ITSO-CLS5:admin>svcinfo lsmdiskgrp Single_Tier_Storage_Pool
id 27
name Single_Tier_Storage_Pool
status online
mdisk_count 3
vdisk_count 1
.
easy_tier off
easy_tier_status inactive
.
tier generic_ssd
tier_mdisk_count 0
.
tier generic_hdd
tier_mdisk_count 3
tier_capacity 200.25GB
IBM_2145:ITSO-CLS5:admin>svctask chmdiskgrp -easytier on Single_Tier_Storage_Pool
IBM_2145:ITSO-CLS5:admin>svcinfo lsmdiskgrp Single_Tier_Storage_Pool
id 27
name Single_Tier_Storage_Pool
status online
mdisk_count 3
vdisk_count 1
.
easy_tier on
easy_tier_status active
.
tier generic_ssd
tier_mdisk_count 0
.
tier generic_hdd
tier_mdisk_count 3
tier_capacity 200.25GB

------------ Now repeat for the volume ------------
IBM_2145:ITSO-CLS5:admin>svcinfo lsvdisk -filtervalue "mdisk_grp_name=Single*"
id name status mdisk_grp_id mdisk_grp_name capacity type
27 ITSO_Volume_1 online 27 Single_Tier_Storage_Pool 10.00GB striped
IBM_2145:ITSO-CLS5:admin>svcinfo lsvdisk ITSO_Volume_1
id 27
name ITSO_Volume_1
.
easy_tier off
easy_tier_status inactive
.
tier generic_ssd
tier_capacity 0.00MB
.
tier generic_hdd
tier_capacity 10.00GB
IBM_2145:ITSO-CLS5:admin>svctask chvdisk -easytier on ITSO_Volume_1
IBM_2145:ITSO-CLS5:admin>svcinfo lsvdisk ITSO_Volume_1
id 27
name ITSO_Volume_1
.
easy_tier on
easy_tier_status measured
.
tier generic_ssd
tier_capacity 0.00MB
.
tier generic_hdd
tier_capacity 10.00GB

7.5.3 Creating a multitier storage pool


With the SSD drive candidates placed into an array, we now need a pool into which the two tiers of disk storage will be placed. If you already have an HDD single tier pool (a traditional pre-SVC V6.1 pool), all you need to know is the existing MDisk group ID or name. In this example we have a storage pool available, Multi_Tier_Storage_Pool, into which we want to place our SSD arrays. After the SSD arrays are created, they appear as MDisks and are placed into the storage pool, as shown in Example 7-3. Note that the storage pool easy_tier value is set to auto because that is the default value assigned when you create a new storage pool. Also note that the SSD MDisks' default tier value is set to generic_hdd, not generic_ssd.
Example 7-3 Multitier pool creation

IBM_2145:ITSO-CLS5:admin>svcinfo lsmdiskgrp -filtervalue "name=Multi*"
id name status mdisk_count vdisk_count capacity easy_tier easy_tier_status
28 Multi_Tier_Storage_Pool online 3 1 200.25GB auto inactive
IBM_2145:ITSO-CLS5:admin>svcinfo lsmdiskgrp Multi_Tier_Storage_Pool
id 28
name Multi_Tier_Storage_Pool
status online
mdisk_count 3
vdisk_count 1
.
easy_tier auto
easy_tier_status inactive
.
tier generic_ssd
tier_mdisk_count 0
.
tier generic_hdd
tier_mdisk_count 3

IBM_2145:ITSO-CLS5:admin>svcinfo lsmdisk
mdisk_id mdisk_name status mdisk_grp_name capacity raid_level tier
299 SSD_Array_RAID5_1 online Multi_Tier_Storage_Pool 203.6GB raid5 generic_hdd
300 SSD_Array_RAID5_2 online Multi_Tier_Storage_Pool 203.6GB raid5 generic_hdd
IBM_2145:ITSO-CLS5:admin>svcinfo lsmdisk SSD_Array_RAID5_2
mdisk_id 300
mdisk_name SSD_Array_RAID5_2
status online
mdisk_grp_id 28
mdisk_grp_name Multi_Tier_Storage_Pool
capacity 203.6GB
.
raid_level raid5
tier generic_hdd

IBM_2145:ITSO-CLS5:admin>svcinfo lsmdiskgrp -filtervalue "name=Multi*"
id name mdisk_count vdisk_count capacity easy_tier easy_tier_status
28 Multi_Tier_Storage_Pool 5 1 606.00GB auto inactive
IBM_2145:ITSO-CLS5:admin>svcinfo lsmdiskgrp Multi_Tier_Storage_Pool
id 28
name Multi_Tier_Storage_Pool
status online
mdisk_count 5
vdisk_count 1
.
easy_tier auto
easy_tier_status inactive
.
tier generic_ssd
tier_mdisk_count 0
.
tier generic_hdd
tier_mdisk_count 5

7.5.4 Setting the disk tier


As shown in Example 7-3 on page 356, MDisks that are detected have a default disk tier of generic_hdd. Easy Tier is also still inactive for the storage pool because we do not yet have a true multidisk tier pool. To activate the pool we have to reset the SSD MDisks to their correct generic_ssd tier. Example 7-4 shows how to modify the SSD disk tier.
Example 7-4 Changing an SSD disk tier to generic_ssd

IBM_2145:ITSO-CLS5:admin>svcinfo lsmdisk SSD_Array_RAID5_1
id 299
name SSD_Array_RAID5_1
status online
.
tier generic_hdd
IBM_2145:ITSO-CLS5:admin>svctask chmdisk -tier generic_ssd SSD_Array_RAID5_1
IBM_2145:ITSO-CLS5:admin>svctask chmdisk -tier generic_ssd SSD_Array_RAID5_2
IBM_2145:ITSO-CLS5:admin>svcinfo lsmdisk SSD_Array_RAID5_1
id 299
name SSD_Array_RAID5_1
status online
.
tier generic_ssd
IBM_2145:ITSO-CLS5:admin>svcinfo lsmdiskgrp Multi_Tier_Storage_Pool
id 28
name Multi_Tier_Storage_Pool
status online
mdisk_count 5
vdisk_count 1
.
easy_tier auto
easy_tier_status active
.
tier generic_ssd
tier_mdisk_count 2
tier_capacity 407.00GB
.
tier generic_hdd
tier_mdisk_count 3

7.5.5 Checking a volume's Easy Tier mode


To check the Easy Tier operating mode on a volume, we need to display its properties using the lsvdisk command. An automatic data placement mode volume will have its pool value set to ON or AUTO, and the volume set to ON. The CLI volume easy_tier_status will be displayed as active, as shown in Example 7-5. An evaluation mode volume will have both the pool and volume value set to ON. However, the CLI volume easy_tier_status will be shown as measured, as seen in Example 7-2 on page 354.
Example 7-5 Checking a volumes easy_tier_status

IBM_2145:ITSO-CLS5:admin>svcinfo lsvdisk ITSO_Volume_10
id 28
name ITSO_Volume_10
mdisk_grp_name Multi_Tier_Storage_Pool
capacity 10.00GB
type striped
.
easy_tier on
easy_tier_status active
.
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 10.00GB

The volume in the example will be measured by Easy Tier, and hot extents will be migrated from the hdd tier MDisks to the ssd tier MDisks. Also note that the generic_hdd tier still holds the entire capacity of the volume because the generic_ssd capacity value is 0.00 MB. The allocated capacity on the generic_hdd tier will gradually change as Easy Tier optimizes performance by moving extents into the generic_ssd tier.

7.5.6 Final cluster status


Example 7-6 shows the SVC cluster characteristics after adding multitiered storage (SSD with HDD).
Example 7-6 SVC Multi-Tier cluster

IBM_2145:ITSO-CLS5:admin>svcinfo lscluster ITSO-CLS5
id 000002006A800002
name ITSO-CLS5
.
tier generic_ssd
tier_capacity 407.00GB
tier_free_capacity 100.00GB
tier generic_hdd
tier_capacity 18.85TB
tier_free_capacity 10.40TB

As you can now see, we have two different tiers available in our SVC cluster, generic_ssd and generic_hdd. At this time, extents are also in use on both the generic_ssd tier and the generic_hdd tier; see the tier_free_capacity values. However, we cannot tell from this command whether the SSD storage is being used by the Easy Tier process. To determine whether Easy Tier is actively measuring or migrating extents within the cluster, you need to view the volume status, as shown previously in Example 7-5 on page 358.

7.6 Using Easy Tier with the SVC GUI


This section describes the basic steps to activate Easy Tier by using the web interface (GUI). Our example is based on the storage pool configurations shown in Figure 7-1 on page 347 and Figure 7-2 on page 348. Our environment is an SVC cluster with the following resources available:
- 1 x I/O group with two 2145-CF8 nodes
- 8 x external 73 GB SSD drives (4 x SSD per RAID5 array)
- 1 x external storage subsystem with HDDs

7.6.1 Setting the disk tier on MDisks


When displaying the storage pool you can see that Easy Tier is inactive, even though there are SSD MDisks in the pool as shown in Figure 7-6.

Figure 7-6 GUI select MDisk to change tier

This is because, by default, all MDisks are initially discovered as Hard Disk Drives (HDDs); see the MDisk properties panel shown in Figure 7-7 on page 360.


Figure 7-7 MDisk default tier is Hard Disk Drive

Therefore, for Easy Tier to take effect, you need to change the disk tier. Right-click the selected MDisk and choose Select Tier, as shown in Figure 7-8.

Figure 7-8 Select the Tier

Now set the MDisk Tier to Solid-State Drive, as shown in Figure 7-9 on page 361.


Figure 7-9 GUI Setting Solid-State Drive tier

The MDisk now has the correct tier, and its properties show the values expected for a multitiered pool, as shown in Figure 7-10.

Figure 7-10 Show MDisk details Tier and RAID level

7.6.2 Checking Easy Tier status


Now that the SSDs are recognized by the pool as Solid-State Drives, the Easy Tier function becomes active, as shown in Figure 7-11 on page 362. After the pool has an Easy Tier status of active, the automatic data relocation process begins for the volumes in the pool, because the default Easy Tier setting for volumes is ON.
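You can also verify the same status from the CLI. The following commands are a minimal sketch, assuming the pool and volume names used earlier in this chapter (Multi_Tier_Storage_Pool and ITSO_Volume_10); substitute your own object names and check the easy_tier and easy_tier_status fields in the output:

svcinfo lsmdiskgrp Multi_Tier_Storage_Pool
svcinfo lsvdisk ITSO_Volume_10

An easy_tier_status of active for both the pool and the volume indicates that automatic data placement is running, which matches the status shown in the GUI.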


Figure 7-11 Storage Pool with Easy Tier active


Chapter 8. Advanced Copy Services


In this chapter we describe the IBM System Storage SAN Volume Controller (SVC) Advanced Copy Services: FlashCopy, Metro Mirror, and Global Mirror. In Chapter 9, SAN Volume Controller operations using the command-line interface on page 439, we explain how to use the command-line interface with Advanced Copy Services. In Chapter 10, SAN Volume Controller operations using the GUI on page 579, we explain how to use the GUI with Advanced Copy Services.


8.1 FlashCopy
The FlashCopy function of the IBM System Storage SAN Volume Controller (SVC) provides the capability to perform a point-in-time copy of one or more volumes. In this section we describe the inner workings of FlashCopy and provide details of its configuration and use. You can use FlashCopy to help you solve the critical but challenging task of creating consistent copies of data sets while they remain online and actively in use. Because the copy is performed at the block level, it operates below the host operating system and cache and is therefore transparent to the host. While the FlashCopy operation is performed, the source volume is frozen briefly to initialize the FlashCopy bitmap and then I/O is allowed to resume. Although several FlashCopy options require the data to be copied from the source to the target in the background, which can take a length of time to complete, the resulting data on the target volume is presented so that the copy appears to have completed immediately.

8.1.1 Business requirement


The business applications for FlashCopy are wide ranging. Common use cases for FlashCopy include:
- Creating consistent backups of dynamically changing data
- Creating consistent copies of production data to facilitate data movement or migration between hosts
- Creating copies of production datasets for application development and testing
- Creating copies of production datasets for auditing purposes and data mining
These use cases are discussed in more detail in the following sections.

8.1.2 Backup
FlashCopy does not reduce the time it takes to perform a backup. However, it can be used to minimize and, under certain conditions, eliminate application downtime associated with performing backups. After the FlashCopy is performed, the resulting image of the data can be backed up to tape. After the copy to tape has been completed, the image data is redundant and the target volumes can be discarded. Usually when FlashCopy is used for backup purposes, the target data is managed as read-only.
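As a minimal sketch of this use case from the CLI, the following commands create a mapping with no background copy, start it, and later remove it after the tape backup completes. The volume and mapping names are assumptions for illustration only; the target volume must already exist and be the same size as the source:

svctask mkfcmap -source DB_Source -target DB_Backup_Target -name DB_Backup_Map -copyrate 0
svctask startfcmap -prep DB_Backup_Map

After the backup application has copied DB_Backup_Target to tape, the mapping can be stopped and deleted:

svctask stopfcmap DB_Backup_Map
svctask rmfcmap -force DB_Backup_Map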

8.1.3 Restore
FlashCopies can be taken periodically and the targets left online so that data can be rapidly restored from them. A target can be used to perform a restore of individual files, or the entire source volume can be restored if required.

8.1.4 Moving and migrating data


FlashCopy can be used to facilitate the movement or migration of data between hosts while minimizing downtime for applications. FlashCopy will allow application data to be copied from source volumes to new target volumes while applications remain online. After the volumes are fully copied and synchronized, the application can be brought down and then immediately brought back up on the new server accessing the new FlashCopy target volumes.

8.1.5 Application testing


It is often important to test a new version of an application or operating system using actual production data. FlashCopy makes this type of testing easy to accomplish without putting the production data at risk or requiring downtime to create a consistent copy.

8.1.6 Host considerations to ensure FlashCopy integrity


To ensure the integrity of the copy that is made, it is necessary to flush the host cache for any outstanding reads or writes prior to performing the FlashCopy operation. Failing to do so will produce what is referred to as a crash consistent copy, meaning the resulting copy will require the same type of recovery procedure (such as log replay and filesystem checks) as is required following a host crash. Various operating systems and applications provide facilities to stop I/O operations and ensure all data is flushed from host cache. If these facilities are available, they can be used to prepare and start a FlashCopy operation. When this type of facility is not available, then the host cache must be flushed manually by quiescing the application and unmounting the filesystem or drives.

8.1.7 FlashCopy attributes


The FlashCopy function in SVC possesses the following attributes:
- The target is the time-zero copy of the source (known as FlashCopy mapping targets).
- FlashCopy produces an exact copy of the source volume, including any metadata that was written by the host operating system, logical volume manager, and applications.
- The source volume and target volume are available (almost) immediately following the FlashCopy operation.
- The source and target volumes must be the same virtual size.
- The source and target volumes must be on the same SVC cluster.
- The source and target volumes do not need to be in the same I/O group or storage pool.
- The storage pool extent sizes can be different between the source and target.
- The source volumes can have up to 256 target volumes (Multiple Target FlashCopy).
- The target volumes can be the source volumes for other FlashCopy relationships (Cascaded FlashCopy).
- Consistency Groups are supported to enable FlashCopy across multiple volumes.
- The target volume can be updated independently of the source volume.
- Bitmaps governing I/O redirection (I/O indirection layer) are maintained in both nodes of the SVC I/O Group to prevent a single point of failure.
- FlashCopy mapping and Consistency Groups can be automatically withdrawn after the completion of the background copy.
- Thin-provisioned FlashCopy will only consume disk space when updates are made to the source or target data and not for the entire capacity of a volume copy.
- FlashCopy licensing is based on the virtual capacity of the source volumes.
- Incremental FlashCopy copies all of the data for the first FlashCopy and then only the changes for all subsequent FlashCopy. Incremental FlashCopy can substantially reduce the time required to recreate an independent image.
- Reverse FlashCopy enables FlashCopy targets to become restore points for the source without breaking the FlashCopy relationship and without having to wait for the original copy operation to complete.
- The maximum number of supported FlashCopy mappings is 8192 per SVC cluster.
- The size of the source and target volumes cannot be altered (increased or decreased) while a FlashCopy mapping is defined.

8.2 Reverse FlashCopy


Reverse FlashCopy enables FlashCopy targets to become restore points for the source without breaking the FlashCopy relationship and without having to wait for the original copy operation to complete. It supports multiple targets and thus multiple rollback points. A key advantage of the SVC Multiple Target Reverse FlashCopy function is that the reverse FlashCopy does not destroy the original target, thus allowing processes using the target, such as a tape backup, to continue uninterrupted. SVC also provides the ability to create an optional copy of the source volume prior to starting the reverse copy operation. This provides the ability to restore back to the original source data, which can be useful for diagnostic purposes.

The steps required to restore from an on-disk backup are listed here:
1. (Optional) Create a new target volume (volume Z) and use FlashCopy to copy the production volume (volume X) onto the new target for later problem analysis.
2. Create a new FlashCopy map with the backup to be restored (volume Y or volume W) as the source volume and volume X as the target volume, if this map does not already exist.
3. Start the FlashCopy map (volume Y to volume X) with the new -restore option to copy the backup data onto the production disk.
4. The production disk is instantly available with the backup data.

Figure 8-1 on page 367 shows an example of Reverse FlashCopy.


Figure 8-1 Reverse FlashCopy

Note that regardless of whether the initial FlashCopy map (volume X to volume Y) is incremental, the Reverse FlashCopy operation only copies the modified data. Consistency Groups are reversed by creating a set of new reverse FlashCopy maps and adding them to a new reverse Consistency Group. Consistency Groups cannot contain more than one FlashCopy map with the same target volume.
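The following is a minimal CLI sketch of the restore sequence described above. The mapping and volume names are assumptions for illustration; volume_Y is an existing FlashCopy target holding the backup image, and volume_X is the production volume:

svctask mkfcmap -source volume_Y -target volume_X -name Restore_Map
svctask startfcmap -prep -restore Restore_Map

While the restore mapping is copying, volume_X immediately presents the backup image, and the original mapping from volume_X to volume_Y remains defined.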

8.2.1 FlashCopy and Tivoli Storage Manager


The management of many large FlashCopy relationships and Consistency Groups is a complex task without a form of automation for assistance. IBM Tivoli FlashCopy Manager V2.2 provides integration between the SVC and Tivoli Storage Manager for Advanced Copy Services, providing application-aware backup and restore by leveraging the SVC FlashCopy features and function. Figure 8-2 on page 368 shows the Tivoli Storage Manager for Advanced Copy Services features.


Figure 8-2 Tivoli Storage Manager for Advanced Copy Services features

Tivoli FlashCopy Manager provides many of the features of Tivoli Storage Manager for Advanced Copy Services without the requirement to use Tivoli Storage Manager. With Tivoli FlashCopy Manager, you can coordinate and automate host preparation steps before issuing FlashCopy start commands to ensure that a consistent backup of the application is made. You can put databases into hot backup mode and flush filesystem cache prior to starting the FlashCopy. FlashCopy Manager also allows for easier management of on-disk backups using FlashCopy, and provides a simple interface to perform the reverse operation. Figure 8-3 on page 369 shows the FlashCopy Manager feature.


Figure 8-3 Tivoli Storage Manager FlashCopy Manager features

Describing Tivoli Storage Manager FlashCopy Manager is beyond the scope of this publication.

8.3 FlashCopy functional overview


FlashCopy works by defining a FlashCopy mapping that consists of one source volume together with one target volume. Multiple FlashCopy mappings (source-to-target relationships) can be defined, and point-in-time consistency can be maintained across multiple individual mappings using Consistency Groups. See Consistency Group with Multiple Target FlashCopy on page 373 for more information about this topic. When FlashCopy is started, an effective copy of a source volume to a target volume has been created. The content of the source volume is immediately presented on the target volume and the original content of the target volume is lost. This FlashCopy operation is also referred to as a time-zero copy (T0 ). Immediately following the FlashCopy operation, both the source and target volumes are available for use. The FlashCopy operation creates a bitmap that is referenced and maintained to direct I/O requests within the source and target relationship. This bitmap is updated to reflect the active block locations as data is copied in the background from the source to target and updates are made to the source. For more details about background copy, see 8.4.5, Grains and the FlashCopy bitmap on page 374. Figure 8-4 on page 370 illustrates the redirection of the host I/O toward the source volume and the target volume.


Figure 8-4 Redirection of host I/O

8.4 Implementing SVC FlashCopy


In the following section we describe how FlashCopy is implemented in the SVC.

8.4.1 FlashCopy mappings


FlashCopy occurs between a source volume and a target volume. The source and target volumes must be the same size. The minimum granularity that SVC supports for FlashCopy is an entire volume; it is not possible to use FlashCopy to copy only part of a volume. The source and target volumes must belong to the same SVC cluster, but they do not have to be in the same I/O Group or storage pool. FlashCopy associates a source volume to a target volume through FlashCopy mapping. Volumes that are members of a FlashCopy mapping cannot have their size increased or decreased while they are members of the FlashCopy mapping. A FlashCopy mapping is the act of creating a relationship between a source volume and a target volume. FlashCopy mappings can be either stand-alone or a member of a Consistency Group. You can perform the actions of preparing, starting, or stopping FlashCopy on either a stand-alone mapping or a Consistency Group. Figure 8-5 illustrates the concept of FlashCopy mapping.

Figure 8-5 FlashCopy mapping
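As a minimal sketch of defining and managing a stand-alone mapping from the CLI, the following commands create a mapping, prepare it, check its state, and start it. The volume and mapping names are assumptions for illustration only:

svctask mkfcmap -source Prod_Vol_01 -target Copy_Vol_01 -name FCMap_01
svctask prestartfcmap FCMap_01
svcinfo lsfcmap FCMap_01
svctask startfcmap FCMap_01

Run the startfcmap command after lsfcmap reports the prepared state. Alternatively, svctask startfcmap -prep FCMap_01 combines the prepare and start steps into a single command.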


8.4.2 Multiple Target FlashCopy


SVC supports up to 256 target volumes from a single source volume. Each copy is managed by a unique mapping. In general, each mapping acts independently and is not affected by other mappings sharing the same source volume. Figure 8-6 illustrates the Multiple Target FlashCopy implementation.

Figure 8-6 Multiple Target FlashCopy implementation

Figure 8-6 shows four targets and mappings taken from a single source, along with their interdependencies. In this example Target 1 is the oldest (as measured from the time it was started) through to Target 4, which is the newest. The ordering is important because of the way in which data is copied when multiple target volumes are defined and because of the dependency chain that results. A write to the source volume does not cause its data to be copied to all of the targets. Instead, it is copied to the newest target volume only (Target 4 in Figure 8-6). The older targets will refer to new targets first before referring to the source. From the point of view of an intermediate target disk (neither the oldest or the newest), it treats the set of newer target volumes and the true source volume as a type of composite source. It treats all older volumes as a kind of target (and behaves like a source to them). If the mapping for an intermediate target volume shows 100% progress, its target volume contains a complete set of data. In this case, mappings treat the set of newer target volumes, up to and including the 100% progress target, as a form of composite source. A dependency relationship exists between a particular target and all newer targets (up to and including a target that shows 100% progress) that share the same source until all data has been copied to this target and all older targets. You can read more about Multiple Target FlashCopy in 8.4.6, Interaction and dependency between Multiple Target FlashCopy mappings on page 375.

8.4.3 Consistency Groups


Consistency Groups address the requirement to preserve point-in-time data consistency across multiple volumes for applications having related data that spans multiple volumes. For these volumes, Consistency Groups maintain the integrity of the FlashCopy by ensuring that dependent writes are executed in the applications intended sequence.


When Consistency Groups are used, the FlashCopy commands are issued to the FlashCopy Consistency Group, which performs the operation on all FlashCopy mappings contained within the Consistency Group. Figure 8-7 illustrates a Consistency Group consisting of two FlashCopy mappings.

Figure 8-7 FlashCopy Consistency Group

Note: After an individual FlashCopy mapping has been added to a Consistency Group, it can only be managed as part of the group. Operations such as prepare, start, and stop are no longer allowed on the individual mapping.

Dependent writes
To illustrate why it is crucial to use Consistency Groups when a data set spans multiple volumes, consider the following typical sequence of writes for a database update transaction:
1. A write is executed to update the database log, indicating that a database update is about to be performed.
2. A second write is executed to perform the actual update to the database.
3. A third write is executed to update the database log, indicating that the database update has completed successfully.
The database ensures the correct ordering of these writes by waiting for each step to complete before starting the next step. However, if the database log (updates 1 and 3) and the database itself (update 2) are on separate volumes, then it is possible for the FlashCopy of the database volume to occur prior to the FlashCopy of the database log. This can result in the target volumes seeing writes (1) and (3) but not (2), because the FlashCopy of the database volume occurred before the write was completed. In this case, if the database was restarted using the backup that was made from the FlashCopy target volumes, the database log indicates that the transaction had completed successfully when in fact it had not, because the FlashCopy of the volume with the database file was started (bitmap was created) before the write had completed to the volume. Therefore, the transaction is lost and the integrity of the database is in question.


To overcome the issue of dependent writes across volumes and to create a consistent image of the client data, it is necessary to perform a FlashCopy operation on multiple volumes as an atomic operation. To accomplish this the SVC supports the concept of Consistency Groups. A FlashCopy Consistency Group can contain up to 512 FlashCopy mappings (this is the maximum number of FlashCopy mappings supported by the SVC cluster). FlashCopy commands can then be issued to the FlashCopy Consistency Group and thereby simultaneously for all of the FlashCopy mappings that are defined in the Consistency Group. For example, when issuing a FlashCopy start command to the Consistency Group, all of the FlashCopy mappings in the Consistency Group are started at the same time, resulting in a point-in-time copy that is consistent across all of the FlashCopy mappings that are contained in the Consistency Group.
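The following is a minimal CLI sketch of creating a Consistency Group, adding two mappings to it, and starting the group so that both mappings are triggered at the same point in time. The group, volume, and mapping names are assumptions for illustration only:

svctask mkfcconsistgrp -name DB_ConsistGrp
svctask mkfcmap -source DB_Data_Vol -target DB_Data_Copy -name DB_Data_Map -consistgrp DB_ConsistGrp
svctask mkfcmap -source DB_Log_Vol -target DB_Log_Copy -name DB_Log_Map -consistgrp DB_ConsistGrp
svctask startfcconsistgrp -prep DB_ConsistGrp

After the group has been started, all mappings in it are consistent with each other, and the individual mappings can no longer be prepared, started, or stopped separately.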

Consistency Group with Multiple Target FlashCopy


It is important to note that a Consistency Group aggregates FlashCopy mappings, not volumes. Thus, where a source volume has multiple FlashCopy mappings, they can be in the same or separate Consistency Groups. If a particular volume is the source volume for multiple FlashCopy mappings, you might want to create separate Consistency Groups to separate each mapping of the same source volume. If the source volume with multiple target volumes is in the same Consistency Group, the resulting FlashCopy will produce multiple identical copies of the source data.

Maximum configurations
Table 8-1 lists the FlashCopy properties and maximum configurations.
Table 8-1 FlashCopy properties and maximum configuration

FlashCopy targets per source: 256
This maximum is the maximum number of FlashCopy mappings that can exist with the same source volume.

FlashCopy mappings per cluster: 4,096
The number of mappings is no longer limited by the number of volumes in the cluster, so the FlashCopy component limit applies.

FlashCopy Consistency Groups per cluster: 127
This maximum is an arbitrary limit that is policed by the software.

FlashCopy volumes per I/O Group: 1,024
This maximum is a limit on the quantity of FlashCopy mappings using bitmap space from this I/O Group. This maximum configuration will consume all 512 MB of bitmap space for the I/O Group and allow no Metro and Global Mirror bitmap space. The default is 40 TB.

FlashCopy mappings per Consistency Group: 512
This limit is due to the time that is taken to prepare a Consistency Group with a large number of mappings.


8.4.4 FlashCopy indirection layer


The FlashCopy indirection layer governs the I/O to both the source and target volumes when a FlashCopy mapping is started, which is done using a FlashCopy bitmap. The purpose of the FlashCopy indirection layer is to enable both the source and target volumes for read and write I/O immediately after the FlashCopy has been started. To illustrate how the FlashCopy indirection layer works, we examine what happens when a FlashCopy mapping is prepared and subsequently started. When a FlashCopy mapping is prepared and started, the following sequence is applied:
1. Flush write cache to the source volume or volumes that are part of a Consistency Group.
2. Put cache into write-through mode on the source volumes.
3. Discard cache for the target volumes.
4. Establish a sync point on all of the source volumes in the Consistency Group (creating the FlashCopy bitmap).
5. Ensure that the indirection layer governs all of the I/O to the source volumes and target volumes.
6. Enable cache on both the source volumes and target volumes.
FlashCopy provides the semantics of a point-in-time copy using the indirection layer, which intercepts I/O directed at either the source or target volumes. The act of starting a FlashCopy mapping causes this indirection layer to become active in the I/O path, which occurs automatically across all FlashCopy mappings in the Consistency Group. The indirection layer then determines how each I/O is to be routed based on the following factors:
- The volume and the logical block address (LBA) to which the I/O is addressed
- Its direction (read or write)
- The state of an internal data structure, the FlashCopy bitmap
The indirection layer allows the I/O to go through to the underlying volume; redirects the I/O from the target volume to the source volume; or queues the I/O while it arranges for data to be copied from the source volume to the target volume. To explain in more detail which action is applied for each I/O, we first look at the FlashCopy bitmap.

8.4.5 Grains and the FlashCopy bitmap


When data is copied between volumes, it is copied in units of address space known as grains. The grain size is 64 KB or 256 KB. The FlashCopy bitmap contains one bit for each grain, and it is used to keep track of whether the source grain has been copied to the target. The FlashCopy bitmap dictates read and write behavior for both the source and target volumes.
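The grain size is selected when the mapping is created and cannot be changed afterward. As a brief sketch, assuming a mapping named FCMap_01 between existing, equally sized volumes, the grain size can be specified at creation time as follows; if -grainsize is omitted, the default of 256 KB is used:

svctask mkfcmap -source Prod_Vol_01 -target Copy_Vol_01 -name FCMap_01 -grainsize 64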

Source reads
Reads are performed from the source volume. This is the same as for non-FlashCopy volumes.

Source writes
Writes to the source cause the grain to be copied to the target if it has not already been copied; the write is then performed to the source.


Target reads
Reads are performed from the target if the grain has already been copied. Otherwise, the read is performed from the source and no copy is performed.

Target writes
Writes to the target cause the grain to be copied from the source to the target unless the entire grain is being written; the write then completes to the target.

The FlashCopy indirection layer algorithm


Imagine the FlashCopy indirection layer as the I/O traffic director when a FlashCopy mapping is active. The I/O is intercepted and handled according to whether it is directed at the source volume or at the target volume, depending on the nature of the I/O (read or write) and the state of the grain (whether it has been copied). In Figure 8-8, we illustrate how the background copy runs while I/Os are handled according to the indirection layer algorithm.

Figure 8-8 I/O processing with FlashCopy

8.4.6 Interaction and dependency between Multiple Target FlashCopy mappings


Figure 8-9 on page 376 represents a set of four FlashCopy mappings that share a common source. The FlashCopy mappings will target volumes Target 0, Target 1, Target 2, and Target 3.


Figure 8-9 Interactions between MTFC mappings

- Target 0 is not dependent on a source, because it has completed copying. Target 0 has two dependent mappings (Target 1 and Target 2).
- Target 1 is dependent upon Target 0. It will remain dependent until all of Target 1 has been copied. Target 2 is dependent on it, because Target 2 is 20% copy complete. After all of Target 1 has been copied, it can then move to the idle_copied state.
- Target 2 is dependent upon Target 0 and Target 1 and will remain dependent until all of Target 2 has been copied. No target is dependent on Target 2, so when all of the data has been copied to Target 2, it can move to the idle_copied state.
- Target 3 has actually completed copying, so it is not dependent on any other maps.

Target writes (with Multiple Target FlashCopy)


A write to an intermediate or newest target volume must consider the state of the grain within its own mapping, and the state of the grain of the next oldest mapping. If the grain of the next oldest mapping has not yet been copied, it must be copied before the write is allowed to proceed to preserve the contents of the next oldest mapping. The data written to the next oldest mapping comes from a target or source. If the grain in the target being written has not yet been copied, the grain is copied from the oldest already copied grain in the mappings that are newer than the target, or the source if none are already copied. After this copy has been done, the write can be applied to the target.

Target reads (with Multiple target FlashCopy)


If the grain being read has already been copied from the source to the target, then the read simply returns data from the target being read. If the grain has not been copied, then each of the newer mappings is examined in turn and the read is performed from the first copy found. If none are found, then the read is performed from the source.


Stopping the copy process


When a stop command is issued to a mapping that contains a target that has dependent mappings, the mapping will enter the Stopping state and begin copying all grains that are uniquely held on the target volume of the mapping being stopped to the next oldest mapping that is in the Copying state. The mapping will remain in the Stopping state until all grains have been copied and then enter the Stopped state.

Note about stopping the copy process: The stopping copy process can be ongoing for several mappings sharing the same source at the same time. At the completion of this process, the mapping will automatically make an asynchronous state transition to the Stopped state, or to the idle_copied state if the mapping was in the Copying state with progress = 100%.

For example, if the mapping associated with Target 0 was issued a stopfcmap or stopfcconsistgrp command, then Target 0 enters the Stopping state while a process copies the data of Target 0 to Target 1. After all of the data has been copied, Target 0 enters the Stopped state, and Target 1 is no longer dependent upon Target 0, but Target 1 remains dependent on Target 2.
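As a brief sketch, a mapping is stopped with the stopfcmap command (or stopfcconsistgrp for a Consistency Group), and its state can be watched with lsfcmap; the status field shows stopping, then stopped or idle_copied. The mapping name is an assumption for illustration:

svctask stopfcmap FCMap_01
svcinfo lsfcmap FCMap_01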

8.4.7 Summary of the FlashCopy indirection layer algorithm


Table 8-2 summarizes the indirection layer algorithm.
Table 8-2 Summary table of the FlashCopy indirection layer algorithm

Source volume, grain not yet copied:
- Read: Read from the source volume.
- Write: Copy the grain to the most recently started target for this source, then write to the source.

Source volume, grain already copied:
- Read: Read from the source volume.
- Write: Write to the source volume.

Target volume, grain not yet copied:
- Read: If any newer targets exist for this source in which this grain has already been copied, read from the oldest of these targets. Otherwise, read from the source.
- Write: Hold the write. Check the dependency target volumes to see if the grain has been copied. If the grain is not already copied to the next oldest target for this source, copy the grain to the next oldest target. Then, write to the target.

Target volume, grain already copied:
- Read: Read from the target volume.
- Write: Write to the target volume.

8.4.8 Interaction with the cache


This copy-on-write process introduces significant latency into write operations. To isolate the active application from this additional latency, the FlashCopy indirection layer is placed logically beneath the cache. Therefore, the additional latency introduced by the copy-on-write process is only encountered by internal cache destage operation and not by the application. In Figure 8-10 on page 378, we illustrate the logical placement of the FlashCopy indirection layer.


Figure 8-10 Logical placement of the FlashCopy indirection layer

8.4.9 FlashCopy and image mode disks


FlashCopy can be used with image mode volumes. Because the source and target volumes must be exactly the same size, when creating a FlashCopy mapping you must create a target volume with the exact same size as the image mode volume. To accomplish this, use the svcinfo lsvdisk -bytes volumeName command. The size in bytes is then used to create the volume to use in the FlashCopy mapping. In Example 8-1, we list the size of the Image_volume_A volume. Subsequently, the volume_A_copy volume is created, specifying the same size.
Example 8-1 Listing the size of a volume in bytes and creating a volume of equal size

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk Image_volume_A
id 8
name Image_volume_A
IO_group_id 0
IO_group_name io_grp0
status online
storage_pool_id 2
storage_pool_name Storage_Pool_Image
capacity 36.0GB
type image
.
.
.
autoexpand
warning
grainsize

IBM_2145:ITSO-CLS1:admin>svctask mkvolume -size 36 -unit gb -name volume_A_copy -mdiskgrp Storage_Pool_DS47 -vtype striped -iogrp 1
Virtual Disk, id [19], successfully created


Tip: Alternatively, you can use the expandvolumesize and shrinkvolumesize volume commands to modify the size of the volume. See 9.5.10, Expanding a volume on page 471 and 9.5.16, Shrinking a volume on page 476 for more information. You can use an image mode volume as either a FlashCopy source volume or target volume.

8.4.10 FlashCopy mapping events


In this section, we describe the events that modify the states of a FlashCopy. We describe the mapping events in Table 8-3.

Overview of a FlashCopy sequence of events:
1. Associate the source data set with a target location (one or more source and target volumes).
2. Create a FlashCopy mapping for each source volume to the corresponding target volume. The target volume must be equal in size to the source volume.
3. Discontinue access to the target (application dependent).
4. Prepare (pre-trigger) the FlashCopy:
   a. Flush cache for the source.
   b. Discard cache for the target.
5. Start (trigger) the FlashCopy:
   a. Pause I/O (briefly) on the source.
   b. Resume I/O on the source.
   c. Start I/O on the target.
Table 8-3 Mapping events

Create
A new FlashCopy mapping is created between the specified source volume and the specified target volume. The operation fails if any of the following conditions is true:
- For SAN Volume Controller software Version 4.1.0 or earlier, the source or target volume is already a member of a FlashCopy mapping.
- For SAN Volume Controller software Version 4.2.0 or later, the source or target volume is already a target volume of a FlashCopy mapping.
- For SAN Volume Controller software Version 4.2.0 or later, the source volume is already a member of 16 FlashCopy mappings.
- For SAN Volume Controller software Version 4.3.0 or later, the source volume is already a member of 256 FlashCopy mappings.
- The node has insufficient bitmap memory.
- The source and target volume sizes differ.

Prepare
The prestartfcmap or prestartfcconsistgrp command is directed to either a Consistency Group for FlashCopy mappings that are members of a normal Consistency Group or to the mapping name for FlashCopy mappings that are stand-alone mappings. The prestartfcmap or prestartfcconsistgrp command places the FlashCopy mapping into the Preparing state. Important: The prestartfcmap or prestartfcconsistgrp command can corrupt any data that previously resided on the target volume because cached writes are discarded. Even if the FlashCopy mapping is never started, the data from the target might have logically changed during the act of preparing to start the FlashCopy mapping.

Flush done
The FlashCopy mapping automatically moves from the Preparing state to the Prepared state after all cached data for the source is flushed and all cached data for the target is no longer valid.

Start
When all of the FlashCopy mappings in a Consistency Group are in the Prepared state, the FlashCopy mappings can be started. To preserve the cross-volume Consistency Group, the start of all of the FlashCopy mappings in the Consistency Group must be synchronized correctly with respect to I/Os that are directed at the volumes by using the startfcmap or startfcconsistgrp command. The following actions occur when the startfcmap or startfcconsistgrp command runs:
- New reads and writes to all source volumes in the Consistency Group are paused in the cache layer until all ongoing reads and writes beneath the cache layer are completed.
- After all FlashCopy mappings in the Consistency Group are paused, the internal cluster state is set to allow FlashCopy operations.
- After the cluster state is set for all FlashCopy mappings in the Consistency Group, read and write operations continue on the source volumes.
- The target volumes are brought online.
As part of the startfcmap or startfcconsistgrp command, read and write caching is enabled for both the source and target volumes.

Modify
You can modify the following FlashCopy mapping properties:
- FlashCopy mapping name
- Clean rate
- Consistency Group
- Copy rate (for background copy)
- Automatic deletion of the mapping when the background copy is complete

Stop
There are two separate mechanisms by which a FlashCopy mapping can be stopped: you have issued a command, or an I/O error has occurred.

Delete
This command requests that the specified FlashCopy mapping be deleted. If the FlashCopy mapping is in the Stopped state, the force flag must be used.

Flush failed
If the flush of data from the cache cannot be completed, the FlashCopy mapping enters the Stopped state.

Copy complete
After all of the source data has been copied to the target and there are no dependent mappings, the state is set to Copied. If the option to automatically delete the mapping after the background copy completes is specified, the FlashCopy mapping is automatically deleted. If this option is not specified, the FlashCopy mapping is not automatically deleted and can be reactivated by preparing and starting again.

Bitmap online/offline
The node has failed.

8.4.11 FlashCopy mapping states


In this section, we describe the states of a FlashCopy mapping in more detail.

Idle_or_copied
Read and write caching is enabled for both the source and the target. A FlashCopy mapping exists between the source and target but the source and target behave as independent volumes in this state.

Copying
The FlashCopy indirection layer governs all I/O to the source and target volumes while the background copy is running. The background copy process is copying grains from the source to the target. Reads and writes are executed on the target as though the contents of the source were instantaneously copied to the target during the startfcmap or startfcconsistgrp command. The source and target can be independently updated. Internally, the target depends on the source for certain tracks. Read and write caching is enabled on the source and the target.

Stopped
The FlashCopy was stopped either by a user command or by an I/O error. When a FlashCopy mapping is stopped, the integrity of the data on the target volume is lost. Therefore, while the FlashCopy mapping is in this state, the target volume is in the Offline state. To regain access to the target, the mapping must be started again (the previous point-in-time will be lost) or the FlashCopy mapping must be deleted. The source volume is accessible, and read/write caching is enabled for the source. In the Stopped state, a mapping can either be prepared again or deleted.

Stopping
The mapping is in the process of transferring data to a dependent mapping. The behavior of the target volume depends on whether the background copy process had completed while the mapping was in the Copying state. If the copy process had completed, the target volume remains online while the stopping copy process completes. If the copy process had not completed, data in the cache is discarded for the target volume. The target volume is taken offline, and the stopping copy process runs. After the data has been copied, a stop complete asynchronous event notification is issued. The mapping will move to the Idle/Copied state if the background copy has completed, or to the Stopped state if the background copy has not completed. The source volume remains accessible for I/O.


Suspended
The FlashCopy was in the Copying or Stopping state when access to the metadata was lost. As a result both the source and target volumes are offline and the background copy process has been halted. When the metadata becomes available again, the FlashCopy mapping will return to the Copying or Stopping state. Access to the source and target volumes will be restored, and the background copy or stopping process will resume. Unflushed data that was written to the source or target before the FlashCopy was suspended is pinned in cache until the FlashCopy mapping leaves the Suspended state.

Preparing
The FlashCopy is in the process of preparing the mapping. While in this state, data from cache is destaged to disk and a consistent copy of the source exists on disk. At this time cache is operating in write-through mode, and therefore writes to the source volume will experience additional latency. The target volume is reported as online, but will not perform reads or writes. These reads and writes are failed by the SCSI front-end. Before starting the FlashCopy mapping, it is important that any cache at the host level, for example, buffers on the host operating system or application, are also instructed to flush any outstanding writes to the source volume. Performing the cache flush required as part of the startfcmap or startfcconsistgrp command causes I/Os to be delayed waiting for the cache flush to complete. To overcome this problem, SVC FlashCopy supports the prestartfcmap or prestartfcconsistgrp commands, which prepare for a FlashCopy start while still allowing I/Os to continue to the source volume. In the Preparing state, the FlashCopy mapping is prepared by the following steps:
1. Flushing any modified write data associated with the source volume from the cache. Read data for the source will be left in the cache.
2. Placing the cache for the source volume into write-through mode, so that subsequent writes wait until data has been written to disk before completing the write command that is received from the host.
3. Discarding any read or write data that is associated with the target volume from the cache.

Prepared
When in the Prepared state, the FlashCopy mapping is ready to perform a start. While the FlashCopy mapping is in this state, the target volume is in the Offline state. In the Prepared state, writes to the source volume experience additional latency because the cache is operating in write-through mode.

Summary of FlashCopy mapping states


Table 8-4 on page 383 lists the various FlashCopy mapping states and the corresponding states of the source and target volumes.


Table 8-4 FlashCopy mapping state summary

State           Source online/offline  Source cache state  Target online/offline                                  Target cache state
Idling/Copied   Online                 Write-back          Online                                                 Write-back
Copying         Online                 Write-back          Online                                                 Write-back
Stopped         Online                 Write-back          Offline                                                N/A
Stopping        Online                 Write-back          Online if copy complete; Offline if copy not complete  N/A
Suspended       Offline                Write-back          Offline                                                N/A
Preparing       Online                 Write-through       Online but not accessible                              N/A
Prepared        Online                 Write-through       Online but not accessible                              N/A
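You can display the current state of a mapping at any time with the lsfcmap command; the status field reports states such as idle_or_copied, preparing, prepared, copying, stopping, and stopped. The mapping name below is an assumption for illustration, and lsfcmapprogress shows the background copy progress percentage:

svcinfo lsfcmap FCMap_01
svcinfo lsfcmapprogress FCMap_01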

8.4.12 Thin-provisioned FlashCopy


FlashCopy source and target volumes can be thin-provisioned.

Either source or target thin-provisioned


The most common configuration is a fully allocated source and a thin-provisioned target. This allows the target to consume a smaller amount of real storage than the source. With this configuration, only use the NOCOPY (background copy rate = 0%) option. Although the COPY option is supported, it creates a fully allocated target and thereby defeats the purpose of thin provisioning.

Source and target both thin-provisioned


When both the source and target volumes are thin-provisioned, only the data allocated to the source will be copied to the target. In this configuration the background copy option will have no effect. Note: Best performance is obtained when the grain size of the thin-provisioned volume is the same as the grain size of the FlashCopy mapping.

Thin-provisioned incremental FlashCopy


The implementation of thin-provisioned volumes does not preclude the use of incremental FlashCopy on the same volumes. It does not make sense to have a fully allocated source volume and then use incremental FlashCopy to copy this fully allocated source volume to a thin-provisioned target volume; however, it is not prohibited. Optional configuration: A thin-provisioned source volume can be incrementally copied using FlashCopy to a thin-provisioned target volume. Whenever the FlashCopy is performed, only data that has been modified is recopied to the target. Note that if space is allocated on the target because of I/O to the target volume, this space will not be reclaimed with subsequent FlashCopy operations.


A fully allocated source volume can be incrementally copied using FlashCopy to another fully allocated volume at the same time as being copied to multiple thin-provisioned targets (taken at separate points in time). This combination allows a single full backup to be kept for recovery purposes and separates the backup workload from the production workload, and at the same time, allowing older thin-provisioned backups to be retained.

8.4.13 Background copy


With FlashCopy background copy enabled, the source volume data will be copied to the corresponding target volume. With the FlashCopy background copy disabled, only data that changed on the source volume will be copied to the target volume. The benefit of using a FlashCopy mapping with background copy enabled is that the target volume becomes a real clone (independent from the source volume) of the FlashCopy mapping source volume after the copy is complete. When the background copy function is not performed, the target volume only remains a valid copy of the source data while the FlashCopy mapping remains in place. The background copy rate is a property of a FlashCopy mapping defined as a value between 0 and 100. The background copy rate can be defined and dynamically changed for individual FlashCopy mappings. A value of 0 disables background copy. The relationship of the background copy rate value to the attempted number of grains to be copied per second is shown in Table 8-5.
Table 8-5 Background copy rate

Value      Data copied per second   Grains per second
1 - 10     128 KB                   0.5
11 - 20    256 KB                   1
21 - 30    512 KB                   2
31 - 40    1 MB                     4
41 - 50    2 MB                     8
51 - 60    4 MB                     16
61 - 70    8 MB                     32
71 - 80    16 MB                    64
81 - 90    32 MB                    128
91 - 100   64 MB                    256

The grains per second numbers represent the maximum number of grains that the SVC will copy per second, assuming that the bandwidth to the managed disks (MDisks) can accommodate this rate. If the SVC is unable to achieve these copy rates because of insufficient bandwidth from the SVC nodes to the MDisks, then background copy I/O contends for resources on an equal basis with the I/O that is arriving from the hosts. Both background copy I/O and I/O that is arriving from the hosts tend to see an increase in latency and a consequential reduction in throughput. Both background copy and foreground I/O continue to make forward progress, and do not stop, hang, or cause the node to fail. The background copy is performed by both nodes of the I/O Group in which the source volume resides.
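The background copy rate can be changed at any time, even while the mapping is in the Copying state. As a brief sketch, assuming a mapping named FCMap_01, the following command raises the copy rate to 80, which corresponds to 16 MB per second (64 grains per second with the default 256 KB grain size):

svctask chfcmap -copyrate 80 FCMap_01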


8.4.14 Synthesis
The FlashCopy functionality in SVC simply creates copy volumes. All of the data in the source volume is copied to the destination volume, including operating system, logical volume manager, and application metadata. Note: Certain operating systems are unable to use FlashCopy without an additional step, which is termed synthesis. In summary, synthesis performs a type of transformation on the operating system metadata on the target volume so that the operating system can use the disk.

8.4.15 Serialization of I/O by FlashCopy


In general, the FlashCopy function in the SVC introduces no explicit serialization into the I/O path. Therefore, many concurrent I/Os are allowed to the source and target volumes. However, there is a lock for each grain. The lock can be in shared or exclusive mode. For multiple targets, a common lock is shared and the mappings are derived from a particular source volume. The lock is used in the following modes under the following conditions:
- The lock is held in shared mode for the duration of a read from the target volume, which touches a grain that has not been copied from the source.
- The lock is held in exclusive mode while a grain is being copied from the source to the target.
If the lock is held in shared mode, and another process wants to use the lock in shared mode, this request is granted unless a process is already waiting to use the lock in exclusive mode. If the lock is held in shared mode and it is requested to be exclusive, the requesting process must wait until all holders of the shared lock free it. Similarly, if the lock is held in exclusive mode, a process wanting to use the lock in either shared or exclusive mode must wait for it to be freed.

8.4.16 Event handling


When a FlashCopy mapping is not copying or stopping, the FlashCopy function does not affect the handling or reporting of events for error conditions encountered in the I/O path. Event handling and reporting are only affected by FlashCopy when a FlashCopy mapping is copying or stopping. We describe these scenarios in the following sections.

Node failure
Normally, two copies of the FlashCopy bitmaps are maintained. One copy of the FlashCopy bitmaps is on each of the two nodes making up the I/O Group of the source volume. When a node fails, one copy of the bitmaps, for all FlashCopy mappings whose source volume is a member of the failing node's I/O Group, will become inaccessible. FlashCopy will continue with a single copy of the FlashCopy bitmap being stored as non-volatile in the remaining node in the source I/O Group. The cluster metadata is updated to indicate that the missing node no longer holds a current bitmap. When the failing node recovers, or a replacement node is added to the I/O Group, the bitmap redundancy will be restored.


Path failure (Path Offline state)


In a fully functioning cluster, all of the nodes have a software representation of every volume in the cluster within their application hierarchy. Because the storage area network (SAN) that links the SVC nodes to each other and to the MDisks is made up of many independent links, it is possible for a subset of the nodes to be temporarily isolated from several of the MDisks. When this situation happens, the managed disks are said to be Path Offline on certain nodes. Other nodes: Other nodes might see the managed disks as Online, because their connection to the managed disks is still functioning. When an MDisk enters the Path Offline state on an SVC node, all of the volumes that have extents on the MDisk also become Path Offline. Again, this situation happens only on the affected nodes. When a volume is Path Offline on a particular SVC node, the host access to that volume through the node will fail with the SCSI check condition indicating Offline.

Path Offline for the source volume


If a FlashCopy mapping is in the Copying state and the source volume goes Path Offline, this Path Offline state is propagated to all target volumes up to but not including the target volume for the newest mapping that is 100% copied but remains in the Copying state. If no mappings are 100% copied, all of the target volumes are taken offline. Again, note that Path Offline is a state that exists on a per-node basis. Other nodes might not be affected. If the source volume comes Online, the target and source volumes are brought back Online.

Path Offline for the target volume


If a target volume goes Path Offline but the source volume is still Online, and if there are any dependent mappings, those target volumes will also go Path Offline. The source volume will remain Online.

8.4.17 Asynchronous notifications


FlashCopy raises informational event log entries for certain mapping and Consistency Group state transitions. These state transitions occur as a result of configuration events that complete asynchronously, and the informational events can be used to generate Simple Network Management Protocol (SNMP) traps to notify the user. Other configuration events complete synchronously, and no informational events are logged as a result of these events:
- PREPARE_COMPLETED: This state transition is logged when the FlashCopy mapping or Consistency Group enters the Prepared state as a result of a user request to prepare. The user can now start (or stop) the mapping or Consistency Group.
- COPY_COMPLETED: This state transition is logged when the FlashCopy mapping or Consistency Group enters the Idle_or_copied state when it was previously in the Copying or Stopping state. This state transition indicates that the target disk now contains a complete copy and no longer depends on the source.
- STOP_COMPLETED: This state transition is logged when the FlashCopy mapping or Consistency Group has entered the Stopped state as a result of a user request to stop. It will be logged after the automatic copy process has completed. This state transition includes mappings where no copying needed to be performed. This state transition differs from the event that is logged when a mapping or group enters the Stopped state as a result of an I/O error.


8.4.18 Interoperation with Metro Mirror and Global Mirror


FlashCopy can work together with Metro Mirror and Global Mirror to provide better protection of the data. For example, we can perform a Metro Mirror copy to duplicate data from Site_A to Site_B, and then perform a daily FlashCopy to back up the data to another location. Table 8-6 lists which combinations of FlashCopy and remote copy are supported. In the table, remote copy refers to Metro Mirror and Global Mirror.
Table 8-6 FlashCopy and remote copy interaction

FlashCopy source:
- Remote copy primary site: Supported
- Remote copy secondary site: Supported. Latency: When the FlashCopy relationship is in the Preparing and Prepared states, the cache at the remote copy secondary site operates in write-through mode. This process adds additional latency to the already latent remote copy relationship.

FlashCopy destination:
- Remote copy primary site: Not supported
- Remote copy secondary site: Not supported

8.4.19 FlashCopy presets


The GUI provides three FlashCopy presets (Snapshot, Clone, and Backup) to simplify the more common FlashCopy operations. Although these presets meet the majority of FlashCopy requirements, they do not provide support for all possible FlashCopy options. If more specialized options are required that are not supported by the presets, they need to be configured using CLI commands. The following sections describe the three preset options and their use cases.

Snapshot
Options:
- If Auto-Create Target: thin-provisioned target with rsize = 0, autoexpand = on, and the target pool is the primary copy source pool
- No background copy

Use case
The user wants to produce a copy of a volume without impacting the availability of the volume. The user does not anticipate a large number of changes to be made to the source or target volume; a significant proportion of the volumes will not be changed. By ensuring that only changes require a copy of data to be made, the total amount of disk space required for the copy is significantly reduced, and so allows for many such snapshot copies to be used in the environment. Snapshots are therefore useful for providing protection against corruption or similar issues with the validity of the data, but do not provide protection from physical controller failures.


Snapshots can also provide a vehicle for performing repeatable testing including what-if modeling based on production data without requiring a full copy of the data to be provisioned.

Clone
Options:
- If Auto-Create Target: created volume is identical to the primary copy of the source volume (including storage pool)
- Auto-Delete
- Clean Rate = 0
- Background Copy Rate = 50

Use case
Users want a copy of the volume that they can modify without impacting the original. After the clone is established, there is no expectation that it will be refreshed or that there will be any further need to reference the original production data again. If the source is thin-provisioned, then the target will be thin-provisioned for auto-create target.

Backup
Options:
- If Auto-Create Target: created volume is identical to the primary copy of the source volume
- Incremental
- Clean Rate = 0
- Background Copy Rate = 50

Use case
The user wants to create a copy of the volume that can be used as a backup in the event that the source becomes unavailable, as in the case of the loss of the underlying physical controller. The user plans to periodically update the secondary copy and does not want to suffer the overhead of creating a completely new copy each time; incremental FlashCopy times are faster than a full copy, which helps to reduce the window during which the new backup is not yet fully effective. If the source is thin-provisioned, then the target will be thin-provisioned for auto-create target. Another use case, which is not implied by the preset name, is to create and maintain (periodically refresh) an independent image that can be subjected to intensive I/O (for example, data mining) without impacting source volume performance.
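As a hedged sketch of how similar behavior can be achieved from the CLI, the following mkfcmap examples approximate the three presets. They assume that suitably sized target volumes already exist (thin-provisioned for the snapshot case); the volume and mapping names are illustrative only.

Snapshot (no background copy):
svctask mkfcmap -source Prod_Vol -target Snap_Vol -name Snap_Map -copyrate 0

Clone (full background copy, mapping deleted when the copy completes):
svctask mkfcmap -source Prod_Vol -target Clone_Vol -name Clone_Map -copyrate 50 -autodelete

Backup (incremental, so later refreshes copy only changed data):
svctask mkfcmap -source Prod_Vol -target Bkup_Vol -name Bkup_Map -copyrate 50 -incremental

Each mapping is then started with svctask startfcmap -prep followed by the mapping name.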

8.5 Metro Mirror


In the following topics, we describe the Metro Mirror copy service, which is a synchronous remote copy function. Metro Mirror in SVC is similar to Metro Mirror in the IBM System Storage DS family. SVC provides a single point of control when enabling Metro Mirror in your SAN, regardless of the disk subsystems that are used. The general application of Metro Mirror is to maintain two real-time synchronized copies of a disk. Often, the two copies are geographically dispersed between two SVC clusters, although it is possible to use Metro Mirror within a single cluster (within an I/O Group). If the master copy fails, you can enable an auxiliary copy for I/O operation.

Tips: Intracluster Metro Mirror consumes more resources within the cluster than an intercluster Metro Mirror relationship, where resource allocation is shared between the clusters. Use intercluster Metro Mirror when possible.

A typical application of this function is to set up a dual-site solution using two SVC clusters. The first site is considered the primary or production site, and the second site is considered the backup or failover site, which is activated when a failure at the first site is detected.

8.5.1 Metro Mirror overview


Metro Mirror establishes a Metro Mirror relationship between two volumes of equal size. The volumes in a Metro Mirror relationship are referred to as the master (primary) volume and the auxiliary (secondary) volume. Consistency Groups can be used to maintain data integrity for dependent writes, similar to FlashCopy Consistency Groups. SVC provides both intracluster and intercluster Metro Mirror.

Intracluster Metro Mirror


Performs intracluster copying of a volume, in which both volumes belong to the same cluster and I/O Group within the cluster.

Note: Performing Metro Mirror across I/O Groups within a cluster is not supported.

Intercluster Metro Mirror


Performs intercluster copying of a volume, in which one volume belongs to a cluster and the other volume belongs to a different cluster. Two SVC clusters must be defined in an SVC partnership, which must be performed on both SVC clusters to establish a fully functional Metro Mirror partnership. Using standard single-mode connections, the supported distance between two SVC clusters in a Metro Mirror partnership is 10 km (6.2 miles), although greater distances can be achieved by using extenders. For extended distance solutions, contact your IBM representative.

Limit: When a local and a remote fabric are connected together for Metro Mirror purposes, the inter-switch link (ISL) hop count between a local node and a remote node cannot exceed seven.

8.5.2 Remote copy techniques


In this section we describe the differences between synchronous remote copy and asynchronous remote copy.

Synchronous remote copy


Metro Mirror is a fully synchronous remote copy technique that ensures, as long as writes to the auxiliary volumes are possible, that writes are committed at both the master and auxiliary volumes before write completion is acknowledged to the host. Events such as a loss of connectivity between clusters can cause the mirrored writes from the master to the auxiliary volume to fail. In that case, Metro Mirror suspends writes to the auxiliary
volume and allows I/O to the master volume to continue, to avoid impacting the operation of the master volumes. Figure 8-11 illustrates how a write to the master volume is mirrored to the cache of the auxiliary volume before an acknowledgement of the write is sent back to the host that issued the write. This process ensures that the auxiliary is synchronized in real time, in case it is needed in a failover situation. However, this process also means that the application is exposed to the latency and bandwidth limitations (if any) of the communication link between the master and auxiliary volumes. This process might lead to unacceptable application performance, particularly when placed under peak load. Therefore, using Metro Mirror has distance limitations.

Figure 8-11 Write on volume in Metro Mirror relationship

8.5.3 Metro Mirror features


SVC Metro Mirror supports the following features:
- Synchronous remote copy of volumes dispersed over metropolitan distances.
- SVC implements Metro Mirror relationships between volume pairs, with each volume in a pair managed by an SVC cluster.
- SVC supports intracluster Metro Mirror, where both volumes belong to the same cluster (and I/O Group).
- SVC supports intercluster Metro Mirror, where each volume belongs to a separate SVC cluster. You can configure a specific SVC cluster for partnership with another cluster. All intercluster Metro Mirror processing takes place between two SVC clusters that are configured in a partnership.
- Intercluster and intracluster Metro Mirror can be used concurrently.
- SVC does not require that a control network or fabric is installed to manage Metro Mirror. For intercluster Metro Mirror, SVC maintains a control link between the two clusters. This control link is used to control the state and coordinate updates at either end. The control link is implemented on top of the same FC fabric connection that the SVC uses for Metro Mirror I/O.
- SVC implements a configuration model that maintains the Metro Mirror configuration and state through major events, such as failover, recovery, and resynchronization, to minimize user configuration action through these events.
- SVC allows resynchronization of changed data so that write failures occurring on either the master or auxiliary volumes do not require a complete resynchronization of the relationship.

8.5.4 Multiple Cluster Mirroring


Each SVC cluster can maintain up to three partner cluster relationships, allowing as many as four clusters to be directly associated with each other. This SVC partnership capability enables the implementation of disaster recovery (DR) solutions. Figure 8-12 shows an example of a Multiple Cluster Mirroring configuration.

Figure 8-12 Multiple Cluster Mirroring configuration example

Software level restrictions for Multiple Cluster Mirroring:
- Partnership between a cluster running 6.1.0 and a cluster running a version earlier than 4.3.1 is not supported.
- Clusters in a partnership where one cluster is running 6.1.0 and the other is running 4.3.1 cannot participate in additional partnerships with other clusters.
- Clusters that are all running either 6.1.0 or 5.1.0 can participate in up to three cluster partnerships.

Note: SVC 6.1 supports object names up to 63 characters. Previous levels only supported up to 15 characters. When SVC 6.1 clusters are partnered with 4.3.1 and 5.1.0 clusters, various object names will be truncated at 15 characters when displayed from 4.3.1 and 5.1.0 clusters.

Supported Multiple Cluster Mirroring topologies


Multiple Cluster Mirroring allows for various partnership topologies as illustrated in the following examples:

Example: A-B, A-C, and A-D

Figure 8-13 SVC star topology

Figure 8-13 shows four clusters in a star topology, with cluster A at the center. Cluster A can be a central DR site for the three other locations. Using a star topology, you can migrate applications by using a process such as the following one:
1. Suspend the application at A.
2. Remove the A-B relationship.
3. Create the A-C relationship (or alternatively, the B-C relationship).
4. Synchronize to cluster C, and ensure that the A-C relationship is established.

Example: A-B, A-C, and B-C

Figure 8-14 SVC triangle topology

Example: A-B, A-C, A-D, B-C, B-D, and C-D

Figure 8-15 SVC fully connected topology

Figure 8-15 is a fully connected mesh where every cluster has a partnership to each of the three other clusters. This allows volumes to be replicated between any pair of clusters.

Example: A-B, A-C, and B-C
Figure 8-16 shows a daisy-chain topology.

Figure 8-16 SVC daisy-chain topology

Note that although clusters can have up to three partnerships, volumes can only be part of one Remote Copy relationship, for example, A-B.

Upgrade restriction: Upgrading a cluster to 6.1.0 requires that the partner cluster be running 4.3.1 or later. If the partner cluster is running 4.3.0, it must first be upgraded to 4.3.1.

8.5.5 Importance of write ordering


Many applications that use block storage must survive failures, such as the loss of power or a software crash, without losing the data that existed prior to the failure. Because many applications need to perform large numbers of update operations in parallel with storage, maintaining write ordering is key to ensuring the correct operation of applications following a disruption.
An application that performs a high volume of database updates is usually designed with the concept of dependent writes. With dependent writes, it is important to ensure that an earlier write has completed before a later write is started. Reversing the order of dependent writes can undermine an application's algorithms and can lead to problems such as detected or undetected data corruption. See 8.4.3, Consistency Groups on page 371 for more information regarding dependent writes.

Metro Mirror Consistency Groups


A Metro Mirror Consistency Group can contain an arbitrary number of relationships up to the maximum number of Metro Mirror relationships supported by the SVC cluster. Metro Mirror commands can be issued to a Metro Mirror Consistency Group, and therefore simultaneously for all Metro Mirror relationships defined within that Consistency Group, or to a single Metro Mirror relationship that is not part of a Metro Mirror Consistency Group. For example, when issuing a Metro Mirror startrcconsistgrp command to the Consistency Group, all of the Metro Mirror relationships in the Consistency Group are started at the same time. Figure 8-17 on page 395 illustrates the concept of Metro Mirror Consistency Groups. Because the MM_Relationship 1 and 2 are part of the Consistency Group, they can be handled as one entity. The stand-alone MM_Relationship 3 is handled separately.

Figure 8-17 Metro Mirror Consistency Group

Certain uses of Metro Mirror require manipulation of more than one relationship. Metro Mirror Consistency Groups can provide the ability to group relationships so that they are manipulated in unison. Consider the following points:
- Metro Mirror relationships can be part of a Consistency Group, or they can be stand-alone and therefore handled as single instances.
- A Consistency Group can contain zero or more relationships. An empty Consistency Group, with zero relationships in it, has little purpose until it is assigned its first relationship, except that it has a name.
- All relationships in a Consistency Group must have corresponding master and auxiliary volumes.
Although it is possible to use Consistency Groups to manipulate sets of relationships that do not need to satisfy these strict rules, this manipulation can lead to undesired side effects. The rules behind a Consistency Group mean that certain configuration commands are prohibited. These configuration commands are not prohibited if the relationship is not part of a Consistency Group.
For example, consider the case of two applications that are completely independent, yet they are placed into a single Consistency Group. In the event of an error there is a loss of synchronization, and a background copy process is required to recover synchronization. While this process is in progress, Metro Mirror rejects attempts to enable access to the auxiliary volumes of either application. If one application finishes its background copy much more quickly than the other application, Metro Mirror still refuses to grant access to its auxiliary volumes, even though it is safe in this case, because the Metro Mirror policy is to refuse access to the entire Consistency Group if any part of it is inconsistent.

Stand-alone relationships and Consistency Groups share a common configuration and state model. All of the relationships in a non-empty Consistency Group have the same state as the Consistency Group.

8.5.6 Remote copy intercluster communication


All intercluster communication between clusters in a Metro Mirror and Global Mirror partnership is performed over the SAN. The following section provides details regarding this communication path.

Zoning
SVC node ports on each SVC cluster must be able to communicate with each other for the partnership creation to be performed. Switch zoning is critical to facilitating intercluster communication. See Chapter 3, Planning and configuration on page 57 for critical information regarding proper zoning for intercluster communication.

Intercluster communication channels


When an SVC cluster partnership has been defined on a pair of clusters, additional intercluster communication channels are established:
- A single control channel, which is used to exchange and coordinate configuration information
- I/O channels between each of the nodes in the clusters
These channels are maintained and updated as nodes and links appear and disappear from the fabric, and are repaired to maintain operation where possible. If communication between SVC clusters is interrupted or lost, an event is logged (and consequently, the Metro Mirror and Global Mirror relationships will stop).

Note: SVC can be configured to raise Simple Network Management Protocol (SNMP) traps to the enterprise monitoring system to alert on events indicating that an interruption in internode communication has occurred.

Intercluster links
All SVC nodes maintain a database of other devices that are visible on the fabric. This database is updated as devices appear and disappear. Devices that advertise themselves as SVC nodes are categorized according to the SVC cluster to which they belong. SVC nodes that belong to the same cluster establish communication channels between themselves and begin to exchange messages to implement clustering and the functional protocols of SVC. Nodes that are in separate clusters do not exchange messages after initial discovery is complete, unless they have been configured together to perform a remote copy relationship. The intercluster link carries control traffic to coordinate activity between two clusters. It is formed between one node in each cluster. The traffic between the designated nodes is distributed among logins that exist between those nodes. If the designated node fails (or all of its logins to the remote cluster fail), then a new node is chosen to carry control traffic. This node change causes the I/O to pause, but it does not put the relationships in a ConsistentStopped state.

8.5.7 Metro Mirror attributes


The Metro Mirror function in SVC possesses the following attributes:
1. An SVC cluster partnership is created between two SVC clusters (for intercluster Metro Mirror).
2. A Metro Mirror relationship is created between two volumes of the same size.
3. To manage multiple Metro Mirror relationships as one entity, relationships can be made part of a Metro Mirror Consistency Group, which ensures data consistency across multiple Metro Mirror relationships and provides ease of management.
4. When a Metro Mirror relationship is started, and when the background copy has completed, the relationship becomes consistent and synchronized.
5. After the relationship is synchronized, the auxiliary volume holds a copy of the production data at the primary, which can be used for DR.
6. To access the auxiliary volume, the Metro Mirror relationship must be stopped with the access option enabled before write I/O is allowed to the auxiliary.
7. The remote host server is mapped to the auxiliary volume, and the disk is available for I/O.

8.5.8 Methods of synchronization


This section describes two methods that can be used to establish a synchronized relationship.

Full synchronization after creation


The full synchronization after creation method is the default method. It is the simplest in that it requires no administrative activity apart from issuing the necessary commands. However, in certain environments, the available bandwidth can make this method unsuitable. Use this command sequence for a single relationship:
1. Run mkrcrelationship without specifying the -sync option.
2. Run startrcrelationship without specifying the -clean option.
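
A minimal CLI sketch of this default method, assuming a master volume MM_Master on the local cluster, an auxiliary volume MM_Aux on a partner cluster named ITSO_SVC2, and a relationship named MM_Rel_1 (all names are hypothetical):

   svctask mkrcrelationship -master MM_Master -aux MM_Aux -cluster ITSO_SVC2 -name MM_Rel_1
   svctask startrcrelationship MM_Rel_1

The background copy then runs until the relationship reaches the ConsistentSynchronized state.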

Synchronized before creation


In this method, the administrator must ensure that the master and auxiliary volumes contain identical data before creating the relationship. This can be achieved in either of the following ways:
- Create both disks with the security delete feature so as to make all data zero.
- Copy a complete tape image (or use another method of moving data) from one disk to the other disk.
With this technique, do not allow I/O on the master or auxiliary before the relationship is established. Then, the administrator must run these commands:
1. Run mkrcrelationship with the -sync flag.
2. Run startrcrelationship without the -clean flag.
Attention: Failure to perform these steps correctly can cause Metro Mirror to report the relationship as consistent when it is not, thereby creating a data loss or data integrity exposure for hosts accessing data on the auxiliary volume.
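
Under the same hypothetical naming assumptions as the previous sketch, the synchronized-before-creation method adds the -sync flag at creation time so that no background copy is performed:

   svctask mkrcrelationship -master MM_Master -aux MM_Aux -cluster ITSO_SVC2 -name MM_Rel_1 -sync
   svctask startrcrelationship MM_Rel_1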

8.5.9 Metro Mirror states and events


In this section we describe the various states of a Metro Mirror relationship and the conditions that cause them to change. In Figure 8-18, the Metro Mirror relationship state diagram shows an overview of states that can apply to a Metro Mirror relationship in a connected state.

Figure 8-18 Metro Mirror mapping state diagram

When creating the Metro Mirror relationship, you can specify if the auxiliary volume is already in sync with the master volume, and the background copy process is then skipped. This capability is especially useful when creating Metro Mirror relationships for volumes that have been created with the format option. The step identifiers in Figure 8-18 are described here.
Step 1:
a. The Metro Mirror relationship is created with the -sync option, and the Metro Mirror relationship enters the ConsistentStopped state.
b. The Metro Mirror relationship is created without specifying that the master and auxiliary volumes are in sync, and the Metro Mirror relationship enters the InconsistentStopped state.

Step 2:
a. When starting a Metro Mirror relationship in the ConsistentStopped state, the Metro Mirror relationship enters the ConsistentSynchronized state, provided that no updates (write I/O) have been performed on the master volume while in the ConsistentStopped state. Otherwise, the -force option must be specified, and the Metro Mirror relationship then enters the InconsistentCopying state while the background copy is started.
b. When starting a Metro Mirror relationship in the InconsistentStopped state, the Metro Mirror relationship enters the InconsistentCopying state while the background copy is started.
Step 3:
When the background copy completes, the Metro Mirror relationship transitions from the InconsistentCopying state to the ConsistentSynchronized state.
Step 4:
a. When stopping a Metro Mirror relationship in the ConsistentSynchronized state and specifying the -access option, which enables write I/O on the auxiliary volume, the Metro Mirror relationship enters the Idling state.
b. To enable write I/O on the auxiliary volume when the Metro Mirror relationship is in the ConsistentStopped state, issue the svctask stoprcrelationship command with the -access option, and the Metro Mirror relationship enters the Idling state.
Step 5:
a. When starting a Metro Mirror relationship that is in the Idling state, you must specify the -primary argument to set the copy direction. If no write I/O has been performed (to either the master or auxiliary volume) while in the Idling state, the Metro Mirror relationship enters the ConsistentSynchronized state.
b. If write I/O has been performed to either the master or auxiliary volume, the -force option must be specified, and the Metro Mirror relationship then enters the InconsistentCopying state while the background copy is started.
Stop or Error: When a Metro Mirror relationship is stopped (either intentionally or due to an error), a state transition is applied. For example, Metro Mirror relationships in the ConsistentSynchronized state enter the ConsistentStopped state, and Metro Mirror relationships in the InconsistentCopying state enter the InconsistentStopped state. If the connection is broken between the SVC clusters in a partnership, then all (intercluster) Metro Mirror relationships enter a Disconnected state. For further information, refer to Connected versus disconnected on page 399.
Common states: Stand-alone relationships and Consistency Groups share a common configuration and state model. All Metro Mirror relationships in a Consistency Group that is not empty have the same state as the Consistency Group.

State overview
In the following sections, we provide an overview of the different Metro Mirror states.

Connected versus disconnected


Under certain error scenarios (for example, a power failure at one site causing one complete cluster to disappear), communications between two clusters in a Metro Mirror relationship can be lost. Alternatively, the fabric connection between the two clusters might fail, leaving the two clusters running but unable to communicate with each other.

When the two clusters can communicate, the clusters and the relationships spanning them are described as connected. When they cannot communicate, the clusters and the relationships spanning them are described as disconnected. In this state, both clusters are left with fragmented relationships and will be limited regarding the configuration commands that can be performed. The disconnected relationships are portrayed as having a changed state. The new states describe what is known about the relationship and what configuration commands are permitted. When the clusters can communicate again, the relationships become connected again. Metro Mirror automatically reconciles the two state fragments, taking into account any configuration or other event that took place while the relationship was disconnected. As a result, the relationship can either return to the state that it was in when it became disconnected or enter a new state. Relationships that are configured between volumes in the same SVC cluster (intracluster) will never be described as being in a disconnected state.

Consistent versus inconsistent


Relationships that contain volumes that are operating as secondaries can be described as being consistent or inconsistent. Consistency Groups that contain relationships can also be described as being consistent or inconsistent. The consistent or inconsistent property describes the relationship of the data on the auxiliary to the data on the master volume. It can be considered a property of the auxiliary volume itself.
An auxiliary volume is described as consistent if it contains data that might have been read by a host system from the master if power had failed at an imaginary point in time while I/O was in progress, and power was later restored. This imaginary point in time is defined as the recovery point. The requirements for consistency are expressed with respect to activity at the master up to the recovery point:
- The auxiliary volume contains the data from all of the writes to the master for which the host received successful completion and that data has not been overwritten by a subsequent write (before the recovery point).
- For writes for which the host did not receive successful completion (that is, it received bad completion or no completion at all), if the host subsequently performed a read from the master of that data, that read returned successful completion, and no later write was sent (before the recovery point), the auxiliary contains the same data as the data returned by the read from the master.
From the point of view of an application, consistency means that an auxiliary volume contains the same data as the master volume at the recovery point (the time at which the imaginary power failure occurred). If an application is designed to cope with an unexpected power failure, this guarantee of consistency means that the application will be able to use the auxiliary and begin operation just as though it had been restarted after the hypothetical power failure. Again, maintaining the application write ordering is the key property of consistency. See 8.4.3, Consistency Groups on page 371 for more information regarding dependent writes.
If a relationship, or set of relationships, is inconsistent and an attempt is made to start an application using the data in the secondaries, a number of outcomes are possible:
- The application might decide that the data is corrupt and crash or exit with an event code.
- The application might fail to detect that the data is corrupt and return erroneous data.
- The application might work without a problem.
Because of the risk of data corruption, and in particular undetected data corruption, Metro Mirror strongly enforces the concept of consistency and prohibits access to inconsistent data.
Consistency as a concept can be applied to a single relationship or a set of relationships in a Consistency Group. Write ordering is a concept that an application can maintain across a number of disks accessed through multiple systems; therefore, consistency must operate across all those disks. When deciding how to use Consistency Groups, the administrator must consider the scope of an application's data, taking into account all of the interdependent systems that communicate and exchange information. If two programs or systems communicate and store details as a result of the information exchanged, either of the following actions might occur:
- All of the data accessed by the group of systems must be placed into a single Consistency Group.
- The systems must be recovered independently (each within its own Consistency Group). Then, each system must perform recovery with the other applications to become consistent with them.

Consistent versus synchronized


A copy that is consistent and up-to-date is described as synchronized. In a synchronized relationship, the master and auxiliary volumes only differ in regions where writes are outstanding from the host.
Consistency does not mean that the data is up-to-date. A copy can be consistent and yet contain data that was frozen at a point in time in the past. Write I/O might have continued to a master and not have been copied to the auxiliary. This state arises when it becomes impossible to keep up-to-date and maintain consistency. An example is a loss of communication between clusters when writing to the auxiliary.
When communication is lost for an extended period of time, Metro Mirror tracks the changes that occurred on the master, but not the order of such changes or the details of such changes (write data). When communication is restored, it is impossible to synchronize the auxiliary without sending write data to the auxiliary out of order and, therefore, losing consistency. Two policies can be used to cope with this situation:
- Make a point-in-time copy of the consistent auxiliary before allowing the auxiliary to become inconsistent. In the event of a disaster before consistency is achieved again, the point-in-time copy target provides a consistent, although out-of-date, image.
- Accept the loss of consistency and the loss of a useful auxiliary, while synchronizing the auxiliary.

Detailed states
The following sections detail the states that are portrayed to the user, for either Consistency Groups or relationships. It also details additional information that is available in each state. The major states are designed to provide guidance about the configuration commands that are available.

InconsistentStopped
InconsistentStopped is a connected state. In this state, the master is accessible for read and write I/O, but the auxiliary is not accessible for either read or write I/O. A copy process needs to be started to make the auxiliary consistent. This state is entered when the relationship or Consistency Group was InconsistentCopying and has either suffered a persistent error or received a stop command that has caused the copy process to stop. A start command causes the relationship or Consistency Group to move to the InconsistentCopying state. A stop command is accepted, but has no effect. If the relationship or Consistency Group becomes disconnected, the auxiliary side transitions to InconsistentDisconnected. The master side transitions to IdlingDisconnected.

InconsistentCopying
InconsistentCopying is a connected state. In this state, the master is accessible for read and write I/O, but the auxiliary is not accessible for either read or write I/O. This state is entered after a start command is issued to an InconsistentStopped relationship or a Consistency Group. It is also entered when a forced start is issued to an Idling or ConsistentStopped relationship or Consistency Group. In this state, a background copy process runs that copies data from the master to the auxiliary volume. In the absence of errors, an InconsistentCopying relationship is active, and the copy progress increases until the copy process completes. In certain error situations, the copy progress might freeze or even regress. A persistent error or stop command places the relationship or Consistency Group into an InconsistentStopped state. A start command is accepted but has no effect. If the background copy process completes on a stand-alone relationship, or on all relationships for a Consistency Group, the relationship or Consistency Group transitions to the ConsistentSynchronized state. If the relationship or Consistency Group becomes disconnected, the auxiliary side transitions to InconsistentDisconnected. The master side transitions to IdlingDisconnected.

ConsistentStopped
ConsistentStopped is a connected state. In this state, the auxiliary contains a consistent image, but it might be out-of-date with respect to the master. This state can arise when a relationship was in a ConsistentSynchronized state and suffers an error that forces a Consistency Freeze. It can also arise when a relationship is created with a CreateConsistentFlag set to TRUE. Normally, following an I/O error, subsequent write activity causes updates to the master and the auxiliary is no longer synchronized (set to false). In this case, to reestablish synchronization, consistency must be given up for a period. You must use a start command
with the -force option to acknowledge this condition, and the relationship or Consistency Group transitions to InconsistentCopying. Enter this command only after all outstanding events have been repaired. In the unusual case where the master and the auxiliary are still synchronized (perhaps following a user stop, and no further write I/O was received), a start command takes the relationship to ConsistentSynchronized. No -force option is required. Also, in this case, you can enter a switch command that moves the relationship or Consistency Group to ConsistentSynchronized and reverses the roles of the master and the auxiliary. If the relationship or Consistency Group becomes disconnected, the auxiliary transitions to ConsistentDisconnected. The master transitions to IdlingDisconnected. An informational status log is generated whenever a relationship or Consistency Group enters the ConsistentStopped state with a status of Online. You can configure this event to generate an SNMP trap that can be used to trigger automation or manual intervention to issue a start command following a loss of synchronization.

ConsistentSynchronized
ConsistentSynchronized is a connected state. In this state, the master volume is accessible for read and write I/O, and the auxiliary volume is accessible for read-only I/O. Writes that are sent to the master volume are sent to both the master and auxiliary volumes. Either successful completion must be received for both writes, the write must be failed to the host, or a state must transition out of the ConsistentSynchronized state before a write is completed to the host. A stop command takes the relationship to the ConsistentStopped state. A stop command with the -access parameter takes the relationship to the Idling state. A switch command leaves the relationship in the ConsistentSynchronized state, but it reverses the master and auxiliary roles. A start command is accepted, but it has no effect. If the relationship or Consistency Group becomes disconnected, the same transitions are made as for ConsistentStopped.

Idling
Idling is a connected state. Both master and auxiliary volumes operate in the master role. Consequently, both master and auxiliary volumes are accessible for write I/O. In this state, the relationship or Consistency Group accepts a start command. Metro Mirror maintains a record of regions on each disk that received write I/O while idling. This record is used to determine what areas need to be copied following a start command. The start command must specify the new copy direction. A start command can cause a loss of consistency if either volume in any relationship has received write I/O, which is indicated by the Synchronized status. If the start command leads to loss of consistency, you must specify the -force parameter. Following a start command, the relationship or Consistency Group transitions to ConsistentSynchronized if there is no loss of consistency, or to InconsistentCopying if there is a loss of consistency.

Also, while in this state, the relationship or Consistency Group accepts a -clean option on the start command. If the relationship or Consistency Group becomes disconnected, both sides change their state to IdlingDisconnected.

IdlingDisconnected
IdlingDisconnected is a disconnected state. The volume or disks in this half of the relationship or Consistency Group are all in the master role and accept read or write I/O. The priority in this state is to recover the link to restore the relationship or consistency. No configuration activity is possible (except for deletes or stops) until the relationship becomes connected again. At that point, the relationship transitions to a connected state. The exact connected state that is entered depends on the state of the other half of the relationship or Consistency Group, which depends on these factors: The state when it became disconnected The write activity since it was disconnected The configuration activity since it was disconnected If both halves are IdlingDisconnected, the relationship becomes Idling when reconnected. While IdlingDisconnected, if a write I/O is received that causes loss of synchronization (synchronized attribute transitions from true to false) and the relationship was not already stopped (either through a user stop or a persistent error), an event is raised to notify you of the condition. This same event will also be raised when this condition occurs for the ConsistentSynchronized state.

InconsistentDisconnected
InconsistentDisconnected is a disconnected state. The volumes in this half of the relationship or Consistency Group are all in the auxiliary role and do not accept read or write I/O. No configuration activity, except for deletes, is permitted until the relationship becomes connected again.
When the relationship or Consistency Group becomes connected again, the relationship becomes InconsistentCopying automatically unless either of the following conditions is true:
- The relationship was InconsistentStopped when it became disconnected.
- The user issued a stop command while disconnected.
In either case, the relationship or Consistency Group becomes InconsistentStopped.

ConsistentDisconnected
ConsistentDisconnected is a disconnected state. The volumes in this half of the relationship or Consistency Group are all in the auxiliary role and accept read I/O but not write I/O. This state is entered from ConsistentSynchronized or ConsistentStopped when the auxiliary side of a relationship becomes disconnected. In this state, the relationship or Consistency Group displays an attribute of FreezeTime, which is the point in time that Consistency was frozen. When entered from ConsistentStopped, it retains the time that it had in that state. When entered from ConsistentSynchronized, the
FreezeTime shows the last time at which the relationship or Consistency Group was known to be consistent. This time corresponds to the time of the last successful heartbeat to the other cluster.
A stop command with the -access flag set to true transitions the relationship or Consistency Group to the IdlingDisconnected state. This state allows write I/O to be performed to the auxiliary volume and is used as part of a DR scenario.
When the relationship or Consistency Group becomes connected again, it becomes ConsistentSynchronized only if this action does not lead to a loss of consistency. These conditions must be true:
- The relationship was ConsistentSynchronized when it became disconnected.
- No writes received successful completion at the master while disconnected.
Otherwise, the relationship becomes ConsistentStopped. The FreezeTime setting is retained.

Empty
This state only applies to Consistency Groups. It is the state of a Consistency Group that has no relationships and no other state information to show. It is entered when a Consistency Group is first created. It is exited when the first relationship is added to the Consistency Group, at which point, the state of the relationship becomes the state of the Consistency Group.

Background copy
Metro Mirror paces the rate at which background copy is performed by the appropriate relationships. Background copy takes place on relationships that are in the InconsistentCopying state with a status of Online. The quota of background copy (configured on the intercluster link) is divided evenly between all of the nodes that are performing background copy for one of the eligible relationships. This allocation is made irrespective of the number of disks for which the node is responsible. Each node in turn divides its allocation evenly between the multiple relationships performing a background copy. For intracluster relationships, each node is assigned a static quota of 25 MBps.

8.5.10 Practical use of Metro Mirror


The master volume is the production volume, and updates to this copy are mirrored in real time to the auxiliary volume. The contents of the auxiliary volume that existed when the relationship was created are destroyed.

Switching copy direction: The copy direction for a Metro Mirror relationship can be switched so the auxiliary volume becomes the master, and the master volume becomes the auxiliary.

While the Metro Mirror relationship is active, the auxiliary volume is not accessible for host application write I/O at any time. The SVC allows read-only access to the auxiliary volume when it contains a consistent image. This time period is only intended to allow boot time operating system discovery to complete without error, so that any hosts at the secondary site can be ready to start up the applications with minimum delay, if required. For example, many operating systems must read logical block address (LBA) zero to configure a logical unit. Although read access is allowed at the auxiliary in practice, the data
on the auxiliary volumes cannot be read by a host, because most operating systems write a dirty bit to the file system when it is mounted. Because this write operation is not allowed on the auxiliary volume, the volume cannot be mounted. This access is only provided where consistency can be guaranteed. However, there is no way in which coherency can be maintained between reads that are performed at the auxiliary and later write I/Os that are performed at the master. To enable access to the auxiliary volume for host operations, you must stop the Metro Mirror relationship by specifying the -access parameter. While access to the auxiliary volume for host operations is enabled, the host must be instructed to mount the volume and related tasks before the application can be started, or instructed to perform a recovery process. For example, the Metro Mirror requirement to enable the auxiliary copy for access differentiates it from third-party mirroring software on the host, which aims to emulate a single, reliable disk regardless of what system is accessing it. Metro Mirror retains the property that there are two volumes in existence, but it suppresses one volume while the copy is being maintained. Using an auxiliary copy demands a conscious policy decision by the administrator that a failover is required and that the tasks to be performed on the host involved in establishing operation on the auxiliary copy are substantial. The goal is to make this rapid (much faster when compared to recovering from a backup copy) but not seamless. The failover process can be automated through failover management software. The SVC provides Simple Network Management Protocol (SNMP) traps and programming (or scripting) for the command-line interface (CLI) to enable this automation.

8.5.11 Valid combinations of FlashCopy, Metro Mirror, and Global Mirror


Table 8-7 outlines the combinations of FlashCopy and Metro Mirror or Global Mirror functions that are valid for a single volume.
Table 8-7 Volume valid combination

  FlashCopy           Metro Mirror or Global Mirror Master    Metro Mirror or Global Mirror Auxiliary
  FlashCopy Source    Supported                               Supported
  FlashCopy Target    Not supported                           Not supported

8.5.12 Metro Mirror configuration limits


Table 8-8 lists the Metro Mirror configuration limits.
Table 8-8 Metro Mirror configuration limits

  Parameter                                                      Value
  Number of Metro Mirror Consistency Groups per cluster          256
  Number of Metro Mirror relationships per cluster               8192
  Number of Metro Mirror relationships per Consistency Group     8192
  Total volume size per I/O Group                                1024 TB

There is a per I/O Group limit of 1024 TB on the quantity of master and auxiliary volume address space that can participate in Metro Mirror and Global Mirror relationships. This maximum configuration will consume all 512 MB of bitmap space for the I/O Group and allow no FlashCopy bitmap space.

8.6 Metro Mirror commands


For comprehensive details about Metro Mirror commands, see IBM System Storage SAN Volume Controller Command-Line Interface User's Guide, GC27-2287.
The command set for Metro Mirror contains two broad groups:
- Commands to create, delete, and manipulate relationships and Consistency Groups
- Commands to cause state changes
Where a configuration command affects more than one cluster, Metro Mirror performs the work to coordinate configuration activity between the clusters. Certain configuration commands can only be performed when the clusters are connected, and fail with no effect when they are disconnected. Other configuration commands are permitted even though the clusters are disconnected. The state is reconciled automatically by Metro Mirror when the clusters become connected again.
For any given command, with one exception, a single cluster actually receives the command from the administrator. This design is significant for defining the context for a CreateRelationship (mkrcrelationship) or CreateConsistencyGroup (mkrcconsistgrp) command, in which case the cluster receiving the command is called the local cluster. The exception is the command that sets clusters into a Metro Mirror partnership; the mkpartnership command must be issued to both the local and remote clusters.
The commands here are described as an abstract command set and are implemented as either of the following methods:
- A command-line interface (CLI), which can be used for scripting and automation
- A graphical user interface (GUI), which can be used for one-off tasks

8.6.1 Listing available SVC cluster partners


To create an SVC cluster partnership, use the svcinfo lsclustercandidate command.

svcinfo lsclustercandidate
The svcinfo lsclustercandidate command is used to list the clusters that are available for setting up a two-cluster partnership. This command is a prerequisite for creating Metro Mirror relationships.
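
For example, issued from the local cluster with no parameters, the command lists any candidate clusters that are visible on the SAN:

   svcinfo lsclustercandidate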

8.6.2 Creating the SVC cluster partnership


To create an SVC cluster partnership, use the svctask mkpartnership command.

svctask mkpartnership
The svctask mkpartnership command is used to establish a one-way Metro Mirror partnership between the local cluster and a remote cluster. To establish a fully functional Metro Mirror partnership, you must issue this command to both clusters. This step is a prerequisite to creating Metro Mirror relationships between volumes on the SVC clusters. When creating the partnership, you can specify the bandwidth to be used by the background copy process between the local and the remote SVC cluster, and if it is not specified, the bandwidth defaults to 50 MBps. The bandwidth must be set to a value that is less than or equal to the bandwidth that can be sustained by the intercluster link.
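
A minimal sketch, assuming two clusters named ITSO_SVC1 and ITSO_SVC2 (hypothetical names) and a background copy bandwidth of 50 MBps:

   svctask mkpartnership -bandwidth 50 ITSO_SVC2      (issued on ITSO_SVC1)
   svctask mkpartnership -bandwidth 50 ITSO_SVC1      (issued on ITSO_SVC2)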

Background copy bandwidth effect on foreground I/O latency


The background copy bandwidth determines the rate at which the background copy for the SVC will be attempted. The background copy bandwidth can affect the foreground I/O latency in one of three ways:
- If the background copy bandwidth is set too high for the Metro Mirror intercluster link capacity, the following results can occur:
  - The background copy I/Os can back up on the Metro Mirror intercluster link.
  - There is a delay in the synchronous auxiliary writes of foreground I/Os.
  - The foreground I/O latency will increase as perceived by applications.
- If the background copy bandwidth is set too high for the storage at the primary site, the background copy read I/Os overload the master storage and delay foreground I/Os.
- If the background copy bandwidth is set too high for the storage at the secondary site, background copy writes at the auxiliary overload the secondary storage and again delay the synchronous auxiliary writes of foreground I/Os.
To set the background copy bandwidth optimally, make sure that you consider all three resources (the master storage, the intercluster link bandwidth, and the secondary storage). Provision the most restrictive of these three resources between the background copy bandwidth and the peak foreground I/O workload. This provisioning can be done by a calculation (as previously described) or alternatively by determining experimentally how much background copy can be allowed before the foreground I/O latency becomes unacceptable, and then backing off to allow for peaks in workload and a safety margin.

svctask chpartnership
If you need to change the bandwidth that is available for background copy in an SVC cluster partnership, you can use the svctask chpartnership command to specify the new bandwidth.
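
For example, to lower the background copy bandwidth of the hypothetical partnership with ITSO_SVC2 to 30 MBps:

   svctask chpartnership -bandwidth 30 ITSO_SVC2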

8.6.3 Creating a Metro Mirror Consistency Group


To create a Metro Mirror Consistency Group, use the svctask mkrcconsistgrp command.

svctask mkrcconsistgrp
The svctask mkrcconsistgrp command is used to create a new empty Metro Mirror Consistency Group.

The Metro Mirror Consistency Group name must be unique across all of the Consistency Groups that are known to the clusters owning this Consistency Group. If the Consistency Group involves two clusters, the clusters must be in communication throughout the creation process. The new Consistency Group does not contain any relationships and will be in the Empty state. Metro Mirror relationships can be added to the group either upon creation or afterward by using the svctask chrcrelationship command.
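
A minimal sketch, assuming a Consistency Group named CG_W2K_MM and a remote cluster named ITSO_SVC2 (hypothetical names):

   svctask mkrcconsistgrp -name CG_W2K_MM -cluster ITSO_SVC2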

8.6.4 Creating a Metro Mirror relationship


To create a Metro Mirror relationship, use the command svctask mkrcrelationship.

svctask mkrcrelationship
The svctask mkrcrelationship command is used to create a new Metro Mirror relationship. This relationship persists until it is deleted. The auxiliary volume must be equal in size to the master volume or the command will fail, and if both volumes are in the same cluster, they must both be in the same I/O Group. The master and auxiliary volume cannot be in an existing relationship and cannot be the target of a FlashCopy mapping. This command returns the new relationship (relationship_id) when successful. When creating the Metro Mirror relationship, it can be added to an already existing Consistency Group, or it can be a stand-alone Metro Mirror relationship if no Consistency Group is specified. To check whether the master or auxiliary volumes comply with the prerequisites to participate in a Metro Mirror relationship, use the svcinfo lsrcrelationshipcandidate command.
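
A minimal sketch, assuming hypothetical names for the master volume (MM_Master), the auxiliary volume (MM_Aux), the remote cluster (ITSO_SVC2), and the relationship (MM_Rel_1); the second form also places the new relationship into the CG_W2K_MM Consistency Group at creation time:

   svctask mkrcrelationship -master MM_Master -aux MM_Aux -cluster ITSO_SVC2 -name MM_Rel_1
   svctask mkrcrelationship -master MM_Master -aux MM_Aux -cluster ITSO_SVC2 -name MM_Rel_1 -consistgrp CG_W2K_MM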

svcinfo lsrcrelationshipcandidate
The svcinfo lsrcrelationshipcandidate command is used to list available volumes that are eligible for a Metro Mirror relationship. When issuing the command, you can specify the source volume name and secondary cluster to list candidates that comply with prerequisites to create a Metro Mirror relationship. If the command is issued with no flags, all volumes that are not disallowed by another configuration state, such as being a FlashCopy target, are listed.

8.6.5 Changing a Metro Mirror relationship


To modify the properties of a Metro Mirror relationship, use the command svctask chrcrelationship.

svctask chrcrelationship
The svctask chrcrelationship command is used to modify the following properties of a Metro Mirror relationship:
- Change the name of a Metro Mirror relationship.
- Add a relationship to a group.
- Remove a relationship from a group using the -force flag.

Adding a Metro Mirror relationship: When adding a Metro Mirror relationship to a Consistency Group that is not empty, the relationship must have the same state and copy direction as the group to which it is added.
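
Two common invocations, assuming the hypothetical relationship MM_Rel_1 and Consistency Group CG_W2K_MM:

   svctask chrcrelationship -name MM_Rel_1_new MM_Rel_1            (rename the relationship)
   svctask chrcrelationship -consistgrp CG_W2K_MM MM_Rel_1_new     (add the relationship to a Consistency Group)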

8.6.6 Changing a Metro Mirror Consistency Group


To change the name of a Metro Mirror Consistency Group, use the svctask chrcconsistgrp command.

svctask chrcconsistgrp
The svctask chrcconsistgrp command is used to change the name of a Metro Mirror Consistency Group.
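
For example, to rename the hypothetical Consistency Group:

   svctask chrcconsistgrp -name CG_W2K_MM_new CG_W2K_MM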

8.6.7 Starting a Metro Mirror relationship


To start a stand-alone Metro Mirror relationship, use the svctask startrcrelationship command.

svctask startrcrelationship
The svctask startrcrelationship command is used to start the copy process of a Metro Mirror relationship. When issuing the command, the copy direction can be set, if it is undefined, and optionally mark the auxiliary volume of the relationship as clean. The command fails if it is used to attempt to start a relationship that is part of a Consistency Group. This command can only be issued to a relationship that is connected. For a relationship that is idling, this command assigns a copy direction (master and auxiliary roles) and begins the copy process. Otherwise, this command restarts a previous copy process that was stopped either by a stop command or by an I/O error. If the resumption of the copy process leads to a period when the relationship is inconsistent, you must specify the -force flag when restarting the relationship. This situation can arise if, for example, the relationship was stopped, and then further writes were performed on the original master of the relationship. The use of the -force flag here is a reminder that the data on the auxiliary will become inconsistent while resynchronization (background copying) occurs, and therefore, the data is not usable for DR purposes before the background copy has completed. In the Idling state, you must specify the master volume to indicate the copy direction. In other connected states, you can provide the -primary argument, but it must match the existing setting.
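
Typical invocations for the hypothetical relationship MM_Rel_1:

   svctask startrcrelationship MM_Rel_1                            (restart a stopped relationship)
   svctask startrcrelationship -primary master MM_Rel_1            (start from Idling, setting the copy direction)
   svctask startrcrelationship -primary master -force MM_Rel_1     (restart when consistency will temporarily be lost)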

8.6.8 Stopping a Metro Mirror relationship


To stop a stand-alone Metro Mirror relationship, use the svctask stoprcrelationship command.

svctask stoprcrelationship
The svctask stoprcrelationship command is used to stop the copy process for a relationship. It can also be used to enable write access to a consistent auxiliary volume by specifying the -access flag. This command applies to a stand-alone relationship. It is rejected if it is addressed to a relationship that is part of a Consistency Group. You can issue this command to stop a relationship that is copying from master to auxiliary.

If the relationship is in an inconsistent state, any copy operation stops and does not resume until you issue an svctask startrcrelationship command. Write activity is no longer copied from the master to the auxiliary volume. For a relationship in the ConsistentSynchronized state, this command causes a consistency freeze. When a relationship is in a consistent state (that is, in the ConsistentStopped, ConsistentSynchronized, or ConsistentDisconnected state), you can use the -access parameter with the stoprcrelationship command to enable write access to the auxiliary volume.
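
Typical invocations for the hypothetical relationship MM_Rel_1:

   svctask stoprcrelationship MM_Rel_1              (stop the copy process)
   svctask stoprcrelationship -access MM_Rel_1      (stop and enable write access to the auxiliary volume)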

8.6.9 Starting a Metro Mirror Consistency Group


To start a Metro Mirror Consistency Group, use the svctask startrcconsistgrp command.

svctask startrcconsistgrp
The svctask startrcconsistgrp command is used to start a Metro Mirror Consistency Group. This command can only be issued to a Consistency Group that is connected. For a Consistency Group that is idling, this command assigns a copy direction (master and auxiliary roles) and begins the copy process. Otherwise, this command restarts a previous copy process that was stopped either by a stop command or by an I/O error.
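
For example, to start the hypothetical Consistency Group from the Idling state with the local volumes acting as the masters:

   svctask startrcconsistgrp -primary master CG_W2K_MM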

8.6.10 Stopping a Metro Mirror Consistency Group


To stop a Metro Mirror Consistency Group, use the svctask stoprcconsistgrp command.

svctask stoprcconsistgrp
The svctask stoprcconsistgrp command is used to stop the copy process for a Metro Mirror Consistency Group. It can also be used to enable write access to the auxiliary volumes in the group if the group is in a consistent state. If the Consistency Group is in an inconsistent state, any copy operation stops and does not resume until you issue the svctask startrcconsistgrp command. Write activity is no longer copied from the master to the auxiliary volumes belonging to the relationships in the group. For a Consistency Group in the ConsistentSynchronized state, this command causes a consistency freeze. When a Consistency Group is in a consistent state (for example, in the ConsistentStopped, ConsistentSynchronized, or ConsistentDisconnected state), the -access argument can be used with the svctask stoprcconsistgrp command to enable write access to the auxiliary volumes within that group.
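
Typical invocations for the hypothetical Consistency Group CG_W2K_MM:

   svctask stoprcconsistgrp CG_W2K_MM               (stop the copy process)
   svctask stoprcconsistgrp -access CG_W2K_MM       (stop and enable write access to the auxiliary volumes)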

8.6.11 Deleting a Metro Mirror relationship


To delete a Metro Mirror relationship, use the svctask rmrcrelationship command.

svctask rmrcrelationship
The svctask rmrcrelationship command is used to delete the relationship that is specified. Deleting a relationship only deletes the logical relationship between the two volumes. It does not affect the volumes themselves. If the relationship is disconnected at the time that the command is issued, the relationship is only deleted on the cluster on which the command is being run. When the clusters reconnect, then the relationship is automatically deleted on the other cluster.

Alternatively, if the clusters are disconnected, and you still want to remove the relationship on both clusters, you can issue the rmrcrelationship command independently on both of the clusters. If you delete an inconsistent relationship, the auxiliary volume becomes accessible even though it is still inconsistent. This situation is the one case in which Metro Mirror does not inhibit access to inconsistent data.
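
For example, to delete the hypothetical relationship:

   svctask rmrcrelationship MM_Rel_1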

8.6.12 Deleting a Metro Mirror Consistency Group


To delete a Metro Mirror Consistency Group, use the svctask rmrcconsistgrp command.

svctask rmrcconsistgrp
The svctask rmrcconsistgrp command is used to delete a Metro Mirror Consistency Group. This command deletes the specified Consistency Group. You can issue this command for any existing Consistency Group. If the Consistency Group is disconnected at the time that the command is issued, the Consistency Group is only deleted on the cluster on which the command is being run. When the clusters reconnect, the Consistency Group is automatically deleted on the other cluster. Alternatively, if the clusters are disconnected, and you still want to remove the Consistency Group on both clusters, you can issue the svctask rmrcconsistgrp command separately on both of the clusters. If the Consistency Group is not empty, the relationships within it are removed from the Consistency Group before the group is deleted. These relationships then become stand-alone relationships. The state of these relationships is not changed by the action of removing them from the Consistency Group.
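
For example, to delete the hypothetical Consistency Group:

   svctask rmrcconsistgrp CG_W2K_MM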

8.6.13 Reversing a Metro Mirror relationship


To reverse a Metro Mirror relationship, use the svctask switchrcrelationship command.

svctask switchrcrelationship
The svctask switchrcrelationship command is used to reverse the roles of the master and auxiliary volumes when a stand-alone relationship is in a consistent state. When issuing the command, the desired master is specified.
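
For example, a sketch of making the auxiliary volume the new master of a stand-alone relationship; the relationship name MM_REL_1 is an assumed example:

svctask switchrcrelationship -primary aux MM_REL_1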

8.6.14 Reversing a Metro Mirror Consistency Group


To reverse a Metro Mirror Consistency Group, use the svctask switchrcconsistgrp command.

svctask switchrcconsistgrp
The svctask switchrcconsistgrp command is used to reverse the roles of the master and auxiliary volumes when a Consistency Group is in a consistent state. This change is applied to all of the relationships in the Consistency Group, and when issuing the command, the desired master is specified.
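
For example, a sketch of reversing the copy direction for a whole Consistency Group; the group name CG_W2K_MM is an assumed example:

svctask switchrcconsistgrp -primary aux CG_W2K_MM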


8.6.15 Background copy


Metro Mirror paces the rate at which background copy is performed by the appropriate relationships. Background copy takes place on relationships that are in the InconsistentCopying state with a status of Online. The quota of background copy (configured on the intercluster link) is divided evenly between the nodes that are performing background copy for one of the eligible relationships. This allocation is made without regard for the number of disks for which the node is responsible. Each node in turn divides its allocation evenly between the multiple relationships performing a background copy. For intracluster relationships, each node is assigned a static quota of 25 MBps.

8.7 Global Mirror


In the following topics, we describe the Global Mirror copy service, which is an asynchronous remote copy service. It provides and maintains a consistent mirrored copy of a source volume to a target volume. Global Mirror establishes a Global Mirror relationship between two volumes of equal size. The volumes in a Global Mirror relationship are referred to as the master (source) volume and the auxiliary (target) volume, the same as with Metro Mirror. Consistency Groups can be used to maintain data integrity for dependent writes, similar to FlashCopy Consistency Groups. Global Mirror writes data to the auxiliary volume asynchronously, which means that the host receives confirmation that a write to the master volume is complete before that write has been committed to the auxiliary volume.

8.7.1 Intracluster Global Mirror


Although Global Mirror is available for intracluster, it has no functional value for production use. Intracluster Metro Mirror provides the same capability with less overhead. However, leaving this functionality in place simplifies testing and allows for client experimentation and testing (for example, to validate server failover on a single test cluster).

8.7.2 Intercluster Global Mirror


Intercluster Global Mirror operations require a pair of SVC clusters connected by a number of intercluster links. The two SVC clusters must be defined in an SVC cluster partnership to establish a fully functional Global Mirror relationship. Limit: When a local and a remote fabric are connected together for Global Mirror purposes, the ISL hop count between a local node and a remote node must not exceed seven hops.

8.7.3 Asynchronous remote copy


Global Mirror is an asynchronous remote copy technique. In asynchronous remote copy, write operations are completed on the primary site and the write acknowledgement is sent to the host before it is received at the secondary site. An update of this write operation is sent to the secondary site at a later stage, which provides the capability to perform remote copy over distances exceeding the limitations of synchronous remote copy.

The Global Mirror function provides the same function as Metro Mirror Remote Copy, but over long distance links with higher latency, without requiring the hosts to wait for the full round-trip delay of the long distance link.

Figure 8-19 shows that a write operation to the master volume is acknowledged back to the host issuing the write before the write operation is mirrored to the cache for the auxiliary volume.

Figure 8-19 Global Mirror write sequence

The Global Mirror algorithms maintain a consistent image on the auxiliary at all times. They achieve this consistent image by identifying sets of I/Os that are active concurrently at the master, assigning an order to those sets, and applying those sets of I/Os in the assigned order at the secondary. As a result, Global Mirror maintains the features of Write Ordering and Read Stability that are described in this chapter. The multiple I/Os within a single set are applied concurrently. The process that marshals the sequential sets of I/Os operates at the secondary cluster, and is therefore not subject to the latency of the long distance link. These two elements of the protocol ensure that the throughput of the total cluster can be grown by increasing cluster size, while maintaining consistency across a growing data set.

In a failover scenario, where the secondary site needs to become the master source of data, certain updates might be missing at the secondary site. Therefore, any applications that will use this data must have an external mechanism for recovering the missing updates and reapplying them, for example, a transaction log replay.

8.7.4 SVC Global Mirror features


SVC Global Mirror supports the following features:
- Asynchronous remote copy of volumes dispersed over metropolitan scale distances is supported.
- SVC implements the Global Mirror relationship between a volume pair, with each volume in the pair being managed by an SVC cluster.
- SVC supports intracluster Global Mirror, where both volumes belong to the same cluster (and I/O Group), although, as stated earlier, this functionality is better suited to Metro Mirror.
- SVC supports intercluster Global Mirror, where each volume belongs to its separate SVC cluster. A given SVC cluster can be configured for partnership with between one and three other clusters.
- Intercluster and intracluster Global Mirror can be used concurrently within a cluster for separate relationships.
- SVC does not require a control network or fabric to be installed to manage Global Mirror. For intercluster Global Mirror, the SVC maintains a control link between the two clusters. This control link is used to control the state and to coordinate the updates at either end. The control link is implemented on top of the same FC fabric connection that the SVC uses for Global Mirror I/O.
- SVC implements a configuration model that maintains the Global Mirror configuration and state through major events, such as failover, recovery, and resynchronization, to minimize user configuration action through these events.
- SVC implements flexible resynchronization support, enabling it to resynchronize volume pairs that have experienced write I/Os to both disks and to resynchronize only those regions that are known to have changed.
- Colliding writes are supported.
- An optional feature for Global Mirror permits a delay simulation to be applied on writes that are sent to auxiliary volumes.

Colliding writes
Prior to V4.3.1, the Global Mirror algorithm required that only a single write is active on any given 512 byte LBA of a volume. If a further write is received from a host while the auxiliary write is still active, even though the master write might have completed, the new host write will be delayed until the auxiliary write is complete. This restriction is needed in case a series of writes to the auxiliary have to be retried (called reconstruction). Conceptually, the data for reconstruction comes from the master volume. If multiple writes are allowed to be applied to the master for a given sector, only the most recent write will get the correct data during reconstruction, and if reconstruction is interrupted for any reason, the intermediate state of the auxiliary is inconsistent. Applications that deliver such write activity will not achieve the performance that Global Mirror is intended to support. A volume statistic is maintained about the frequency of these collisions. From V4.3.1 onward, an attempt is made to allow multiple writes to a single location to be outstanding in the Global Mirror algorithm. There is still a need for master writes to be serialized, and the intermediate states of the master data must be kept in a non-volatile journal while the writes are outstanding to maintain the correct write ordering during reconstruction. Reconstruction must never overwrite data on the auxiliary with an earlier version. The volume statistic monitoring colliding writes is now limited to those writes that are not affected by this change. Figure 8-20 on page 416 shows a colliding write sequence example.


Figure 8-20 Colliding writes example

These numbers correspond to the numbers in Figure 8-20:
(1) The first write is performed from the host to LBA X.
(2) The host is provided acknowledgment that the write is complete, even though the mirrored write to the auxiliary volume has not yet completed. The mirrored write to the auxiliary volume occurs asynchronously with respect to (1) and (2).
(3) A second write is performed from the host, also to LBA X. If this write occurs before (2), the write is written to the journal file.
(4) The host is provided acknowledgment that the second write is complete.

Delay simulation
An optional feature for Global Mirror permits a delay simulation to be applied on writes that are sent to auxiliary volumes. This feature allows testing to be performed that detects colliding writes, and therefore, this feature can be used to test an application before the full deployment of the feature. The feature can be enabled separately for intracluster and intercluster Global Mirror. You specify the delay setting by using the chcluster command and view it by using the lscluster command. The gm_intra_delay_simulation field expresses the amount of time that intracluster auxiliary I/Os are delayed. The gm_inter_delay_simulation field expresses the amount of time that intercluster auxiliary I/Os are delayed. A value of zero (0) disables the feature.
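
For example, a sketch of setting a 20 millisecond intercluster delay, disabling it again, and checking the setting; the delay value shown is an arbitrary test value:

svctask chcluster -gminterdelaysimulation 20
svctask chcluster -gminterdelaysimulation 0
svcinfo lscluster <clustername>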

Multiple Cluster Mirroring


The rules for a Global Mirror Multiple Cluster Mirroring environment are the same as the rules in a Metro Mirror environment; see 8.5.4, Multiple Cluster Mirroring on page 391.

8.7.5 Global Mirror relationship between master and auxiliary volumes


When creating a Global Mirror relationship, the master volume is initially assigned as the master, and the auxiliary volume is initially assigned as the auxiliary. This design implies that the initial copy direction is mirroring the master volume to the auxiliary volume. After the initial synchronization is complete, the copy direction can be changed, if appropriate. In the most common applications of Global Mirror, the master volume contains the production copy of the data and is used by the host application. The auxiliary volume contains the mirrored copy of the data and is used for failover in DR scenarios.

Notes: A volume can only be part of one Global Mirror relationship at a time. A volume that is a FlashCopy target cannot be part of a Global Mirror relationship.

8.7.6 Importance of write ordering


Many applications that use block storage have a requirement to survive failures, such as loss of power or a software crash, and to not lose data that existed prior to the failure. Because many applications must perform large numbers of update operations in parallel to that block storage, maintaining write ordering is key to ensuring the correct operation of applications following a disruption. An application that performs a high volume of database updates is usually designed with the concept of dependent writes. With dependent writes, it is important to ensure that an earlier write has completed before a later write is started. Reversing the order of dependent writes can undermine the application's algorithms and can lead to problems, such as detected or undetected data corruption. See 8.4.3, Consistency Groups on page 371 for more information regarding dependent writes.

8.7.7 Global Mirror Consistency Groups


Global Mirror Consistency Groups address the issue of dependent writes across volumes, where the objective is to preserve data consistency across multiple Global Mirrored volumes. Consistency Groups ensure a consistent data set, because applications have relational data spanning across multiple volumes. A Global Mirror Consistency Group can contain an arbitrary number of relationships up to the maximum number of Global Mirror relationships that is supported by the SVC cluster. Global Mirror commands can be issued to a Global Mirror Consistency Group, and thereby simultaneously to all Global Mirror relationships that are defined within that Consistency Group, or to a single Global Mirror relationship, if it is not part of a Global Mirror Consistency Group. For example, when issuing a Global Mirror start command to the Consistency Group, all of the Global Mirror relationships in the Consistency Group are started at the same time. Figure 8-21 on page 418 illustrates the concept of Global Mirror Consistency Groups. Because GM_Relationship 1 and GM_Relationship 2 are part of the Consistency Group, they can be handled as one entity. The stand-alone GM_Relationship 3 is handled separately.


Figure 8-21 Global Mirror Consistency Group

Certain uses of Global Mirror require the manipulation of more than one relationship. Global Mirror Consistency Groups can provide the ability to group relationships so that they are manipulated in unison. Global Mirror relationships within a Consistency Group can take these forms:
- Global Mirror relationships can be part of a Consistency Group, or be stand-alone and therefore handled as single instances.
- A Consistency Group can contain zero (0) or more relationships. An empty Consistency Group, with zero relationships in it, has little purpose until it is assigned its first relationship, except that it has a name.
- All of the relationships in a Consistency Group must have matching master and auxiliary clusters.

Although it is possible to use Consistency Groups to manipulate sets of relationships that do not need to satisfy these strict rules, such manipulation can lead to undesired side effects. The rules behind a Consistency Group mean that certain configuration commands are prohibited. These specific configuration commands are not prohibited if the relationship is not part of a Consistency Group.

For example, consider the case of two applications that are completely independent, yet they are placed into a single Consistency Group. If a loss of synchronization were to occur, and a background copy process is required to recover synchronization, then while this process is in progress Global Mirror rejects attempts to enable access to the auxiliary volumes of either application. If one application finishes its background copy before the other, Global Mirror still refuses to grant access to its auxiliary volume. Even though it is safe in this case, Global Mirror policy refuses access to the entire Consistency Group if any part of it is inconsistent.


Stand-alone relationships and Consistency Groups share a common configuration and state model. All of the relationships in a Consistency Group that is not empty have the same state as the Consistency Group.

8.7.8 Distribution of work among nodes


Global Mirror volumes must have their preferred nodes evenly distributed among the nodes of the clusters. Each volume within an I/O Group has a preferred node property that can be used to balance the I/O load between nodes in that group. Global Mirror also uses this property to route I/O between clusters. Figure 8-22 shows the best relationship between volumes and their preferred nodes to get the best performance.

Figure 8-22 Preferred volume Global Mirror relationship

8.7.9 Background copy performance


Background copy resources for intercluster remote copy are available within two nodes of an I/O Group to perform background copy at a maximum of 200 MBps (each data read and data written) total. The background copy performance is subject to sufficient RAID controller bandwidth. Performance is also subject to other potential bottlenecks (such as the intercluster fabric) and possible contention from host I/O for the SVC bandwidth resources. Background copy I/O will be scheduled to avoid bursts of activity that might have an adverse effect on system behavior. An entire grain of tracks on one volume will be processed at around the same time but not as a single I/O. Double buffering is used to try to take advantage of sequential performance within a grain. However, the next grain within the volume might not be scheduled for a while. Multiple grains might be copied simultaneously and might be enough to satisfy the requested rate, unless the available resources cannot sustain the requested rate. Background copy proceeds from the low LBA to the high LBA in sequence to avoid convoying conflicts with FlashCopy, which operates in the opposite direction. It is expected that background copy will not convoy conflict with sequential applications, because it tends to vary disks more often.


8.7.10 Thin-provisioned background copy


Metro Mirror and Global Mirror relationships will preserve the space-efficiency of the master. Conceptually, the background copy process detects an unallocated region of the master and sends a special zero buffer to the auxiliary. If the auxiliary volume is thin-provisioned, and the region is unallocated, the special buffer prevents a write (and, therefore, an allocation). If the auxiliary volume is not thin-provisioned, or the region in question is an allocated region of a thin-provisioned volume, a buffer of real zeros is synthesized on the auxiliary and written as normal.

8.8 Global Mirror process


There are several steps in the Global Mirror process:
1. An SVC cluster partnership is created between two SVC clusters (for intercluster Global Mirror).
2. A Global Mirror relationship is created between two volumes of the same size.
3. To manage multiple Global Mirror relationships as one entity, the relationships can be made part of a Global Mirror Consistency Group to ensure data consistency across multiple Global Mirror relationships, or simply for ease of management.
4. The Global Mirror relationship is started, and when the background copy has completed, the relationship is consistent and synchronized.
5. When synchronized, the auxiliary volume holds a copy of the production data at the master that can be used for DR.
6. To access the auxiliary volume, the Global Mirror relationship must be stopped with the access option enabled, before write I/O is submitted to the auxiliary.
7. The remote host server is mapped to the auxiliary volume, and the disk is available for I/O.
A CLI sketch of this sequence is shown after this list.
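
The following is a hedged end-to-end sketch of these steps from the CLI. All cluster, volume, and Consistency Group names are assumed examples, and the mkpartnership command must be issued on both clusters:

svctask mkpartnership -bandwidth 50 ITSO_SVC_B
svctask mkrcconsistgrp -cluster ITSO_SVC_B -name CG_GM_APP1
svctask mkrcrelationship -master GM_MASTER_1 -aux GM_AUX_1 -cluster ITSO_SVC_B -global -consistgrp CG_GM_APP1
svctask startrcconsistgrp CG_GM_APP1
svctask stoprcconsistgrp -access CG_GM_APP1   (only when auxiliary access is required, for example in a DR test)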

8.8.1 Methods of synchronization


This section describes two methods that can be used to establish a relationship.

Full synchronization after creation


Full synchronization after creation is the default method. It is the simplest method, and it requires no administrative activity apart from issuing the necessary commands. However, in certain environments, the bandwidth that is available makes this method unsuitable. Use this sequence for a single relationship: A new relationship is created (mkrcrelationship is issued) without specifying the -sync flag. A new relationship is started (startrcrelationship is issued) without the -clean flag.

Synchronized before creation


In this method, the administrator must ensure that the master and auxiliary volumes contain identical data before creating the relationship. There are two ways to ensure that the master and auxiliary volumes contain identical data:
- Both disks are created with the security delete (-fmtdisk) feature to make all data zero.
- A complete tape image (or other method of moving data) is copied from one disk to the other disk.

With this technique, do not allow I/O on either the master or auxiliary before the relationship is established. Then, the administrator must ensure that the following commands are issued:
- A new relationship is created (mkrcrelationship is issued) with the -sync flag.
- A new relationship is started (startrcrelationship is issued) without the -clean flag.

Attention: Failure to perform these steps correctly can cause Global Mirror to report the relationship as consistent when it is not, thereby creating a data loss or data integrity exposure for hosts accessing data on the auxiliary volume.
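
For example, a sketch of creating and starting a relationship that is declared as already synchronized; the volume, cluster, and relationship names are assumed examples:

svctask mkrcrelationship -master GM_MASTER_1 -aux GM_AUX_1 -cluster ITSO_SVC_B -global -sync -name GM_REL_1
svctask startrcrelationship GM_REL_1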

8.8.2 Global Mirror states and events


In this section, we explain the states of a Global Mirror relationship and the series of events that modify these states. Figure 8-23 shows an overview of the states that apply to a Global Mirror relationship in the connected state.

Figure 8-23 Global Mirror state diagram

When creating the Global Mirror relationship, you can specify whether the auxiliary volume is already in sync with the master volume, and the background copy process is then skipped. This capability is especially useful when creating Global Mirror relationships for volumes that have been created with the format option.


The following steps explain the Global Mirror state diagram (these numbers correspond to the numbers in Figure 8-23 on page 421):

Step 1
a. The Global Mirror relationship is created with the -sync option, and the Global Mirror relationship enters the ConsistentStopped state.
b. The Global Mirror relationship is created without specifying that the master and auxiliary volumes are in sync, and the Global Mirror relationship enters the InconsistentStopped state.

Step 2
a. When starting a Global Mirror relationship in the ConsistentStopped state, it enters the ConsistentSynchronized state. This state implies that no updates (write I/O) have been performed on the master volume while in the ConsistentStopped state. Otherwise, you must specify the -force option, and the Global Mirror relationship then enters the InconsistentCopying state while the background copy is started.
b. When starting a Global Mirror relationship in the InconsistentStopped state, it enters the InconsistentCopying state while the background copy is started.

Step 3
a. When the background copy completes, the Global Mirror relationship transitions from the InconsistentCopying state to the ConsistentSynchronized state.

Step 4
a. When stopping a Global Mirror relationship in the ConsistentSynchronized state, where specifying the -access option enables write I/O on the auxiliary volume, the Global Mirror relationship enters the Idling state.
b. To enable write I/O on the auxiliary volume, when the Global Mirror relationship is in the ConsistentStopped state, issue the command svctask stoprcrelationship, specifying the -access option, and the Global Mirror relationship enters the Idling state.

Step 5
a. When starting a Global Mirror relationship that is in the Idling state, you must specify the -primary argument to set the copy direction. Because no write I/O has been performed (to either the master or auxiliary volume) while in the Idling state, the Global Mirror relationship enters the ConsistentSynchronized state.
b. If write I/O has been performed to either the master or the auxiliary volume, you must specify the -force option. The Global Mirror relationship then enters the InconsistentCopying state, while the background copy is started.

If the Global Mirror relationship is intentionally stopped or experiences an error, a state transition is applied. For example, Global Mirror relationships in the ConsistentSynchronized state enter the ConsistentStopped state, and Global Mirror relationships in the InconsistentCopying state enter the InconsistentStopped state. In a case where the connection is broken between the SVC clusters in a partnership, all of the (intercluster) Global Mirror relationships enter a Disconnected state. For further information, refer to Connected versus disconnected on page 423.

Common configuration and state model: Stand-alone relationships and Consistency Groups share a common configuration and state model. All of the Global Mirror relationships in a Consistency Group that is not empty have the same state as the Consistency Group.
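
For example, hedged sketches of the commands behind steps 4 and 5 above; the relationship name GM_REL_1 is an assumed example:

svctask stoprcrelationship -access GM_REL_1                     (ConsistentSynchronized to Idling)
svctask startrcrelationship -primary master -force GM_REL_1     (Idling to InconsistentCopying if writes occurred while idling)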


State overview
The SVC defined concepts of state are key to understanding the configuration concepts. We explain them in more detail here.

Connected versus disconnected


This distinction can arise when a Global Mirror relationship is created with the two volumes in separate clusters. Under certain error scenarios, communications between the two clusters might be lost. For example, power might fail, causing one complete cluster to disappear. Alternatively, the fabric connection between the two clusters might fail, leaving the two clusters running but unable to communicate with each other. When the two clusters can communicate, the clusters and the relationships spanning them are described as connected. When they cannot communicate, the clusters and the relationships spanning them are described as disconnected. In this scenario, each cluster is left with half of the relationship, and each cluster has only a portion of the information that was available to it before. Only a subset of the normal configuration activity is available. The disconnected relationships are portrayed as having a changed state. The new states describe what is known about the relationship and which configuration commands are permitted. When the clusters can communicate again, the relationships become connected again. Global Mirror automatically reconciles the two state fragments, taking into account any configuration activity or other event that took place while the relationship was disconnected. As a result, the relationship can either return to the state that it was in when it became disconnected or it can enter another connected state. Relationships that are configured between volumes in the same SVC cluster (intracluster) will never be described as being in a disconnected state.

Consistent versus inconsistent


Relationships or Consistency Groups that contain relationships can be described as being consistent or inconsistent. The consistent or inconsistent property describes the state of the data on the auxiliary volume in relation to the data on the master volume. Consider the consistent or inconsistent property to be a property of the auxiliary volume. An auxiliary volume is described as consistent if it contains data that might have been read by a host system from the master if power had failed at an imaginary point in time while I/O was in progress, and power was later restored. This imaginary point in time is defined as the recovery point. The requirements for consistency are expressed with respect to activity at the master up to the recovery point: The auxiliary volume contains the data from all writes to the master for which the host had received successful completion and that data has not been overwritten by a subsequent write (before the recovery point). The writes are on the auxiliary and the host did not receive successful completion for these writes (that is, the host received bad completion or no completion at all), and the host subsequently performed a read from the master of that data. If that read returned successful completion and no later write was sent (before the recovery point), the auxiliary contains the same data as the data that was returned by the read from the master.


From the point of view of an application, consistency means that an auxiliary volume contains the same data as the master volume at the recovery point (the time at which the imaginary power failure occurred). If an application is designed to cope with an unexpected power failure, this guarantee of consistency means that the application will be able to use the auxiliary and begin operation just as though it had been restarted after the hypothetical power failure. Again, the application is dependent on the key properties of consistency for correct operation at the auxiliary:
- Write ordering
- Read stability

If a relationship, or a set of relationships, is inconsistent and if an attempt is made to start an application using the data in the secondaries, a number of outcomes are possible:
- The application might decide that the data is corrupt and crash or exit with an error code.
- The application might fail to detect that the data is corrupt and return erroneous data.
- The application might work without a problem.

Because of the risk of data corruption, and, in particular, undetected data corruption, Global Mirror strongly enforces the concept of consistency and prohibits access to inconsistent data. You can apply consistency as a concept to a single relationship or to a set of relationships in a Consistency Group. Write ordering is a concept that an application can maintain across a number of disks that are accessed through multiple systems, and therefore, consistency must operate across all of those disks.

When deciding how to use Consistency Groups, the administrator must consider the scope of an application's data, taking into account all of the interdependent systems that communicate and exchange information. If two programs or systems communicate and store details as a result of the information exchanged, either of the following actions might occur:
- All of the data that is accessed by the group of systems must be placed into a single Consistency Group.
- The systems must be recovered independently (each within its own Consistency Group). Then, each system must perform recovery with the other applications to become consistent with them.

Consistent versus synchronized


A copy that is consistent and up-to-date is described as synchronized. In a synchronized relationship, the master and auxiliary volumes only differ in the regions where writes are outstanding from the host. Consistency does not mean that the data is up-to-date. A copy can be consistent and yet contain data that was frozen at an earlier point in time. Write I/O might have continued to a master and not have been copied to the auxiliary. This state arises when it becomes impossible to keep up-to-date and maintain consistency. An example is a loss of communication between clusters when writing to the auxiliary. When communication is lost for an extended period of time, Global Mirror tracks the changes that occur on the master volumes, but not the order of these changes, or the details of these changes (write data). When communication is restored, it is impossible to make the auxiliary synchronized without sending write data to the auxiliary out of order and, therefore, losing consistency.


You can use two policies to cope with this situation: Make a point-in-time copy of the consistent auxiliary before allowing the auxiliary to become inconsistent. In the event of a disaster, before consistency is achieved again, the point-in-time copy target provides a consistent, though out-of-date, image. Accept the loss of consistency, and the loss of a useful auxiliary, while making it synchronized.

Detailed states
The following sections detail the states that are portrayed to the user, for either Consistency Groups or relationships. It also details the extra information that is available in each state. We described the various major states to provide guidance regarding the available configuration commands.

InconsistentStopped
InconsistentStopped is a connected state. In this state, the master is accessible for read and write I/O, but the auxiliary is inaccessible for either read or write I/O. A copy process needs to be started to make the auxiliary consistent. This state is entered when the relationship or Consistency Group was InconsistentCopying and has either suffered a persistent error or received a stop command that has caused the copy process to stop. A start command causes the relationship or Consistency Group to move to the InconsistentCopying state. A stop command is accepted, but has no effect. If the relationship or Consistency Group becomes disconnected, the auxiliary side transitions to InconsistentDisconnected. The master side transitions to IdlingDisconnected.

InconsistentCopying
InconsistentCopying is a connected state. In this state, the master is accessible for read and write I/O, but the auxiliary is inaccessible for either read or write I/O. This state is entered after a start command is issued to an InconsistentStopped relationship or Consistency Group. It is also entered when a forced start is issued to an Idling or ConsistentStopped relationship or Consistency Group. In this state, a background copy process runs, which copies data from the master to the auxiliary volume. In the absence of errors, an InconsistentCopying relationship is active, and the copy progress increases until the copy process completes. In certain error situations, the copy progress might freeze or even regress. A persistent error or stop command places the relationship or Consistency Group into the InconsistentStopped state. A start command is accepted, but has no effect. If the background copy process completes on a stand-alone relationship, or on all relationships for a Consistency Group, the relationship or Consistency Group transitions to the ConsistentSynchronized state. If the relationship or Consistency Group becomes disconnected, the auxiliary side transitions to InconsistentDisconnected. The master side transitions to IdlingDisconnected.


ConsistentStopped
ConsistentStopped is a connected state. In this state, the auxiliary contains a consistent image, but it might be out-of-date with respect to the master. This state can arise when a relationship is in the ConsistentSynchronized state and experiences an error that forces a Consistency Freeze. It can also arise when a relationship is created with a CreateConsistentFlag set to true. Normally, following an I/O error, subsequent write activity causes updates to the master, and the auxiliary is no longer synchronized (set to false). In this case, to reestablish synchronization, consistency must be given up for a period. A start command with the -force option must be used to acknowledge this situation, and the relationship or Consistency Group transitions to InconsistentCopying. Issue this command only after all of the outstanding events are repaired. In the unusual case where the master and auxiliary are still synchronized (perhaps following a user stop, and no further write I/O was received), a start command takes the relationship to ConsistentSynchronized. No -force option is required. Also, in this unusual case, a switch command is permitted that moves the relationship or Consistency Group to ConsistentSynchronized and reverses the roles of the master and the auxiliary. If the relationship or Consistency Group becomes disconnected, then the auxiliary side transitions to ConsistentDisconnected. The master side transitions to IdlingDisconnected. An informational status log is generated every time a relationship or Consistency Group enters the ConsistentStopped state with a status of Online. This log can be configured to enable an SNMP trap and provide a trigger to automation software to consider issuing a start command following a loss of synchronization.

ConsistentSynchronized
This is a connected state. In this state, the master volume is accessible for read and write I/O. The auxiliary volume is accessible for read-only I/O. Writes that are sent to the master volume are sent to both master and auxiliary volumes. Either successful completion must be received for both writes; the write must be failed to the host; or a state must transition out of the ConsistentSynchronized state before a write is completed to the host. A stop command takes the relationship to the ConsistentStopped state. A stop command with the -access parameter takes the relationship to the Idling state. A switch command leaves the relationship in the ConsistentSynchronized state, but reverses the master and auxiliary roles. A start command is accepted, but has no effect. If the relationship or Consistency Group becomes disconnected, the same transitions are made as for ConsistentStopped.

Idling
Idling is a connected state. Both master and auxiliary disks are operating in the master role. Consequently, both master and auxiliary disks are accessible for write I/O. In this state, the relationship or Consistency Group accepts a start command. Global Mirror maintains a record of regions on each disk that received write I/O while Idling. This record is used to determine what areas need to be copied following a start command.


The start command must specify the new copy direction. A start command can cause a loss of consistency if either volume in any relationship has received write I/O, which is indicated by the synchronized status. If the start command leads to loss of consistency, you must specify a -force parameter. Following a start command, the relationship or Consistency Group transitions to ConsistentSynchronized if there is no loss of consistency, or to InconsistentCopying if there is a loss of consistency. Also, while in this state, the relationship or Consistency Group accepts a -clean option on the start command. If the relationship or Consistency Group becomes disconnected, both sides change their state to IdlingDisconnected.

IdlingDisconnected
IdlingDisconnected is a disconnected state. The volume or disks in this half of the relationship or Consistency Group are all in the master role and accept read or write I/O. The major priority in this state is to recover the link and reconnect the relationship or Consistency Group. No configuration activity is possible (except for deletes or stops) until the relationship is reconnected. At that point, the relationship transitions to a connected state. The exact connected state that is entered depends on the state of the other half of the relationship or Consistency Group, which depends on these factors: The state when it became disconnected The write activity since it was disconnected The configuration activity since it was disconnected If both halves are IdlingDisconnected, the relationship becomes Idling when reconnected. While IdlingDisconnected, if a write I/O is received that causes loss of synchronization (synchronized attribute transitions from true to false) and the relationship was not already stopped (either through a user stop or a persistent error), an event is raised. This same event will also be raised when this condition occurs for the ConsistentSynchronized state.

InconsistentDisconnected
InconsistentDisconnected is a disconnected state. The volumes in this half of the relationship or Consistency Group are all in the auxiliary role and do not accept read or write I/O. No configuration activity, except for deletes, is permitted until the relationship reconnects. When the relationship or Consistency Group reconnects, the relationship becomes InconsistentCopying automatically unless either of these conditions exist: The relationship was InconsistentStopped when it became disconnected. The user issued a stop while disconnected. In either case, the relationship or Consistency Group becomes InconsistentStopped.

ConsistentDisconnected
ConsistentDisconnected is a disconnected state. The volumes in this half of the relationship or Consistency Group are all in the auxiliary role and accept read I/O but not write I/O. This state is entered from ConsistentSynchronized or ConsistentStopped when the auxiliary side of a relationship becomes disconnected.


In this state, the relationship or Consistency Group displays an attribute of FreezeTime, which is the point in time that Consistency was frozen. When entered from ConsistentStopped, it retains the time that it had in that state. When entered from ConsistentSynchronized, the FreezeTime shows the last time at which the relationship or Consistency Group was known to be consistent. This time corresponds to the time of the last successful heartbeat to the other cluster. A stop command with the -access flag set to true transitions the relationship or Consistency Group to the IdlingDisconnected state. This state allows write I/O to be performed to the auxiliary volume and is used as part of a DR scenario. When the relationship or Consistency Group reconnects, the relationship or Consistency Group becomes ConsistentSynchronized only if this state does not lead to a loss of consistency. This is the case provided that these conditions are true: The relationship was ConsistentSynchronized when it became disconnected. No writes received successful completion at the master while disconnected. Otherwise, the relationship becomes ConsistentStopped. The FreezeTime setting is retained.

Empty
This state only applies to Consistency Groups. It is the state of a Consistency Group that has no relationships and no other state information to show. It is entered when a Consistency Group is first created. It is exited when the first relationship is added to the Consistency Group, at which point, the state of the relationship becomes the state of the Consistency Group.

8.8.3 Practical use of Global Mirror


Global Mirror establishes a Global Mirror relationship between two volumes of equal size. The volumes in a Global Mirror relationship are referred to as the master (primary) volume and the auxiliary (secondary) volume. The relationship between the two copies is asymmetric. The master volume is the production volume, and updates to this copy are mirrored to the auxiliary volume. The contents of the auxiliary volume that existed prior to the relationship are lost.

Switching the copy direction: The copy direction for a Global Mirror relationship can be switched so the auxiliary volume becomes the master and the master volume becomes the auxiliary.

While the Global Mirror relationship is active, the auxiliary copy (volume) is inaccessible for host application write I/O at any time. The SVC allows read-only access to the auxiliary volume when it contains a consistent image. This read-only access is only intended to allow boot time operating system discovery to complete without error, so that any hosts at the secondary site can be ready to start up the applications with minimal delay, if required. For example, many operating systems need to read logical block address (LBA) 0 (zero) to configure a logical unit. Although read access is allowed on the auxiliary, in practice the data on the auxiliary volumes cannot be read by a host, because most operating systems write a dirty bit to the file system when it is mounted. Because this write operation is not allowed on the auxiliary volume, the volume cannot be mounted.


This access is only provided where consistency can be guaranteed. However, there is no way in which coherency can be maintained between reads that are performed at the auxiliary and later write I/Os that are performed at the master. To enable access to the auxiliary volume for host operations, you must stop the Global Mirror relationship by specifying the -access parameter. While access to the auxiliary volume for host operations is enabled, you must instruct the host to mount the volume and perform other related tasks, before the application can be started or instructed to perform a recovery process. Using an auxiliary copy demands a conscious policy decision by the administrator that a failover is required, and the tasks to be performed on the host that is involved in establishing operation on the auxiliary copy are substantial. The goal is to make this failover rapid (much faster than recovering from a backup copy), but it is not seamless. You can automate the failover process by using failover management software. The SVC provides Simple Network Management Protocol (SNMP) traps and programming (or scripting) for the command-line interface (CLI) to enable this automation.

8.8.4 Valid combinations of FlashCopy, Metro Mirror, and Global Mirror


Table 8-9 outlines the combinations of FlashCopy and Metro Mirror or Global Mirror functions that are valid for a volume.
Table 8-9 Valid volume combinations

FlashCopy           Metro Mirror or Global Mirror Master    Metro Mirror or Global Mirror Auxiliary
FlashCopy Source    Supported                               Supported
FlashCopy Target    Not supported                           Not supported

8.8.5 Global Mirror configuration limits


Table 8-10 lists the Global Mirror configuration limits.
Table 8-10 Global Mirror configuration limits

Parameter                                                       Value
Number of Global Mirror Consistency Groups per cluster         256
Number of Global Mirror relationships per cluster              8192
Number of Global Mirror relationships per Consistency Group    8192
Total volume size per I/O Group                                 1024 TB

A per I/O Group limit of 1024 TB exists on the quantity of master and auxiliary volume address spaces that can participate in Metro Mirror and Global Mirror relationships. This maximum configuration will consume all 512 MB of bitmap space for the I/O Group and allow no FlashCopy bitmap space.


8.9 Global Mirror commands


Here, we summarize several of the most important Global Mirror commands. For complete details about all of the Global Mirror commands, see IBM System Storage SAN Volume Controller: Command-Line Interface User's Guide, GC27-2287. The command set for Global Mirror contains two broad groups: Commands to create, delete, and manipulate relationships and Consistency Groups Commands that cause state changes Where a configuration command affects more than one cluster, Global Mirror performs the work to coordinate configuration activity between the clusters. Certain configuration commands can only be performed when the clusters are connected, and those commands fail with no effect when the clusters are disconnected. Other configuration commands are permitted even though the clusters are disconnected. The state is reconciled automatically by Global Mirror when the clusters are reconnected. For any given command, with one exception, a single cluster actually receives the command from the administrator. This action is significant for defining the context for a CreateRelationship (mkrcrelationship) command or a CreateConsistencyGroup (mkrcconsistgrp) command, in which case, the cluster receiving the command is called the local cluster. The exception is the command that sets clusters into a Global Mirror partnership. The administrator must issue the mkpartnership command to both the local and to the remote cluster. The commands are described here as an abstract command set. You can implement these commands in one of two ways: A command-line interface (CLI), which can be used for scripting and automation A graphical user interface (GUI), which can be used for one-off tasks

8.9.1 Listing the available SVC cluster partners


Before creating an SVC cluster partnership, we use the svcinfo lsclustercandidate command to list the clusters that are available as partnership candidates.

svcinfo lsclustercandidate
Use the svcinfo lsclustercandidate command to list the clusters that are available for setting up a two-cluster partnership. This command is a prerequisite for creating Global Mirror relationships. To display the characteristics of the cluster, use the svcinfo lscluster command, specifying the name of the cluster.

svctask chcluster
There are four parameters of the svctask chcluster command that relate to Global Mirror:

-gmlinktolerance link_tolerance
This parameter specifies the maximum period of time that the system will tolerate delay before stopping Global Mirror relationships. Specify values between 60 and 86400 seconds in increments of 10 seconds. The default value is 300. Do not change this value except under the direction of IBM Support.


-relationshipbandwidthlimit cluster_relationship_bandwidth_limit
This parameter controls the maximum rate at which any one remote copy relationship can synchronize. The default value for the relationship bandwidth limit is 25 MBps, but this value can now be specified from 1 MBps to 1000 MBps. Note that the overall limit is controlled by the -bandwidth parameter of each cluster partnership, so the partnership bandwidth will need to be raised accordingly.

Attention: Do not set this value higher than the default without first establishing that the higher bandwidth can be sustained without impacting host performance.

-gminterdelaysimulation inter_cluster_delay_simulation
This parameter specifies the number of milliseconds that I/O activity (intercluster copying to an auxiliary volume) is delayed. This parameter permits you to test performance implications before deploying Global Mirror and obtaining a long distance link. Specify a value from 0 to 100 milliseconds in 1 millisecond increments. The default value is 0. Use this argument to test each intercluster Global Mirror relationship separately.

-gmintradelaysimulation intra_cluster_delay_simulation
This parameter specifies the number of milliseconds that I/O activity (intracluster copying to an auxiliary volume) is delayed. This parameter permits you to test performance implications before deploying Global Mirror and obtaining a long distance link. Specify a value from 0 to 100 milliseconds in 1 millisecond increments. The default value is 0. Use this argument to test each intracluster Global Mirror relationship separately.

Use the svctask chcluster command to adjust these values; see the following example:

svctask chcluster -gmlinktolerance 300

You can view all of these parameter values with the svcinfo lscluster <clustername> command.

gmlinktolerance
The gmlinktolerance parameter needs a particular and detailed note. If poor response extends past the specified tolerance, a 1920 event is logged and one or more Global Mirror relationships are automatically stopped, which protects the application hosts at the primary site. During normal operation, application hosts experience a minimal effect from the response times, because the Global Mirror feature uses asynchronous replication. However, if Global Mirror operations experience degraded response times from the secondary cluster for an extended period of time, I/O operations begin to queue at the primary cluster. This queue results in an extended response time to application hosts. In this situation, the gmlinktolerance feature stops Global Mirror relationships and the application hosts' response time returns to normal.

After a 1920 event has occurred, the Global Mirror auxiliary volumes are no longer in the consistent_synchronized state until you fix the cause of the event and restart your Global Mirror relationships. For this reason, ensure that you monitor the cluster to track when these 1920 events occur.

You can disable the gmlinktolerance feature by setting the gmlinktolerance value to 0 (zero). However, the gmlinktolerance feature cannot protect applications from extended response times if it is disabled. It might be appropriate to disable the gmlinktolerance feature under the following circumstances: During SAN maintenance windows where degraded performance is expected from SAN components and application hosts can withstand extended response times from Global Mirror volumes.

During periods when application hosts can tolerate extended response times and it is expected that the gmlinktolerance feature might stop the Global Mirror relationships. For example, if you test using an I/O generator, which is configured to stress the back-end storage, the gmlinktolerance feature might detect the high latency and stop the Global Mirror relationships. Disabling the gmlinktolerance feature prevents this result at the risk of exposing the test host to extended response times. We suggest using a script to periodically monitor the Global Mirror status. Example 8-2 shows an example of a script in ksh to check the Global Mirror status.
Example 8-2 Script example

[AIX1@root] /usr/GMC > cat checkSVCgm
#!/bin/sh
#
# Description
#
# GM_STATUS        GM status variable
# HOSTsvcNAME      SVC cluster IP address
# PARA_TEST        Consistent synchronized variable
# PARA_TESTSTOPIN  Inconsistent stopped variable
# PARA_TESTSTOP    Consistent stopped variable
# IDCONS           Consistency Group ID variable
#
# variable definition
HOSTsvcNAME="128.153.3.237"
IDCONS=255
PARA_TEST="consistent_synchronized"
PARA_TESTSTOP="consistent_stopped"
PARA_TESTSTOPIN="inconsistent_stopped"
FLOG="/usr/GMC/log/gmtest.log"
VAR=0
# Start program
if [[ $1 == "" ]]
then
  CICLI="true"
fi
while $CICLI
do
  GM_STATUS=`ssh -l admin $HOSTsvcNAME svcinfo lsrcconsistgrp -delim : | awk -F: 'NR==2 {print $8 }'`
  echo "`date` Global Mirror STATUS <$GM_STATUS> " >> $FLOG
  if [[ $GM_STATUS = $PARA_TEST ]]
  then
    sleep 600
  else
    sleep 600
    GM_STATUS=`ssh -l admin $HOSTsvcNAME svcinfo lsrcconsistgrp -delim : | awk -F: 'NR==2 {print $8 }'`
    if [[ $GM_STATUS = $PARA_TESTSTOP || $GM_STATUS = $PARA_TESTSTOPIN ]]
    then
      ssh -l admin $HOSTsvcNAME svctask startrcconsistgrp -force $IDCONS
      TESTEX=`echo $?`
      echo "`date` Global Mirror RESTARTED.......... with RC=$TESTEX " >> $FLOG
    fi
    GM_STATUS=`ssh -l admin $HOSTsvcNAME svcinfo lsrcconsistgrp -delim : | awk -F: 'NR==2 {print $8 }'`
    if [[ $GM_STATUS = $PARA_TESTSTOP || $GM_STATUS = $PARA_TESTSTOPIN ]]
    then
      echo "`date` ERROR Global Mirror restart failed <$GM_STATUS>"
    else
      echo "`date` Global Mirror restarted <$GM_STATUS>"
    fi
    sleep 600
  fi
  ((VAR+=1))
done

The script in Example 8-2 on page 432 performs these functions:
- Check the Global Mirror status every 600 seconds.
- If the status is ConsistentSynchronized, wait another 600 seconds and test again.
- If the status is ConsistentStopped or InconsistentStopped, wait another 600 seconds and then try to restart Global Mirror.
- If the status remains ConsistentStopped or InconsistentStopped, it is likely that an associated 1920 event exists, which means that we might have a performance problem.

Waiting 600 seconds before restarting Global Mirror can give the SVC enough time to deliver the high workload that is requested by the server. Because Global Mirror has been stopped for 10 minutes (600 seconds), the auxiliary copy is now out of date by this amount of time and must be resynchronized.

Sample script: The script described in Example 8-2 on page 432 is supplied as is.

A 1920 event indicates that one or more of the SAN components are unable to provide the performance that is required by the application hosts. This situation can be temporary (for example, a result of a maintenance activity) or permanent (for example, a result of a hardware failure or an unexpected host I/O workload). If 1920 events are occurring, it can be necessary to use a performance monitoring and analysis tool, such as the IBM Tivoli Storage Productivity Center, to assist in identifying and resolving the problem.

8.9.2 Creating an SVC cluster partnership


To create an SVC cluster partnership, use the svctask mkpartnership command.

svctask mkpartnership
Use the svctask mkpartnership command to establish a one-way Global Mirror partnership between the local cluster and a remote cluster. To establish a fully functional Global Mirror partnership, you must issue this command on both clusters. This step is a prerequisite for creating Global Mirror relationships between volumes on the SVC clusters. When creating the partnership, you can specify the bandwidth to be used by the background copy process between the local and the remote SVC cluster; if it is not specified, the bandwidth defaults to 50 MBps. The bandwidth must be set to a value that is less than or equal to the bandwidth that can be sustained by the intercluster link.
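
For example, a minimal sketch of establishing the partnership from the CLI; the cluster names ITSO_SVC_A and ITSO_SVC_B and the 50 MBps background copy bandwidth are assumed values for illustration only:

Issued on ITSO_SVC_A: svctask mkpartnership -bandwidth 50 ITSO_SVC_B
Issued on ITSO_SVC_B: svctask mkpartnership -bandwidth 50 ITSO_SVC_A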

Background copy bandwidth effect on foreground I/O latency


The background copy bandwidth determines the rate at which the background copy will be attempted for Global Mirror. The background copy bandwidth can affect foreground I/O latency in one of three ways: The following result can occur if the background copy bandwidth is set too high compared to the Global Mirror intercluster link capacity: The background copy I/Os can back up on the Global Mirror intercluster link. There is a delay in the synchronous auxiliary writes of foreground I/Os. The foreground I/O latency will increase as perceived by applications. If the background copy bandwidth is set too high for the storage at the primary site, background copy read I/Os overload the primary storage and delay foreground I/Os. If the background copy bandwidth is set too high for the storage at the secondary site, background copy writes at the secondary overload the secondary storage and again delay the synchronous secondary writes of foreground I/Os. To set the background copy bandwidth optimally, make sure that you consider all three resources (the primary storage, the intercluster link bandwidth, and the secondary storage). Provision the most restrictive of these three resources between the background copy bandwidth and the peak foreground I/O workload. Perform this provisioning by calculation or, alternatively, by determining experimentally how much background copy can be allowed before the foreground I/O latency becomes unacceptable and then reducing the background copy to accommodate peaks in workload and an additional safety margin.

svctask chpartnership
To change the bandwidth that is available for background copy in an SVC cluster partnership, use the svctask chpartnership command to specify the new bandwidth.
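For example, to lower the background copy bandwidth of an existing partnership with the hypothetical remote cluster ITSO-CLS2 to 30 MBps, a command similar to the following one can be used:

svctask chpartnership -bandwidth 30 ITSO-CLS2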

8.9.3 Creating a Global Mirror Consistency Group


To create a Global Mirror Consistency Group, use the svctask mkrcconsistgrp command.

svctask mkrcconsistgrp
Use the svctask mkrcconsistgrp command to create a new, empty Global Mirror Consistency Group. The Global Mirror Consistency Group name must be unique across all Consistency Groups that are known to the clusters owning this Consistency Group. If the Consistency Group involves two clusters, the clusters must be in communication throughout the creation process. The new Consistency Group does not contain any relationships and will be in the Empty state. You can add Global Mirror relationships to the group, either upon creation or afterward, by using the svctask chrelationship command.
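As a sketch, the following command creates an empty Consistency Group named CG_W2K3_GM that spans the local cluster and a remote cluster named ITSO-CLS2 (both names are hypothetical and used only for illustration):

svctask mkrcconsistgrp -cluster ITSO-CLS2 -name CG_W2K3_GM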

8.9.4 Creating a Global Mirror relationship


To create a Global Mirror relationship, use the svctask mkrcrelationship command. Optional parameter: If you do not use the -global optional parameter, a Metro Mirror relationship will be created instead of a Global Mirror relationship.


svctask mkrcrelationship
Use the svctask mkrcrelationship command to create a new Global Mirror relationship. This relationship persists until it is deleted. The auxiliary volume must be equal in size to the master volume or the command will fail, and if both volumes are in the same cluster, they must both be in the same I/O Group. The master and auxiliary volume cannot be in an existing relationship, and they cannot be the target of a FlashCopy mapping. This command returns the new relationship (relationship_id) when successful. When creating the Global Mirror relationship, you can add it to a Consistency Group that already exists, or it can be a stand-alone Global Mirror relationship if no Consistency Group is specified. To check whether the master or auxiliary volumes comply with the prerequisites to participate in a Global Mirror relationship, use the svcinfo lsrcrelationshipcandidate command, as shown in svcinfo lsrcrelationshipcandidate on page 435.
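For illustration, assuming a master volume named GM_Master_1 on the local cluster, an auxiliary volume named GM_Aux_1 on the hypothetical remote cluster ITSO-CLS2, and the Consistency Group CG_W2K3_GM (all names are examples only), a Global Mirror relationship that is added to the group at creation time might look similar to this command:

svctask mkrcrelationship -master GM_Master_1 -aux GM_Aux_1 -cluster ITSO-CLS2 -consistgrp CG_W2K3_GM -global -name GM_Rel_1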

svcinfo lsrcrelationshipcandidate
Use the svcinfo lsrcrelationshipcandidate command to list the available volumes that are eligible to form a Global Mirror relationship. When issuing the command, you can specify the master volume name and auxiliary cluster to list candidates that comply with the prerequisites to create a Global Mirror relationship. If the command is issued with no parameters, all volumes that are not disallowed by another configuration state, such as being a FlashCopy target, are listed.
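For example, to list eligible auxiliary volumes for the hypothetical master volume GM_Master_1 on the hypothetical remote cluster ITSO-CLS2, a command similar to the following one can be used (issuing the command with no parameters lists all eligible volumes):

svcinfo lsrcrelationshipcandidate -master GM_Master_1 -aux ITSO-CLS2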

8.9.5 Changing a Global Mirror relationship


To modify the properties of a Global Mirror relationship, use the svctask chrcrelationship command.

svctask chrcrelationship
Use the svctask chrcrelationship command to modify the following properties of a Global Mirror relationship:
- Change the name of a Global Mirror relationship.
- Add a relationship to a group.
- Remove a relationship from a group using the -force flag.

Adding a Global Mirror relationship: When adding a Global Mirror relationship to a Consistency Group that is not empty, the relationship must have the same state and copy direction as the group to be added to it.
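For example, assuming a stand-alone relationship named GM_Rel_1 and a Consistency Group named CG_W2K3_GM (hypothetical names), the relationship can be renamed and then added to the group with commands similar to these:

svctask chrcrelationship -name GM_Rel_1_new GM_Rel_1
svctask chrcrelationship -consistgrp CG_W2K3_GM GM_Rel_1_new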

8.9.6 Changing a Global Mirror Consistency Group


To change the name of a Global Mirror Consistency Group, use the following command.

svctask chrcconsistgrp
Use the svctask chrcconsistgrp command to change the name of a Global Mirror Consistency Group.
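For example, the hypothetical Consistency Group CG_W2K3_GM can be renamed with a command similar to this one:

svctask chrcconsistgrp -name CG_W2K3_GM_new CG_W2K3_GM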


8.9.7 Starting a Global Mirror relationship


To start a stand-alone Global Mirror relationship, use the following command.

svctask startrcrelationship
Use the svctask startrcrelationship command to start the copy process of a Global Mirror relationship. When issuing the command, you can set the copy direction if it is undefined, and, optionally, you can mark the auxiliary volume of the relationship as clean. The command fails if it is used as an attempt to start a relationship that is already a part of a Consistency Group. You can only issue this command to a relationship that is connected. For a relationship that is idling, this command assigns a copy direction (master and auxiliary roles) and begins the copy process. Otherwise, this command restarts a previous copy process that was stopped either by a stop command or by an I/O error. If the resumption of the copy process leads to a period when the relationship is inconsistent, you must specify the -force parameter when restarting the relationship. This situation can arise if, for example, the relationship was stopped and then further writes were performed on the original master of the relationship. The use of the -force parameter here is a reminder that the data on the auxiliary will become inconsistent while resynchronization (background copying) takes place and, therefore, is unusable for DR purposes before the background copy has completed. In the Idling state, you must specify the master volume to indicate the copy direction. In other connected states, you can provide the -primary argument, but it must match the existing setting.
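For example, to start the hypothetical stand-alone relationship GM_Rel_1 from the Idling state with the master volume as the copy source, a command similar to the following one can be used (add the -force parameter if the resumption of the copy process leads to a period of inconsistency):

svctask startrcrelationship -primary master GM_Rel_1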

8.9.8 Stopping a Global Mirror relationship


To stop a stand-alone Global Mirror relationship, use the svctask stoprcrelationship command.

svctask stoprcrelationship
Use the svctask stoprcrelationship command to stop the copy process for a relationship. You can also use this command to enable write access to a consistent auxiliary volume by specifying the -access parameter. This command applies to a stand-alone relationship. It is rejected if it is addressed to a relationship that is part of a Consistency Group. You can issue this command to stop a relationship that is copying from master to auxiliary. If the relationship is in an inconsistent state, any copy operation stops and does not resume until you issue an svctask startrcrelationship command. Write activity is no longer copied from the master to the auxiliary volume. For a relationship in the ConsistentSynchronized state, this command causes a Consistency Freeze. When a relationship is in a consistent state (that is, in the ConsistentStopped, ConsistentSynchronized, or ConsistentDisconnected state), you can use the -access parameter with the svctask stoprcrelationship command to enable write access to the auxiliary volume.
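For example, to stop the hypothetical stand-alone relationship GM_Rel_1 and enable write access to its consistent auxiliary volume, a command similar to this one can be used:

svctask stoprcrelationship -access GM_Rel_1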


8.9.9 Starting a Global Mirror Consistency Group


To start a Global Mirror Consistency Group, use the svctask startrcconsistgrp command.

svctask startrcconsistgrp
Use the svctask startrcconsistgrp command to start a Global Mirror Consistency Group. You can only issue this command to a Consistency Group that is connected. For a Consistency Group that is idling, this command assigns a copy direction (master and auxiliary roles) and begins the copy process. Otherwise, this command restarts a previous copy process that was stopped either by a stop command or by an I/O error.
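For example, to start the hypothetical Consistency Group CG_W2K3_GM with the master volumes as the copy source, a command similar to this one can be used:

svctask startrcconsistgrp -primary master CG_W2K3_GM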

8.9.10 Stopping a Global Mirror Consistency Group


To stop a Global Mirror Consistency Group, use the svctask stoprcconsistgrp command.

svctask stoprcconsistgrp
Use the svctask stoprcconsistgrp command to stop the copy process for a Global Mirror Consistency Group. You can also use this command to enable write access to the auxiliary volumes in the group if the group is in a consistent state. If the Consistency Group is in an inconsistent state, any copy operation stops and does not resume until you issue the svctask startrcconsistgrp command. Write activity is no longer copied from the master to the auxiliary volumes, which belong to the relationships in the group. For a Consistency Group in the ConsistentSynchronized state, this command causes a Consistency Freeze. When a Consistency Group is in a consistent state (for example, in the ConsistentStopped, ConsistentSynchronized, or ConsistentDisconnected state), you can use the -access parameter with the svctask stoprcconsistgrp command to enable write access to the auxiliary volumes within that group.
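For example, to stop the hypothetical Consistency Group CG_W2K3_GM and enable write access to its auxiliary volumes, a command similar to this one can be used:

svctask stoprcconsistgrp -access CG_W2K3_GM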

8.9.11 Deleting a Global Mirror relationship


To delete a Global Mirror relationship, use the svctask rmrcrelationship command.

svctask rmrcrelationship
Use the svctask rmrcrelationship command to delete the relationship that is specified. Deleting a relationship only deletes the logical relationship between the two volumes. It does not affect the volumes themselves. If the relationship is disconnected at the time that the command is issued, the relationship is only deleted on the cluster on which the command is being run. When the clusters reconnect, the relationship is automatically deleted on the other cluster. Alternatively, if the clusters are disconnected, and you still want to remove the relationship on both clusters, you can issue the rmrcrelationship command independently on both of the clusters. A relationship cannot be deleted if it is part of a Consistency Group. You must first remove the relationship from the Consistency Group. If you delete an inconsistent relationship, the auxiliary volume becomes accessible even though it is still inconsistent. This situation is the one case in which Global Mirror does not inhibit access to inconsistent data.
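For example, the hypothetical relationship GM_Rel_1 can be deleted with a command similar to this one:

svctask rmrcrelationship GM_Rel_1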

8.9.12 Deleting a Global Mirror Consistency Group


To delete a Global Mirror Consistency Group, use the svctask rmrcconsistgrp command.

svctask rmrcconsistgrp
Use the svctask rmrcconsistgrp command to delete a Global Mirror Consistency Group. This command deletes the specified Consistency Group. You can issue this command for any existing Consistency Group. If the Consistency Group is disconnected at the time that the command is issued, the Consistency Group is only deleted on the cluster on which the command is being run. When the clusters reconnect, the Consistency Group is automatically deleted on the other cluster. Alternatively, if the clusters are disconnected, and you still want to remove the Consistency Group on both clusters, you can issue the svctask rmrcconsistgrp command separately on both of the clusters. If the Consistency Group is not empty, the relationships within it are removed from the Consistency Group before the group is deleted. These relationships then become stand-alone relationships. The state of these relationships is not changed by the action of removing them from the Consistency Group.
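For example, the hypothetical Consistency Group CG_W2K3_GM can be deleted with a command similar to this one:

svctask rmrcconsistgrp CG_W2K3_GM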

8.9.13 Reversing a Global Mirror relationship


To reverse a Global Mirror relationship, use the svctask switchrcrelationship command.

svctask switchrcrelationship
Use the svctask switchrcrelationship command to reverse the roles of the master volume and the auxiliary volume when a stand-alone relationship is in a consistent state; when issuing the command, the desired master needs to be specified.
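For example, to make the auxiliary volume the new master of the hypothetical relationship GM_Rel_1, a command similar to this one can be used:

svctask switchrcrelationship -primary aux GM_Rel_1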

8.9.14 Reversing a Global Mirror Consistency Group


To reverse a Global Mirror Consistency Group, use the svctask switchrcconsistgrp command.

svctask switchrcconsistgrp
Use the svctask switchrcconsistgrp command to reverse the roles of the master volume and the auxiliary volume when a Consistency Group is in a consistent state. This change is applied to all of the relationships in the Consistency Group, and when issuing the command, the desired master needs to be specified.
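For example, to make the auxiliary volumes the new masters for all relationships in the hypothetical Consistency Group CG_W2K3_GM, a command similar to this one can be used:

svctask switchrcconsistgrp -primary aux CG_W2K3_GM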


Chapter 9. SAN Volume Controller operations using the command-line interface


In this chapter we describe operational management, using the command-line interface (CLI) to demonstrate both normal and advanced operations. You can use either the CLI or the GUI to manage IBM System Storage SAN Volume Controller (SVC) operations; we use the CLI in this chapter. You can script these operations, and we think it is easier to document the scripts using the CLI. This chapter assumes a fully functional SVC environment.


9.1 Normal operations using CLI


In the following topics, we describe the commands that best represent normal operations.

9.1.1 Command syntax and online help


Two major command sets are available:
- The svcinfo command set allows you to query the various components within the SVC environment.
- The svctask command set allows you to make changes to the various components within the SVC.

When the command syntax is shown, you will see certain parameters in square brackets, for example [parameter]. This indicates that the parameter is optional in most, if not all, instances. Any information that is not in square brackets is required information. You can view the syntax of a command by entering one of the following commands:
- svcinfo -?: Shows a complete list of information commands.
- svctask -?: Shows a complete list of task commands.
- svcinfo commandname -?: Shows the syntax of information commands.
- svctask commandname -?: Shows the syntax of task commands.
- svcinfo commandname -filtervalue?: Shows the filters you can use to reduce the output of the information commands.

Help: You can also use -h instead of -?, for example, the svcinfo -h or svctask commandname -h command.

If you look at the syntax of the command by typing svcinfo command name -?, you often see -filter listed as a parameter. Be aware that the correct parameter is -filtervalue.

Tip: You can use the up and down arrow keys on your keyboard to recall commands that were recently issued. Then, you can use the left and right, Backspace, and Delete keys to edit commands before you resubmit them.

Using reverse-i-search
If you work on your SVC with the same PuTTY session for many hours and enter many commands, scrolling back to find a previous or similar command can be time-intensive. In this case, reverse-i-search can help you quickly and easily find any command that you already issued in your command history by using the Ctrl+r keys. Ctrl+r allows you to interactively search through the command history as you type. Pressing Ctrl+r at an empty command prompt gives you a prompt as shown in Example 9-1.
Example 9-1 Using reverse-i-search

IBM_2145:ITSO-CLS5:admin>svcinfo lsarray
mdisk_id mdisk_name  status mdisk_grp_id mdisk_grp_name               capacity raid_status raid_level redundancy strip_size tier
298      SDD-Array_1 online 0            ITSO-Storage_Pool-Multi_Tier 135.7GB  online      raid10     1          256        generic_ssd
(reverse-i-search)`sv': svcinfo lsarray


As shown in Example 9-1 on page 440, we had executed an svcinfo lsarray command. By then pressing Ctrl+r and typing sv, the command we needed was recalled from history.

9.2 Working with managed disks and disk controller systems


This section details the various configuration and administration tasks that you can perform on the managed disks (MDisks) within the SVC environment and the tasks that you can perform at a disk controller level.

9.2.1 Viewing disk controller details


Use the svcinfo lscontroller command to display summary information about all available back-end storage systems. To display more detailed information about a specific controller, run the command again and append the controller name parameter, for example, controller id 0, as shown in Example 9-2.
Example 9-2 svcinfo lscontroller command

IBM_2145:ITSO_SVC_4:admin>svcinfo lscontroller 0 id 0 controller_name ITSO_XIV_01 WWNN 50017380022C0000 mdisk_link_count 10 max_mdisk_link_count 10 degraded no vendor_id IBM product_id_low 2810XIVproduct_id_high LUN-0 product_revision 10.1 ctrl_s/n allow_quorum yes WWPN 50017380022C0170 path_count 2 max_path_count 4 WWPN 50017380022C0180 path_count 2 max_path_count 2 WWPN 50017380022C0190 path_count 4 max_path_count 6 WWPN 50017380022C0182 path_count 4 max_path_count 12 WWPN 50017380022C0192 path_count 4 max_path_count 6 WWPN 50017380022C0172 path_count 4 max_path_count 6


9.2.2 Renaming a controller


Use the svctask chcontroller command to change the name of a storage controller. To verify the change, run the svcinfo lscontroller command. Example 9-3 shows both of these commands.
Example 9-3 svctask chcontroller command

IBM_2145:ITSO-CLS1:admin>svctask chcontroller -name DS4500 controller0 IBM_2145:ITSO-CLS1:admin>svcinfo lscontroller -delim , id,controller_name,ctrl_s/n,vendor_id,product_id_low,product_id_high 0,DS4500,,IBM ,1742-900, 1,DS4700,,IBM ,1814 , FAStT This command renames the controller named controller0 to DS4500. Choosing a new name: The chcontroller command specifies the new name first. You can use letters A to Z, a to z, numbers 0 to 9, the dash (-), and the underscore (_). The new name can be between one and 63 characters in length. However, the new name cannot start with a number, dash, or the word controller (because this prefix is reserved for SVC assignment only).

9.2.3 Discovery status


Use the svcinfo lsdiscoverystatus command, as shown in Example 9-4, to determine if a discovery operation is in progress. The output of this command is a status of active or inactive.
Example 9-4 lsdiscoverystatus command

IBM_2145:ITSO-CLS5:admin>svcinfo lsdiscoverystatus id scope IO_group_id IO_group_name status 0 fc_fabric inactive 1 sas_iogrp 0 io_grp0 inactive This command displays the state of all discoveries in the cluster. During discovery, the system updates the drive and MDisk records. You must wait until the discovery has finished and is inactive before you attempt to use the system. This command displays one of the following results: active: There is a discovery operation in progress at the time that the command is issued. inactive: There are no discovery operations in progress at the time that the command is issued.

9.2.4 Discovering MDisks


In general, the cluster detects the MDisks automatically when they appear in the network. However, certain Fibre Channel (FC) controllers do not send the required Small Computer System Interface (SCSI) primitives that are necessary to automatically discover the new MDisks. If new storage has been attached and the cluster has not detected it, it might be necessary to run this command before the cluster can detect the new MDisks.


Use the svctask detectmdisk command to scan for newly added MDisks (Example 9-5).
Example 9-5 svctask detectmdisk

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk

To check whether any newly added MDisks were successfully detected, run the svcinfo lsmdisk command and look for new unmanaged MDisks. If the disks do not appear, check that the disk is appropriately assigned to the SVC in the disk subsystem, and that the zones are set up properly.

Note: If you have assigned a large number of logical unit numbers (LUNs) to your SVC, the discovery process can take time. Check several times, using the svcinfo lsmdisk command, that all of the MDisks you were expecting are present.

When all of the disks allocated to the SVC are seen from the SVC cluster, the following procedure is a useful way to verify which MDisks are unmanaged and ready to be added to a storage pool. Perform the following steps to display MDisks:
1. Enter the svcinfo lsmdiskcandidate command, as shown in Example 9-6. This command displays all detected MDisks that are not currently part of a storage pool.
Example 9-6 svcinfo lsmdiskcandidate command IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskcandidate id 0 1 2 . .

Alternatively, you can list all MDisks (managed or unmanaged) by issuing the svcinfo lsmdisk command, as shown in Example 9-7.
Example 9-7 svcinfo lsmdisk command IBM_2145:ITSO-CLS5:admin>svcinfo lsmdisk -filtervalue controller_name=ITSO-4700 id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID tier 43 mdisk43 online managed 1 ITSO-Storage_Pool-Single_Tier 66.8GB 0000000000000000 ITSO-4700 600a0b8000510b8a000003f54b 1fc84b00000000000000000000000000000000 generic_hdd 61 mdisk61 online managed 0 ITSO-Storage_Pool-Multi_Tier 66.8GB 0000000000000001 ITSO-4700 600a0b8000510f3a000003ee4b 1fc8c900000000000000000000000000000000 generic_hdd 73 mdisk73 online managed 2 STGPool_DS4700 66.8GB 0000000000000002 ITSO-4700 600a0b8000510b8a000003f84b 1fc8db00000000000000000000000000000000 generic_hdd 80 mdisk80 online managed 2 STGPool_DS4700 66.8GB 0000000000000003 ITSO-4700 600a0b8000510f3a000003f34b 1fc96700000000000000000000000000000000 generic_hdd 93 mdisk93 online unmanaged 66.8GB 0000000000000004 ITSO-4700 600a0b8000510b8a0000049e4b


From this output, you can see additional information about each MDisk (such as the current status). For the purpose of our current task, we are only interested in the unmanaged disks, because they are candidates for a storage pool (in our case all MDisks). Tip: The -delim parameter collapses output instead of wrapping text over multiple lines. 2. If not all of the MDisks that you expected are visible, rescan the available FC network by entering the svctask detectmdisk command, as shown in Example 9-8.
Example 9-8 svctask detectmdisk IBM_2145:ITSO-CLS1:admin>svctask detectmdisk

3. If you run the svcinfo lsmdiskcandidate command again and your MDisk or MDisks are still not visible, check that the LUNs from your subsystem have been properly assigned to the SVC and that appropriate zoning is in place (for example, the SVC can see the disk subsystem). See Chapter 3, Planning and configuration on page 57 for details about setting up your storage area network (SAN) fabric.

9.2.5 Viewing MDisk information


When viewing information about the MDisks (managed or unmanaged), we can use the svcinfo lsmdisk command to display overall summary information about all available managed disks. To display more detailed information about a specific MDisk, run the command again and append the -mdisk name parameter (for example, mdisk0). The overview command is svcinfo lsmdisk -delim, as shown in Example 9-9. The summary for an individual MDisk is svcinfo lsmdisk (name/ID of the MDisk from which you want the information), as shown in Example 9-10.
Example 9-9 svcinfo lsmdisk command

IBM_2145:ITSO-CLS5:admin>svcinfo lsmdisk -delim : id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_nam e:UID:tier 0:mdisk0:online:unmanaged:::47.0GB:0000000000000000:controller2:60050768017f0000a8 0000000000000000000000000000000000000000000000:generic_hdd . the remaining line has been removed for brevity . 61:mdisk61:online:managed:0:ITSO-Storage_Pool-Multi_Tier:66.8GB:0000000000000001:I TSO-4700:600a0b8000510f3a000003ee4b1fc8c900000000000000000000000000000000:generic_ hdd . the remaining line has been removed for brevity . 298:SSD-Array_1:online:array:0:ITSO-Storage_Pool-Multi_Tier:135.7GB::::generic_ssd Example 9-10 shows a summary for a single MDisk.
Example 9-10 Usage of the command svcinfo lsmdisk (ID)

IBM_2145:ITSO-CLS5:admin>svcinfo lsmdisk 61 id 61


name mdisk61 status online mode managed mdisk_grp_id 0 mdisk_grp_name ITSO-Storage_Pool-Multi_Tier capacity 66.8GB quorum_index 1 block_size 512 controller_name ITSO-4700 ctrl_type 4 ctrl_WWNN 200600A0B8510B8A controller_id 12 path_count 2 max_path_count 2 ctrl_LUN_# 0000000000000001 UID 600a0b8000510f3a000003ee4b1fc8c900000000000000000000000000000000 preferred_WWPN 200700A0B8510B8B active_WWPN 200700A0B8510B8B fast_write_state empty raid_status raid_level redundancy strip_size spare_goal spare_protection_min balanced tier generic_hdd

9.2.6 Renaming an MDisk


Use the svctask chmdisk command to change the name of an MDisk. When using the command, be aware that the new name comes first, followed by the ID or name of the MDisk being renamed. Use this format: svctask chmdisk -name (new name) (current ID/name). Use the svcinfo lsmdisk command to verify the change. Example 9-11 shows both of these commands.
Example 9-11 svctask chmdisk command

IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name mdisk_6 mdisk6 This command renamed the MDisk named mdisk6 to mdisk_6. The chmdisk command: The chmdisk command specifies the new name first. You can use letters A to Z, a to z, numbers 0 to 9, the dash (-), and the underscore (_). The new name can be between one and 63 characters in length. However, the new name cannot start with a number, dash, or the word mdisk (because this prefix is reserved for SVC assignment only).

9.2.7 Including an MDisk


If a significant number of errors occur on an MDisk, the SVC automatically excludes it. These errors can result from a hardware problem, a SAN problem, or the result of poorly planned maintenance. If it is a hardware fault, you can receive a Simple Network Management Protocol (SNMP) alert about the state of the disk subsystem (before the disk was excluded),
and you can undertake preventive maintenance. If not, the hosts that were using virtual disks (VDisks), which used the excluded MDisk, now have I/O errors. By running the svcinfo lsmdisk command, you can see that mdisk61 is excluded in Example 9-12.
Example 9-12 svcinfo lsmdisk command: Excluded MDisk

IBM_2145:ITSO-CLS5:admin>svcinfo lsmdisk -delim : id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_nam e:UID:tier 0:mdisk0:online:unmanaged:::47.0GB:0000000000000000:controller2:60050768017f0000a8 0000000000000000000000000000000000000000000000:generic_hdd . the remaining line has been removed for brevity . 61:mdisk61:excluded:managed:0:ITSO-Storage_Pool-Multi_Tier:66.8GB:0000000000000001 :ITSO-4700:600a0b8000510f3a000003ee4b1fc8c900000000000000000000000000000000:generi c_hdd After taking the necessary corrective action to repair the MDisk (for example, replace the failed disk, repair the SAN zones, and so on), we need to include the MDisk again by issuing the svctask includemdisk command (Example 9-13), because the SVC cluster does not include the MDisk automatically.
Example 9-13 svctask includemdisk

IBM_2145:ITSO-CLS5:admin>svctask includemdisk mdisk61 Running the svcinfo lsmdisk command again shows mdisk61 online again; see Example 9-14.
Example 9-14 svcinfo lsmdisk command: Verifying that MDisk is included

IBM_2145:ITSO-CLS5:admin>svcinfo lsmdisk -delim : id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_nam e:UID:tier 0:mdisk0:online:unmanaged:::47.0GB:0000000000000000:controller2:60050768017f0000a8 0000000000000000000000000000000000000000000000:generic_hdd . the remaining line has been removed for brevity . 61:mdisk61:online:managed:0:ITSO-Storage_Pool-Multi_Tier:66.8GB:0000000000000001:I TSO-4700:600a0b8000510f3a000003ee4b1fc8c900000000000000000000000000000000:generic_ hdd

9.2.8 Adding MDisks to a storage pool


If you created an empty storage pool, or if you simply want to assign additional MDisks to an already configured storage pool, you can use the svctask addmdisk command to populate the storage pool (Example 9-15).
Example 9-15 svctask addmdisk command

IBM_2145:ITSO-CLS5:admin>svctask addmdisk -mdisk mdisk61 ITSO-Storage_Pool-Multi_Tier


You can only add unmanaged MDisks to a storage pool. This command adds the MDisk named mdisk61 to the storage pool named ITSO-Storage_Pool-Multi_Tier. Important: Do not add this MDisk to a storage pool if you want to create an image mode volume from the MDisk that you are adding. As soon as you add an MDisk to a storage pool it becomes managed, and extent mapping is not necessarily one-to-one anymore.

9.2.9 Showing MDisks in a storage pool


Use the svcinfo lsmdisk -filtervalue command, as shown in Example 9-16, to see which MDisks are part of a specific storage pool. This command shows all of the MDisks that are part of the storage pools whose names match the filter ITSO-Storage_*.
Example 9-16 svcinfo lsmdisk -filtervalue: Mdisks in MDG

IBM_2145:ITSO-CLS5:admin>svcinfo lsmdisk -filtervalue mdisk_grp_name=ITSO-Storage_* id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID tier 43 mdisk43 online managed 1 ITSO-Storage_Pool-Single_Tier 66.8GB 0000000000000000 ITSO-4700 600a0b8000510b8a000003f54b1fc84b00000000000000000000000000000000 generic_hdd 61 mdisk61 online managed 0 ITSO-Storage_Pool-Multi_Tier 66.8GB 0000000000000001 ITSO-4700 600a0b8000510f3a000003ee4b1fc8c900000000000000000000000000000000 generic_hdd 298 SDD-Array_1 online array 0 ITSO-Storage_Pool-Multi_Tier 135.7GB generic_ssd As you can see in Example 9-16, with this command you will be able to see all the MDisks present in the storage pools named ITSO-Storage_* where the asterisk (*) is a wild card.

9.2.10 Working with a storage pool


Before we can create any volumes on the SVC cluster, we need to virtualize the allocated storage that is assigned to the SVC. After LUNs have been assigned to the SVC as managed disks, we cannot start using them until they are members of a storage pool. Therefore, one of our first operations is to create a storage pool where we can place our MDisks. This section describes the operations using MDisks and the storage pool. It explains the tasks that we can perform at the storage pool level.

9.2.11 Creating a storage pool


After a successful login to the CLI interface of the SVC, we create the storage pool. Using the svctask mkmdiskgrp command, create a storage pool, as shown in Example 9-17.
Example 9-17 svctask mkmdiskgrp

IBM_2145:ITSO-CLS5:admin>svctask mkmdiskgrp -name ITSO-Storage_Pool-Single_Tier -ext 256 MDisk Group, id [1], successfully created


This command creates a storage pool called ITSO-Storage_Pool-Single_Tier. The extent size that is used within this group is 256 MB. We have not added any MDisks to the storage pool yet, so it is an empty storage pool. You can add unmanaged MDisks and create the storage pool in the same command. Use the command svctask mkmdiskgrp with the -mdisk parameter and enter the IDs or names of the MDisks. This will add the MDisks immediately after the storage pool is created. Prior to the creation of the storage pool, enter the svcinfo lsmdisk command as shown in Example 9-18. This lists all of the available MDisks that are seen by the SVC cluster.
Example 9-18 Listing available MDisks

IBM_2145:ITSO-CLS5:admin>svcinfo lsmdisk -filtervalue controller_name=ITSO-4700 -delim : id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_nam e:UID:tier 43:mdisk43:online:managed:1:ITSO-Storage_Pool-Single_Tier:66.8GB:0000000000000000: ITSO-4700:600a0b8000510b8a000003f54b1fc84b00000000000000000000000000000000:generic _hdd 61:mdisk61:online:managed:0:ITSO-Storage_Pool-Multi_Tier:66.8GB:0000000000000001:I TSO-4700:600a0b8000510f3a000003ee4b1fc8c900000000000000000000000000000000:generic_ hdd 73:mdisk73:online:unmanaged:::66.8GB:0000000000000002:ITSO-4700:600a0b8000510b8a00 0003f84b1fc8db00000000000000000000000000000000:generic_hdd 80:mdisk80:online:unmanaged:::66.8GB:0000000000000003:ITSO-4700:600a0b8000510f3a00 0003f34b1fc96700000000000000000000000000000000:generic_hdd Using the same command as before (svctask mkmdiskgrp) and knowing the MDisk IDs that we are using, we can add multiple MDisks to the storage pool at the same time. We now add the unmanaged MDisks to the storage pool that we created, as shown in Example 9-19.
Example 9-19 Creating a storage pool and adding available MDisks

IBM_2145:ITSO-CLS5:admin>svctask mkmdiskgrp -name STGPool_DS4700 -ext 512 -mdisk 73:80 MDisk Group, id [2], successfully created This command creates a storage pool called STGPool_DS4700. The extent size that is used within this group is 512 MB, and two MDisks (73 and 80) are added to the storage pool. Storage pool name: The -name and -mdisk parameters are optional. If you do not enter a -name, the default is MDiskgrpx, where x is the ID sequence number that is assigned by the SVC internally. If you do not enter the -mdisk parameter, an empty storage pool is created. If you want to provide a name, you can use letters A to Z, a to z, numbers 0 to 9, and the underscore. The name can be between one and 63 characters in length, but it cannot start with a number or the word MDiskgrp (because this prefix is reserved for SVC assignment only). By running the svcinfo lsmdisk command, you now see the MDisks as managed and as part of the STGPool_DS4700, as shown in Example 9-20 on page 449.


Example 9-20 svcinfo lsmdisk command

IBM_2145:ITSO-CLS5:admin>svcinfo lsmdisk -filtervalue controller_name=ITSO-4700 -delim : id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_nam e:UID:tier 43:mdisk43:online:managed:1:ITSO-Storage_Pool-Single_Tier:66.8GB:0000000000000000: ITSO-4700:600a0b8000510b8a000003f54b1fc84b00000000000000000000000000000000:generic _hdd 61:mdisk61:online:managed:0:ITSO-Storage_Pool-Multi_Tier:66.8GB:0000000000000001:I TSO-4700:600a0b8000510f3a000003ee4b1fc8c900000000000000000000000000000000:generic_ hdd 73:mdisk73:online:managed:2:STGPool_DS4700:66.8GB:0000000000000002:ITSO-4700:600a0 b8000510b8a000003f84b1fc8db00000000000000000000000000000000:generic_hdd 80:mdisk80:online:managed:2:STGPool_DS4700:66.8GB:0000000000000003:ITSO-4700:600a0 b8000510f3a000003f34b1fc96700000000000000000000000000000000:generic_hdd At this point, you have completed the tasks that are required to create a new storage pool.

9.2.12 Viewing storage pool information


Use the svcinfo lsmdiskgrp command, as shown in Example 9-21, to display information about the storage pools that are defined in the SVC.
Example 9-21 svcinfo lsmdiskgrp command

IBM_2145:ITSO-CLS5:admin>svcinfo lsmdiskgrp -delim : id:name:status:mdisk_count:vdisk_count:capacity:extent_size:free_capacity:virtual_ capacity:used_capacity:real_capacity:overallocation:warning:easy_tier:easy_tier_st atus 0:ITSO-Storage_Pool-Multi_Tier:online:2:5:200.50GB:256:150.50GB:50.00GB:50.00GB:50 .00GB:24:0:auto:active 1:ITSO-Storage_Pool-Single_Tier:online:1:2:66.25GB:256:46.25GB:20.00GB:20.00GB:20. 00GB:30:0:on:active 2:STGPool_DS4700:online:2:0:132.50GB:512:132.50GB:0.00MB:0.00MB:0.00MB:0:0:auto:in active

9.2.13 Renaming a storage pool


Use the svctask chmdiskgrp command to change the name of a storage pool. To verify the change, run the svcinfo lsmdiskgrp command. Example 9-22 shows both of these commands.
Example 9-22 svctask chmdiskgrp command

IBM_2145:ITSO-CLS5:admin>svctask chmdiskgrp -name STGPool_DS4700_new 2 IBM_2145:ITSO-CLS5:admin>svcinfo lsmdiskgrp -delim : id:name:status:mdisk_count:vdisk_count:capacity:extent_size:free_capacity:virtual_ capacity:used_capacity:real_capacity:overallocation:warning:easy_tier:easy_tier_st atus 0:ITSO-Storage_Pool-Multi_Tier:online:2:5:200.50GB:256:150.50GB:50.00GB:50.00GB:50 .00GB:24:0:auto:active 1:ITSO-Storage_Pool-Single_Tier:online:1:2:66.25GB:256:46.25GB:20.00GB:20.00GB:20. 00GB:30:0:on:active


2:STGPool_DS4700_new:online:2:0:132.50GB:512:132.50GB:0.00MB:0.00MB:0.00MB:0:0:aut o:inactive This command renamed the storage pool STGPool_DS4700 shown in Example 9-21 on page 449 to STGPool_DS4700_new as shown in Example 9-22 on page 449. Changing the storage pool: The chmdiskgrp command specifies the new name first. You can use letters A to Z, a to z, numbers 0 to 9, the dash (-), and the underscore (_). The new name can be between one and 63 characters in length. However, the new name cannot start with a number, dash, or the word mdiskgrp (because this prefix is reserved for SVC assignment only).

9.2.14 Deleting a storage pool


Use the svctask rmmdiskgrp command to remove a storage pool from the SVC cluster configuration (Example 9-23).
Example 9-23 svctask rmmdiskgrp

IBM_2145:ITSO-CLS5:admin>svctask rmmdiskgrp STGPool_DS4700_new

This command removes storage pool STGPool_DS4700_new from the SVC cluster configuration. Removing a storage pool from the SVC cluster configuration: If there are MDisks within the storage pool, you must use the -force flag to remove the storage pool from the SVC cluster configuration, for example: svctask rmmdiskgrp STGPool_DS4700_new -force Ensure that you definitely want to use this flag, because it destroys all mapping information and data held on the volumes, which cannot be recovered.

9.2.15 Removing MDisks from a storage pool


Use the svctask rmmdisk command to remove an MDisk from a storage pool (Example 9-24).
Example 9-24 svctask rmmdisk command

IBM_2145:ITSO-CLS1:admin>svctask rmmdisk -mdisk 80 -force 2

This command removes the MDisk with ID 80 from the storage pool with ID 2. The -force flag is set because there are volumes using this storage pool.

Sufficient space: The removal only takes place if there is sufficient space to migrate the volume data to other extents on other MDisks that remain in the storage pool. After you remove the MDisk from the storage pool, it takes time to change the mode from managed to unmanaged, depending on the size of the MDisk you are removing.


9.3 Working with hosts


In this section we explain the tasks that you can perform at a host level. When we create a host in our SVC cluster, we need to define the connection method. Starting with SVC 5.1, we can now define our host as iSCSI-attached or FC-attached.

9.3.1 Creating a Fibre Channel-attached host


In the following sections we illustrate how to create an FC-attached host under various circumstances.

Host is powered on, connected, and zoned to the SVC


When you create your host on the SVC, it is good practice to check whether the host bus adapter (HBA) worldwide port names (WWPNs) of the server are visible to the SVC. By doing that, you ensure that zoning is done and that the correct WWPN will be used. Issue the svcinfo lshbaportcandidate command, as shown in Example 9-25.
Example 9-25 svcinfo lshbaportcandidate command

IBM_2145:ITSO-CLS5:admin>svcinfo lshbaportcandidate id 210000E08B89C1CD 210000E08B054CAA After you know that the WWPNs that are displayed match your host (use host or SAN switch utilities to verify), use the svctask mkhost command to create your host. Name: If you do not provide the -name parameter, the SVC automatically generates the name hostx (where x is the ID sequence number that is assigned by the SVC internally). You can use the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the underscore (_). The name can be between one and 63 characters in length. However, the name cannot start with a number, dash, or the word host (because this prefix is reserved for SVC assignment only). The command to create a host is shown in Example 9-26.
Example 9-26 svctask mkhost

IBM_2145:ITSO-CLS5:admin>svctask mkhost -name Almaden -hbawwpn 210000E08B89C1CD:210000E08B054CAA Host, id [0], successfully created This command creates a host called Almaden using WWPN 21:00:00:E0:8B:89:C1:CD and 21:00:00:E0:8B:05:4C:AA. Ports: You can define from one up to eight ports per host, or you can use the addport command, which we show in 9.3.5, Adding ports to a defined host on page 455.


Host is not powered on or not connected to the SAN


If you want to create a host on the SVC without seeing your target WWPN by using the svcinfo lshbaportcandidate command, add the -force flag to your mkhost command, as shown in Example 9-27. This option is more open to human error than if you choose the WWPN from a list, but it is typically used when many host definitions are created at the same time, such as through a script. In this case, you can type the WWPN of your HBA or HBAs and use the -force flag to create the host, regardless of whether they are connected, as shown in Example 9-27.
Example 9-27 mkhost -force

IBM_2145:ITSO-CLS5:admin>svctask mkhost -name Almaden -hbawwpn 210000E08B89C1CD:210000E08B054CAA -force Host, id [0], successfully created This command forces the creation of a host called Almaden using WWPN 210000E08B89C1CD:210000E08B054CAA. Note: WWPNs are not case sensitive in the CLI.

9.3.2 Creating an iSCSI-attached host


Now we can create host definitions for a host that is not connected to the SAN but that has LAN access to our SVC nodes. Before we create the host definition, we configure our SVC clusters to use the new iSCSI connection method. We describe additional information about configuring your nodes to use iSCSI in 9.8.4, iSCSI configuration on page 488. The iSCSI functionality allows the host to access volumes through the SVC without being attached to the SAN. Back-end storage and node-to-node communication still need the FC network to communicate, but the host does not necessarily need to be connected to the SAN. When we create a host that is going to use iSCSI as a communication method, iSCSI initiator software must be installed on the host to initiate the communication between the SVC and the host. This installation creates an iSCSI qualified name (IQN) identifier that is needed before we create our host. Before we start, we check our server's IQN address. We are running Windows Server 2008. We select Start -> Programs -> Administrative Tools, and we select iSCSI Initiator. In our example, our IQN, as shown in Figure 9-1 on page 453, is: iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com


Figure 9-1 IQN from the iSCSI initiator tool

We create the host by issuing the mkhost command, as shown in Example 9-28. When the command completes successfully, we display our newly created host. It is important to know that when the host is initially configured, the default authentication method is set to no authentication and no Challenge Handshake Authentication Protocol (CHAP) secret is set. To set a CHAP secret for authenticating the iSCSI host with the SVC cluster, use the svctask chhost command with the chapsecret parameter.
Example 9-28 mkhost command

IBM_2145:ITSO-CLS1:admin>svctask mkhost -name Baldur -iogrp 0 -iscsiname iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com Host, id [4], successfully created IBM_2145:ITSO-CLS1:admin>svcinfo lshost 4 id 4 name Baldur port_count 1 type generic mask 1111 iogrp_count 1 iscsiname iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com node_logged_in_count 0 state offline We have now created our host definition. We map a volume to our new iSCSI server, as shown in Example 9-29 on page 454. We have already created the volume, as shown in 9.5.1, Creating a volume on page 458. In our scenario, our volume has ID 21 and the host name is Baldur. We map it to our iSCSI host.

Example 9-29 Mapping a volume to the iSCSI host

IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Baldur 21 Virtual Disk to Host map, id [0], successfully created After the volume has been mapped to the host, we display the host information again, as shown in Example 9-30.
Example 9-30 svcinfo lshost

IBM_2145:ITSO-CLS1:admin>svcinfo lshost 4 id 4 name Baldur port_count 1 type generic mask 1111 iogrp_count 1 iscsiname iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com node_logged_in_count 1 state online Note: FC hosts and iSCSI hosts are handled in the same way operationally after they have been created. If you need to display a CHAP secret for an already defined server, use the svcinfo lsiscsiauth command. The lsiscsiauth command lists the Challenge Handshake Authentication Protocol (CHAP) secret configured for authenticating an entity to the SAN Volume Controller cluster.
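As an illustration, assuming the host Baldur shown previously and a CHAP secret value of mysecret (a placeholder value used only for this sketch), the secret can be set with the chapsecret parameter and then displayed with commands similar to these:

svctask chhost -chapsecret mysecret Baldur
svcinfo lsiscsiauth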

9.3.3 Modifying a host


Use the svctask chhost command to change the name of a host. To verify the change, run the svcinfo lshost command. Example 9-31 shows both of these commands.
Example 9-31 svctask chhost command

IBM_2145:ITSO-CLS1:admin>svctask chhost -name Angola Guinea
IBM_2145:ITSO-CLS1:admin>svcinfo lshost
id name    port_count iogrp_count
0  Palau   2          4
1  Nile    2          1
2  Kanaga  2          1
3  Siam    2          2
4  Angola  1          4

This command renamed the host from Guinea to Angola. Note: The chhost command specifies the new name first. You can use letters A to Z and a to z, numbers 0 to 9, the dash (-), and the underscore (_). The new name can be between one and 63 characters in length. However, it cannot start with a number, dash, or the word host (because this prefix is reserved for SVC assignment only).


Note: If you use Hewlett-Packard UNIX (HP-UX), you use the -type option. See IBM System Storage Open Software Family SAN Volume Controller: Host Attachment Guide, SC26-7563, for more information about the hosts that require the -type parameter.

9.3.4 Deleting a host


Use the svctask rmhost command to delete a host from the SVC configuration. If your host is still mapped to volumes and you use the -force flag, the host and all of the mappings with it are deleted. The volumes are not deleted, only the mappings to them. The command that is shown in Example 9-32 deletes the host called Angola from the SVC configuration.
Example 9-32 svctask rmhost Angola

IBM_2145:ITSO-CLS1:admin>svctask rmhost Angola

Deleting a host: If there are any volumes assigned to the host, you must use the -force flag, for example: svctask rmhost -force Angola.

9.3.5 Adding ports to a defined host


If you add an HBA or a network interface controller (NIC) to a server that is already defined within the SVC, you can use the svctask addhostport command to add the new port definitions to your host configuration. If your host is currently connected through SAN with FC and if the WWPN is already zoned to the SVC cluster, issue the svcinfo lshbaportcandidate command, as shown in Example 9-33, to compare with the information that you have from the server administrator.
Example 9-33 svcinfo lshbaportcandidate

IBM_2145:ITSO-CLS1:admin>svcinfo lshbaportcandidate id 210000E08B054CAA If the WWPN matches your information (use host or SAN switch utilities to verify), use the svctask addhostport command to add the port to the host. Example 9-34 shows the command to add a host port.
Example 9-34 svctask addhostport

IBM_2145:ITSO-CLS1:admin>svctask addhostport -hbawwpn 210000E08B054CAA Palau

This command adds the WWPN of 210000E08B054CAA to the Palau host.

Adding multiple ports: You can add multiple ports at one time by using a colon (:) as the separator between WWPNs, for example:
svctask addhostport -hbawwpn 210000E08B054CAA:210000E08B89C1CD Palau


If the new HBA is not connected or zoned, the svcinfo lshbaportcandidate command does not display your WWPN. In this case, you can manually type the WWPN of your HBA or HBAs and use the -force flag to create the host, as shown in Example 9-35.
Example 9-35 svctask addhostport

IBM_2145:ITSO-CLS1:admin>svctask addhostport -hbawwpn 210000E08B054CAA -force Palau This command forces the addition of the WWPN named 210000E08B054CAA to the host called Palau. WWPNs: WWPNs are not case sensitive within the CLI. If you run the svcinfo lshost command again, you see your host with an updated port count of 2 in Example 9-36.
Example 9-36 svcinfo lshost command: Port count

IBM_2145:ITSO-CLS1:admin>svcinfo lshost
id name       port_count iogrp_count
0  Palau      2          4
1  ITSO_W2008 1          4
2  Thor       3          1
3  Frigg      1          1
4  Baldur     1          1

If your host currently uses iSCSI as a connection method, you must have the new iSCSI IQN ID before you add the port. Unlike FC-attached hosts, you cannot check for available candidates with iSCSI. After you have acquired the additional iSCSI IQN, use the svctask addhostport command, as shown in Example 9-37.
Example 9-37 Adding an iSCSI port to an already configured host

IBM_2145:ITSO-CLS1:admin>svctask addhostport -iscsiname iqn.1991-05.com.microsoft:baldur 4

9.3.6 Deleting ports


If you make a mistake when adding a port, or if you remove an HBA from a server that is already defined within the SVC, you can use the svctask rmhostport command to remove WWPN definitions from an existing host. Before you remove the WWPN, be sure that it is the correct WWPN by issuing the svcinfo lshost command, as shown in Example 9-38.
Example 9-38 svcinfo lshost command

IBM_2145:ITSO-CLS1:admin>svcinfo lshost Palau id 0 name Palau port_count 2 type generic mask 1111


iogrp_count 4 WWPN 210000E08B054CAA node_logged_in_count 2 state active WWPN 210000E08B89C1CD node_logged_in_count 2 state offline When you know the WWPN or iSCSI IQN, use the svctask rmhostport command to delete a host port, as shown in Example 9-39.
Example 9-39 svctask rmhostport

For removing a WWPN:
IBM_2145:ITSO-CLS1:admin>svctask rmhostport -hbawwpn 210000E08B89C1CD Palau

And for removing an iSCSI IQN:
IBM_2145:ITSO-CLS1:admin>svctask rmhostport -iscsiname iqn.1991-05.com.microsoft:baldur Baldur

This command removes the WWPN of 210000E08B89C1CD from the Palau host and the iSCSI IQN iqn.1991-05.com.microsoft:baldur from the Baldur host.

Removing multiple ports: You can remove multiple ports at one time by using a colon (:) as the separator between the port names, for example:
svctask rmhostport -hbawwpn 210000E08B054CAA:210000E08B892BCD Angola

9.4 Working with the Ethernet port for iSCSI


This section details commands that are useful for setting, changing, and displaying the SVC Ethernet port configuration for iSCSI. Example 9-40 shows the lsportip command listing the iSCSI IP addresses assigned for each port on each node in the cluster.
Example 9-40 lsportip command

IBM_2145:ITSO-CLS5:admin>svcinfo lsportip id node_id node_name IP_address gateway IP_address_6 prefix_6 gateway_6 duplex state speed failover 1 1 node1 00:1a:64:95:2f:cc Full unconfigured 1Gb/s 1 1 node1 00:1a:64:95:2f:cc Full unconfigured 1Gb/s 2 1 node1 10.44.36.64 10.44.36.254 00:1a:64:95:2f:ce Full online 1Gb/s 2 1 node1 00:1a:64:95:2f:ce Full online 1Gb/s 1 2 node2 00:1a:64:95:3f:4c Full unconfigured 1Gb/s

mask MAC

no yes 255.255.255.0 no yes no


1 2 node2 00:1a:64:95:3f:4c Full 2 2 10.44.36.254 00:1a:64:95:3f:4e Full 2 2 node2 00:1a:64:95:3f:4e Full 1 3 node3 00:21:5e:41:53:18 Full 1 3 node3 00:21:5e:41:53:18 Full 2 3 10.44.36.254 00:21:5e:41:53:1a Full 2 3 node3 00:21:5e:41:53:1a Full 1 4 node4 00:21:5e:41:56:8c Full 1 4 node4 00:21:5e:41:56:8c Full 2 4 10.44.36.254 00:21:5e:41:56:8e Full 2 4 node4 00:21:5e:41:56:8e Full

node2

unconfigured 1Gb/s 10.44.36.65 online online unconfigured 1Gb/s 1Gb/s 1Gb/s

yes 255.255.255.0 no yes no yes 255.255.255.0 no yes no yes 255.255.255.0 no yes

node3

unconfigured 1Gb/s 10.44.36.60 online online unconfigured 1Gb/s 1Gb/s 1Gb/s

node4

unconfigured 1Gb/s 10.44.36.63 online online 1Gb/s 1Gb/s

Example 9-41 shows how the cfgportip command assigns an IP address to each node Ethernet port for iSCSI I/O.
Example 9-41 cfgportip command

IBM_2145:ITSO-CLS5:admin>svctask cfgportip -node 4 -ip 10.44.36.63 -gw 10.44.36.254 -mask 255.255.255.0 2
IBM_2145:ITSO-CLS5:admin>svctask cfgportip -node 1 -ip 10.44.36.64 -gw 10.44.36.254 -mask 255.255.255.0 2
IBM_2145:ITSO-CLS5:admin>svctask cfgportip -node 2 -ip 10.44.36.65 -gw 10.44.36.254 -mask 255.255.255.0 2

9.5 Working with volumes


This section details the various configuration and administration tasks that can be performed on volumes within the SVC environment.

9.5.1 Creating a volume


The mkvdisk command creates sequential, striped, or image mode volume objects. When they are mapped to a host object, these objects are seen as disk drives with which the host can perform I/O operations. When creating a volume, you must enter several parameters at the CLI. There are both mandatory and optional parameters. See the full command string and detailed information in the Command-Line Interface User's Guide, SC27-2287.

Creating an image mode disk: If you do not specify the -size parameter when you create an image mode disk, the entire MDisk capacity is used.

When you are ready to create a volume, you must know the following information before you start:
- In which storage pool the volume is going to have its extents
- From which I/O Group the volume will be accessed
- Which SVC node will be the preferred node for the volume
- Size of the volume
- Name of the volume
- Type of the volume
- Whether this volume will be managed by Easy Tier to optimize its performance

When you are ready to create your striped volume, use the svctask mkvdisk command (we discuss sequential and image mode volumes later). In Example 9-42, this command creates a 10 GB striped volume with volume ID 7 within the storage pool STGPool_DS4700 and assigns it to the io_grp0 I/O Group. Its preferred node will be node 1.
Example 9-42 svctask mkvdisk command

IBM_2145:ITSO-CLS5:admin>svctask mkvdisk -mdiskgrp STGPool_DS4700 -iogrp io_grp0 -node 1 -size 10 -unit gb -name Tiger Virtual Disk, id [7], successfully created

To verify the results use the svcinfo lsvdisk command, as shown in Example 9-43.
Example 9-43 svcinfo lsvdisk command

IBM_2145:ITSO-CLS5:admin>svcinfo lsvdisk 7 id 7 name Tiger IO_group_id 0 IO_group_name io_grp0 status online mdisk_grp_id 2 mdisk_grp_name STGPool_DS4700 capacity 10.00GB type striped formatted no mdisk_id mdisk_name FC_id FC_name RC_id RC_name vdisk_UID 6005076801820000100000000000000D throttling 0 preferred_node_id 1 fast_write_state empty cache readwrite udid fc_map_count 0 sync_rate 50


copy_count 1 se_copy_count 0 copy_id 0 status online sync yes primary yes mdisk_grp_id 2 mdisk_grp_name STGPool_DS4700 type striped mdisk_id mdisk_name fast_write_state empty used_capacity 10.00GB real_capacity 10.00GB free_capacity 0.00MB overallocation 100 autoexpand warning grainsize se_copy no easy_tier on easy_tier_status inactive tier generic_ssd tier_capacity 0.00MB tier generic_hdd tier_capacity 10.00GB At this point, you have completed the required tasks to create a volume.

9.5.2 Volume information


Use the svcinfo lsvdisk command to display summary information about all volumes defined within the SVC environment. To display more detailed information about a specific volume, run the command again and append the volume name parameter or the volume ID. Example 9-44 shows both of these commands.
Example 9-44 svcinfo lsvdisk command

IBM_2145:ITSO-CLS5:admin>svcinfo lsvdisk -delim : id:name:IO_group_id:IO_group_name:status:mdisk_grp_id:mdisk_grp_name:capacity:type :FC_id:FC_name:RC_id:RC_name:vdisk_UID:fc_map_count:copy_count:fast_write_state:se _copy_count 0:Volume_measured_only:0:io_grp0:online:1:ITSO-Storage_Pool-Single_Tier:10.00GB:st riped:::::60050768018200001000000000000003:0:1:empty:0 1:Volume_EasyTier_active:0:io_grp0:online:0:ITSO-Storage_Pool-Multi_Tier:10.00GB:s triped:::::60050768018200001000000000000005:0:1:empty:0 2:Volume_EasyTier_active1:0:io_grp0:online:0:ITSO-Storage_Pool-Multi_Tier:10.00GB: striped:::::60050768018200001000000000000009:0:1:empty:0 3:Volume_EasyTier_active2:0:io_grp0:online:0:ITSO-Storage_Pool-Multi_Tier:10.00GB: striped:::::6005076801820000100000000000000A:0:1:empty:0 4:Volume_EasyTier_active3:0:io_grp0:online:0:ITSO-Storage_Pool-Multi_Tier:10.00GB: striped:::::60050768018200001000000000000008:0:1:empty:0


5:Volume_not_measured:0:io_grp0:online:1:ITSO-Storage_Pool-Single_Tier:10.00GB:str iped:::::6005076801820000100000000000000B:0:1:empty:0 6:Volume_EasyTier_active4:0:io_grp0:online:0:ITSO-Storage_Pool-Multi_Tier:10.00GB: striped:::::6005076801820000100000000000000C:0:1:empty:0 7:Tiger:0:io_grp0:online:2:STGPool_DS4700:10.00GB:striped:::::60050768018200001000 00000000000D:0:1:empty:0 IBM_2145:ITSO-CLS4:admin>svcinfo lsvdisk Volume_measured_only id 0 name Volume_measured_only IO_group_id 0 IO_group_name io_grp0 status online mdisk_grp_id 1 mdisk_grp_name ITSO-Storage_Pool-Single_Tier capacity 10.00GB type striped formatted no mdisk_id mdisk_name FC_id FC_name RC_id RC_name vdisk_UID 60050768018281BEE000000000000000 throttling 0 preferred_node_id 3 fast_write_state empty cache readwrite udid fc_map_count 0 sync_rate 50 copy_count 1 se_copy_count 0 copy_id 0 status online sync yes primary yes mdisk_grp_id 1 mdisk_grp_name ITSO-Storage_Pool-Single_Tier type striped mdisk_id mdisk_name fast_write_state empty used_capacity 10.00GB real_capacity 10.00GB free_capacity 0.00MB overallocation 100 autoexpand warning grainsize se_copy no easy_tier on easy_tier_status measured tier generic_ssd


tier_capacity 0.00MB
tier generic_hdd
tier_capacity 10.00GB

9.5.3 Creating a thin-provisioned volume


Example 9-45 shows how to create a thin-provisioned volume. In addition to the normal parameters, you must use the following parameters:
-rsize       This parameter makes the volume a thin-provisioned volume; otherwise, the volume is fully allocated.
-autoexpand  This parameter specifies that thin-provisioned volume copies automatically expand their real capacities by allocating new extents from their storage pool.
-grainsize   This parameter sets the grain size (in KB) for a thin-provisioned volume.

Example 9-45 Usage of the command svctask mkvdisk

IBM_2145:ITSO-CLS5:admin>svctask mkvdisk -mdiskgrp STGPool_DS4700 -iogrp 0 -vtype striped -size 10 -unit gb -rsize 50% -autoexpand -grainsize 32 Virtual Disk, id [8], successfully created

This command creates a space-efficient 10 GB volume. The volume belongs to the storage pool named STGPool_DS4700 and is owned by the io_grp0 I/O Group. The real capacity automatically expands until the volume size of 10 GB is reached. The grain size is set to 32 KB, which is the default.

Disk size: When using the -rsize parameter, you have the following options: disk_size, disk_size_percentage, and auto. Specify the disk_size_percentage value using an integer, or an integer immediately followed by the percent (%) symbol. Specify the units for a disk_size integer using the -unit parameter; the default is MB. The -rsize value can be greater than, equal to, or less than the size of the volume. The auto option creates a volume copy that uses the entire size of the MDisk. If you specify the -rsize auto option, you must also specify the -vtype image option. An entry of 1 GB uses 1024 MB.

9.5.4 Creating a volume in image mode


This virtualization type allows an image mode volume to be created when an MDisk already has data on it, perhaps from a previrtualized subsystem. When an image mode volume is created, it directly corresponds to the previously unmanaged MDisk from which it was created. Therefore, with the exception of thin-provisioned image mode volume, the volumes logical block address (LBA) x equals MDisk LBA x. You can use this command to bring a nonvirtualized disk under the control of the cluster. After it is under the control of the cluster, you can migrate the volume from the single managed disk.


As soon as the first MDisk extent has been migrated, the volume is no longer an image mode volume. You can add an image mode volume to an already populated storage pool with other types of volumes, such as a striped or sequential volume.

Size: An image mode volume must be at least 512 bytes (the capacity cannot be 0). That is, the minimum size that can be specified for an image mode volume must be the same as the storage pool extent size to which it is added, with a minimum of 16 MB.

You must use the -mdisk parameter to specify an MDisk that has a mode of unmanaged. The -fmtdisk parameter cannot be used to create an image mode volume.

Capacity: If you create a mirrored volume from two image mode MDisks without specifying a -capacity value, the capacity of the resulting volume is the smaller of the two MDisks, and the remaining space on the larger MDisk is inaccessible. If you do not specify the -size parameter when you create an image mode disk, the entire MDisk capacity is used.

Use the svctask mkvdisk command to create an image mode volume, as shown in Example 9-46.
Example 9-46 svctask mkvdisk (image mode)

IBM_2145:ITSO-CLS5:admin>svctask mkvdisk -mdiskgrp STGPool_DS4700 -iogrp 0 -mdisk mdisk93 -vtype image -name Image_Volume_A Virtual Disk, id [9], successfully created

This command creates an image mode volume called Image_Volume_A using the mdisk93 MDisk. The volume belongs to the storage pool STGPool_DS4700 and is owned by the io_grp0 I/O Group. If we run the svcinfo lsvdisk command again, notice that the volume named Image_Volume_A has a type of image, as shown in Example 9-47.
Example 9-47 svcinfo lsvdisk

IBM_2145:ITSO-CLS5:admin>svcinfo lsvdisk -filtervalue type=image
id name           IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type  FC_id FC_name RC_id RC_name vdisk_UID                        fc_map_count copy_count fast_write_state se_copy_count
9  Image_Volume_A 0           io_grp0       online 2            STGPool_DS4700 66.80GB  image                             6005076801820000100000000000000F 0            1          empty            0

9.5.5 Adding a mirrored volume copy


You can create a mirrored copy of a volume, which keeps a volume accessible even when the MDisk on which it depends has become unavailable. You can create a copy of a volume either on separate storage pools or by creating an image mode copy of the volume. Copies increase the availability of data; however, they are not separate objects. You can only create or change mirrored copies from the volume.


In addition, you can use volume mirroring as an alternative method of migrating volumes between storage pools. For example, if you have a non-mirrored volume in one storage pool and want to migrate that volume to another storage pool, you can add a new copy of the volume and specify the second storage pool. After the copies are synchronized, you can delete the copy on the first storage pool. The volume is copied to the second storage pool while remaining online during the copy. To create a mirrored copy of a volume, use the addvdiskcopy command. This command adds a copy of the chosen volume to the selected storage pool, which changes a non-mirrored volume into a mirrored volume. In the following scenario, we show creating a mirrored volume from one storage pool to another storage pool. As you can see in Example 9-48, the volume has a copy with copy_id 0.
Example 9-48 svcinfo lsvdisk

IBM_2145:ITSO-CLS4:admin>svcinfo lsvdisk Volume_no_mirror
id 2
name Volume_no_mirror
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 2
mdisk_grp_name STGPool_DS4700_1
capacity 1.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018281BEE000000000000004
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 2
mdisk_grp_name STGPool_DS4700_1
type striped
mdisk_id
mdisk_name

fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 1.00GB

In Example 9-49, we add the volume copy mirror by using the svctask addvdiskcopy command.
Example 9-49 svctask addvdiskcopy

IBM_2145:ITSO-CLS4:admin>svctask addvdiskcopy -mdiskgrp STGPool_DS4700_2 -vtype striped -unit gb Volume_no_mirror
Vdisk [2] copy [1] successfully created

During the synchronization process, you can see the status by using the svcinfo lsvdisksyncprogress command. As shown in Example 9-50, the first time that the status is checked, the synchronization progress is at 26%, and the estimated completion time is 11:12:44. The second time that the command is run, the progress status is at 100%, and the synchronization is complete.
Example 9-50 Synchronization

IBM_2145:ITSO-CLS4:admin>svcinfo lsvdisksyncprogress
vdisk_id vdisk_name       copy_id progress estimated_completion_time
2        Volume_no_mirror 1       26       100920111244
IBM_2145:ITSO-CLS4:admin>svcinfo lsvdisksyncprogress
vdisk_id vdisk_name       copy_id progress estimated_completion_time
2        Volume_no_mirror 1       100

As you can see in Example 9-51, the new mirrored volume copy (copy_id 1) has been added and can be seen by using the svcinfo lsvdisk command.
Example 9-51 svcinfo lsvdisk

IBM_2145:ITSO-CLS4:admin>svcinfo lsvdisk 2
id 2
name Volume_no_mirror
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id many
mdisk_grp_name many
capacity 1.00GB
type many
formatted no


mdisk_id many
mdisk_name many
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018281BEE000000000000004
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 2
se_copy_count 0

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 2
mdisk_grp_name STGPool_DS4700_1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 1.00GB

copy_id 1
status online
sync yes
primary no
mdisk_grp_id 3
mdisk_grp_name STGPool_DS4700_2
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.00MB


overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 1.00GB

While adding a volume copy mirror, you can define the new copy with parameters that differ from the existing copy. Therefore, you can define a thin-provisioned volume copy for a non-thin-provisioned volume and vice versa, which is one way to migrate a non-thin-provisioned volume to a thin-provisioned volume.

Note: To change the parameters of a volume copy mirror, you must delete the volume copy and redefine it with the new values.

Now we can change the name of the volume just mirrored from Volume_no_mirror to Volume_mirrored, as shown in Example 9-52.
Example 9-52 volume name changing

IBM_2145:ITSO-CLS4:admin>svctask chvdisk -name Volume_mirrored Volume_no_mirror
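When you use volume mirroring as the migration method described earlier in this section, the final step is to remove the original copy after the new copy is synchronized. The following invocation is a minimal sketch of that step, assuming that copy 0 is the original copy in the first storage pool and is no longer needed:

IBM_2145:ITSO-CLS4:admin>svctask rmvdiskcopy -copy 0 Volume_mirrored

After the command completes, only copy 1 in the second storage pool remains, and the volume has effectively been migrated between pools while staying online.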

9.5.6 Splitting a mirrored volume


The splitvdiskcopy command creates a new volume in the specified I/O Group from a copy of the specified volume. If the copy that you are splitting is not synchronized, you must use the -force parameter. The command fails if you are attempting to remove the only synchronized copy. To avoid this failure, wait for the copy to synchronize, or split the unsynchronized copy from the volume by using the -force parameter. You can run the command when either volume copy is offline. Example 9-53 shows the svctask splitvdiskcopy command, which is used to split a mirrored volume. It creates a new volume, Volume_new from the copy of Volume_mirrored.
Example 9-53 Split volume

IBM_2145:ITSO-CLS4:admin>svctask splitvdiskcopy -copy 1 -iogrp 0 -name Volume_new Volume_mirrored
Virtual Disk, id [3], successfully created

As you can see in Example 9-54, the new volume named Volume_new has been created as an independent volume.
Example 9-54 svcinfo lsvdisk

IBM_2145:ITSO-CLS4:admin>svcinfo lsvdisk Volume_new
id 3
name Volume_new
IO_group_id 0
IO_group_name io_grp0


status online
mdisk_grp_id 3
mdisk_grp_name STGPool_DS4700_2
capacity 1.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018281BEE000000000000005
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 3
mdisk_grp_name STGPool_DS4700_2
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 1.00GB

By issuing the command in Example 9-53 on page 467, Volume_mirrored no longer has its mirrored copy; that copy has been split off as the new, independent volume Volume_new.


9.5.7 Modifying a volume


Executing the svctask chvdisk command modifies a single property of a volume. Only one property can be modified at a time, so changing the name and modifying the I/O Group require two invocations of the command.

You can specify a new name or label. The new name can be used subsequently to reference the volume. The I/O Group with which this volume is associated can be changed. Note that this requires a flush of the cache within the nodes in the current I/O Group to ensure that all data is written to disk. I/O must be suspended at the host level before performing this operation.

Tips: If the volume has a mapping to any hosts, it is not possible to move the volume to an I/O Group that does not include any of those hosts. This operation fails if there is not enough space to allocate bitmaps for a mirrored volume in the target I/O Group. If the -force parameter is used and the cluster is unable to destage all write data from the cache, the contents of the volume are corrupted by the loss of the cached data. If the -force parameter is used to move a volume that has out-of-sync copies, a full resynchronization is required.
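As a sketch of the two separate invocations that this section describes, you might first rename a volume and then move it to another I/O Group. The volume and I/O Group names here are illustrative only, and host I/O must be quiesced before the second command:

IBM_2145:ITSO-CLS4:admin>svctask chvdisk -name Volume_DB_new Volume_DB
IBM_2145:ITSO-CLS4:admin>svctask chvdisk -iogrp io_grp1 Volume_DB_new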

9.5.8 I/O governing


You can set a limit on the number of I/O operations accepted for a volume. The limit is set in terms of I/Os per second or MB per second. By default, no I/O governing rate is set when a volume is created.

Base the choice between I/O and MB as the I/O governing throttle on the disk access profile of the application. Database applications generally issue large amounts of I/O, but they only transfer a relatively small amount of data. In this case, setting an I/O governing throttle based on MB per second does not achieve much; it is better to use an I/Os per second throttle. At the other extreme, a streaming video application generally issues a small amount of I/O, but transfers large amounts of data. In contrast to the database example, setting an I/O governing throttle based on I/Os per second does not achieve much, so it is better to use an MB per second throttle.

I/O governing rate: An I/O governing rate of 0 (displayed as throttling in the CLI output of the svcinfo lsvdisk command) does not mean that zero I/Os per second (or MB per second) can be achieved. It means that no throttle is set.

An example of the chvdisk command is shown in Example 9-55.
Example 9-55 svctask chvdisk

IBM_2145:ITSO-CLS4:admin>svctask chvdisk -rate 20 -unitmb vdisk7
IBM_2145:ITSO-CLS4:admin>svctask chvdisk -warning 85% vdisk7


New name: The chvdisk command specifies the new name first. The name can consist of letters A to Z and a to z, numbers 0 to 9, the dash (-), and the underscore (_). It can be between one and 63 characters in length. However, it cannot start with a number, the dash, or the word vdisk (because this prefix is reserved for SVC assignment only).

The first command changes the volume throttling of vdisk7 to 20 MBps. The second command changes the thin-provisioned volume warning to 85%. To verify the changes, issue the svcinfo lsvdisk command as shown in Example 9-56.
Example 9-56 svcinfo lsvdisk command: Verifying throttling

IBM_2145:ITSO-CLS4:admin>svcinfo lsvdisk vdisk7
id 7
name vdisk7
IO_group_id 1
IO_group_name io_grp1
status online
mdisk_grp_id 0
mdisk_grp_name STGPool_DS4700
capacity 10.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF280000000000000A
virtual_disk_throttling (MB) 20
preferred_node_id 6
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name STGPool_DS4700
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 5.02GB
free_capacity 5.02GB
overallocation 199
autoexpand on
warning 85
grainsize 32
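To govern a volume by I/Os per second instead of MB per second, omit the -unitmb parameter so that the -rate value is interpreted as an I/O rate. A minimal sketch, reusing the volume from the previous example and an arbitrary limit of 2000 I/Os per second:

IBM_2145:ITSO-CLS4:admin>svctask chvdisk -rate 2000 vdisk7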


9.5.9 Deleting a volume


When executing this command on an existing fully managed mode volume, any data that remained on it will be lost. The extents that made up this volume will be returned to the pool of free extents available in the storage pool.

If any Remote Copy, FlashCopy, or host mappings still exist for this volume, the delete fails unless the -force flag is specified. This flag ensures the deletion of the volume and any volume to host mappings and copy mappings.

If the volume is currently the subject of a migrate to image mode, the delete fails unless the -force flag is specified. This flag halts the migration and then deletes the volume.

If the command succeeds (without the -force flag) for an image mode volume, the underlying back-end controller logical unit will be consistent with the data that a host might previously have read from the image mode volume. That is, all fast write data has been flushed to the underlying LUN. If the -force flag is used, there is no guarantee.

If there is any non-destaged data in the fast write cache for this volume, the deletion of the volume fails unless the -force flag is specified, in which case the non-destaged data in the fast write cache is discarded.

Use the svctask rmvdisk command to delete a volume from your SVC configuration, as shown in Example 9-57.
Example 9-57 svctask rmvdisk

IBM_2145:ITSO-CLS4:admin>svctask rmvdisk volume_A

This command deletes the volume_A volume from the SVC configuration. If the volume is assigned to a host, you need to use the -force flag to delete the volume (Example 9-58).
Example 9-58 svctask rmvdisk (-force)

IBM_2145:ITSO-CLS4:admin>svctask rmvdisk -force volume_A

9.5.10 Expanding a volume


Expanding a volume presents a larger capacity disk to your operating system. Although this expansion can be easily performed using the SVC, you must ensure that your operating systems support expansion before using this function. Assuming your operating systems support it, you can use the svctask expandvdisksize command to increase the capacity of a given volume. Example 9-59 shows a sample of this command.
Example 9-59 svctask expandvdisksize

IBM_2145:ITSO-CLS4:admin>svctask expandvdisksize -size 5 -unit gb volume_C

This command expands the volume_C volume, which was 35 GB before, by another 5 GB to give it a total size of 40 GB. To expand a thin-provisioned volume, you can use the -rsize option, as shown in Example 9-60 on page 472. This command changes the real size of the volume_B volume to a real capacity of 55 GB. The capacity of the volume remains unchanged.


Example 9-60 svcinfo lsvdisk

IBM_2145:ITSO-CLS4:admin>svcinfo lsvdisk volume_B
id 1
name volume_B
capacity 100.0GB
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 50.00GB
free_capacity 50.00GB
overallocation 200
autoexpand off
warning 40
grainsize 32

IBM_2145:ITSO-CLS1:admin>svctask expandvdisksize -rsize 5 -unit gb volume_B

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk volume_B
id 1
name vdisk_B
capacity 100.0GB
mdisk_grp_name STGPool_DS4700
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 55.00GB
free_capacity 55.00GB
overallocation 181
autoexpand off
warning 40
grainsize 32

Important: If a volume is expanded, its type will become striped even if it was previously sequential or in image mode. If there are not enough extents to expand your volume to the specified size, you receive the following error message:
CMMVC5860E Ic_failed_vg_insufficient_virtual_extents

9.5.11 Assigning a volume to a host


Use the svctask mkvdiskhostmap command to map a volume to a host. When executed, this command creates a new mapping between the volume and the specified host, which essentially presents this volume to the host as though the disk was directly attached to the host. It is only after this command is executed that the host can perform I/O to the volume. Optionally, a SCSI LUN ID can be assigned to the mapping. When the HBA on the host scans for devices that are attached to it, it discovers all of the volumes that are mapped to its FC ports. When the devices are found, each one is allocated an identifier (SCSI LUN ID). For example, the first disk found is generally SCSI LUN 1, and so on. You can control the order in which the HBA discovers volumes by assigning the SCSI LUN ID as required.


If you do not specify a SCSI LUN ID, the cluster automatically assigns the next available SCSI LUN ID, given any mappings that already exist with that host.

Using the volume and host definition that we created in the previous sections, we assign volumes to hosts that are ready for their use. We use the svctask mkvdiskhostmap command (see Example 9-61).
Example 9-61 svctask mkvdiskhostmap

IBM_2145:ITSO-CLS4:admin>svctask mkvdiskhostmap -host Tiger volume_B
Virtual Disk to Host map, id [2], successfully created
IBM_2145:ITSO-CLS4:admin>svctask mkvdiskhostmap -host Tiger volume_C
Virtual Disk to Host map, id [1], successfully created

These commands assign volume_B and volume_C to the host Tiger, as shown in Example 9-62.
Example 9-62 svcinfo lshostvdiskmap -delim, command

IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap -delim ,
id,name,SCSI_id,vdisk_id,vdisk_name,wwpn,vdisk_UID
1,Tiger,2,1,volume_B,210000E08B892BCD,60050768018301BF2800000000000001
1,Tiger,1,2,volume_C,210000E08B892BCD,60050768018301BF2800000000000002

Assigning a specific LUN ID to a volume: The optional -scsi scsi_num parameter can help assign a specific LUN ID to a volume that is to be associated with a given host. The default (if nothing is specified) is to increment based on what is already assigned to the host.

Be aware that certain HBA device drivers stop when they find a gap in the SCSI LUN IDs. For example:
Volume 1 is mapped to Host 1 with SCSI LUN ID 1.
Volume 2 is mapped to Host 1 with SCSI LUN ID 2.
Volume 3 is mapped to Host 1 with SCSI LUN ID 4.
When the device driver scans the HBA, it might stop after discovering Volumes 1 and 2, because there is no SCSI LUN mapped with ID 3.

Important: Ensure that the SCSI LUN ID allocation is contiguous.

It is not possible to map a volume to a host more than one time at separate LUNs (Example 9-63).
Example 9-63 svctask mkvdiskhostmap

IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Siam volume_A
Virtual Disk to Host map, id [0], successfully created

This command maps the volume called volume_A to the host called Siam. At this point, you have completed all tasks that are required to assign a volume to an attached host.


9.5.12 Showing volumes to host mapping


Use the svcinfo lshostvdiskmap command to show which volumes are assigned to a specific host (Example 9-64).
Example 9-64 svcinfo lshostvdiskmap

IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap -delim , Siam
id,name,SCSI_id,vdisk_id,vdisk_name,wwpn,vdisk_UID
3,Siam,0,0,volume_A,210000E08B18FF8A,60050768018301BF280000000000000C

From this command, you can see that the host Siam has only one assigned volume called volume_A. The SCSI LUN ID is also shown, which is the ID by which the volume is presented to the host. If no host is specified, all defined host to volume mappings will be returned.

Specifying the flag before the host name: Although the -delim flag normally comes at the end of the command string, in this case, you must specify this flag before the host name. Otherwise, it returns the following message:
CMMVC6070E An invalid or duplicated parameter, unaccompanied argument, or incorrect argument sequence has been detected. Ensure that the input is as per the help.

9.5.13 Deleting a volume to host mapping


When deleting a volume mapping, you are not deleting the volume itself, only the connection from the host to the volume. If you mapped a volume to a host by mistake, or you simply want to reassign the volume to another host, use the svctask rmvdiskhostmap command to unmap a volume from a host (Example 9-65).
Example 9-65 svctask rmvdiskhostmap

IBM_2145:ITSO-CLS4:admin>svctask rmvdiskhostmap -host Tiger volume_D

This command unmaps the volume called volume_D from the host called Tiger.

9.5.14 Migrating a volume


From time to time, you might want to migrate volumes from one set of MDisks to another set of MDisks to decommission an old disk subsystem, to have better balanced performance across your virtualized environment, or simply to migrate data into the SVC environment transparently using image mode. You can obtain further information about migration in Chapter 6, Data migration on page 227.

Important: After migration is started, it continues until completion unless it is stopped or suspended by an error condition or unless the volume being migrated is deleted.

As you can see from the parameters shown in Example 9-66 on page 475, before you can migrate your volume you must know the name of the volume you want to migrate and the name of the storage pool to which you want to migrate. To discover the names, run the svcinfo lsvdisk and svcinfo lsmdiskgrp commands.


After you know these details you can issue the migratevdisk command, as shown in Example 9-66.
Example 9-66 svctask migratevdisk

IBM_2145:ITSO-CLS4:admin>svctask migratevdisk -mdiskgrp STGPool_DS4700_2 -vdisk volume_C

This command moves volume_C to the storage pool named STGPool_DS4700_2.

Tips: If insufficient extents are available within your target storage pool, you receive an error message. Make sure that the source and target MDisk group have the same extent size. The optional -threads parameter allows you to assign a priority to the migration process. The default is 4, which is the highest priority setting. However, if you want the process to take a lower priority over other types of I/O, you can specify 3, 2, or 1.

You can run the svcinfo lsmigrate command at any time to see the status of the migration process (Example 9-67).
Example 9-67 svcinfo lsmigrate command

IBM_2145:ITSO-CLS4:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 12
migrate_source_vdisk_index 2
migrate_target_mdisk_grp 1
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS4:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 16
migrate_source_vdisk_index 2
migrate_target_mdisk_grp 1
max_thread_count 4
migrate_source_vdisk_copy_id 0

Progress: The progress is given as percent complete. If you receive no more replies, it means that the process has finished.

9.5.15 Migrating a fully managed volume to an image mode volume


Migrating a fully managed volume to an image mode volume allows the SVC to be removed from the data path, which might be useful where the SVC is used as a data mover appliance. You can use the svctask migratetoimage command.

To migrate a fully managed volume to an image mode volume, the following rules apply:
The destination MDisk must be greater than or equal to the size of the volume.
The MDisk that is specified as the target must be in an unmanaged state.
Regardless of the mode in which the volume starts, it is reported as managed mode during the migration.


Both of the MDisks involved are reported as being image mode during the migration.
If the migration is interrupted by a cluster recovery or by a cache problem, the migration resumes after the recovery completes.

Example 9-68 shows an example of the command.
Example 9-68 svctask migratetoimage

IBM_2145:ITSO-CLS4:admin>svctask migratetoimage -vdisk volume_A -mdisk mdisk8 -mdiskgrp STGPool_Image

In this example, the data from volume_A is migrated onto mdisk8, and that MDisk is placed in the STGPool_Image storage pool.

9.5.16 Shrinking a volume


The shrinkvdisksize command reduces the capacity that is allocated to the particular volume by the amount that you specify. You cannot shrink the real size of a thin-provisioned volume to less than its used size. All capacities, including changes, must be in multiples of 512 bytes. An entire extent is reserved even if it is only partially used. The default capacity units are MB.

The command can be used to shrink the physical capacity that is allocated to a particular volume by the specified amount. The command can also be used to shrink the virtual capacity of a thin-provisioned volume without altering the physical capacity assigned to the volume:
For a non-thin-provisioned volume, use the -size parameter.
For a thin-provisioned volume's real capacity, use the -rsize parameter.
For a thin-provisioned volume's virtual capacity, use the -size parameter.

When the virtual capacity of a thin-provisioned volume is changed, the warning threshold is automatically scaled to match. The new threshold is stored as a percentage.

The cluster arbitrarily reduces the capacity of the volume by removing a partial extent, one extent, or multiple extents from those extents that are allocated to the volume. You cannot control which extents are removed, and therefore you cannot assume that it is unused space that is removed.

Note that image mode volumes cannot be reduced in size. Instead, they must first be migrated to fully managed mode. To run the shrinkvdisksize command on a mirrored volume, all copies of the volume must be synchronized.

Important: If the volume contains data, do not shrink the disk. Certain operating systems or file systems use what they consider to be the outer edge of the disk for performance reasons. This command can shrink a FlashCopy target volume to the same capacity as the source.

Before you shrink a volume, validate that the volume is not mapped to any host objects. If the volume is mapped, data is displayed. You can determine the exact capacity of the source or master volume by issuing the svcinfo lsvdisk -bytes vdiskname command. Shrink the volume by the required amount by issuing the svctask shrinkvdisksize -size disk_size -unit b | kb | mb | gb | tb | pb vdisk_name | vdisk_id command.


Assuming your operating system supports it, you can use the svctask shrinkvdisksize command to decrease the capacity of a given volume. Example 9-69 shows an example of this command.
Example 9-69 svctask shrinkvdisksize

IBM_2145:ITSO-CLS4:admin>svctask shrinkvdisksize -size 44 -unit gb volume_A

This command shrinks a volume called volume_A from a total size of 80 GB, by 44 GB, to a new total size of 36 GB.
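As noted earlier, the -rsize parameter shrinks only the real capacity of a thin-provisioned volume. A minimal sketch, assuming a thin-provisioned volume named volume_B whose real capacity you want to reduce by 5 GB while leaving the virtual capacity that is presented to the host unchanged:

IBM_2145:ITSO-CLS4:admin>svctask shrinkvdisksize -rsize 5 -unit gb volume_B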

9.5.17 Showing a volume on an MDisk


Use the svcinfo lsmdiskmember command to display information about the volume that is using space on a specific MDisk, as shown in Example 9-70.
Example 9-70 svcinfo lsmdiskmember command

IBM_2145:ITSO-CLS4:admin>svcinfo lsmdiskmember mdisk1
id copy_id
0  0
2  0
3  0
4  0
5  0

This command displays a list of all of the volume IDs that correspond to the volume copies that use mdisk1. To correlate the IDs displayed in this output to volume names, we can run the svcinfo lsvdisk command, which we discuss in more detail in 9.5, Working with volumes on page 458.

9.5.18 Showing which volumes are using a storage pool


Use the svcinfo lsvdisk -filtervalue command, as shown in Example 9-71, to see which volumes are part of a specific storage pool. This command shows all of the volumes that are part of the storage pool named STGPool_DS4700_1.
Example 9-71 svcinfo lsvdisk -filtervalue: VDisks in the MDG

IBM_2145:ITSO-CLS4:admin>svcinfo lsvdisk -filtervalue mdisk_grp_name=STGPool_DS4700_1 -delim ,
id,name,IO_group_id,IO_group_name,status,mdisk_grp_id,mdisk_grp_name,capacity,type,FC_id,FC_name,RC_id,RC_name,vdisk_UID,fc_map_count,copy_count,fast_write_state,se_copy_count
2,Volume_mirrored,0,io_grp0,online,2,STGPool_DS4700_1,1.00GB,striped,,,,,60050768018281BEE000000000000004,0,1,empty,0

9.5.19 Showing which MDisks are used by a specific volume


Use the svcinfo lsvdiskmember command, as shown in Example 9-72 on page 478, to show which MDisks a specific volumes extents are from.


Example 9-72 svcinfo lsvdiskmember command

IBM_2145:ITSO-CLS4:admin>svcinfo lsvdiskmember 0
id
4
5
6
7

If you want to know more about these MDisks, you can run the svcinfo lsmdisk command, as explained in 9.2, Working with managed disks and disk controller systems on page 441 (using the ID displayed in Example 9-72 rather than the name).

9.5.20 Showing from which storage pool a volume has its extents
Use the svcinfo lsvdisk command as shown in Example 9-73 to show to which storage pool a specific volume belongs.
Example 9-73 svcinfo lsvdisk command: storage pool name

IBM_2145:ITSO-CLS4:admin>svcinfo lsvdisk vdisk_D
id 3
name vdisk_D
IO_group_id 1
IO_group_name io_grp1
status online
mdisk_grp_id 0
mdisk_grp_name STGPool_DS4700_1
capacity 80.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000003
throttling 0
preferred_node_id 6
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name STGPool_DS4700_1
type striped
mdisk_id
mdisk_name

fast_write_state empty
used_capacity 80.00GB
real_capacity 80.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize

To learn more about these storage pools, you can run the svcinfo lsmdiskgrp command, as explained in 9.2.10, Working with a storage pool on page 447.

9.5.21 Showing the host to which the volume is mapped


To show the hosts to which a specific volume has been assigned, run the svcinfo lsvdiskhostmap command as shown in Example 9-74.
Example 9-74 svcinfo lsvdiskhostmap command

IBM_2145:ITSO-CLS4:admin>svcinfo lsvdiskhostmap -delim , volume_B
id,name,SCSI_id,host_id,host_name,wwpn,vdisk_UID
1,vdisk_B,2,1,Nile,210000E08B892BCD,60050768018301BF2800000000000001
1,vdisk_B,2,1,Nile,210000E08B89B8C0,60050768018301BF2800000000000001

This command shows the host or hosts to which the volume_B volume was mapped. It is normal to see duplicate entries, because there are multiple paths between the cluster and the host. To be sure that the operating system on the host sees the disk only one time, you must install and configure a multipath software application, such as the IBM Subsystem Device Driver (SDD).

Specifying the -delim flag: Although the optional -delim flag normally comes at the end of the command string, in this case, you must specify this flag before the volume name. Otherwise, the command does not return any data.

9.5.22 Showing the volume to which the host is mapped


To show the volume to which a specific host has been assigned, run the svcinfo lshostvdiskmap command, as shown in Example 9-75.
Example 9-75 lshostvdiskmap command example

IBM_2145:ITSO-CLS4:admin>svcinfo lshostvdiskmap -delim , Siam
id,name,SCSI_id,vdisk_id,vdisk_name,wwpn,vdisk_UID
3,Siam,0,5,MM_DB_Pri,210000E08B18FF8A,60050768018301BF2800000000000005
3,Siam,1,4,MM_DBLog_Pri,210000E08B18FF8A,60050768018301BF2800000000000004
3,Siam,2,6,MM_App_Pri,210000E08B18FF8A,60050768018301BF2800000000000006

This command shows which volumes are mapped to the host called Siam.

Specifying the -delim flag: Although the optional -delim flag normally comes at the end of the command string, in this case you must specify this flag before the host name. Otherwise, the command does not return any data.


9.5.23 Tracing a volume from a host back to its physical disk


In many cases, you must verify exactly which physical disk is presented to the host; for example, from which storage pool a specific volume comes. However, from the host side it is not possible for the server administrator using the GUI to see on which physical disks the volumes are running. Instead, you must enter the following commands, starting from your multipath command prompt.
1. On your host, run the datapath query device command. You see a long disk serial number for each vpath device, as shown in Example 9-76.
Example 9-76 datapath query device

DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 60050768018301BF2800000000000005
============================================================================
Path#    Adapter/Hard Disk             State   Mode    Select  Errors
    0    Scsi Port2 Bus0/Disk1 Part0   OPEN    NORMAL      20       0
    1    Scsi Port3 Bus0/Disk1 Part0   OPEN    NORMAL    2343       0

DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 60050768018301BF2800000000000004
============================================================================
Path#    Adapter/Hard Disk             State   Mode    Select  Errors
    0    Scsi Port2 Bus0/Disk2 Part0   OPEN    NORMAL    2335       0
    1    Scsi Port3 Bus0/Disk2 Part0   OPEN    NORMAL       0       0

DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 60050768018301BF2800000000000006
============================================================================
Path#    Adapter/Hard Disk             State   Mode    Select  Errors
    0    Scsi Port2 Bus0/Disk3 Part0   OPEN    NORMAL    2331       0
    1    Scsi Port3 Bus0/Disk3 Part0   OPEN    NORMAL       0       0

State: In Example 9-76, the state of each path is OPEN. Sometimes you will see the state CLOSED. This does not necessarily indicate a problem, because it might be a result of the path's processing stage.

2. Run the svcinfo lshostvdiskmap command to return a list of all assigned volumes (Example 9-77).
Example 9-77 svcinfo lshostvdiskmap

IBM_2145:ITSO-CLS4:admin>svcinfo lshostvdiskmap -delim , Siam
id,name,SCSI_id,vdisk_id,vdisk_name,wwpn,vdisk_UID
3,Siam,0,5,MM_DB_Pri,210000E08B18FF8A,60050768018301BF2800000000000005
3,Siam,1,4,MM_DBLog_Pri,210000E08B18FF8A,60050768018301BF2800000000000004
3,Siam,2,6,MM_App_Pri,210000E08B18FF8A,60050768018301BF2800000000000006


Look for the disk serial number that matches your datapath query device output. This host was defined in our SVC as Siam.
3. Run the svcinfo lsvdiskmember vdiskname command for a list of the MDisk or MDisks that make up the specified volume (Example 9-78).
Example 9-78 svcinfo lsvdiskmember

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskmember MM_DBLog_Pri
id
0
1
2
3
4
10
11
13
15
16
17

4. Query the MDisks with the svcinfo lsmdisk mdiskID to find their controller and LUN number information, as shown in Example 9-79. The output displays the controller name and the controller LUN ID to help you (provided you gave your controller a unique name, such as a serial number) to track back to a LUN within the disk subsystem; see Example 9-79.
Example 9-79 svcinfo lsmdisk command

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk 3
id 3
name mdisk3
status online
mode managed
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 36.0GB
quorum_index
block_size 512
controller_name DS4500
ctrl_type 4
ctrl_WWNN 200400A0B8174431
controller_id 0
path_count 4
max_path_count 4
ctrl_LUN_# 0000000000000003
UID 600a0b8000174431000000e44713575400000000000000000000000000000000
preferred_WWPN 200400A0B8174433
active_WWPN 200400A0B8174433

9.6 Scripting under the CLI for SVC task automation


Using scripting constructs works better for the automation of regular operational jobs. You can use available shells to develop scripts. Scripting enhances the productivity of SVC administrators and the integration of their storage virtualization environment.


You can create your own customized scripts to automate a large number of tasks for completion at a variety of times and run them through the CLI. In large SAN environments where scripting with svctask commands is used, we suggest keeping the scripts as simple as possible, because fallback, documentation, and verifying that a script is successful before execution are harder to manage in a large SAN environment. In this section we present an overview of how to automate various tasks by creating scripts using the IBM System Storage SAN Volume Controller (SVC) command-line interface (CLI).

9.6.1 Scripting structure


When creating scripts to automate tasks on the SVC, use the structure that is illustrated in Figure 9-2.

Figure 9-2 Scripting structure for SVC task automation (scheduled or manual activation, create an SSH connection to the SVC, run the command or commands, and perform logging)

Creating a Secure Shell connection to the SVC


When creating a connection to the SVC from a script, you must have access to a private key that corresponds to a public key that has been previously uploaded to the SVC. The key is used to establish the Secure Shell (SSH) connection that is needed to use the CLI on the SVC. If the SSH keypair is generated without a passphrase, you can connect without the need of special scripting to parse in the passphrase.

On UNIX systems, you can use the ssh command to create an SSH connection with the SVC. On Windows systems, you can use a utility called plink.exe, which is provided with the PuTTY tool, to create an SSH connection with the SVC. In the following examples, we use plink to create the SSH connection to the SVC.

Executing the commands


When using the CLI, refer to the IBM System Storage SAN Volume Controller Command-Line Interface User's Guide to obtain the correct syntax and a detailed explanation of each command.


You can download it from the SVC documentation page for each SVC code level at this website:
http://www-947.ibm.com/support/entry/portal/Documentation/Hardware/System_Storage/Storage_software/Storage_virtualization/SAN_Volume_Controller_%282145%29

Performing logging

When using the CLI, not all commands provide a response to determine the status of the invoked command. Therefore, always create checks that can be logged for monitoring and troubleshooting purposes.

Connecting to the SVC using a predefined SSH connection


The easiest way to create an SSH connection to the SVC is when plink can call a predefined PuTTY session. Define a session that includes the following information:
The auto-login user name, set to your SVC admin user name (for example, admin). This parameter is set under the Connection > Data category, as shown in Figure 9-3.

Figure 9-3 Auto-login configuration

The private key for authentication (for example, icat.ppk). This key is the private key that you have already created. This parameter is set under the Connection > SSH > Auth category, as shown in Figure 9-4 on page 484.


Figure 9-4 An ssh private key configuration

The IP address of the SVC cluster. This parameter is set under the Session category as shown in Figure 9-5.

Figure 9-5 IP address

A session name. Our example uses ITSO-CLS4. Our PuTTy version is 0.60.


To use this predefined PuTTY session, use the following syntax:
plink ITSO-CLS4
If a predefined PuTTY session is not used, use this syntax:
plink admin@<your cluster ip add> -i "C:\DirectoryPath\KeyName.PPK"

Various limited scripts can be run directly in the SVC shell. Examples can be found at the following websites:
http://www.db94.net/wtfwiki/pmwiki.php?n=Main.HandySVCMiniScripts
http://www.db94.net/wtfwiki/pmwiki.php?n=Main.SVCMiniscriptStorage
http://www.db94.net/wtfwiki/pmwiki.php?n=Main.SVCMiniscriptTesting

Additionally, IBM provides a suite of scripting tools that is based on Perl. You can download these scripting tools from this website:
http://www.alphaworks.ibm.com/tech/svctools
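To tie these pieces together, the following fragment is a minimal sketch (not a production script) of a Windows batch file that uses the predefined PuTTY session from the previous examples to run a query and capture its output for logging. The directory path and log file name are assumptions for illustration only:

@echo off
rem Minimal sketch: run an SVC query through the predefined PuTTY session
rem ITSO-CLS4 and append the output to a log file for later review.
set LOGFILE=C:\svc\logs\svc_query.log
plink ITSO-CLS4 "svcinfo lsvdisk -delim :" >> "%LOGFILE%" 2>&1
if errorlevel 1 echo plink returned an error - check "%LOGFILE%"

A scheduled task (or a cron job on UNIX systems using ssh instead of plink) can then provide the scheduled activation shown in Figure 9-2.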

9.7 SVC advanced operations using the CLI


In the following sections we describe the commands that we think best represent advanced operational commands.

9.7.1 Command syntax


Two major command sets are available:
The svcinfo command set allows you to query the various components within the SVC environment.
The svctask command set allows you to make changes to the various components within the SVC.

When the command syntax is shown, you see several parameters in square brackets, for example, [parameter], which indicates that the parameter is optional in most, if not all, instances. Any parameter that is not in square brackets is required information. You can view the syntax of a command by entering one of the following commands:
svcinfo -?                          Shows a complete list of information commands.
svctask -?                          Shows a complete list of task commands.
svcinfo commandname -?              Shows the syntax of information commands.
svctask commandname -?              Shows the syntax of task commands.
svcinfo commandname -filtervalue?   Shows which filters you can use to reduce the output of the information commands.

Help: You can also use -h instead of -?, for example, svcinfo -h or svctask commandname -h.

If you look at the syntax of a command by typing svcinfo commandname -?, you often see -filter listed as a parameter. Be aware that the correct parameter is -filtervalue.

Tip: You can use the up and down arrow keys on your keyboard to recall commands that were recently issued. Then, you can use the left and right, Backspace, and Delete keys to edit commands before you resubmit them.

9.7.2 Organizing on window content


Sometimes the output of a command can be long and difficult to read in the window. In cases where you need information about a subset of the total number of available items, you can use filtering to reduce the output to a more manageable size.

Filtering
To reduce the output that is displayed by an svcinfo command, you can specify a number of filters, depending on which svcinfo command you are running. To see which filters are available, type the command followed by the -filtervalue? flag, as shown in Example 9-80.
Example 9-80 svcinfo lsvdisk -filtervalue? command

IBM_2145:ITSO-CLS4:admin>svcinfo lsvdisk -filtervalue?
Filters for this view are :
name
id
IO_group_id
IO_group_name
status
mdisk_grp_name
mdisk_grp_id
capacity
type
FC_id
FC_name
RC_id
RC_name
vdisk_name
vdisk_id
vdisk_UID
fc_map_count
copy_count
fast_write_state
se_copy_count

When you know the filters, you can be more selective in generating output:
Multiple filters can be combined to create specific searches.
You can use an asterisk (*) as a wildcard when using names (see the sketch at the end of this section).
When capacity is used, the units must also be specified using -u b | kb | mb | gb | tb | pb.

For example, if we issue the svcinfo lsvdisk command with no filters but with the -delim parameter, we see the output that is shown in Example 9-81.
Example 9-81 svcinfo lsvdisk command: No filters

IBM_2145:ITSO-CLS4:admin>svcinfo lsvdisk -delim ,
id,name,IO_group_id,IO_group_name,status,mdisk_grp_id,mdisk_grp_name,capacity,type,FC_id,FC_name,RC_id,RC_name,vdisk_UID,fc_map_count,copy_count,fast_write_state,se_copy_count
0,Volume_measured_only,0,io_grp0,online,1,ITSO-Storage_Pool-Single_Tier,10.00GB,striped,,,,,60050768018281BEE000000000000000,0,1,not_empty,0
1,Volume_EasyTier_active,0,io_grp0,online,0,ITSO-Storage_Pool-Multi_Tier,10.00GB,striped,,,,,60050768018281BEE000000000000003,0,1,not_empty,0


2,Volume_mirrored,0,io_grp0,online,2,STGPool_DS4700_1,1.00GB,striped,,,,,60050768018281BEE000000000000004,0,1,empty,0
3,Volume_new,0,io_grp0,online,3,STGPool_DS4700_2,1.00GB,striped,,,,,60050768018281BEE000000000000005,0,1,empty,0

Tip: The -delim parameter truncates the content in the window and separates data fields with the specified delimiter character, as opposed to wrapping text over multiple lines. This parameter is normally used in cases where you need to get reports during script execution.

If we now add a filter to our svcinfo command (mdisk_grp_name), we can reduce the output, as shown in Example 9-82.
Example 9-82 svcinfo lsvdisk command: With a filter

IBM_2145:ITSO-CLS4:admin>svcinfo lsvdisk -filtervalue mdisk_grp_name=STGPool_DS4700_1 -delim ,
id,name,IO_group_id,IO_group_name,status,mdisk_grp_id,mdisk_grp_name,capacity,type,FC_id,FC_name,RC_id,RC_name,vdisk_UID,fc_map_count,copy_count,fast_write_state,se_copy_count
2,Volume_mirrored,0,io_grp0,online,2,STGPool_DS4700_1,1.00GB,striped,,,,,60050768018281BEE000000000000004,0,1,empty,0
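The wildcard capability mentioned earlier in this section can be combined with -filtervalue in the same way. A minimal sketch that matches every volume whose name starts with Volume_EasyTier (depending on your shell, you might need to quote the value so that the asterisk is passed to the SVC rather than expanded locally):

IBM_2145:ITSO-CLS4:admin>svcinfo lsvdisk -filtervalue name=Volume_EasyTier* -delim ,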

9.8 Managing the cluster using the CLI


In these sections we demonstrate how to perform cluster administration.

9.8.1 Viewing cluster properties


Use the svcinfo lscluster command to display summary information about all of the clusters that are configured to the SVC, as shown in Example 9-83.
Example 9-83 svcinfo lscluster command

IBM_2145:ITSO-CLS4:admin>svcinfo lscluster
id               name      location partnership      bandwidth id_alias
0000020060A06FB8 ITSO-CLS4 local                               0000020060A06FB8
000002006440A068 ITSO-CLS1 remote   fully_configured 20        000002006440A068
0000020061806FCA ITSO-CLS2 remote   fully_configured 20        0000020061806FCA

9.8.2 Changing cluster settings


Use the svctask chcluster command to change the settings of the cluster. This command modifies the specific features of a cluster. You can change multiple features by issuing a single command. If the cluster IP address is changed, the open command-line shell closes during the processing of the command and you must reconnect to the new IP address. If this node cannot rejoin the cluster, you can bring the node up in service mode. In this mode, the node can be accessed as a stand-alone node using the service IP address. We discuss service IP address in more detail in 9.19, Working with the Service Assistant menu on page 576.


All command parameters are optional; however, you must specify at least one parameter.

Important: Be aware of the following points:
Only a user with administrator authority can change the password.
As mentioned, if the cluster IP address is changed, the open command-line shell closes during the processing of the command and you must reconnect to the new IP address.
Changing the speed on a running cluster breaks I/O service to the attached hosts. Before changing the fabric speed, stop I/O from the active hosts and force these hosts to flush any cached data by unmounting volumes (for UNIX host types) or by removing drive letters (for Windows host types). Specific hosts might need to be rebooted to detect the new fabric speed.

9.8.3 Performing cluster authentication


An important point of authentication is that the superuser user and password are the cluster administrator user and password. This user is a member of the Security Admin role. If this password is not known, you can reset it from the cluster front panel. We describe the authentication method in detail in Chapter 2, IBM System Storage SAN Volume Controller on page 7.

Tip: If you do not want the password to display as you enter it on the command line, omit the new password. The command line then prompts you to enter and confirm the password without the password being displayed.

The only authentication that can be changed from the chcluster command is the Service account user password, and to be able to change that, you need to have administrative rights. The Service account user password is changed in Example 9-84.
Example 9-84 svctask chcluster -servicepwd (for the Service account)

IBM_2145:ITSO-CLS1:admin>svctask chcluster -servicepwd
Enter a value for -password :
Enter password:
Confirm password:
IBM_2145:ITSO-CLS1:admin>

See 9.11.1, Managing users using the CLI on page 500 for more information about managing users.

9.8.4 iSCSI configuration


Starting with SVC 5.1, iSCSI is introduced as a supported method of communication between the SVC and hosts. All back-end storage and intracluster communication still uses FC and the SAN, so iSCSI cannot be used for that communication. In Chapter 2, IBM System Storage SAN Volume Controller on page 7 we describe in detail how iSCSI works. In this section we show how we configured our cluster for use with iSCSI.


We configured our nodes to use the primary and secondary Ethernet ports for iSCSI and to contain the cluster IP. When we configured our nodes to be used with iSCSI, we did not affect our cluster IP. The cluster IP is changed as shown in 9.8.2, Changing cluster settings on page 487.

It is important to know that we can have more than a one IP address-to-one physical connection relationship. We have the capability to have a four-to-one relationship (4:1), consisting of two IPv4 plus two IPv6 addresses (four total) to one physical connection per port per node.

Tip: When reconfiguring IP ports, be aware that already configured iSCSI connections will need to reconnect if changes are made to the IP addresses of the nodes.

There are two ways to perform iSCSI authentication or CHAP: either for the whole cluster or per host connection. Example 9-85 shows configuring CHAP for the whole cluster.
Example 9-85 Setting a CHAP secret for the entire cluster to passw0rd

IBM_2145:ITSO-CLS1:admin>svctask chcluster -iscsiauthmethod chap -chapsecret passw0rd
IBM_2145:ITSO-CLS1:admin>

In our scenario we have a cluster IP of 9.64.210.64, which is not affected during our configuration of the node IP addresses. We start by listing our ports using the svcinfo lsportip command. We see that we have two ports per node with which to work. Both ports can have two IP addresses that can be used for iSCSI. We configure the secondary port in both nodes in our I/O Group, as shown in Example 9-86.
Example 9-86 Configuring secondary Ethernet port on SVC nodes

IBM_2145:ITSO-CLS1:admin>svctask cfgportip -node 1 -ip 9.8.7.1 -gw 9.0.0.1 -mask 255.255.255.0 2
IBM_2145:ITSO-CLS1:admin>svctask cfgportip -node 2 -ip 9.8.7.3 -gw 9.0.0.1 -mask 255.255.255.0 2

While both nodes are online, each node will be available to iSCSI hosts on the IP address that we have configured. Note that iSCSI failover between nodes is enabled automatically. Therefore, if a node goes offline for any reason, its partner node in the I/O Group will become available on the failed node's port IP address. This ensures that hosts will continue to be able to perform I/O. The svcinfo lsportip command will display which port IP addresses are currently active on each node.
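Earlier we noted that CHAP can also be configured per host connection rather than for the whole cluster. The following invocation is a sketch of that per-host form, assuming that a host object named Tiger already exists; verify the exact parameters against the CLI guide for your code level:

IBM_2145:ITSO-CLS1:admin>svctask chhost -chapsecret passw0rd Tiger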

9.8.5 Modifying IP addresses


We can use both IP ports of the nodes. However, the first time that you configure a second port all IP information is required, because port 1 on the cluster must always have one stack fully configured. There are now two active cluster ports on the configuration node. If the cluster IP address is changed, the open command-line shell closes during the processing of the command. You must reconnect to the new IP address if connected through that port.


List the IP address of the cluster by issuing the svcinfo lsclusterip command. Modify the IP address by issuing the svctask chclusterip command. You can either specify a static IP address or have the system assign a dynamic IP address, as shown in Example 9-87.
Example 9-87 svctask chclusterip -clusterip

IBM_2145:ITSO-CLS1:admin>svctask chclusterip -clusterip 10.20.133.5 -gw 10.20.135.1 -mask 255.255.255.0 -port 1

This command changes the current IP address of the cluster to 10.20.133.5.

Important: If you specify a new cluster IP address, then the existing communication with the cluster through the CLI is broken and the PuTTY application automatically closes. You must relaunch the PuTTY application and point to the new IP address, but your SSH key will still work.

9.8.6 Supported IP address formats


Table 9-1 lists the IP address formats.
Table 9-1 ip_address_list formats

IP type                                             ip_address_list format
IPv4 (no port set, SVC uses default)                1.2.3.4
IPv4 with specific port                             1.2.3.4:22
Full IPv6, default port                             1234:1234:0001:0123:1234:1234:1234:1234
Full IPv6, default port, leading zeros suppressed   1234:1234:1:123:1234:1234:1234:1234
Full IPv6 with port                                 [2002:914:fc12:848:209:6bff:fe8c:4ff6]:23
Zero-compressed IPv6, default port                  2002::4ff6
Zero-compressed IPv6 with port                      [2002::4ff6]:23

At this point, we have completed the tasks that are required to change the IP addresses (cluster and service) of the SVC environment.

9.8.7 Setting the cluster time zone and time


Use the -timezone parameter to specify the numeric ID of the time zone that you want to set. Issue the svcinfo lstimezones command to list the time zones that are available on the cluster; this command displays a list of valid time zone settings. Tip: If you have changed the time zone, you must clear the event log dump directory before you can view the event log through the web application.

Setting the cluster time zone


Perform the following steps to set the cluster time zone and time: 1. Find out for which time zone your cluster is currently configured. Enter the svcinfo showtimezone command, as shown in Example 9-88 on page 491.

Example 9-88 svcinfo showtimezone command

IBM_2145:ITSO-CLS1:admin>svcinfo showtimezone
id  timezone
522 UTC

2. To find the time zone code that is associated with your time zone, enter the svcinfo lstimezones command, as shown in Example 9-89. A truncated list is provided for this example. If this setting is correct (for example, 522 UTC), go to Step 4. If not, continue with Step 3.
Example 9-89 svcinfo lstimezones command

IBM_2145:ITSO-CLS1:admin>svcinfo lstimezones
id  timezone
.
.
507 Turkey
508 UCT
509 Universal
510 US/Alaska
511 US/Aleutian
512 US/Arizona
513 US/Central
514 US/Eastern
515 US/East-Indiana
516 US/Hawaii
517 US/Indiana-Starke
518 US/Michigan
519 US/Mountain
520 US/Pacific
521 US/Samoa
522 UTC
.
.

3. Now that you know which time zone code is correct for you, set the time zone by issuing the svctask settimezone command (Example 9-90).
Example 9-90 svctask settimezone command

IBM_2145:ITSO-CLS1:admin>svctask settimezone -timezone 520

4. Set the cluster time by issuing the svctask setclustertime command (Example 9-91).
Example 9-91 svctask setclustertime command

IBM_2145:ITSO-CLS1:admin>svctask setclustertime -time 061718402008

The format of the time is MMDDHHmmYYYY. You have completed the necessary tasks to set the cluster time zone and time.
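As an illustration of the MMDDHHmmYYYY format (this value is not from our lab setup), setting the cluster time to 17:45 on 24 May 2011 would look like the following command:

IBM_2145:ITSO-CLS1:admin>svctask setclustertime -time 052417452011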

9.8.8 Starting statistics collection


Statistics are collected at the end of each sampling period (as specified by the -interval parameter). These statistics are written to a file. A new file is created at the end of each sampling period. Separate files are created for MDisks, volumes, and node statistics.
Use the svctask startstats command to start the collection of statistics, as shown in Example 9-92.
Example 9-92 svctask startstats command

IBM_2145:ITSO-CLS1:admin>svctask startstats -interval 15

The interval that we specify (minimum 1, maximum 60) is in minutes. This command starts statistics collection and gathers data at 15-minute intervals.

Statistics collection: To verify that statistics collection is set, display the cluster properties again, as shown in Example 9-93.
Example 9-93 Statistics collection status and frequency

IBM_2145:ITSO-CLS1:admin>svcinfo lscluster ITSO-CLS1
statistics_status on
statistics_frequency 15
-- Note that the output has been shortened for easier reading. --

At this point, we have completed the required tasks to start statistics collection on the cluster.

9.8.9 Stopping statistics collection


Use the svctask stopstats command to stop the collection of statistics within the cluster (Example 9-94).
Example 9-94 svctask stopstats command

IBM_2145:ITSO-CLS1:admin>svctask stopstats

This command stops the statistics collection. Do not expect any prompt message from this command. To verify that the statistics collection is stopped, display the cluster properties again, as shown in Example 9-95.
Example 9-95 Statistics collection status and frequency

IBM_2145:ITSO-CLS1:admin>svcinfo lscluster ITSO-CLS1
statistics_status off
statistics_frequency 15
-- Note that the output has been shortened for easier reading. --

Notice that the interval parameter is not changed, but the status is off. At this point, we have completed the required tasks to stop statistics collection on our cluster.

9.8.10 Determining the status of a copy operation


Use the svcinfo lscopystatus command, as shown in Example 9-96 on page 493, to determine if a file copy operation is in progress. Only one file copy operation can be performed at a time. The output of this command is a status of active or inactive.

Example 9-96 lscopystatus command

IBM_2145:ITSO-CLS1:admin>svcinfo lscopystatus
status
inactive

9.8.11 Shutting down a cluster


If all input power to an SVC cluster is to be removed for more than a few minutes (for example, if the machine room power is to be shut down for maintenance), it is important to shut down the cluster before removing the power. If the input power is removed from the uninterruptible power supply units without first shutting down the cluster and the uninterruptible power supply units, the uninterruptible power supply units remain operational and eventually become drained of power.

When input power is restored to the uninterruptible power supply units, they start to recharge. However, the SVC does not permit any I/O activity to be performed to the volumes until the uninterruptible power supply units are charged enough to enable all of the data on the SVC nodes to be destaged in the event of a subsequent unexpected power loss. Recharging the uninterruptible power supply can take as long as two hours.

Shutting down the cluster prior to removing input power to the uninterruptible power supply units prevents the battery power from being drained. It also makes it possible for I/O activity to be resumed as soon as input power is restored.

You can use the following procedure to shut down the cluster:
1. Use the svctask stopcluster command to shut down your SVC cluster (Example 9-97).
Example 9-97 svctask stopcluster

IBM_2145:ITSO-CLS1:admin>svctask stopcluster
Are you sure that you want to continue with the shut down?

This command shuts down the SVC cluster. All data is flushed to disk before the power is removed. At this point, you lose administrative contact with your cluster, and the PuTTY application automatically closes.

2. You are presented with the following message:
Warning: Are you sure that you want to continue with the shut down? Ensure that you have stopped all FlashCopy mappings, Metro Mirror (Remote Copy) relationships, data migration operations, and forced deletions before continuing.
Entering y executes the command, and no feedback is then displayed. Entering anything other than y(es) or Y(ES) results in the command not executing; again, no feedback is displayed.

Important: Before shutting down a cluster, ensure that all I/O operations destined for this cluster are stopped, because you will lose access to all volumes being provided by this cluster. Failure to do so can result in failed I/O operations being reported to the host operating systems. Begin the process of quiescing all I/O to the cluster by stopping the applications on the hosts that use the volumes provided by the cluster.

3. We have completed the tasks that are required to shut down the cluster. To shut down the uninterruptible power supply units, press the power-off button on the front panel of each uninterruptible power supply unit.

Restarting the cluster: To restart the cluster, you must first restart the uninterruptible power supply units by pressing the power buttons on their front panels. Then press the power-on button on the service panel of one of the nodes within the cluster. After the node is fully booted (for example, displaying Cluster: on line 1 and the cluster name on line 2 of the panel), you can start the other nodes in the same way. As soon as all of the nodes are fully booted, you can reestablish administrative contact using PuTTY, and your cluster will be fully operational again.

9.9 Nodes
This section details the tasks that can be performed at an individual node level.

9.9.1 Viewing node details


Use the svcinfo lsnode command to view the summary information about the nodes that are defined within the SVC environment. To view more details about a specific node, append the node name (for example, SVCNode_1) to the command. Example 9-98 shows both of these commands. Tip: The -delim parameter truncates the content in the window and separates data fields with colons (:) as opposed to wrapping text over multiple lines.
Example 9-98 svcinfo lsnode command

IBM_2145:ITSO-CLS1:admin>svcinfo lsnode -delim ,
id,name,UPS_serial_number,WWNN,status,IO_group_id,IO_group_name,config_node,UPS_unique_id,hardware
1,node1,1000739007,50050768010037E5,online,0,io_grp0,yes,20400001C3240007,8G4
2,node2,1000739004,50050768010037DC,online,0,io_grp0,no,20400001C3240004,8G4
3,node3,100066C107,5005076801001D1C,online,1,io_grp1,no,20400001864C1007,8G4
4,node4,100066C108,50050768010027E2,online,1,io_grp1,no,20400001864C1008,8G4

IBM_2145:ITSO-CLS1:admin>svcinfo lsnode node1
id 1
name node1
UPS_serial_number 1000739007
WWNN 50050768010037E5
status online
IO_group_id 0
IO_group_name io_grp0
partner_node_id 2
partner_node_name node2
config_node yes
UPS_unique_id 20400001C3240007
port_id 50050768014037E5
port_status active
port_speed 4Gb
port_id 50050768013037E5
port_status active
port_speed 4Gb
port_id 50050768011037E5
port_status active
port_speed 4Gb
port_id 50050768012037E5
port_status active
port_speed 4Gb
hardware 8G4

9.9.2 Adding a node


After cluster creation is completed through the service panel (the front panel of one of the SVC nodes) and cluster web interface, only one node (the configuration node) is set up. To have a fully functional SVC cluster, you must add a second node to the configuration. To add a node to a cluster, gather the necessary information, as explained in these steps: Before you can add a node, you must know which unconfigured nodes you have as candidates. Issue the svcinfo lsnodecandidate command (Example 9-99). You must specify to which I/O Group you are adding the node. If you enter the svcinfo lsnode command, you can easily identify the I/O Group ID of the group to which you are adding your node, as shown in Example 9-100.
Example 9-99 svcinfo lsnodecandidate command

IBM_2145:ITSO-CLS1:admin>svcinfo lsnodecandidate
id               panel_name UPS_serial_number UPS_unique_id    hardware
50050768010027E2 108283     100066C108        20400001864C1008 8G4
50050768010037DC 104603     1000739004        20400001C3240004 8G4

Tip: The node that you want to add must have a separate uninterruptible power supply unit serial number from the uninterruptible power supply unit on the first node.
Example 9-100 svcinfo lsnode command

id,name,UPS_serial_number,WWNN,status,IO_group_id,IO_group_name,config_node,UPS_unique_id,hardware,iscsi_name,iscsi_alias
1,ITSO_CLS1_0,100089J040,50050768010059E7,online,0,io_grp0,yes,2040000209680100,8G4,iqn.1986-03.com.ibm:2145.ITSO_CLS1_0.ITSO_CLS1_0_N0,

Now that we know the available nodes, we can use the svctask addnode command to add the node to the SVC cluster configuration. Example 9-101 shows the command to add a node to the SVC cluster.
Example 9-101 svctask addnode (wwnodename) command

IBM_2145:ITSO-CLS1:admin>svctask addnode -wwnodename 50050768010027E2 -name Node2 -iogrp io_grp0
Node, id [2], successfully added

This command adds the candidate node with the wwnodename of 50050768010027E2 to the I/O Group called io_grp0.

We used the -wwnodename parameter (50050768010027E2). However, we can also use the -panelname parameter (108283) instead, as shown in Example 9-102. If standing in front of the node, it is easier to read the panel name than it is to get the WWNN.
Example 9-102 svctask addnode (panelname) command

IBM_2145:ITSO-CLS1:admin>svctask addnode -panelname 108283 -name Node2 -iogrp io_grp0

We also used the optional -name parameter (Node2). If you do not provide the -name parameter, the SVC automatically generates the name nodeX (where X is the ID sequence number that is assigned internally by the SVC).

Name: If you want to provide a name, you can use letters A to Z and a to z, numbers 0 to 9, the dash (-), and the underscore (_). The name can be between one and 63 characters in length. However, the name cannot start with a number, dash, or the word node (because this prefix is reserved for SVC assignment only).

If the svctask addnode command returns no information even though your second node is powered on and the zones are correctly defined, preexisting cluster configuration data might be stored in the node. If you are sure that this node is not part of another active SVC cluster, use the service panel to delete the existing cluster information. After this action is complete, reissue the svcinfo lsnodecandidate command and the node will be listed.

9.9.3 Renaming a node


Use the svctask chnode command to rename a node within the SVC cluster configuration.
Example 9-103 svctask chnode -name command

IBM_2145:ITSO-CLS1:admin>svctask chnode -name ITSO_CLS1_Node1 4

This command renames node ID 4 to ITSO_CLS1_Node1.

Name: The chnode command specifies the new name first. You can use letters A to Z and a to z, numbers 0 to 9, the dash (-), and the underscore (_). The name can be between one and 63 characters in length. However, the name cannot start with a number, dash, or the word node (because this prefix is reserved for SVC assignment only).

9.9.4 Deleting a node


Use the svctask rmnode command to remove a node from the SVC cluster configuration (Example 9-104).
Example 9-104 svctask rmnode command

IBM_2145:ITSO-CLS1:admin>svctask rmnode node4

This command removes node4 from the SVC cluster. Because node4 was also the configuration node, the SVC transfers the configuration node responsibilities to a surviving node within the I/O Group. Unfortunately, the PuTTY session cannot be dynamically passed to the surviving node. Therefore, the PuTTY application loses communication and closes automatically.

We must restart the PuTTY application to establish a secure session with the new configuration node.

Important: If this node is the last node in an I/O Group, and there are volumes still assigned to the I/O Group, the node is not deleted from the cluster. If this node is the last node in the cluster, and the I/O Group has no volumes remaining, the cluster is destroyed and all virtualization information is lost. Any data that is still required must be backed up or migrated prior to destroying the cluster.

9.9.5 Shutting down a node


On occasion, it can be necessary to shut down a single node within the cluster to perform tasks, such as scheduled maintenance, while leaving the SVC environment up and running. Use the svctask stopcluster -node command, as shown in Example 9-105, to shut down a single node.
Example 9-105 svctask stopcluster -node command

IBM_2145:ITSO-CLS1:admin>svctask stopcluster -node n4
Are you sure that you want to continue with the shut down?

This command shuts down node n4 in a graceful manner. When this node has been shut down, the other node in the I/O Group destages the contents of its cache and goes into write-through mode until the node is powered up and rejoins the cluster.

Important: There is no need to stop FlashCopy mappings, Remote Copy relationships, and data migration operations. The partner node in the I/O Group handles these activities, but be aware that it is now a single point of failure. If this is the last node in an I/O Group, all access to the volumes in the I/O Group will be lost; verify that you want to shut down this node before executing this command, and in that case you must specify the -force flag.

By reissuing the svcinfo lsnode command (Example 9-106), we can see that the node is now offline.
Example 9-106 svcinfo lsnode command

IBM_2145:ITSO-CLS1:admin>svcinfo lsnode -delim ,
id,name,UPS_serial_number,WWNN,status,IO_group_id,IO_group_name,config_node,UPS_unique_id,hardware
1,n1,1000739007,50050768010037E5,online,0,io_grp0,yes,20400001C3240007,8G4
2,n2,1000739004,50050768010037DC,online,0,io_grp0,no,20400001C3240004,8G4
3,n3,100066C107,5005076801001D1C,online,1,io_grp1,no,20400001864C1007,8G4
6,n4,100066C108,0000000000000000,offline,1,io_grp1,no,20400001864C1008,unknown

IBM_2145:ITSO-CLS1:admin>svcinfo lsnode n4
CMMVC5782E The object specified is offline.

Restart: To restart the node manually, press the power on button on the service panel of the node.

At this point we have completed the tasks that are required to view, add, delete, rename, and shut down a node within an SVC environment.

9.10 I/O Groups


This section explains the tasks that you can perform at an I/O Group level.

9.10.1 Viewing I/O Group details


Use the svcinfo lsiogrp command, as shown in Example 9-107, to view information about the I/O Groups that are defined within the SVC environment.
Example 9-107 I/O Group details

IBM_2145:ITSO-CLS1:admin>svcinfo lsiogrp
id name            node_count vdisk_count host_count
0  io_grp0         2          3           3
1  io_grp1         2          4           3
2  io_grp2         0          0           2
3  io_grp3         0          0           2
4  recovery_io_grp 0          0           0

As shown, the SVC predefines five I/O Groups. In a four-node cluster (similar to our example), only two I/O Groups are actually in use. The other I/O Groups (io_grp2 and io_grp3) are for a six- or eight-node cluster. The recovery I/O Group is a temporary home for volumes when all nodes in the I/O Group that normally owns them have suffered multiple failures. This design allows us to move the volumes to the recovery I/O Group and then into a working I/O Group. Note that while temporarily assigned to the recovery I/O Group, I/O access is not possible.

9.10.2 Renaming an I/O Group


Use the svctask chiogrp command to rename an I/O Group (Example 9-108).
Example 9-108 svctask chiogrp command

IBM_2145:ITSO-CLS1:admin>svctask chiogrp -name io_grpA io_grp1

This command renames the I/O Group io_grp1 to io_grpA.

Name: The chiogrp command specifies the new name first. If you want to provide a name, you can use letters A to Z, letters a to z, numbers 0 to 9, the dash (-), and the underscore (_). The name can be between one and 63 characters in length. However, the name cannot start with a number, dash, or the word iogrp (because this prefix is reserved for SVC assignment only).

To see whether the renaming was successful, issue the svcinfo lsiogrp command again to see the change. At this point we have completed the tasks that are required to rename an I/O Group.

9.10.3 Adding and removing hostiogrp


To reach the maximum number of hosts supported by an SVC cluster, you can map or unmap specific host objects to specific I/O Groups. Use the svctask addhostiogrp command to map a specific host to a specific I/O Group, as shown in Example 9-109.
Example 9-109 svctask addhostiogrp command

IBM_2145:ITSO-CLS1:admin>svctask addhostiogrp -iogrp 1 Kanaga

The addhostiogrp command takes the following parameters:
-iogrp iogrp_list: Specifies a list of one or more I/O Groups that must be mapped to the host. This parameter is mutually exclusive with -iogrpall.
-iogrpall: Specifies that all of the I/O Groups must be mapped to the specified host. This parameter is mutually exclusive with -iogrp.
-host host_id_or_name: Identifies the host, either by ID or name, to which the I/O Groups must be mapped.

Use the svctask rmhostiogrp command to unmap a specific host from a specific I/O Group, as shown in Example 9-110.
Example 9-110 svctask rmhostiogrp command

IBM_2145:ITSO-CLS1:admin>svctask rmhostiogrp -iogrp 0 Kanaga

The rmhostiogrp command takes the following parameters:
-iogrp iogrp_list: Specifies a list of one or more I/O Groups that must be unmapped from the host. This parameter is mutually exclusive with -iogrpall.
-iogrpall: Specifies that all of the I/O Groups must be unmapped from the specified host. This parameter is mutually exclusive with -iogrp.
-force: If the removal of a host to I/O Group mapping results in the loss of volume to host mappings, the command fails unless the -force flag is used. The -force flag overrides this behavior and forces the deletion of the host to I/O Group mapping.
host_id_or_name: Identifies the host, either by ID or name, from which the I/O Groups must be unmapped.
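As a hedged sketch of combining these parameters (using the host Kanaga from the previous examples; whether the options can be combined should be confirmed with svctask rmhostiogrp -h, and only use -force if you accept losing the affected volume-to-host mappings), the following command removes all I/O Group mappings for the host:

IBM_2145:ITSO-CLS1:admin>svctask rmhostiogrp -iogrpall -force Kanaga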

9.10.4 Listing I/O Groups


To list all of the I/O Groups that are mapped to the specified host and vice versa, use the svcinfo lshostiogrp command, specifying the host name Kanaga, as shown in Example 9-111.
Example 9-111 svcinfo lshostiogrp command

IBM_2145:ITSO-CLS1:admin>svcinfo lshostiogrp Kanaga
id name
1  io_grp1

To list all of the host objects that are mapped to the specified I/O Group, use the svcinfo lsiogrphost command, as shown in Example 9-112.
Example 9-112 svcinfo lsiogrphost command

IBM_2145:ITSO-CLS1:admin>svcinfo lsiogrphost io_grp1
id name
1  Nile
2  Kanaga
3  Siam

In Example 9-112, io_grp1 is the I/O Group name.

9.11 Managing authentication


In the following sections we illustrate authentication administration.

9.11.1 Managing users using the CLI


Here we demonstrate how to operate and manage authentication by using the CLI. All users must now be a member of a predefined user group. You can list those groups by using the svcinfo lsusergrp command, as shown in Example 9-113.
Example 9-113 svcinfo lsusergrp command

IBM_2145:ITSO-CLS2:admin>svcinfo lsusergrp
id name          role          remote
0  SecurityAdmin SecurityAdmin no
1  Administrator Administrator no
2  CopyOperator  CopyOperator  no
3  Service       Service       no
4  Monitor       Monitor       no

Example 9-114 is a simple example of creating a user. User John is added to the user group Monitor with the password m0nitor.
Example 9-114 svctask mkuser called John with password m0nitor

IBM_2145:ITSO-CLS1:admin>svctask mkuser -name John -usergrp Monitor -password m0nitor
User, id [2], successfully created

Local users are users that are not authenticated by a remote authentication server. Remote
users are users that are authenticated by a remote central registry server. The user groups already have a defined authority role, as listed in Table 9-2 on page 501.

Table 9-2 Authority roles

Security admin
  Role: All commands
  User: Superusers

Administrator
  Role: All commands except the following svctask commands: chauthservice, mkuser, rmuser, chuser, mkusergrp, rmusergrp, chusergrp, and setpwdreset
  User: Administrators that control the SVC

Copy operator
  Role: All svcinfo commands and the following svctask commands: prestartfcconsistgrp, startfcconsistgrp, stopfcconsistgrp, chfcconsistgrp, prestartfcmap, startfcmap, stopfcmap, chfcmap, startrcconsistgrp, stoprcconsistgrp, switchrcconsistgrp, chrcconsistgrp, startrcrelationship, stoprcrelationship, switchrcrelationship, chrcrelationship, and chpartnership
  User: For users that control all of the copy functionality of the cluster

Service
  Role: All svcinfo commands and the following svctask commands: applysoftware, setlocale, addnode, rmnode, cherrstate, writesernum, detectmdisk, includemdisk, clearerrlog, cleardumps, settimezone, stopcluster, startstats, stopstats, and settime
  User: For users that perform service maintenance and other hardware tasks on the cluster

Monitor
  Role: All svcinfo commands, the following svctask commands: finderr, dumperrlog, dumpinternallog, and chcurrentuser, and the svcconfig backup command
  User: For users only needing view access

9.11.2 Managing user roles and groups


Role-based security commands are used to restrict the administrative abilities of a user. We cannot create new user roles, but we can create new user groups and assign a predefined role to our group. To view the user roles on your cluster, use the svcinfo lsusergrp command, as shown in Example 9-115 on page 502, which lists all user groups and their roles.

Example 9-115 svcinfo lsusergrp command

IBM_2145:ITSO-CLS2:admin>svcinfo lsusergrp
id name          role          remote
0  SecurityAdmin SecurityAdmin no
1  Administrator Administrator no
2  CopyOperator  CopyOperator  no
3  Service       Service       no
4  Monitor       Monitor       no

To view our currently defined users and the user groups to which they belong we use the svcinfo lsuser command, as shown in Example 9-116.
Example 9-116 svcinfo lsuser command

IBM_2145:ITSO-CLS2:admin>svcinfo lsuser -delim ,
id,name,password,ssh_key,remote,usergrp_id,usergrp_name
0,superuser,yes,no,no,0,SecurityAdmin
1,admin,no,yes,no,0,SecurityAdmin
2,Pall,yes,no,no,1,Administrator

9.11.3 Changing a user


To change user passwords, issue the svctask chuser command. For information about how to change the Service account user password, see 9.8.3, Performing cluster authentication on page 488. The chuser command allows you to modify a user that is already created. You can rename a user, assign a new password (if you are logged on with administrative privileges), and move a user from one user group to another user group. Be aware, however, that a user can only be a member of one group at a time.
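The following commands are a minimal sketch of typical chuser usage; the parameter names follow the pattern used by mkuser, and the new password value is hypothetical, so verify the exact syntax with svctask chuser -h on your cluster. They move the user John (created earlier) to the CopyOperator group and then assign him a new password:

IBM_2145:ITSO-CLS1:admin>svctask chuser -usergrp CopyOperator John
IBM_2145:ITSO-CLS1:admin>svctask chuser -password c0pyOp3r John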

9.11.4 Audit log command


The audit log can be extremely helpful in showing which commands have been entered on a cluster. Most action commands that are issued by the old or new CLI are recorded in the audit log. The native GUI performs actions by using the CLI programs. The SVC Console performs actions by issuing Common Information Model (CIM) commands to the CIM object manager (CIMOM), which then runs the CLI programs. Actions performed by using both the native GUI and the SVC Console are recorded in the audit log.

Certain commands are not audited:
svctask cpdumps
svctask cleardumps
svctask finderr
svctask dumperrlog
svctask dumpinternallog

The audit log contains approximately 1 MB of data, which can contain about 6000 average length commands. When this log is full, the cluster copies it to a new file in the /dumps/audit directory on the config node and resets the in-memory audit log.

To display entries from the audit log, use the svcinfo catauditlog -first 5 command to return a list of five in-memory audit log entries, as shown in Example 9-117.
Example 9-117 catauditlog command

IBM_2145:ITSO-CLS1:admin>svcinfo catauditlog -first 5 -delim ,
291,090904200329,superuser,10.64.210.231,0,,svctask mkvdiskhostmap -host 1 21
292,090904201238,admin,10.64.210.231,0,,svctask chvdisk -name swiss_cheese 21
293,090904204314,superuser,10.64.210.231,0,,svctask chhost -name ITSO_W2008 1
294,090904204314,superuser,10.64.210.231,0,,svctask chhost -mask 15 1
295,090904204410,admin,10.64.210.231,0,,svctask chvdisk -name SwissCheese 21

If you need to dump the contents of the in-memory audit log to a file on the current configuration node, use the svctask dumpauditlog command. This command does not provide any feedback; it only provides the prompt. To obtain a list of the audit log dumps, use the svcinfo lsauditlogdumps command as shown in Example 9-118.
Example 9-118 svctask dumpauditlog/svcinfo lsauditlogdumps command

IBM_2145:ITSO-CLS4:admin>svcinfo lsauditlogdumps
id auditlog_filename
0  auditlog_0_80_20080619134139_0000020060c06fca
1  auditlog_0_2238_20080806160952_0000020060e06fca
2  auditlog_0_52_20100920161016_0000020060a06fb8

9.12 Managing Copy Services


In the following sections we illustrate how to manage Copy Services.

9.12.1 FlashCopy operations


In this section we use a scenario to illustrate how to use commands with PuTTY to perform FlashCopy. See IBM System Storage Open Software Family SAN Volume Controller: Command-Line Interface Users Guide, GC27-2287, for information about other commands.

Scenario description
We use the following scenario in both the command-line section and the GUI section. In the following scenario, we want to FlashCopy the following volumes:

DB_Source     Database files
Log_Source    Database log files
App_Source    Application files

We create Consistency Groups to handle the FlashCopy of DB_Source and Log_Source, because data integrity must be kept on DB_Source and Log_Source.

In our scenario, the application files are independent of the database, so we create a single FlashCopy mapping for App_Source. We will make two FlashCopy targets for DB_Source and Log_Source and therefore, two Consistency Groups. Figure 9-6 shows the scenario.

Figure 9-6 FlashCopy scenario

9.12.2 Setting up FlashCopy


We have already created the source and target volumes, and the source and target volumes are identical in size, which is a requirement of the FlashCopy function:

DB_Source, DB_Target1, and DB_Target2
Log_Source, Log_Target1, and Log_Target2
App_Source and App_Target1

To set up the FlashCopy, we perform the following steps:
1. Create two FlashCopy Consistency Groups:
   FCCG1
   FCCG2
2. Create FlashCopy mappings for the source volumes, with a copy rate of 50:
   DB_Source FlashCopy to DB_Target1; the mapping name is DB_Map1
   DB_Source FlashCopy to DB_Target2; the mapping name is DB_Map2
   Log_Source FlashCopy to Log_Target1; the mapping name is Log_Map1
   Log_Source FlashCopy to Log_Target2; the mapping name is Log_Map2
   App_Source FlashCopy to App_Target1; the mapping name is App_Map1

9.12.3 Creating a FlashCopy Consistency Group


To create a FlashCopy Consistency Group, we use the command svctask mkfcconsistgrp to create a new Consistency Group. The ID of the new group is returned. If you have created several FlashCopy mappings for a group of volumes that contain elements of data for the same application, it might be convenient to assign these mappings to a single FlashCopy Consistency Group. Then you can issue a single prepare or start command for the whole group so that, for example, all files for a particular database are copied at the same time. In Example 9-119, the FCCG1 and FCCG2 Consistency Groups are created to hold the FlashCopy maps of DB and Log. This step is extremely important for FlashCopy on database applications because it helps to maintain data integrity during FlashCopy.
Example 9-119 Creating two FlashCopy Consistency Groups

IBM_2145:ITSO-CLS1:admin>svctask mkfcconsistgrp -name FCCG1
FlashCopy Consistency Group, id [1], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkfcconsistgrp -name FCCG2
FlashCopy Consistency Group, id [2], successfully created

In Example 9-120, we check the status of the Consistency Groups. Each Consistency Group has a status of empty.
Example 9-120 Checking the status

IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
id name  status
1  FCCG1 empty
2  FCCG2 empty

If you want to change the name of a Consistency Group, you can use the svctask chfcconsistgrp command. Type svctask chfcconsistgrp -h for help with this command.
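As a sketch of renaming a Consistency Group (we assume the same new-name-first pattern that the chnode and chiogrp commands use; confirm with svctask chfcconsistgrp -h), the following command renames FCCG1 to FCCG_DB, a hypothetical name used only for illustration:

IBM_2145:ITSO-CLS1:admin>svctask chfcconsistgrp -name FCCG_DB FCCG1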

9.12.4 Creating a FlashCopy mapping


To create a FlashCopy mapping, we use the svctask mkfcmap command. This command creates a new FlashCopy mapping, which maps a source volume to a target volume to prepare for subsequent copying. When executed, this command creates a new FlashCopy mapping logical object. This mapping persists until it is deleted. The mapping specifies the source and destination volumes. The destination must be identical in size to the source or the mapping will fail. Issue the svcinfo lsvdisk -bytes command to find the exact size of the source volume for which you want to create a target disk of the same size. In a single mapping, source and destination cannot be on the same volume. A mapping is triggered at the point in time when the copy is required. The mapping can optionally be given a name and assigned to a Consistency Group. These groups of mappings can be triggered at the same time, enabling multiple volumes to be copied at the same time, which creates a consistent copy of multiple disks. A consistent copy of multiple disks is required for database products in which the database and log files reside on separate disks. If no Consistency Group is defined, the mapping is assigned to the default group 0, which is a special group that cannot be started as a whole. Mappings in this group can only be started on an individual basis.

The background copy rate specifies the priority that must be given to completing the copy. If 0 is specified, the copy will not proceed in the background. The default is 50.

Tip: There is a parameter to delete FlashCopy mappings automatically after completion of a background copy (when the mapping gets to the idle_or_copied state). Use the command:
svctask mkfcmap -autodelete
This command does not delete mappings in cascade with dependent mappings, because such a mapping cannot get to the idle_or_copied state in this situation.

In Example 9-121, the first FlashCopy mappings for DB_Source, Log_Source, and App_Source are created.
Example 9-121 Create the first FlashCopy mapping for DB_Source, Log_Source, and App_Source

IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source DB_Source -target DB_Target_1 -name DB_Map1 -consistgrp FCCG1
FlashCopy Mapping, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source Log_Source -target Log_Target_1 -name Log_Map1 -consistgrp FCCG1
FlashCopy Mapping, id [1], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source App_Source -target App_Target_1 -name App_Map1
FlashCopy Mapping, id [2], successfully created

Example 9-122 shows the command to create a second FlashCopy mapping for volumes DB_Source and Log_Source.
Example 9-122 Create additional FlashCopy mappings

IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source DB_Source -target DB_Target2 -name DB_Map2 -consistgrp FCCG2
FlashCopy Mapping, id [3], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source Log_Source -target Log_Target2 -name Log_Map2 -consistgrp FCCG2
FlashCopy Mapping, id [4], successfully created

Example 9-123 shows the result of these FlashCopy mappings. The status of each mapping is idle_or_copied.
Example 9-123 Check the result of Multiple Target FlashCopy mappings

IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap
id name     source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id group_name status         progress copy_rate clean_progress incremental partner_FC_id partner_FC_name restoring
0  DB_Map1  0               DB_Source         6               DB_Target_1       1        FCCG1      idle_or_copied 0        50        100            off                                       no
1  Log_Map1 1               Log_Source        4               Log_Target_1      1        FCCG1      idle_or_copied 0        50        100            off                                       no
2  App_Map1 2               App_Source        3               App_Target_1                          idle_or_copied 0        50        100            off                                       no
3  DB_Map2  0               DB_Source         7               DB_Target_2       2        FCCG2      idle_or_copied 0        50        100            off                                       no
4  Log_Map2 1               Log_Source        5               Log_Target_2      2        FCCG2      idle_or_copied 0        50        100            off                                       no
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
id name  status
1  FCCG1 idle_or_copied
2  FCCG2 idle_or_copied

If you want to change the FlashCopy mapping, you can use the svctask chfcmap command. Type svctask chfcmap -h to get help with this command.
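As a sketch of changing a mapping attribute (we assume here that chfcmap accepts a -copyrate parameter in the same way that mkfcmap does; confirm with svctask chfcmap -h), the following command raises the background copy rate of DB_Map1:

IBM_2145:ITSO-CLS1:admin>svctask chfcmap -copyrate 80 DB_Map1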

9.12.5 Preparing (pre-triggering) the FlashCopy mapping


At this point the mapping has been created, but the cache still accepts data for the source volumes. You can only trigger the mapping when the cache does not contain any data for the FlashCopy source volumes. You must issue an svctask prestartfcmap command to prepare a FlashCopy mapping to start. This command tells the SVC to flush the cache of any content for the source volume and to pass through any further write data for this volume.

When the svctask prestartfcmap command is executed, the mapping enters the Preparing state. After the preparation is complete, it changes to the Prepared state. At this point, the mapping is ready for triggering. Preparing and the subsequent triggering are usually performed on a Consistency Group basis. Only mappings belonging to Consistency Group 0 can be prepared on their own, because Consistency Group 0 is a special group that contains the FlashCopy mappings that do not belong to any Consistency Group. A FlashCopy mapping must be prepared before it can be triggered.

In our scenario, App_Map1 is not in a Consistency Group. In Example 9-124, we show how to initialize the preparation for App_Map1. Another option is to add the -prep parameter to the svctask startfcmap command, which first prepares the mapping and then starts the FlashCopy. In the example, we also show how to check the status of the current FlashCopy mapping. App_Map1's status is prepared.
Example 9-124 Prepare a FlashCopy without a Consistency Group

IBM_2145:ITSO-CLS1:admin>svctask prestartfcmap App_Map1
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap App_Map1
id 2
name App_Map1
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 3
target_vdisk_name App_Target_1
group_id
group_name
status prepared
progress 0
copy_rate 50
start_time
dependent_mappings 0
autodelete off
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no

9.12.6 Preparing (pre-triggering) the FlashCopy Consistency Group


We use the svctask prestartfcconsistsgrp command to prepare a FlashCopy Consistency Group. As with 9.12.5, Preparing (pre-triggering) the FlashCopy mapping on page 507, this command flushes the cache of any data that is destined for the source volume and forces the cache into the write-through mode until the mapping is started. The difference is that this command prepares a group of mappings (at a Consistency Group level) instead of one mapping. When you have assigned several mappings to a FlashCopy Consistency Group, you only have to issue a single prepare command for the whole group to prepare all of the mappings at one time. Example 9-125 shows how we prepare the Consistency Groups for DB and Log and check the result. After the command has executed all of the FlashCopy maps that we have, all of them are in the prepared status and all the Consistency Groups are in the prepared status, too. Now we are ready to start the FlashCopy.
Example 9-125 Prepare a FlashCopy Consistency Group

IBM_2145:ITSO-CLS1:admin>svctask prestartfcconsistgrp FCCG1
IBM_2145:ITSO-CLS1:admin>svctask prestartfcconsistgrp FCCG2
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp FCCG1
id 1
name FCCG1
status prepared
autodelete off
FC_mapping_id 0
FC_mapping_name DB_Map1
FC_mapping_id 1
FC_mapping_name Log_Map1
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
id name  status
1  FCCG1 prepared
2  FCCG2 prepared

9.12.7 Starting (triggering) FlashCopy mappings


The svctask startfcmap command is used to start a single FlashCopy mapping. When invoked, a point-in-time copy of the source volume is created on the target volume.

When the FlashCopy mapping is triggered, it enters the Copying state. The way that the copy proceeds depends on the background copy rate attribute of the mapping. If the mapping is set to 0 (NOCOPY), only data that is subsequently updated on the source will be copied to the destination. We suggest that you use this scenario as a backup copy while the mapping exists in the Copying state. If the copy is stopped, the destination is unusable. If you want to end up with a duplicate copy of the source at the destination, set the background copy rate greater than 0. This way, the system copies all of the data (even unchanged data) to the destination and eventually reaches the idle_or_copied state. After this data is copied, you can delete the mapping and have a usable point-in-time copy of the source at the destination. In Example 9-126, after the FlashCopy is started, App_Map1 changes to copying status.
Example 9-126 Start App_Map1

IBM_2145:ITSO-CLS1:admin>svctask startfcmap App_Map1
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap
id name     source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id group_name status   progress copy_rate clean_progress incremental partner_FC_id partner_FC_name restoring
0  DB_Map1  0               DB_Source         6               DB_Target_1       1        FCCG1      prepared 0        50        100            off                                       no
1  Log_Map1 1               Log_Source        4               Log_Target_1      1        FCCG1      prepared 0        50        100            off                                       no
2  App_Map1 2               App_Source        3               App_Target_1                          copying  0        50        100            off                                       no
3  DB_Map2  0               DB_Source         7               DB_Target_2       2        FCCG2      prepared 0        50        100            off                                       no
4  Log_Map2 1               Log_Source        5               Log_Target_2      2        FCCG2      prepared 0        50        100            off                                       no
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap App_Map1
id 2
name App_Map1
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 3
target_vdisk_name App_Target_1
group_id
group_name
status copying
progress 29
copy_rate 50
start_time 090826171647
dependent_mappings 0
autodelete off
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no

9.12.8 Starting (triggering) FlashCopy Consistency Group


We execute the svctask startfcconsistgrp command, as shown in Example 9-127, and afterward the database can be resumed. We have created two point-in-time consistent copies of the DB and Log volumes. After execution, the Consistency Group and the FlashCopy maps are all in the copying status.
Example 9-127 Start FlashCopy Consistency Group

IBM_2145:ITSO-CLS1:admin>svctask startfcconsistgrp FCCG1
IBM_2145:ITSO-CLS1:admin>svctask startfcconsistgrp FCCG2
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp FCCG1
id 1
name FCCG1
status copying
autodelete off
FC_mapping_id 0
FC_mapping_name DB_Map1
FC_mapping_id 1
FC_mapping_name Log_Map1
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
id name  status
1  FCCG1 copying
2  FCCG2 copying

9.12.9 Monitoring the FlashCopy progress


To monitor the background copy progress of the FlashCopy mappings, we issue the svcinfo lsfcmapprogress command for each FlashCopy mapping. Alternatively, you can also query the copy progress by using the svcinfo lsfcmap command. As shown in Example 9-128, DB_Map1, Log_Map1, Log_Map2, and DB_Map2 each return information that the background copy is 23% completed, and App_Map1 returns information that the background copy is 53% completed.
Example 9-128 Monitoring background copy progress

IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress DB_Map1
id progress
0  23
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress Log_Map1
id progress
1  23
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress Log_Map2
id progress
4  23
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress DB_Map2
id progress
3  23
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress App_Map1
id progress
2  53

When the background copy has completed, the FlashCopy mapping enters the idle_or_copied state. When all FlashCopy mappings in a Consistency Group enter this state, the Consistency Group is at idle_or_copied status. When in this state, the FlashCopy mapping can be deleted and the target disk can be used independently if, for example, another target disk is to be used for the next FlashCopy of the particular source volume.

9.12.10 Stopping the FlashCopy mapping


The svctask stopfcmap command is used to stop a FlashCopy mapping. This command allows you to stop an active (copying) or suspended mapping. When executed, this command stops a single FlashCopy mapping.

Tip: In a Multiple Target FlashCopy environment, if you want to stop a mapping or group, consider whether you want to keep any of the dependent mappings. If not, issue the stop command with the -force parameter, which stops all of the dependent maps and negates the need for the stopping copy process to run.

When a FlashCopy mapping is stopped, the target volume becomes invalid and is set offline by the SVC. The FlashCopy mapping must be prepared again or retriggered to bring the target volume online again.

Important: Only stop a FlashCopy mapping when the data on the target volume is not in use, or when you want to modify the FlashCopy mapping. When a FlashCopy mapping is stopped, the target volume becomes invalid and is set offline by the SVC, unless the mapping is in the Copying state and progress=100.

Example 9-129 shows how to stop the App_Map1 FlashCopy. The status of App_Map1 has changed to idle_or_copied.
Example 9-129 Stop APP_Map1 FlashCopy

IBM_2145:ITSO-CLS1:admin>svctask stopfcmap App_Map1
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap App_Map1
id 2
name App_Map1
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 3
target_vdisk_name App_Target_1
group_id
group_name
status idle_or_copied
progress 100
copy_rate 50
start_time 090826171647
dependent_mappings 0
autodelete off
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no

9.12.11 Stopping the FlashCopy Consistency Group


The svctask stopfcconsistgrp command is used to stop any active FlashCopy Consistency Group. It stops all mappings in a Consistency Group. When a FlashCopy Consistency Group is stopped for all mappings that are not 100% copied, the target volumes become invalid and are set offline by the SVC. The FlashCopy Consistency Group must be prepared again and restarted to bring the target volumes online again.

Important: Only stop a FlashCopy mapping when the data on the target volume is not in use, or when you want to modify the FlashCopy Consistency Group. When a Consistency Group is stopped, the target volume might become invalid and be set offline by the SVC, depending on the state of the mapping.

As shown in Example 9-130, we stop the FCCG1 and FCCG2 Consistency Groups. The status of the two Consistency Groups has changed to stopped. As you can see, the FlashCopy mapping relations had already completed the copy operation and are now in a status of idle_or_copied.
Example 9-130 Stop FCCG1 and FCCG2 Consistency Groups

IBM_2145:ITSO-CLS1:admin>svctask stopfcconsistgrp FCCG1
IBM_2145:ITSO-CLS1:admin>svctask stopfcconsistgrp FCCG2
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
id name  status
1  FCCG1 stopped
2  FCCG2 stopped
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap -delim ,
id,name,source_vdisk_id,source_vdisk_name,target_vdisk_id,target_vdisk_name,group_id,group_name,status,progress,copy_rate,clean_progress,incremental,partner_FC_id,partner_FC_name,restoring
0,DB_Map1,0,DB_Source,6,DB_Target_1,1,FCCG1,idle_or_copied,100,50,100,off,,,no
1,Log_Map1,1,Log_Source,4,Log_Target_1,1,FCCG1,idle_or_copied,100,50,100,off,,,no
2,App_Map1,2,App_Source,3,App_Target_1,,,idle_or_copied,100,50,100,off,,,no
3,DB_Map2,0,DB_Source,7,DB_Target_2,2,FCCG2,idle_or_copied,100,50,100,off,,,no
4,Log_Map2,1,Log_Source,5,Log_Target_2,2,FCCG2,idle_or_copied,100,50,100,off,,,no

9.12.12 Deleting the FlashCopy mapping


To delete a FlashCopy mapping, we use the svctask rmfcmap command. When the command is executed, it attempts to delete the specified FlashCopy mapping. If the FlashCopy mapping is stopped, the command fails unless the -force flag is specified. If the mapping is active (copying), it must first be stopped before it can be deleted.

Deleting a mapping only deletes the logical relationship between the two volumes. However, when issued on an active FlashCopy mapping using the -force flag, the delete renders the data on the FlashCopy mapping target volume as inconsistent.

Tip: If you want to use the target volume as a normal volume, monitor the background copy progress until it is complete (100% copied) and then delete the FlashCopy mapping. Another option is to set the -autodelete option when creating the FlashCopy mapping.

As shown in Example 9-131, we delete App_Map1.
Example 9-131 Delete App_Map1

IBM_2145:ITSO-CLS1:admin>svctask rmfcmap App_Map1

9.12.13 Deleting the FlashCopy Consistency Group


The svctask rmfcconsistgrp command is used to delete a FlashCopy Consistency Group. When executed, this command deletes the specified Consistency Group. If there are mappings that are members of the group, the command fails unless the -force flag is specified. If you want to delete all of the mappings in the Consistency Group as well, first delete the mappings and then delete the Consistency Group. As shown in Example 9-132, we delete all of the maps and Consistency Groups and then check the result.
Example 9-132 Remove fcmaps and fcconsistgrp

IBM_2145:ITSO-CLS1:admin>svctask rmfcmap DB_Map1
IBM_2145:ITSO-CLS1:admin>svctask rmfcmap DB_Map2
IBM_2145:ITSO-CLS1:admin>svctask rmfcmap Log_Map1
IBM_2145:ITSO-CLS1:admin>svctask rmfcmap Log_Map2
IBM_2145:ITSO-CLS1:admin>svctask rmfcconsistgrp FCCG1
IBM_2145:ITSO-CLS1:admin>svctask rmfcconsistgrp FCCG2
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap
IBM_2145:ITSO-CLS1:admin>

9.12.14 Migrating a volume to a thin-provisioned volume


Use the following scenario to migrate a volume to a thin-provisioned volume:
1. Create a thin-provisioned (space-efficient) target volume with exactly the same size as the volume that you want to migrate. Example 9-133 shows the details of the volume with ID 8. It has been created as a thin-provisioned volume with the same size as the App_Source volume.
Example 9-133 svcinfo lsvdisk 8 command

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk 8
id 8
name App_Source_SE
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
capacity 1.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AB813F100000000000000B
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 221.17MB
free_capacity 220.77MB
overallocation 462
autoexpand on
warning 80
grainsize 32

2. Define a FlashCopy mapping in which the non thin-provisioned volume is the source and the thin-provisioned volume is the target. Specify a copy rate as high as possible and activate the -autodelete option for the mapping; see Example 9-134.

Example 9-134 svctask mkfcmap

IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source App_Source -target App_Source_SE -name MigrtoThinProv -copyrate 100 -autodelete
FlashCopy Mapping, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap 0
id 0
name MigrtoThinProv
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 8
target_vdisk_name App_Source_SE
group_id
group_name
status idle_or_copied
progress 0
copy_rate 100
start_time
dependent_mappings 0
autodelete on
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no

3. Run the svctask prestartfcmap command and the svcinfo lsfcmap MigrtoThinProv command, as shown in Example 9-135.

Example 9-135 svctask prestartfcmap

IBM_2145:ITSO-CLS1:admin>svctask prestartfcmap MigrtoThinProv
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap MigrtoThinProv
id 0
name MigrtoThinProv
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 8
target_vdisk_name App_Source_SE
group_id
group_name
status prepared
progress 0
copy_rate 100
start_time
dependent_mappings 0
autodelete on
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no

4. Run the svctask startfcmap command, as shown in Example 9-136.
Example 9-136 svctask startfcmap command

IBM_2145:ITSO-CLS1:admin>svctask startfcmap MigrtoThinProv

5. Monitor the copy process using the svcinfo lsfcmapprogress command, as shown in Example 9-137.
Example 9-137 svcinfo lsfcmapprogress command

IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress MigrtoThinProv
id progress
0  63

6. The FlashCopy mapping is deleted automatically after the background copy completes, as shown in Example 9-138.

Example 9-138 svcinfo lsfcmap command

IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap MigrtoThinProv
id 0
name MigrtoThinProv
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 8
target_vdisk_name App_Source_SE
group_id
group_name
status copying
progress 73
copy_rate 100
start_time 090827095354
dependent_mappings 0
autodelete on
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no

IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap MigrtoThinProv
CMMVC5754E The specified object does not exist, or the name supplied does not meet the naming rules.

An independent copy of the source volume (App_Source) has been created. The migration has completed, as shown in Example 9-139.

Example 9-139 svcinfo lsvdisk

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk App_Source_SE
id 8
name App_Source_SE
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
capacity 1.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AB813F100000000000000B
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.77MB
overallocation 99
autoexpand on
warning 80
grainsize 32

Real size: Independently of what you defined as the real size of the target thin-provisioned volume, the real size will be at least the capacity of the source volume.
To migrate a thin-provisioned volume to a fully allocated volume, you can follow the same scenario.
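For example, a minimal sketch of the reverse direction might look like the following commands, where App_Source_FULL is a hypothetical fully allocated volume of exactly the same size that you create first; the FlashCopy mapping is built, prepared, and started in the same way as before:

IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source App_Source_SE -target App_Source_FULL -name MigrToFull -copyrate 100 -autodelete
IBM_2145:ITSO-CLS1:admin>svctask prestartfcmap MigrToFull
IBM_2145:ITSO-CLS1:admin>svctask startfcmap MigrToFull

When the background copy completes, the mapping is deleted automatically (because of -autodelete) and the fully allocated copy can be used in place of the thin-provisioned volume.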

9.12.15 Reverse FlashCopy


You can also have a reverse FlashCopy mapping without having to remove the original FlashCopy mapping, and without restarting a FlashCopy mapping from the beginning. In Example 9-140, FCMAP_1 is the forward FlashCopy mapping, and FCMAP_rev_1 is a reverse FlashCopy mapping. We also have a cascaded mapping, FCMAP_2, whose source is FCMAP_1's target volume and whose target is a different volume named Volume_FC_T1.

In our example, after creating the environment, we started FCMAP_1 and later FCMAP_2. As an example, we started FCMAP_rev_1 without specifying the -restore parameter to show why we have to use it, and to show the message issued if you do not use it:
CMMVC6298E The command failed because a target VDisk has dependent FlashCopy mappings.
When starting a reverse FlashCopy mapping, you must use the -restore option to indicate that you want to overwrite the data on the source disk of the forward mapping.
Example 9-140 Reverse FlashCopy

IBM_2145:ITSO-CLS4:admin>svcinfo lsvdisk
id name           IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name   capacity type    vdisk_UID                        fc_map_count copy_count fast_write_state se_copy_count
4  Volume_FC_S    0           io_grp0       online 2            STGPool_DS4700_1 1.00GB   striped 60050768018281BEE000000000000006 0            1          empty            0
5  Volume_FC_T_S1 0           io_grp0       online 2            STGPool_DS4700_1 1.00GB   striped 60050768018281BEE000000000000007 0            1          empty            0
6  Volume_FC_T1   0           io_grp0       online 2            STGPool_DS4700_1 1.00GB   striped 60050768018281BEE000000000000008 0            1          empty            0
IBM_2145:ITSO-CLS4:admin>svctask mkfcmap -source Volume_FC_S -target Volume_FC_T_S1 -name FCMAP_1 -copyrate 50
FlashCopy Mapping, id [0], successfully created
IBM_2145:ITSO-CLS4:admin>svctask mkfcmap -source Volume_FC_T_S1 -target Volume_FC_S -name FCMAP_rev_1 -copyrate 50
FlashCopy Mapping, id [1], successfully created
IBM_2145:ITSO-CLS4:admin>svctask mkfcmap -source Volume_FC_T_S1 -target Volume_FC_T1 -name FCMAP_2 -copyrate 50
FlashCopy Mapping, id [2], successfully created
IBM_2145:ITSO-CLS4:admin>svcinfo lsfcmap
id name        source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id group_name status         progress copy_rate clean_progress incremental partner_FC_id partner_FC_name restoring
0  FCMAP_1     4               Volume_FC_S       5               Volume_FC_T_S1                        idle_or_copied 0        50        100            off         1             FCMAP_rev_1     no
1  FCMAP_rev_1 5               Volume_FC_T_S1    4               Volume_FC_S                           idle_or_copied 0        50        100            off         0             FCMAP_1         no
2  FCMAP_2     5               Volume_FC_T_S1    6               Volume_FC_T1                          idle_or_copied 0        50        100            off                                       no
IBM_2145:ITSO-CLS4:admin>svctask startfcmap -prep FCMAP_1
IBM_2145:ITSO-CLS4:admin>svcinfo lsfcmap
id name        source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id group_name status         progress copy_rate clean_progress incremental partner_FC_id partner_FC_name restoring
0  FCMAP_1     4               Volume_FC_S       5               Volume_FC_T_S1                        copying        0        50        100            off         1             FCMAP_rev_1     no
1  FCMAP_rev_1 5               Volume_FC_T_S1    4               Volume_FC_S                           idle_or_copied 0        50        100            off         0             FCMAP_1         no
2  FCMAP_2     5               Volume_FC_T_S1    6               Volume_FC_T1                          idle_or_copied 0        50        100            off                                       no
IBM_2145:ITSO-CLS4:admin>svctask startfcmap -prep FCMAP_2
IBM_2145:ITSO-CLS4:admin>svcinfo lsfcmap
id name        source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id group_name status         progress copy_rate clean_progress incremental partner_FC_id partner_FC_name restoring
0  FCMAP_1     4               Volume_FC_S       5               Volume_FC_T_S1                        copying        8        50        91             off         1             FCMAP_rev_1     no
1  FCMAP_rev_1 5               Volume_FC_T_S1    4               Volume_FC_S                           idle_or_copied 0        50        100            off         0             FCMAP_1         no
2  FCMAP_2     5               Volume_FC_T_S1    6               Volume_FC_T1                          copying        0        50        100            off                                       no
IBM_2145:ITSO-CLS4:admin>svctask startfcmap -prep FCMAP_rev_1
CMMVC6298E The command failed because a target VDisk has dependent FlashCopy mappings.
IBM_2145:ITSO-CLS4:admin>svctask startfcmap -prep -restore FCMAP_rev_1
IBM_2145:ITSO-CLS4:admin>svcinfo lsfcmap
id name        source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id group_name status  progress copy_rate clean_progress incremental partner_FC_id partner_FC_name restoring
0  FCMAP_1     4               Volume_FC_S       5               Volume_FC_T_S1                        copying 20       50        86             off         1             FCMAP_rev_1     no
1  FCMAP_rev_1 5               Volume_FC_T_S1    4               Volume_FC_S                           copying 86       50        16             off         0             FCMAP_1         yes
2  FCMAP_2     5               Volume_FC_T_S1    6               Volume_FC_T1                          copying 12       50        100            off                                       no
As you can see in Example 9-140 on page 518, FCMAP_rev_1 shows a restoring value of yes while the FlashCopy mapping is copying. After it has finished copying, the restoring value field will change to no.

9.12.16 Split-stopping of FlashCopy maps


The stopfcmap command has a -split option. This option allows the source volume of a mapping that is 100% complete to be removed from the head of a cascade when the mapping is stopped.

For example, if we have four volumes in a cascade (A → B → C → D) and the map A → B is 100% complete, using the stopfcmap -split mapAB command results in mapAB becoming idle_or_copied and the remaining cascade becomes B → C → D. Without the -split option, volume A remains at the head of the cascade (A → C → D). Consider this sequence of steps:
1. The user takes a backup using the mapping A → B. A is the production volume; B is a backup.
2. At a later point, the user experiences corruption on A and therefore reverses the mapping (B → A).
3. The user then takes another backup from the production disk A, resulting in the cascade B → A → C.

Stopping A → B without the -split option results in the cascade B → C. Note that the backup disk B is now at the head of this cascade. When the user next wants to take a backup to B, the user can still start the mapping A → B (using the -restore flag), but the user cannot then reverse the mapping to A (B → A or C → A).

Stopping A → B with the -split option results in the cascade A → C instead. This action does not cause the same problem, because the production disk A, rather than the backup disk B, remains at the head of the cascade.
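As a minimal sketch (the mapping name mapAB is hypothetical and assumes that a cascade like the one just described already exists), the split stop is issued as follows:

IBM_2145:ITSO-CLS4:admin>svctask stopfcmap -split mapAB

After the command completes, svcinfo lsfcmap shows mapAB in the idle_or_copied state, and the production volume A, rather than the backup volume B, remains at the head of the cascade.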

9.13 Metro Mirror operation


Note: This example is for intercluster operations only. If you want to set up intracluster operations, we highlight those parts of the following procedure that you do not need to perform.

In the following scenario, we set up an intercluster Metro Mirror relationship between the SVC cluster ITSO-CLS1 at the primary site and the SVC cluster ITSO-CLS4 at the secondary site. Table 9-3 shows the details of the volumes.


Table 9-3 Volume details

Content of volume      Volumes at primary site    Volumes at secondary site
Database files         MM_DB_Pri                  MM_DB_Sec
Database log files     MM_DBLog_Pri               MM_DBLog_Sec
Application files      MM_App_Pri                 MM_App_Sec

Because data consistency is needed across the MM_DB_Pri and MM_DBLog_Pri volumes, a Consistency Group named CG_W2K3_MM is created to handle the Metro Mirror relationships for them. Because, in this scenario, the application files are independent of the database, a stand-alone Metro Mirror relationship is created for the MM_App_Pri volume. Figure 9-7 illustrates the Metro Mirror setup.

Figure 9-7 Metro Mirror scenario

9.13.1 Setting up Metro Mirror


In the following section, we assume that the source and target volumes have already been created and that the inter-switch links (ISLs) and zoning are in place, enabling the SVC clusters to communicate. To set up the Metro Mirror, perform the following steps:
1. Create an SVC partnership between ITSO-CLS1 and ITSO-CLS4 on both SVC clusters:
   - Bandwidth: 50 MBps


2. Create a Metro Mirror Consistency Group:
   - Name: CG_W2K3_MM
3. Create the Metro Mirror relationship for MM_DB_Pri:
   - Master: MM_DB_Pri
   - Auxiliary: MM_DB_Sec
   - Auxiliary SVC cluster: ITSO-CLS4
   - Name: MMREL1
   - Consistency Group: CG_W2K3_MM
4. Create the Metro Mirror relationship for MM_DBLog_Pri:
   - Master: MM_DBLog_Pri
   - Auxiliary: MM_DBLog_Sec
   - Auxiliary SVC cluster: ITSO-CLS4
   - Name: MMREL2
   - Consistency Group: CG_W2K3_MM
5. Create the Metro Mirror relationship for MM_App_Pri:
   - Master: MM_App_Pri
   - Auxiliary: MM_App_Sec
   - Auxiliary SVC cluster: ITSO-CLS4
   - Name: MMREL3

In the following section, we perform each step by using the CLI.

9.13.2 Creating an SVC partnership between ITSO-CLS1 and ITSO-CLS4


We create the SVC partnership on both clusters.

Intracluster Metro Mirror: If you are creating an intracluster Metro Mirror, do not perform the next step; instead, go to 9.13.3, Creating a Metro Mirror Consistency Group on page 524.

Preverification
To verify that both clusters can communicate with each other, use the svcinfo lsclustercandidate command. As shown in Example 9-141, ITSO-CLS4 is an eligible SVC cluster candidate at ITSO-CLS1 for the SVC cluster partnership, and vice versa. Therefore, both clusters are communicating with each other.
Example 9-141 Listing the available SVC cluster for partnership

IBM_2145:ITSO-CLS1:admin>svcinfo lsclustercandidate
id configured name
0000020069E03A42 no ITSO-CLS3
0000020063E03A38 no ITSO-CLS4
0000020061006FCA no ITSO-CLS2

IBM_2145:ITSO-CLS4:admin>svcinfo lsclustercandidate
id configured name
0000020069E03A42 no ITSO-CLS3
000002006AE04FC4 no ITSO-CLS1
0000020061006FCA no ITSO-CLS2


Example 9-142 shows the output of the svcinfo lscluster command before the Metro Mirror partnership is set up. We show it so that you can compare it with the output of the same command after the partnership has been created.
Example 9-142 Pre-verification of cluster configuration

IBM_2145:ITSO-CLS1:admin>svcinfo lscluster
id name location partnership bandwidth id_alias
000002006AE04FC4 ITSO-CLS1 local 000002006AE04FC4

IBM_2145:ITSO-CLS4:admin>svcinfo lscluster
id name location partnership bandwidth id_alias
0000020063E03A38 ITSO-CLS4 local 0000020063E03A38

Partnership between clusters


In Example 9-143, a partnership is created between ITSO-CLS1 and ITSO-CLS4, specifying 50 MBps bandwidth to be used for the background copy. To check the status of the newly created partnership, issue the svcinfo lscluster command. Also, notice that the new partnership is only partially configured. It remains partially configured until the mkpartnership command is run on the other cluster.
Example 9-143 Creating the partnership from ITSO-CLS1 to ITSO-CLS4 and verifying the partnership

IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster
id name location partnership bandwidth id_alias
000002006AE04FC4 ITSO-CLS1 local 000002006AE04FC4
0000020063E03A38 ITSO-CLS4 remote partially_configured_local 50 0000020063E03A38

In Example 9-144, the partnership is created from ITSO-CLS4 back to ITSO-CLS1, specifying a bandwidth of 50 MBps to be used for the background copy. After creating the partnership, verify that the partnership is fully configured on both clusters by reissuing the svcinfo lscluster command.
Example 9-144 Creating the partnership from ITSO-CLS4 to ITSO-CLS1 and verifying the partnership

IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1
IBM_2145:ITSO-CLS4:admin>svcinfo lscluster
id name location partnership bandwidth id_alias
0000020063E03A38 ITSO-CLS4 local 0000020063E03A38
000002006AE04FC4 ITSO-CLS1 remote fully_configured 50 000002006AE04FC4


9.13.3 Creating a Metro Mirror Consistency Group


In Example 9-145, we create the Metro Mirror Consistency Group using the svctask mkrcconsistgrp command. This Consistency Group will be used for the Metro Mirror relationships of the database volumes named MM_DB_Pri and MM_DBLog_Pri. The Consistency Group is named CG_W2K3_MM.
Example 9-145 Creating the Metro Mirror Consistency Group CG_W2K3_MM

IBM_2145:ITSO-CLS1:admin>svctask mkrcconsistgrp -cluster ITSO-CLS4 -name CG_W2K3_MM
RC Consistency Group, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp
id name master_cluster_id master_cluster_name aux_cluster_id aux_cluster_name primary state relationship_count copy_type
0 CG_W2K3_MM 000002006AE04FC4 ITSO-CLS1 0000020063E03A38 ITSO-CLS4 empty 0 empty_group

9.13.4 Creating the Metro Mirror relationships


In Example 9-146, we create the Metro Mirror relationships MMREL1 and MMREL2, for MM_DB_Pri and MM_DBLog_Pri. Also, we make them members of the Metro Mirror Consistency Group CG_W2K3_MM. We use the svcinfo lsvdisk command to list all of the volumes in the ITSO-CLS1 cluster, and we then use the svcinfo lsrcrelationshipcandidate command to show the volumes in the ITSO-CLS4 cluster. By using this command, we check the possible candidates for MM_DB_Pri. After checking all of these conditions, we use the svctask mkrcrelationship command to create the Metro Mirror relationship. To verify the newly created Metro Mirror relationships, list them with the svcinfo lsrcrelationship command.
Example 9-146 Creating Metro Mirror relationships MMREL1 and MMREL2

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk -filtervalue name=MM*
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state
13 MM_DB_Pri 0 io_grp0 online 0 MDG_DS47 1.00GB striped 6005076801AB813F1000000000000010 0 1 empty
14 MM_Log_Pri 0 io_grp0 online 0 MDG_DS47 1.00GB striped 6005076801AB813F1000000000000011 0 1 empty
15 MM_App_Pri 0 io_grp0 online 0 MDG_DS47 1.00GB striped 6005076801AB813F1000000000000012 0 1 empty
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationshipcandidate
id vdisk_name
0 DB_Source
1 Log_Source
2 App_Source
3 App_Target_1
4 Log_Target_1
5 Log_Target_2
6 DB_Target_1
7 DB_Target_2
8 App_Source_SE
9 FC_A
13 MM_DB_Pri
14 MM_Log_Pri
15 MM_App_Pri
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationshipcandidate -aux ITSO-CLS4 -master MM_DB_Pri
id vdisk_name
0 MM_DB_Sec
1 MM_Log_Sec
2 MM_App_Sec
IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master MM_DB_Pri -aux MM_DB_Sec -cluster ITSO-CLS4 -consistgrp CG_W2K3_MM -name MMREL1
RC Relationship, id [13], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master MM_Log_Pri -aux MM_Log_Sec -cluster ITSO-CLS4 -consistgrp CG_W2K3_MM -name MMREL2
RC Relationship, id [14], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship
id name master_cluster_id master_cluster_name master_vdisk_id master_vdisk_name aux_cluster_id aux_cluster_name aux_vdisk_id aux_vdisk_name primary consistency_group_id consistency_group_name state bg_copy_priority progress copy_type
13 MMREL1 000002006AE04FC4 ITSO-CLS1 13 MM_DB_Pri 0000020063E03A38 ITSO-CLS4 0 MM_DB_Sec master 0 CG_W2K3_MM inconsistent_stopped 50 0 metro
14 MMREL2 000002006AE04FC4 ITSO-CLS1 14 MM_Log_Pri 0000020063E03A38 ITSO-CLS4 1 MM_Log_Sec master 0 CG_W2K3_MM inconsistent_stopped 50 0 metro

9.13.5 Creating a stand-alone Metro Mirror relationship for MM_App_Pri


In Example 9-147 on page 526, we create the stand-alone Metro Mirror relationship MMREL3 for MM_App_Pri. After it is created, we check the status of this Metro Mirror relationship. Notice that the state of MMREL3 is consistent_stopped. MMREL3 is in this state because it was created with the -sync option. The -sync option indicates that the secondary (auxiliary) volume is already synchronized with the primary (master) volume. Initial background synchronization is skipped when this option is used, even though the volumes are not actually synchronized in this scenario. We want to illustrate the option of pre-synchronized master and auxiliary volumes, before setting up the relationship. We have created the new relationship for MM_App_Sec using the -sync option.


Tip: The -sync option is only used when the target volume has already mirrored all of the data from the source volume. By using this option, there is no initial background copy between the primary volume and the secondary volume. MMREL2 and MMREL1 are in the inconsistent_stopped state because they were not created with the -sync option, so their auxiliary volumes need to be synchronized with their primary volumes.
Example 9-147 Creating a stand-alone relationship and verifying it

IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master MM_App_Pri -aux MM_App_Sec -sync -cluster ITSO-CLS4 -name MMREL3
RC Relationship, id [15], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship 15
id 15 name MMREL3 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 15 master_vdisk_name MM_App_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 2 aux_vdisk_name MM_App_Sec primary master consistency_group_id consistency_group_name state consistent_stopped bg_copy_priority 50 progress 100 freeze_time status online sync in_sync copy_type metro

9.13.6 Starting Metro Mirror


Now that the Metro Mirror Consistency Group and relationships are in place, we are ready to use Metro Mirror relationships in our environment. When implementing Metro Mirror, the goal is to reach a consistent and synchronized state that can provide redundancy for a dataset if a failure occurs that affects the production site. In the following section, we show how to stop and start stand-alone Metro Mirror relationships and Consistency Groups.

Starting a stand-alone Metro Mirror relationship


In Example 9-148 on page 527, we start a stand-alone Metro Mirror relationship named MMREL3. Because the Metro Mirror relationship was in the Consistent stopped state and no updates have been made to the primary volume, the relationship quickly enters the Consistent synchronized state.

Example 9-148 Starting the stand-alone Metro Mirror relationship

IBM_2145:ITSO-CLS1:admin>svctask startrcrelationship MMREL3 IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL3 id 15 name MMREL3 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 15 master_vdisk_name MM_App_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 2 aux_vdisk_name MM_App_Sec primary master consistency_group_id consistency_group_name state consistent_synchronized bg_copy_priority 50 progress freeze_time status online sync copy_type metro IBM_2145:ITSO-CLS1:admin>

9.13.7 Starting a Metro Mirror Consistency Group


In Example 9-149, we start the Metro Mirror Consistency Group CG_W2K3_MM. Because the Consistency Group was in the Inconsistent stopped state, it enters the Inconsistent copying state until the background copy has completed for all of the relationships in the Consistency Group. Upon completion of the background copy, it enters the Consistent synchronized state.
Example 9-149 Starting the Metro Mirror Consistency Group

IBM_2145:ITSO-CLS1:admin>svctask startrcconsistgrp CG_W2K3_MM IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM id 0 name CG_W2K3_MM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary master state inconsistent_copying relationship_count 2 freeze_time status sync copy_type metro RC_rel_id 13 RC_rel_name MMREL1

RC_rel_id 14 RC_rel_name MMREL2 IBM_2145:ITSO-CLS1:admin>

9.13.8 Monitoring the background copy progress


To monitor the background copy progress, we can use the svcinfo lsrcrelationship command. This command shows all of the defined Metro Mirror relationships if it is used without any arguments. In the command output, progress indicates the current background copy progress. Our Metro Mirror relationship is shown in Example 9-150. Using SNMP traps: Setting up SNMP traps for the SVC enables automatic notification when Metro Mirror Consistency Groups or relationships change state.
Example 9-150 Monitoring background copy progress example

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL1 id 13 name MMREL1 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 13 master_vdisk_name MM_DB_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 0 aux_vdisk_name MM_DB_Sec primary master consistency_group_id 0 consistency_group_name CG_W2K3_MM state consistent_synchronized bg_copy_priority 50 progress 35 freeze_time status online sync copy_type metro IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL2 id 14 name MMREL2 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 14 master_vdisk_name MM_Log_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 1 aux_vdisk_name MM_Log_Sec primary master consistency_group_id 0 consistency_group_name CG_W2K3_MM state consistent_synchronized bg_copy_priority 50


progress 37 freeze_time status online sync copy_type metro

When all Metro Mirror relationships have completed the background copy, the Consistency Group enters the Consistent synchronized state, as shown in Example 9-151.
Example 9-151 Listing the Metro Mirror Consistency Group

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM id 0 name CG_W2K3_MM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary master state consistent_synchronized relationship_count 2 freeze_time status sync copy_type metro RC_rel_id 13 RC_rel_name MMREL1 RC_rel_id 14 RC_rel_name MMREL2
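The tip earlier in this section mentions SNMP notification. As a minimal sketch (the trap receiver address 10.64.210.100 and the community name SVCTRAP are placeholders for your environment, not values from this scenario, and the exact notification-level parameters available can vary by code level), an SNMP server entry can be defined on the cluster so that state changes of Metro Mirror Consistency Groups and relationships are reported automatically:

IBM_2145:ITSO-CLS1:admin>svctask mksnmpserver -ip 10.64.210.100 -community SVCTRAP

The svcinfo lssnmpserver command can then be used to confirm the definition.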

9.13.9 Stopping and restarting Metro Mirror


Now that the Metro Mirror Consistency Group and relationships are running, in this section and in the following sections we describe how to stop, restart, and change the direction of the stand-alone Metro Mirror relationships and the Consistency Group.

9.13.10 Stopping a stand-alone Metro Mirror relationship


Example 9-152 shows how to stop the stand-alone Metro Mirror relationship, while enabling access (write I/O) to both the primary and secondary volumes. It also shows the relationship entering the Idling state.
Example 9-152 Stopping stand-alone Metro Mirror relationship and enabling access to the secondary

IBM_2145:ITSO-CLS1:admin>svctask stoprcrelationship -access MMREL3 IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL3 id 15 name MMREL3 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 15 master_vdisk_name MM_App_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 2


aux_vdisk_name MM_App_Sec primary consistency_group_id consistency_group_name state idling bg_copy_priority 50 progress freeze_time status sync in_sync copy_type metro

9.13.11 Stopping a Metro Mirror Consistency Group


Example 9-153 shows how to stop the Metro Mirror Consistency Group without specifying the -access flag. The Consistency Group enters the Consistent stopped state.
Example 9-153 Stopping a Metro Mirror Consistency Group

IBM_2145:ITSO-CLS1:admin>svctask stoprcconsistgrp CG_W2K3_MM
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM
id 0 name CG_W2K3_MM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary master state consistent_stopped relationship_count 2 freeze_time status sync in_sync copy_type metro RC_rel_id 13 RC_rel_name MMREL1 RC_rel_id 14 RC_rel_name MMREL2

If, afterwards, we want to enable access (write I/O) to the secondary volume, we reissue the svctask stoprcconsistgrp command, specifying the -access flag. The Consistency Group transitions to the Idling state, as shown in Example 9-154.
Example 9-154 Stopping a Metro Mirror Consistency Group and enabling access to the secondary

IBM_2145:ITSO-CLS1:admin>svctask stoprcconsistgrp -access CG_W2K3_MM IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM id 0 name CG_W2K3_MM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary state idling


relationship_count 2 freeze_time status sync in_sync copy_type metro RC_rel_id 13 RC_rel_name MMREL1 RC_rel_id 14 RC_rel_name MMREL2

9.13.12 Restarting a Metro Mirror relationship in the Idling state


When restarting a Metro Mirror relationship in the Idling state, we must specify the copy direction. If any updates have been performed on either the master or the auxiliary volume, consistency will be compromised. Therefore, we must issue the command with the -force flag to restart a relationship, as shown in Example 9-155.
Example 9-155 Restarting a Metro Mirror relationship after updates in the Idling state

IBM_2145:ITSO-CLS1:admin>svctask startrcrelationship -primary master -force MMREL3 IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL3 id 15 name MMREL3 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 15 master_vdisk_name MM_App_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 2 aux_vdisk_name MM_App_Sec primary master consistency_group_id consistency_group_name state consistent_synchronized bg_copy_priority 50 progress freeze_time status online sync copy_type metro

9.13.13 Restarting a Metro Mirror Consistency Group in the Idling state


When restarting a Metro Mirror Consistency Group in the Idling state, we must specify the copy direction. If any updates have been performed on either the master or the auxiliary volume in any of the Metro Mirror relationships in the Consistency Group, the consistency is compromised. Therefore, we must use the -force flag to start a relationship. If the -force flag is not used, the command fails.


In Example 9-156, we change the copy direction by specifying the auxiliary volumes to become the primaries.
Example 9-156 Restarting a Metro Mirror relationship while changing the copy direction

IBM_2145:ITSO-CLS1:admin>svctask startrcconsistgrp -force -primary aux CG_W2K3_MM IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM id 0 name CG_W2K3_MM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary aux state consistent_synchronized relationship_count 2 freeze_time status sync copy_type metro RC_rel_id 13 RC_rel_name MMREL1 RC_rel_id 14 RC_rel_name MMREL2

9.13.14 Changing copy direction for Metro Mirror


In this section, we show how to change the copy direction of the stand-alone Metro Mirror relationship and the Consistency Group.

9.13.15 Switching copy direction for a Metro Mirror relationship


When a Metro Mirror relationship is in the Consistent synchronized state, we can change the copy direction for the relationship using the svctask switchrcrelationship command, specifying the primary volume. If the specified volume is already a primary when you issue this command, then the command has no effect. In Example 9-157, we change the copy direction for the stand-alone Metro Mirror relationship by specifying the auxiliary volume to become the primary.

Important: When the copy direction is switched, it is crucial that there is no outstanding I/O to the volume that transitions from the primary to the secondary, because all of the I/O will be inhibited to that volume when it becomes the secondary. Therefore, careful planning is required prior to using the svctask switchrcrelationship command.
Example 9-157 Switching the copy direction for a Metro Mirror relationship

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL3 id 15 name MMREL3 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1


master_vdisk_id 15 master_vdisk_name MM_App_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 2 aux_vdisk_name MM_App_Sec primary master consistency_group_id consistency_group_name state consistent_synchronized bg_copy_priority 50 progress freeze_time status online sync copy_type metro IBM_2145:ITSO-CLS1:admin>svctask switchrcrelationship -primary aux MMREL3 IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL3 id 15 name MMREL3 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 15 master_vdisk_name MM_App_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 2 aux_vdisk_name MM_App_Sec primary aux consistency_group_id consistency_group_name state consistent_synchronized bg_copy_priority 50 progress freeze_time status online sync copy_type metro

9.13.16 Switching copy direction for a Metro Mirror Consistency Group


When a Metro Mirror Consistency Group is in the Consistent synchronized state, we can change the copy direction for the Consistency Group by using the svctask switchrcconsistgrp command and specifying the primary volume. If the specified volume is already a primary when you issue this command, then the command has no effect. In Example 9-158 on page 534, we change the copy direction for the Metro Mirror Consistency Group by specifying the auxiliary volume to become the primary.


Important: When the copy direction is switched, it is crucial that there is no outstanding I/O to the volume that transitions from primary to secondary, because all of the I/O will be inhibited when that volume becomes the secondary. Therefore, careful planning is required prior to using the svctask switchrcconsistgrp command.
Example 9-158 Switching the copy direction for a Metro Mirror Consistency Group

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM id 0 name CG_W2K3_MM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary master state consistent_synchronized relationship_count 2 freeze_time status sync copy_type metro RC_rel_id 13 RC_rel_name MMREL1 RC_rel_id 14 RC_rel_name MMREL2 IBM_2145:ITSO-CLS1:admin>svctask switchrcconsistgrp -primary aux CG_W2K3_MM IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM id 0 name CG_W2K3_MM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary aux state consistent_synchronized relationship_count 2 freeze_time status sync copy_type metro RC_rel_id 13 RC_rel_name MMREL1 RC_rel_id 14 RC_rel_name MMREL2


9.13.17 Creating an SVC partnership among many clusters


Starting with SVC 5.1, you can have a cluster partnership among many SVC clusters. This capability allows you to create four configurations, using a maximum of four connected clusters:
- Star configuration
- Triangle configuration
- Fully connected configuration
- Daisy-chain configuration

In this section, we describe how to configure the SVC cluster partnership for each configuration.

Important: To have a supported and working configuration, all SVC clusters must be at level 5.1 or higher.

In our scenarios, we configure the SVC partnership by referring to the clusters as A, B, C, and D:
- ITSO-CLS1 = A
- ITSO-CLS2 = B
- ITSO-CLS3 = C
- ITSO-CLS4 = D

Example 9-159 shows the available clusters for a partnership using the lsclustercandidate command on each cluster.
Example 9-159 Available clusters

IBM_2145:ITSO-CLS1:admin>svcinfo lsclustercandidate id configured name 0000020069E03A42 no ITSO-CLS3 0000020063E03A38 no ITSO-CLS4 0000020061006FCA no ITSO-CLS2 IBM_2145:ITSO-CLS2:admin>svcinfo lsclustercandidate id configured cluster_name 000002006AE04FC4 no ITSO-CLS1 0000020069E03A42 no ITSO-CLS3 0000020063E03A38 no ITSO-CLS4 IBM_2145:ITSO-CLS3:admin>svcinfo lsclustercandidate id configured name 000002006AE04FC4 no ITSO-CLS1 0000020063E03A38 no ITSO-CLS4 0000020061006FCA no ITSO-CLS2 IBM_2145:ITSO-CLS4:admin>svcinfo lsclustercandidate id configured name 0000020069E03A42 no ITSO-CLS3 000002006AE04FC4 no ITSO-CLS1 0000020061006FCA no ITSO-CLS2


9.13.18 Star configuration partnership


Figure 9-8 shows the star configuration.

Figure 9-8 Star configuration

Example 9-160 shows the sequence of mkpartnership commands to execute to create a star configuration.
Example 9-160 Creating a star configuration using the mkpartnership command

From ITSO-CLS1 to multiple clusters
IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2
IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3
IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4

From ITSO-CLS2 to ITSO-CLS1
IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1

From ITSO-CLS3 to ITSO-CLS1
IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1

From ITSO-CLS4 to ITSO-CLS1
IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1

From ITSO-CLS1
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
000002006AE04FC4:ITSO-CLS1:local:::000002006AE04FC4
0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA
0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42
0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38

From ITSO-CLS2
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020061006FCA:ITSO-CLS2:local:::0000020061006FCA
000002006AE04FC4:ITSO-CLS1:remote:fully_configured::000002006AE04FC4
0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42
0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38

From ITSO-CLS3
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020069E03A42:ITSO-CLS3:local:::0000020069E03A42
000002006AE04FC4:ITSO-CLS1:remote:fully_configured::000002006AE04FC4
0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA
0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38

From ITSO-CLS4
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020063E03A38:ITSO-CLS4:local:::0000020063E03A38
000002006AE04FC4:ITSO-CLS1:remote:fully_configured:50:000002006AE04FC4
0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA
0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42

After the SVC partnership has been configured, you can configure any rcrelationship or rcconsistgrp that you need. Make sure that a single volume is only in one relationship.
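With the star configuration in place, remote copy relationships can then be defined between the hub and any partnered cluster. The following is a minimal sketch only; the volume names Star_Vol_Pri and Star_Vol_Sec are hypothetical and must already exist, with identical sizes, on ITSO-CLS1 and ITSO-CLS2 respectively:

IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master Star_Vol_Pri -aux Star_Vol_Sec -cluster ITSO-CLS2 -name STAR_REL1

Because a volume can belong to only one relationship, plan which partnered cluster holds the auxiliary copy of each hub volume before creating the relationships.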

Triangle configuration
Figure 9-9 shows the triangle configuration.

Figure 9-9 Triangle configuration

Example 9-161 shows the sequence of mkpartnership commands to execute to create a triangle configuration.
Example 9-161 Creating a triangle configuration

From ITSO-CLS1 to ITSO-CLS2 and ITSO-CLS3


IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2
IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3

From ITSO-CLS2 to ITSO-CLS1 and ITSO-CLS3
IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1
IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3

From ITSO-CLS3 to ITSO-CLS1 and ITSO-CLS2
IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1
IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2

From ITSO-CLS1
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
000002006AE04FC4:ITSO-CLS1:local:::000002006AE04FC4
0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA
0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42

From ITSO-CLS2
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020061006FCA:ITSO-CLS2:local:::0000020061006FCA
000002006AE04FC4:ITSO-CLS1:remote:fully_configured:50:000002006AE04FC4
0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42

From ITSO-CLS3
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020069E03A42:ITSO-CLS3:local:::0000020069E03A42
000002006AE04FC4:ITSO-CLS1:remote:fully_configured:50:000002006AE04FC4
0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA

After the SVC partnership has been configured, you can configure any rcrelationship or rcconsistgrp that you need. Make sure that a single volume is only in one relationship.

Fully connected configuration


Figure 9-10 on page 539 shows the fully connected configuration.


Figure 9-10 Fully connected configuration

Example 9-162 shows the sequence of mkpartnership commands to execute to create a fully connected configuration.
Example 9-162 Creating a fully connected configuration

From ITSO-CLS1 to ITSO-CLS2, ITSO-CLS3 and ITSO-CLS4 IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2 IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3 IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4 From ITSO-CLS2 to ITSO-CLS1, ITSO-CLS3 and ITSO-CLS4 IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1 IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3 IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4 From ITSO-CLS3 to ITSO-CLS1, ITSO-CLS3 and ITSO-CLS4 IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1 IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2 IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4 From ITSO-CLS4 to ITSO-CLS1, ITSO-CLS2 and ITSO-CLS3 IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1 IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2 IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3 From ITSO-CLS1 IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 000002006AE04FC4:ITSO-CLS1:local:::000002006AE04FC4 0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA 0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42 0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38

From ITSO-CLS2 IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 0000020061006FCA:ITSO-CLS2:local:::0000020061006FCA 000002006AE04FC4:ITSO-CLS1:remote:fully_configured::000002006AE04FC4 0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42 0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38 From ITSO-CLS3 IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 0000020069E03A42:ITSO-CLS3:local:::0000020069E03A42 000002006AE04FC4:ITSO-CLS1:remote:fully_configured::000002006AE04FC4 0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA 0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38 From ITSO-CLS4 IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 0000020063E03A38:ITSO-CLS4:local:::0000020063E03A38 000002006AE04FC4:ITSO-CLS1:remote:fully_configured:50:000002006AE04FC4 0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA 0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42

After the SVC partnership has been configured, you can configure any rcrelationship or rcconsistgrp that you need. Make sure that a single volume is only in one relationship.

Daisy-chain configuration
Figure 9-11 shows the daisy-chain configuration.

Figure 9-11 Daisy-chain configuration

Example 9-163 shows the sequence of mkpartnership commands to execute to create a daisy-chain configuration.
Example 9-163 Creating a daisy-chain configuration

From ITSO-CLS1 to ITSO-CLS2
IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2

From ITSO-CLS2 to ITSO-CLS1 and ITSO-CLS3
IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1
IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3

From ITSO-CLS3 to ITSO-CLS2 and ITSO-CLS4
IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2
IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4

From ITSO-CLS4 to ITSO-CLS3
IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3

From ITSO-CLS1
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
000002006AE04FC4:ITSO-CLS1:local:::000002006AE04FC4
0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA

From ITSO-CLS2
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020061006FCA:ITSO-CLS2:local:::0000020061006FCA
000002006AE04FC4:ITSO-CLS1:remote:fully_configured::000002006AE04FC4
0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42

From ITSO-CLS3
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020069E03A42:ITSO-CLS3:local:::0000020069E03A42
000002006AE04FC4:ITSO-CLS1:remote:fully_configured::000002006AE04FC4
0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA

From ITSO-CLS4
IBM_2145:ITSO-CLS4:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020063E03A38:ITSO-CLS4:local:::0000020063E03A38

After the SVC partnership has been configured, you can configure any rcrelationship or rcconsistgrp that you need. Make sure that a single volume is only in one relationship.

9.14 Global Mirror operation


In the following scenario, we set up an intercluster Global Mirror relationship between the SVC cluster ITSO-CLS1 at the primary site and the SVC cluster ITSO-CLS4 at the secondary site.


Note: This example is for an intercluster Global Mirror operation only. If you want to set up an intracluster operation, we highlight those parts in the following procedure that you do not need to perform.

Table 9-4 shows the details of the volumes.
Table 9-4 Details of volumes for Global Mirror relationship scenario

Content of volume      Volumes at primary site    Volumes at secondary site
Database files         GM_DB_Pri                  GM_DB_Sec
Database log files     GM_DBLog_Pri               GM_DBLog_Sec
Application files      GM_App_Pri                 GM_App_Sec

Because data consistency is needed across GM_DB_Pri and GM_DBLog_Pri, we create a Consistency Group to handle Global Mirror relationships for them. Because in this scenario the application files are independent of the database, we create a stand-alone Global Mirror relationship for GM_App_Pri. Figure 9-12 illustrates the Global Mirror relationship setup.

Figure 9-12 Global Mirror scenario

9.14.1 Setting up Global Mirror


In the following section, we assume that the source and target volumes have already been created and that the ISLs and zoning are in place, enabling the SVC clusters to communicate. To set up the Global Mirror, perform the following steps:
1. Create an SVC partnership between ITSO-CLS1 and ITSO-CLS4 on both SVC clusters:
   - Bandwidth: 10 MBps


2. Create a Global Mirror Consistency Group:
   - Name: CG_W2K3_GM
3. Create the Global Mirror relationship for GM_DB_Pri:
   - Master: GM_DB_Pri
   - Auxiliary: GM_DB_Sec
   - Auxiliary SVC cluster: ITSO-CLS4
   - Name: GMREL1
   - Consistency Group: CG_W2K3_GM
4. Create the Global Mirror relationship for GM_DBLog_Pri:
   - Master: GM_DBLog_Pri
   - Auxiliary: GM_DBLog_Sec
   - Auxiliary SVC cluster: ITSO-CLS4
   - Name: GMREL2
   - Consistency Group: CG_W2K3_GM
5. Create the Global Mirror relationship for GM_App_Pri:
   - Master: GM_App_Pri
   - Auxiliary: GM_App_Sec
   - Auxiliary SVC cluster: ITSO-CLS4
   - Name: GMREL3

In the following sections, we perform each step by using the CLI.

9.14.2 Creating an SVC partnership between ITSO-CLS1 and ITSO-CLS4


We create an SVC partnership between both clusters.

Note: If you are creating an intracluster Global Mirror, do not perform the next step; instead, go to 9.14.3, Changing link tolerance and cluster delay simulation on page 545.

Preverification
To verify that both clusters can communicate with each other, use the svcinfo lsclustercandidate command. Example 9-164 confirms that our clusters are communicating, because ITSO-CLS4 is an eligible SVC cluster candidate at ITSO-CLS1 for the SVC cluster partnership, and vice versa. Therefore, both clusters are communicating with each other.
Example 9-164 Listing the available SVC clusters for partnership

IBM_2145:ITSO-CLS1:admin>svcinfo lsclustercandidate
id configured cluster_name
0000020068603A42 no ITSO-CLS4

IBM_2145:ITSO-CLS4:admin>svcinfo lsclustercandidate
id configured cluster_name
0000020060C06FCA no ITSO-CLS1

In Example 9-165 on page 544, we show the output of the svcinfo lscluster command before setting up the SVC clusters partnership for Global Mirror. We show this output for comparison after we have set up the SVC partnership.


Example 9-165 Pre-verification of cluster configuration

IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:cluster_IP_address:cluster_service_IP_address:cluster_IP_address_6:cluster_service_IP_address_6:id_alias
0000020060C06FCA:ITSO-CLS1:local:::10.64.210.240:10.64.210.241:::0000020060C06FCA

IBM_2145:ITSO-CLS4:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:cluster_IP_address:cluster_service_IP_address:cluster_IP_address_6:cluster_service_IP_address_6:id_alias
0000020063E03A38:ITSO-CLS4:local:::10.64.210.246:10.64.210.247:::0000020063E03A38

Partnership between clusters


In Example 9-166, we create the partnership from ITSO-CLS1 to ITSO-CLS4, specifying a 10 MBps bandwidth to use for the background copy. To verify the status of the newly created partnership, we issue the svcinfo lscluster command. Notice that the new partnership is only partially configured. It will remain partially configured until we run the mkpartnership command on the other cluster.
Example 9-166 Creating the partnership from ITSO-CLS1 to ITSO-CLS4 and verifying the partnership

IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 10 ITSO-CLS4
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster
id name location partnership bandwidth id_alias
000002006AE04FC4 ITSO-CLS1 local 000002006AE04FC4
0000020063E03A38 ITSO-CLS4 remote partially_configured_local 10 0000020063E03A38

In Example 9-167, we create the partnership from ITSO-CLS4 back to ITSO-CLS1, specifying a 10 MBps bandwidth to be used for the background copy. After creating the partnership, verify that the partnership is fully configured by reissuing the svcinfo lscluster command.
Example 9-167 Creating the partnership from ITSO-CLS4 to ITSO-CLS1 and verifying the partnership

IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 10 ITSO-CLS1
IBM_2145:ITSO-CLS4:admin>svcinfo lscluster
id name location partnership bandwidth id_alias
0000020063E03A38 ITSO-CLS4 local 0000020063E03A38
000002006AE04FC4 ITSO-CLS1 remote fully_configured 10 000002006AE04FC4

IBM_2145:ITSO-CLS1:admin>svcinfo lscluster
id name location partnership bandwidth id_alias
000002006AE04FC4 ITSO-CLS1 local 000002006AE04FC4
0000020063E03A38 ITSO-CLS4 remote fully_configured 10 0000020063E03A38


9.14.3 Changing link tolerance and cluster delay simulation


The gm_link_tolerance parameter defines the sensitivity of the SVC to inter-link overload conditions. The value is the number of seconds of continuous link difficulties that will be tolerated before the SVC stops the remote copy relationships to prevent affecting host I/O at the primary site. To change the value, use the following command:

svctask chcluster -gmlinktolerance link_tolerance

The link_tolerance value is between 60 and 86,400 seconds in increments of 10 seconds. The default value for the link tolerance is 300 seconds. A value of 0 disables link tolerance.

Important: We strongly suggest that you use the default value. If the link is overloaded for a period, which affects host I/O at the primary site, the relationships will be stopped to protect those hosts.

Intercluster and intracluster delay simulation


This Global Mirror feature permits a simulation of a delayed write to a remote volume. This feature allows testing to be performed that detects colliding writes, and you can use this feature to test an application before the full deployment of the Global Mirror feature. The delay simulation can be enabled separately for each intracluster or intercluster Global Mirror. To enable this feature, run the following command for either the intercluster or the intracluster simulation:

For intercluster: svctask chcluster -gminterdelaysimulation <inter_cluster_delay_simulation>

For intracluster: svctask chcluster -gmintradelaysimulation <intra_cluster_delay_simulation>

The inter_cluster_delay_simulation and intra_cluster_delay_simulation values express the amount of time (in milliseconds) that secondary I/Os are delayed for intercluster and intracluster relationships, respectively. These values specify the number of milliseconds that I/O activity (that is, copying a primary volume to a secondary volume) is delayed. You can set a value from 0 to 100 milliseconds in 1 millisecond increments for the cluster_delay_simulation in the previous commands. A value of zero (0) disables the feature. To check the current settings for the delay simulation, use the following command:

svcinfo lscluster <clustername>

In Example 9-168, we show the modification of the delay simulation value and a change of the Global Mirror link tolerance parameters. We also show the changed values of the Global Mirror link tolerance and delay simulation parameters.
Example 9-168 Delay simulation and link tolerance modification

IBM_2145:ITSO-CLS1:admin>svctask chcluster -gminterdelaysimulation 20
IBM_2145:ITSO-CLS1:admin>svctask chcluster -gmintradelaysimulation 40
IBM_2145:ITSO-CLS1:admin>svctask chcluster -gmlinktolerance 200
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster 000002006AE04FC4
id 000002006AE04FC4 name ITSO-CLS1 location local partnership bandwidth total_mdisk_capacity 160.0GB


space_in_mdisk_grps 160.0GB space_allocated_to_vdisks 19.00GB total_free_space 141.0GB statistics_status off statistics_frequency 15 required_memory 8192 cluster_locale en_US time_zone 520 US/Pacific code_level 5.1.0.0 (build 17.1.0908110000) FC_port_speed 2Gb console_IP id_alias 000002006AE04FC4 gm_link_tolerance 200 gm_inter_cluster_delay_simulation 20 gm_intra_cluster_delay_simulation 40 email_reply email_contact email_contact_primary email_contact_alternate email_contact_location email_state invalid inventory_mail_interval 0 total_vdiskcopy_capacity 19.00GB total_used_capacity 19.00GB total_overallocation 11 total_vdisk_capacity 19.00GB cluster_ntp_IP_address cluster_isns_IP_address iscsi_auth_method none iscsi_chap_secret auth_service_configured no auth_service_enabled no auth_service_url auth_service_user_name auth_service_pwd_set no auth_service_cert_set no relationship_bandwidth_limit 25

9.14.4 Creating a Global Mirror Consistency Group


In Example 9-169, we create the Global Mirror Consistency Group using the svctask mkrcconsistgrp command. We will use this Consistency Group for the Global Mirror relationships for the database volumes. The Consistency Group is named CG_W2K3_GM.
Example 9-169 Creating the Global Mirror Consistency Group CG_W2K3_GM

IBM_2145:ITSO-CLS1:admin>svctask mkrcconsistgrp -cluster ITSO-CLS4 -name CG_W2K3_GM
RC Consistency Group, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp
id name master_cluster_id master_cluster_name aux_cluster_id aux_cluster_name primary state relationship_count copy_type
0 CG_W2K3_GM 000002006AE04FC4 ITSO-CLS1 0000020063E03A38 ITSO-CLS4 empty 0 empty_group

9.14.5 Creating Global Mirror relationships


In Example 9-170, we create the GMREL1 and GMREL2 Global Mirror relationships for the GM_DB_Pri and GM_DBLog_Pri volumes. We also make them members of the CG_W2K3_GM Global Mirror Consistency Group. We use the svcinfo lsvdisk command to list all of the volumes in the ITSO-CLS1 cluster and then use the svcinfo lsrcrelationshipcandidate command to show the possible volume candidates for GM_DB_Pri in ITSO-CLS4. After checking all of these conditions, we use the svctask mkrcrelationship command to create the Global Mirror relationship. To verify the newly created Global Mirror relationships, we list them with the svcinfo lsrcrelationship command.
Example 9-170 Creating GMREL1 and GMREL2 Global Mirror relationships

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk -filtervalue name=GM*
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state
16 GM_App_Pri 0 io_grp0 online 0 MDG_DS47 1.00GB striped 6005076801AB813F1000000000000013 0 1 empty
17 GM_DB_Pri 0 io_grp0 online 0 MDG_DS47 1.00GB striped 6005076801AB813F1000000000000014 0 1 empty
18 GM_DBLog_Pri 0 io_grp0 online 0 MDG_DS47 1.00GB striped 6005076801AB813F1000000000000015 0 1 empty
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationshipcandidate -aux ITSO-CLS4 -master GM_DB_Pri
id vdisk_name
0 MM_DB_Sec
1 MM_Log_Sec
2 MM_App_Sec
3 GM_App_Sec
4 GM_DB_Sec
5 GM_DBLog_Sec
6 SEV
IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master GM_DB_Pri -aux GM_DB_Sec -cluster ITSO-CLS4 -consistgrp CG_W2K3_GM -name GMREL1 -global
RC Relationship, id [17], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master GM_DBLog_Pri -aux GM_DBLog_Sec -cluster ITSO-CLS4 -consistgrp CG_W2K3_GM -name GMREL2 -global
RC Relationship, id [18], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship
id name master_cluster_id master_cluster_name master_vdisk_id master_vdisk_name aux_cluster_id aux_cluster_name aux_vdisk_id aux_vdisk_name primary consistency_group_id consistency_group_name state bg_copy_priority progress copy_type
17 GMREL1 000002006AE04FC4 ITSO-CLS1 17 GM_DB_Pri 0000020063E03A38 ITSO-CLS4 4 GM_DB_Sec master 0 CG_W2K3_GM inconsistent_stopped 50 0 global
18 GMREL2 000002006AE04FC4 ITSO-CLS1 18 GM_DBLog_Pri 0000020063E03A38 ITSO-CLS4 5 GM_DBLog_Sec master 0 CG_W2K3_GM inconsistent_stopped 50 0 global

9.14.6 Creating the stand-alone Global Mirror relationship for GM_App_Pri


In Example 9-171, we create the stand-alone Global Mirror relationship GMREL3 for GM_App_Pri. After it is created, we will check the status of each of our Global Mirror relationships. Notice that the status of GMREL3 is consistent_stopped, because it was created with the -sync option. The -sync option indicates that the secondary (auxiliary) volume is already synchronized with the primary (master) volume. The initial background synchronization is skipped when this option is used. GMREL1 and GMREL2 are in the inconsistent_stopped state, because they were not created with the -sync option, so their auxiliary volumes need to be synchronized with their primary volumes.
Example 9-171 Creating a stand-alone Global Mirror relationship and verifying it

IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master GM_App_Pri -aux GM_App_Sec -cluster ITSO-CLS4 -sync -name GMREL3 -global
RC Relationship, id [16], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship -delim :
id:name:master_cluster_id:master_cluster_name:master_vdisk_id:master_vdisk_name:aux_cluster_id:aux_cluster_name:aux_vdisk_id:aux_vdisk_name:primary:consistency_group_id:consistency_group_name:state:bg_copy_priority:progress:copy_type
16:GMREL3:000002006AE04FC4:ITSO-CLS1:16:GM_App_Pri:0000020063E03A38:ITSO-CLS4:3:GM_App_Sec:master:::consistent_stopped:50:100:global
17:GMREL1:000002006AE04FC4:ITSO-CLS1:17:GM_DB_Pri:0000020063E03A38:ITSO-CLS4:4:GM_DB_Sec:master:0:CG_W2K3_GM:inconsistent_stopped:50:0:global
18:GMREL2:000002006AE04FC4:ITSO-CLS1:18:GM_DBLog_Pri:0000020063E03A38:ITSO-CLS4:5:GM_DBLog_Sec:master:0:CG_W2K3_GM:inconsistent_stopped:50:0:global

9.14.7 Starting Global Mirror


Now that we have created the Global Mirror Consistency Group and relationships, we are ready to use the Global Mirror relationships in our environment. When implementing Global Mirror, the goal is to reach a consistent and synchronized state that can provide redundancy in case a hardware failure occurs that affects the SAN at the production site. In this section, we show how to start the stand-alone Global Mirror relationships and the Consistency Group.


9.14.8 Starting a stand-alone Global Mirror relationship


In Example 9-172, we start the stand-alone Global Mirror relationship named GMREL3. Because the Global Mirror relationship was in the Consistent stopped state and no updates have been made to the primary volume, the relationship quickly enters the Consistent synchronized state.
Example 9-172 Starting the stand-alone Global Mirror relationship

IBM_2145:ITSO-CLS1:admin>svctask startrcrelationship GMREL3 IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL3 id 16 name GMREL3 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 16 master_vdisk_name GM_App_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 3 aux_vdisk_name GM_App_Sec primary master consistency_group_id consistency_group_name state consistent_synchronized bg_copy_priority 50 progress freeze_time status online sync copy_type global

9.14.9 Starting a Global Mirror Consistency Group


In Example 9-173, we start the CG_W2K3_GM Global Mirror Consistency Group. Because the Consistency Group was in the Inconsistent stopped state, it enters the Inconsistent copying state until the background copy has completed for all of the relationships that are in the Consistency Group. Upon completion of the background copy, the CG_W2K3_GM Global Mirror Consistency Group enters the Consistent synchronized state (see Example 9-175).
Example 9-173 Starting the Global Mirror Consistency Group

IBM_2145:ITSO-CLS1:admin>svctask startrcconsistgrp CG_W2K3_GM IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM id 0 name CG_W2K3_GM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary master state inconsistent_copying relationship_count 2 freeze_time


status sync copy_type global RC_rel_id 17 RC_rel_name GMREL1 RC_rel_id 18 RC_rel_name GMREL2

9.14.10 Monitoring background copy progress


To monitor the background copy progress, use the svcinfo lsrcrelationship command. This command shows us all of the defined Global Mirror relationships if it is used without any parameters. In the command output, progress indicates the current background copy progress. Example 9-174 shows our Global Mirror relationships. Using SNMP traps: Setting up SNMP traps for the SVC enables automatic notification when Global Mirror Consistency Groups or relationships change state.
Example 9-174 Monitoring background copy progress example

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL1 id 17 name GMREL1 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 17 master_vdisk_name GM_DB_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 4 aux_vdisk_name GM_DB_Sec primary master consistency_group_id 0 consistency_group_name CG_W2K3_GM state inconsistent_copying bg_copy_priority 50 progress 38 freeze_time status online sync copy_type global IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL2 id 18 name GMREL2 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 18 master_vdisk_name GM_DBLog_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 5 aux_vdisk_name GM_DBLog_Sec primary master consistency_group_id 0


consistency_group_name CG_W2K3_GM state inconsistent_copying bg_copy_priority 50 progress 40 freeze_time status online sync copy_type global

When all of the Global Mirror relationships complete the background copy, the Consistency Group enters the Consistent synchronized state, as shown in Example 9-175.
Example 9-175 Listing the Global Mirror Consistency Group

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM id 0 name CG_W2K3_GM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary master state consistent_synchronized relationship_count 2 freeze_time status sync copy_type global RC_rel_id 17 RC_rel_name GMREL1 RC_rel_id 18 RC_rel_name GMREL2
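While waiting for this state to be reached, you can also query the concise relationship view instead of listing each relationship individually. The following is a minimal sketch, assuming that consistency_group_name is accepted as a filter attribute on your code level:

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship -filtervalue consistency_group_name=CG_W2K3_GM -delim :

Rerunning this command periodically shows the progress value increasing until the state reaches consistent_synchronized.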

9.14.11 Stopping and restarting Global Mirror


Now that the Global Mirror Consistency Group and relationships are running, we describe how to stop, restart, and change the direction of the stand-alone Global Mirror relationships and the Consistency Group. First, we show how to stop and restart the stand-alone Global Mirror relationships and the Consistency Group.

9.14.12 Stopping a stand-alone Global Mirror relationship


In Example 9-176, we stop the stand-alone Global Mirror relationship while enabling access (write I/O) to both the primary and the secondary volume. As a result, the relationship enters the Idling state.
Example 9-176 Stopping the stand-alone Global Mirror relationship

IBM_2145:ITSO-CLS1:admin>svctask stoprcrelationship -access GMREL3 IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL3 id 16 name GMREL3 master_cluster_id 000002006AE04FC4


master_cluster_name ITSO-CLS1 master_vdisk_id 16 master_vdisk_name GM_App_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 3 aux_vdisk_name GM_App_Sec primary consistency_group_id consistency_group_name state idling bg_copy_priority 50 progress freeze_time status sync in_sync copy_type global

9.14.13 Stopping a Global Mirror Consistency Group


In Example 9-177, we stop the Global Mirror Consistency Group without specifying the -access parameter. Therefore, the Consistency Group enters the Consistent stopped state.
Example 9-177 Stopping a Global Mirror Consistency Group without specifying -access

IBM_2145:ITSO-CLS1:admin>svctask stoprcconsistgrp CG_W2K3_GM IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM id 0 name CG_W2K3_GM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary master state consistent_stopped relationship_count 2 freeze_time status sync in_sync copy_type global RC_rel_id 17 RC_rel_name GMREL1 RC_rel_id 18 RC_rel_name GMREL2

If, afterwards, we want to enable access (write I/O) for the secondary volume, we can reissue the svctask stoprcconsistgrp command specifying the -access parameter. The Consistency Group transits to the Idling state, as shown in Example 9-178.
Example 9-178 Stopping a Global Mirror Consistency Group

IBM_2145:ITSO-CLS1:admin>svctask stoprcconsistgrp -access CG_W2K3_GM IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM id 0


name CG_W2K3_GM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary state idling relationship_count 2 freeze_time status sync in_sync copy_type global RC_rel_id 17 RC_rel_name GMREL1 RC_rel_id 18 RC_rel_name GMREL2

9.14.14 Restarting a Global Mirror relationship in the Idling state


When restarting a Global Mirror relationship in the Idling state, we must specify the copy direction. If any updates have been performed on either the master or the auxiliary volume, consistency will be compromised. Therefore, we must issue the -force parameter to restart the relationship; if the -force parameter is not used, the command fails. Example 9-179 shows the relationship being restarted with the -force parameter after updates have been made in the Idling state.
Example 9-179 Restarting a Global Mirror relationship after updates in the Idling state

IBM_2145:ITSO-CLS1:admin>svctask startrcrelationship -primary master -force GMREL3 IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL3 id 16 name GMREL3 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 16 master_vdisk_name GM_App_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 3 aux_vdisk_name GM_App_Sec primary master consistency_group_id consistency_group_name state consistent_synchronized bg_copy_priority 50 progress freeze_time status online sync copy_type global


9.14.15 Restarting a Global Mirror Consistency Group in the Idling state


When restarting a Global Mirror Consistency Group in the Idling state, we must specify the copy direction. If any updates have been performed on either the master or the auxiliary volume in any of the Global Mirror relationships in the Consistency Group, consistency will be compromised. Therefore, we must issue the -force parameter to start the relationship; if the -force parameter is not used, the command fails. In Example 9-180, we restart the Consistency Group and change the copy direction by specifying the auxiliary volumes to become the primaries.
Example 9-180 Restarting a Global Mirror relationship while changing the copy direction

IBM_2145:ITSO-CLS1:admin>svctask startrcconsistgrp -primary aux CG_W2K3_GM IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM id 0 name CG_W2K3_GM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary aux state consistent_synchronized relationship_count 2 freeze_time status sync copy_type global RC_rel_id 17 RC_rel_name GMREL1 RC_rel_id 18 RC_rel_name GMREL2

9.14.16 Changing direction for Global Mirror


In this section we show how to change the copy direction of the stand-alone Global Mirror relationships and the Consistency Group.

9.14.17 Switching copy direction for a Global Mirror relationship


When a Global Mirror relationship is in the Consistent synchronized state, we can change the copy direction for the relationship by using the svctask switchrcrelationship command and specifying the primary volume. If the volume that is specified as the primary when issuing this command is already a primary, the command has no effect. In Example 9-181, we change the copy direction for the stand-alone Global Mirror relationship, specifying the auxiliary volume to become the primary.


Important: When the copy direction is switched, it is crucial that there is no outstanding I/O to the volume that transits from primary to secondary, because all I/O will be inhibited to that volume when it becomes the secondary. Therefore, careful planning is required prior to using the svctask switchrcrelationship command.
Example 9-181 Switching the copy direction for a Global Mirror relationship

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL3 id 16 name GMREL3 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 16 master_vdisk_name GM_App_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 3 aux_vdisk_name GM_App_Sec primary master consistency_group_id consistency_group_name state consistent_synchronized bg_copy_priority 50 progress freeze_time status online sync copy_type global IBM_2145:ITSO-CLS1:admin>svctask switchrcrelationship -primary aux GMREL3 IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL3 id 16 name GMREL3 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 16 master_vdisk_name GM_App_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 3 aux_vdisk_name GM_App_Sec primary aux consistency_group_id consistency_group_name state consistent_synchronized bg_copy_priority 50 progress freeze_time status online sync copy_type global


9.14.18 Switching copy direction for a Global Mirror Consistency Group


When a Global Mirror Consistency Group is in the Consistent synchronized state, we can change the copy direction for the relationship by using the svctask switchrcconsistgrp command and specifying the primary volume. If the volume that is specified as the primary when issuing this command is already a primary, the command has no effect. In Example 9-182, we change the copy direction for the Global Mirror Consistency Group, specifying the auxiliary to become the primary.

Important: When the copy direction is switched, it is crucial that there is no outstanding I/O to the volume that transits from primary to secondary, because all I/O will be inhibited when that volume becomes the secondary. Therefore, careful planning is required prior to using the svctask switchrcconsistgrp command.
Example 9-182 Switching the copy direction for a Global Mirror Consistency Group

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM id 0 name CG_W2K3_GM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary master state consistent_synchronized relationship_count 2 freeze_time status sync copy_type global RC_rel_id 17 RC_rel_name GMREL1 RC_rel_id 18 RC_rel_name GMREL2 IBM_2145:ITSO-CLS1:admin>svctask switchrcconsistgrp -primary aux CG_W2K3_GM IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM id 0 name CG_W2K3_GM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary aux state consistent_synchronized relationship_count 2 freeze_time status sync copy_type global RC_rel_id 17 RC_rel_name GMREL1 RC_rel_id 18 RC_rel_name GMREL2


9.15 Service and maintenance


This section details the various service and maintenance tasks that you can execute within the SVC environment.

9.15.1 Upgrading software


This section explains how to upgrade the SVC software.

Package numbering and version


The format for software upgrade packages is four positive integers that are separated by periods. For example, a software upgrade package version looks similar to 6.1.0.0, and each software package is given a unique number.

Requirement: You must be running at least SVC 5.1.0.6 cluster code before upgrading to SVC 6.1.0.0 cluster code. Check the recommended software levels at this website:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/index.html
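Before you plan the upgrade, you can confirm the code level that the cluster is currently running by displaying the cluster properties and checking the code_level field. A minimal sketch using the cluster from our examples:

IBM_2145:ITSO-CLS4:admin>svcinfo lscluster ITSO-CLS4

Look for the code_level line in the output, for example, code_level 6.1.0.0 (build 47.6.1009140000), as shown later in Example 9-194.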

SVC software upgrade test utility


The SAN Volume Controller Software Upgrade Test Utility, which resides on the Master Console, checks software levels in the system against the recommended levels, which are documented on the support website. You will be informed if the software levels are current or if you need to download and install newer levels. You can download the utility and installation instructions from this link:
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S4000585

After the software file has been uploaded to the cluster (to the /home/admin/upgrade directory), you can select the software and apply it to the cluster using the GUI or the svctask applysoftware command. When a new code level is applied, it is automatically installed on all of the nodes within the cluster.

The underlying command-line tool runs the sw_preinstall script. This script checks the validity of the upgrade file and whether it can be applied over the current level. If the upgrade file is unsuitable, the pre-install script deletes the files, which prevents the buildup of invalid files on the cluster.

Precaution before upgrade


Software installation is normally considered to be a client's task. The SVC supports concurrent software upgrade: you can perform the software upgrade concurrently with I/O user operations and certain management activities. However, only limited CLI commands are operational from the time that the install command starts until the upgrade operation has either terminated successfully or been backed out. Certain commands fail with a message indicating that a software upgrade is in progress.

Before you upgrade the SVC software, ensure that all I/O paths between all hosts and SANs are working. Otherwise, the applications might have I/O failures during the software upgrade. You can verify the I/O paths by using the Subsystem Device Driver (SDD) datapath query commands. Example 9-183 shows the output.


Example 9-183 Query adapter

#datapath query adapter

Active Adapters :2
Adpt#  Name    State   Mode    Select  Errors  Paths  Active
    0  fscsi0  NORMAL  ACTIVE    1445       0      4       4
    1  fscsi1  NORMAL  ACTIVE    1888       0      4       4

#datapath query device

Total Devices : 2

DEV#: 0  DEVICE NAME: vpath0  TYPE: 2145  POLICY: Optimized
SERIAL: 60050768018201BF2800000000000000
==========================================================================
Path#   Adapter/Hard Disk   State   Mode     Select  Errors
    0   fscsi0/hdisk3       OPEN    NORMAL        0       0
    1   fscsi1/hdisk7       OPEN    NORMAL      972       0

DEV#: 1  DEVICE NAME: vpath1  TYPE: 2145  POLICY: Optimized
SERIAL: 60050768018201BF2800000000000002
==========================================================================
Path#   Adapter/Hard Disk   State   Mode     Select  Errors
    0   fscsi0/hdisk4       OPEN    NORMAL      784       0
    1   fscsi1/hdisk8       OPEN    NORMAL        0       0

Write-through mode: During a software upgrade, there are periods when not all of the nodes in the cluster are operational and, as a result, the cache operates in write-through mode. Note that write-through mode has an effect on the throughput, latency, and bandwidth aspects of performance.

Verify that your uninterruptible power supply unit configuration is also set up correctly (even if your cluster is running without problems). Specifically, make sure that the following conditions are true:
Your uninterruptible power supply units are all getting their power from an external source, and they are not daisy chained. Make sure that each uninterruptible power supply unit is not supplying power to another node's uninterruptible power supply unit.
The power cable and the serial cable that come from each node go back to the same uninterruptible power supply unit. If the cables are crossed and go back to separate uninterruptible power supply units, then during the upgrade, while one node is shut down, another node might also mistakenly be shut down.

Important: Do not share the SVC uninterruptible power supply unit with any other devices.

You must also ensure that all I/O paths are working for each host that runs I/O operations to the SAN during the software upgrade. You can check the I/O paths by using the datapath query commands. You do not need to check for hosts that have no active I/O operations to the SAN during the software upgrade.
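In addition to checking the host paths, it is also sensible to confirm that every node in the cluster is online before you start the upgrade. A minimal sketch using the standard node listing command (the exact columns vary by code level):

IBM_2145:ITSO-CLS4:admin>svcinfo lsnode -delim :

Each node must report a status of online before you proceed.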


Procedure
To upgrade the SVC cluster software, perform the following steps:
1. Before starting the upgrade, you must back up the configuration (see 9.16, Backing up the SVC cluster configuration on page 572) and save the backup config file in a safe place.
2. Save the data collection for support diagnosis in case of problems, as shown in Example 9-184.
Example 9-184 svc_snap command

IBM_2145:ITSO-CLS1:admin>svc_snap -dumpall
Collecting system information...
Copying files, please wait...
Copying files, please wait...
Listing files, please wait...
Copying files, please wait...
Listing files, please wait...
Copying files, please wait...
Listing files, please wait...
Dumping error log...
Creating snap package...
Snap data collected in /dumps/snap..100921.151239.tgz

3. List the dump that was generated by the previous command, as shown in Example 9-185.
Example 9-185 svcinfo ls2145dumps command

IBM_2145:ITSO-CLS1:admin>svcinfo ls2145dumps
IBM_2145:ITSO-CLS4:admin>svcinfo ls2145dumps
id 2145_filename
0 dump.104603.080801.161333
1 snap.104603.080815.153527.tgz
2 svc.config.cron.bak_n1
3 svc.config.cron.bak_node3
. above and below rows have been removed for brevity .
47 svc.config.cron.bak_104603
48 svc.config.cron.log_104603
49 svc.config.cron.xml_104603
50 svc.config.cron.sh_104603
51 snap..100921.151239.tgz
52 ups_log.b
53 ups_log.a

4. Save the generated dump in a safe place using the pscp command, as shown in Example 9-186.

Note: The pscp command will not work if you have not loaded your PuTTY SSH private key into the PuTTY Pageant agent, as shown in Figure 9-13.


Figure 9-13 Pageant example

Example 9-186 pscp -load command

C:\>pscp -load ITSOCL4 admin@10.18.229.84:/dumps/snap..100921.151239.tgz c:\
\dumps\snap..100921.151239.tgz | 597 kB | 597.7 kB/s | ETA: 00:00:00 | 100%

5. Upload the new software package using PuTTY Secure Copy. Enter the command as shown in Example 9-187.
Example 9-187 pscp -load command

C:\>pscp -load ITSOCL4 c:\IBM2145_INSTALL_6.1.0.0 admin@10.18.229.84:/home/admin/upgrade
100921a.tgz.gpg.gz | 300143 kB | 1389.6 kB/s | ETA: 00:00:00 | 100%

Also upload the SAN Volume Controller Software Upgrade Test Utility by using PuTTY Secure Copy. Enter the command as shown in Example 9-188.
Example 9-188 Upload utility

C:\>pscp -load ITSOCL4 IBM2145_INSTALL_svcupgradetest_4.1 admin@10.18.229.84:/home/admin/upgrade
IBM2145_INSTALL_svcupgrad | 11 kB | 12.0 kB/s | ETA: 00:00:00 | 100%

6. Verify that the packages were successfully delivered through the PuTTY command-line application by entering the svcinfo lssoftwaredumps command, as shown in Example 9-189.
Example 9-189 svcinfo lssoftwaredumps command

IBM_2145:ITSO-CLS4:admin>svcinfo lssoftwaredumps
id software_filename
0 IBM2145_INSTALL_6.1.0.0
1 IBM2145_INSTALL_svcupgradetest_4.1

7. Now that the packages are uploaded, install the SAN Volume Controller Software Upgrade Test Utility, as shown in Example 9-190.


Example 9-190 svctask applysoftware command

IBM_2145:ITSO-CLS4:admin>svctask applysoftware -file IBM2145_INSTALL_svcupgradetest_4.1
CMMVC6227I The package installed successfully.

8. Using the following command, test the upgrade for known issues that might prevent a software upgrade from completing successfully, as shown in Example 9-191.
Example 9-191 svcupgradetest command

IBM_2145:ITSO-CLS4:admin>svcupgradetest
svcupgradetest version 4.1
Please wait while the tool tests for issues that may prevent a software upgrade from completing successfully. The test will take approximately one minute to complete.
The test has not found any problems with the 2145 cluster.
Please proceed with the software upgrade.

Important: If the svcupgradetest command produces any errors, troubleshoot the errors using the maintenance procedures before continuing.

9. Use the svctask command set to apply the software upgrade, as shown in Example 9-192.
Example 9-192 Apply upgrade command example

IBM_2145:ITSO-CLS4:admin>svctask applysoftware -file IBM2145_INSTALL_6.1.0.0

While the upgrade runs, you can check the status as shown in Example 9-193.
Example 9-193 Check update status

IBM_2145:ITSO-CLS4:admin>svcinfo lssoftwareupgradestatus
status upgrading

10. The new code is distributed and applied to each node in the SVC cluster. After installation, each node is automatically restarted one at a time. If a node does not restart automatically during the upgrade, you must repair it manually.
11. Eventually both nodes display Cluster: on line one of the SVC front panel and the name of your cluster on line two of the panel. Be prepared for a wait (in our case, we waited approximately 40 minutes).

Performance: During this process, both your CLI and GUI vary from sluggish (slow) to unresponsive. The important thing is that I/O to the hosts can continue throughout this process.

12. To verify that the upgrade was successful, you can perform either of the following options:
You can run the svcinfo lscluster and svcinfo lsnodevpd commands as shown in Example 9-194. (We truncated the lscluster and lsnodevpd information for this example.)
Example 9-194 svcinfo lscluster and svcinfo lsnodevpd commands

IBM_2145:ITSO-CLS4:admin>svcinfo lscluster ITSO-CLS4


id 0000020060A06FB8 name ITSO-CLS4 location local partnership bandwidth total_mdisk_capacity 251.0GB space_in_mdisk_grps 251.0GB space_allocated_to_vdisks 27.00GB total_free_space 224.0GB statistics_status on statistics_frequency 15 required_memory 0 cluster_locale en_US time_zone 344 Etc/GMT-7 code_level 6.1.0.0 (build 47.6.1009140000) FC_port_speed 2Gb . tier_free_capacity 199.75GB email_contact2 email_contact2_primary email_contact2_alternate total_allocated_extent_capacity 28.50GB
IBM_2145:ITSO-CLS4:admin>svcinfo lsnodevpd 1
id 1
system board: 21 fields part_number 31P1090 .
software: 4 fields id 1 node_name n104603 WWNN 0x50050768010037dc code_level 6.1.0.0 (build 47.6.1009140000)

Or you can check whether the code installation has completed without error by copying the event log to your management workstation, as explained in 9.15.2, Running maintenance procedures on page 562. Open the event log in WordPad and search for the Software Install completed message.

At this point you have completed the required tasks to upgrade the SVC software.

9.15.2 Running maintenance procedures


Use the svctask finderr command to generate a list of any unfixed errors in the system. This command analyzes the last generated log that resides in the /dumps/elogs/ directory on the cluster. To generate a new log before analyzing unfixed errors, run the svctask dumperrlog command (Example 9-195).
Example 9-195 svctask dumperrlog command

IBM_2145:ITSO-CLS4:admin>svctask dumperrlog


This command generates an errlog_timestamp file, such as errlog_107662_100921_170547, where:
errlog is part of the default prefix for all event log files.
107662 is the panel name of the current configuration node.
100921 is the date (YYMMDD).
170547 is the time (HHMMSS).

You can add the -prefix parameter to your command to change the default prefix of errlog to something else (Example 9-196).
Example 9-196 svctask dumperrlog -prefix command

IBM_2145:ITSO-CLS4:admin>svctask dumperrlog -prefix ITSO-SVC4_errlog

This command creates a file called ITSO-SVC4_errlog_timestamp. To see the file name, enter the following command (Example 9-197).
Example 9-197 svcinfo lserrlogdumps command

IBM_2145:ITSO-CLS4:admin>svcinfo lserrlogdumps
id filename
0 errlog_107662_100921_170547
1 ITSO-SVC4_errlog_107662_100921_170648

Maximum number of event log dump files: A maximum of ten event log dump files per node will be kept on the cluster. When the eleventh dump is made, the oldest existing dump file for that node will be overwritten. Note that the directory might also hold log files retrieved from other nodes; these files are not counted. The SVC will delete the oldest file (when necessary) for this node to maintain the maximum number of files. The SVC will not delete files from other nodes unless you issue the cleardumps command.

After you generate your event log, you can issue the svctask finderr command to scan the event log for any unfixed events, as shown in Example 9-198.
Example 9-198 svctask finderr command

IBM_2145:ITSO-CLS4:admin>svctask finderr
Highest priority unfixed error code is [1550]

As you can see, we have one unfixed event on our system. To learn more about this unfixed event, look at the event log in more detail: download it onto your PC using the PuTTY Secure Copy process to copy the file from the cluster to your local management workstation, as shown in Example 9-199.
Example 9-199 pscp command: Copy event logs off of the SVC

In W2K3 Start Run cmd C:\Program Files\PuTTY>pscp -load ITSO-CLS4 admin@10.18.229.84:/dumps/elogs/ITSO-SVC4_errlog_107662_100921_170648 c:\ITSO-SVC4_errlog_107662_100921_170648| 147 kB | 147.8 kB/s | ETA: 00:00:00 | 100%


To use the Run option, you must know where your pscp.exe file is located. In this case, it is in the C:\Program Files\PuTTY\ folder. This command copies the file called ITSO-SVC4_errlog_107662_100921_170648 to the C:\ directory on our local workstation and keeps the file name ITSO-SVC4_errlog_107662_100921_170648.

Open the file in WordPad (Notepad does not format the window as well). You will see information similar to what is shown in Example 9-200. (We truncated this list for the purposes of this example.)
Example 9-200 errlog in WordPad

//------------------// Error Log Entries //-------------------

Error Log Entry 0
  Node Identifier       : n104603
  Object Type           : node
  Object ID             : 1
  Copy ID               :
  Sequence Number       : 101
  Root Sequence Number  : 101
  First Error Timestamp : Fri Sep 17 01:20:06 2010
                        : Epoch + 1284661206
  Last Error Timestamp  : Fri Sep 17 01:20:06 2010
                        : Epoch + 1284661206
  Error Count           : 1
  Error ID              : 980221 : Error log cleared
  Error Code            :
  Status Flag           : SNMP trap raised
  Type Flag             : INFORMATION

  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

By scrolling through, or searching for, the term unfixed, you can find more detail about the problem. You might see more entries in the error log that have a status of unfixed. After rectifying the problem, you can mark the event as fixed in the log by issuing the svctask cherrstate command against its sequence number; see Example 9-201.
Example 9-201 svctask cherrstate command

IBM_2145:ITSO-CLS4:admin>svctask cherrstate -sequencenumber 37404 If you accidentally mark the wrong event as fixed, you can mark it as unfixed again by entering the same command and appending the -unfix flag to the end, as shown in Example 9-202 on page 565.


Example 9-202 unfix flag

IBM_2145:ITSO-CLS4:admin>svctask cherrstate -sequencenumber 37406 -unfix

9.15.3 Setting up SNMP notification


To set up event notification, use the svctask mksnmpserver command. Example 9-203 shows an example of the mksnmpserver command.
Example 9-203 svctask mksnmpserver command

IBM_2145:ITSO-CLS4:admin>svctask mksnmpserver -error on -warning on -info on -ip 9.43.86.160 -community SVC
SNMP Server id [1] successfully created

This command sends all errors, warnings, and informational events to the SVC community on the SNMP manager with the IP address 9.43.86.160.
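To review or remove SNMP server definitions later, the lssnmpserver and rmsnmpserver commands are available. A brief sketch, assuming that server id 1 is the server that was created above:

IBM_2145:ITSO-CLS4:admin>svcinfo lssnmpserver
IBM_2145:ITSO-CLS4:admin>svctask rmsnmpserver 1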

9.15.4 Set syslog event notification


You can save a syslog to a defined syslog server as the SVC provides support for syslog in addition to email and SNMP traps. The syslog protocol is a client-server standard for forwarding log messages from a sender to a receiver on an IP network. You can use syslog to integrate log messages from various types of systems into a central repository. You can configure SVC to send information to six syslog servers. You use the svctask mksyslogserver command to configure the SVC using the CLI, as shown in Example 9-204. Using this command with the -h parameter gives you information about all of the available options. In our example, we only configure the SVC to use the default values for our syslog server.
Example 9-204 Configuring the syslog

IBM_2145:ITSO-CLS4:admin>svctask mksyslogserver -ip 10.64.210.231 -name Syslogserv1
Syslog Server id [1] successfully created

When we have configured our syslog server, we can display the current syslog server configurations in our cluster, as shown in Example 9-205.
Example 9-205 svcinfo lssyslogserver command

IBM_2145:ITSO-CLS4:admin>svcinfo lssyslogserver
id name        IP_address    facility error warning info
0  Syslogsrv   10.64.210.230 4        on    on      on
1  Syslogserv1 10.64.210.231 0        on    on      on
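If you later need to change or remove a syslog server definition, the chsyslogserver and rmsyslogserver commands can be used. A hedged sketch (the facility value shown is only an illustration; check the values that your syslog infrastructure expects):

IBM_2145:ITSO-CLS4:admin>svctask chsyslogserver -facility 1 Syslogserv1
IBM_2145:ITSO-CLS4:admin>svctask rmsyslogserver Syslogserv1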


9.15.5 Configuring error notification using an email server


The SVC can use an email server to send event notification and inventory emails to email users. It can transmit any combination of error, warning, and informational notification types. The SVC supports up to six email servers to provide redundant access to the external email network. The SVC uses the email servers in sequence until the email is successfully sent from the SVC. The attempt is successful when the SVC gets a positive acknowledgement from an email server that the email has been received by the server. If no port is specified, port 25 is the default port, as shown in Example 9-206.

Important: Before the SVC can start sending emails, we must run the svctask startemail command, which enables this service.
Example 9-206 The mkemailserver command syntax

IBM_2145:ITSO-CLS4:admin>svctask mkemailserver -ip 192.168.1.1
Email Server id [0] successfully created
IBM_2145:ITSO-CLS4:admin>svcinfo lsemailserver 0
id 0
name emailserver0
IP_address 192.168.1.1
port 25

We can configure an email user that will receive email notifications from the SVC cluster. We can define up to 12 users to receive emails from our SVC. Using the svcinfo lsemailuser command, we can verify who is already registered and what type of information is sent to that user, as shown in Example 9-207.
Example 9-207 svcinfo lsemailuser command

IBM_2145:ITSO-CLS4:admin>svcinfo lsemailuser
id name               address              user_type error warning info inventory
0  IBM_Support_Center callhome0@de.ibm.com support   on    off     off  on

We can also create a new user, as shown in Example 9-208 for a SAN administrator.
Example 9-208 svctask mkemailuser command

IBM_2145:ITSO-CLS4:admin>svctask mkemailuser -address SANadmin@ibm.com -error on -warning on -info on -inventory on
User, id [1], successfully created
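Remember that no notification is sent until the email service itself has been enabled with svctask startemail. A minimal sketch of enabling the service and sending a test notification (the testemail syntax shown here is an assumption; verify the options on your code level):

IBM_2145:ITSO-CLS4:admin>svctask startemail
IBM_2145:ITSO-CLS4:admin>svctask testemail -all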

9.15.6 Analyzing the event log


The following types of events are logged in the event log:
Events: an occurrence of significance to a task or system. Events can include completion or failure of an operation, a user action, or the change in state of a process.
Node errors: node error codes now have two classifications:
- Critical: errors that put the node into service state and prevent the node from joining the cluster (error codes 500 - 699). Note: Deleting a node from a cluster will also cause nodes to enter service state.
- Non-critical: partial hardware faults, for example, one PSU failed in a 2145-CF8 (error codes 800 - 899).

To display the event log, use the svcinfo lserrlog command or the svcinfo caterrlog command, as shown in Example 9-209 (the output is the same for either command).
Example 9-209 svcinfo caterrlog command

IBM_2145:ITSO-CLS4:admin>svcinfo caterrlog -first 10 -delim : id:type:fixed:SNMP_trap_raised:error_type:node_name:sequence_number:root_sequence_ number:first_timestamp:last_timestamp:number_of_errors:error_code:copy_id 0:cluster:no:yes:6:n104603:147:143:100922010005:100922010005:1:00981004: 0:cluster:no:no:5:n107662:0:0:100922010002:100922010002:1:00990203: 0:cluster:no:no:5:n107662:0:0:100921170859:100921170859:1:00990219: 0:cluster:no:no:5:n107662:0:0:100921170648:100921170648:1:00990220: 0:cluster:no:no:5:n107662:0:0:100921170547:100921170547:1:00990220: 0:cluster:no:no:5:n107662:0:0:100921170515:100921170515:1:00990219: 0:cluster:no:yes:6:n104603:146:143:100921165929:100921165929:1:00981003: 0:cluster:no:no:5:n107662:0:0:100921165909:100921165909:1:00990415: 1:node:no:yes:6:n104603:145:145:100921165904:100921165904:1:00987102: 1:node:no:yes:6:n107662:144:144:100921165904:100921165904:1:00980349: These commands allow you to view the last 10 events that were generated. Use the method described in 9.15.2, Running maintenance procedures on page 562 to upload and analyze the event log in more detail. To clear the event log, you can issue the svctask clearerrlog command, as shown in Example 9-210.
Example 9-210 svctask clearerrlog command

IBM_2145:ITSO-CLS4:admin>svctask clearerrlog
Do you really want to clear the log? y

Using the -force flag will stop any confirmation requests from appearing. When executed, this command will clear all of the entries from the event log. This process will proceed even if there are unfixed errors in the log. It also clears any status events that are in the log.

Note: This command is a destructive command for the event log. Only use this command when you have either rebuilt the cluster or when you have fixed a major problem that has caused many entries in the event log that you do not want to fix manually.

9.15.7 License settings


To change the licensing feature settings, use the svctask chlicense command. Before you change the licensing, you can display the licenses that you already have by issuing the svcinfo lslicense command, as shown in Example 9-211 on page 568.

Example 9-211 svcinfo lslicense command

IBM_2145:ITSO-CLS4:admin>svcinfo lslicense
used_flash 0.01
used_remote 0.01
used_virtualization 0.25
license_flash 5
license_remote 5
license_virtualization 5
license_physical_disks 0
license_physical_flash off
license_physical_remote off

The current license settings for the cluster are displayed in the viewing license settings log window. These settings show whether you are licensed to use the FlashCopy, Metro Mirror, Global Mirror, or Virtualization features. They also show the storage capacity that is licensed for virtualization. Typically, the license settings log contains entries, because feature options must be set as part of the web-based cluster creation process.

Consider, for example, that you have purchased an additional 20 TB of licensing for the Metro Mirror and Global Mirror feature on top of the existing 5 TB license, for a new total of 25 TB. Example 9-212 shows the command that you enter.
Example 9-212 svctask chlicense command

IBM_2145:ITSO-CLS1:admin>svctask chlicense -remote 25

To turn a feature off, enter 0 as the capacity for the feature that you want to disable. To verify that the changes you have made are reflected in your SVC configuration, you can issue the svcinfo lslicense command as before (see Example 9-213).
Example 9-213 svcinfo lslicense command: Verifying changes

IBM_2145:ITSO-CLS4:admin>svcinfo lslicense
used_flash 0.01
used_remote 0.01
used_virtualization 0.25
license_flash 5
license_remote 25
license_virtualization 5
license_physical_disks 0
license_physical_flash off
license_physical_remote off
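As noted previously, a feature is disabled by setting its licensed capacity to 0. For illustration only, the following sketch would disable the FlashCopy license on this cluster:

IBM_2145:ITSO-CLS4:admin>svctask chlicense -flash 0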

9.15.8 Listing dumps


Several commands are available for you to list the dumps that were generated over a period of time. You can use the lsxxxxdumps command, where xxxx means the object dumps, to return a list of dumps in the appropriate directory. These object dumps are available:
lserrlogdumps
lsfeaturedumps
lsiotracedumps
lsiostatsdumps
lssoftwaredumps
ls2145dumps

If no node is specified, the command lists the dumps that are available on the configuration node.

Error or event dump


The dumps that are contained in the /dumps/elogs directory are dumps of the contents of the event log at the time that the dump was taken. You create an error or event log dump by using the svctask dumperrlog command. This command dumps the contents of the error or event log to the /dumps/elogs directory. If you do not supply a file name prefix, the system uses the default errlog_ file name prefix. The full, default file name is errlog_NNNNNN_YYMMDD_HHMMSS. In this file name, NNNNNN is the node front panel name. If the command is used with the -prefix option, the value that is entered for the -prefix is used instead of errlog. The svcinfo lserrlogdumps command lists all of the dumps in the /dumps/elogs directory (Example 9-214).
Example 9-214 svcinfo lserrlogdumps command

IBM_2145:ITSO-CLS4:admin>svcinfo lserrlogdumps id filename 0 errlog_107662_100921_170547 1 ITSO-SVC4_errlog_107662_100921_170648

Featurization log dump


The dumps that are contained in the /dumps/feature directory are dumps of the featurization log. A featurization log dump is created by using the svctask dumpinternallog command. This command dumps the contents of the featurization log to the /dumps/feature directory to a file called feature.txt. Only one of these files exists, so every time that the svctask dumpinternallog command is run, this file is overwritten. The svcinfo lsfeaturedumps command lists all of the dumps in the /dumps/feature directory (Example 9-215).
Example 9-215 svcinfo lsfeaturedumps command

IBM_2145:ITSO-CLS4:admin>svcinfo lsfeaturedumps id feature_filename 0 feature.txt

I/O trace dump


Dumps that are contained in the /dumps/iotrace directory are dumps of I/O trace data. The type of data that is traced depends on the options that are specified by the svctask settrace command. The collection of the I/O trace data is started by using the svctask starttrace command. The I/O trace data collection is stopped when the svctask stoptrace command is used. When the trace is stopped, the data is written to the file. The file name is prefix_NNNNNN_YYMMDD_HHMMSS, where NNNNNN is the node front panel name, and prefix is the value that is entered by the user for the -filename parameter in the svctask settrace command.


The command to list all of the dumps in the /dumps/iotrace directory is the svcinfo lsiotracedumps command (Example 9-216).
Example 9-216 svcinfo lsiotracedumps command

IBM_2145:ITSO-CLS4:admin>svcinfo lsiotracedumps id iotrace_filename 0 tracedump_104643_080624_172208 1 iotrace_104643_080624_172451

I/O statistics dump


The dumps that are contained in the /dumps/iostats directory are the dumps of the I/O statistics for the disks on the cluster. An I/O statistics dump is created by using the svctask startstats command. As part of this command, you can specify a time interval at which you want the statistics to be written to the file (the default is 15 minutes). Every time that the time interval is encountered, the I/O statistics that are collected up to this point are written to a file in the /dumps/iostats directory. The file names that are used for storing I/O statistics dumps are m_stats_NNNNNN_YYMMDD_HHMMSS or v_stats_NNNNNN_YYMMDD_HHMMSS, depending on whether the statistics are for MDisks or volumes. In these file names, NNNNNN is the node front panel name. The command to list all of the dumps that are in the /dumps/iostats directory is the svcinfo lsiostatsdumps command (Example 9-217).
Example 9-217 svcinfo lsiostatsdumps command

IBM_2145:ITSO-CLS4:admin>svcinfo lsiostatsdumps id iostat_filename 0 Nd_stats_107662_100922_061303 1 Nv_stats_107662_100922_061303 2 Nm_stats_107662_100922_061303 3 Nn_stats_107662_100922_061303 4 Nd_stats_107662_100922_062801 5 Nm_stats_107662_100922_062801 ........

Software dump
The svcinfo lssoftwaredumps command lists the contents of the /home/admin/upgrade directory. Any files in this directory are copied there at the time that you perform a software upgrade. Example 9-218 shows the command.
Example 9-218 svcinfo lssoftwaredumps

IBM_2145:ITSO-CLS4:admin>svcinfo lssoftwaredumps id software_filename 0 IBM2145_INSTALL_6.1.0.0

Other node dumps


All of the svcinfo lsxxxxdumps commands can accept a node identifier as input (for example, append the node name to the end of any of the node dump commands). If this identifier is not specified, the list of files on the current configuration node is displayed. If the node identifier is specified, the list of files on that node is displayed.
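For example, to list the event log dumps that are held on a node other than the configuration node, append the node name to the listing command. A brief sketch, assuming a node named n4 (the node that is used in the cpdumps example that follows):

IBM_2145:ITSO-CLS4:admin>svcinfo lserrlogdumps n4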


However, files can only be copied from the current configuration node (using PuTTY Secure Copy). Therefore, you must issue the svctask cpdumps command to copy the files from a non-configuration node to the current configuration node. Subsequently, you can copy them to the management workstation using PuTTY Secure Copy.

For example, suppose you discover a dump file and want to copy it to your management workstation for further analysis. In this case, you must first copy the file to your current configuration node. To copy dumps from other nodes to the configuration node, use the svctask cpdumps command. In addition to the directory, you can specify a file filter. For example, if you specify /dumps/elogs/*.txt, all of the files in the /dumps/elogs directory that end in .txt are copied.

Wildcards: The following rules apply to the use of wildcards with the SAN Volume Controller CLI:
The wildcard character is an asterisk (*).
The command can contain a maximum of one wildcard.
When you use a wildcard, you must surround the filter entry with double quotation marks (""), for example:
>svctask cleardumps -prefix "/dumps/elogs/*.txt"

Example 9-219 shows an example of the cpdumps command.
Example 9-219 svctask cpdumps command

IBM_2145:ITSO-CLS4:admin>svctask cpdumps -prefix /dumps/configs n4

Now that you have copied the configuration dump file from node n4 to your configuration node, you can use PuTTY Secure Copy to copy the file to your management workstation for further analysis.

To clear the dumps, you can run the svctask cleardumps command. Again, you can append the node name if you want to clear dumps off of a node other than the current configuration node (the default for the svctask cleardumps command). The commands in Example 9-220 clear all logs or dumps from the SVC node n1.
Example 9-220 svctask cleardumps command

IBM_2145:ITSO-CLS4:admin>svctask cleardumps -prefix /dumps n1
IBM_2145:ITSO-CLS4:admin>svctask cleardumps -prefix /dumps/iostats n1
IBM_2145:ITSO-CLS4:admin>svctask cleardumps -prefix /dumps/iotrace n1
IBM_2145:ITSO-CLS4:admin>svctask cleardumps -prefix /dumps/feature n1
IBM_2145:ITSO-CLS4:admin>svctask cleardumps -prefix /dumps/config n1
IBM_2145:ITSO-CLS4:admin>svctask cleardumps -prefix /dumps/elog n1
IBM_2145:ITSO-CLS4:admin>svctask cleardumps -prefix /home/admin/upgrade n1

Application abends dump


The dumps that are contained in the /dumps directory are the dumps resulting from application abends (abnormal ends). These dumps are written to the /dumps directory. The default file names are dump.NNNNNN.YYMMDD.HHMMSS, where NNNNNN is the node front panel name.


In addition to the dump file, trace files can be written to this directory. These trace files are named NNNNNN.trc. The command to list all of the dumps in the /dumps directory is the svcinfo ls2145dumps command; see Example 9-221.
Example 9-221 svcinfo ls2145dumps command

IBM_2145:ITSO-CLS4:admin>svcinfo ls2145dumps id 2145_filename 0 107662.100917.062357.ups_log.tar.gz 1 107662.100917.141406.ups_log.tar.gz 2 107662.100917.144108.ups_log.tar.gz 3 dump.107662.100917.144514 4 000000.trc 5 ethernet.000000.trc 6 dump.107662.100917.145246 7 107662.100917.151723.ups_log.tar.gz 8 svc.config.cron.bak_104603 9 107662.100921.160923.ups_log.tar.gz 10 endd.trc.old 11 dpa_log_107662_20100921164920_00000000.xml.gz 12 ethernet.107662.trc 13 endd.trc 14 107662.trc 15 svc.config.cron.sh_107662 16 svc.config.cron.log_107662 17 svc.config.cron.xml_107662 18 dpa_heat.107662.100922.084254.data 19 ups_log.a 20 ups_log.b

9.16 Backing up the SVC cluster configuration


You can back up your cluster configuration by using the Backing Up a Cluster Configuration window or the CLI svcconfig command. In this section, we describe the overall procedure for backing up your cluster configuration and the conditions that must be satisfied to perform a successful backup.

The backup command extracts configuration data from the cluster and saves it to the svc.config.backup.xml file in the /tmp directory. This process also produces an svc.config.backup.sh file. You can study this file to see what other commands were issued to extract information. An svc.config.backup.log log is also produced. You can study this log for the details of what was done and when it was done. This log also includes information about the other commands that were issued.

Any pre-existing svc.config.backup.xml file is archived as the svc.config.backup.bak file. The system only keeps one archive. We strongly suggest that you immediately move the .XML file and related KEY files (see the following limitations) off the cluster for archiving. Then erase the files from the /tmp directory using the svcconfig clear -all command.

We further advise that you change all of the objects having default names to non-default names. Otherwise, a warning is produced for objects with default names.

Also, an object with a default name is restored with its original name with an _r appended. The prefix _ (underscore) is reserved for backup and restore command usage; do not use this prefix in any object names.

Important: The tool backs up logical configuration data only, not client data. It does not replace a traditional data backup and restore tool, but rather supplements such a tool with a way to back up and restore the client's configuration. To provide a complete backup and disaster recovery solution, you must back up both user (non-configuration) data and configuration (non-user) data. After the restoration of the SVC configuration, you must fully restore user (non-configuration) data to the cluster's disks.

9.16.1 Prerequisites
You must have the following prerequisites in place:
All nodes must be online.
No object name can begin with an underscore.
All objects must have non-default names, that is, names that are not assigned by the SVC. Although we advise that objects have non-default names at the time that the backup is taken, this prerequisite is not mandatory. Objects with default names are renamed when they are restored.

Example 9-222 shows an example of the svcconfig backup command.
Example 9-222 svcconfig backup command

IBM_2145:ITSO-CLS4:admin>svcconfig backup
............
CMMVC6130W Cluster ITSO-CLS1 with inter-cluster partnership fully_configured will not be restored
CMMVC6130W Cluster ITSO-CLS2 with inter-cluster partnership fully_configured will not be restored
.........................................................................................................
CMMVC6155I SVCCONFIG processing completed successfully

As you can see in Example 9-222, we received a CMMVC6130W Cluster ITSO-CLS1 with inter-cluster partnership fully_configured will not be restored message. This message indicates that individual clusters in a multi-cluster environment need to be backed up individually. If recovery is required, it will only be performed on the cluster where the recovery commands are executed. Example 9-223 shows the pscp command.
Example 9-223 pscp command

C:\Program Files\PuTTY>pscp -load ITSO-CLS4 admin@10.18.229.84:/tmp/svc.config.backup.xml c:\temp\clibackup.xml clibackup.xml | 97 kB | 97.2 kB/s | ETA: 00:00:00 | 100%


The following scenario illustrates the value of configuration backup:
1. Use the svcconfig command to create a backup file on the cluster that contains details about the current cluster configuration.
2. Store the backup configuration on a form of tertiary storage. You must copy the backup file from the cluster or it becomes lost if the cluster crashes.
3. If a sufficiently severe failure occurs, the cluster might be lost. Both the configuration data (for example, the cluster definitions of hosts, I/O Groups, MDGs, and MDisks) and the application data on the virtualized disks are lost. In this scenario, it is assumed that the application data can be restored from normal client backup procedures. However, before you can perform this restoration, you must reinstate the cluster as it was configured at the time of the failure. Therefore, you restore the same MDGs, I/O Groups, host definitions, and volumes that existed prior to the failure. Then you can copy the application data back onto these volumes and resume operations.
4. Recover the hardware: hosts, SVCs, disk controller systems, disks, and SAN fabric. The hardware and SAN fabric must physically be the same as the hardware and SAN fabric that were used before the failure.
5. Reinitialize the cluster with the configuration node; the other nodes will be recovered when restoring the configuration.
6. Restore your cluster configuration using the backup configuration file that was generated prior to the failure.
7. Restore the data on your volumes using your preferred restoration solution or with help from IBM Service.
8. Resume normal operations.

9.17 Restoring the SVC cluster configuration


Attention: It is extremely important that you always consult IBM Support before you restore the SVC cluster configuration from the backup. IBM Support can assist you in analyzing the root cause of why the cluster configuration was lost.

After the svcconfig restore -execute command is started, consider any prior user data on the volumes destroyed. The user data must be recovered through your usual application data backup and restore process.

See IBM TotalStorage Open Software Family SAN Volume Controller: Command-Line Interface User's Guide, GC27-2287, for more information about this topic. For a detailed description of the SVC configuration backup and restore functions, see IBM TotalStorage Open Software Family SAN Volume Controller: Configuration Guide, GC27-2286.
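For orientation only, the restore is a two-phase operation: a prepare phase that validates the backup file against the current cluster, followed by an execute phase that re-creates the objects. The following sketch shows the sequence; run it only under the direction of IBM Support:

IBM_2145:ITSO-CLS4:admin>svcconfig restore -prepare
IBM_2145:ITSO-CLS4:admin>svcconfig restore -execute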

9.17.1 Deleting configuration backup


Here we describe in detail the tasks that you can perform to delete the configuration backup that is stored in the configuration file directory on the cluster. Never clear this configuration without having a backup of your configuration stored in a separate, secure place.

When using the clear command, you erase the files in the /tmp directory. This command does not clear the running configuration and does not prevent the cluster from working, but it clears all of the configuration backup that is stored in the /tmp directory; see Example 9-224.
Example 9-224 svcconfig clear command

IBM_2145:ITSO-CLS4:admin>svcconfig clear -all . CMMVC6155I SVCCONFIG processing completed successfully

9.18 Working with the SVC Quorum MDisk


In this section we show how to list and change the SVC Cluster Quorum Managed Disk.

9.18.1 Listing the SVC Quorum MDisk


To list SVC cluster Quorum MDisks and view their number and status, issue the svcinfo lsquorum command as shown in Example 9-225. For more information about SVC Quorum Disk planning and configuration, see Chapter 3, Planning and configuration on page 57.
Example 9-225 lsquorum command and detail

IBM_2145:ITSO-CLS4:admin>svcinfo lsquorum
quorum_index status id name   controller_id controller_name active object_type
0            online 0  mdisk0 0             ITSO-4700       yes    mdisk
1            online 1  mdisk1 0             ITSO-4700       no     mdisk
2            online 2  mdisk2 0             ITSO-4700       no     mdisk

IBM_2145:ITSO-CLS4:admin>svcinfo lsquorum 0
quorum_index 0
status online
id 0
name mdisk0
controller_id 0
controller_name ITSO-4700
active yes
object_type mdisk

9.18.2 Changing the SVC Quorum Disk


To move one of your SVC Quorum MDisks from one MDisk to another, or from one storage subsystem to another, use the svctask chquorum command as shown in Example 9-226.
Example 9-226 chquorum command

IBM_2145:ITSO-CLS4:admin>svcinfo lsquorum
quorum_index status id name   controller_id controller_name active object_type
0            online 0  mdisk0 0             ITSO-4700       yes    mdisk
1            online 1  mdisk1 0             ITSO-4700       no     mdisk
2            online 2  mdisk2 0             ITSO-4700       no     mdisk

IBM_2145:ITSO-CLS4:admin>svctask chquorum -mdisk 9 2

IBM_2145:ITSO-CLS4:admin>svcinfo lsquorum
quorum_index status id name   controller_id controller_name active object_type
0            online 0  mdisk0 0             ITSO-4700       yes    mdisk
1            online 1  mdisk1 0             ITSO-4700       no     mdisk
2            online 2  mdisk9 1             ITSO-XIV        no     mdisk

As you can see in Example 9-226 on page 575, the quorum index 2 has been moved from MDisk2 on ITSO-4700 controller to MDisk9 on ITSO-XIV controller.

9.19 Working with the Service Assistant menu


SVC V6.1 introduces a new method for performing service tasks on the system. In addition to being able to perform service tasks from the front panel, you can now also service a node through an Ethernet connection using either a web browser or the CLI. The web browser runs a new service application called the Service Assistant. Service Assistant offers almost all of the function that was previously available through the front panel, but it is now available from the Ethernet connection with an interface that is easier to use and that you can use remotely from the cluster.

9.19.1 SVC CLI Service Assistant menu


A set of commands relating to the new method for performing service tasks on the system has been introduced. Two major command sets are available:
The sainfo command set allows you to query the various components within the SVC environment.
The satask command set allows you to make changes to the various components within the SVC.

When the command syntax is shown, you will see certain parameters in square brackets, for example [parameter], indicating that the parameter is optional in most if not all instances. Any information that is not in square brackets is required information. You can view the syntax of a command by entering one of the following commands:
sainfo -?: Shows a complete list of information commands.
satask -?: Shows a complete list of task commands.
sainfo commandname -?: Shows the syntax of information commands.
satask commandname -?: Shows the syntax of task commands.

Example 9-227 shows the two new set of commands introduced with Service Assistant.
Example 9-227 sainfo and satask command

IBM_2145:ITSO-CLS4:admin>sainfo -h
The following actions are available with this command :
 lscmdstatus
 lsfiles
 lsservicenodes
 lsservicerecommendation
 lsservicestatus

IBM_2145:ITSO-CLS4:admin>satask -h
The following actions are available with this command :
 chenclosurevpd
 chnodeled
 chserviceip
 chwwnn
 cpfiles
 installsoftware
 leavecluster
 mkcluster
 rescuenode
 setlocale
 setpacedccu
 settempsshkey
 snap
 startservice
 stopnode
 stopservice
 t3recovery

Attention: The sainfo and satask command set usage must be performed under IBM Support direction. Incorrect use of those commands can lead to unexpected results.
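As a read-only illustration, the lsservicenodes action that is listed in the sainfo output above reports the nodes that are visible to the Service Assistant and their current service state. A minimal sketch:

IBM_2145:ITSO-CLS4:admin>sainfo lsservicenodes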

9.20 SAN troubleshooting and data collection


When we encounter a SAN issue, the SVC is often extremely helpful in troubleshooting the SAN, because the SVC sits at the center of the environment through which the communication travels. SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521, contains a detailed description of how to troubleshoot and collect data from the SVC:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open

Use the svcinfo lsfabric command regularly to obtain a complete picture of what is connected and visible from the SVC cluster through the SAN. The lsfabric command generates a report that displays the Fibre Channel connectivity between nodes, controllers, and hosts. Example 9-228 shows the report of an svcinfo lsfabric command.
Example 9-228 lsfabric command

IBM_2145:ITSO-CLS4:admin>svcinfo lsfabric
remote_wwpn      remote_nportid id node_name local_wwpn       local_port local_nportid state    name      cluster_name type
50050768012027E2 620900         1  n104603   50050768011037DC 3          620600        active   n108283   ITSO-CLS1    node
50050768012027E2 620900         1  n104603   50050768012037DC 4          620700        active   n108283   ITSO-CLS1    node
50050768012027E2 620900         2  n107662   5005076801101D1C 3          620400        active   n108283   ITSO-CLS1    node
. Above and below rows have been removed for brevity .
200700A0B84858A1 171B00         1  n104603   50050768014037DC 1          170600        inactive ITSO-4700              controller
200700A0B84858A1 171B00         1  n104603   50050768013037DC 2          170700        inactive ITSO-4700              controller
200700A0B84858A1 171B00         2  n107662   5005076801401D1C 1          170400        inactive ITSO-4700              controller
200700A0B84858A1 171B00         2  n107662   5005076801301D1C 2          170500        inactive ITSO-4700              controller
. Above and below rows have been removed for brevity .
5005076801101D22 620200         1  n104603   50050768011037DC 3          620600        active   n100048   ITSO-CLS3    node
5005076801101D22 620200         1  n104603   50050768012037DC 4          620700        active   n100048   ITSO-CLS3    node
5005076801101D22 620200         2  n107662   5005076801101D1C 3          620400        active   n100048   ITSO-CLS3    node
5005076801101D22 620200         2  n107662   5005076801201D1C 4          620500        active   n100048   ITSO-CLS3    node
10000000C92B7F90 171300         1  n104603   50050768013037DC 2          170700        active   W2K3                   host
10000000C92B7F90 171300         2  n107662   5005076801401D1C 1          170400        active   W2K3                   host
10000000C92B7F90 171300         2  n107662   5005076801301D1C 2          170500        active   W2K3                   host

For more details about the lsfabric command, see IBM System Storage SAN Volume Controller and Storwize V7000 Command-Line Interface User's Guide Version 6.1.0, GC27-2287.
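If you want to post-process the lsfabric report in a script, the -delim parameter of the svcinfo listing commands produces single-character-delimited output that is easier to parse than the column layout shown above. The cluster prompt below is only an example:

IBM_2145:ITSO-CLS4:admin>svcinfo lsfabric -delim :

Capturing this delimited output at regular intervals makes it easier to compare fabric connectivity over time and spot missing logins.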

9.21 T3 recovery process


A procedure known as T3 recovery has been tested and used in select cases where a cluster has been completely destroyed. (One example is simultaneously pulling power cords from all nodes to their uninterruptible power supply units; in this case, all nodes boot up to node error 578 when the power is restored.) This procedure, in certain circumstances, is able to recover most user data. However, it is not to be used by the client or IBM service representative without direct involvement from IBM level 3 technical support. This procedure is not published, but we refer to it here only to indicate that the loss of a cluster can be recoverable without total data loss. However, it requires a restoration of application data from the backup. T3 recovery is an extremely sensitive procedure that is only to be used as a last resort, and it cannot recover any data that was unstaged from cache at the time of the total cluster failure.


Chapter 10. SAN Volume Controller operations using the GUI


In this chapter we illustrate IBM System Storage SAN Volume Controller (SVC) operational management using the SVC GUI. The information is divided into normal operations and advanced operations. We explain the basic configuration procedures that are required to get your SVC environment running as quickly as possible using its GUI. In Chapter 2, IBM System Storage SAN Volume Controller on page 7, we describe the features in greater depth. Here, we focus on the operational aspects.


10.1 SVC normal operations using the GUI


In this section we discuss several of the operations that we have defined as normal, day-to-day activities. It is possible for many users to be logged into the GUI at any given time. However, no locking mechanism exists, so be aware that if two users change the same object at the same time, the last action entered from the GUI is the one that will take effect.

Important: Data entries made through the GUI are case sensitive.

10.1.1 Introduction to SVC normal operations using the GUI


The SVC Welcome panel, Getting Started (Figure 10-1), is an important panel and is referred to as the Welcome panel throughout this chapter. (We expect users to be able to locate this panel without our displaying it each time.)

Figure 10-1 The Welcome panel

On the left side of this Welcome panel there is a dynamic menu.

Dynamic menu
This new version of the SVC GUI includes a new dynamic menu located in the left column of the window. To navigate using this menu, move the mouse over the various icons and choose a page that you want to display, as shown in Figure 10-2 on page 581.


Figure 10-2 The dynamic menu in the left column

A non-dynamic version of this menu exists for slow connections. To access the non-dynamic menu, select Low graphics mode as shown in Figure 10-3.

Figure 10-3 The SVC GUI Login panel

Figure 10-4 on page 582 shows the non-dynamic version of the menu.


Figure 10-4 Non-dynamic menu in the left column

In this case, in the upper part of the web page there are tabs for navigating between submenus. For example in Figure 10-4, All volumes, Volumes by Pool, and Volumes by Host are submenus (tabs) for the Volumes menu.

Persistent state notification: Status Areas


A control panel is available in the bottom part of the window. This dashboard is divided into three Status Areas and it provides information about your cluster. These persistent state notification widgets are reduced by default, as shown in Figure 10-5.

Figure 10-5 Control panel view

To expand each of these Status Areas, click its expand icon. A description of each Status Area follows.

Connection Status Area


The leftmost area of the control panel provides information about connectivity; see Figure 10-6.

Figure 10-6 Connection Status Area


If there are issues on your cluster nodes, external storage, or remote partnerships, you will be informed here, as shown in Figure 10-7.

Figure 10-7 Connectivity issue

You will be able to fix the error using the Fix Error button, which will direct you to the troubleshooting panel.

Storage Allocation Area


The area in the middle provides information about the storage allocation, as shown in Figure 10-8.

Figure 10-8 Storage Allocation Area

The following information is displayed in this window. To view all of it, use the left and right arrows: Allocated Capacity, Free Capacity, Physical Capacity, Virtual Capacity, and Over-allocation.

Long Running Tasks Area


The rightmost area provides information about the running tasks, as shown in Figure 10-9 on page 584. Information such as Volume Migration, MDisk Removal, Image Mode Migration, Extend Migration, FlashCopy, Metro Mirror and Global Mirror, Volume Formatting, Space Efficient copy repair, Volume copy verification, and Volume copy synchronization is displayed in this window.


Figure 10-9 Long Running Tasks Area

It also provides information about the recently completed tasks, as shown in Figure 10-10.

Figure 10-10 Recently Completed Tasks information

10.1.2 Organizing window content


The following sections describe several windows within the SVC GUI where you can perform filtering (to minimize the amount of data that is shown on the window) and sorting and reorganizing (to organize the content on the window). This section provides a brief overview of these functions.

Table filtering
In most pages, in the upper right corner of the window, there is a search field to filter the elements, which is useful if the list of entries is too large to work with. Perform these steps to use search filtering: 1. Enter a value in the search box in the upper right corner of the window, as shown in Figure 10-11 on page 585.


Figure 10-11 Show Filter Row icon

2. Click the Show Filter Row icon.

3. This function enables you to filter your table based on the column names. In this example, a volume list is displayed containing names that include DB2 somewhere in the name, as shown in Figure 10-12.

Figure 10-12 Show Filter Row

4. You can remove this filtered view by clicking Reset, as shown in Figure 10-13 on page 586.


Figure 10-13 Reset the filtered view

Note: This filtering option is available in most pages.

Table information
With SVC 6.1, you can add columns to or remove columns from the tables available on most pages. As an example, in the All Volumes page we will add a column to our table. 1. Right-click the top part of the table; see Figure 10-14. A menu with all available columns appears.

Figure 10-14 Add or remove details in a table

2. Select the column that you want to add (or remove) from this table. In our example, we added the volume ID column as shown in Figure 10-15 on page 587.


Figure 10-15 Table with an added column

3. You can repeat this process several times to create custom tables that meet your requirements.

Reorganizing columns in tables


You can reorder columns by clicking a column heading with the left mouse button and dragging the column to a new position, as shown in Figure 10-16.

Figure 10-16 Reorganizing table columns


Sorting
Regardless of whether you use filter options, you can sort the displayed data by clicking a column heading, as shown in Figure 10-17. In this example, we sort the table by volume ID.

Figure 10-17 Selecting a column to sort by

After we click the volume ID column, the table is sorted by volume ID as shown in Figure 10-18 on page 589.


Figure 10-18 Table sorted by volume ID

Note: By repeatedly clicking a column, you can sort this table based on that column in ascending or descending order.

10.1.3 Help
To access online help, click the Help link in the upper right corner of any panel, as shown in Figure 10-19.

Figure 10-19 Help link

This action opens a new window where you can find help on different topics (see Figure 10-20).

Figure 10-20 Online help window


10.2 Working with External Disk Controllers


This section describes the various configuration and administration tasks that you can perform on External Disk Controllers within the SVC environment.

10.2.1 Viewing Disk Controller details


Perform the following steps to view information about a back-end disk controller in use by the SVC environment: 1. Select Physical Storage in the dynamic menu and then select External. 2. The External panel shown in Figure 10-21 opens. For more detailed information about a specific controller, click one Storage System in the left column (highlighted in the figure).

Figure 10-21 Disk controller systems


10.2.2 Renaming a disk controller


Perform the following steps to rename a disk controller that is used by the SVC cluster: 1. In the left panel, select the controller that you want to rename. Click its name to rename it, as shown in Figure 10-22.

Figure 10-22 Renaming a Storage System

2. Type the new name that you want to assign to the controller, and press Enter as shown in Figure 10-23.

Figure 10-23 Changing the name for Storage System

Controller name: The name can consist of the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the underscore (_) character. The name can be between one and 63 characters in length. However, the name cannot start with a number, the dash or the underscore. 3. A task is launched to change the name of this Storage System. When it is completed, you can close this window. 4. The new name of your controller is displayed on the Disk Controller Systems panel.


10.2.3 Discovering MDisks from the External panel


You can discover managed disks (MDisk) from the External panel. Perform the following steps to discover new MDisks: 1. Select a controller in the left panel. 2. Click Detect MDisks button to discover MDisks from this controller, as shown in Figure 10-24.

Figure 10-24 Detect MDisks action

3. The Discover devices task runs. 4. When the task is completed, click Close and see the new MDisks available.
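The same discovery can be triggered from the CLI, which is convenient when you script the addition of new back-end LUNs. The following commands are illustrative; detectmdisk rescans the Fibre Channel network, and lsmdisk then lists the newly discovered, still unmanaged MDisks:

svctask detectmdisk
svcinfo lsmdisk -filtervalue mode=unmanaged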

10.3 Working with Storage Pools


In this section we describe the tasks that can be performed with the Storage Pools. From the Welcome panel that is shown in Figure 10-1 on page 580, select Physical Storage then Pools.

10.3.1 Viewing Storage Pool information


We perform each of the following tasks from the Pools panel (Figure 10-25 on page 593). To access this panel, from the SVC Welcome panel, click Physical Storage and then click Pools.


Figure 10-25 Viewing Storage Pools panel

You can add information (new columns) to the table, as explained in Table information on page 586. To retrieve more detailed information about a specific Storage Pool, select any Storage Pool in the left column. The top left corner of the panel, shown in Figure 10-26, contains the following information about this pool: Status, Number of MDisks, Number of volume copies, and whether Easy Tier is active on this pool.

Figure 10-26 Detailed information about a pool

The top right corner of this panel, shown in Figure 10-27 on page 594, contains the following information about the pool: Volume Allocation, Used Capacity, Virtual Capacity, and Capacity.


Figure 10-27 Detailed capacity information about a pool

The main part of this panel displays the MDisks that are present in this Storage Pool, as shown in Figure 10-28.

Figure 10-28 MDisks present in a Storage Pool

10.3.2 Discovering MDisks


Perform the following steps to discover newly assigned MDisks: 1. From the SVC Welcome panel (Figure 10-1 on page 580) click Physical Storage, and then click Pools. 2. Click Detect MDisks, as shown in Figure 10-29.

Figure 10-29 Detect MDisks action

3. The Discover Device window is displayed. 4. Click Close to see the newly discovered MDisks.


10.3.3 Creating Storage Pools


Perform the following steps to create a Storage Pool: 1. From the SVC Welcome panel (Figure 10-1 on page 580), click Physical Storage and then click Pools. The Pools panel opens. On this page click New Pool, as shown in Figure 10-30.

Figure 10-30 Selecting the option to create a Storage Pool

2. The wizard Create Storage Pools opens. 3. On this first page, complete the following elements as shown in Figure 10-31 on page 596: a. You can specify a name for the Storage Pool as we have in Figure 10-31 on page 596. If you do not provide a name, the SVC automatically generates the name mdiskgrpx, where x is the ID sequence number that is assigned by the SVC internally. Storage Pool name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The name can be between one and 63 characters in length and is case sensitive, but it cannot start with a number or the word MDiskgrp because this prefix is reserved for SVC assignment only. b. You can also change the icon associated with this Storage Pool as shown in Figure 10-31 on page 596. c. If you expand the Advanced Settings box, you can specify: The Extent Size (by default at 256 MB) The Warning threshold to send a warning to the event log when the capacity is first exceeded (by default at 80%).

d. Click Next.


Figure 10-31 Create Storage Pool window: Step 1 of 2

4. On this page (Figure 10-32), you are able to detect new MDisks by using Detect MDisks. For more information about this topic, see 10.4.3, Discovering MDisks on page 602. a. Select the MDisks that you want to add to this Storage Pool. Tip: To add multiple MDisks, hold down Ctrl and use your mouse to select the entries you want to add. b. Click Finish to complete the creation.

Figure 10-32 Create Storage Pool window: Step 2 of 2

5. In the Storage Pools panel (Figure 10-33 on page 597), the new Storage Pool is displayed.


Figure 10-33 A new Storage Pool was added successfully

At this point, you have completed the tasks that are required to create a Storage Pool.
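If you prefer the command line, the equivalent of this wizard is the mkmdiskgrp command described in Chapter 9. The pool and MDisk names below are examples only; -ext sets the extent size in MB, matching the Extent Size setting in the Advanced Settings box:

svctask mkmdiskgrp -name ITSO_Pool1 -ext 256 -mdisk mdisk4:mdisk5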

10.3.4 Renaming a Storage Pool


To rename a Storage Pool, perform the following steps: 1. In the left panel, select the Storage Pool that you want to rename, then click its name to rename it as shown in Figure 10-34.

Figure 10-34 Renaming a Storage Pool


2. Type the new name that you want to assign to the Storage Pool and press Enter (Figure 10-35).

Figure 10-35 Changing the name for a Storage Pool

Storage Pools name: The name can consist of the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the underscore (_) character. The name can be between one and 63 characters in length. However, the name cannot start with a number, the dash or the underscore.

3. A task is launched to change the name of this pool. When it is completed, you can close this window. 4. From the Storage Pools panel, the new Storage Pool name is displayed.
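The CLI equivalent is the chmdiskgrp command; the pool names shown are examples only:

svctask chmdiskgrp -name ITSO_Pool1_new ITSO_Pool1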

10.3.5 Deleting a Storage Pool


To delete a Storage Pool, perform the following steps: 1. Select the Storage Pool that you want to delete and then click Delete Pool in the Actions menu (Figure 10-36).

Figure 10-36 Delete Pool menu

2. In the Delete Pool window, click Delete to confirm that you want to delete the Storage Pool (Figure 10-37 on page 599). If there are MDisks and volumes within the Storage Pool that you are deleting, you must select the Delete all volumes, host mappings, and MDisks that are associated with this pool. option.


Figure 10-37 Deleting a pool

Attention: If you delete a Storage Pool by using the Delete all volumes, host mappings, and MDisks that are associated with this pool option, and volumes were associated with that Storage Pool, you will lose the data on your volumes because they are deleted before the Storage Pool. If you want to save your data, then migrate or mirror the volumes to another Storage Pool before you delete the Storage Pool previously assigned to the volumes.
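On the CLI, the corresponding command is rmmdiskgrp; its -force flag mirrors the Delete all volumes, host mappings, and MDisks option and therefore carries the same risk of data loss. The pool name is an example:

svctask rmmdiskgrp -force ITSO_Pool1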

10.3.6 Adding or removing MDisks from a Storage Pool


For information about adding MDisks to a Storage Pool, see 10.4.4, Adding MDisks to a Storage Pool on page 603. For information about removing MDisks from a Storage Pool, see 10.4.5, Removing MDisks from a Storage Pool on page 604.

10.3.7 Showing the volumes that are associated with a Storage Pool
To show the volumes that are associated with a Storage Pool, click Volumes and then click Volumes by Pool. For more information about this feature see 10.7, Working with volumes on page 630.

10.4 Working with managed disks


This section describes the various configuration and administration tasks that you can perform on the managed disks (MDisks) within the SVC environment.

10.4.1 MDisk information


From the SVC Welcome panel, click Physical Storage → MDisks. The MDisks panel opens as shown in Figure 10-38 on page 600.


Figure 10-38 Viewing Managed Disks panel

To retrieve more detailed information about a specific MDisk, perform the following steps: 1. In the MDisks panel (Figure 10-38), right-click an MDisk. 2. As shown in Figure 10-39, click Properties.

Figure 10-39 MDisks menu

3. For the selected MDisk, an overview is displayed showing its various parameters and dependent volumes; see Figure 10-40 on page 601.

Note: To obtain all information about the MDisk, select Show Details as shown in Figure 10-40.


Figure 10-40 MDisk Details page

4. Clicking Dependent Volumes displays information about volumes that reside on this MDisk, as shown in Figure 10-41. The volume panel is discussed in more detail in 10.7, Working with volumes on page 630.

Figure 10-41 Dependent volumes for an MDisk

5. Click Close to return to the previous window.


10.4.2 Renaming an MDisk


Perform the following steps to rename an MDisk that is controlled by the SVC cluster: 1. Select the MDisk that you want to rename in the panel shown in Figure 10-38 on page 600. 2. Click Rename in the Actions menu (Figure 10-42).

Figure 10-42 Rename Action

Note: You can also right-click this MDisk as shown in Figure 10-39 on page 600 and select Rename from the list. 3. In the Rename MDisk window (Figure 10-43), type the new name that you want to assign to the MDisk and click Rename.

Figure 10-43 Renaming an MDisk

MDisk name: The name can consist of the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the underscore (_) character. The name can be between one and 63 characters in length.
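The CLI equivalent is the chmdisk command; the MDisk names used here are examples only:

svctask chmdisk -name ITSO_mdisk4 mdisk4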

10.4.3 Discovering MDisks


Perform the following steps to discover newly assigned MDisks: 1. In the menu, select Physical Storage MDisks. 2. Click Detect MDisks, as shown in Figure 10-44 on page 603.


Figure 10-44 Detect MDisks action

The Discover Device window is displayed. 3. When the task is completed, click Close. 4. Newly assigned MDisks are displayed as Unmanaged as shown in Figure 10-45.

Figure 10-45 mdisk4: Newly discovered managed disk

Troubleshooting: If your MDisks are still not visible, check that the logical unit numbers (LUNs) from your subsystem are properly assigned to the SVC (for example, using storage partitioning with a DS5000) and that appropriate zoning is in place (for example, the SVC can see the disk subsystem).

10.4.4 Adding MDisks to a Storage Pool


If you created an empty Storage Pool or you simply assign additional MDisks to your SVC environment later, you can add MDisks to existing Storage Pools by performing the following steps: Note: You can only add unmanaged MDisks to a Storage Pool. 1. Select the unmanaged MDisk that you want to add to a Storage Pool. 2. Click Add to Pool in the Actions menu (Figure 10-46 on page 604).


Figure 10-46 Actions: Add to Pool

Note: You can also access the Add to Pool action by right-clicking an unmanaged MDisk. 3. From the Add MDisk to Pool window, select in which pool you want to integrate this MDisk and then click Add to Pool, as shown in Figure 10-47.

Figure 10-47 Adding an MDisk to an existing Storage Pool
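From the CLI, the same operation is performed with the addmdisk command; the MDisk and pool names are examples only:

svctask addmdisk -mdisk mdisk4 ITSO_Pool1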

10.4.5 Removing MDisks from a Storage Pool


To remove an MDisk from a Storage Pool, perform the following steps: 1. Select the MDisk that you want to remove from a Storage Pool. 2. Click Remove from Pool in the Actions menu (Figure 10-48 on page 605).


Figure 10-48 Actions: Remove from Pool

Note: You can also access the Remove from Pool action by right-clicking a managed MDisk. 3. From the Remove from Pool window (Figure 10-49), you need to validate the number of MDisks that you want to remove from this pool. This verification has been added to secure the process of deleting data. If volumes are using the MDisks that you are removing from the Storage Pool, you must select the option Remove the MDisk from the storage pool even if it has data on it. The system migrates the data to other MDisks in the pool. to confirm the removal of the MDisk. 4. Click Delete as shown in Figure 10-49.

Figure 10-49 Removing an MDisk from an existing Storage Pool

An error message is displayed, as shown in Figure 10-50 on page 606, if there is insufficient space to migrate the volume data to other extents on other MDisks in that Storage Pool.


Figure 10-50 Remove MDisk error message
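The CLI equivalent is the rmmdisk command; -force requests that extents in use are migrated to the remaining MDisks in the pool, so the command fails in the same way if there is not enough free space. The names are examples only:

svctask rmmdisk -mdisk mdisk4 -force ITSO_Pool1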

10.4.6 Including an excluded MDisk


If a significant number of errors occur on an MDisk, the SVC automatically excludes it. These errors can result from a hardware problem, a storage area network (SAN) zoning problem, or the result of poorly planned maintenance. If it is a hardware fault, you will receive Simple Network Management Protocol (SNMP) alerts in regard to the state of the hardware (before the disk was excluded) and preventive maintenance that has been undertaken. If not, the hosts that were using volumes, which used the excluded MDisk, now have I/O errors. After you take the necessary corrective action to repair the MDisk (for example, replace the failed disk and repair the SAN zones), you can tell the SVC to include the MDisk again. Perform the following steps to include an excluded MDisk: 1. From the SVC Welcome panel, click Physical Storage in the left menu, and then click the MDisks panel. 2. Select the MDisk that you want to include again. 3. Click Include Excluded MDisk in the Actions menu. Note: You can also include an excluded MDisk by right-clicking an MDisk and selecting Include Excluded MDisk from the list.
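The CLI equivalent is the includemdisk command; the MDisk name is an example only:

svctask includemdisk mdisk3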

10.4.7 Activating EasyTier


To activate Easy Tier you need to have a true multidisk tier pool with generic hdd and ssd drives. MDisks, after they are detected, have a default disk tier of generic_hdd (shown as Hard Disk Drive in Figure 10-51 on page 607).


Figure 10-51 Default disk tier

Note: For more detailed information about Easy Tier, see Chapter 7, Easy Tier on page 345. Easy Tier is also still inactive (Figure 10-51) for the storage pool because we do not yet have a true multidisk tier pool. To activate the pool we have to set the SSD MDisks to their correct generic_ssd tier. To set an MDisk as ssd on a Storage Pool, perform the following steps: Note: Repeat this action for each of your ssd MDisks. 1. Select the MDisk. 2. Click Select Tier in the Actions menu as shown in Figure 10-52. Note: You can also access the Select Tier action by right-clicking an MDisk.

Figure 10-52 Select Tier menu

3. In the Select MDisk Tier window, shown in Figure 10-53 on page 608, select Solid-State Drive using the drop-down list and then click OK.


Figure 10-53 Select MDisk Tier window

4. The Easy Tier is now activated in this multidisk tier pool (Hard Disk Drive and Solid-State Drive) in this pool as shown in Figure 10-54.

Figure 10-54 EasyTier activated on a storage pool
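The same tier change can be made from the CLI with the chmdisk command; the MDisk name is an example only:

svctask chmdisk -tier generic_ssd mdisk5

You can then confirm that Easy Tier is active for the pool by checking the Easy Tier status fields in the svcinfo lsmdiskgrp output.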

10.5 Migration
See Chapter 6, Data migration on page 227 for a comprehensive description of data migration.

10.6 Working with hosts


In this section we describe the various configuration and administration tasks that you can perform on the host that is connected to your SVC. Note: For more details about connecting hosts to an SVC in a SAN environment, see Chapter 5, Host configuration on page 137. A host system is a computer that is connected to the SAN Volume Controller through either a Fibre Channel interface or an IP network. A host object is a logical object in the SAN Volume Controller that represents a list of worldwide port names (WWPNs) and a list of iSCSI names that identify the interfaces that the host system uses to communicate with the SAN Volume Controller. iSCSI names can be either iSCSI qualified names (IQNs) or extended unique identifiers (EUIs). A typical configuration has one host object for each host system that is attached to the SAN Volume Controller. If a cluster of hosts accesses the same storage, you can add HBA ports


from several hosts to one host object to make a simpler configuration. A host object can have both WWPNs and iSCSI names. There are three ways to visualize and manage your hosts: By using the All Hosts panel, as shown in Figure 10-55

Figure 10-55 All Hosts panel

By using the Ports by Host panel, as shown in Figure 10-56

Figure 10-56 Ports by Host panel

By using the Host Mapping panel, as shown in Figure 10-57 on page 610


Figure 10-57 Host Mapping panel

Important: Several actions on the hosts are specific to the Ports by Host or the Host Mapping panels, but all these actions and others are accessible from the All Hosts panel. For this reason, all actions on hosts will be executed from the All Hosts panel.

10.6.1 Host information


To access the All Hosts panel from the SVC Welcome panel on Figure 10-1 on page 580, click Hosts → All Hosts (Figure 10-55 on page 609). You can add information (new columns) to the table in the All Hosts panel as shown in Figure 10-55 on page 609; see Table information on page 586. To retrieve more information about a specific Host, perform the following steps: 1. Select a Host in the table. 2. Click Properties in the Actions menu (Figure 10-58).

Figure 10-58 Actions: Host Properties

Note: You can also access the Properties action by right-clicking a host.


3. For a given host in the Overview window you will be presented with information as shown in Figure 10-59.

Figure 10-59 Host Details: Overview

Note: To obtain more information about the hosts select Show Details (Figure 10-59). 4. On the Mapped Volumes tab (Figure 10-60), you will see the volumes that are mapped to this host.

Figure 10-60 Host Details: Mapped volumes


5. The Port Definitions tab (Figure 10-61) displays attachment information such as the worldwide port names (WWPNs) that are defined for this host or the iSCSI qualified name (IQN) that are defined for this host.

Figure 10-61 Host Details: Port Definitions

When you are finished viewing the details, click Close to return to the previous window.

10.6.2 Creating a host


There are two types of connections to hosts, Fibre Channel (FC) and iSCSI. In this section we detail both these methods. For Fibre Channel hosts, see the steps in Fibre Channel attached hosts. For iSCSI hosts, see the steps in iSCSI-attached hosts on page 615.

Fibre Channel attached hosts


To create a new host that uses the FC connection type, perform the following steps: 1. Go to the All Hosts panel from the SVC Welcome panel on Figure 10-1 on page 580, and then click Hosts All Hosts (Figure 10-55 on page 609). 2. Click New Host as shown in Figure 10-62.

Figure 10-62 New Host action

3. Select Fibre-Channel Host from the two types of connection available (Figure 10-63).

Figure 10-63 Create Host window

4. In the Creating Hosts window (Figure 10-64 on page 614), type a name for your host (Host Name). Host name: If you do not provide a name, the SVC automatically generates the name hostx (where x is the ID sequence number that is assigned by the SVC internally). If you want to provide a name, you can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The host name can be between one and 63 characters in length. 5. Fibre-Channel Ports Section: Use the drop-down list to select the WWPNs that correspond to your HBA or HBAs and click Add Port to List in the Fibre-Channel Ports window. To add additional ports, repeat this action. Note: If you added a wrong Fibre-Channel port, you can delete it from the list by clicking the red cross. If your WWPNs are not being displayed, click Rescan to rediscover new WWPNs available since the last scan. Note: In certain cases your WWPNs still might not be displayed, even though you are sure that your adapter is functioning (for example, you see the WWPN in the switch name server) and your zones are correctly set up. To rectify this, type the WWPN of your HBA or HBAs into the drop-down list and click Add Port to List. It will be displayed as unverified. 6. Advanced Settings Section: If you need to modify the I/O Group, the Port Mask or the Host Type, you must select Advanced to access these Advanced Settings as shown in Figure 10-64 on page 614. Select one or more I/O groups from which the host can access volumes. By default, all I/O Groups are selected. You can use a port mask to control the node target ports that a host can access. The port mask applies to the logins from the host initiator port that is associated with the host object.


Note: For each login between a host bus adapter (HBA) port and a node port, the node examines the port mask that is associated with the host object for which the HBA is a member and determines if access is allowed or denied. If access is denied, the node responds to SCSI commands as though the HBA port is unknown. Select the Host Type. The default type is Generic. Use generic for all hosts, unless you use Hewlett-Packard UNIX (HP-UX) or Sun. For these, select HP_UX (to have more than eight LUNs supported for HP_UX machines) or TPGS for Sun hosts using MPxIO.

Figure 10-64 Creating a new Fibre Channel connected host

7. Click the Create Host button as shown in Figure 10-64. This action brings you back to the All Hosts panel (Figure 10-65 on page 614) where you can see the newly added FC host.

Figure 10-65 Create host results
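The CLI equivalent is the mkhost command; the host name and WWPN below are examples only:

svctask mkhost -name W2K3_FC -hbawwpn 210000E08B89C1CD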


iSCSI-attached hosts
To create a new host that uses the iSCSI connection type, perform the following steps: 1. Go to the All Hosts panel from the SVC Welcome panel on Figure 10-1 on page 580 and click Hosts All Hosts (Figure 10-55 on page 609). 2. Click New Host, as shown in Figure 10-66.

Figure 10-66 New Host action

3. Select iSCSI Host from the two types of connection (Figure 10-67).

Figure 10-67 Create Host window

4. In the Creating Hosts window (Figure 10-68 on page 617), type a name for your host (Host Name). Host name: If you do not provide a name, the SVC automatically generates the name hostx (where x is the ID sequence number that is assigned by the SVC internally). If you want to provide a name, you can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore. The host name can be between one and 63 characters in length. 5. iSCSI ports Section: Enter the iSCSI initiator or IQN as an iSCSI port, and then click Add Port to List. This IQN is obtained from the server and generally has the same purpose as the WWPN. To add additional ports, repeat this action.


Note: If you add the wrong iSCSI port, you can delete it from the list by clicking the red cross. If needed, select Use CHAP authentication (all ports) and enter the CHAP secret as shown in Figure 10-68 on page 617. The CHAP secret is the authentication method that is used to restrict access for other iSCSI hosts to use the same connection. You can set the CHAP for the whole cluster under cluster properties or for each host definition. The CHAP must be identical on the server and the cluster/host definition. You can create an iSCSI host definition without using a CHAP. 6. Advanced Settings Section: If you need to modify the I/O Group, the Port Mask or the Host Type, you have to select the Advanced button to access these settings as shown in Figure 10-64 on page 614. Select one or more I/O groups from which the host can access volumes. By default, all I/O Groups are selected. You can use a port mask to control the node target ports that a host can access. The port mask applies to the logins from the host initiator port that is associated with the host object. Note: For each login between a host bus adapter (HBA) port and a node port, the node examines the port mask that is associated with the host object for which the HBA is a member and determines if access is allowed or denied. If access is denied, the node responds to SCSI commands as though the HBA port is unknown. Select the Host Type. The default type is Generic. Use generic for all hosts, unless you use Hewlett-Packard UNIX (HP-UX) or Sun. For these, select HP_UX: (to have more than eight LUNs supported for HP_UX machines) or TPGS for Sun hosts using MPxIO.


Figure 10-68 Creating a new iSCSI host

7. Click Create Host as shown in Figure 10-68. This action brings you back to the All Hosts panel (Figure 10-69) where you can see the newly added iSCSI host.

Figure 10-69 Create host results
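From the CLI, an iSCSI host is created with mkhost using the -iscsiname parameter, and a CHAP secret can then be assigned with chhost. The host name, IQN, and secret below are examples only:

svctask mkhost -name W2K3_iSCSI -iscsiname iqn.1991-05.com.microsoft:itso-w2k3
svctask chhost -chapsecret mysecret W2K3_iSCSI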

10.6.3 Renaming a host


Perform the following steps to rename a host: 1. Select the host that you want to rename in the table. 2. Click Rename in the Actions menu (Figure 10-70 on page 618).


Figure 10-70 Rename Action

Note: There are two other ways to rename a host. You can right-click a host and select Rename from the list, or use the method described in 10.6.4, Modifying a host on page 618. 3. In the Rename Host window, type the new name that you want to assign and click Rename (Figure 10-71).

Figure 10-71 Renaming a host

Host name: If you do not provide a name, the SVC automatically generates the name hostx (where x is the ID sequence number that is assigned by the SVC internally). If you want to provide a name, you can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The host name can be between one and 63 characters in length.
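The CLI equivalent is the chhost command; the host names are examples only:

svctask chhost -name W2K3_new W2K3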

10.6.4 Modifying a host


To modify a host, perform the following steps: 1. Select the host that you want to modify in the table. 2. Click Properties in the Actions menu (Figure 10-72 on page 619).


Figure 10-72 Host Properties

Note: You can also right-click a host and select Properties from the list. 3. In the Overview tab, click Edit to be able to modify parameters for this host. You can modify: The Host Name Host name: If you do not provide a name, the SVC automatically generates the name hostx (where x is the ID sequence number that is assigned by the SVC internally). If you want to provide a name, you can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The host name can be between one and 63 characters in length. The Host Type: The default type is Generic. Use generic for all hosts, unless you use Hewlett-Packard UNIX (HP-UX) or Sun. For these, select HP_UX (to have more than eight LUNs supported for HP_UX machines) or TPGS for Sun hosts using MPxIO. Advanced Settings: If you need to modify the I/O Group, the Port Mask or the iSCSI CHAP Secret (in case you want to convert it to an iSCSI Host), you must select Advanced to access these settings, as shown in Figure 10-73 on page 620.


Figure 10-73 Modifying a host

4. Save the changes by clicking Save. 5. You can close the Host Details window by clicking Close.

10.6.5 Deleting a host


To delete a host, perform the following steps: 1. Select the host or hosts that you want to delete in the table. 2. Click Delete in the Actions menu (Figure 10-74).

Figure 10-74 Delete Action

Note: You can also right-click a host and select Delete from the list.

3. The Delete Host window opens as shown in Figure 10-75 on page 621. In the field Verify the number of hosts that you are deleting, enter a value matching the correct number


of hosts that you want to remove. This verification has been added to secure the process of inadvertently deleting wrong hosts. If you still have volumes associated with the host and if you are sure that you want to delete it even if these volumes are no longer accessible, select the Delete the host even if volumes are mapped to them. These volumes will no longer be accessible to the hosts. option. 4. Click Delete to complete the operation (Figure 10-75).

Figure 10-75 Deleting a host
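The CLI equivalent is the rmhost command; -force mirrors the option that deletes the host even if volumes are still mapped to it. The host name is an example only:

svctask rmhost -force W2K3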

10.6.6 Adding ports


If you add an HBA or a network interface controller (NIC) to a server that is already defined within the SVC, you can simply add additional ports to your host definition by performing the steps described in this section. Note: A host can have both FC and iSCSI ports defined, but it is better to avoid using them at the same time. To add a port to a host, perform the following steps: 1. Select the host in the table. 2. Click Properties in the Actions menu (Figure 10-72 on page 619).

Figure 10-76 Host Properties


Note: You can also right-click a host and select Properties from the list. 3. On the Properties window, click Port Definitions (Figure 10-77).

Figure 10-77 Port Definitions tab

4. Click Add and select the type of port that you want to add to your host (Fibre Channel Port or iSCSI Port) as shown in Figure 10-78. In this example, we selected a Fibre-Channel Port.

Figure 10-78 Adding a Fibre Channel or an iSCSI Port action

5. In the Add Fibre-Channel Ports window (Figure 10-79 on page 623), use the drop-down list to select the WWPNs that correspond to your HBA or HBAs and click Add Port to List in the Fibre-Channel Ports window. To add additional ports, repeat this action. Note: If you added the wrong Fibre-Channel port, you can delete it from the list by clicking the red cross. If your WWPNs are not displayed, click Rescan to rediscover any new WWPNs available since the last scan.


Note: In certain cases your WWPNs might still not be displayed, even though you are sure your adapter is functioning (for example, you see the WWPN in the switch name server) and your zones are correctly set up. To rectify this, type the WWPN of your HBA or HBAs into the drop-down list and click Add Port to List. It will be displayed as unverified. 6. To finish, click Add Ports to Host.

Figure 10-79 Adding Fibre-Channel Ports

7. This action takes you back to the Port Definitions window (Figure 10-80), where you can see the newly added ports.

Figure 10-80 Port Definitions tab updated


Note: This action is exactly the same for iSCSI Ports, except that you have to add iSCSI ports.
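The CLI equivalent is the addhostport command, with -hbawwpn for Fibre Channel ports and -iscsiname for iSCSI ports. The WWPN, IQN, and host names shown are examples only:

svctask addhostport -hbawwpn 210000E08B89C1CD W2K3_FC
svctask addhostport -iscsiname iqn.1991-05.com.microsoft:itso-w2k3 W2K3_iSCSI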

10.6.7 Deleting ports


To delete a port from a host, perform the following steps: 1. Select the host in the table. 2. Click Properties in the Actions menu (Figure 10-81).

Figure 10-81 Host Properties

Tip: You can also right-click a host and select Properties from the list.

3. On the opened window, click Port Definitions (Figure 10-82).

Figure 10-82 Port Definitions tab

4. Select the port or ports that you want to remove.

5. Click Delete Port (Figure 10-83).

Figure 10-83 Port Definitions tab: Delete port

6. In the Delete Port window (Figure 10-84), in the field Verify the number of ports to delete, you need to enter a value matching the correct number of ports that you want to remove. This verification has been added to secure the process against inadvertently deleting the wrong ports.

Figure 10-84 Delete Port window

7. Click Delete to remove the port or ports. 8. This action brings you back to the Port Definitions window.
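The CLI equivalent is the rmhostport command; the WWPN and host name are examples only:

svctask rmhostport -hbawwpn 210000E08B89C1CD W2K3_FC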

10.6.8 Creating or modifying the host mapping


To modify the host Mapping, perform the following steps: 1. Select the host in the table. 2. Click Modify Mappings in the Actions menu (Figure 10-85 on page 626). Tip: You can also right-click a host and select Modify Mappings from the list.


Figure 10-85 Modify Mappings Action

3. On the Modify Mappings window select the volume or volumes that you want to map to this host and move each of them to the right table using the right arrow, as shown in Figure 10-86. If you need to remove them, use the left arrow.

Figure 10-86 Modify Mappings window: Adding volumes to a host

In the right table you can edit the SCSI ID by selecting a mapping that is highlighted in yellow, indicating that the mapping is new. Click Edit SCSI ID (Figure 10-86). Note: Only new mappings can have their SCSI ID changed. To edit an existing mapping SCSI ID, you must unmap the volume and recreate the map to the volume. In the Edit SCSI ID window, change the SCSI ID then click OK (Figure 10-87 on page 627).


Figure 10-87 Modify Mappings window: Edit SCSI ID

4. After all the volumes you wanted to map to this host have been added, click OK to create the Host mapping relationships.
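The CLI equivalent is the mkvdiskhostmap command; -scsi is optional and sets the SCSI ID explicitly, just as the Edit SCSI ID function does in the GUI. The host and volume names are examples only:

svctask mkvdiskhostmap -host W2K3 -scsi 0 volume_A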

10.6.9 Deleting a host mapping


To delete a host mapping, perform the following steps: 1. Select the host in the table. 2. Click Properties in the Actions menu (Figure 10-88).

Figure 10-88 Host Properties

Tip: You can also right-click a host and select Properties from the list. 3. On the opened window, click Mapped volumes (Figure 10-89 on page 628).


Figure 10-89 Mapped volumes tab

4. Select the host mapping or mappings that you want to remove. 5. Click Unmap (Figure 10-90)

Figure 10-90 Mapped volumes tab: Unmap a volume

In the Unmap from Host window (Figure 10-91 on page 629), in the field Verify the number of mappings that this operation affects:, enter a value matching the number of mappings that you want to remove. This verification has been added to secure the process against inadvertently removing the wrong mappings.


Figure 10-91 Unmap from Host window

6. Click Unmap to remove the host mapping or mappings. This action brings you back to the Mapped volumes window.
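The CLI equivalent is the rmvdiskhostmap command; the host and volume names are examples only:

svctask rmvdiskhostmap -host W2K3 volume_A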

10.6.10 Deleting all host mappings for a given host


To delete all host mappings for a given host, perform the following steps: 1. Select the host in the table. 2. Click Unmap All volumes in the Actions menu (Figure 10-92).

Figure 10-92 Unmap All volumes from Actions menu

Tip: You can also right-click a host and select Unmap All volumes from the list.

From the Unmap from Host window (Figure 10-93 on page 630), in the Verify the number of mappings that this operation affects: field, enter a value matching the number of mappings that you want to remove. This verification has been added to secure the process against inadvertently removing the wrong mappings.


Figure 10-93 Unmap from Host window

3. Click Unmap to remove the host mapping or mappings. This action brings you back to the All Hosts window.

10.7 Working with volumes


In this section, we describe the tasks that you can perform at a volume level. There are three ways to visualize and manage your volumes: You can use the All volumes panel, as shown in Figure 10-94.

Figure 10-94 All volumes panel

Or you can use the Volumes by Pool panel, as shown in Figure 10-95 on page 631.


Figure 10-95 Volumes by Pool panel

Or you can use the Volumes by Host panel, as shown in Figure 10-96.

Figure 10-96 Volumes panel by Host panel

Important: Several actions on the hosts are specific to the Volumes by Pool or to the Volumes by Host panels. However, all these actions and others are accessible from the All volumes panel. All actions in the following sections are executed from the All Volumes panel.

10.7.1 Volume information


To access the All volumes panel from the SVC Welcome panel on Figure 10-1 on page 580, click Volumes → All Volumes (Figure 10-94 on page 630).


You can add information (new columns) to the table in the All Volumes panel as shown in Figure 10-94 on page 630; see Table information on page 586. To retrieve more information about a specific volume, perform the following steps: 1. Select a volume in the table. 2. Click Properties in the Actions menu (Figure 10-97).

Figure 10-97 Volume Properties action

Tip: You can also access the Properties action by right-clicking a volume. 3. The Overview tab shows information about a given volume (Figure 10-98).

Figure 10-98 Volume properties: Overview tab


Note: To obtain more information about the volume, select Show Details (Figure 10-98 on page 632). 4. The Host Maps tab (Figure 10-99) displays the hosts that are mapped with this volume.

Figure 10-99 Volume properties: Mapped volumes

5. The Member MDisks tab (Figure 10-100 on page 634) displays the used MDisks for this volume. You can perform actions on the MDisks such as removing them from a pool, adding them to a tier, renaming them, showing their dependent volumes, or seeing their properties.


Figure 10-100 Volume properties: Member MDisks

6. When you have finished viewing the details, click Close to return to the All Volumes panel.

10.7.2 Creating a volume


To create a new volume, perform the following steps: 1. Go to the All Volumes panel from the SVC Welcome panel on Figure 10-1 on page 580, and click Volumes All Volumes. 2. Click New Volume (Figure 10-101).

Figure 10-101 New Volume action

3. Select one of the following presets, as shown in Figure 10-102 on page 635: Generic: Create volumes that use a set amount of capacity from the selected storage pool. Thin Provision: Create volumes whose capacity is large, but which only use the capacity that is written by the host application from the pool. Mirror: Create volumes with two physical copies that provide data protection. Each copy can belong to a different storage pool to protect data from storage failures.


Thin Mirror: Create volumes with two physical copies to protect data from failures while using only the capacity that is written by the host application. Note: For our example we chose the Generic preset. However, whatever the selected preset is, you have the opportunity afterwards to reconsider your decision by customizing the volume using the Advanced... button.

Figure 10-102 New volume: Select a Preset

4. After selecting a preset, in our example Generic, you must select the Storage Pool on which the data will be striped (Figure 10-103).

Figure 10-103 Select the Storage Pool

5. After the Storage Pool has been selected, the window will be updated automatically and you will have to select a volume name and size as shown in Figure 10-104 on page 636. Enter a name if you want to create a single volume, or a naming prefix if you want to create multiple volumes.


Volume name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The volume name can be between one and 63 characters in length. Enter the size of the volume that you want to create and select the capacity measurement (bytes, KB, MB, GB or TB) from the list. Note: An entry of 1 GB uses 1024 MB. An updated summary automatically appears in the bottom of the window to give you an idea of the space that will be used and that is remaining in the pool.

Figure 10-104 New volume: Select Name and Size

Various optional actions are available from this window: You can modify the Storage Pool by clicking Edit. In this case, you can select another storage pool. You can create additional volumes by clicking the button. This action can be repeated as many times as necessary. You can remove them by clicking the button. Note: When you create more than one volume, the wizard does not ask you for a name for each volume to be created. Instead, the name that you use here will become the prefix and have a number, starting at zero, appended to it as each volume is created. 6. You can activate and customize advanced features such as thin-provisioning or mirroring, depending on the preset you selected. To access these settings, click Advanced...: On the Characteristics tab (Figure 10-105 on page 637), you can set the following options: General: Format the new volume by selecting the Format Before Use check box (formatting writes zeros to the volume before it can be used; that is, it will write zeros to its MDisk extents). Locality: Choose an I/O Group and then select a preferred node.


OpenVMS only: Enter the UDID (OpenVMS). This field needs to be completed only for OpenVMS system. Note: Each OpenVMS fibre-attached volume requires a user-defined identifier or unit device identifier (UDID). A UDID is a nonnegative integer that is used when an OpenVMS device name is created. To recognize volumes, OpenVMS issues a UDID value, which is a unique numerical number.

Figure 10-105 Advanced Settings: Characteristics

On the Thin Provisioning tab (Figure 10-106 on page 638), after you activate thin provisioning by selecting the Thin provisioning check box, you can set the following options: Real: Type the Real size that you want to allocate. This size is the amount of disk space that will actually be allocated. It can either be a percentage of the virtual size or a specific number in GB. Automatically Expand: Select auto expand, which allows the real disk size to grow as required. Warning Threshold: Type a percentage or select a specific size for the usage threshold warning. It will generate a warning when the used disk capacity on the space-efficient copy first exceeds the specified threshold. Thin-Provisioned Grain size: Select the Grain size (32 KB, 64 KB, 128 KB or 256 KB). Smaller grain sizes save space and larger grain sizes produce better performance. Try to match the FlashCopy grain size if the volume will be used for FlashCopy.


Figure 10-106 Advanced Settings: Thin Provisioning

Important: If the Thin Provision or Thin Mirror preset is selected on the first page (Figure 10-102 on page 635), the Thin provisioning check box is already selected and the parameter presets are the following: Real: 2% of Virtual Capacity Automatically Expand: Selected Warning Threshold: Selected with a value 80% of Virtual Capacity Thin-Provisioned Grain size: 32 KB On the Mirroring tab (Figure 10-107 on page 639), after you activate mirroring by selecting the Create Mirrored Copy check box, you can set the following option: Mirror Sync Rate: Enter the Mirror Synchronization rate. It is the I/O governing rate in a percentage that determines how quickly copies are synchronized. A zero value disables synchronization. Important: If you activate this feature from the Advanced menu, you will have to select a secondary pool on the main window (Figure 10-104 on page 636). The Primary Pool is going to be used as the primary and preferred copy for read operations. The secondary pool will be used as the secondary copy.


Figure 10-107 Advanced Settings: Mirroring

Important: If the Mirror or Thin Mirror preset is selected on the first page (Figure 10-102 on page 635), the Mirroring check box is already selected and the parameter preset is the following: Mirror Sync Rate: 80% of Maximum 7. After all the advanced settings have been set, click OK to return to the main menu (Figure 10-104 on page 636). 8. Then, you have the choice to only create the volume using the Create button, or to create and map it using the Create and Map to Host button. If you select to only create the volume, you will return to the main All Volumes panel and you will see your volume created but not mapped (Figure 10-108). You can map it later.

Figure 10-108 Volume created without mapping

If you want to create and map it on the volume creation window, click the Continue button and another window opens. In the Modify Mappings window, select on which host you want to map this volume by using the drop-down button and then clicking Next (Figure 10-109 on page 640).


Figure 10-109 Select the host to which to map your volume

In the Modify Mappings window, verify the mapping. If you want to modify it, select the volume or volumes that you want to map to a host and move each of them to the right table using the right arrow, as shown in Figure 10-110. If you need to remove them, use the left arrow.

Figure 10-110 Modify Mappings window: Adding volumes to a host

In the right table, you can edit the SCSI ID by selecting a mapping that is highlighted in yellow, indicating that the mapping is new. Next, click Edit SCSI ID (shown in Figure 10-86 on page 626). Note: Only new mappings can have their SCSI ID changed. To edit an existing mapping SCSI ID, you must unmap the volume and recreate the map to the volume. In the Edit SCSI ID window, change the SCSI ID then click OK (Figure 10-111 on page 641).


Figure 10-111 Modify Mappings window: Edit SCSI ID

After all volumes that you wanted to map to this host have been added, click OK to create the Host mapping relationships and finalize the volume creation. You will return to the main All Volume window and see your volume created and mapped as shown in Figure 10-112.

Figure 10-112 Volume created with mapping
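The CLI equivalent of this wizard is the mkvdisk command described in Chapter 9. The following commands are illustrative only; the pool, I/O Group, and volume names are examples. The first command creates a generic volume; the second creates a thin-provisioned volume using parameters similar to the Thin Provision preset:

svctask mkvdisk -mdiskgrp ITSO_Pool1 -iogrp 0 -size 10 -unit gb -name volume_A
svctask mkvdisk -mdiskgrp ITSO_Pool1 -iogrp 0 -size 10 -unit gb -name volume_B -rsize 2% -autoexpand -grainsize 32 -warning 80%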

10.7.3 Renaming a volume


Perform the following steps to rename a volume: 1. Select the volume that you want to rename in the table. 2. Click Rename in the Actions menu (Figure 10-113 on page 642).


Figure 10-113 Rename Action

Tip: There are two other ways to rename a volume. You can right-click a volume and select Rename from the list, or you can use the method explained in 10.7.4, Modifying a volume on page 642.

3. In the Rename Volume window, type the new name that you want to assign to the volume, and click OK (Figure 10-114).

Figure 10-114 Renaming a volume

Volume name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The volume name can be between one and 63 characters in length.
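The same rename operation can also be scripted through the SVC CLI. The following line is a minimal sketch; the volume names VOLUME_OLD and VOLUME_NEW are hypothetical placeholders:
  svctask chvdisk -name VOLUME_NEW VOLUME_OLD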

10.7.4 Modifying a volume


To modify a volume, perform the following steps: 1. Select the volume that you want to modify in the table. 2. Click Properties in the Actions menu (Figure 10-115 on page 643).


Figure 10-115 Properties action

Tip: You can also right-click a volume and select Properties from the list.
3. In the Overview tab, click Edit to modify parameters for this volume (Figure 10-116 on page 644). From this window, you can modify the following parameters:
Volume Name: You can modify the volume name. You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore. The volume name can be between one and 63 characters in length.
I/O Group: You can select an alternate I/O Group from the list to alter the I/O Group to which the volume is assigned. You can also select the Force check box. This option changes the I/O Group even when the cache state is either Not Empty or Corrupt, and it stops synchronization for mirrored volumes.

Preferred node: You can change the preferred node for this volume. Hosts try to access the volume through the preferred node. By default, the system automatically balances the load between nodes.
Mirror Sync Rate: Change the Mirror Sync rate. It is the I/O governing rate, in a percentage, that determines how quickly copies are synchronized. A zero value disables synchronization.
Cache Mode: By clearing the check box, you disable the SVC cache for this volume (the read/write cache is disabled).
OpenVMS: Enter the UDID (OpenVMS). This field needs to be completed only for an OpenVMS system.
Note: Each OpenVMS fibre-attached volume requires a user-defined identifier or unit device identifier (UDID). A UDID is a nonnegative integer that is used when an OpenVMS device name is created. To recognize volumes, OpenVMS requires that each volume has a unique UDID value.


Figure 10-116 Modify a volume

4. Save the changes by clicking Save. 5. You can close the Properties window by clicking Close.
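Most of these properties can also be changed with the svctask chvdisk command on the CLI. The following lines are a sketch only; the volume name VOLUME1 and the values shown are hypothetical examples:
Change the Mirror Sync Rate:
  svctask chvdisk -syncrate 80 VOLUME1
Disable the read/write cache:
  svctask chvdisk -cache none VOLUME1
Set the OpenVMS UDID:
  svctask chvdisk -udid 1234 VOLUME1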

10.7.5 Modifying thin-provisioning volume properties


For thin-provisioned volumes, in addition to the properties that you can modify by following the instructions in 10.7.4, Modifying a volume on page 642, there are other properties specific to thin provisioning that you can modify by performing the following steps: 1. Depending on the case, use one of the following actions: For a non-mirrored volume: Select the volume and, in the Actions menu, click Volume Copy Actions → Thin Provisioned → Edit Properties as shown in Figure 10-117.

Figure 10-117 Non-mirrored volume: Thin-provisioned properties action menu


Tip: You can also right-click the volume and select Volume Copy Actions → Thin Provisioned → Edit Properties from the list.
For a mirrored volume: Select the thin-provisioned copy of the mirrored volume that you want to modify. In the Actions menu, click Thin Provisioned → Edit Properties as shown in Figure 10-118.

Figure 10-118 Mirrored volume: Thin-provisioned properties action menu

Tip: You can also right-click the thin-provisioned copy and select Thin Provisioned → Edit Properties from the list.

2. The Edit Properties: volumename (where volumename is the volume that you selected in the previous step) window opens (Figure 10-119). From this window, you are able to modify: Warning Threshold: Type a percentage. It will generate a warning when the used disk capacity on the thin-provisioned copy first exceeds the specified threshold. Automatically Expand: Autoexpand allows the real disk size to grow as required automatically.

Figure 10-119 Edit thin-provisioning properties window

Note: You can modify the real size of your thin-provisioned volume by using the GUI. Refer to 10.7.12, Shrinking the real capacity of a thin-provisioned volume on page 657 or 10.7.13, Expanding the real capacity of a thin provisioned volume on page 659, depending on your needs.
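If you prefer the CLI, the warning threshold and autoexpand settings of a thin-provisioned copy can also be changed with svctask chvdisk. This is a hedged sketch; the volume name, the copy ID, and the value are hypothetical:
  svctask chvdisk -warning 85% -copy 0 VOLUME1
  svctask chvdisk -autoexpand on -copy 0 VOLUME1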


10.7.6 Deleting a volume


To delete a volume, perform the following steps: 1. Select the volume or volumes that you want to delete in the table. 2. Click Delete in the Actions menu (Figure 10-120).

Figure 10-120 Delete Action

Tip: You can also right-click a volume and select Delete from the list.

3. The Delete Volume window opens as shown in Figure 10-121 on page 647. In the field Verify the number of volumes that you are deleting, enter a value matching the number of volumes that you want to remove. This verification helps to prevent deleting the wrong volumes.
Important: Deleting a volume is a destructive action for the user data residing in that volume.
If the volume (or volumes) is still associated with a host (or hosts), or is used in FlashCopy or remote copy relationships, and you definitely want to delete it, select the Delete the volume even if it has host mappings or is used in FlashCopy mappings or remote-copy relationships option. Click Delete to complete the operation (Figure 10-121 on page 647).


Figure 10-121 Delete Volume
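The equivalent CLI command is svctask rmvdisk. A minimal sketch with a hypothetical volume name follows; the -force flag corresponds to the option that deletes the volume even if it still has host mappings or copy relationships, so use it with care:
  svctask rmvdisk VOLUME1
  svctask rmvdisk -force VOLUME1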

10.7.7 Creating or modifying the host mapping


To create or modify a host mapping, perform the following steps: 1. Select the volume in the table. 2. Click Map to Host in the Actions menu (Figure 10-122). Tip: You can also right-click a volume and select Map to Host from the list.

Figure 10-122 Map to Host action

3. On the Modify Mappings window, select the host on which you want to map this volume using the drop-down button and then click Next (Figure 10-109 on page 640).


Figure 10-123 Select the host to which you want to map your volume

4. On the Modify Mappings window, verify the mapping. If you want to modify it, select the volume or volumes that you want to map to a host and move each of them to the right table using the right arrow as shown in Figure 10-124. If you need to remove them, use the left arrow.

Figure 10-124 Modify Mappings window: Adding volumes to a host

In the right table, you can edit the SCSI ID. Select a mapping that is highlighted in yellow, which indicates that the mapping is new, and click Edit SCSI ID (shown in Figure 10-86 on page 626).
Note: Only new mappings can have their SCSI ID changed. To edit the SCSI ID of an existing mapping, you must unmap the volume and then map it to the host again.
In the Edit SCSI ID window, change the SCSI ID and then click OK (Figure 10-125 on page 649).


Figure 10-125 Modify Mappings window: Edit SCSI ID

5. After all the volumes you want to map to this host have been added, click OK. You will return to the main All Volumes panel.
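A host mapping can also be created from the CLI with svctask mkvdiskhostmap. This is a sketch only; HOST1 and VOLUME1 are hypothetical names, and the -scsi parameter is optional:
  svctask mkvdiskhostmap -host HOST1 -scsi 0 VOLUME1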

10.7.8 Deleting a host mapping


Note: Before deleting a host mapping, make sure that the host is no longer using that disk. Unmapping a disk from a host does not destroy the disk's contents. Unmapping a disk has the same effect as powering off the computer without first performing a clean shutdown and, thus, might leave the data in an inconsistent state. Also, any running application that was using the disk will begin to receive I/O errors.
To delete a host mapping to a volume, perform the following steps: 1. Select the volume in the table. 2. Click Properties in the Actions menu (Figure 10-126 on page 650).


Figure 10-126 Volume Properties

Tip: You can also right-click a volume and select Properties from the list. 3. On the Properties window, click the Host Maps tab (Figure 10-127).

Figure 10-127 Host Maps window

Note: You can also access this window by selecting the volume in the table and clicking View Mapped Hosts in the Actions menu (Figure 10-128).


Figure 10-128 View Mapped Hosts

4. Select the host mapping or mappings that you want to remove. 5. Click Unmap from Host (Figure 10-129).

Figure 10-129 Host Maps window: Unmap from Host action

In the Unmap Host window (Figure 10-130 on page 652), in the field Verify the number of hosts that this operation affects:, enter a value matching the number of hosts affected by this operation. This verification helps to prevent unmapping the wrong hosts.


Figure 10-130 Unmap Host

6. Click Unmap to remove the host mapping or mappings. This action returns you to the Host Maps window. 7. Click Close to return to the main All Volumes panel.
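The same unmap operation is available on the CLI as svctask rmvdiskhostmap. A minimal sketch, using hypothetical host and volume names:
  svctask rmvdiskhostmap -host HOST1 VOLUME1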

10.7.9 Deleting all host mappings for a given volume


To delete all host mappings for a given volume, perform the following steps: 1. Select the volume in the table. 2. Click Unmap All Hosts in the Actions menu (Figure 10-131).

Figure 10-131 Unmap All Hosts from Actions menu

Tip: You can also right-click a volume and select Unmap All Hosts from the list.


3. In the Unmap from Hosts window (Figure 10-132), in the field Verify the number of mappings that this operation affects:, enter a value matching the number of mappings that you want to remove. This verification helps to prevent removing the wrong mappings.

Figure 10-132 Unmap from Hosts window

4. Click Unmap to remove the host mapping or mappings. This action returns you to the All Volumes panel.

10.7.10 Shrinking a volume


Important: For thin-provisioned volumes, using this method to shrink a volume results in shrinking its virtual capacity. To shrink its real capacity, refer to the information provided in 10.7.12, Shrinking the real capacity of a thin-provisioned volume on page 657. The method that the SVC uses to shrink a volume is to remove the required number of extents from the end of the volume. Depending on where the data actually resides on the volume, this action can be quite destructive. For example, you might have a volume that consists of 128 extents (0 to 127) of 16 MB (2 GB capacity), and you want to decrease the capacity to 64 extents (1 GB capacity). In this case, the SVC simply removes extents 64 to 127. Depending on the operating system, there is no easy way to ensure that your data resides entirely on extents 0 through 63, so be aware that you might lose data. Although easily done using the SVC, you must ensure that your operating system supports shrinking, either natively or by using third-party tools, before using this function. In addition, it is good practice to always have a good current backup before you execute this task. Shrinking a volume is useful in certain circumstances, such as: Reducing the size of a candidate target volume of a copy relationship to make it the same size as the source Releasing space from volumes to have free extents in the Storage Pool, provided that you do not use that space any more and take precautions with the remaining data


Assuming your operating system supports it, perform the following steps to shrink a volume: 1. Perform any necessary steps on your host to ensure that you are not using the space that you are about to remove. 2. Select the volume that you want to shrink in the table. 3. Click Shrink in the Actions menu (Figure 10-133).

Figure 10-133 Shrink Action

Tip: You can also right-click a volume and select Shrink from the list.

4. The Shrink Volume: volumename window (where volumename is the volume that you selected in the previous step) opens. See Figure 10-134 on page 655. You can either enter how much you want to shrink the volume using the field Shrink By or you can directly enter the final size that you want to use for the volume using the field Final Size. The other field will be computed automatically. For example, if you have a 20 GB disk and you want it to become 15 GB, you can specify 5 GB in Shrink By field or you can directly specify 15 GB in Final Size field as shown in Figure 10-134 on page 655. 5. When you are finished, click Shrink as shown in Figure 10-134 on page 655, and the changes become visible on your host.


Figure 10-134 Shrinking a volume
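On the CLI, the virtual capacity of a volume is reduced with svctask shrinkvdisksize. The following sketch shrinks a hypothetical volume by 5 GB, matching the example above:
  svctask shrinkvdisksize -size 5 -unit gb VOLUME1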

10.7.11 Expanding a volume


Important: For thin-provisioned volumes, using this method results in expanding its virtual capacity. If you want to expand its real capacity, see 10.7.13, Expanding the real capacity of a thin provisioned volume on page 659.
Expanding a volume presents a larger capacity disk to your operating system. Although you can expand a volume easily using the SVC, you must ensure that your operating system is prepared for it and supports the volume expansion before you use this function. Dynamic expansion of a volume is only supported when the volume is in use by one of the following operating systems:
AIX 5L V5.2 and higher
Microsoft Windows Server 2000, Windows Server 2003, and Windows Server 2008 for basic disks
Microsoft Windows Server 2000, Windows Server 2003 with a hot fix from Microsoft (Q327020) for dynamic disks, and Windows Server 2008
If your operating system supports it, perform the following steps to expand a volume: 1. Select the volume in the table. 2. Click Expand in the Actions menu (Figure 10-135 on page 656).


Figure 10-135 Expand Action

Tip: You can also right-click a volume and select Expand from the list.

3. The Expand Volume: volumename window (where volumename is the volume that you selected in the previous step) opens; see Figure 10-136 on page 657. You can either enter how much you want to enlarge the volume by using the field Expand By, or you can directly enter the final size that you want to use for the volume by using the field Final Size. The other field will be computed automatically. For example, if you have a 10 GB disk and you want it to become 20 GB, you can specify 10 GB in the Expand By field or you can directly specify 20 GB in the Final Size field as shown in Figure 10-136 on page 657.
Volume expansion notes:
No support exists for the expansion of image mode volumes.
If there are insufficient extents to expand your volume to the specified size, you receive an error message.
If you use volume mirroring, all copies must be synchronized before expanding.
4. When you are finished, click Expand (see Figure 10-136 on page 657).


Figure 10-136 Expanding a volume
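The CLI equivalent is svctask expandvdisksize, which specifies how much capacity to add to the current virtual size. A sketch for the 10 GB example above, with a hypothetical volume name:
  svctask expandvdisksize -size 10 -unit gb VOLUME1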

10.7.12 Shrinking the real capacity of a thin-provisioned volume


Important: From a host's perspective, the virtual capacity shrinkage of a volume (see 10.7.10, Shrinking a volume on page 653) affects host access. To determine these impacts, see 10.7.10, Shrinking a volume on page 653. The real capacity shrinkage of a volume, described in this section, is transparent to the hosts.
To shrink the real size of a thin-provisioned volume, perform the following steps: 1. Depending on the case, use one of the following actions: For a non-mirrored volume: Select the volume and, in the Actions menu, click Volume Copy Actions → Thin Provisioned → Shrink as shown in Figure 10-137.

Figure 10-137 Non-mirrored volume: Thin provisioned shrink action menu

Tip: You can also right-click the volume and select Volume Copy Actions → Thin Provisioned → Shrink from the list.


For a mirrored volume: Select the thin-provisioned copy of the mirrored volume that you want to modify and, in the Actions menu, click Thin Provisioned → Shrink as shown in Figure 10-138.

Figure 10-138 Mirrored volume: Thin-provisioned shrink action menu

Tip: You can also right-click the thin-provisioned copy and select Thin Provisioned → Shrink from the list.

2. The Shrink Volume: volumename window (where volumename is the volume that you selected in the previous step) opens; see Figure 10-139. You can either enter how much you want to shrink the volume by using the field Shrink By, or you can directly enter the final real capacity that you want to use for the volume by using the field Final Real Capacity. The other field will be computed automatically. For example, if you have a current real capacity equal to 118.8 MB and you want a final real size equal to 10 MB, you can specify 108.8 MB in the Shrink By field, or you can directly specify 10 MB in the Final Real Capacity field as shown in Figure 10-139. 3. When you are finished, click Shrink (Figure 10-139). The real capacity of the volume is reduced; this change is transparent to the host.

Figure 10-139 Shrink real capacity window
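On the CLI, the real capacity of a thin-provisioned copy is reduced with the -rsize parameter of svctask shrinkvdisksize. A hedged sketch that reduces the real capacity of a hypothetical volume by 100 MB (the copy ID is also hypothetical):
  svctask shrinkvdisksize -rsize 100 -unit mb -copy 0 VOLUME1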


10.7.13 Expanding the real capacity of a thin provisioned volume


Important: From a host's perspective, the virtual capacity expansion of a volume (10.7.11, Expanding a volume on page 655) affects host access. To understand these impacts, see 10.7.11, Expanding a volume on page 655. The real capacity expansion of a volume, described in this section, is transparent to the hosts.
To expand the real size of a thin-provisioned volume, perform the following steps: 1. Depending on the case, use one of the following actions: For a non-mirrored volume: Select the volume and, in the Actions menu, click Volume Copy Actions → Thin Provisioned → Expand (Figure 10-140).

Figure 10-140 Non-mirrored volume: Thin provisioned expand action menu

Tip: You can also right-click the volume and select Volume Copy Actions → Thin Provisioned → Expand from the list.

For a mirrored volume: Select the thin-provisioned copy of the mirrored volume that you want to modify and, in the Actions menu, click Thin Provisioned → Expand (Figure 10-141).

Figure 10-141 Mirrored volume: Thin provisioned expand action menu

Tip: You can also right-click the thin-provisioned copy and select Thin Provisioned → Expand from the list.


2. The Expand Volume: volumename window (where volumename is the volume that you selected in the previous step) opens (Figure 10-142). You can either enter how much you want to expand the volume by using the field Expand By, or you can directly enter the final real capacity that you want to use for the volume by using the field Final Real Capacity. The other field will be computed automatically. For example, if you have a current real capacity equal to 10 MB and you want a final real size equal to 100 MB, you can specify 90 MB in the Expand By field or you can directly specify 100 MB in the Final Real Capacity field, as shown in Figure 10-142. 3. When you are finished, click Expand (Figure 10-142). The real capacity of the volume is increased; this change is transparent to the host.

Figure 10-142 Expand real capacity window
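Similarly, the real capacity can be increased from the CLI by using the -rsize parameter of svctask expandvdisksize. A hedged sketch with hypothetical values and names:
  svctask expandvdisksize -rsize 90 -unit mb -copy 0 VOLUME1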

10.7.14 Migrating a volume


To migrate a volume, perform the following steps: 1. Select the volume that you want to migrate in the table. 2. Click Migrate to Another Pool in the Actions menu (Figure 10-143).

Figure 10-143 Migrate to Another Pool action


Tip: You can also right-click a volume and select Migrate to Another Pool from the list. 3. The Migrate Volume Copy window opens (Figure 10-144). Select the Storage Pool to which you want to reassign the volume. You will only be presented with a list of Storage Pools with the same extent size. 4. When you have finished making your selections, click Migrate to begin the migration process.

Figure 10-144 Migrate Volume Copy window

Important: After a migration starts, you cannot stop it. Migration continues until it is complete unless it is stopped or suspended by an error condition, or the volume that is being migrated is deleted.

5. You can check the migration using the Running Tasks menu (Figure 10-145 on page 662).


Figure 10-145 Long Running Tasks Area

To expand this area, click the icon and then click Migration. Figure 10-146 shows a detailed view of the running tasks.

Figure 10-146 Long Running Task: Volume migration

6. When the migration is finished, the volume will be part of the new pool.
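The equivalent CLI operation is svctask migratevdisk, and the progress can be followed with svcinfo lsmigrate. A sketch with hypothetical volume and pool names:
  svctask migratevdisk -vdisk VOLUME1 -mdiskgrp POOL2
  svcinfo lsmigrate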

10.7.15 Adding a mirrored copy to an existing volume


You can add a mirrored copy to an existing volume. This will give you two copies of the underlying disk extents. Tip: You can also create a new mirrored volume by selecting the Mirror or Thin Mirror preset during the volume creation, as shown in Figure 10-102 on page 635.


You can use a volume mirror for any operation for which you can use a volume. It is transparent to higher-level operations, such as Metro Mirror, Global Mirror, or FlashCopy. Creating a volume mirror from an existing volume is not restricted to the same Storage Pool, so it is an ideal method to use to protect your data from a disk system or an array failure. If one copy of the mirror fails, it provides continuous data access to the other copy. When the failed copy is repaired, the copies automatically resynchronize. You can also use a volume mirror as an alternative migration tool, where you can synchronize the mirror before splitting off the original side of the mirror. The volume stays online, and can be used normally, while the data is being synchronized. The copies can also be of separate structures (that is, striped, image, sequential, or space-efficient) and use separate extent sizes. To create a mirror copy of a volume, perform the following steps: 1. Select the volume in the table. 2. In the Actions menu, click Volume Copy Actions → Add Mirrored Copy (Figure 10-147).

Figure 10-147 Add Mirrored Copy actions

Tip: You can also right-click a volume and select Volume Copy Actions and then Add Mirrored Copy from the list.
3. The Add Volume Copy: volumename window (where volumename is the volume that you selected in the previous step) opens (Figure 10-148 on page 664). You can perform the following steps separately or in combination:
Select the Storage Pool in which you want to put the copy. To maintain higher availability, choose a separate group.
Select the Enable Thin Provisioning check box to make the copy space-efficient. The following parameters are used for this thin-provisioned copy:
Real Size: 2% of Virtual Capacity
Automatically Expand: Active
Warning Threshold: 80% of Virtual Capacity
Thin-Provisioned Grain size: 32 KB


Note: Real Size, Auto expand, and Warning Threshold can be changed only after the thin-provisioned volume copy has been added. For information about modifying the real size of your thin-provisioned volume, see 10.7.12, Shrinking the real capacity of a thin-provisioned volume on page 657 and 10.7.13, Expanding the real capacity of a thin provisioned volume on page 659. For information about modifying the Auto expand and Warning Threshold of your thin provisioned volume, see 10.7.5, Modifying thin-provisioning volume properties on page 644. 4. Click Add Copy (Figure 10-148).

Figure 10-148 Add Copy to volume window

5. You can check the synchronization progress by using the Running Tasks menu (see Figure 10-145 on page 662). To expand this Status Area, click the icon and click Volume Synchronization. Figure 10-149 on page 665 shows a detailed view of the running tasks.


Figure 10-149 Running Task: Volume Synchronization

Note: You can change the Mirror Sync Rate (the default is 50%) by modifying the volume properties. For more information, see 10.7.4, Modifying a volume on page 642. 6. When synchronization is finished, the volume will be part of the new pool (Figure 10-150).

Figure 10-150 Mirrored volume

Note: As shown in Figure 10-150, the primary copy is identified with an asterisk (*). In this example, Copy 0 is the primary copy.
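A mirrored copy can also be added from the CLI with svctask addvdiskcopy, and the synchronization can be monitored with svcinfo lsvdisksyncprogress. A sketch with hypothetical volume and pool names:
  svctask addvdiskcopy -mdiskgrp POOL2 VOLUME1
  svcinfo lsvdisksyncprogress VOLUME1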

10.7.16 Deleting a mirrored copy from a volume mirror


To remove a volume copy, perform the following steps: 1. Select the volume copy that you want to remove in the table and in the Actions menu, click Delete this Copy (Figure 10-151 on page 666).


Figure 10-151 Delete this Copy action

Tip: You can also right-click a volume and select Delete this Copy from the list.

2. The Warning window opens (Figure 10-152). Click OK to confirm your choice.

Figure 10-152 Warning window

Note: If you try to remove the primary copy before it has been synchronized with the other copy, you will receive the message: The command failed because the copy specified is the only synchronized copy. You must wait until the end of the synchronization to be able to remove this copy. 3. The copy is now deleted.
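From the CLI, a copy is removed with svctask rmvdiskcopy; the copy ID (0 or 1) identifies which copy to delete. A sketch with hypothetical values:
  svctask rmvdiskcopy -copy 1 VOLUME1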

10.7.17 Splitting a volume copy


To split off a synchronized volume copy to a new volume, perform the following steps: 1. Select the volume copy that you want to split in the table and in the Actions menu, click Split into New Volume (Figure 10-153 on page 667).


Figure 10-153 Split into New Volume action

Tip: You can also right-click a volume and select Split into New Volume from the list.

2. The Split Volume Copy window opens (Figure 10-154). In this window, type a name for the new volume. Volume name: If you do not provide a name, the SVC automatically generates the name vdiskx (where x is the ID sequence number that is assigned by the SVC internally). If you want to provide a name, you can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore. The volume name can be between one and 63 characters in length. 3. Click Split Volume Copy (Figure 10-154).

Figure 10-154 Split Volume Copy window

4. This new volume is now available to be mapped to a host. Important: After you split a volume mirror, you cannot resynchronize or recombine them. You must create a volume copy from scratch.
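The same split can be performed on the CLI with svctask splitvdiskcopy. A sketch, with hypothetical names, that splits copy 1 off into a new volume:
  svctask splitvdiskcopy -copy 1 -name VOLUME1_SPLIT VOLUME1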

10.7.18 Validating volume copies


To validate the copies of a mirrored volume, perform the following steps: 1. Select a copy of this volume in the table and in the Actions menu, click Validate Volume Copies (Figure 10-155 on page 668).


Figure 10-155 Validate Volume Copies actions

2. The Validate Volume Copies window opens (Figure 10-156). In this window, select one of the following options:
Generate Event of differences: Use this option if you only want to verify that the mirrored volume copies are identical. If any difference is found, the command stops and logs an error that includes the logical block address (LBA) and the length of the first difference. You can use this option, starting at a different LBA each time, to count the number of differences on a volume.
Overwrite differences: Use this option to overwrite contents from the primary volume copy to the other volume copy. The command corrects any differing sectors by copying the sectors from the primary copy to the copies being compared. Upon completion, the command process logs an event, which indicates the number of differences that were corrected. Use this option if you are sure that either the primary volume copy data is correct, or that your host applications can handle incorrect data.
Return Media Error to Host: Use this option to convert sectors on all volume copies that contain different contents into virtual medium errors. Upon completion, the command logs an event, which indicates the number of differences that were found, the number that were converted into medium errors, and the number that were not converted. Use this option if you are unsure what the correct data is, and you do not want an incorrect version of the data to be used.

Figure 10-156 Validate Volume Copies


3. Click Validate (Figure 10-156 on page 668). 4. The volume is now checked.
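The three validation options map to the flags of the svctask repairvdiskcopy CLI command. A sketch with a hypothetical volume name; only one flag can be used at a time:
Generate events for differences:
  svctask repairvdiskcopy -validate VOLUME1
Overwrite differences from the primary copy:
  svctask repairvdiskcopy -resync VOLUME1
Return media errors to the host:
  svctask repairvdiskcopy -medium VOLUME1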

10.7.19 Migrating to a thin-provisioned volume using volume mirroring


To migrate a volume to a thin-provisioned volume, perform the following steps: 1. Select the volume in the table. 2. In the Actions menu, click Volume Copy Actions → Add Mirrored Copy (Figure 10-157).

Figure 10-157 Add Mirrored Copy actions

Tip: You can also right-click a volume and select Volume Copy Actions and then Add Mirrored Copy from the list.
3. The Add Volume Copy: volumename window (where volumename is the volume that you selected in the previous step) opens (Figure 10-158 on page 670). You can perform the following steps separately or in combination:
Select the Storage Pool in which you want to put the copy. To maintain higher availability, choose a separate group.
Select the Enable Thin Provisioning check box to make the copy space-efficient. The following parameters are used for this thin-provisioned copy:
Real Size: 2% of Virtual Capacity
Automatically Expand: Active
Warning Threshold: 80% of Virtual Capacity
Thin-Provisioned Grain size: 32 KB
Note: Real Size, Auto expand, and Warning Threshold can be changed after the volume copy has been added in the GUI. For the Thin-Provisioned Grain size, you need to use the CLI.
4. Click Add Copy.


Figure 10-158 Add Copy to volume window

5. You can check the migration using the Running Tasks Status Area menu as shown in Figure 10-145 on page 662. To expand this Status Area, click the icon and click Volume Synchronization. Figure 10-159 shows the detailed view of the running tasks.

Figure 10-159 Running Task: Volume Synchronization

Note: You can change the Mirror Sync Rate (the default is 50%) by modifying the volume properties. For more information, see 10.7.4, Modifying a volume on page 642. 6. When the synchronization is finished, select the non thin-provisioned copy that you want to remove in the table and, in the Actions menu, click Delete this Copy (Figure 10-160).


Figure 10-160 Delete this Copy window

Tip: You can also right-click a volume and select Delete this Copy from the list. 7. The Warning window opens (Figure 10-161). Click OK to confirm your choice.

Figure 10-161 Warning window

Note: If you try to remove the primary copy before it has been synchronized with the other one, you will receive the following message: The command failed because the copy specified is the only synchronized copy. You must wait until the end of the synchronization to be able to remove this copy.

8. When the copy is deleted, your thin-provisioned volume is ready to be used. At this point, you have completed the required tasks to manage volumes within an SVC environment.
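As a recap, the whole migration described in this section can also be scripted on the CLI: add a thin-provisioned copy, wait for the synchronization to complete, and then remove the fully allocated copy. The following lines are a sketch only; the volume name, pool name, and copy ID are hypothetical, and the last command must be run only after the new copy is synchronized:
  svctask addvdiskcopy -mdiskgrp POOL2 -rsize 2% -autoexpand -warning 80% -grainsize 32 VOLUME1
  svcinfo lsvdisksyncprogress VOLUME1
  svctask rmvdiskcopy -copy 0 VOLUME1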

10.7.20 Creating a volume in image mode


Refer to Chapter 6, Data migration on page 227 for the steps required to create a volume in image mode.

10.7.21 Migrating a volume to an image mode volume


Refer to Chapter 6, Data migration on page 227 for the steps required to migrate a volume to an image mode volume.

10.7.22 Creating an image mode mirrored volume


Refer to Chapter 6, Data migration on page 227 for the steps required to create an image mode mirrored volume.

10.8 Copy Services: managing FlashCopy


It is often easier to work with FlashCopy by using the GUI if you have a small number of mappings. When using many mappings, however, use the CLI to execute your commands.
Note: See Chapter 8, Advanced Copy Services on page 363 for more information about the functionality of Copy Services in the SVC environment.
In this section, we describe the tasks that you can perform at a FlashCopy level. There are three ways to visualize and manage your FlashCopy mappings: By using the FlashCopy panel (Figure 10-162). In its basic mode, the IBM FlashCopy function copies the contents of a source volume to a target volume. Any data that existed on the target volume is lost and is replaced by the copied data.

Figure 10-162 FlashCopy panel

By using the Consistency Groups panel (Figure 10-163 on page 673). A Consistency Group is a container for mappings. You can add many mappings to a Consistency Group.


Figure 10-163 Consistency Groups panel

By using the FlashCopy Mappings panel (Figure 10-164 on page 674). A FlashCopy mapping defines the relationship between a source volume and a target volume.


Figure 10-164 FlashCopy Mappings panel

10.8.1 Creating a FlashCopy Mapping


In this section, we create FlashCopy mappings for volumes with their respective targets. To perform this action, follow these steps: 1. From the SVC Welcome panel, click Copy Services FlashCopy. The FlashCopy panel opens (Figure 10-165 on page 675).


Figure 10-165 FlashCopy panel

2. Select the volume that you want to create the FlashCopy relationship for (Figure 10-166). Note: To create many FlashCopy mappings at one time, select multiple volumes by holding down the Ctrl key and using the mouse to select the entries that you want.

Figure 10-166 FlashCopy mapping: Select the volume (or volumes)


Depending on whether or not you have already created the target volumes for your FlashCopy mappings, there are two options: If you have already created the target volumes, see Using existing target volumes on page 676. If you want the SVC to create the target volumes for you, see Creating new target volumes on page 680.

Using existing target volumes


1. Click Advanced FlashCopy... and then click Use existing target volumes in the Actions menu (Figure 10-167).

Figure 10-167 Use existing target volumes action

2. The New FlashCopy Mapping window opens (see Figure 10-168). In this window, you have to create the relationship between the source volume (the disk that is copied) and the target volume (the disk that receives the copy). A mapping can be created between any two volumes in a cluster. Select a volume in the Target Volumes column using the drop-down list for your selected Source Volume, and then click the Add button (Figure 10-195 on page 692). If you need to create other relationships, repeat this action. Important: The source and target volumes must be of equal size. So, for a given source volume, only targets of the appropriate size are visible.

Figure 10-168 New FlashCopy Mapping

To remove a relationship that has been created, use the remove button (Figure 10-169 on page 677).


Note: The volumes do not have to be in the same I/O group or storage pool. 3. Click Next after all relationships that you wanted to create are registered (Figure 10-169).

Figure 10-169 New FlashCopy Mapping with relations created

4. On the next window, select one FlashCopy preset. The GUI provides three presets (Snapshot, Clone, and Backup) to simplify the more common FlashCopy operations (Figure 10-170). The presets and their use cases are described here:
Snapshot: Creates a copy-on-write point-in-time copy of the source volume.
Clone: Creates an exact replica of the source volume on a target volume. The copy can be changed without impacting the original volume.
Backup: Creates a FlashCopy mapping that can be used to recover data or objects if the system experiences data loss. These backups can be copied multiple times from source and target volumes.

Figure 10-170 New FlashCopy Mapping window

For whichever preset you select, you can customize various advanced options. You access these settings by clicking Advanced Settings (Figure 10-171 on page 678). If you prefer not to customize these settings, go directly to step 5 on page 678.


You can customize the following options, as shown in Figure 10-171:
Background Copy Rate: This determines the priority that is given to the copy process. A faster rate increases the priority of the process, which can affect the performance of other operations.
Incremental: This copies only the parts of the source or target volumes that have changed since the last copy. Incremental copies reduce the completion time of the copy operation.
Note: Even if the type of the FlashCopy mapping is incremental, the first copy process copies all of the data from the source to the target volume.
Delete after completion: This automatically deletes a FlashCopy mapping after the background copy is completed. Do not use this option when the background copy rate is set to zero (0).
Cleaning Rate: This minimizes the amount of time that a mapping is in the stopping state. If the mapping has not completed, the target volume is offline while the mapping is stopping.

Figure 10-171 New FlashCopy Mapping Advanced Settings

5. If you want to include this FlashCopy mapping in a Consistency Group, in the window that shown in Figure 10-172 on page 679, select Yes, add the mappings to a Consistency Group and also select the Consistency Group from the drop-down list.


Figure 10-172 Add the mappings to a Consistency Group

If you do not want to include this FlashCopy mapping in a Consistency Group, select No, do not add the mappings to a Consistency Group (Figure 10-173).

Figure 10-173 Do not add the mappings to a Consistency Group

6. Then click Finish as shown in Figure 10-172 and Figure 10-173. 7. Check the result of this FlashCopy mapping (Figure 10-174 on page 680). For each FlashCopy mapping relationship created, a mapping name is automatically generated starting with fcmapX, where X is an available number. If needed, you can rename these mappings; see 10.8.11, Renaming a FlashCopy mapping on page 699, for more information about this topic.


Figure 10-174 Flash Copy Mapping

At this point, the FlashCopy mapping is now ready to be used.
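When the target volume already exists, the same mapping can be created from the CLI with svctask mkfcmap. A sketch with hypothetical volume and mapping names; the copy rate and the incremental flag correspond to the advanced settings described above:
  svctask mkfcmap -source VOLUME1 -target VOLUME1_TGT -copyrate 50 -incremental -name fcmap_vol1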

Creating new target volumes


1. If you have not created a target volume for this source volume, click Advanced FlashCopy... and then Create new target volumes in the Actions menu (Figure 10-175). Note: If the target volume does not exist, it will be created with a name based on its source volume and a generated number at the end, for example: source_volume_name_XX, where XX is a number generated dynamically.

Figure 10-175 Create new target volumes action

2. On the New FlashCopy Mapping window (Figure 10-176 on page 681), you need to select one FlashCopy preset. The GUI interface provides three presets (Snapshot, Clone, Backup) to simplify the more common FlashCopy operations.


The presets and their use cases are described here (Figure 10-176):
Snapshot: Creates a copy-on-write point-in-time copy of the source volume.
Clone: Creates an exact replica of the source volume on a target volume. The copy can be changed without impacting the original volume.
Backup: Creates a FlashCopy mapping that can be used to recover data or objects if the system experiences data loss. These backups can be copied multiple times from source and target volumes.

Figure 10-176 New FlashCopy Mapping window

Whichever preset you select, you can customize various advanced options. To access these settings, click Advanced Settings (Figure 10-177 on page 682). If you prefer not to customize these settings, go directly to step 3 on page 682.
You can customize the following options, as shown in Figure 10-177 on page 682:
Background Copy Rate: This determines the priority that is given to the copy process. A faster rate increases the priority of the process, which can affect the performance of other operations.
Incremental: This copies only the parts of the source or target volumes that have changed since the last copy. Incremental copies reduce the completion time of the copy operation.
Note: Even if the type of the FlashCopy mapping is incremental, the first copy process copies all of the data from the source to the target volume.
Delete after completion: This automatically deletes a FlashCopy mapping after the background copy is completed. Do not use this option when the background copy rate is set to zero (0).
Cleaning Rate: This minimizes the amount of time that a mapping is in the stopping state. If the mapping has not completed, the target volume is offline while the mapping is stopping.


Figure 10-177 New FlashCopy Mapping Advanced Settings

3. If you want to include this FlashCopy mapping in a Consistency Group, in the next window select Yes, add the mappings to a Consistency Group and select the Consistency Group in the drop-down list (Figure 10-178). If you do not want to include this FlashCopy mapping in a Consistency Group, select No, do not add the mappings to a Consistency Group. Choose whichever option you prefer, then click Next (Figure 10-178).

Figure 10-178 Add the mappings to a Consistency Group

4. In the next window (Figure 10-179 on page 683), select the storage pool that is used to automatically create new targets. You can choose to use the same storage pool that is used by the source volume, or you can select it from a list. In that case, select one storage pool and then click Next.


Figure 10-179 Select the storage pool

5. Select whether you want the target volume to use thin provisioning. There are three choices available, as shown in Figure 10-180 on page 684:
Yes, in which case you enter the following parameters:
Real: Type the real size that you want to allocate. This size is the amount of disk space that will actually be allocated. It can either be a percentage of the virtual size or a specific number in GB.
Automatically Expand: Select auto expand, which allows the real disk size to grow as required.
Warning Threshold: Type a percentage or select a specific size for the usage threshold warning. It will generate a warning when the used disk capacity on the space-efficient copy first exceeds the specified threshold.
No
Inherit properties from source volume
Click Finish to complete the FlashCopy Mapping operation.


Figure 10-180 Thin provisioning option

6. Check the result of this FlashCopy mapping, as shown in Figure 10-181. For each FlashCopy mapping relationship created, a mapping name is automatically generated starting with fcmapX where X is an available number. If needed, you can rename these mappings; see 10.8.11, Renaming a FlashCopy mapping on page 699.

Figure 10-181 FlashCopy mapping

At this point, the FlashCopy mapping is ready to be used. Tip: You can invoke FlashCopy from the SVC GUI, but using the SVC GUI might be impractical if you plan to handle a large number of FlashCopy mappings or Consistency Groups periodically, or at varying times. In such cases, creating a script by using the CLI might be more convenient.
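The equivalent CLI workflow creates the target volume first (it must match the virtual size of the source) and then creates the mapping. A sketch with hypothetical names, size, and pool:
  svctask mkvdisk -mdiskgrp POOL1 -iogrp io_grp0 -size 10 -unit gb -name VOLUME1_TGT
  svctask mkfcmap -source VOLUME1 -target VOLUME1_TGT -name fcmap_vol1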


10.8.2 Creating and starting a snapshot preset with a single click


To create and start a snapshot with one click, perform these steps.
Note: The snapshot creates a point-in-time view of production data. The snapshot is not intended to be an independent copy, but instead is used to maintain a view of the production data at the time the snapshot is created. Therefore, the snapshot holds only the data from regions of the production volume that have changed since the snapshot was created. Because the snapshot preset uses thin provisioning, only the capacity that is required for the changes is used.
Snapshot preset parameters:
Background Copy: No
Incremental: No
Delete after completion: No
Cleaning rate: No
Target pool: primary copy source pool
1. From the SVC Welcome panel, click Copy Services in the left menu and then click the FlashCopy panel. 2. Select the volume that you want to snapshot. 3. Click New Snapshot in the Actions menu (Figure 10-182).

Figure 10-182 New Snapshot option

4. A volume is created as a target volume for this snapshot in the same pool as the source volume. The FlashCopy mapping is created and it is started. You can check the FlashCopy progress in the Progress column or in the Running Tasks column as shown in Figure 10-183 on page 686.


Figure 10-183 Snapshot created and started

10.8.3 Creating and starting a clone preset with a single click


To create and start a clone with one click, perform these steps.
Note: The clone preset creates an exact replica of the volume, which can be changed without impacting the original volume. After the copy completes, the mapping that was created by the preset is automatically deleted.
Clone preset parameters:
Background Copy rate: 50
Incremental: No
Delete after completion: Yes
Cleaning rate: 50
Target pool: primary copy source pool
1. From the SVC Welcome panel, click Copy Services in the left menu and then click the FlashCopy panel. 2. Select the volume that you want to clone. 3. Click New Clone in the Actions menu (Figure 10-184 on page 687).


Figure 10-184 New clone option

4. A volume is created as a target volume for this clone in the same pool as the source volume. The FlashCopy mapping is created and started as shown in Figure 10-185. You can check the FlashCopy progress in the Progress column or in the Running Tasks column.

Figure 10-185 Clone created and started

10.8.4 Creating and starting a backup preset with a single click


To create and start a backup with one click, perform these steps.


Note: The backup preset creates a point-in-time replica of the production data. After the copy completes, the backup view can be refreshed from the production data, with minimal copying of data from the production volume to the backup volume.
Backup preset parameters:
Background Copy rate: 50
Incremental: Yes
Delete after completion: No
Cleaning rate: 50
Target pool: primary copy source pool
1. From the SVC Welcome panel, click Copy Services in the left menu and then click the FlashCopy panel. 2. Select the volume that you want to back up. 3. Click New Backup in the Actions menu (Figure 10-186).

Figure 10-186 New backup option

4. A volume is created as a target volume for this backup in the same pool as the source volume. The FlashCopy mapping is created and started. You can check the FlashCopy progress in the Progress column or in the Running Tasks column (Figure 10-187 on page 689).


Figure 10-187 Backup created and started

10.8.5 Creating a FlashCopy Consistency Group


To create a FlashCopy Consistency Group in the SVC GUI, perform these steps: 1. From the SVC Welcome panel, click Copy Services and then click Consistency Groups. The Consistency Groups panel opens (Figure 10-188 on page 690).


Figure 10-188 Consistency Group panel

2. Click Create a Consistency Group (Figure 10-189).

Figure 10-189 Create a FlashCopy Consistency Group

3. Enter the desired FlashCopy Consistency Group name and click Create (Figure 10-190).

Figure 10-190 New Consistency Group window

Consistency Group name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The name can be between one and 63 characters in length. 4. Figure 10-191 on page 691 shows the result.


Figure 10-191 View Consistency Group
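On the CLI, a FlashCopy Consistency Group is created with svctask mkfcconsistgrp. A minimal sketch with a hypothetical group name:
  svctask mkfcconsistgrp -name FCCG1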

10.8.6 Creating FlashCopy mappings in a Consistency Group


In this section, we create FlashCopy mappings for volumes with their respective targets. The source and target volumes were created prior to this operation. To perform this action, follow these steps: 1. From the SVC Welcome panel, click Copy Services and then click Consistency Groups. The Consistency Groups panel opens as shown in Figure 10-188 on page 690. 2. In the left column, select in which Consistency Group (see Figure 10-192), you want to create the FlashCopy mapping. If you prefer not to create a FlashCopy mapping in a Consistency Group, select Not in a Group in the list.

Figure 10-192 Consistency Group selection


3. If you select a Consistency Group, click New FlashCopy Mapping in the Actions menu (Figure 10-193).

Figure 10-193 New FlashCopy mapping action for a Consistency Group

If you did not select a Consistency Group, click New FlashCopy Mapping (Figure 10-194). Consistency Groups: If no Consistency Group is defined, the mapping is a stand-alone mapping, and it can be prepared and started without affecting other mappings. All mappings in the same Consistency Group must have the same status to maintain the consistency of the group.

Figure 10-194 New FlashCopy Mapping

4. The New FlashCopy Mapping window opens (Figure 10-195). In this window you must create the relationships between the source volumes (the disks that are copied) and the target volumes (the disks that receive the copy). A mapping can be created between any two volumes in a cluster. Important: The source and target volumes must be of equal size.

Figure 10-195 New FlashCopy Mapping

Note: The volumes do not have to be in the same I/O group or storage pool.


5. Select a volume in the Source Volumes column using the drop-down list, then select a volume in the Target Volumes column using the drop-down list and click Add as shown in Figure 10-195 on page 692. Repeat this action to create other relationships. To remove a relationship that has been created, use the remove button.

Important: The source and target volumes must be of equal size. So, for a given source volume, only the targets with the appropriate size are shown. 6. Click Next after all the relationships that you wanted to create are registered (Figure 10-196).

Figure 10-196 New FlashCopy Mapping with relationships created

7. In the next window, you need to select one FlashCopy preset. The GUI provides three presets (Snapshot, Clone, and Backup) to simplify the more common FlashCopy operations (Figure 10-197). The presets and their use cases are described here:
Snapshot: Creates a copy-on-write point-in-time copy of the source volume.
Clone: Creates an exact replica of the source volume on a target volume. The copy can be changed without impacting the original volume.
Backup: Creates a FlashCopy mapping that can be used to recover data or objects if the system experiences data loss. These backups can be copied multiple times from source and target volumes.

Figure 10-197 New FlashCopy Mapping window

Whichever preset you select, you can customize various advanced options. To access these settings, click the Advanced Settings button.


If you prefer not to customize these settings, go directly to step 8.
You can customize the following options, as shown in Figure 10-198:
Background Copy Rate: This determines the priority that is given to the copy process. A faster rate increases the priority of the process, which might affect performance of other operations.
Incremental: This copies only the parts of the source or target volumes that have changed since the last copy. Incremental copies reduce the completion time of the copy operation.
Note: Even if the type of the FlashCopy mapping is incremental, the first copy process copies all of the data from the source to the target volume.
Delete after completion: This automatically deletes a FlashCopy mapping after the background copy is completed. Do not use this option when the background copy rate is set to zero (0).
Cleaning Rate: This minimizes the amount of time that a mapping is in the stopping state. If the mapping has not completed, the target volume is offline while the mapping is stopping.

Figure 10-198 New FlashCopy Mapping Advanced Settings

8. If you did not create these FlashCopy mappings from a Consistency Group (see step 3 on page 692), you will have to confirm your choice by selecting No, do not add the mappings to a Consistency Group (Figure 10-199 on page 695).


Figure 10-199 Add the mappings to a Consistency Group window.

9. Click Finish as shown in Figure 10-198 on page 694. 10. Check the result of this FlashCopy mapping in the Consistency Groups window, as shown in Figure 10-200. For each FlashCopy mapping relationship created, a mapping name is automatically generated starting with fcmapX where X is an available number. If needed, you can rename these mappings; see 10.8.11, Renaming a FlashCopy mapping on page 699.

Figure 10-200 FlashCopy mappings result

Tip: You can invoke FlashCopy from the SVC GUI, but using the SVC GUI might be impractical if you plan to handle a large number of FlashCopy mappings or Consistency Groups periodically, or at varying times. In this case, creating a script by using the CLI might be more convenient.


10.8.7 Show Dependent Mappings


Perform the following steps to show the dependent mappings for a given FlashCopy mapping: 1. From the SVC Welcome panel, click Copy Services in the left menu and then click the FlashCopy, Consistency Groups, or FlashCopy Mappings panel. 2. Select the volume (from the FlashCopy panel only) or the FlashCopy mapping for which you want to show the dependent mappings. 3. Click Show Dependent Mappings in the Actions menu (Figure 10-201). Tip: You can also right-click a FlashCopy mapping and select Show Dependent Mappings from the list.

Figure 10-201 Show Dependent Mappings

In the Dependent Mappings window (Figure 10-202), you can see the dependent mapping for a given volume or a FlashCopy mapping. If you click one of these volumes, you can see its properties. For more information about volume properties, see 10.7.1, Volume information on page 631.

Figure 10-202 Dependent Mappings

4. Click Close to close this window.

10.8.8 Moving a FlashCopy mapping to a Consistency Group


Perform the following steps to move a FlashCopy mapping to a Consistency Group: 1. From the SVC Welcome panel, click Copy Services in the left menu and then click the FlashCopy, Consistency Groups, or FlashCopy Mappings panel. 2. Select the FlashCopy mapping that you want to move to a Consistency Group, or whose Consistency Group you want to change.


3. Click Move to Consistency Group in the Actions menu (Figure 10-203). Tip: You can also right-click a FlashCopy mapping and select Move to Consistency Group from the list.

Figure 10-203 Move to Consistency Group action

4. In the Move a FlashCopy Mapping to a Consistency Group window, select the Consistency Group for this FlashCopy mapping using the drop-down list (Figure 10-204):

Figure 10-204 Move a FlashCopy mapping to a Consistency Group

5. Click Move to a Consistency Group to confirm your changes.
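The same move can be performed on the CLI with svctask chfcmap. A sketch with hypothetical mapping and group names:
  svctask chfcmap -consistgrp FCCG1 fcmap0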

10.8.9 Removing a FlashCopy mapping from a Consistency Group


Perform the following steps to remove a FlashCopy mapping from a Consistency Group: 1. From the SVC Welcome panel, click Copy Services in the left menu and then click the FlashCopy, Consistency Groups, or FlashCopy Mappings panel. 2. Select the FlashCopy mapping that you want to remove from a Consistency Group. 3. Click Remove from Consistency Group in the Actions menu (Figure 10-205 on page 698). Tip: You can also right-click a FlashCopy mapping and select Remove from Consistency Group from the list.


Figure 10-205 Remove from Consistency Group action

In the Remove FlashCopy Mapping from Consistency Group window, click Remove (Figure 10-206).

Figure 10-206 Remove FlashCopy mapping

10.8.10 Modifying a FlashCopy mapping


Perform the following steps to modify a FlashCopy mapping:
1. From the SVC Welcome panel, click Copy Services in the left menu and then click the FlashCopy, Consistency Groups, or FlashCopy Mappings panel.
2. Select the FlashCopy mapping that you want to modify in the table.
3. Click Edit Properties in the Actions menu (Figure 10-207).

Figure 10-207 Edit properties

Tip: You can also right-click a FlashCopy mapping and select Edit Properties from the list.


4. In the Edit Properties window, you can modify the following parameters for a selected FlashCopy mapping, as shown in Figure 10-208:
Background Copy Rate: This determines the priority that is given to the copy process. A faster rate increases the priority of the process, which might affect performance of other operations.
Cleaning Rate: This minimizes the amount of time that a mapping is in the stopping state. If the mapping has not completed, the target volume is offline while the mapping is stopping.

Figure 10-208 Edit FlashCopy Mapping

5. Click Save to confirm your changes.
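The same parameters can also be changed with the chfcmap CLI command. For example, for a hypothetical mapping named fcmap0 (both rates accept values from 0 to 100):
svctask chfcmap -copyrate 80 fcmap0
svctask chfcmap -cleanrate 60 fcmap0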

10.8.11 Renaming a FlashCopy mapping


Perform the following steps to rename a FlashCopy mapping:
1. From the SVC Welcome panel, click Copy Services and then click Consistency Groups or FlashCopy Mappings.
2. Select the FlashCopy mapping that you want to rename in the table.
3. Click Rename in the Actions menu (Figure 10-209).
Tip: You can also right-click a FlashCopy mapping and select Rename from the list.

Figure 10-209 Rename Action

4. In the Rename Mapping window, type the new name that you want to assign to the FlashCopy mapping and click Rename (Figure 10-210 on page 700).


Figure 10-210 Renaming a FlashCopy mapping

FlashCopy name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The mapping name can be between one and 63 characters in length.

10.8.12 Renaming a Consistency Group


To rename a Consistency Group, perform the following steps: 1. From the SVC Welcome panel, click Copy Services menu and then click Consistency Group. 2. Select the Consistency Group that you want to rename from the left panel. Then click its name to rename it (Figure 10-211).

Figure 10-211 Renaming a Consistency Group


3. Type the new name that you want to assign to the Consistency Group and press Enter (Figure 10-212).

Figure 10-212 Changing the name for a Consistency Group

Consistency Group name: The name can consist of the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the underscore (_) character. The name can be between one and 15 characters in length. However, the name cannot start with a number, the dash or the underscore.

4. From the Consistency Group panel, the new Consistency Group name is displayed.

10.8.13 Deleting a FlashCopy mapping


Perform the following steps to delete a FlashCopy mapping:
1. From the SVC Welcome panel, click Copy Services and then click the FlashCopy, Consistency Groups, or the FlashCopy Mappings panel.
2. Select the FlashCopy mapping that you want to delete in the table.
Note: To select multiple FlashCopy mappings, hold down the Ctrl key and use the mouse to select the entries (this is only available in the Consistency Groups and FlashCopy Mappings panels).
3. Click Delete Mapping in the Actions menu (Figure 10-213).
Tip: You can also right-click a FlashCopy mapping and select Delete Mapping from the list.

Figure 10-213 Delete Mapping action

4. The Delete Mapping window opens as shown in Figure 10-214 on page 702. In the field Verify the number of FlashCopy mappings you are deleting, enter a value that matches the number of FlashCopy mappings that you want to delete. This verification helps to prevent deleting the wrong mappings. If you still have target volumes that are inconsistent with the source volumes and you definitely want to delete these FlashCopy mappings, select the Delete the FlashCopy mapping even when the data on the target volume is inconsistent with the source volume option. Click Delete to complete the operation (Figure 10-214).

Figure 10-214 Delete FlashCopy Mapping
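On the CLI, the equivalent command is rmfcmap. For example, for a hypothetical mapping named fcmap0:
svctask rmfcmap fcmap0
Add the -force parameter only if you accept that the target volume data might be inconsistent with the source, which corresponds to the check box shown in Figure 10-214.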

10.8.14 Deleting a FlashCopy Consistency Group


Important: Deleting a Consistency Group does not delete the FlashCopy mappings.
Perform the following steps to delete a FlashCopy Consistency Group:
1. From the SVC Welcome panel, click Copy Services and then click the Consistency Groups panel.
2. Select the FlashCopy Consistency Group that you want to delete in the left column.
3. Click Delete in the Actions menu (Figure 10-215 on page 703).


Figure 10-215 Delete action

4. The Warning window opens (Figure 10-216). Click OK to complete the operation.

Figure 10-216 Warning window

10.8.15 Starting FlashCopy mappings


When the FlashCopy mapping is created, the copy process can be started. Only mappings that are not a member of a Consistency Group, or the only mapping in a Consistency Group, can be started individually.
1. From the SVC Welcome panel, click Copy Services and then click the FlashCopy or the FlashCopy Mappings panel.
2. Select the FlashCopy mapping that you want to start in the table.
3. Click Start in the Actions menu (Figure 10-217 on page 704) to start the FlashCopy mapping.
Tip: You can also right-click a FlashCopy mapping and select Start from the list.


Figure 10-217 Start action

4. You can check the FlashCopy progress in the Progress column of the table or in the Running Tasks section (Figure 10-218).

Figure 10-218 Checking FlashCopy progress

5. After the task is completed, the FlashCopy status is in a Copied state (Figure 10-219).

Figure 10-219 Copied FlashCopy
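The same action can be scripted from the CLI. For example, for a hypothetical mapping named fcmap0, a command similar to the following prepares (flushes the cache for) and starts the copy in one step:
svctask startfcmap -prep fcmap0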

10.8.16 Starting a FlashCopy Consistency Group


All of the mappings in a Consistency Group will be brought to the same state.


To start the FlashCopy Consistency Group, perform these steps: 1. From the SVC Welcome window, click Copy Services and then click the Consistency Groups panel. 2. From the left panel, select the Consistency Group that you want to start (Figure 10-220).

Figure 10-220 FlashCopy Consistency Groups window

3. Click Start in the Actions menu (Figure 10-221) to start the FlashCopy Consistency Group.

Figure 10-221 Start action

4. You can check the FlashCopy Consistency Group progress in the Progress column or in the Running Tasks section (Figure 10-222 on page 706).


Figure 10-222 Checking FlashCopy Consistency Group progress

5. After the task is completed, the FlashCopy status is in a Copied state (Figure 10-223).

Figure 10-223 Copied FlashCopy Consistency Group
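On the CLI, a Consistency Group is started in the same way. For example, for a hypothetical group named FCCG_APP:
svctask startfcconsistgrp -prep FCCG_APP
The -prep parameter prepares all mappings in the group so that they are triggered at the same point in time.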

10.8.17 Stopping the FlashCopy Consistency Group


When a FlashCopy Consistency Group is stopped, the target volumes become invalid and are set offline by the SVC. The FlashCopy mapping or Consistency Group must be prepared again or retriggered to bring the target volumes online again.
Important: Only stop a FlashCopy Consistency Group when the data on the target volumes is useless, or if you want to modify the FlashCopy mappings. When a FlashCopy Consistency Group is stopped, the target volumes become invalid and are set offline by the SVC, as shown in Figure 10-226 on page 707.


Perform the following steps to stop a FlashCopy Consistency Group:
1. From the SVC Welcome panel, click Copy Services and then click the FlashCopy, Consistency Groups, or the FlashCopy Mappings panel.
2. Select the FlashCopy Consistency Group that you want to stop in the table.
3. Click Stop in the Actions menu (Figure 10-224) to stop the FlashCopy Consistency Group.

Figure 10-224 Stop action

4. Notice that the FlashCopy Consistency Group status has changed to Stopped (Figure 10-225).

Figure 10-225 FlashCopy Consistency Group status

5. The target volumes are now shown as Offline in the Volumes menu (Figure 10-226).

Figure 10-226 Targeted volume is offline


10.8.18 Stopping the FlashCopy mapping


When a FlashCopy is stopped, the target volumes become invalid and are set offline by the SVC. The FlashCopy mapping must be retriggered to bring the target volumes online again. Important: Only stop a FlashCopy mapping when the data on the target volume is useless, or if you want to modify the FlashCopy mapping. When a FlashCopy mapping is stopped, the target volume becomes invalid and is set offline by the SVC.

Perform the following steps to stop a FlashCopy mapping:
1. From the SVC Welcome panel, click Copy Services and then click the FlashCopy, Consistency Groups, or the FlashCopy Mappings panel.
2. Select the FlashCopy mapping that you want to stop in the table.
3. Click Stop in the Actions menu (Figure 10-227) to stop the FlashCopy mapping.

Figure 10-227 Stopping the FlashCopy Consistency Group

4. Notice that the FlashCopy mapping status has now changed to Stopped (Figure 10-228 on page 709).


Figure 10-228 FlashCopy Consistency Group status
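The corresponding CLI commands are stopfcmap and stopfcconsistgrp. For example, for a hypothetical mapping fcmap0 and a hypothetical group FCCG_APP:
svctask stopfcmap fcmap0
svctask stopfcconsistgrp FCCG_APP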

10.8.19 Migrating between a fully allocated volume and a Space-Efficient volume


If you want to migrate from a fully allocated volume to a Space-Efficient volume, follow the same procedure as described in 10.8.1, Creating a FlashCopy Mapping on page 674. However, make sure that you either select a Space-Efficient volume that has already been created as your target volume, or create one. You can use the same method to migrate from a Space-Efficient volume to a fully allocated volume: in that case, create a FlashCopy mapping with the Space-Efficient volume as the source and the fully allocated volume as the target.
Important: The copy process overwrites all of the data on the target volume. You must back up all of the data before you start the copy process.

10.8.20 Reversing and splitting a FlashCopy mapping


You can now perform a reverse FlashCopy mapping without having to remove the original FlashCopy mapping, and without restarting a FlashCopy mapping from the beginning. Figure 10-229 on page 710 shows an example of reverse FlashCopy dependency. You can start a FlashCopy mapping whose target is the source of another FlashCopy mapping.


Figure 10-229 Dependent Mappings

This capability enables you to reverse the direction of a FlashCopy map without having to remove existing maps, and without losing the data from the target as shown in Figure 10-230.

Figure 10-230 Reverse FlashCopy

10.9 Copy Services: managing Remote Copy


It is often easier to work with Metro Mirror or Global Mirror by using the GUI, as long as you have a small number of mappings. When you are using many mappings, use the CLI to execute your commands.
Note: See Chapter 8, Advanced Copy Services on page 363 for more information about the functionality of Copy Services in the SVC environment.
In this section, we describe the tasks that you can perform at a remote copy level. There are two panels to use to visualize and manage your remote copies:
1. The Remote Copy panel, shown in Figure 10-231 on page 711. The Metro Mirror and Global Mirror Copy Services features enable you to set up a relationship between two volumes, so that updates that are made by an application to one volume are mirrored on the other volume. The volumes can be in the same cluster or on two different clusters.


Figure 10-231 Remote Copy panel

2. The Partnerships panel, shown in Figure 10-232 on page 712 Partnerships can be used to create a disaster recovery environment, or to migrate data between clusters that are in different locations. Partnerships define an association between a local cluster and a remote cluster.


Figure 10-232 Partnerships panel

10.9.1 Cluster partnership


You are not limited to a one-to-one cluster partnership. You can have a cluster partnership among multiple SVC clusters, which allows you to create four types of configurations, using a maximum of four connected clusters:
Star configuration, as shown in Figure 10-233.

Figure 10-233 Star configuration

Triangle configuration, as shown in Figure 10-234 on page 713.


Figure 10-234 Triangle configuration

Fully connected configuration, as shown in Figure 10-235.

Figure 10-235 Fully connected configuration

Daisy-chain configuration, as shown in Figure 10-236.

Figure 10-236 Daisy-chain configuration

Important: All SVC clusters must be at level 5.1 or higher.


10.9.2 Creating the SVC partnership between two remote SVC Clusters
We perform this operation to create the partnership on both clusters.
Note: If you are creating an intracluster Metro Mirror, do not perform this next step to create the SVC cluster Metro Mirror partnership. Instead, go to 10.9.3, Creating stand-alone remote copy relationships on page 716.
To create a partnership between the SVC clusters using the GUI, follow these steps:
1. From the SVC Welcome panel, click Copy Services → Partnerships. The Partnerships panel opens as shown in Figure 10-237.

Figure 10-237 Partnerships panel

2. Click the New Partnership button to create a new partnership with another cluster, as shown in Figure 10-238.

Figure 10-238 New partnership button

3. On the New Partnership window (Figure 10-239 on page 715), complete the following elements:
Select an available cluster in the drop-down list. If there is no candidate, you will receive the following error message: This cluster does not have any candidates.
Enter a bandwidth (MBps) that is used by the background copy process between the clusters in the partnership. Set this value so that it is less than or equal to the bandwidth that can be sustained by the communication link between the clusters. The link must be able to sustain any host requests and the rate of background copy.

Figure 10-239 New partnership window

4. Click the Create button to confirm the partnership relation. As shown in Figure 10-240, our partnership is in the Partially Configured state, because we have only performed the work on one side of the partnership so far.

Figure 10-240 Viewing cluster partnerships

To fully configure the cluster partnership, we must perform the same steps on the other SVC cluster (ITSO-CLS2) as we did on this one (ITSO-CLS1). For simplicity and brevity, only the two most significant windows are shown when the partnership is fully configured. 5. Launching the SVC GUI for ITSO-CLS2, we select ITSO-CLS1 for the cluster partnership and specify the available bandwidth for the background copy, again 200 MBps, and then click Create. Now that both sides of the SVC cluster partnership are defined, the resulting windows shown in Figure 10-241 and Figure 10-242 on page 716 confirm that our cluster partnership is now in the Fully Configured state. Figure 10-241 shows Cluster ITSO-CLS1.

Figure 10-241 Cluster ITSO-CLS1 - Fully configured cluster partnership

Figure 10-242 on page 716 shows Cluster ITSO-CLS2.


Figure 10-242 Cluster ITSO-CLS2 - Fully configured cluster partnership
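The same partnership can also be created from the CLI on each cluster. For example, on ITSO-CLS1, a command similar to the following defines the partnership with ITSO-CLS2 and the 200 MBps background copy bandwidth used in this example:
svctask mkpartnership -bandwidth 200 ITSO-CLS2
Run the matching command on ITSO-CLS2 (pointing back to ITSO-CLS1) so that the partnership becomes fully configured.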

10.9.3 Creating stand-alone remote copy relationships


In this section, we create remote copy mappings for volumes with their respective remote targets. The source and target volumes have been created prior to this operation on both clusters. To perform this action, follow these steps:
1. From the SVC Welcome panel, click Copy Services → Remote Copy.
2. Click New Relationship as shown in Figure 10-243.

Figure 10-243 New relationship action

3. In the New Relationship window, select the type of relationship that you want to create (Figure 10-244 on page 717):
Metro Mirror: This is a type of remote copy that creates a synchronous copy of data from a primary volume to a secondary volume. A secondary volume can either be located on the same cluster or on another cluster.
Global Mirror: It provides a consistent copy of a source volume on a target volume. Data is written to the target volume asynchronously, so that the copy is continuously updated, but the copy might not contain the last few updates in the event that a disaster recovery operation is performed.
Then, click Next.

Figure 10-244 Select the type of relation that you want to create

4. In the next window, select where the auxiliary volumes are located as shown in Figure 10-245:
On this system: this means the volumes are located locally.
On another system: in this case, select the remote system from the drop-down list.

Figure 10-245 Auxiliary volumes location

5. In this window you can create new relationships. Select a volume in the Master drop-down list, then select a volume in the Auxiliary drop-down lists for this master and click Add (Figure 10-246 on page 718). If needed, repeat this action to create other relationships. Important: The Master and Auxiliary must be of equal size. So for a given source volume, only the targets with the appropriate size are returned.


Figure 10-246 Create relationships between master and auxiliary volumes

To remove a relationship that you created, use the button shown in Figure 10-246. After all the relationships that you want to create are registered, click Next.
6. Select whether the volumes are already synchronized, as shown in Figure 10-247, then click Next.

Figure 10-247 Volumes synchronized

7. Finally, on the last window, select if you want to start to copy the data as shown in Figure 10-248 and then click Finish.

Figure 10-248 Synchronize now

The relationships are visible in the Remote Copy panel. If you selected to copy the data, you can see that their status is Inconsistent Copying. You can check the copying progress in the Running tasks as shown in Figure 10-249 on page 719.


Figure 10-249 Remote Copy panel with an inconsistent copying status

After the copy is finished, the relationship status changes to Consistent Synchronized.
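A stand-alone relationship can also be created and started from the CLI. For example, assuming hypothetical volumes MM_Master (on the local cluster) and MM_Aux (on ITSO-CLS2) and a hypothetical relationship name MM_REL1:
svctask mkrcrelationship -master MM_Master -aux MM_Aux -cluster ITSO-CLS2 -name MM_REL1
svctask startrcrelationship MM_REL1
Add the -global parameter to create a Global Mirror relationship instead of Metro Mirror, and -sync if the volumes are already synchronized.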

10.9.4 Creating a Consistency Group


To create a Consistency Group, follow these steps:
1. From the SVC Welcome panel, click Copy Services → Remote Copy.
2. Click New Consistency Group (Figure 10-250).

Figure 10-250 New Consistency Group action


3. Enter a name for the Consistency Group and then click Next (Figure 10-251).
Note: If you do not provide a name, the SVC automatically generates the name rccstgrpX, where X is the ID sequence number that is assigned by the SVC internally. You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The Consistency Group name can be between 1 and 15 characters in length.

Figure 10-251 Enter a Consistency Group name

4. In the next window, select where the auxiliary volumes are located as shown in Figure 10-252:
On this system: this means the volumes are located locally.
On another system: in that case, select the remote system in the drop-down list.
After you make a selection, click Next.

Figure 10-252 Auxiliary volumes location

5. Select whether you want to add relationships to this group, as shown in Figure 10-253. There are two options:
If you answer Yes, click Next to continue the wizard and go to step 6.
If you answer No, click Finish to create an empty Consistency Group that can be used later.

Figure 10-253 Add relationships to this group


6. Select the type of relationship that you want to create (Figure 10-254):
Metro Mirror: This is a type of remote copy that creates a synchronous copy of data from a primary volume to a secondary volume. A secondary volume can either be located on the same cluster or on another cluster.
Global Mirror: This provides a consistent copy of a source volume on a target volume. Data is written to the target volume asynchronously so that the copy is continuously updated, but the copy might not contain the last few updates in the event that a disaster recovery operation is performed.
Click Next.

Figure 10-254 Select the type of relation that you want to create

7. As shown in Figure 10-255, you can optionally select existing relationships to add to the group, then click Next. Note: To select multiple relationships, hold down Ctrl and use your mouse to select the entries you want to include.

Figure 10-255 Select existing relationships to add to the group


8. In this window, you can create new relationships. Select a volume in the Master drop-down list, then select a volume in the Auxiliary drop-down list for this master. Click Add as shown in Figure 10-256. Repeat this action to create other relationships if needed.
Important: The Master and Auxiliary volumes must be of equal size. So for a given source volume, only the targets with the appropriate size are included.
To remove a relationship that you created, use the button shown in Figure 10-256. After all the relationships that you want to create are registered, click Next.

Figure 10-256 Create relationships between Master and Auxiliary volumes

9. Select if the volumes are already synchronized or not, as shown in Figure 10-257, then click Next.

Figure 10-257 Volumes synchronized

10.Finally, on the last window, select if you want to start to copy the data as shown in Figure 10-258 on page 723, and then click Finish.


Figure 10-258 Synchronize now

11.The relationships are visible in the Remote copy panel. If you selected to copy the data, you can see that their status is Inconsistent Copying. You can check the copying progress in the Running tasks as shown in Figure 10-259.

Figure 10-259 Consistency Group created with relationship in copying status

After the copies are completed, the relationships and the Consistency Group change to the Consistent Synchronized status.
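The CLI equivalent uses mkrcconsistgrp and then assigns relationships to the group. For example, with a hypothetical group name MM_CG1 and the hypothetical relationship MM_REL1 used earlier:
svctask mkrcconsistgrp -name MM_CG1 -cluster ITSO-CLS2
svctask chrcrelationship -consistgrp MM_CG1 MM_REL1
svctask startrcconsistgrp MM_CG1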

10.9.5 Renaming a Consistency Group


To rename a Consistency Group, perform the following steps:
1. From the SVC Welcome panel, click Copy Services → Remote Copy.
2. Select the Consistency Group that you want to rename in the left panel. Then click its name to rename it, as shown in Figure 10-260 on page 724.


Figure 10-260 Renaming a Consistency Group

3. Type the new name that you want to assign to the Consistency Group and press Enter (Figure 10-261).

Figure 10-261 Changing the name for a Consistency Group

Consistency Group name: The Consistency Group name can consist of the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the underscore (_) character. The name can be between one and 15 characters in length. However, the name cannot start with a number, the dash, or the underscore.

4. From the Remote Copy panel, the new Consistency Group name is displayed.

10.9.6 Renaming a Remote Copy relationship


Perform the following steps to rename a Remote Copy relationship:
1. From the SVC Welcome panel, click Copy Services → Remote Copy.
2. Select the Remote Copy relationship that you want to rename in the table.
3. Click Rename in the Actions menu (Figure 10-262 on page 725).
Tip: You can also right-click a Remote Copy relationship and select Rename from the list.


Figure 10-262 Rename Action

4. In the Rename relationship window, type the new name that you want to assign to the Remote Copy relationship and click OK (Figure 10-263).

Figure 10-263 Renaming a remote copy relationship

Remote Copy relationship name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The Remote Copy name can be between one and 63 characters in length.

10.9.7 Moving a stand-alone Remote Copy relationship to a Consistency Group


Perform the following steps to move a Remote Copy relationship to a Consistency Group:
1. From the SVC Welcome panel, click Copy Services → Remote Copy.
2. In the left column, select Not in a Group.
3. Select the relationship that you want to move to a Consistency Group.
4. Click Add to Consistency Group in the Actions menu as shown in Figure 10-264 on page 726.
Tip: You can also right-click a Remote Copy relationship and select Add to Consistency Group from the list.


Figure 10-264 Adding to Consistency Group action

5. In the Add Relationship to Consistency Group window, select the Consistency Group for this Remote Copy relationship using the drop-down list (Figure 10-265).

Figure 10-265 Adding a relationship to a Consistency Group

6. Click Add to Consistency Group to confirm your changes.

10.9.8 Removing stand-alone Remote Copy relationship from a Consistency Group


Perform the following steps to remove a Remote Copy relationship from a Consistency Group:
1. From the SVC Welcome panel, click Copy Services → Remote Copy.
2. In the left column, select a Consistency Group.
3. Select the Remote Copy relationship that you want to remove from a Consistency Group.
4. Click Remove from Consistency Group in the Actions menu (Figure 10-266 on page 727).
Tip: You can also right-click a Remote Copy relationship and select Remove from Consistency Group from the list.


Figure 10-266 Remove from Consistency Group action

5. In the Remove Relationship From Consistency Group window, click Remove (Figure 10-267).

Figure 10-267 Remove relationship from Consistency Group

10.9.9 Starting a Remote Copy relationship


When a Remote Copy relationship is created, the Remote Copy process can be started. Only relationships that are not members of a Consistency Group, or the only relationship in a Consistency Group, can be started individually.
Perform the following steps to start a Remote Copy relationship:
1. From the SVC Welcome panel, click Copy Services → Remote Copy.
2. In the left column, select Not in a Group.
3. Select the Remote Copy relationship that you want to start in the table.
4. Click Start in the Actions menu (Figure 10-268 on page 728) to start the Remote Copy process.
Tip: You can also right-click a relationship and select Start from the list.


Figure 10-268 Start action

5. If the relationship was not consistent, the Remote Copy progress can be checked in the Running tasks (Figure 10-269).

Figure 10-269 Checking Remote Copy synchronization progress

6. After the task is completed, the Remote Copy relationship status has a Consistent Synchronized state (Figure 10-270).

Figure 10-270 Consistent synchronized Remote Copy relationship


10.9.10 Starting a Remote Copy Consistency Group


All of the mappings in a Consistency Group will be brought to the same state. To start the Remote Copy Consistency Group, follow these steps:
1. From the SVC Welcome panel, click Copy Services → Remote Copy.
2. In the left side of this panel, select the Consistency Group that you want to start (Figure 10-271).

Figure 10-271 Remote Copy Consistency Groups view

3. Click Start in the Actions menu (Figure 10-272) to start the Remote Copy Consistency Group.

Figure 10-272 Start action

4. You can check the Remote Copy Consistency Group progress in the Running tasks as shown in Figure 10-273 on page 730.


Figure 10-273 Checking Remote Copy Consistency Group progress

5. After the task is completed, the Consistency Group and all its relationship statuses are in a Consistent Synchronized state (Figure 10-274).

Figure 10-274 Consistent synchronized Consistency Group

10.9.11 Switching the copy direction for a Remote Copy relationship


When a Remote Copy relationship is in the Consistent Synchronized state, the copy direction for the relationship can be changed. Only relationships that are not a member of a Consistency Group, or the only relationship in a Consistency Group, can be switched individually. Such relationships can be switched from master to auxiliary or from auxiliary to master, depending on the case.


Important: When the copy direction is switched, it is crucial that no outstanding I/O exists to the volume that transitions from primary to secondary, because all of the I/O will be inhibited to that volume when it becomes the secondary. Therefore, careful planning is required prior to switching the copy direction for a Remote Copy relationship.
Perform the following steps to switch a Remote Copy relationship:
1. From the SVC Welcome panel, click Copy Services → Remote Copy.
2. In the left column, select Not in a Group.
3. Select the Remote Copy relationship that you want to switch in the table.
4. Click Switch in the Actions menu (Figure 10-275) to switch the copy direction.
Tip: You can also right-click a relationship and select Switch from the list.

Figure 10-275 Switch action

5. A Warning window opens (Figure 10-276). A confirmation is needed to switch the Remote Copy relationship direction. As shown in Figure 10-276, the Remote Copy is switched from the master volume to the auxiliary volume. Click OK to confirm your choice.

Figure 10-276 Warning window - WAS_M

6. The copy direction is now switched, as shown in Figure 10-277. The auxiliary volume is now accessible and indicated as the primary volume. There is now a synchronization from the auxiliary to the master volume.


Figure 10-277 Checking Remote Copy synchronization direction
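The switch can also be performed from the CLI. For example, for a hypothetical relationship named MM_REL1:
svctask switchrcrelationship -primary aux MM_REL1
The -primary parameter accepts master or aux; for a Consistency Group, the equivalent command is switchrcconsistgrp.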

10.9.12 Switching the copy direction for a Consistency Group


When a Consistency Group is in the Consistent Synchronized state, the copy direction for this Consistency Group can be changed.
Important: When the copy direction is switched, it is crucial that no outstanding I/O exists to the volumes that transition from primary to secondary, because all of the I/O will be inhibited to those volumes when they become the secondary. Therefore, careful planning is required prior to switching the copy direction for a Consistency Group.
Perform the following steps to switch a Consistency Group:
1. From the SVC Welcome panel, click Copy Services → Remote Copy.
2. In the left column, select the Consistency Group you want to switch.
3. Click Switch in the Actions menu (Figure 10-278) to switch the copy direction.

Tip: You can also right-click a relationship and select Switch from the list.

Figure 10-278 Switch action

4. A Warning window opens (Figure 10-279 on page 733). A confirmation is needed to switch the Consistency Group direction. In the example shown in Figure 10-279 on page 733, the Consistency Group is switched from the master group to the auxiliary group. Click OK to confirm your choice.


Figure 10-279 Warning window for ITSO-CLS2

5. The Remote Copy direction is now switched, as shown in Figure 10-280. The auxiliary volume is now accessible and indicated as primary volume. There is now a synchronization from auxiliary to master volume.

Figure 10-280 Checking Consistency Group synchronization direction

10.9.13 Stopping a Remote Copy relationship


After it is started, the Remote Copy process can be stopped if needed. Only relationships that are not a member of a Consistency Group, or the only relationship in a Consistency Group, can be stopped individually. You can also use this command to enable write access to a consistent secondary volume.
Perform the following steps to stop a Remote Copy relationship:
1. From the SVC Welcome panel, click Copy Services → Remote Copy.
2. In the left column, select Not in a Group.
3. Select the remote copy relationship that you want to stop in the table.
4. Click Stop in the Actions menu (Figure 10-281 on page 734) to stop the Remote Copy process.
Tip: You can also right-click a relationship and select Stop from the list.


Figure 10-281 Stop action

5. The Stop Remote Copy Relationship window opens (Figure 10-282). To allow secondary read/write access, select Allow secondary read/write access, then click Stop Relationship to confirm your choice.

Figure 10-282 Stop Remote Copy Relationship window

6. The new relationship status can be checked as shown in Figure 10-283. The relationship is now stopped.

Figure 10-283 Checking Remote Copy synchronization status
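On the CLI, the same stop, including granting write access to the secondary volume, looks similar to the following for a hypothetical relationship MM_REL1:
svctask stoprcrelationship -access MM_REL1
For a Consistency Group, use stoprcconsistgrp -access with the group name instead.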

10.9.14 Stopping a Consistency Group


After it is started, the Consistency Group can be stopped if necessary. You can also use this command to enable write access to consistent secondary volumes.
Perform the following steps to stop a Consistency Group:
1. From the SVC Welcome panel, click Copy Services → Remote Copy.
2. In the left column, select the Consistency Group you want to stop.
3. Select the Consistency Group that you want to stop in the table.

4. Click Stop in the Actions menu (Figure 10-284) to stop the Remote Copy Consistency Group. Tip: You can also right-click a relationship and select Stop from the list.

Figure 10-284 Stop action

5. The Stop Remote Copy Consistency Group window opens (Figure 10-285). To allow secondary read/write access, select Allow secondary read/write access then click Stop Consistency Group to confirm your choice.

Figure 10-285 Stop Remote Copy Consistency Group window

6. The new relationship status can be checked as shown in Figure 10-286. The relationship is now stopped.

Figure 10-286 Checking Remote Copy synchronization status


10.9.15 Deleting stand-alone Remote Copy relationships


Perform the following steps to delete a stand-alone Remote Copy relationship:
1. From the SVC Welcome panel, click Copy Services → Remote Copy.
2. Select the remote copy relationship that you want to delete in the table.
Note: To select multiple remote copy relationships, hold down Ctrl and use your mouse to select the entries you want.
3. Click Delete Relationship in the Actions menu (Figure 10-287).
Tip: You can also right-click a Remote Copy relationship and select Delete Relationship from the list.

Figure 10-287 Delete Relationship action

4. The Delete Relationship window opens (Figure 10-288 on page 737). In the field Verify the number of relationships you are deleting, enter a value that matches the number of relationships that you want to delete. This verification helps to prevent deleting the wrong relationships. Click Delete to complete the operation (Figure 10-288 on page 737).


Figure 10-288 Delete Remote Copy relationship

10.9.16 Deleting a Consistency Group


Important: Deleting a Consistency Group does not delete its Remote Copy mappings.
Perform the following steps to delete a Consistency Group:
1. From the SVC Welcome panel, click Copy Services → Remote Copy.
2. Select the Consistency Group that you want to delete in the left column.
3. Click Delete in the Actions menu (Figure 10-289).

Figure 10-289 Delete Consistency Group action

4. A Warning window opens as shown in Figure 10-290. Click OK to complete the operation.

Figure 10-290 Confirmation message


10.10 Managing the cluster using the GUI


This section explains the various configuration and administrative tasks that you can perform on the cluster.

10.10.1 System Status information


From the System Status panel, perform the following steps to display the cluster and nodes information:
1. From the SVC Welcome panel, select Home → System Status.
2. The System Status panel (Figure 10-291) opens.

Figure 10-291 System Status panel

By simply moving the mouse over the tower in the left part of the panel, you are able to view the global storage usage as shown in Figure 10-292 on page 739. Using this method, you can monitor the Physical Capacity and the Used Capacity of your cluster.


Figure 10-292 Physical Capacity information

10.10.2 View I/O groups and their associated nodes


The right side of the System Status panel shows an overview of the cluster with I/O groups and their associated nodes. In this dynamic illustration, the node status can be checked by using a color code depending on the status (Figure 10-293).

Figure 10-293 Cluster view with node status

10.10.3 View cluster properties


1. From the System Status panel, to obtain information about the cluster, click the cluster as shown in Figure 10-294 on page 740.


Figure 10-294 General cluster information

2. When you click the Info tab, the following information is displayed:
General information: Name, ID, and Location
Capacity information: Total MDisk Capacity, Space in MDisk Groups, Space Allocated to Volumes, Total Free Space, Total Volume Capacity, Total Volume Copy Capacity, Total Used Capacity, and Total Over Allocation

10.10.4 Renaming an SVC cluster


From the System Status panel, perform the following steps to rename the cluster:
1. Click the cluster name as shown in Figure 10-294.
2. Click Manage.
3. Specify a new name for the cluster as shown in Figure 10-295.

Figure 10-295 Manage tab: Change cluster name


Cluster name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The cluster name can be between one and 63 characters in length.
4. Click Save.
5. A Warning window opens as shown in Figure 10-296. If you are using the iSCSI protocol, changing either name also changes the iSCSI Qualified Name (IQN) of all of the nodes in the cluster and might require reconfiguration of all iSCSI-attached hosts. This is because the IQN for each node is generated using the cluster and node names.

Figure 10-296 Warning window

6. Click OK to confirm that you want to change the cluster name.
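The cluster can also be renamed from the CLI. For example, using a hypothetical new name ITSO_SVC1:
svctask chcluster -name ITSO_SVC1
The same iSCSI IQN consideration applies regardless of whether the GUI or the CLI is used.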

10.10.5 Shutting down a cluster


If all input power to a SAN Volume Controller cluster is removed for more than a few minutes (for example, if the machine room power is shut down for maintenance), it is important that you shut down the cluster before you remove the power. Shutting down the cluster while it is still connected to the main power ensures that the uninterruptible power supply unit batteries will still be fully charged when power is restored.
If you remove the mains power while the cluster is still running, the uninterruptible power supply unit will detect the loss of power and instruct the nodes to shut down. This shutdown can take several minutes to complete, and although the uninterruptible power supply unit has sufficient power to perform the shutdown, you will be unnecessarily draining the uninterruptible power supply unit batteries.
When power is restored, the SVC nodes will start. However, one of the first checks that the SVC nodes make is to ensure that the uninterruptible power supply unit batteries have sufficient power to survive another power failure, thereby enabling the node to perform a clean shutdown. (You do not want the uninterruptible power supply unit to run out of power when the node's shutdown activities have not yet completed.) If the uninterruptible power supply unit batteries are not sufficiently charged, the node will not start. Be aware that it can take up to three hours to charge the batteries sufficiently for a node to start.
Note: When a node shuts down due to loss of power, the node will dump the cache to an internal hard drive so that the cached data can be retrieved when the cluster starts. With 8F2/8G4 nodes, the cache is 8 GB. With CF8 nodes, the cache is 24 GB. So it can take several minutes to dump to the internal drive.


SVC uninterruptible power supply units are designed to survive at least two power failures in a short time before nodes will refuse to start until the batteries have sufficient power (to survive another immediate power failure). If, during your maintenance activities, the uninterruptible power supply unit detected power and a loss of power multiple times (and thus the nodes start and shut down more than one time in a short time frame), you might find that you have unknowingly drained the uninterruptible power supply unit batteries. You will have to wait until they are charged sufficiently before the nodes will start. Important: Before shutting down a cluster, quiesce all I/O operations that are destined for this cluster, because you will lose access to all of the volumes that are provided by this cluster. Failure to do so might result in failed I/O operations being reported to your host operating systems. There is no need to quiesce all I/O operations if you are only shutting down one SVC node. Begin the process of quiescing all I/O to the cluster by stopping the applications on your hosts that are using the volumes that are provided by the cluster. If you are unsure which hosts are using the volumes that are provided by the cluster, follow the procedure explained in 9.5.21, Showing the host to which the volume is mapped on page 479, and repeat this procedure for all volumes. From the System Status panel, perform the following steps to shut down your cluster: 1. Click the cluster name as shown in Figure 10-297.

Figure 10-297 General cluster information

2. Click the Manage tab and then click Shut Down Cluster as shown in Figure 10-298 on page 743.


Figure 10-298 Manage tab: Shut Down Cluster

3. The Confirm Cluster Shutdown window (Figure 10-299) opens. You will receive a message asking you to confirm whether you want to shut down the cluster. Ensure that you have stopped all FlashCopy mappings, Remote Copy relationships, data migration operations, and forced deletions before continuing. Click Yes to begin the shutdown process.
Important: At this point, you will lose administrative contact with your cluster.

Figure 10-299 Shutting down the cluster confirmation window

You have now completed the required tasks to shut down the cluster. At this point you can shut down the uninterruptible power supply units by pressing the power buttons on their front panels. Tip: When you shut down the cluster, it will not automatically start. You must manually start the cluster. If the cluster shuts down because the uninterruptible power supply unit has detected a loss of power, it will automatically restart when the uninterruptible power supply unit detects that the power has been restored (and the batteries have sufficient power to survive another immediate power failure).


Note: To restart the SVC cluster, you must first restart the uninterruptible power supply units by pressing the power buttons on their front panels. After they are on, go to the service panel of one of the nodes within your SVC cluster and press the power on button, releasing it quickly. After it is fully booted (for example, displaying Cluster: on line 1 and the cluster name on line 2 of the SVC front panel), you can start the other nodes in the same way. As soon as all nodes are fully booted and you have reestablished administrative contact using the GUI, your cluster is fully operational again.
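If you prefer to shut down from the CLI, the equivalent of this GUI procedure is a single command, run only after all host I/O to the cluster has been quiesced:
svctask stopcluster
The same considerations about losing administrative contact and manually restarting the cluster apply.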

10.10.6 Upgrading software


From the System Status panel, perform the following steps to upgrade the software of your cluster: 1. Click the cluster name as shown in Figure 10-297 on page 742.

Figure 10-300 General cluster information

2. Click the Manage tab and then click Upgrade Cluster as shown in Figure 10-301.

Figure 10-301 Manage tab: Software update link

3. Follow the instruction provided in 10.15.11, Upgrading software on page 799.


10.11 Managing I/O Groups


In the following sections we illustrate how to manage I/O Groups.

10.11.1 View I/O group properties


From the System Status panel, you can see the I/O group properties. 1. Click an I/O group as shown in Figure 10-302.

Figure 10-302 I/O group information

2. Click the Info tab to obtain the following information:
General information: Name, ID, Numbers of Nodes, Numbers of Hosts, and Numbers of Volumes
Memory information: FlashCopy, Global Mirror and Metro Mirror, Volume Mirroring, and RAID

10.11.2 Modifying I/O group properties


From the System Status panel, perform the following steps to modify the I/O group properties:
1. Click an I/O group as shown in Figure 10-303 on page 746.


Figure 10-303 I/O group information

2. Click the Manage tab.
3. From this tab, as shown in Figure 10-304 on page 747, you can modify:
The I/O Group name
I/O Group name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The I/O Group name can be between one and 63 characters in length.
The amount of memory for the following features:
FlashCopy (default 20 MB - maximum 512 MB)
Global Mirror and Metro Mirror (default 20 MB - maximum 512 MB)
Volume Mirroring (default 20 MB - maximum 512 MB)
RAID (default 40 MB - maximum 512 MB)

Important: For Volume mirroring, Copy Services (FlashCopy, Metro Mirror, and Global Mirror) and RAID operations, memory is traded against memory that is available to the cache. The amount of memory can be decreased or increased. The maximum combined memory size across all features is 552 MB.


Figure 10-304 Modify I/O Group properties window
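The same memory settings can be changed with the chiogrp CLI command. For example, to change the FlashCopy bitmap memory of a hypothetical I/O Group io_grp0 to 40 MB:
svctask chiogrp -feature flash -size 40 io_grp0
The -feature parameter also accepts remote (Metro Mirror and Global Mirror), mirror (Volume Mirroring), and raid.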

10.12 Managing nodes


In this section we show how to manage nodes.

10.12.1 View node properties


From this panel, you can obtain detailed information about node properties. 1. Click a node as shown in Figure 10-305.

Figure 10-305 Node information


2. Click the Info tab to obtain the following information:
General information: Name, ID, Status, Hardware, WWNN, I/O Group, and Configuration node
Redundancy information: Failover Partner node
iSCSI information: iSCSI Name (IQN), iSCSI Alias, Failover iSCSI Name, and Failover iSCSI Alias if iSCSI Failover is active
UPS information: Serial Number and Unique ID
Ports information: WWPNs, Status, and Speed

3. Click the VPD tab to display the vital product data (VPD) for this node. Note: The amount of information in the vital product data (VPD) tab is extensive, so we do not describe it in this section. For the list of these elements, refer to Command-Line Interface User's Guide - Version 6.1.0 and search for the lsnodevpd command.

10.12.2 Renaming a node


From the System Status panel, perform the following steps to rename a node: 1. Click a node as shown in Figure 10-306 on page 749.


Figure 10-306 Node information window

2. Click the Manage tab. 3. Specify a new name for the node as shown in Figure 10-307.

Figure 10-307 Manage tab: Change node name

Node name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The node name can be between one and 63 characters in length.
4. Click Save.
5. A Warning window opens as shown in Figure 10-308 on page 750. This is because the iSCSI Qualified Name (IQN) for each node is generated using the cluster and node names. If you are using the iSCSI protocol, changing either name also changes the IQN of all of the nodes in the cluster and might require reconfiguration of all iSCSI-attached hosts.

Figure 10-308 Warning window - changing the node name

6. To confirm that you want to change the node name, click OK.

10.12.3 Adding a node to the cluster


To complete this operation, perform the following steps: 1. Click an empty node position to view the candidate nodes as shown in Figure 10-309.

Figure 10-309 Add node window

Important: Keep in mind that you need to have at least two nodes in an I/O group. Add your available nodes in sequence. 2. Select the node you want to add to your cluster using the drop-down list. Change its name, if needed, and click Add Node as shown in Figure 10-310 on page 751.


Figure 10-310 Add a node to the cluster

3. As shown in Figure 10-311, a window appears to inform you about the time required to add a node to the cluster.

Figure 10-311 Warning message

4. If you want to add it, click OK. Important: When a node is added to a cluster, it displays a state of adding and a yellow color, as shown in Figure 10-293 on page 739. It can take as long as 30 minutes for the node to be added to the cluster, particularly if the software version of the node has changed.

10.12.4 Removing a node from the cluster


From the System Status panel, perform the following steps to remove a node: 1. Click a node as shown in Figure 10-312 on page 752.


Figure 10-312 Node information window

2. Click the Manage tab and then click Remove node as shown in Figure 10-313.

Figure 10-313 Manage tab: Remove node

3. A Warning window opens as shown in Figure 10-314 on page 753. By default, the cache is flushed before the node is deleted to prevent data loss if a failure occurs on the other node in the I/O group. In certain circumstances, such as when the system is already degraded, you can take the specified node offline immediately without flushing the cache or ensuring data loss does not occur, by selecting the Bypass check for volumes that will go offline, and remove the node immediately without flushing its cache check box.


Figure 10-314 Warning window - removing a node

If this node is the last node in the cluster the warning message is different, as shown in Figure 10-315. Before you delete the last node in the cluster, ensure that you want to destroy the cluster. Removing the last node in the cluster destroys the cluster. The user interface and any open CLI sessions are lost.

Figure 10-315 Warning window for the last node

4. If you want to remove it, click OK. This makes the node a candidate to be added back into this cluster or into another cluster.
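Nodes can also be managed from the CLI. For example, with hypothetical node names, panel name, and I/O Group, the following commands rename a node, add a candidate node to an I/O Group, and remove a node:
svctask chnode -name SVC1N2 node2
svctask addnode -panelname 104603 -iogrp io_grp0
svctask rmnode SVC1N2
The names used here are examples only; the same cache-flush and last-node warnings described above apply when a node is removed.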

10.13 Troubleshooting
Events detected by the system are saved in an event log. When an entry is made in this event log, the condition is analyzed and classified to help you diagnose problems.

10.13.1 Recommended Actions panel


The Recommended Actions panel (Figure 10-316 on page 754) displays event conditions that require action, and procedures to diagnose and fix them. To access this panel, perform the following action:
From the Welcome panel that is shown in Figure 10-1 on page 580, select Troubleshooting → Recommended Actions.


Figure 10-316 Recommended Actions panel

The highest-priority event is indicated, along with information about how long ago the event occurred. It is important to note that if an event is reported, you must select the event and run a fix procedure.
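If you manage the cluster from the CLI, the finderr command provides a similar starting point: it analyzes the event log and reports the highest-priority unfixed error, for example:
svctask finderr
Use the GUI fix procedures, as described next, to step through the actual repair.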

Event properties
To retrieve properties and sense about a specific event, perform the following steps: 1. Select an event in the table. 2. Click Properties in the Actions menu (Figure 10-317 on page 755).


Figure 10-317 Event properties action

Tip: You can also obtain access to the Properties action by right-clicking an event. 3. The Properties and Sense Data for Event sequence_number window (where sequence_number is the sequence number of the event that you selected in the previous step) opens, as shown in Figure 10-318 on page 756.


Figure 10-318 Properties and sense data for event window

Tip: From the Properties and Sense Data for Event window, you can use the Previous and Next buttons to navigate between events. 4. Click Close to return to the Recommended Actions panel.

Run Fix Procedure


To run a procedure to fix an event, perform the following steps:
1. Select an event in the table.
Tip: You can also click Run Fix Procedure at the top of the panel (see Figure 10-319) to solve the most critical event.
2. Click Run Fix Procedure in the Actions menu (Figure 10-319 on page 757).


Figure 10-319 Run Fix Procedure Action

Tip: You can also obtain access to the Run Fix Procedure action by right-clicking an event. 3. The Directed Maintenance Procedure window opens as shown in Figure 10-320. You have to follow the wizard and its different steps to fix the event. Note: We do not describe here all the possible steps because the steps involve depend on the event.

Figure 10-320 Directed Maintenance Procedure wizard

4. Click Close to return to the Recommended Actions panel.

10.13.2 Event Log panel


The Event Log panel (Figure 10-321 on page 758) displays two types of events, namely messages and alerts, and it indicates the cause of any log entry.


To access this panel, from the Welcome panel shown in Figure 10-1 on page 580, select Troubleshooting → Event Log.

Figure 10-321 Event Log panel

Certain alerts have a four-digit error code and a fix procedure that helps you fix the problem. Other alerts also require action, but do not have a fix procedure. Messages are fixed when you acknowledge reading them.

Filtering events
You can filter events in different ways. Filtering can be based on event status (see Basic filtering), or over a period of time (see Time filtering on page 759). Certain events require a certain number of occurrences in 25 hours before they are displayed as unfixed. If they do not reach this threshold in 25 hours, they are flagged as expired. Monitoring events are below the coalesce threshold and are usually transient. You can also sort events by time or error code. When you sort by error code, the most serious events (those with the lowest numbers) are displayed first.

Basic filtering
The event log display can be filtered in three ways using the drop-down menu in the upper right corner of the panel (see Figure 10-322 on page 759):
Display all unfixed alerts and messages: Default (events requiring attention)
Display all alerts and messages: Expanded (include fixed events)
Display all events (alerts, messages, monitoring, and expired): Show all (include below-threshold events)


Figure 10-322 Filter even log display

Time filtering
There are two ways to perform time filtering: by selecting a start date and time and an end date and time, or by selecting an event and showing the entries within a certain period of time of this event. In this section we demonstrate both methods.
By selecting a start date and time, and an end date and time
To use this time frame filter, perform the following steps:
Click Filter by Date in the Actions menu (Figure 10-323).

Figure 10-323 Filter by date action

Tip: You can also obtain access to the Filter by Date action by right-clicking an event. The Date/Time Filter window opens (Figure 10-324). From this window, select a start date and time and an end date and time.

Figure 10-324 Date/Time Filter window

Click Filter and Close. Your panel is now filtered based on the time frame. To disable this time frame filter, click Reset Date Filter in the Actions menu (Figure 10-325 on page 760).


Figure 10-325 Reset Date Filter action

Select an event and show the entries within a certain period of time of this event
To use this time frame filter, perform the following steps:
a. Select an event in the table.
b. In the Actions menu, click Show entries within... and select minutes, hours, or days, and finally select a value (Figure 10-326).

Figure 10-326 Show entries within... action

Tip: You can also access the Show entries within... action by right-clicking an event.
c. Your window is now filtered based on the time frame (Figure 10-327).

Figure 10-327 Time frame filtering


To disable this time frame filter, click Reset Date Filter in the Actions menu (Figure 10-328).

Figure 10-328 Reset Date Filter action

Event properties
To retrieve properties and sense about a specific event, perform the following steps: 1. Select an event in the table. 2. Click Properties in the Actions menu (Figure 10-329).

Figure 10-329 Event properties action

Tip: You can also access the Properties action by right-clicking an event.

3. The Properties and Sense Data for Event sequence_number window (where sequence_number is the sequence number of the event that you selected in the previous step) opens, as shown in Figure 10-318 on page 756.


Figure 10-330 Properties and sense data for event window

Tip: From the Properties and Sense Data for Event window, you can use the Previous and Next buttons to navigate between events. 4. Click Close to return to the Event log.

Mark an event as fixed


To mark one or more events as fixed, perform the following steps:
1. Select one or more entries in the table.
   Tip: To select multiple events, hold down the Ctrl key and use the mouse to select the entries that you want to include.
2. Click Mark as fixed in the Actions menu (Figure 10-331).

Figure 10-331 Mark as fixed action


Tip: You can also access the Mark as fixed action by right-clicking an event. 3. The Warning window opens (Figure 10-332).

Figure 10-332 Warning window

4. Click OK to confirm your choice. Note: To be able to see fixed events, you need to filter the event log panel using the Expanded (include fixed events) filter profile or the Show all (include below-threshold events) filter profile.

Mark an event as unfixed


To mark one or more events as unfixed, perform the following steps:
1. Select one or more entries in the table.
   Tip: To select multiple events, hold down the Ctrl key and use the mouse to select the entries that you want to include.
2. Click Mark as unfixed in the Actions menu (Figure 10-333).

Figure 10-333 Mark as unfixed action

Tip: You can also access the Mark as unfixed action by right-clicking an event.

3. The Warning window opens (Figure 10-334 on page 764).


Figure 10-334 Warning message

4. Click OK to confirm your choice.
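Marking events as fixed or unfixed can also be done from the CLI. This is a minimal sketch that assumes the cherrstate command (listed in Table 10-1 as a Service role command) accepts a -sequencenumber parameter and an -unfix flag; verify the syntax with the CLI help on your code level:

# Mark the event with sequence number 100 as fixed
svctask cherrstate -sequencenumber 100

# Mark the same event as unfixed again
svctask cherrstate -sequencenumber 100 -unfix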

10.13.3 Run fix procedure


Note: Several alerts have a four-digit error code and a fix procedure that helps you fix the problem. Those are the steps described here. Other alerts also require action but do not have a fix procedure. Messages are fixed when you acknowledge reading them.
To run a procedure that fixes an alert, perform the following steps:
1. Select an alert with a four-digit error code in the table.
2. Click Run Fix Procedure in the Actions menu (Figure 10-335).

Figure 10-335 Run Fix Procedure action

Tip: You can also access the Run Fix Procedure action by right-clicking an alert.

3. The Directed Maintenance Procedure window opens (Figure 10-336 on page 765). You must follow the wizard and its steps to fix the event. Note: We do not describe all the various steps, because they depend on the alert.


Figure 10-336 Directed Maintenance Procedure wizard

4. Click Close to return to the Event Log window.

Clear log
To clear the logs, perform the following steps: 1. Click Clear Log (Figure 10-337).

Figure 10-337 Clear log button

2. A Warning window opens (Figure 10-338). From this window, you must confirm that you want to delete the logs.

Figure 10-338 Warning window

3. Click OK to confirm your choice.
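The equivalent CLI action is the clearerrlog command, which is listed in Table 10-1 as a Service role command. A minimal sketch, assuming that the -force flag suppresses the confirmation prompt:

# Clear the event log without being prompted for confirmation
svctask clearerrlog -force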


10.13.4 Support panel


From the support panel shown in Figure 10-339, you can download support packages that contain log files and information that can be sent to support personnel to help troubleshoot the system. You can either download individual log files or download statesaves, which are dumps or livedumps of system data.

Figure 10-339 Support panel

Download support packages


To download the support packages, perform the following steps: 1. Click Download Support Packages (Figure 10-340).

Figure 10-340 Download Support Packages

2. A Download Support Packages window opens (Figure 10-341 on page 767). From there, select which kind of logs you want to download:
- Standard logs: These contain the most recent logs that have been collected for the cluster. These logs are the most commonly used by support to diagnose and solve problems.
- Standard logs plus one existing statesave: These contain the standard logs for the cluster and the most recent statesave from any of the nodes in the cluster. Statesaves are also known as dumps or livedumps.
- Standard logs plus most recent statesave from each node: These contain the standard logs for the cluster and the most recent statesave from each node in the cluster. Statesaves are also known as dumps or livedumps.
- Standard logs plus new statesaves: These generate a new statesave (livedump) for all the nodes in the cluster and package them with the most recent logs.


Figure 10-341 Download Support Package window

Note: Depending on your choice, this action can take several minutes to complete.

3. Click Download to confirm your choice (Figure 10-341). 4. Finally, select where you want to save these logs (Figure 10-342).

Figure 10-342 Save the logs file on your workstation
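A support package can also be produced from the CLI. The following sketch is an assumption based on the svc_snap utility and the lsdumps command; the generated snap file name differs on every system, pscp is the PuTTY secure copy client on the workstation, and admin stands for the user that holds your SSH key:

# Generate a new support package (snap) on the configuration node
svc_snap

# List the files in the dumps directory to find the snap file name
svcinfo lsdumps

# From the workstation, copy the snap file off the cluster with pscp
pscp -unsafe admin@<cluster ip address>:/dumps/snap.* .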

Download individual packages


To manually download packages, perform the following tasks: 1. Activate the individual log files view (Figure 10-345 on page 768) by clicking the Show full log listing... link (Figure 10-343).

Figure 10-343 Show full log listing link

2. On the detailed view, select the node from which you want to download logs using the drop-down menu in the upper right corner of the panel (Figure 10-344 on page 768).


Figure 10-344 Node selection

3. Select the package or packages that you want to download (Figure 10-345).

Figure 10-345 Selection of individual packages

Tip: To select multiple packages, hold down the Ctrl key and use the mouse to select the entries you want to include.

4. Click Download in the Actions menu (Figure 10-346).

Figure 10-346 Download packages

Tip: You can also access the Download action by right-clicking a package.
5. Finally, select where you want to save these logs on your workstation.


Tip: You can also delete packages by clicking Delete in the Actions menu.

CIMOM Logging Level


Select this option to include CIMOM tracing components and logging details.
Note: The maximum logging level can have a significant impact on the performance of the CIMOM interface.
To change the CIMOM logging level, use the drop-down menu in the upper right corner of the panel, as shown in Figure 10-347:
- CIMOM Logging Level: Low
- CIMOM Logging Level: Medium
- CIMOM Logging Level: High

Figure 10-347 Change the CIMOM Logging Level

10.14 User Management


Users are managed from within the User Management menu in the SAN Volume Controller GUI, as shown in Figure 10-348 on page 770.


Figure 10-348 User Management menu

Each user account has a name, a role, and password assigned to it, which differs from the Secure Shell (SSH) key-based role approach that is used by the CLI. We describe authentication in detail in 2.8.6, User authentication on page 41. The role-based security feature organizes the SVC administrative functions into groups, which are known as roles, so that permissions to execute the various functions can be granted differently to the separate administrative users. There are four major roles and one special role. Table 10-1 on page 771 lists the user roles.


Table 10-1 Authority roles

Role: Security Admin
Allowed commands: All commands.
User: Superusers.

Role: Administrator
Allowed commands: All commands except the following svctask commands: chauthservice, mkuser, rmuser, chuser, mkusergrp, rmusergrp, chusergrp, and setpwdreset.
User: Administrators that control the SVC.

Role: Copy Operator
Allowed commands: All svcinfo commands and the following svctask commands: prestartfcconsistgrp, startfcconsistgrp, stopfcconsistgrp, chfcconsistgrp, prestartfcmap, startfcmap, stopfcmap, chfcmap, startrcconsistgrp, stoprcconsistgrp, switchrcconsistgrp, chrcconsistgrp, startrcrelationship, stoprcrelationship, switchrcrelationship, chrcrelationship, and chpartnership.
User: For users that control all copy functionality of the cluster.

Role: Service
Allowed commands: All svcinfo commands and the following svctask commands: applysoftware, setlocale, addnode, rmnode, cherrstate, writesernum, detectmdisk, includemdisk, clearerrlog, cleardumps, settimezone, stopcluster, startstats, stopstats, and settime.
User: For users that perform service maintenance and other hardware tasks on the cluster.

Role: Monitor
Allowed commands: All svcinfo commands and the following svctask commands: finderr, dumperrlog, dumpinternallog, and chcurrentuser, and the svcconfig command: backup.
User: For users only needing view access.

The superuser user is a built-in account that has the Security Admin user role permissions. You cannot change permissions or delete this superuser account; you can only change the password. You can also change this password manually on the front panels of the cluster nodes. An audit log keeps track of actions that are issued through the management GUI or the command-line interface. For more information about this topic, see 10.14.9, Audit log information on page 783.


10.14.1 Creating a user


Perform the following steps to create a user: 1. From the SVC Welcome panel, click User Management in the left menu, and then click the All Users panel. 2. Click New User (Figure 10-349).

Figure 10-349 Click New User

3. The New User window opens (Figure 10-350).

Figure 10-350 New User window

Enter a new user name in the Name field.


User name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The user name can be between one and 128 characters in length.

Authentication Mode section


There are two types of authentication available in this section:
- Local: The authentication method is located on the system. Users must be part of a user group, which authorizes them to specific sets of operations. If you select this type of authentication, use the drop-down list to select the user group (Table 10-1 on page 771) that you want the user to be part of.
- Remote: Remote authentication allows users of SAN management applications, such as IBM Tivoli Storage Productivity Center, to authenticate to the cluster using the authentication service provided by the SAN management application. Ensure that the remote authentication service is configured for the SAN management application. To complete this task, you need the following information regarding the remote authentication service:
  - The web address for the remote authentication service.
  - The user name and password for HTTP basic authentication. These credentials are created by and obtained from the administrator of the remote authentication service.

Local credentials section


There are two types of local credentials that can be configured in this section, depending on your needs:
- GUI authentication: The password authenticates users to the management GUI. Enter the password in the Password field.
  Password: The password can be between 6 and 64 characters in length and it cannot begin or end with a space.
- CLI authentication: The SSH key authenticates users to the command-line interface. The SSH public key needs to be uploaded using the Browse... button in the SSH Public Key field.
4. To create the user, click the Create button, as shown in Figure 10-350 on page 772.
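If you prefer the CLI, a local user can be created with the mkuser command. This is a sketch only; the user name, group, and password shown are examples, and the -usergrp value must be one of the groups listed in Table 10-1 or a group that you created:

# Create a local user in the Monitor user group with a password
svctask mkuser -name itso_monitor -usergrp Monitor -password Passw0rd

# Verify the result
svcinfo lsuser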

10.14.2 Modifying user properties


Perform the following steps to change user properties: 1. From the SVC Welcome panel, click User Management in the left menu, and then click the Users panel. 2. In the left column, select a User Group. 3. Select a user. 4. Click Properties in the Actions menu (Figure 10-351 on page 774). Tip: You can also change user properties by right-clicking a user and selecting Properties from the list.


Figure 10-351 Properties Action

5. The User Properties window opens (Figure 10-352).

Figure 10-352 User Properties window

From this window, you can change the authentication mode and local credentials.

Authentication Mode
There are two types of authentication available in this section:
- Local: The authentication method is located on the system. Users must be part of a user group, which authorizes them to specific sets of operations. If you select this type of authentication, use the drop-down list to select the user group (Table 10-1 on page 771) that you want the user to be part of.
- Remote: Remote authentication allows users of SAN management applications, such as IBM Tivoli Storage Productivity Center, to authenticate to the cluster using the authentication service provided by the SAN management application. Ensure that the remote authentication service is configured for the SAN management application.


To complete this task, you need the following information regarding the remote authentication service:
- The web address for the remote authentication service.
- The user name and password for HTTP basic authentication. These credentials are created by and obtained from the administrator of the remote authentication service.

Local Credentials
There are two types of local credentials that can be configured in this section, depending on your needs:
- GUI authentication: The password authenticates users to the management GUI. Enter the password in the Password field.
  Password: The password can be between 6 and 64 characters in length and it cannot begin or end with a space.
- CLI authentication: The SSH key authenticates users to the command-line interface. The SSH public key needs to be uploaded using the Browse... button in the SSH Public Key field.
6. To confirm the changes, click OK (see Figure 10-352 on page 774).
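A hedged CLI sketch for the same kind of change follows, assuming that the chuser command accepts the -usergrp and -password parameters; the user name and values are examples, so verify the syntax with the CLI help on your code level:

# Move the user itso_monitor to the Service user group
svctask chuser -usergrp Service itso_monitor

# Set a new password for the same user
svctask chuser -password NewPassw0rd itso_monitor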

10.14.3 Removing a user password


Note: To be able to remove the password for a given user, the SSH Public Key must be defined. Otherwise, this action is not available. Perform the following steps to remove a user password: 1. From the SVC Welcome panel, click User Management and then click the Users panel. 2. Select the user. 3. Click Remove Password in the Actions menu as shown in Figure 10-353 on page 776. Tip: You can also remove the password by right-clicking a user and selecting Remove Password from the list.


Figure 10-353 Remove Password action

4. The Warning window opens (Figure 10-354). Click OK to complete the operation.

Figure 10-354 Warning window

10.14.4 Removing a user SSH Public Key


Note: To be able to remove the SSH Public Key for a given user, the password must be defined. Otherwise, this action is not available.
Perform the following steps to remove a user SSH Public Key:
1. From the SVC Welcome panel, click User Management and then click the Users panel.
2. Select the user.
3. Click Remove SSH Key in the Actions menu as shown in Figure 10-355 on page 777.
Tip: You can also remove the SSH Public Key by right-clicking a user and selecting Remove SSH Key from the list.


Figure 10-355 Remove Password action

4. The Warning window opens (Figure 10-356). Click OK to complete the operation.

Figure 10-356 Warning window

10.14.5 Deleting a user


Perform the following steps to delete a user: 1. From the SVC Welcome panel, click User Management and then click the Users panel. 2. Select the user. Important: To select multiple users to delete, hold down the Ctrl key and use the mouse to select the entries you want to delete. 3. Click Delete in the Actions menu as shown in Figure 10-357 on page 778. Tip: You can also delete a user by right-clicking the user and selecting Delete from the list.


Figure 10-357 Delete action

4. The Delete User window opens (Figure 10-358). Click Delete to complete the operation.

Figure 10-358 Delete User window

10.14.6 Creating a user group


Five user groups are created by default on the SVC. If needed, you can create additional ones. Perform the following steps to create a user group: 1. From the SVC Welcome panel, click User Management in the left menu and then click the Users panel. 2. Click Global Actions and then select New User Group (Figure 10-359 on page 779).


Figure 10-359 Selecting New User Group

3. The New User Group window opens (Figure 10-360).

Figure 10-360 New User Group window

Enter a name for the group in the Group Name field.
Group name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The group name can be between one and 63 characters in length.

Role section
Select a role from Monitor, Copy Operator, Service, Administrator, or Security Administrator. See Table 10-1 on page 771 for more information about these roles.

Remote authentication section
Select this option if you want to enable remote authentication for the group.


Note: Remote authentication allows users of SAN management applications, such as IBM Tivoli Storage Productivity Center, to authenticate to the cluster using the authentication service provided by the SAN management application. 4. To create the group name, click Create (Figure 10-360 on page 779). 5. You can verify the creation in the Users panel (Figure 10-361).

Figure 10-361 Verify user group creation
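The CLI equivalent is the mkusergrp command. The group name shown is an example; the -role value is one of the roles from Table 10-1:

# Create a user group with the CopyOperator role
svctask mkusergrp -name ITSO_CopyOps -role CopyOperator

# List all user groups to verify the creation
svcinfo lsusergrp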

10.14.7 Modifying user group properties


Note: For preset user groups (SecurityAdmin, Administrator, CopyOperator, Service, and Monitor), you cannot change their respective roles. You can only update the remote authentication section.
Perform the following steps to change user group properties:
1. From the SVC Welcome panel, click User Management in the left menu and then click the Users panel.
2. In the left column, select the User Group.
3. Click Properties in the Actions menu as shown in Figure 10-362 on page 781.


Figure 10-362 Properties action

4. The User Group Properties window opens (Figure 10-363).

Figure 10-363 User group properties window

From this window, you can change the role and remote authentication:
- Role: Select a role from Monitor, Copy Operator, Service, Administrator, or Security Administrator. See Table 10-1 on page 771 for more information about these roles.
- Remote Authentication: Select this option if you want to enable remote authentication for the group.
Note: Remote authentication allows users of SAN management applications, such as IBM Tivoli Storage Productivity Center, to authenticate to the cluster using the authentication service provided by the SAN management application.


5. To confirm the changes, click OK (Figure 10-363 on page 781).

10.14.8 Deleting a user group


Perform the following steps to delete a user group: 1. From the SVC Welcome panel, click User Management in the left menu and then click the Users panel. 2. In the left column, select the User Group. 3. Click Delete in the Actions menu (Figure 10-364). Important: You cannot delete preset user groups SecurityAdmin, Administrator, CopyOperator, Service, or Monitor.

Figure 10-364 Delete action

4. There are two options: If you do not have any users in this group, the Delete User Group window opens as shown in Figure 10-365. Click Delete to complete the operation.

Figure 10-365 Delete user group window

If you have users in this group, the Delete User Group window opens as shown in Figure 10-366 on page 783. The users of this group will be moved to the Monitor user group.


Figure 10-366 Delete User Group window

10.14.9 Audit log information


An audit log keeps track of actions that are issued through the management GUI or the command-line interface. You can use the audit log to monitor user activity on your system.
To view the audit log, from the SVC Welcome panel, click User Management in the left menu and then click the Audit Log panel as shown in Figure 10-367.
The audit log entries provide the following information:
- The time and date when the action or command was issued on the system
- The name of the user who performed the action or command
- The IP address of the system where the action or command was issued
- The parameters that were issued with the command
- The results of the command or action
- The sequence number and the object identifier that is associated with the command or action

Figure 10-367 Audit log entries
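The audit log can also be read from the CLI. A minimal sketch, assuming that the catauditlog command supports the -first parameter to limit the output to the most recent entries:

# Show the five most recent entries of the audit log
svcinfo catauditlog -first 5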


Time filtering
There are two ways to perform time filtering: by selecting a start date and time and an end date and time, or by selecting an entry and showing the entries within a certain period of time around this entry. In this section we demonstrate both methods.

By selecting a start date and time and an end date and time
To use this time frame filter, click Filter by Date in the Actions menu (Figure 10-368).

Figure 10-368 Filter by date action

Tip: You can also access the Filter by Date action by right-clicking an entry.

The Date/Time Filter window opens (Figure 10-369). From this window, select a start date and time and an end date and time.

Figure 10-369 Date/Time Filter window

Click Filter and Close. Your panel is now filtered based on its time frame. To disable this time frame filter, click Reset Date Filter in the Actions menu (Figure 10-370).

Figure 10-370 Reset Date Filter action


By selecting an entry and showing the entries within a certain period of time of this entry
To use this time frame filter, perform the following steps:
a. Select an entry in the table.
b. In the Actions menu, click Show entries within... and select minutes, hours, or days, and finally select a value (Figure 10-371).

Figure 10-371 Show entries within... action

Tip: You can also access the Show entries within... action by right-clicking an entry.
c. Your panel is now filtered based on the time frame (Figure 10-372).

Figure 10-372 Time frame filtering


To disable this time frame filter, click Reset Date Filter in the Actions menu (Figure 10-373).

Figure 10-373 Reset Date Filter action

10.15 Configuration
In this section we describe how to configure different aspects of the SVC.

10.15.1 Configuring Network


With the SVC, you can use both IP ports of each node. There are two active cluster ports on each node. We describe the two active cluster ports on each node in further detail in 2.6.1, Use of IP addresses and Ethernet ports on page 30.

Management IP addresses
In this section, we discuss the modification of management IP addresses. Management IP addresses can be defined for the system. The system supports one to four IP addresses. You can assign these addresses to two Ethernet ports and their backup ports. Multiple ports and IP addresses provide redundancy for the system in the event of connection interruptions. At any point in time, the system has an active management interface.

Ethernet Port 1 must always be configured; the use of Port 2 is optional. Configuring both ports provides redundancy for the Ethernet connections. If you have configured both ports and you cannot connect through one IP address, attempt to access the system through the alternate IP address. Both IPv4 and IPv6 address formats are supported. Ethernet ports can have IPv4 addresses, IPv6 addresses, or both.

Important: If you specify a new cluster IP address, the existing communication with the cluster through the GUI is lost. You need to relaunch the SAN Volume Controller Application from the GUI Welcome panel. You must use the new IP address to reconnect to the management GUI. When you reconnect, accept the new site certificate.

Modifying the IP address of the cluster, although quite simple, requires reconfiguration of other items within the SVC environment, including adding the cluster again, with its new IP address, to the central administration GUI.

Perform the following steps to modify the cluster IP addresses of our SVC configuration:
1. From the SVC Welcome panel, select Configuration → Network.
2. In the left column, select Management IP Addresses.
3. The Management IP Addresses window opens (Figure 10-374 on page 787).

Figure 10-374 Modify management IP address

4. Click a port to configure the cluster's management IP address. Notice that you can configure both ports on the SVC node (Figure 10-375).

Figure 10-375 Modify management IP addresses

5. Depending on whether you select to configure an IPv4 or IPv6 cluster, there is different information to enter.


For IPv4:
- Type an IPv4 address in the IP Address field.
- Type an IPv4 gateway in the Gateway field.
- Type an IPv4 Subnet Mask.
For IPv6:
- Select the Show IPv6 button.
- Type an IPv6 prefix in the IPv6 Network Prefix field. The Prefix field can have a value of 0 to 127.
- Type an IPv6 address in the IP Address field.
- Type an IPv6 gateway in the Gateway field.
6. After the information is filled in, click OK to confirm the modification (Figure 10-375 on page 787).
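The same change can be made from the CLI. This sketch assumes the chclusterip command with the -port, -clusterip, -gw, and -mask parameters; the addresses are examples only, and the warning about losing the current GUI session applies here as well:

# Set a new IPv4 management address on Ethernet port 1
svctask chclusterip -port 1 -clusterip 10.18.228.81 -gw 10.18.228.1 -mask 255.255.255.0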

10.15.2 Configuring the Service IP addresses


The service IP address is used to access the service assistant tool, which you can use to perform service-related actions on the node. All nodes in the cluster have different service addresses. A node that is operating in service state does not operate as a member of the cluster. Configuring this service IP is important because it will let you access the Service Assistant Tool. In case of an issue with a node, you can view a detailed status and error summary, and manage service actions on it. Perform the following steps to modify the service IP addresses of our SVC configuration: 1. From the SVC Welcome panel, select Configuration and then Network. 2. In the left column, select Service IP Addresses (Figure 10-376).

Figure 10-376 Service IP Addresses window

3. Select one node, then click the port you want to assign a service IP address (Figure 10-377 on page 789).


Figure 10-377 Configure Service IP window

4. Depending on whether you installed an IPv4 or IPv6 cluster, there is different information to enter.
For IPv4:
- Type an IPv4 address in the IP Address field.
- Type an IPv4 gateway in the Gateway field.
- Type an IPv4 Subnet Mask.
For IPv6:
- Select the Show IPv6 button.
- Type an IPv6 prefix in the IPv6 Network Prefix field. The Prefix field can have a value of 0 to 127.
- Type an IPv6 address in the IP Address field.
- Type an IPv6 gateway in the Gateway field.
5. After the information is filled in, click OK to confirm the modification (Figure 10-378).

Figure 10-378 Service IP window

6. Repeat steps 3 and 4 for each node of your cluster.

10.15.3 iSCSI configuration


From the iSCSI panel, you can configure settings for the cluster to attach to iSCSI-attached hosts as shown in Figure 10-379 on page 790.


Figure 10-379 iSCSI Configuration

The following parameters can be updated:

Cluster Name
It is important to set the cluster name correctly because it is part of the iSCSI qualified name (IQN) for the node.
Important: If you change the name of the cluster after iSCSI is configured, iSCSI hosts might need to be reconfigured.
To change the cluster name, click the cluster name and specify the new name.
Cluster name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The cluster name can be between one and 63 characters in length.

iSCSI Ethernet Ports
The iSCSI configuration can be set for each Ethernet port. Perform the following steps to change an iSCSI IP address:
- Click a port and, depending on whether you installed an IPv4 or IPv6 cluster, enter the appropriate information. For IPv4, enter an IP address, a gateway, and a subnet mask. For IPv6, enter an IP prefix, an IP address, and a gateway.
- After the information is filled in, click OK to confirm the modification.
Important: When reconfiguring IP ports, be aware that you must reconnect already configured iSCSI connections if changes are made to the IP addresses of the nodes.
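From the CLI, the node Ethernet ports used for iSCSI are configured with the cfgportip command. This is a sketch under the assumption that the port number is given as the final argument and that -node, -ip, -mask, and -gw are accepted; the addresses are examples:

# Assign an IPv4 iSCSI address to Ethernet port 1 of node 1
svctask cfgportip -node 1 -ip 10.18.229.81 -mask 255.255.255.0 -gw 10.18.229.1 1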


iSCSI Aliases
An iSCSI alias is a user-defined name that identifies the node to the host. To change an iSCSI alias, click the alias and specify a name for it. Each node has a unique iSCSI name associated with two IP addresses. After the host has initiated the iSCSI connection to a target node, this IQN from the target node will be visible in the iSCSI configuration tool on the host.

iSNS and CHAP
You can specify the IP address for the iSCSI Storage Name Service (iSNS). Host systems use the iSNS server to manage iSCSI targets and for iSCSI discovery.
You can also enable CHAP to authenticate the system and iSCSI-attached hosts with the specified shared secret. The CHAP secret is the authentication method that is used to restrict access for other iSCSI hosts that use the same connection. You can set the CHAP for the whole cluster under cluster properties, or for each host definition. The CHAP must be identical on the server and the cluster/host definition. You can create an iSCSI host definition without using a CHAP.

10.15.4 Fibre Channel information


As shown in Figure 10-380, the Fibre Channel panel can be used to display the Fibre Channel connectivity between nodes and other storage systems and hosts that are attached through the Fibre Channel network. Filtering can be done by selecting one of the following fields:
- All nodes, storage systems, and hosts
- Cluster Nodes
- Storage Systems
- Hosts

Figure 10-380 Fibre Channel


10.15.5 Event notifications


SAN Volume Controller can use Simple Network Management Protocol (SNMP) traps, syslog messages, and Call Home email to notify you and the IBM Support Center when significant events are detected. Any combination of these notification methods can be used simultaneously. Notifications are normally sent immediately after an event is raised. However, there are events that can occur because of service actions that are being performed. If a recommended service action is active, these events are notified only if they are still unfixed when the service action completes.

10.15.6 Email notifications


The Call Home feature transmits operational and event-related data to you and IBM through a Simple Mail Transfer Protocol (SMTP) server connection in the form of an event notification email. When configured, this function alerts IBM service personnel about hardware failures and potentially serious configuration or environmental issues. Perform the following steps to configure email event notifications: 1. From the SVC Welcome panel, select Configuration and then Event Notifications. 2. In the left column, select Email. 3. Click Enable Email Event Notification (Figure 10-381).

Figure 10-381 Email Event Notification

4. A wizard appears (Figure 10-382). You must enter contact information to enable IBM Support personnel to contact this person to assist with problem resolution (Contact Name, Email reply Address, Machine Location and Phone numbers). Ensure that all contact information is valid, then click Next.

Figure 10-382 Define Company Contact information


5. On the next page (Figure 10-383), configure at least one email server that is used by your site and optionally enable inventory reporting. Enter a valid IP address and a server port for each server added. Ensure that the email servers are valid. Inventory reports allow IBM service personnel to proactively notify you of any known issues with your system. To activate it, enable the inventory reporting and choose a reporting interval in this window.

Figure 10-383 Configure Email Servers and Inventory Reporting window

6. Next (Figure 10-384), you can configure email addresses to receive notifications. It is advisable to configure an email address belonging to a support user with the error event notification type enabled, to notify IBM service personnel if an error condition occurs on your system. Ensure that all email addresses are valid.

Figure 10-384 Configure Email Addresses window

7. The last window (Figure 10-385 on page 794) displays a summary of your Email Event Notification wizard. Click Finish to complete the setup.


Figure 10-385 Email Event Notification Summary

8. The wizard is now closed. Additional information has been added to the panel as shown on Figure 10-386. You can edit or disable email notification from this window.

Figure 10-386 Configure Email Event Notification window configured
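Call Home email can also be set up from the CLI. The following sketch assumes the chemail, mkemailserver, mkemailuser, and startemail commands with the parameters shown; the addresses and contact details are placeholders, so verify the flags against the CLI help before use:

# Define the contact information that is sent with every notification
svctask chemail -reply admin@example.com -contact "John Doe" -primary 0123456789 -location "ITSO lab, rack 2"

# Define the SMTP server that forwards the notifications
svctask mkemailserver -ip 10.18.228.5 -port 25

# Add a recipient that receives error notifications
svctask mkemailuser -address callhome@example.com -usertype support -error on

# Activate the email notification function
svctask startemail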

10.15.7 SNMP notifications


Simple Network Management Protocol (SNMP) is a standard protocol for managing networks and exchanging messages. The system can send SNMP messages that notify personnel about an event. You can use an SNMP manager to view the SNMP messages that SAN Volume Controller sends. You can configure an SNMP server to receive informational, error, or warning notifications by entering the following information (see Figure 10-387 on page 795):
- IP Address: The address for the SNMP server.


- Server Port: The remote port number for the SNMP server. The remote port number must be a value between 1 and 65535.
- Community: The SNMP community is the name of the group to which devices and management stations that run SNMP belong.
- Event notifications:
  - Select Error if you want the user to receive messages about problems, such as hardware failures, that must be resolved immediately.
    Important: Navigate to Recommended Actions to run fix procedures on these notifications.
  - Select Warning if you want the user to receive messages about problems and unexpected conditions. Investigate the cause immediately to determine any corrective action.
    Important: Navigate to Recommended Actions to run fix procedures on these notifications.
  - Select Info if you want the user to receive messages about expected events. No action is required for these events.

Figure 10-387 SNMP configuration

To remove an SNMP server, click the remove button next to its entry. To add another SNMP server, click the add button.
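A hedged CLI sketch for adding an SNMP server follows, assuming the mksnmpserver command with -ip, -community, -error, -warning, and -info parameters; the address and community are examples:

# Send error and warning notifications to an SNMP manager
svctask mksnmpserver -ip 10.18.228.30 -community public -error on -warning on -info off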

Syslog notifications
The syslog protocol is a standard protocol for forwarding log messages from a sender to a receiver on an IP network. The IP network can be either IPv4 or IPv6. The system can send syslog messages that notify personnel about an event.

You can configure a syslog server to receive log messages from various systems and store them in a central repository by entering the following information (see Figure 10-388):
- IP Address: The address for the syslog server.
- Facility: The facility determines the format for the syslog messages and can be used to determine the source of the message.
- Message Format: The message format depends on the facility. The system can transmit syslog messages in two formats: the concise message format provides standard detail about the event, and the expanded format provides more details about the event.
- Event notifications:
  - Select Error if you want the user to receive messages about problems, such as hardware failures, that must be resolved immediately.
    Important: Navigate to Recommended Actions to run fix procedures on these notifications.
  - Select Warning if you want the user to receive messages about problems and unexpected conditions. Investigate the cause immediately to determine any corrective action.
    Important: Navigate to Recommended Actions to run fix procedures on these notifications.
  - Select Info if you want the user to receive messages about expected events. No action is required for these events.

Figure 10-388 Syslog configuration

To remove a syslog server, click the remove button next to its entry. To add another syslog server, click the add button.

The syslog messages can be sent in either compact message format or expanded message format. Example 10-1 on page 797 shows a compact format syslog message.


Example 10-1 Compact syslog message example

IBM2145 #NotificationType=Error #ErrorID=077001 #ErrorCode=1070 #Description=Node CPU fan failed #ClusterName=SVCCluster1 #Timestamp=Wed Jul 02 08:00:00 2008 BST #ObjectType=Node #ObjectName=Node1 #CopyID=0 #ErrorSequenceNumber=100

Example 10-2 shows an expanded format syslog message.
Example 10-2 Full format syslog message example

IBM2145 #NotificationType=Error #ErrorID=077001 #ErrorCode=1070 #Description=Node CPU fan failed #ClusterName=SVCCluster1 #Timestamp=Wed Jul 02 08:00:00 2008 BST #ObjectType=Node #ObjectName=Node1 #CopyID=0 #ErrorSequenceNumber=100 #ObjectID=2 #NodeID=2 #MachineType=21454F2#SerialNumber=1234567 #SoftwareVersion=5.1.0.0 (build 8.14.0805280000)#FRU=fan 24P1118, system board 24P1234 #AdditionalData(0->63)=00000000210000000000000000000000000000000000000000000000000 00000000000000000000000000000000000000000000000000000000000000000000000#Additional Data(64-127)=000000000000000000000000000000000000000000000000000000000000000000000 00000000000000000000000000000000000000000000000000000000000

10.15.8 Using the Advanced panel


Use the Advanced panel to change time and date settings, work with license options, download configuration settings, download software upgrade packages, and change management GUI preferences.

10.15.9 Date and Time


Perform the following steps to configure time settings: 1. From the SVC Welcome panel, select Configuration and then Advanced. 2. In the left column, select Date and Time (Figure 10-389).

Figure 10-389 Date and Time window


3. From this panel, you can modify:
- The time zone: Select a time zone for your cluster using the drop-down list.
- The date and time: Two options are available:
  - If you are not using a Network Time Protocol (NTP) server, select the Set Date and Time button and then manually enter the date and the time for your cluster, as shown in Figure 10-390. You can also use the Use Browser Setting button to automatically adjust the date and time of your SVC cluster with your local workstation date and time.

Figure 10-390 Set Date and Time window

If you are using a Network Time Protocol (NTP) server, select the Set NTP Server IP Address button and then enter the IP address of the NTP server as shown in Figure 10-391.

Figure 10-391 Set NTP Server IP Address window

4. Finally, click Save to validate your changes.
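Time settings are also available from the CLI. The following sketch assumes the lstimezones and settimezone commands (settimezone is listed in Table 10-1) and an -ntpip parameter on chcluster for NTP; the time zone ID and NTP address are examples only:

# List the available time zones and their IDs
svcinfo lstimezones

# Set the cluster time zone to an ID chosen from the list
svctask settimezone -timezone 520

# Point the cluster at an NTP server instead of setting the time manually
svctask chcluster -ntpip 10.18.228.10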

10.15.10 Licensing
Perform the following steps to configure licensing settings: 1. From the SVC Welcome panel, select Configuration and then Advanced. 2. In the left column, select Licensing (Figure 10-392 on page 799).


Figure 10-392 Licensing window

3. Set the licensing values for the IBM System Storage SAN Volume Controller for the following elements:
- Virtualization Limit: Enter the capacity of the storage that will be virtualized by this cluster.
- FlashCopy Limit: Enter the capacity that is available for FlashCopy mappings.
  Important: The used capacity for FlashCopy mapping is the sum of all of the volumes that are the source volumes of a FlashCopy mapping.
- Global and Metro Mirror Limit: Enter the capacity that is available for Metro Mirror and Global Mirror relationships.
  Important: The used capacity for Global Mirror and Metro Mirror is the sum of the capacities of all of the volumes that are in a Metro Mirror or Global Mirror relationship; both master and auxiliary volumes are counted.
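Licensing values can also be set from the CLI with the chlicense command. A sketch, assuming the -virtualization, -flash, and -remote parameters each take a capacity in terabytes; the values are examples that must match your actual entitlement:

# License 50 TB of virtualization, 20 TB of FlashCopy, and 20 TB of Metro/Global Mirror
svctask chlicense -virtualization 50 -flash 20 -remote 20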

10.15.11 Upgrading software


See 10.16, Upgrading SVC software on page 801, for information about this topic.

10.15.12 Setting GUI Preferences


Perform the following steps to configure GUI preferences:
1. From the SVC Welcome window, select Configuration → Advanced.
2. In the left column, select GUI Preferences (Figure 10-393 on page 800).

Figure 10-393 GUI Preferences window

3. From here you can configure the following elements:
- Refresh GUI Objects: This action causes the GUI to refresh every one of its views. It clears the GUI cache, and the GUI looks up every object again.
  Important: This is a support-only action button.
- Restore Default Browser Preferences: This action deletes all GUI preferences that are stored in the browser and restores the default preferences.
- Table Selection: If selected, this action shows Select/Deselect All in each table in the cluster (Figure 10-394).

Figure 10-394 Select/Deselect All

- Navigation: If selected, this action shows navigation as tabs when not in low graphics mode (Figure 10-395 on page 801).


Figure 10-395 Tabs example

10.16 Upgrading SVC software


In this section we explain the operations to be performed to upgrade your SVC software from 6.1.0.0 to a new version 6.1.0.3. The format for the software upgrade package name ends in four positive integers separated by dots. For example, a software upgrade package might have the name IBM_2145_INSTALL_6.1.0.3.

10.16.1 Precautions before upgrade


Take the following precautions before attempting an upgrade.

Important: Before attempting any SVC code update, read and understand the SVC concurrent compatibility and code cross-reference matrix. Go to the following site and click the link for Latest SAN Volume Controller code:
http://www-1.ibm.com/support/docview.wss?uid=ssg1S1001707

During the upgrade, each node in your SVC cluster will be automatically shut down and restarted by the upgrade process. Because each node in an I/O Group provides an alternate path to volumes, use Subsystem Device Driver (SDD) to make sure that all I/O paths between all hosts and SANs are working. If you have not performed this check, certain hosts might lose connectivity to their volumes and experience I/O errors when the SVC node that is providing that access is shut down during the upgrade process. You can check the I/O paths by using SDD datapath query commands.

Double-check that your uninterruptible power supply unit power configuration is also set up correctly (even if your cluster is running without problems). Specifically, double-check these areas:
- Ensure that your uninterruptible power supply units are all getting their power from an external source, and that they are not daisy-chained. Make sure that each uninterruptible power supply unit is not supplying power to another node's uninterruptible power supply unit.
- Ensure that the power cable, and the serial cable coming from the back of each node, goes back to the same uninterruptible power supply unit. If the cables are crossed and are going back to separate uninterruptible power supply units, then during the upgrade, as one node is shut down, another node might also be mistakenly shut down.


10.16.2 SVC software upgrade test utility


The SVC software upgrade test utility is an SVC software utility that checks for known issues that can cause problems during an SVC software upgrade. It is available from the following location:
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S4000585

You can use the svcupgradetest utility to check for known issues that might cause problems during a SAN Volume Controller software upgrade. The software upgrade test utility can be downloaded in advance of the upgrade process, or it can be downloaded and run directly during the software upgrade, as guided by the upgrade wizard. You can run the utility multiple times on the same cluster to perform a readiness check in preparation for a software upgrade. We strongly advise running this utility a final time immediately prior to applying the SVC upgrade, to ensure that there have not been any new releases of the utility since it was originally downloaded.

The installation and use of this utility are nondisruptive and do not require restarting any SVC nodes, so there is no interruption to host I/O. The utility is only installed on the current configuration node.

System administrators must continue to check whether the version of code that they plan to install is the latest version. You can obtain the latest information at this website:
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1001707#_Latest_SAN_Volume_Controller%20Code

This utility is intended to supplement rather than duplicate the existing tests that are carried out by the SVC upgrade procedure (for example, checking for unfixed errors in the error log).

10.16.3 Upgrade procedure


To upgrade the SVC cluster software, perform the following steps:
1. With a supported web browser, go to the following address, using your cluster IP address; the SVC GUI login window then displays, as shown in Figure 10-396 on page 803.
http://<your cluster ip address>/service/


Figure 10-396 SVC GUI login window

2. Log in with the superuser user name and password; the SVC management home page will display. From there, go to the Configuration → Advanced menu (Figure 10-397) and click Advanced.

Figure 10-397 Configuration menu

3. In the Advanced menu, click the Upgrade Software item; the window shown in Figure 10-398 on page 804 will display.


Figure 10-398 Upgrade Software

From the window shown in Figure 10-398, you can click the following buttons:
- Check for updates: Use this to check, on the IBM website, whether there is an SVC software version available that is newer than the version you have installed in your SVC. You need an Internet connection to perform this check.
- Launch Upgrade Wizard: Use this to launch the software upgrade process.
4. Click Launch Upgrade Wizard to start the upgrade process; you will be redirected to the window shown in Figure 10-399.

Figure 10-399 Upgrade Package

From the window shown in Figure 10-399 you can download the Upgrade Test Utility from the IBM website, or you can browse and upload the Upgrade Test Utility from the location where you saved it, as shown in Figure 10-400 on page 805.


Figure 10-400 Upload Test Utility

5. When the Upgrade Test Utility has been uploaded, the window shown in Figure 10-401 displays.

Figure 10-401 Upload completed

6. When you click Next (Figure 10-401), the Upgrade Test Utility will be applied. You will be redirected to the window shown in Figure 10-402.

Figure 10-402 Upgrade Test Utility applied


7. Click Close (Figure 10-402 on page 805), and you will be redirected to the window shown in Figure 10-403. From here you can run your Upgrade Test Utility for the level you need.

Figure 10-403 Run Upgrade Test Utility

8. Click Next (Figure 10-403), and you will be redirected to the window shown in Figure 10-404. At this point the Upgrade Test Utility will run. You will see the suggested actions (if any are needed) or simply the window shown in Figure 10-404.

Figure 10-404 Upgrade Test Utility result

9. Click Next (Figure 10-404) to start the SVC software upload procedure, and you will be redirected to the window shown in Figure 10-405.

Figure 10-405 Upgrade Package

From the window shown in Figure 10-405 you can download the SVC software upgrade package directly from the IBM website, or you can browse and upload the software upgrade package from the location where you saved it, as shown in Figure 10-406 on page 807.


Figure 10-406 Upload SVC software upgrade package

Click Open (Figure 10-406), and you will be redirected to the windows shown in Figure 10-407 and Figure 10-408.

Figure 10-407 Uploading SVC software package

Figure 10-408 shows that the SVC package uploading has completed.

Figure 10-408 Uploading SVC software package complete

10. Click Next and you will be redirected to the window shown in Figure 10-409.

Figure 10-409 System ready for upgrade


11. When you click Finish (Figure 10-409 on page 807), the SVC software upgrade will start and you will be redirected to the window shown in Figure 10-410.

Figure 10-410 Upgrading a node

When you click Close (Figure 10-410), the warning message shown in Figure 10-411 will be displayed.

Figure 10-411 Warning message

12. When you click OK (Figure 10-411), the wizard closes and you are redirected to the window shown in Figure 10-412, where you can follow the upgrade progress.

Figure 10-412 Upgrade in progress

After a few minutes the window shown in Figure 10-413 on page 809 will display, showing that the first node has been upgraded.


Figure 10-413 First node is upgraded

Now the process will install the new SVC software version on the remaining node in the cluster. You can check the upgrade status as shown in Figure 10-413.
13. After all nodes have been rebooted, you will have completed the SVC software upgrade task.
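The upgrade can also be driven from the CLI. The sketch below is an assumption: it presumes the package is first copied to the upgrade directory on the configuration node with pscp (the directory path shown is an assumption), that svcupgradetest accepts -v for the target version, and that lssoftwareupgradestatus reports the progress. Verify these details against the release documentation before use:

# Copy the upgrade package to the configuration node
pscp IBM_2145_INSTALL_6.1.0.3 admin@<cluster ip address>:/home/admin/upgrade/

# Run the previously installed upgrade test utility against the target level
svcupgradetest -v 6.1.0.3

# Start the upgrade and then monitor its progress
svctask applysoftware -file IBM_2145_INSTALL_6.1.0.3
svcinfo lssoftwareupgradestatus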

10.17 Service Assistant with the GUI


SVC V6.1 introduces a new method for performing service tasks on the system. In addition to being able to perform service tasks from the front panel, you can now also service a node through an Ethernet connection using either a web browser or the command-line interface. The web browser runs a new service application known as the Service Assistant. Almost all of the functions that were previously possible through the front panel are now available from the Ethernet connection, offering the benefits of an easier-to-use interface that can be used remotely from the cluster.

In this section we describe useful tasks you can perform with the new Service Assistant application using a web browser GUI.

Attention: We do not detail certain actions because those actions must be run under the direction of IBM Support. Do not try to perform actions of this kind without IBM Support direction.

To be able to use the SVC Service Assistant application with the GUI, you must first have a service IP address configured for each node of your cluster. For more information about how to set the SVC service IP address, see 4.4.3, Configuring the Service IP Addresses on page 119.

With a supported web browser, go to the following address and you will reach the Service Assistant login window (Figure 10-414 on page 810).
https://<your service ip address>/service/


Figure 10-414 Service Assistant login page

Log in with your superuser password and you will reach the Service Assistant Home page (Figure 10-415).

Figure 10-415 Service Assistant Home page

From the Service Assistant Home page (Figure 10-415) you can obtain an overview of your SVC cluster and the node status. You can view a detailed status and error summary and manage service actions for the current node. The current node is the node on which service-related actions are performed. The connected node displays the Service Assistant and provides the interface for working with


other nodes on the system. To manage a different node, select the radio button on the left of your node panel name, and the details for the selected node will be shown. Using the pull-down menu in the Service Assistant Home page, you can select which action you want to execute in the selected node (Figure 10-416).

Figure 10-416 Service Assistant Home page - possible actions

As shown in Figure 10-416, for the selected node it is possible to:
- Enter Service State
- Power off
- Restart
- Reload

10.17.1 Placing an SVC node into Service State


To place a node into a Service State, select the node where the action will be performed from the Service Assistant Home Page. From the pull-down menu, select Enter Service State and then click GO (Figure 10-416). A confirmation window displays (Figure 10-417 on page 812). Click OK.


Figure 10-417 Service State confirmation window

At this point the information window displays (Figure 10-418). Wait until the node is available, then click OK.

Figure 10-418 Action completed window

Now you will be returned to the Service Assistant Home Page. You will be able to see the status of the node just entered into Service State (Figure 10-419 on page 813). Also note an event code 690, which means several resources have entered a Service State.


Figure 10-419 Node in service state

Now you can have different choices from the Service Assistant Home Page pull-down menu, as shown in Figure 10-420:
- Hold in Service State
- Power off
- Restart
- Reload

Figure 10-420 Possible actions

10.17.2 Exiting an SVC node from Service State


To exit a node from Service State, select the node where the action will be performed from the Service Assistant Home Page. From the pull-down menu, select Exit Service State then click GO (Figure 10-421 on page 814).


Figure 10-421 Exit Service State action

A confirmation window will display (Figure 10-422). Then click OK.

Figure 10-422 Confirmation window

At this point the information window for your action will display (Figure 10-418 on page 812). Wait until the node is available, then click OK. When the node is available, the window shown in Figure 10-423 displays.

Figure 10-423 Exiting from Service Status

You can see that the node is starting and the event shown in the Error column is simply a regular message. Click Refresh until you can see your node is active and no event is displayed in the Error column. In our example we used the Exit from Service State action from the Service Assistant Home Page, but it is also possible to exit from a Service State using the restart action.
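Service state can also be controlled through the service CLI that runs behind the same service IP address. This is a sketch only, assuming that the sainfo lsservicenodes, satask startservice, and satask stopservice commands exist at your code level; run them while logged on to the service IP with the superuser password:

# Show the service status of the nodes that are visible to this node
sainfo lsservicenodes

# Place the node in service state
satask startservice

# Leave service state again
satask stopservice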


10.17.3 Rebooting an SVC node


To reboot a node, select the node where the action will be performed from the Service Assistant Home Page. From the pull-down menu, select Reboot and then click GO (Figure 10-424).

Figure 10-424 Reboot action

A confirmation window is displayed (Figure 10-425).

Figure 10-425 Confirmation window

On the next confirmation window, wait until the operation completes successfully and then click OK (Figure 10-426 on page 816).


Figure 10-426 Operation completed

From the Service Assistant Home Page, notice that the node that you just rebooted has disappeared (Figure 10-427). This node will still be visible in an Offline State from the GUI or from the SVC command line interface.

Figure 10-427 Only one node remaining

The node that you just rebooted must complete its restart before it becomes visible again. A node reboot normally takes about 14 minutes.

10.17.4 Collect Logs page


With the Service Assistant application you can create and download a package of log and trace files, or download existing log files from the node. The support package, also known as a SNAP file, can be used by support personnel to understand problems on the system. Unless advised otherwise by IBM Support, collect the package that includes the latest statesave. Figure 10-428 shows the Service Assistant page where you can collect logs.

Figure 10-428 Collect Logs page

To create a support package with the latest statesave, select the related option, then click Create and download the package. The page shown in Figure 10-429 on page 817 is displayed.


Figure 10-429 Action completed page

You will be asked where you want to save the support package (Figure 10-430).

Figure 10-430 Save page

10.17.5 Manage Cluster page


On this page you can see cluster configuration data for the current node (Figure 10-431).

Figure 10-431 Manage Cluster page


10.17.6 Recover Cluster


You can recover the entire cluster using the cluster recovery procedure (also known as T3 recovery) if cluster data has been lost from all nodes. The cluster recovery procedure recreates the cluster using saved configuration data. However, it might not be able to restore all volume data. This action cannot be performed on an active node. To recover the cluster, the node must either be a candidate node or in service state.

Before attempting the cluster recovery procedure, investigate the cause of the cluster failure and attempt to resolve those issues using other service procedures.

Attention: We do not detail this procedure because the recover cluster action must be run under the direction and guidance of IBM Support. Do not attempt this action unless ordered to do so by IBM Support. We include this description simply to make you aware that there is a recover cluster process.

The cluster recovery procedure is a two-stage process:
1. Click Prepare for Recovery to search the system for the most recent backup file and quorum drive. If this step is successful, the Recover Cluster panel displays the details of the backup file and quorum drive that were found. Verify that the dates and times for these files are the most recent.
2. If you are satisfied with these files, click Recover to recreate the cluster. If the backup file or quorum drive is not suitable for the recovery, exit the task by selecting a different menu option.

Note: If the connected node and the current node are the same, the connection to the node can be lost.

Figure 10-432 shows the Recover Cluster page.

Figure 10-432 Recover Cluster page

10.17.7 Reinstall software


You can either install a package from the support site or rescue the software from another node that is visible on the fabric. When the node is added to a cluster, the software level on the node is updated to match the level of the cluster software. This action cannot be performed on an active node.


To reinstall the software, the node must either be a candidate node or in service state. During the reinstallation, the node becomes unavailable. If the connected node and the current node are the same, the connection to the Service Assistant might be lost. Figure 10-433 shows the Re-install software page. On this page, clicking Check for software updates redirects you to the IBM website, where you can find any available update for the SVC software, at the following link:
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1001707#_Latest_SAN_Volume_Controller%20Code

Figure 10-433 Re-install Software page

Attention: We do not detail this procedure because the reinstall software action must be run under the direction of IBM Support. Do not attempt this action unless guided by IBM Support.

10.17.8 Upgrade Manually


During a standard upgrade procedure, the cluster upgrades each of the nodes systematically. The standard upgrade procedure is the best practice method for upgrading software on nodes. However, to provide more flexibility in the upgrade process, you can also upgrade each node individually.

When upgrading the software manually, you remove a node from the cluster, upgrade the software on the node, and return the node to the cluster. You repeat this process for the remaining nodes until the last node is removed from the cluster. At this point the remaining nodes switch to running the new software. When the last node is returned to the cluster, it upgrades and runs the new level of software.

This action cannot be performed on an active node. To upgrade software manually, the nodes must either be candidate nodes or in service state. During this procedure, every node must be upgraded to the same software level. You cannot interrupt the upgrade and switch to installing a different software level. During the upgrade, the node becomes unavailable. If the connected node and the current node are the same, the connection to the service assistant might be lost. Figure 10-434 on page 820 shows the Upgrade Manually page.


Figure 10-434 Upgrade Manually page

10.17.9 Modify WWNN


You can verify that the WWNN for the node is consistent. This action cannot be performed on an active node. To modify the WWNN, the node must either be a candidate node or in service state.

Attention: Only change the WWNN if directed to do so in the service procedures.

Figure 10-435 shows the Modify WWNN page.

Figure 10-435 Modify WWNN page

10.17.10 Change Service IP


You can set the service IP address assigned to Ethernet port 1 for the current node. This IP address is used to access the service assistant and the service command line. All nodes in the cluster have different service addresses. If the connected node and the current node are the same, the connection to the service assistant might be lost. To regain access to the service assistant, log in to the service assistant using the new service IP address. Figure 10-436 on page 821 shows the Change Service IP page.


Figure 10-436 Change Service IP page

10.17.11 Configure CLI access


Use this panel if a valid superuser SSH key is not available for either a node that is currently in service state or a candidate node. The SSH key can be used to temporarily gain access to the command-line interface or to use secure copy tools, such as scp. The key is removed when the node is restarted or rejoins a cluster. Figure 10-437 shows the Configure CLI access page.

Figure 10-437 Config CLI access page
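As a simple illustration (not an exact procedure) of how this temporary access can be used, the following commands open the service command line with ssh and copy files from the node with scp. The service IP address (10.18.229.85), the local private key file name (svc_temp_key), and the local target directory are assumptions made only for this sketch; substitute the values that apply to your environment.

# Open the service command-line interface on the node (illustrative address and key file)
ssh -i svc_temp_key superuser@10.18.229.85
# Copy dump files from the node with scp, using the same temporary key
scp -i svc_temp_key superuser@10.18.229.85:/dumps/* ./node_dumps/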

10.17.12 Restart Service


With the Service Assistant application, you can restart any of the following services:
- CIMOM
- Web server
- Easy Tier
- Service Location
Figure 10-438 on page 822 shows the Restart Service page.


Figure 10-438 Restart Service page


Appendix A. Performance data and statistics gathering


In this appendix we provide a brief overview of the performance analysis capabilities of SVC 6.1. We also describe a method you can use to collect and process SVC performance statistics. It is beyond the scope of this book to provide an in-depth understanding of performance statistics or explain how to interpret them. For a more comprehensive look at the performance of the IBM System Storage SAN Volume Controller (SVC), see SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521, which is available at this site: http://www.redbooks.ibm.com/abstracts/sg247521.html?Open


SVC performance overview


Although storage virtualization with SVC provides many administrative benefits, it can also provide a substantial increase in performance for a variety of workloads. The SVC's caching capability and its ability to stripe volumes across multiple disk arrays can provide a significant performance improvement over what can otherwise be achieved when using midrange disk subsystems. To ensure that the desired performance levels of your system are maintained, monitor performance periodically to gain visibility into potential problems that exist or are developing, so that they can be addressed.

Performance considerations
When designing an SVC storage infrastructure or maintaining an existing infrastructure, you need to consider many factors in terms of their potential impact on performance. These factors include dissimilar workloads competing for the same resources, overloaded resources, insufficient available resources, poorly performing resources, and so on. Monitoring performance can both validate that design expectations are met and identify opportunities for improvement.

SVC
The SVC cluster is scalable up to eight nodes, and its performance scales nearly linearly as nodes are added to the cluster, until performance eventually becomes limited by the attached components. Although virtualization with the SVC provides significant flexibility in terms of the components used, it does not diminish the need to design the system around those components so that it can deliver the desired level of performance. Essentially, SVC performance improvements are gained by spreading the workload across a greater number of back-end resources. Eventually, however, the performance of individual resources becomes the limiting factor.

Performance monitoring
This section highlights several performance monitoring techniques.

Collecting performance statistics


By default, performance statistics files are created at five-minute intervals. The statistics files are saved for 16 contiguous intervals before they are overlaid in a rotating log fashion. This provides statistics for the most recent 80-minute period. The SVC supports user-defined sampling intervals from 1 to 60 minutes. You define the sampling interval by using the svctask startstats -interval 2 command; see 9.8.8, Starting statistics collection on page 491 and 9.8.9, Stopping statistics collection on page 492.
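For example, the following commands change the sampling interval to five minutes and then stop statistics collection again. This is only an illustration: the five-minute value is arbitrary, the prompt matches the example cluster used in Example A-1, and the stopstats command is assumed here to be the stop counterpart that is described in 9.8.9.

IBM_2145:ITSO-CLS1:admin>svctask startstats -interval 5
IBM_2145:ITSO-CLS1:admin>svctask stopstats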


Statistics file naming


The files that are generated are written to the /dumps/iostats/ directory. The file names use the following formats:
- Nm_stats_<node_frontpanel_id>_<date>_<time> for MDisk statistics
- Nv_stats_<node_frontpanel_id>_<date>_<time> for volume statistics
- Nn_stats_<node_frontpanel_id>_<date>_<time> for node statistics
- Nd_stats_<node_frontpanel_id>_<date>_<time> for disk drive statistics (not used for the SVC)

The node_frontpanel_id is that of the node on which the statistics were collected. The date is in the form <yymmdd> and the time is in the form <hhmmss>. The following is an example of an MDisk statistics filename:
Nm_stats_110711_101004_155932
Example A-1 shows typical MDisk, volume, node, and disk drive statistics file names.
Example A-1 Filename of per node statistics
IBM_2145:ITSO-CLS1:admin>svcinfo lsiostatsdumps
id iostat_filename
0 Nm_stats_110711_101004_155932
1 Nv_stats_110711_101004_155932
2 Nn_stats_110711_101004_155932
3 Nd_stats_110711_101004_155932
4 Nm_stats_110711_101004_160032
5 Nv_stats_110711_101004_160032
6 Nn_stats_110711_101004_160032
7 Nd_stats_110711_101004_160032
8 Nm_stats_110711_101004_160132
9 Nv_stats_110711_101004_160132
10 Nn_stats_110711_101004_160132
11 Nd_stats_110711_101004_160132
12 Nm_stats_110711_101004_160232
13 Nv_stats_110711_101004_160232
14 Nn_stats_110711_101004_160232
15 Nd_stats_110711_101004_160232

Tip: The performance statistics files can be copied from the SVC nodes to a local drive on your workstation using the pscp.exe (included with PuTTY) from an MS-DOS command line, as shown in this example:
C:\Program Files\PuTTY>pscp -unsafe -load ITSO-CLS1 admin@10.18.229.81:/dumps/iostats/* c:\statfiles
Use the -load parameter to specify the session that is defined in PuTTY. Specify the -unsafe parameter when you use wildcards.

The performance statistics files are in .xml format. They can be manipulated using various tools and techniques.
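For a quick look inside one of these files, you can pretty-print it with a generic XML tool. The following one-line sketch assumes a UNIX or Linux workstation with the xmllint utility installed and the statistics files already copied into a local statfiles directory; the file name is taken from Example A-1.

# Pretty-print the first lines of a volume statistics file to inspect its structure
xmllint --format statfiles/Nv_stats_110711_101004_155932 | head -40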


An example of a tool that you can use to analyze these files is the SVC Performance Monitor (svcmon).

Note: The svcmon tool is not an officially supported tool. It is provided on an as-is basis. You can obtain this tool from the following website:
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS3177

Figure A-1 shows an example of the type of chart that you can produce using the SVC performance statistics.

Figure A-1 Spreadsheet example

Performance data collection and Tivoli Storage Productivity Center for Disk
Although you can obtain performance statistics in standard .xml files, that is not the most practical or user-friendly way to analyze SVC performance statistics. The Tivoli Storage Productivity Center (TPC) for Disk is the official and supported IBM tool used to collect and analyze SVC performance statistics. Tivoli Storage Productivity Center for Disk comes preinstalled on the System Storage Productivity Center Console and can be made available by activating the license.


For more information about using Tivoli Storage Productivity Center to monitor your storage subsystem, see Monitoring Your Storage Subsystems with TotalStorage Productivity Center, SG24-7364, which is available at the following website:
http://www.redbooks.ibm.com/abstracts/sg247364.html?Open

Note: Tivoli Storage Productivity Center for Disk Version 4.2.1 supports the new SVC port quality statistics provided in SVC Versions 4.3 and above. Monitoring these metrics in addition to the performance metrics can help you to maintain a stable SAN environment.


Appendix B. Terminology

In this appendix we define terms commonly used within this book that relate to the SVC and its concepts. To see the complete set of terms that are related to the SVC, refer to the Glossary section of the SVC Information Center, which is available at the following website:
http://publib.boulder.ibm.com/infocenter/svc/ic/index.jsp
In this appendix we define terms commonly used within this book that relate to the SVC and its concepts. To see the complete set of terms that are related to the SVC, refer to the Glossary section of the SVC Information Center. It is available at the following website: http://publib.boulder.ibm.com/infocenter/svc/ic/index.jsp


Commonly encountered terms


This book includes the following SVC-related terminology.

Auto Data Placement Mode


Auto Data Placement Mode is an Easy Tier operating mode in which the host activity on all the volume extents in a pool is measured, a migration plan is created, and then automatic extent migration is performed.

back-end
See front-end and back-end.

channel extender
A channel extender is a device used for long distance communication connecting other SAN fabric components. Generally, channel extenders can involve protocol conversion to asynchronous transfer mode (ATM), Internet Protocol (IP), or another long distance communication protocol.

cluster (SVC)
A cluster is a group of up to eight SVC nodes that presents a single configuration, management, and service interface to the user.

cold extent
A cold extent is a volume's extent that will not get any performance benefit if moved from HDD to SSD. A cold extent also refers to an extent that needs to be migrated onto HDD if it currently resides on SSD.

Consistency Group
A Consistency Group is a group of copy relationships between virtual volumes or data sets that are maintained with the same time reference so that all copies are consistent in time. A Consistency Group can be managed as a single entity.

copied
Copied is a FlashCopy state that indicates that a copy has been triggered after the copy relationship was created. The copied state indicates that the copy process is complete and the target disk has no further dependence on the source disk. The time of the last trigger event is normally displayed with this status.

configuration node
While the cluster is operational, a single node in the cluster is appointed to provide configuration and service functions over the network interface. This node is termed the configuration node. This configuration node manages the information that describes the cluster configuration, and it provides a focal point for configuration commands. If the configuration node fails, another node in the cluster will assume the role.


counterpart SAN
A counterpart SAN is a non-redundant portion of a redundant SAN. A counterpart SAN provides all of the connectivity of the redundant SAN, but without the 100% redundancy. SVC nodes are typically connected to a redundant SAN made up of two counterpart SANs. A counterpart SAN is often called a SAN fabric.

disk tier
It is likely that the MDisks (LUNs) presented to the SVC cluster will have different performance attributes due to the type of disk or RAID array on which they reside. The MDisks might be on 15 K RPM Fibre Channel or SAS disk, Nearline SAS or SATA, or even solid state disks (SSDs). Thus, a storage tier attribute is assigned to each MDisk, the default being generic_hdd. With SVC 6.1, a new disk tier attribute, known as generic_ssd, is available for SSDs.

Directed Maintenance Procedures


The fix procedures, also known as Directed Maintenance Procedures (DMPs), ensure that you fix any outstanding errors in the error log. To do so, from the Home panel, click Troubleshooting → Recommended Actions. Select the error, and then click Run Fix Procedure.

Easy Tier
Easy Tier is a volume performance function within the SVC that provides automatic data placement of a volume's extents in a multitiered storage pool. The pool will normally contain a mix of SSDs and HDDs. Easy Tier measures host I/O activity on the volume's extents and will migrate hot extents onto the SSDs to ensure maximum performance.

evaluation mode
The evaluation mode is an Easy Tier operating mode in which the host activity on all the volume extents in a pool is measured only. No automatic extent migration is performed.

event (error)
An event is an occurrence of significance to a task or system. Events can include completion or failure of an operation, a user action, or the change in state of a process. Prior to SVC V6.1, this was known as an error.

event code
An event code is a value used to identify an event condition to a user. This value might map to one or more event IDs or to values that are presented on the service panel. This value is used to report error conditions to IBM and to provide an entry point into the service guide.

event ID
An event ID is a value that is used to identify a unique error condition detected by the 2145 cluster. An event ID is used internally in the cluster to identify the error.

excluded
The excluded condition is a status condition that describes an MDisk that the 2145 cluster has decided is no longer sufficiently reliable to be managed by the cluster. The user must issue a command to include the MDisk in the cluster-managed storage.


extent
An extent is a fixed-size unit of data that is used to manage the mapping of data between MDisks and volumes. The extent size can range from 16 MB to 8 GB.

FC port logins
FC port logins refers to the number of hosts that can see any one SVC node port. The SVC has a maximum limit per node port of Fibre Channel logins allowed.

front-end and back-end


The SVC takes MDisks to create pools of capacity from which volumes are created and presented to application servers (hosts). The MDisks reside in the controllers at the back end of the SVC (SVC-to-back-end controller zones). The volumes that are presented to the hosts reside at the front end of the SVC (SVC-to-host zones).

field replaceable units


Field replaceable units (FRUs) are individual parts that are replaced in entirety when any one of the unit's components fails. They are held as spares by the service organization.

grain
A grain is the unit of data that is represented by a single bit in a FlashCopy bitmap (64 KB/256 KB) in the SVC. It is also the unit to extend the real size of a thin provisioned volume (32, 64, 128, or 256 KB).

hard disk drive (HDD)


A hard disk drive (HDD) is a rigid disk drive or array of drives, for example Fibre Channel Disk, SAS or SATA. It is defined as disk tier generic_hdd.

host bus adapter (HBA)


A host bus adapter (HBA) is an interface card that connects a host bus such as a Peripheral Component Interconnect (PCI) with the SAN fiber.

host ID
A host ID is a numeric identifier assigned to a group of host FC ports or iSCSI host names for the purposes of LUN mapping. For each host ID, there is a separate mapping of SCSI IDs to volumes. The intent is to have a one-to-one relationship between hosts and host IDs, although this relationship cannot be policed.

host mapping
Host mapping refers to the process of controlling which hosts have access to specific volumes within a cluster (it is equivalent to LUN masking). Prior to SVC V6.1, this was known as VDisk-to-Host mapping.

hot extent
A hot extent is a volume's extent that gets a performance benefit if moved from HDD onto SSD.

internal storage
Internal storage refers to an array of managed disks (MDisks) and drives that are held in enclosures and in nodes that are part of the SVC cluster.


IQN (iSCSI qualified name)


IQN refers to special names that identify both iSCSI initiators and targets. One of the three name formats that iSCSI provides is IQN. The format is iqn.yyyy-mm.{reversed domain name}; for example, the default for an SVC node is: iqn.1986-03.com.ibm:2145.<clustername>.<nodename>
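To make the placeholders concrete, a node named node1 in a cluster named ITSO-CLS1 would, with the default format shown above, have an IQN similar to the following (the cluster and node names are illustrative only):
iqn.1986-03.com.ibm:2145.itso-cls1.node1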

iSNS (Internet storage name service)


iSNS refers to the Internet storage name service (iSNS) protocol that is used by a host system to manage iSCSI targets and automated iSCSI discovery, management, and configuration of iSCSI and FC devices. It has been defined in RFC 4171.

image mode
The image mode is an access mode that establishes a one-to-one mapping of extents in an existing LUN or (image mode) MDisk with the extents in a volume. The last MDisk extent can be partially used if the size of the volume is not an exact multiple of the extent size.

I/O group
Each pair of SVC nodes is known as an input/output (I/O) group. An I/O group has a set of volumes associated with it that are presented to host systems. Each SVC node is associated with exactly one I/O group. The nodes in an I/O group provide a failover, failback function for each other.

ISL hop
An inter-switch link (ISL) is a connection between two switches and is counted as one ISL hop. The number of hops is always counted on the shortest route between two N-ports (device connections). In an SVC environment, the number of ISL hops is counted on the shortest route between the pair of nodes farthest apart. The SVC recommends maximum hops for some fabric paths.

local fabric
The local fabric is composed of SAN components (switches, cables, and so on) that connect the components (nodes, hosts, and switches) of the local cluster together.

local and remote fabric interconnect


The local fabric interconnect and the remote fabric interconnect are the SAN components that are used to connect the local and remote fabrics. Usually depending on the distance between the two fabrics, they can be single-mode optical fibers that are driven by LW gigabit interface converters (GBICs) or SFPs, or more sophisticated components, such as channel extenders or special SFP modules that are used to extend the distance between SAN components.

LU and LUN
LUN is formally defined by the SCSI standards as a logical unit number. It is used as an abbreviation for an entity, which exhibits disk-like behavior, for example, a volume or an MDisk.

Managed disk (MDisk)


An MDisk is a SCSI disk that is presented by a RAID controller and that is managed by the cluster. The MDisk is not visible to host systems on the SAN.

Managed Disk Group (storage pool)


See storage pool.


Master Console (MC)


The Master Console is an SVC term often used to refer to the System Storage Productivity Center server that runs optional software, used to assist in the management of the SVC.

mirrored volume
A mirrored volume is a single virtual volume that has two physical volume copies. The primary physical copy is known within the SVC as copy 0, and the secondary copy is known within the SVC as copy 1.

node
A node is a single processing unit that provides virtualization, cache, and copy services for the cluster. SVC nodes are deployed in pairs called I/O groups. One node in the cluster is designated the configuration node.

oversubscription
The term oversubscription refers to the ratio of the sum of the traffic on the initiator N-port connections to the traffic on the most heavily loaded ISLs, where more than one connection is used between these switches. Oversubscription assumes a symmetrical network, and a specific workload applied equally from all initiators and sent equally to all targets. A symmetrical network means that all the initiators are connected at the same level, and all the controllers are connected at the same level.

preparing phase
Before you start the FlashCopy process, you must prepare a FlashCopy mapping. The preparing phase is used to flush a volume's data from the cache in preparation for the FlashCopy operation.

RAS
RAS stands for reliability, availability, and serviceability.

RAID
RAID stands for a redundant array of independent disks, which is a collection of two or more physical disk drives that present to the host an image of one or more logical disk drives. The most common RAID formats are 0, 1, 5, 6, 10.

RAID 0
RAID 0 is a data striping technique used across an array. It includes no data protection.

RAID 1
RAID 1 is a mirroring technique used on a storage array in which two or more identical copies of data are maintained on separate mirrored disks.

RAID 10
RAID 10 is a combination of a RAID 0 stripe that is mirrored (RAID 1). Thus, two identical copies of striped data exist; there is no parity.

RAID 5
RAID 5 is an array that has a data stripe which includes a single logical parity drive. The parity check data is distributed across all the array's disks.


RAID 6
RAID 6 is a form of RAID that has two logical parity drives per stripe and therefore can continue to process read and write requests to all of the array's virtual disks in the presence of two concurrent disk failures.

redundant SAN
A redundant SAN is a SAN configuration in which there is no single point of failure (SPoF), so no matter what component fails, data traffic will continue. Connectivity between the devices within the SAN is maintained, although possibly with degraded performance, when an error has occurred. A redundant SAN design is normally achieved by splitting the SAN into two independent counterpart SANs (two SAN fabrics), so that if one path of the counterpart SAN is destroyed, the other counterpart SAN path keeps functioning.

remote fabric
The remote fabric is composed of SAN components (switches, cables, and so on) that connect the components (nodes, hosts, and switches) of the remote cluster together. There can be significant distances between the components in the local cluster and those components in the remote cluster.

SAN
SAN stands for storage area network.

SAN Volume Controller (SVC)


The IBM System Storage SAN Volume Controller is a SAN-based appliance designed for attachment to a variety of host computer systems, which carries out block-level virtualization of disk storage.

SCSI
SCSI stands for Small Computer Systems Interface.

Service Location Protocol


The Service Location Protocol (SLP) is an Internet service discovery protocol that allows computers and other devices to find services in a local area network without prior configuration. It has been defined in RFC 2608.

Solid State Disk


Solid State Disk (SSD) is a disk made from solid state memory and thus has no moving parts. Most SSDs use NAND-based flash memory technology. It is defined to the SVC as a disk tier generic_ssd.

storage pool (Managed Disk group)


A storage pool is a collection of storage capacity, made up of MDisks, which provides the pool of storage capacity for a specific set of volumes. A storage pool can contain more than one tier of disk, known as a multitier storage pool, which is a prerequisite of Easy Tier automatic data placement. Prior to SVC V6.1, this was known as a Managed Disk Group (MDG).

System Storage Productivity Center


IBM System Storage Productivity Center (SSPC) is a hardware server on which many software products are preinstalled. The required storage management products are activated or enabled through licenses. The SSPC can be used to manage the SVC and DS8000 products.


thin provisioning (thin provisioned volume)


Thin provisioning refers to the ability to define storage, usually a storage pool or volume, with a logical capacity size that is larger than the actual physical capacity assigned to that pool or volume. Thus, a thin provisioned volume is a volume with a virtual capacity that is different from its real capacity. Prior to SVC V6.1 this was known as space efficient.

volume
A volume is an SVC logical device that appears to host systems attached to the SAN as a SCSI disk. Each volume is associated with exactly one I/O Group. It will have a preferred node within the I/O group. Prior to SVC 6.1, this was known as a VDisk or virtual disk.

volume copy
A volume copy is a physical copy of the data that is stored on a volume. Mirrored volumes have two such copies. Non-mirrored volumes have one copy.


Related publications
The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this book.

IBM Redbooks publications


For information about ordering these publications, see How to get IBM Redbooks publications on page 839. Note that several of the documents referenced here might be available in softcopy only.
- Introduction to Storage Area Networks, SG24-5470
- IBM System Storage: Implementing an IBM SAN, SG24-6116
- DS4000 Best Practices and Performance Tuning Guide, SG24-6363
- IBM System Storage Business Continuity: Part 1 Planning Guide, SG24-6547
- IBM System Storage Business Continuity: Part 2 Solutions Guide, SG24-6548
- Get More Out of Your SAN with IBM Tivoli Storage Manager, SG24-6687
- IBM Tivoli Storage Area Network Manager: A Practical Introduction, SG24-6848
- DS8000 Performance Monitoring and Tuning, SG24-7146
- Monitoring Your Storage Subsystems with TotalStorage Productivity Center, SG24-7364
- Using the SVC for Business Continuity, SG24-7371
- SAN Volume Controller: Best Practices and Performance Guidelines, SG24-7521
- SAN Volume Controller V4.3.0 Advanced Copy Services, SG24-7574
- IBM XIV Storage System: Architecture, Implementation and Usage, SG24-7659
- IBM Tivoli Storage Productivity Center V4.1 Release Guide, SG24-7725
- IBM SAN Volume Controller 4.2.1 Cache Partitioning, REDP-4426

Other publications
These publications are also relevant as further information sources:
- IBM System Storage SAN Volume Controller: Planning Guide, GA32-0551
- IBM System Storage Open Software Family SAN Volume Controller: Planning Guide, GA22-1052
- IBM System Storage SAN Volume Controller: Service Guide, GC26-7901
- IBM System Storage SAN Volume Controller Model 2145-8A4 Hardware Installation Guide, GC27-2219
- IBM System Storage SAN Volume Controller Model 2145-8G4 Hardware Installation Guide, GC27-2220
- IBM System Storage SAN Volume Controller Models 2145-8F2 and 2145-8F4 Hardware Installation Guide, GC27-2221


- IBM SAN Volume Controller Software Installation and Configuration Guide, GC27-2286
- IBM System Storage SAN Volume Controller Command-Line Interface User's Guide, GC27-2287
- IBM System Storage Master Console: Installation and User's Guide, GC30-4090
- Multipath Subsystem Device Driver User's Guide, GC52-1309
- IBM System Storage SAN Volume Controller Model 2145-CF8 Hardware Installation Guide, GC52-1356
- IBM System Storage Productivity Center Software Installation and User's Guide, SC23-8823
- IBM System Storage Productivity Center Introduction and Planning Guide, SC23-8824
- Subsystem Device Driver User's Guide for the IBM TotalStorage Enterprise Storage Server and the IBM System Storage SAN Volume Controller, SC26-7540
- IBM System Storage Open Software Family SAN Volume Controller: Installation Guide, SC26-7541
- IBM System Storage Open Software Family SAN Volume Controller: Service Guide, SC26-7542
- IBM System Storage Open Software Family SAN Volume Controller: Configuration Guide, SC26-7543
- IBM System Storage Open Software Family SAN Volume Controller: Command-Line Interface User's Guide, SC26-7544
- IBM System Storage Open Software Family SAN Volume Controller: CIM Agent Developer's Reference, SC26-7545
- IBM System Storage Open Software Family SAN Volume Controller: Host Attachment Guide, SC26-7563
- Command-Line Interface User's Guide, SC27-2287
- IBM System Storage Productivity Center User's Guide Version 1 Release 4, SC27-2336
- IBM TotalStorage Multipath Subsystem Device Driver User's Guide, SC30-4096
- IBM System Storage SAN Volume Controller V5.1.0 - Host Attachment Guide, SG26-7905
- IBM Tivoli Storage Productivity Center and IBM Tivoli Storage Productivity Center for Replication Installation and Configuration Guide, SC27-2337

Online resources
These websites are also relevant as further information sources:
- IBM TotalStorage home page:
http://www.storage.ibm.com
- SAN Volume Controller supported platform:
http://www-1.ibm.com/servers/storage/support/software/sanvc/index.html
- Download site for Windows Secure Shell (SSH) freeware:
http://www.chiark.greenend.org.uk/~sgtatham/putty


- IBM site to download SSH for AIX:
http://oss.software.ibm.com/developerworks/projects/openssh
- Open source site for SSH for Windows and Mac:
http://www.openssh.com/windows.html
- Cygwin Linux-like environment for Windows:
http://www.cygwin.com
- IBM Tivoli Storage Area Network Manager site:
http://www-306.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageAreaNetworkManager.html
- Microsoft Knowledge Base Article 131658:
http://support.microsoft.com/support/kb/articles/Q131/6/58.asp
- Microsoft Knowledge Base Article 149927:
http://support.microsoft.com/support/kb/articles/Q149/9/27.asp
- Sysinternals home page:
http://www.sysinternals.com
- Subsystem Device Driver download site:
http://www-1.ibm.com/servers/storage/support/software/sdd/index.html
- IBM TotalStorage Virtualization home page:
http://www-1.ibm.com/servers/storage/software/virtualization/index.html
- SVC support page:
http://www-947.ibm.com/systems/support/supportsite.wss/selectproduct?taskind=4&brandind=5000033&familyind=5329743&typeind=0&modelind=0&osind=0&psid=sr&continue.x=1
- SVC online documentation:
http://publib.boulder.ibm.com/infocenter/svcic/v3r1m0/index.jsp
- IBM Redbooks publications about SVC:
http://www.redbooks.ibm.com/cgi-bin/searchsite.cgi?query=SVC

How to get IBM Redbooks publications


You can search for, view, or download IBM Redbooks publications, Redpapers, Webdocs, draft publications and additional materials, as well as order hardcopy IBM Redbooks publications, at this website: ibm.com/redbooks

Help from IBM


IBM Support and downloads ibm.com/support


IBM Global Services ibm.com/services




Back cover

Implementing the IBM System Storage SAN Volume Controller V6.1


Install, use, and troubleshoot the SAN Volume Controller Become familiar with the exciting new GUI Learn how to use the Easy Tier function
This IBM Redbooks publication is a detailed technical guide to the IBM System Storage SAN Volume Controller (SVC) Version 6.1.0. SAN Volume Controller is a virtualization appliance solution which maps virtualized volumes that are visible to hosts and applications to physical volumes on storage devices. Each server within the storage area network (SAN) has its own set of virtual storage addresses that are mapped to physical addresses. If the physical addresses change, the server continues running using the same virtual addresses that it had before. Therefore, volumes or storage can be added or moved while the server is still running. The IBM virtualization technology improves the management of information at the block level in a network, thus enabling applications and servers to share storage devices on a network. This book is intended for readers who need to implement the SVC at a 6.1.0 release level with a minimum of effort.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE


IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

For more information: ibm.com/redbooks


SG24-7933-00 ISBN 0738435228
