Jon Tate, Angelo Bernasconi, Alexandre Chabrol, Peter Crowhurst, Frank Enders, Ian MacQuarrie
ibm.com/redbooks
International Technical Support Organization

Implementing the IBM System Storage SAN Volume Controller V6.1

May 2011
SG24-7933-00
Note: Before using this information and the product it supports, read the information in Notices on page xvii.
First Edition (May 2011)

This edition applies to Version 6, Release 1, Modification 0 of the IBM System Storage SAN Volume Controller.
Copyright International Business Machines Corporation 2011. All rights reserved. Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xviii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
The team who wrote this book . . . . . . . . . . . . . . . . . . . . . . . . . xix
Now you can become a published author, too! . . . . . . . . . . . . . . . . . xxii
Comments welcome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxii
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . xxiii

Chapter 1. Introduction to storage virtualization . . . . . . . . . . . . . . 1
1.1 Storage virtualization terminology . . . . . . . . . . . . . . . . . . . . 2
1.2 User requirements driving storage virtualization . . . . . . . . . . . . . 5
1.2.1 Benefits of using the SVC . . . . . . . . . . . . . . . . . . . . . . . 5
1.3 What is new in SVC V6.1.0 . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Chapter 2. IBM System Storage SAN Volume Controller . . . . . . . . . . . . . 7
2.1 Brief history of the SAN Volume Controller . . . . . . . . . . . . . . . . 8
2.2 SVC architectural overview . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2.1 SAN Volume Controller topology . . . . . . . . . . . . . . . . . . . . . 11
2.3 SVC terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.4 SAN Volume Controller components . . . . . . . . . . . . . . . . . . . . . 13
2.4.1 Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.4.2 I/O Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.4.3 Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.4.4 MDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.4.5 Quorum disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.4.6 Disk tier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.4.7 Storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.4.8 Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.4.9 Easy Tier performance function . . . . . . . . . . . . . . . . . . . . . 19
2.4.10 Hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.4.11 Maximum supported configurations . . . . . . . . . . . . . . . . . . . 20
2.5 Volume overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.5.1 Image mode volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.5.2 Managed mode volumes . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.5.3 Cache mode and cache-disabled volumes . . . . . . . . . . . . . . . . . 23
2.5.4 Mirrored volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.5.5 Thin-provisioned volumes . . . . . . . . . . . . . . . . . . . . . . . . 25
2.5.6 Volume I/O governing . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.6 iSCSI overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.6.1 Use of IP addresses and Ethernet ports . . . . . . . . . . . . . . . . . 30
2.6.2 iSCSI volume discovery . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.6.3 iSCSI authentication . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.6.4 iSCSI multipathing . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.7 Advanced Copy Services overview . . . . . . . . . . . . . . . . . . . . . . 33
2.7.1 Synchronous/Asynchronous remote copy . . . . . . . . . . . . . . . . . . 33
2.7.2 FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.8 SVC cluster overview . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.8.1 Quorum disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.8.2 Split I/O groups or split cluster . . . . . . . . . . . . . . . . . . . 37
2.8.3 Cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.8.4 Cluster management . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.8.5 IBM System Storage Productivity Center . . . . . . . . . . . . . . . . . 39
2.8.6 User authentication . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.8.7 SVC roles and user groups . . . . . . . . . . . . . . . . . . . . . . . 42
2.8.8 SVC local authentication . . . . . . . . . . . . . . . . . . . . . . . . 43
2.8.9 SVC remote authentication and single sign-on . . . . . . . . . . . . . . 44
2.9 SVC hardware overview . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.9.1 Fibre Channel interfaces . . . . . . . . . . . . . . . . . . . . . . . . 48
2.9.2 LAN interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
2.10 Solid-state drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
2.10.1 Storage bottleneck problem . . . . . . . . . . . . . . . . . . . . . . 49
2.10.2 Solid-state drive solution . . . . . . . . . . . . . . . . . . . . . . 50
2.10.3 Solid-state drive market . . . . . . . . . . . . . . . . . . . . . . . 51
2.10.4 Solid-state drives and SVC V6.1 . . . . . . . . . . . . . . . . . . . . 51
2.11 Easy Tier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
2.11.1 Evaluation mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
2.11.2 Automatic data placement mode . . . . . . . . . . . . . . . . . . . . . 52
2.12 What is new with SVC 6.1 . . . . . . . . . . . . . . . . . . . . . . . . . 52
2.12.1 SVC 6.1 supported hardware list, device driver, and firmware levels . . 53
2.12.2 SVC 6.1.0 new features . . . . . . . . . . . . . . . . . . . . . . . . 53
2.13 Useful SVC web links . . . . . . . . . . . . . . . . . . . . . . . . . . . 55

Chapter 3. Planning and configuration . . . . . . . . . . . . . . . . . . . . 57
3.1 General planning rules . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.2 Physical planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
3.2.1 Preparing your uninterruptible power supply unit environment . . . . . . 60
3.2.2 Physical rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
3.2.3 Cable connections . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.3 Logical planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.3.1 Management IP addressing plan . . . . . . . . . . . . . . . . . . . . . 64
3.3.2 SAN zoning and SAN connections . . . . . . . . . . . . . . . . . . . . . 65
3.3.3 iSCSI IP addressing plan . . . . . . . . . . . . . . . . . . . . . . . . 71
3.3.4 Back-end storage subsystem configuration . . . . . . . . . . . . . . . . 74
3.3.5 SVC cluster configuration . . . . . . . . . . . . . . . . . . . . . . . 76
3.3.6 Split-cluster configuration . . . . . . . . . . . . . . . . . . . . . . 77
3.3.7 Storage Pool configuration . . . . . . . . . . . . . . . . . . . . . . . 79
3.3.8 Virtual disk configuration . . . . . . . . . . . . . . . . . . . . . . . 81
3.3.9 Host mapping (LUN masking) . . . . . . . . . . . . . . . . . . . . . . . 83
3.3.10 Advanced Copy Services . . . . . . . . . . . . . . . . . . . . . . . . 84
3.3.11 SAN boot support . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
3.3.12 Data migration from a non-virtualized storage subsystem . . . . . . . . 90
3.3.13 SVC configuration backup procedure . . . . . . . . . . . . . . . . . . 90
3.4 Performance considerations . . . . . . . . . . . . . . . . . . . . . . . . 91
3.4.1 SAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
3.4.2 Disk subsystems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
3.4.3 SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
3.4.4 Performance monitoring . . . . . . . . . . . . . . . . . . . . . . . . . 93
Chapter 4. SAN Volume Controller initial configuration . . . . . . . . . . . . 95
4.1 Managing the cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
4.1.1 TCP/IP requirements for SAN Volume Controller . . . . . . . . . . . . . 96
4.2 System Storage Productivity Center overview . . . . . . . . . . . . . . . . 98
4.2.1 IBM System Storage Productivity Center hardware . . . . . . . . . . . . 100
4.2.2 SVC installation planning information for System Storage Productivity Center . . 100
4.3 Setting up the SVC cluster . . . . . . . . . . . . . . . . . . . . . . . . 101
4.3.1 Introducing the service panels . . . . . . . . . . . . . . . . . . . . . 101
4.3.2 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
4.3.3 Initiating cluster creation from the front panel . . . . . . . . . . . . 105
4.4 Configuring the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
4.4.1 Completing the Create Cluster Wizard . . . . . . . . . . . . . . . . . . 108
4.4.2 Changing the default superuser password . . . . . . . . . . . . . . . . 117
4.4.3 Configuring the Service IP Addresses . . . . . . . . . . . . . . . . . . 119
4.4.4 Postrequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
4.5 Secure Shell overview . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
4.5.1 Generating public and private SSH key pairs using PuTTY . . . . . . . . 123
4.5.2 Uploading the SSH public key to the SVC cluster . . . . . . . . . . . . 125
4.5.3 Configuring the PuTTY session for the CLI . . . . . . . . . . . . . . . 126
4.5.4 Starting the PuTTY CLI session . . . . . . . . . . . . . . . . . . . . . 130
4.5.5 Configuring SSH for AIX clients . . . . . . . . . . . . . . . . . . . . 132
4.6 Using IPv6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
4.6.1 Migrating a cluster from IPv4 to IPv6 . . . . . . . . . . . . . . . . . 133
4.6.2 Migrating a cluster from IPv6 to IPv4 . . . . . . . . . . . . . . . . . 136

Chapter 5. Host configuration . . . . . . . . . . . . . . . . . . . . . . . . 137
5.1 Host attachment overview for IBM System Storage SAN Volume Controller . . 138
5.2 SVC setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
5.2.1 Fibre Channel and SAN setup overview . . . . . . . . . . . . . . . . . . 139
5.2.2 Port mask . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
5.3 iSCSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
5.3.1 Initiators and targets . . . . . . . . . . . . . . . . . . . . . . . . . 144
5.3.2 Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
5.3.3 IQN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
5.3.4 Setting up the host server . . . . . . . . . . . . . . . . . . . . . . . 146
5.3.5 Volume discovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
5.3.6 Authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
5.4 AIX-specific information . . . . . . . . . . . . . . . . . . . . . . . . . 149
5.4.1 Configuring the AIX host . . . . . . . . . . . . . . . . . . . . . . . . 149
5.4.2 Operating system versions and maintenance levels . . . . . . . . . . . . 150
5.4.3 HBAs for IBM System p hosts . . . . . . . . . . . . . . . . . . . . . . 150
5.4.4 Configuring fast fail and dynamic tracking . . . . . . . . . . . . . . . 150
5.4.5 Installing the 2145 host attachment support package . . . . . . . . . . 152
5.4.6 Subsystem Device Driver Path Control Module . . . . . . . . . . . . . . 152
5.4.7 Configuring assigned volume using SDDPCM . . . . . . . . . . . . . . . . 154
5.4.8 Using SDDPCM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
5.4.9 Creating and preparing volumes for use with AIX V6.1 and SDDPCM . . . . 157
5.4.10 Expanding an AIX volume . . . . . . . . . . . . . . . . . . . . . . . . 157
5.4.11 Running SVC commands from an AIX host system . . . . . . . . . . . . . 158
5.5 Windows-specific information . . . . . . . . . . . . . . . . . . . . . . . 159
5.5.1 Configuring Windows Server 2000, 2003, 2008 hosts . . . . . . . . . . . 159
5.5.2 Configuring Windows . . . . . . . . . . . . . . . . . . . . . . . . . . 159
5.5.3 Hardware lists, device driver, HBAs, and firmware levels . . . . . . . . 160
5.5.4 Host adapter installation and configuration . . . . . . . . . . . . . . 160
5.5.5 Changing the disk timeout on Microsoft Windows Server . . . . . . . . . 162
5.5.6 Installing the SDD driver on Windows . . . . . . . . . . . . . . . . . . 162
5.5.7 Installing the SDDDSM driver on Windows . . . . . . . . . . . . . . . . 164
5.6 Discovering assigned volumes in Windows Server 2000 and Windows Server 2003 . . 166
5.6.1 Extending a Windows Server 2000 or Windows Server 2003 volume . . . . . 171
5.7 Example configuration - attaching an SVC to Windows Server 2008 host . . . 176
5.7.1 Installing SDDDSM on a Windows Server 2008 host . . . . . . . . . . . . 176
5.7.2 Installing SDDDSM . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
5.7.3 Attaching SVC volumes to Windows Server 2008 . . . . . . . . . . . . . . 181
5.7.4 Extending a Windows Server 2008 volume . . . . . . . . . . . . . . . . . 187
5.7.5 Removing a disk on Windows . . . . . . . . . . . . . . . . . . . . . . . 187
5.8 Using the SVC CLI from a Windows host . . . . . . . . . . . . . . . . . . . 190
5.9 Microsoft Volume Shadow Copy . . . . . . . . . . . . . . . . . . . . . . . 191
5.9.1 Installation overview . . . . . . . . . . . . . . . . . . . . . . . . . 191
5.9.2 System requirements for the IBM System Storage hardware provider . . . . 192
5.9.3 Installing the IBM System Storage hardware provider . . . . . . . . . . 192
5.9.4 Verifying the installation . . . . . . . . . . . . . . . . . . . . . . . 196
5.9.5 Creating the free and reserved pools of volumes . . . . . . . . . . . . 197
5.9.6 Changing the configuration parameters . . . . . . . . . . . . . . . . . 198
5.10 Specific Linux (on Intel) information . . . . . . . . . . . . . . . . . . 200
5.10.1 Configuring the Linux host . . . . . . . . . . . . . . . . . . . . . . 200
5.10.2 Configuration information . . . . . . . . . . . . . . . . . . . . . . . 201
5.10.3 Disabling automatic Linux system updates . . . . . . . . . . . . . . . 201
5.10.4 Setting queue depth with QLogic HBAs . . . . . . . . . . . . . . . . . 201
5.10.5 Multipathing in Linux . . . . . . . . . . . . . . . . . . . . . . . . . 202
5.10.6 Creating and preparing the SDD volumes for use . . . . . . . . . . . . 206
5.10.7 Using the operating system MPIO . . . . . . . . . . . . . . . . . . . . 208
5.10.8 Creating and preparing MPIO volumes for use . . . . . . . . . . . . . . 208
5.11 VMware configuration information . . . . . . . . . . . . . . . . . . . . . 212
5.11.1 Configuring VMware hosts . . . . . . . . . . . . . . . . . . . . . . . 213
5.11.2 Operating system versions and maintenance levels . . . . . . . . . . . 213
5.11.3 HBAs for hosts running VMware . . . . . . . . . . . . . . . . . . . . . 213
5.11.4 VMware storage and zoning guidance . . . . . . . . . . . . . . . . . . 214
5.11.5 Setting the HBA timeout for failover in VMware . . . . . . . . . . . . 214
5.11.6 Multipathing in ESX . . . . . . . . . . . . . . . . . . . . . . . . . . 215
5.11.7 Attaching VMware to volumes . . . . . . . . . . . . . . . . . . . . . . 215
5.11.8 Volume naming in VMware . . . . . . . . . . . . . . . . . . . . . . . . 218
5.11.9 Setting the Microsoft guest operating system timeout . . . . . . . . . 219
5.11.10 Extending a VMFS volume . . . . . . . . . . . . . . . . . . . . . . . 219
5.11.11 Removing a datastore from an ESX host . . . . . . . . . . . . . . . . 221
5.12 Sun Solaris support information . . . . . . . . . . . . . . . . . . . . . 222
5.12.1 Operating system versions and maintenance levels . . . . . . . . . . . 222
5.12.2 SDD dynamic pathing . . . . . . . . . . . . . . . . . . . . . . . . . . 222
5.13 Hewlett-Packard UNIX configuration information . . . . . . . . . . . . . . 223
5.13.1 Operating system versions and maintenance levels . . . . . . . . . . . 223
5.13.2 Multipath solutions supported . . . . . . . . . . . . . . . . . . . . . 223
5.13.3 Coexistence of SDD and PV Links . . . . . . . . . . . . . . . . . . . . 223
5.13.4 Using an SVC volume as a cluster lock disk . . . . . . . . . . . . . . 224
5.13.5 Support for HP-UX with greater than eight LUNs . . . . . . . . . . . . 224
5.14 Using SDDDSM, SDDPCM, and SDD web interface . . . . . . . . . . . . . . . 224
5.15 Calculating the queue depth . . . . . . . . . . . . . . . . . . . . . . . 225
5.16 Further sources of information . . . . . . . . . . . . . . . . . . . . . . 226
5.16.1 Publications containing SVC storage subsystem attachment guidelines . . 226
Chapter 6. Data migration . . . . . . . . . . . . . . . . . . . . . . . . . . 227
6.1 Migration overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
6.2 Migration operations . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
6.2.1 Migrating multiple extents (within a storage pool) . . . . . . . . . . . 228
6.2.2 Migrating extents off an MDisk that is being deleted . . . . . . . . . . 229
6.2.3 Migrating a volume between storage pools . . . . . . . . . . . . . . . . 229
6.2.4 Migrating the volume to image mode . . . . . . . . . . . . . . . . . . . 230
6.2.5 Migrating a volume between I/O Groups . . . . . . . . . . . . . . . . . 231
6.2.6 Monitoring the migration progress . . . . . . . . . . . . . . . . . . . 232
6.3 Functional overview of migration . . . . . . . . . . . . . . . . . . . . . 232
6.3.1 Parallelism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
6.3.2 Error handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
6.3.3 Migration algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 233
6.4 Migrating data from an image mode volume . . . . . . . . . . . . . . . . . 235
6.4.1 Image mode volume migration concept . . . . . . . . . . . . . . . . . . 235
6.4.2 Migration tips . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
6.5 Data migration for Windows using the SVC GUI . . . . . . . . . . . . . . . 237
6.5.1 Windows Server 2008 host system connected directly to the DS4700 . . . . 238
6.5.2 Adding the SVC between the host system and the DS4700 . . . . . . . . . 240
6.5.3 Importing the migrated disks into an online Windows Server 2008 host . . 254
6.5.4 Adding the SVC between the host and DS4700 using the CLI . . . . . . . . 257
6.5.5 Migrating a volume from managed mode to image mode . . . . . . . . . . . 260
6.5.6 Migrating the volume from image mode to image mode . . . . . . . . . . . 264
6.5.7 Removing image mode data from the SVC . . . . . . . . . . . . . . . . . 274
6.5.8 Map the free disks onto the Windows Server 2008 . . . . . . . . . . . . 277
6.6 Migrating Linux SAN disks to SVC disks . . . . . . . . . . . . . . . . . . 278
6.6.1 Connecting the SVC to your SAN fabric . . . . . . . . . . . . . . . . . 280
6.6.2 Preparing your SVC to virtualize disks . . . . . . . . . . . . . . . . . 281
6.6.3 Moving the LUNs to the SVC . . . . . . . . . . . . . . . . . . . . . . . 285
6.6.4 Migrating the image mode volumes to managed MDisks . . . . . . . . . . . 288
6.6.5 Preparing to migrate from the SVC . . . . . . . . . . . . . . . . . . . 291
6.6.6 Migrating the volumes to image mode volumes . . . . . . . . . . . . . . 294
6.6.7 Removing the LUNs from the SVC . . . . . . . . . . . . . . . . . . . . . 295
6.7 Migrating ESX SAN disks to SVC disks . . . . . . . . . . . . . . . . . . . 298
6.7.1 Connecting the SVC to your SAN fabric . . . . . . . . . . . . . . . . . 299
6.7.2 Preparing your SVC to virtualize disks . . . . . . . . . . . . . . . . . 301
6.7.3 Moving the LUNs to the SVC . . . . . . . . . . . . . . . . . . . . . . . 304
6.7.4 Migrating the image mode volumes . . . . . . . . . . . . . . . . . . . . 307
6.7.5 Preparing to migrate from the SVC . . . . . . . . . . . . . . . . . . . 310
6.7.6 Migrating the managed volumes to image mode volumes . . . . . . . . . . 312
6.7.7 Removing the LUNs from the SVC . . . . . . . . . . . . . . . . . . . . . 313
6.8 Migrating AIX SAN disks to SVC volumes . . . . . . . . . . . . . . . . . . 316
6.8.1 Connecting the SVC to your SAN fabric . . . . . . . . . . . . . . . . . 318
6.8.2 Preparing your SVC to virtualize disks . . . . . . . . . . . . . . . . . 319
6.8.3 Moving the LUNs to the SVC . . . . . . . . . . . . . . . . . . . . . . . 324
6.8.4 Migrating image mode volumes to volumes . . . . . . . . . . . . . . . . 326
6.8.5 Preparing to migrate from the SVC . . . . . . . . . . . . . . . . . . . 328
6.8.6 Migrating the managed volumes . . . . . . . . . . . . . . . . . . . . . 331
6.8.7 Removing the LUNs from the SVC . . . . . . . . . . . . . . . . . . . . . 332
6.9 Using SVC for storage migration . . . . . . . . . . . . . . . . . . . . . . 335
6.10 Using volume mirroring and thin-provisioned volumes together . . . . . . . 336
6.10.1 Zero detect feature . . . . . . . . . . . . . . . . . . . . . . . . . . 336
6.10.2 Volume mirroring with thin-provisioned volumes . . . . . . . . . . . . 338
Chapter 7. Easy Tier . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
7.1 Overview of Easy Tier . . . . . . . . . . . . . . . . . . . . . . . . . . .
7.2 Easy Tier concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . .
7.2.1 SSD arrays and MDisks . . . . . . . . . . . . . . . . . . . . . . . . .
7.2.2 Disk tiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
7.2.3 Single tier storage pools . . . . . . . . . . . . . . . . . . . . . . .
7.2.4 Multiple tier storage pools . . . . . . . . . . . . . . . . . . . . . .
7.2.5 Easy Tier process . . . . . . . . . . . . . . . . . . . . . . . . . . .
7.2.6 Easy Tier operating modes . . . . . . . . . . . . . . . . . . . . . . .
7.2.7 Easy Tier activation . . . . . . . . . . . . . . . . . . . . . . . . . .
7.3 Easy Tier implementation considerations . . . . . . . . . . . . . . . . . .
7.3.1 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
7.3.2 Implementation rules . . . . . . . . . . . . . . . . . . . . . . . . . .
7.3.3 Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
7.4 Measuring and activating Easy Tier . . . . . . . . . . . . . . . . . . . .
7.4.1 Measuring by using the Storage Advisor Tool . . . . . . . . . . . . . .
7.5 Using Easy Tier with the SVC CLI . . . . . . . . . . . . . . . . . . . . .
7.5.1 Initial cluster status . . . . . . . . . . . . . . . . . . . . . . . . .
7.5.2 Turning on Easy Tier evaluation mode . . . . . . . . . . . . . . . . . .
7.5.3 Creating a multitier storage pool . . . . . . . . . . . . . . . . . . .
7.5.4 Setting the disk tier . . . . . . . . . . . . . . . . . . . . . . . . .
7.5.5 Checking a volume's Easy Tier mode . . . . . . . . . . . . . . . . . . .
7.5.6 Final cluster status . . . . . . . . . . . . . . . . . . . . . . . . . .
7.6 Using Easy Tier with the SVC GUI . . . . . . . . . . . . . . . . . . . . .
7.6.1 Setting the disk tier on MDisks . . . . . . . . . . . . . . . . . . . .
7.6.2 Checking Easy Tier status . . . . . . . . . . . . . . . . . . . . . . .

Chapter 8. Advanced Copy Services . . . . . . . . . . . . . . . . . . . . . .
8.1 FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.1.1 Business requirement . . . . . . . . . . . . . . . . . . . . . . . . . .
8.1.2 Backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.1.3 Restore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.1.4 Moving and migrating data . . . . . . . . . . . . . . . . . . . . . . .
8.1.5 Application testing . . . . . . . . . . . . . . . . . . . . . . . . . .
8.1.6 Host considerations to ensure FlashCopy integrity . . . . . . . . . . .
8.1.7 FlashCopy attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.2 Reverse FlashCopy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.2.1 FlashCopy and Tivoli Storage Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.3 FlashCopy functional overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.4 Implementing SVC FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.4.1 FlashCopy mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.4.2 Multiple Target FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.4.3 Consistency Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.4.4 FlashCopy indirection layer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.4.5 Grains and the FlashCopy bitmap. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.4.6 Interaction and dependency between Multiple Target FlashCopy mappings . . . 8.4.7 Summary of the FlashCopy indirection layer algorithm. . . . . . . . . . . . . . . . . . . . 8.4.8 Interaction with the cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.4.9 FlashCopy and image mode disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.4.10 FlashCopy mapping events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.4.11 FlashCopy mapping states . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.4.12 Thin-provisioned FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.4.13 Background copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . .
8.4.14 Synthesis
8.4.15 Serialization of I/O by FlashCopy
8.4.16 Event handling
8.4.17 Asynchronous notifications
8.4.18 Interoperation with Metro Mirror and Global Mirror
8.4.19 FlashCopy presets
8.5 Metro Mirror
8.5.1 Metro Mirror overview
8.5.2 Remote copy techniques
8.5.3 Metro Mirror features
8.5.4 Multiple Cluster Mirroring
8.5.5 Importance of write ordering
8.5.6 Remote copy intercluster communication
8.5.7 Metro Mirror attributes
8.5.8 Methods of synchronization
8.5.9 Metro Mirror states and events
8.5.10 Practical use of Metro Mirror
8.5.11 Valid combinations of FlashCopy, Metro Mirror, and Global Mirror
8.5.12 Metro Mirror configuration limits
8.6 Metro Mirror commands
8.6.1 Listing available SVC cluster partners
8.6.2 Creating the SVC cluster partnership
8.6.3 Creating a Metro Mirror Consistency Group
8.6.4 Creating a Metro Mirror relationship
8.6.5 Changing a Metro Mirror relationship
8.6.6 Changing a Metro Mirror Consistency Group
8.6.7 Starting a Metro Mirror relationship
8.6.8 Stopping a Metro Mirror relationship
8.6.9 Starting a Metro Mirror Consistency Group
8.6.10 Stopping a Metro Mirror Consistency Group
8.6.11 Deleting a Metro Mirror relationship
8.6.12 Deleting a Metro Mirror Consistency Group
8.6.13 Reversing a Metro Mirror relationship
8.6.14 Reversing a Metro Mirror Consistency Group
8.6.15 Background copy
8.7 Global Mirror
8.7.1 Intracluster Global Mirror
8.7.2 Intercluster Global Mirror
8.7.3 Asynchronous remote copy
8.7.4 SVC Global Mirror features
8.7.5 Global Mirror relationship between master and auxiliary volumes
8.7.6 Importance of write ordering
8.7.7 Global Mirror Consistency Groups
8.7.8 Distribution of work among nodes
8.7.9 Background copy performance
8.7.10 Thin-provisioned background copy
8.8 Global Mirror process
8.8.1 Methods of synchronization
8.8.2 Global Mirror states and events
8.8.3 Practical use of Global Mirror
8.8.4 Valid combinations of FlashCopy, Metro Mirror, and Global Mirror
8.8.5 Global Mirror configuration limits
8.9 Global Mirror commands
8.9.1 Listing the available SVC cluster partners
8.9.2 Creating an SVC cluster partnership
8.9.3 Creating a Global Mirror Consistency Group
8.9.4 Creating a Global Mirror relationship
8.9.5 Changing a Global Mirror relationship
8.9.6 Changing a Global Mirror Consistency Group
8.9.7 Starting a Global Mirror relationship
8.9.8 Stopping a Global Mirror relationship
8.9.9 Starting a Global Mirror Consistency Group
8.9.10 Stopping a Global Mirror Consistency Group
8.9.11 Deleting a Global Mirror relationship
8.9.12 Deleting a Global Mirror Consistency Group
8.9.13 Reversing a Global Mirror relationship
8.9.14 Reversing a Global Mirror Consistency Group
Chapter 9. SAN Volume Controller operations using the command-line interface
9.1 Normal operations using CLI
9.1.1 Command syntax and online help
9.2 Working with managed disks and disk controller systems
9.2.1 Viewing disk controller details
9.2.2 Renaming a controller
9.2.3 Discovery status
9.2.4 Discovering MDisks
9.2.5 Viewing MDisk information
9.2.6 Renaming an MDisk
9.2.7 Including an MDisk
9.2.8 Adding MDisks to a storage pool
9.2.9 Showing MDisks in a storage pool
9.2.10 Working with a storage pool
9.2.11 Creating a storage pool
9.2.12 Viewing storage pool information
9.2.13 Renaming a storage pool
9.2.14 Deleting a storage pool
9.2.15 Removing MDisks from a storage pool
9.3 Working with hosts
9.3.1 Creating a Fibre Channel-attached host
9.3.2 Creating an iSCSI-attached host
9.3.3 Modifying a host
9.3.4 Deleting a host
9.3.5 Adding ports to a defined host
9.3.6 Deleting ports
9.4 Working with the Ethernet port for iSCSI
9.5 Working with volumes
9.5.1 Creating a volume
9.5.2 Volume information
9.5.3 Creating a thin-provisioned volume
9.5.4 Creating a volume in image mode
9.5.5 Adding a mirrored volume copy
9.5.6 Splitting a mirrored volume
9.5.7 Modifying a volume
9.5.8 I/O governing
9.5.9 Deleting a volume
9.5.10 Expanding a volume
9.5.11 Assigning a volume to a host
9.5.12 Showing volumes to host mapping
9.5.13 Deleting a volume to host mapping
9.5.14 Migrating a volume
9.5.15 Migrating a fully managed volume to an image mode volume
9.5.16 Shrinking a volume
9.5.17 Showing a volume on an MDisk
9.5.18 Showing which volumes are using a storage pool
9.5.19 Showing which MDisks are used by a specific volume
9.5.20 Showing from which storage pool a volume has its extents
9.5.21 Showing the host to which the volume is mapped
9.5.22 Showing the volume to which the host is mapped
9.5.23 Tracing a volume from a host back to its physical disk
9.6 Scripting under the CLI for SVC task automation
9.6.1 Scripting structure
9.7 SVC advanced operations using the CLI
9.7.1 Command syntax
9.7.2 Organizing on window content
9.8 Managing the cluster using the CLI
9.8.1 Viewing cluster properties
9.8.2 Changing cluster settings
9.8.3 Performing cluster authentication
9.8.4 iSCSI configuration
9.8.5 Modifying IP addresses
9.8.6 Supported IP address formats
9.8.7 Setting the cluster time zone and time
9.8.8 Starting statistics collection
9.8.9 Stopping statistics collection
9.8.10 Determining the status of a copy operation
9.8.11 Shutting down a cluster
9.9 Nodes
9.9.1 Viewing node details
9.9.2 Adding a node
9.9.3 Renaming a node
9.9.4 Deleting a node
9.9.5 Shutting down a node
9.10 I/O Groups
9.10.1 Viewing I/O Group details
9.10.2 Renaming an I/O Group
9.10.3 Adding and removing hostiogrp
9.10.4 Listing I/O Groups
9.11 Managing authentication
9.11.1 Managing users using the CLI
9.11.2 Managing user roles and groups
9.11.3 Changing a user
9.11.4 Audit log command
9.12 Managing Copy Services
9.12.1 FlashCopy operations
9.12.2 Setting up FlashCopy
9.12.3 Creating a FlashCopy Consistency Group
9.12.4 Creating a FlashCopy mapping
9.12.5 Preparing (pre-triggering) the FlashCopy mapping
9.12.6 Preparing (pre-triggering) the FlashCopy Consistency Group
9.12.7 Starting (triggering) FlashCopy mappings
9.12.8 Starting (triggering) FlashCopy Consistency Group
9.12.9 Monitoring the FlashCopy progress
9.12.10 Stopping the FlashCopy mapping
9.12.11 Stopping the FlashCopy Consistency Group
9.12.12 Deleting the FlashCopy mapping
9.12.13 Deleting the FlashCopy Consistency Group
9.12.14 Migrating a volume to a thin-provisioned volume
9.12.15 Reverse FlashCopy
9.12.16 Split-stopping of FlashCopy maps
9.13 Metro Mirror operation
9.13.1 Setting up Metro Mirror
9.13.2 Creating an SVC partnership between ITSO-CLS1 and ITSO-CLS4
9.13.3 Creating a Metro Mirror Consistency Group
9.13.4 Creating the Metro Mirror relationships
9.13.5 Creating a stand-alone Metro Mirror relationship for MM_App_Pri
9.13.6 Starting Metro Mirror
9.13.7 Starting a Metro Mirror Consistency Group
9.13.8 Monitoring the background copy progress
9.13.9 Stopping and restarting Metro Mirror
9.13.10 Stopping a stand-alone Metro Mirror relationship
9.13.11 Stopping a Metro Mirror Consistency Group
9.13.12 Restarting a Metro Mirror relationship in the Idling state
9.13.13 Restarting a Metro Mirror Consistency Group in the Idling state
9.13.14 Changing copy direction for Metro Mirror
9.13.15 Switching copy direction for a Metro Mirror relationship
9.13.16 Switching copy direction for a Metro Mirror Consistency Group
9.13.17 Creating an SVC partnership among many clusters
9.13.18 Star configuration partnership
9.14 Global Mirror operation
9.14.1 Setting up Global Mirror
9.14.2 Creating an SVC partnership between ITSO-CLS1 and ITSO-CLS4
9.14.3 Changing link tolerance and cluster delay simulation
9.14.4 Creating a Global Mirror Consistency Group
9.14.5 Creating Global Mirror relationships
9.14.6 Creating the stand-alone Global Mirror relationship for GM_App_Pri
9.14.7 Starting Global Mirror
9.14.8 Starting a stand-alone Global Mirror relationship
9.14.9 Starting a Global Mirror Consistency Group
9.14.10 Monitoring background copy progress
9.14.11 Stopping and restarting Global Mirror
9.14.12 Stopping a stand-alone Global Mirror relationship
9.14.13 Stopping a Global Mirror Consistency Group
9.14.14 Restarting a Global Mirror relationship in the Idling state
9.14.15 Restarting a Global Mirror Consistency Group in the Idling state
9.14.16 Changing direction for Global Mirror
9.14.17 Switching copy direction for a Global Mirror relationship
9.14.18 Switching copy direction for a Global Mirror Consistency Group
9.15 Service and maintenance
9.15.1 Upgrading software
9.15.2 Running maintenance procedures
9.15.3 Setting up SNMP notification
9.15.4 Set syslog event notification
9.15.5 Configuring error notification using an email server
9.15.6 Analyzing the event log
9.15.7 License settings
9.15.8 Listing dumps
9.16 Backing up the SVC cluster configuration
9.16.1 Prerequisites
9.17 Restoring the SVC cluster configuration
9.17.1 Deleting configuration backup
9.18 Working with the SVC Quorum MDisk
9.18.1 Listing the SVC Quorum MDisk
9.18.2 Changing the SVC Quorum Disk
9.19 Working with the Service Assistant menu
9.19.1 SVC CLI Service Assistant menu
9.20 SAN troubleshooting and data collection
9.21 T3 recovery process
Chapter 10. SAN Volume Controller operations using the GUI
10.1 SVC normal operations using the GUI
10.1.1 Introduction to SVC normal operations using the GUI
10.1.2 Organizing on window content
10.1.3 Help
10.2 Working with External Disk Controllers
10.2.1 Viewing Disk Controller details
10.2.2 Renaming a disk controller
10.2.3 Discovering MDisks from the External panel
10.3 Working with Storage Pools
10.3.1 Viewing Storage Pool information
10.3.2 Discovering MDisks
10.3.3 Creating Storage Pools
10.3.4 Renaming a Storage Pool
10.3.5 Deleting a Storage Pool
10.3.6 Adding or removing MDisks from a Storage Pool
10.3.7 Showing the volumes that are associated with a Storage Pool
10.4 Working with managed disks
10.4.1 MDisk information
10.4.2 Renaming an MDisk
10.4.3 Discovering MDisks
10.4.4 Adding MDisks to a Storage Pool
10.4.5 Removing MDisks from a Storage Pool
10.4.6 Including an excluded MDisk
10.4.7 Activating Easy Tier
10.5 Migration
10.6 Working with hosts
10.6.1 Host information
10.6.2 Creating a host
10.6.3 Renaming a host
10.6.4 Modifying a host
10.6.5 Deleting a host
10.6.6 Adding ports
10.6.7 Deleting ports
10.6.8 Creating or modifying the host mapping
10.6.9 Deleting a host mapping
10.6.10 Deleting all host mappings for a given host
10.7 Working with volumes
10.7.1 Volume information
10.7.2 Creating a volume
10.7.3 Renaming a volume
10.7.4 Modifying a volume
10.7.5 Modifying thin-provisioning volume properties
10.7.6 Deleting a volume
10.7.7 Creating or modifying the host mapping
10.7.8 Deleting a host mapping
10.7.9 Deleting all host mappings for a given volume
10.7.10 Shrinking a volume
10.7.11 Expanding a volume
10.7.12 Shrinking the real capacity of a thin-provisioned volume
10.7.13 Expanding the real capacity of a thin-provisioned volume
10.7.14 Migrating a volume
10.7.15 Adding a mirrored copy to an existing volume
10.7.16 Deleting a mirrored copy from a volume mirror
10.7.17 Splitting a volume copy
10.7.18 Validating volume copies
10.7.19 Migrating to a thin-provisioned volume using volume mirroring
10.7.20 Creating a volume in image mode
10.7.21 Migrating a volume to an image mode volume
10.7.22 Creating an image mode mirrored volume
10.8 Copy Services: managing FlashCopy
10.8.1 Creating a FlashCopy Mapping
10.8.2 Creating and starting a snapshot preset with a single click
10.8.3 Creating and starting a clone preset with a single click
10.8.4 Creating and starting a backup preset with a single click
10.8.5 Creating a FlashCopy Consistency Group
10.8.6 Creating FlashCopy mappings in a Consistency Group
10.8.7 Show Dependent Mappings
10.8.8 Moving a FlashCopy mapping to a Consistency Group
10.8.9 Removing a FlashCopy mapping from a Consistency Group
10.8.10 Modifying a FlashCopy mapping
10.8.11 Renaming a FlashCopy mapping
10.8.12 Renaming a Consistency Group
10.8.13 Deleting a FlashCopy mapping
10.8.14 Deleting a FlashCopy Consistency Group
10.8.15 Starting FlashCopy mappings
10.8.16 Starting a FlashCopy Consistency Group
10.8.17 Stopping the FlashCopy Consistency Group
10.8.18 Stopping the FlashCopy mapping
10.8.19 Migrating between a fully allocated volume and a Space-Efficient volume
10.8.20 Reversing and splitting a FlashCopy mapping
10.9 Copy Services: managing Remote Copy
10.9.1 Cluster partnership
10.9.2 Creating the SVC partnership between two remote SVC Clusters
10.9.3 Creating stand-alone remote copy relationships
10.9.4 Creating a Consistency Group
10.9.5 Renaming a Consistency Group
10.9.6 Renaming a Remote Copy relationship
10.9.7 Moving a stand-alone Remote Copy relationship to a Consistency Group
10.9.8 Removing a stand-alone Remote Copy relationship from a Consistency Group
10.9.9 Starting a Remote Copy relationship
10.9.10 Starting a Remote Copy Consistency Group
10.9.11 Switching the copy direction for a Remote Copy relationship
10.9.12 Switching the copy direction for a Consistency Group
10.9.13 Stopping a Remote Copy relationship
10.9.14 Stopping a Consistency Group
10.9.15 Deleting stand-alone Remote Copy relationships
10.9.16 Deleting a Consistency Group
10.10 Managing the cluster using the GUI
10.10.1 System Status information
10.10.2 View I/O groups and their associated nodes
10.10.3 View cluster properties
10.10.4 Renaming an SVC cluster
10.10.5 Shutting down a cluster
10.10.6 Upgrading software
10.11 Managing I/O Groups
10.11.1 View I/O group properties
10.11.2 Modifying I/O group properties
10.12 Managing nodes
10.12.1 View node properties
10.12.2 Renaming a node
10.12.3 Adding a node to the cluster
10.12.4 Removing a node from the cluster
10.13 Troubleshooting
10.13.1 Recommended Actions panel
10.13.2 Event Log panel
10.13.3 Run fix procedure
10.13.4 Support panel
10.14 User Management
10.14.1 Creating a user
10.14.2 Modifying user properties
10.14.3 Removing a user password
10.14.4 Removing a user SSH Public Key
10.14.5 Deleting a user
10.14.6 Creating a user group
10.14.7 Modifying user group properties
10.14.8 Deleting a user group
10.14.9 Audit log information
10.15 Configuration
10.15.1 Configuring Network
10.15.2 Configuring the Service IP addresses
10.15.3 iSCSI configuration
10.15.4 Fibre Channel information
10.15.5 Event notifications
10.15.6 Email notifications
10.15.7 SNMP notifications
10.15.8 Using the Advanced panel
10.15.9 Date and Time
10.15.10 Licensing
10.15.11 Upgrading software
10.15.12 Setting GUI Preferences
10.16 Upgrading SVC software
10.16.1 Precautions before upgrade
10.16.2 SVC software upgrade test utility
10.16.3 Upgrade procedure
10.17 Service Assistant with the GUI
10.17.1 Placing an SVC node into Service State
10.17.2 Exiting an SVC node from Service State
10.17.3 Rebooting an SVC node
10.17.4 Collect Logs page
10.17.5 Manage Cluster page
10.17.6 Recover Cluster
10.17.7 Reinstall software
10.17.8 Upgrade Manually
10.17.9 Modify WWNN
10.17.10 Change Service IP
10.17.11 Configure CLI access
10.17.12 Restart Service
Appendix A. Performance data and statistics gathering
SVC performance overview
Performance considerations
SVC
Performance monitoring
Collecting performance statistics
Performance data collection and Tivoli Storage Productivity Center for Disk
Appendix B. Terminology . . . . . 829
Commonly encountered terms . . . . . 830
Related publications . . . . . 837
IBM Redbooks publications . . . . . 837
Other publications . . . . . 837
Online resources . . . . . 838
How to get IBM Redbooks publications . . . . . 839
Help from IBM . . . . . 839
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 841
Notices
This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service. IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A. The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you. This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice. 
Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk. IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you. Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:
AIX 5L AIX BladeCenter DB2 developerWorks DS4000 DS6000 DS8000 FlashCopy GPFS IBM Systems Director Active Energy Manager IBM Power Systems Redbooks Redpaper Redbooks (logo) Solid System i System p System Storage DS System Storage System x Tivoli TotalStorage WebSphere XIV z/OS zSeries
The following terms are trademarks of other companies: Snapshot, and the NetApp logo are trademarks or registered trademarks of NetApp, Inc. in the U.S. and other countries. Java, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Intel Xeon, Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. UNIX is a registered trademark of The Open Group in the United States and other countries. Linux is a trademark of Linus Torvalds in the United States, other countries, or both. Other company, product, or service names may be trademarks or service marks of others.
Preface
This IBM Redbooks publication is a detailed technical guide to the IBM System Storage SAN Volume Controller (SVC) Version 6.1.0. The SAN Volume Controller is a virtualization appliance solution that maps virtualized volumes, which are visible to hosts and applications, to physical volumes on storage devices. Each server within the storage area network (SAN) has its own set of virtual storage addresses that are mapped to physical addresses. If the physical addresses change, the server continues running using the same virtual addresses that it had before; therefore, volumes or storage can be added or moved while the server is still running. The IBM virtualization technology improves the management of information at the block level in a network, thus enabling applications and servers to share storage devices on a network. This book is intended for readers who need to implement the SVC at the 6.1.0 release level with a minimum of effort.
Peter Crowhurst is a Consulting IT Specialist in the Systems and Technology Group, IBM Australia. He has 33 years of experience in the IT industry, including eight years working in a customer organization as an applications and systems programmer and in network planning and design. He joined IBM 25 years ago and has worked mainly in a large systems technical pre-sales support role for zSeries and storage products. Peter has coauthored three IBM Redbooks publications about ESS and SVC products. Frank Enders has worked for the last four years for EMEA IBM System Storage SAN Volume Controller Level 2 support in Mainz, Germany, providing pre-sales and post-sales support. He has been with IBM Germany for 16 years, starting as a disk production technician with IBM Mainz and later working in the magnetic head production area. In 2001, IBM ceased disk production in Mainz and Frank joined ESCC Mainz as a member of the Installation Readiness team for the DS8000, DS6000, and IBM System Storage SAN Volume Controller. During that time he also studied for four years to earn a diploma in Electrical Engineering. Ian MacQuarrie is a Senior Technical Staff Member in the IBM Systems and Technology Group, San Jose, California. He has 26 years of experience in Enterprise Storage Systems, working in a variety of test and support roles. He is currently a member of the STG Field Assist Team (FAST), which supports clients through critical account engagements, availability assessments, and technical advocacy. His areas of expertise include Storage Area Network, Open Systems storage solutions, and performance analysis. Ian has coauthored a previous IBM Redbooks publication about SVC Best Practices and Performance Guidelines.
We extend our thanks to the following people for their contributions to this project, including the development and PFE teams in Hursley. Matt Smith was also instrumental in moving any issues along and ensuring that they maintained a high profile. In particular, we thank the previous authors of this book:
Matt Amanat
Pall Beck
Angelo Bernasconi
Steve Cody
Sean Crawford
Sameer Dhulekar
Werner Eggli
Katja Gebuhr
Deon George
Amarnath Hiriyannappa
Thorsten Hoss
Juerg Hossli
Philippe Jachimczyk
Kamalakkannan J Jayaraman
Dan Koeck
Bent Lerager
Craig McKenna
Andy McManus
Joao Marcos Leite
Barry Mellish
Suad Musovich
Massimo Rosati
Fred Scholten
Robert Symons
Marcus Thordal
Xiao Peng Zhao
Thanks also to the following people for their contributions to previous editions, and to those who contributed to this edition:
Chris Canto
Peter Eccles
Carlos Fuente
Alex Howell
Colin Jewell
Geoff Lane
Andrew Martin
Paul Merrison
Steve Randle
Lucy Harris (nee Raw)
Bill Scales
Dave Sinclair
Matt Smith
Steve White
Barry Whyte
Evelyn Wick
IBM Hursley
Marc Bruni
IBM Houston
Larry Chiu
Paul Muench
IBM Almaden
Bill Wiegand
IBM Advanced Technical Support
Sharon Wang
IBM Chicago
Chris Saul
IBM San Jose
Lisa Dorr
IBM Colorado
Tina Sampson
IBM Tucson
Rita Roque
IBM Rochester
Yan H. Chu
IBM San Jose
Sangam Racherla
IBM ITSO
Special thanks to the Brocade staff for their unparalleled support of this residency in terms of equipment and support in many areas:
Jim Baldyga
Mansi Botadra
Yong Choi
Silviano Gaona
Brian Steffler
Marcus Thordal
Steven Tong
Brocade Communications Systems
Comments welcome
Your comments are important to us! We want our books to be as helpful as possible. Send us your comments about this book or other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
redbooks@us.ibm.com
Mail your comments to: IBM Corporation, International Technical Support Organization Dept. HYTD Mail Station P099 2455 South Road Poughkeepsie, NY 12601-5400
Chapter 1. Introduction to storage virtualization
The key concept of virtualization is to decouple the storage from the storage functions required in today's storage area network (SAN) environment.
Decoupling means abstracting the physical location of data from the logical representation of the data. The virtualization engine presents logical entities to the user and internally manages the process of mapping these entities to the actual location of the physical storage.
The actual mapping that is performed depends on the specific implementation, as does the granularity of the mapping, which can range from a small fraction of a physical disk up to the full capacity of a physical disk. A single block of information in this environment is identified by its logical unit number (LUN), which identifies the physical disk, and an offset within that LUN, which is known as a logical block address (LBA). Note that the term physical disk is used in this context to describe a piece of storage that might be carved out of a RAID array in the underlying disk subsystem. Specific to the SVC implementation, the address space that is presented as a logical entity is referred to as a volume, and the physical disks are referred to as managed disks (MDisks). Figure 1-2 on page 4 shows an overview of block-level virtualization.
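As an illustration of this address mapping, the following sketch resolves a volume LBA to a physical (MDisk, LBA) pair by splitting the volume into fixed-size extents. This is a hypothetical sketch of the general technique, not SVC code; the extent size, class, and names are assumptions for illustration only.

```python
# Illustrative sketch of block-level virtualization (not actual SVC code):
# a volume's virtual address space is split into fixed-size extents, and
# each extent maps to a location on a managed disk (MDisk).

EXTENT_BLOCKS = 32768  # assumed blocks per extent (16 MiB of 512-byte blocks)

class Volume:
    def __init__(self, extent_map):
        # extent_map[i] = (mdisk_name, starting block of extent i on that MDisk)
        self.extent_map = extent_map

    def translate(self, lba):
        """Map a virtual LBA on the volume to a physical (MDisk, LBA) pair."""
        extent, offset = divmod(lba, EXTENT_BLOCKS)
        mdisk, start = self.extent_map[extent]
        return mdisk, start + offset

# Two virtual extents backed by two different MDisks:
vol = Volume({0: ("mdisk0", 0), 1: ("mdisk1", 8192)})
print(vol.translate(10))                  # extent 0 -> lands on mdisk0
print(vol.translate(EXTENT_BLOCKS + 10))  # extent 1 -> lands on mdisk1
```

Because the host sees only the volume's virtual addresses, the virtualization layer can change `extent_map` (for example, during a migration) without the server noticing.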
The server and application are only aware of the logical entities, and they access these entities through a consistent interface that is provided by the virtualization layer. The functionality of a volume that is presented to a server, such as expanding or reducing the size of a volume, mirroring a volume, creating a FlashCopy, thin provisioning, and so on, is implemented in the virtualization layer. It does not rely in any way on the functionality that is provided by the underlying disk subsystem. Data that is stored in a virtualized environment is stored in a location-independent way, which allows a user to move or migrate data between physical locations, referred to as storage pools.

We refer to block-level storage virtualization as the cornerstone of virtualization. These are the core benefits that a product such as the SVC can provide over traditional directly attached or SAN storage:

- Online volume migration while applications are running, which is possibly the greatest single benefit of storage virtualization. This capability allows data to be migrated on and between the underlying storage subsystems without any impact to the servers and applications; in fact, the migration is performed without the servers and applications even being aware that it occurred.
- Simplified storage management, through a single image for multiple controllers and a consistent user interface for provisioning heterogeneous storage.
- Enterprise-level copy services functions. Performing the copy services functions within the SVC removes dependencies on the storage subsystems, thereby enabling the source and target copies to be on different storage subsystem types.
- Increased storage utilization, by pooling storage across the SAN.
- Improved system performance, often as a result of volume striping across multiple arrays or controllers and the additional cache that the SVC provides.
The SVC delivers these functions in a homogeneous way on a scalable and highly available platform, over any attached storage, and to any attached server.
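The volume-striping benefit described above can be sketched as a simple round-robin extent allocation across the MDisks of a storage pool, so that sequential I/O is spread over all back-end arrays. The function and names below are illustrative assumptions, not the SVC's actual allocation code.

```python
# Hypothetical sketch of extent striping across a storage pool (not SVC code).

def stripe_extents(num_extents, mdisks):
    """Assign each virtual extent to an MDisk in round-robin order."""
    return [mdisks[i % len(mdisks)] for i in range(num_extents)]

# Six extents striped across a pool of three MDisks:
layout = stripe_extents(6, ["mdisk0", "mdisk1", "mdisk2"])
print(layout)  # consecutive extents alternate across the three MDisks
```

Spreading consecutive extents across controllers is what lets a single volume draw on the aggregate bandwidth of several back-end arrays.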
1.4 Summary
Storage virtualization is no longer merely a concept or an unproven technology. All major storage vendors offer storage virtualization products. Making use of storage virtualization as the foundation for a flexible and reliable storage solution helps enterprises to better align business and IT by optimizing the storage infrastructure and storage management to meet business demands. The IBM System Storage SAN Volume Controller is a mature, sixth-generation virtualization solution that uses open standards and is consistent with the Storage Networking Industry Association (SNIA) storage model. The SVC is an appliance-based in-band block virtualization process, in which intelligence, including advanced storage functions, is migrated from individual storage devices to the storage network. The IBM System Storage SAN Volume Controller can improve the utilization of your storage resources, simplify your storage management, and improve the availability of your applications.
Chapter 2. IBM System Storage SAN Volume Controller
There are two major approaches in use today for implementing block-level aggregation and virtualization:

- Symmetric: in-band appliance
  The device is a SAN appliance that sits in the data path, and all I/O flows through the device. This kind of implementation is also referred to as symmetric virtualization or in-band. The device is both target and initiator: it is the target of I/O requests from the host perspective, and the initiator of I/O requests from the storage perspective. The redirection is performed by issuing new I/O requests to the storage. The SVC uses symmetric virtualization.

- Asymmetric: out-of-band or controller-based
  The device is usually a storage controller that provides an internal switch for external storage attachment. In this approach, the storage controller intercepts and redirects I/O requests to the external storage as it does for internal storage. The actual I/O requests are themselves redirected. This kind of implementation is also referred to as asymmetric virtualization or out-of-band.

Figure 2-1 shows variations of the two virtualization approaches.
Although these approaches provide essentially the same cornerstones of virtualization, there can be interesting side effects, as discussed here.
The controller-based approach has high functionality, but it falls short in terms of scalability and upgradability. Because of the nature of its design, there is no true decoupling with this approach, which becomes an issue over the life cycle of the solution, for example, when the controller must be replaced. You will be challenged with data migration issues and questions, such as how to reconnect the servers to the new controller, and how to do so online, without any impact to your applications. Be aware that with this approach, you not only replace a controller but also implicitly replace your entire virtualization solution. In addition to replacing the hardware, it can also be necessary to update or repurchase the licenses for the virtualization feature, advanced copy functions, and so on.

With a SAN or fabric-based appliance solution that is based on a scale-out cluster architecture, life cycle management tasks, such as adding or replacing disk subsystems or migrating data between them, are extremely simple. Servers and applications remain online, data migration takes place transparently on the virtualization platform, and licenses for virtualization and copy services require no update; that is, they incur no additional costs when disk subsystems are replaced.

Only the fabric-based appliance solution provides an independent and scalable virtualization platform that can provide enterprise-class copy services; is open for future interfaces and protocols; allows you to choose the disk subsystems that best fit your requirements; and does not lock you into specific SAN hardware. For these reasons, IBM has chosen the SAN or fabric-based appliance approach for the implementation of the IBM System Storage SAN Volume Controller (SVC). The SVC possesses the following key characteristics:
- It is highly scalable, providing an easy growth path from one node pair to four (nodes are added in pairs).
- It is SAN interface-independent. It currently supports FC and iSCSI, but is also open for future enhancements.
- It is host-independent for fixed-block-based Open Systems environments.
- It is external storage RAID controller-independent, providing a continual and ongoing process to qualify additional types of controllers.
- It is able to use disks located internally within the nodes (solid-state drives).
- It is able to use disks locally attached to the nodes (SAS drives).

On the SAN storage that is provided by the disk subsystems, the SVC can offer the following services:
- It can create and manage a single pool of storage attached to the SAN.
- It can manage multiple tiers of storage.
- It provides block-level virtualization (logical unit virtualization).
- It provides automatic block-level (sub-LUN) data migration between storage tiers.
- It provides advanced functions to the entire SAN, such as:
  - A large, scalable cache
  - Advanced Copy Services:
    - FlashCopy (point-in-time copy)
    - Metro Mirror and Global Mirror (synchronous and asynchronous remote copy)
This list of features will grow with each future release, because the layered architecture of the SVC can easily implement new storage features.
A cluster of SVC nodes is connected to the same fabric and presents logical disks (virtual disks), or volumes, to the hosts. These volumes are created from managed LUNs, or MDisks, that are presented by the RAID disk subsystems. There are two distinct zones in the fabric:
- A host zone, in which the hosts can see and address the SVC nodes
- A storage zone, in which the SVC nodes can see and address the MDisks/logical unit numbers (LUNs) presented by the RAID subsystems

Hosts are not permitted to operate on the RAID LUNs directly; all data transfer happens through the SVC nodes. This design is commonly described as symmetric virtualization. For iSCSI-based access, using two networks and separating iSCSI traffic within the networks by using a dedicated virtual local area network (VLAN) path for storage traffic prevents any IP interface, switch, or target port failure from compromising the host servers' access to the volumes' LUNs.
Table 2-1 New SVC terminology mapping

  6.1.0 SVC term      Previous SVC term        Description
  ------------------  -----------------------  ------------------------------------------------
  event               error                    An occurrence of significance to a task or
                                               system. Events can include completion or failure
                                               of an operation, a user action, or the change in
                                               state of a process.
  host mapping        VDisk-to-host mapping    The process of controlling which hosts have
                                               access to specific volumes within a cluster.
  storage pool        managed disk group       A collection of storage capacity that provides
                                               the capacity requirements for a volume.
  thin provisioning   space-efficient          The ability to define a storage unit (full
                                               system, storage pool, volume) with a logical
                                               capacity size that is larger than the physical
                                               capacity assigned to that storage unit.
  volume              virtual disk (VDisk)     A discrete unit of storage on disk, tape, or
                                               other data recording medium that supports a form
                                               of identifier and parameter list, such as a
                                               volume label or input/output control.
For a detailed glossary of the terms and definitions used with the SAN Volume Controller, see Appendix B, Terminology on page 829.
2.4.1 Nodes
Each SAN Volume Controller hardware unit is called a node. A node provides the virtualization for a set of volumes, cache, and copy services functions. SVC nodes are deployed in pairs, and one or more pairs make up a cluster. A cluster can consist of between one and four SVC node pairs.
One of the nodes within the cluster will be known as the configuration node. The configuration node manages the configuration activity for the cluster. If this node fails, the cluster will choose a new node to become the configuration node. Because the nodes are installed in pairs, each node provides a failover function to its partner node in the event of a node failure.
2.4.3 Cluster
A cluster consists of between one and four I/O Groups. Certain configuration limits are set for the individual cluster; for example, the maximum number of volumes supported per cluster is 8192, and the maximum managed disk capacity supported per cluster is 32 PB. All configuration, monitoring, and service tasks are performed at the cluster level, and configuration settings are replicated to all nodes in the cluster. To facilitate these tasks, a management IP address is set for the cluster. A process is provided to back up the cluster configuration data onto disk so that the cluster can be restored in the event of a disaster. Note that this method does not back up application data; only SVC cluster configuration information is backed up. For the purposes of remote data mirroring, two or more clusters must form a partnership before relationships between mirrored volumes can be created.
For details about the maximum configurations that apply to clusters, I/O Groups, and nodes, select the restrictions link in the section corresponding to your SVC code level at:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html
2.4.4 MDisks
The SVC cluster and its I/O Groups view the storage that is presented to the SAN by the back-end controllers as a number of disks or LUNs, known as managed disks or MDisks. Because the SVC does not attempt to provide recovery from physical disk failures within the back-end controllers, an MDisk is usually provisioned from a RAID array. The application servers, however, do not see the MDisks at all. Instead, they see a number of logical disks, known as virtual disks or volumes, which are presented by the SVC I/O Groups through the SAN (FC) or LAN (iSCSI) to the servers. The MDisks are placed into storage pools, where they are divided up into a number of extents, which can range in size from 16 MB to 8192 MB, as defined by the SVC administrator. A volume is host-accessible storage that has been provisioned out of one storage pool or, if it is a mirrored volume, out of two storage pools. The maximum size of an MDisk is 1 PB, and an SVC cluster supports up to 4096 MDisks. At any point in time, an MDisk is in one of the following three modes:

- Unmanaged MDisk: An MDisk is reported as unmanaged when it is not a member of any storage pool. An unmanaged MDisk is not associated with any volumes and has no metadata stored on it. The SVC will not write to an MDisk that is in unmanaged mode, except when it attempts to change the mode of the MDisk to one of the other modes. The SVC can see the resource, but it is not assigned to a storage pool.

- Managed MDisk: Managed mode MDisks are always members of a storage pool, and they contribute extents to the storage pool. Volumes (if not operated in image mode) are created from these extents. MDisks operating in managed mode might have metadata extents allocated from them and can be used as quorum disks. This is the most common and normal mode for an MDisk.

- Image mode MDisk: Image mode provides a direct block-for-block translation from the MDisk to the volume by using virtualization.
This mode is provided to satisfy three major usage scenarios:
- Image mode allows the virtualization of MDisks that already contain data that was written directly, not through an SVC; rather, it was created by a direct-connected host. This mode allows a client to insert the SVC into the data path of an existing storage volume or LUN with minimal downtime. Chapter 6, Data migration on page 227, provides details of the data migration process.
- Image mode allows a volume that is managed by the SVC to be used with the native copy services functions provided by the underlying RAID controller. To avoid the loss of data integrity when the SVC is used in this way, it is important that you disable the SVC cache for the volume.
- SVC provides the ability to migrate to image mode, which allows the SVC to export volumes and access them directly from a host without the SVC in the path.

Each MDisk presented from an external disk controller has an online path count, which is the number of nodes having access to that MDisk. The maximum count is the maximum number of paths detected at any point in time by the cluster. The current count is what the cluster sees at
this point in time. A current value that is less than the maximum can indicate that SAN fabric paths have been lost. See 2.5.1, Image mode volumes on page 21 for more details. Starting with SVC 6.1, internal SSDs do not appear as MDisks. Internal SSDs are used and appear as disk drives, and therefore additional RAID protection is required for them. Note: Users of internal solid-state devices (SSDs) on the SAN Volume Controller 2145-CF8 cannot install SVC 6.1.0 at this time.
Each MDisk in the storage pool is divided into a number of extents. The extent size is selected by the administrator when the storage pool is created and cannot be changed later. The size of an extent ranges from 16 MB up to 8 GB. It is a best practice to use the same extent size for all storage pools in a cluster; this is a prerequisite for supporting volume migration between two storage pools. If the storage pool extent sizes are not the same, you must instead use volume mirroring (see 2.5.4, Mirrored volumes on page 24) to copy volumes between pools. SVC limits the number of extents in a cluster to 2^22 (approximately 4 million). Because the number of addressable extents is limited, the total capacity of an SVC cluster depends on the extent size that is chosen by the SVC administrator. The capacity numbers that are specified in Table 2-2 for an SVC cluster assume that all defined storage pools have been created with the same extent size.
Table 2-2 Extent size-to-addressability matrix

  Extent size   Maximum cluster capacity
  16 MB         64 TB
  32 MB         128 TB
  64 MB         256 TB
  128 MB        512 TB
  256 MB        1 PB
  512 MB        2 PB
  1024 MB       4 PB
  2048 MB       8 PB
  4096 MB       16 PB
  8192 MB       32 PB
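The relationship in Table 2-2 follows directly from the 2^22 extent limit: the maximum cluster capacity is simply the extent size multiplied by the number of addressable extents. The following Python sketch (illustrative only, not SVC code) reproduces two rows of the table:

```python
MAX_EXTENTS = 2 ** 22  # ~4 million addressable extents per cluster

def max_cluster_capacity_mb(extent_size_mb: int) -> int:
    """Return the maximum cluster capacity in MB for a given extent size."""
    return extent_size_mb * MAX_EXTENTS

# Reproduce two rows of Table 2-2:
TB = 1024 * 1024       # MB per TB
PB = 1024 * TB         # MB per PB
assert max_cluster_capacity_mb(16) == 64 * TB     # 16 MB extents -> 64 TB
assert max_cluster_capacity_mb(8192) == 32 * PB   # 8192 MB extents -> 32 PB
```

The same arithmetic explains the best-practice advice that follows: a 256 MB extent size already addresses 1 PB, which is sufficient for most clusters.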
For most clusters, a capacity of 1 to 2 PB is sufficient. A best practice is to use 256 MB or, for larger clusters, 512 MB as the standard extent size.
2.4.8 Volumes
Volumes are logical disks presented to the host or application servers by the SVC. The hosts cannot see the MDisks; they can only see the logical volumes created from combining extents from a storage pool.
There are three types of volumes: striped, sequential, and image. The type is determined by the way in which the extents are allocated from the storage pool:
- A volume created in striped mode has extents allocated from each MDisk in the storage pool in a round-robin fashion.
- With a sequential mode volume, extents are allocated sequentially from one MDisk.
- An image mode volume is a one-to-one mapping of extents between the volume and an MDisk.

Striped mode is the best method to use in most cases. However, sequential extent allocation can slightly increase sequential performance for certain workloads. Figure 2-4 on page 19 shows striped volume mode and sequential volume mode, and illustrates how the extent allocation from the storage pool differs.
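The difference between striped and sequential extent allocation can be sketched in a few lines of Python. This is a toy illustration of the layout described above, not SVC code; the MDisk names are hypothetical:

```python
def striped_extents(mdisks, n):
    """Striped mode: allocate n extents round-robin across the MDisks."""
    return [mdisks[i % len(mdisks)] for i in range(n)]

def sequential_extents(mdisk, n):
    """Sequential mode: allocate n extents one after another from one MDisk."""
    return [mdisk] * n

# A 5-extent striped volume alternates across the pool's MDisks:
assert striped_extents(["mdisk0", "mdisk1", "mdisk2"], 5) == \
    ["mdisk0", "mdisk1", "mdisk2", "mdisk0", "mdisk1"]
# A sequential volume draws every extent from the same MDisk:
assert sequential_extents("mdisk0", 3) == ["mdisk0", "mdisk0", "mdisk0"]
```

Striping spreads the I/O load of a single volume across all MDisks in the pool, which is why it is the default choice.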
You can allocate the extents for a volume in many ways. The process is under full user control at volume creation time and can be changed at any time by migrating single extents of a volume to another MDisk within the storage pool. Chapter 6, Data migration on page 227, Chapter 10, SAN Volume Controller operations using the GUI on page 579, and Chapter 9, SAN Volume Controller operations using the command-line interface on page 439, provide detailed explanations about how to create volumes and migrate extents by using the GUI or CLI.
Easy Tier will create a migration report every 24 hours on the number of extents that would be moved if the pool were a multitiered storage pool. So even though Easy Tier extent migration is not possible within a single tier pool, the Easy Tier statistical measurement function is available. The Easy Tier function can make it more appropriate to use smaller storage pool extent sizes. The usage statistics file can be offloaded from the SVC nodes. Then you can use an IBM Storage Advisor Tool to create a summary report. Contact your IBM representative or IBM Business Partner for more information about the Storage Advisor Tool. For more detailed information about Easy Tier functionality, see Chapter 7, Easy Tier on page 345.
2.4.10 Hosts
Volumes can be mapped to a host to allow access by a specific server to a set of volumes. A host within the SVC is a collection of HBA worldwide port names (WWPNs) or iSCSI qualified names (IQNs) that are defined on the specific server. Note that iSCSI names are internally identified by fake WWPNs, that is, WWPNs that are generated by the SVC. Volumes can be mapped to multiple hosts, for example, a volume that is accessed by multiple hosts of a server cluster.

iSCSI is an alternative means of attaching hosts. However, all communication with back-end storage subsystems, and with other SVC clusters, is still through FC. Node failover can be handled without having a multipath driver installed on the iSCSI server: an iSCSI-attached server can simply reconnect after a node failover to the original target IP address, which is then presented by the partner node. However, to protect the server against link failures in the network or host bus adapter (HBA) failures, using a multipath driver is mandatory.

Volumes are LUN-masked to the host's HBA WWPNs by a process called host mapping. Mapping a volume to the host makes it accessible to the WWPNs or iSCSI names (IQNs) that are configured on the host object. For a SCSI-over-Ethernet connection, the IQN identifies the iSCSI target (destination) adapter. Host objects can have both IQNs and WWPNs.
this specific storage pool must be the same as the extent size of the pool into which you plan to migrate the data. All of the SVC copy services functions can be applied to image mode disks.
The allocation of a specific number of extents from a specific set of MDisks is performed by the following algorithm: if the set of MDisks from which to allocate extents contains more than one MDisk, extents are allocated from the MDisks in a round-robin fashion. If an MDisk has no free extents when its turn arrives, its turn is missed and the round-robin moves to the next MDisk in the set that has a free extent. When a new volume is created, the first MDisk from which to allocate an extent is chosen in a pseudo-random way rather than simply being the next MDisk in the round-robin. The pseudo-random choice avoids the situation whereby the striping effect inherent in a round-robin algorithm places the first extent of a large number of volumes on the same MDisk. Placing the first extent of many volumes on the same MDisk can lead to poor performance for workloads that place a large I/O load on the first extent of each volume, or that create multiple sequential streams.
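The allocation algorithm described above can be sketched as follows. This is an illustrative model under the stated rules (pseudo-random starting MDisk, round-robin with full MDisks skipped), not the SVC implementation:

```python
import random

def allocate_extents(free, count):
    """Allocate `count` extents from a pool.

    free  -- dict mapping MDisk name -> number of free extents (mutated)
    Returns a list of MDisk names, one per allocated extent.
    """
    names = list(free)
    # The first MDisk is chosen pseudo-randomly so that the first extent
    # of many volumes does not always land on the same MDisk.
    pos = random.randrange(len(names))
    allocated = []
    while len(allocated) < count:
        if all(v == 0 for v in free.values()):
            raise RuntimeError("storage pool out of free extents")
        mdisk = names[pos % len(names)]
        if free[mdisk] > 0:          # an MDisk with no free extents is skipped
            free[mdisk] -= 1
            allocated.append(mdisk)
        pos += 1                     # round-robin to the next MDisk
    return allocated
```

Because the round-robin skips exhausted MDisks, allocation drains the pool evenly: each MDisk contributes extents in proportion to what it has free.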
A second copy can be added to a volume with a single copy, or removed from a volume with two copies. Checks prevent the accidental removal of the only remaining copy of a volume. A newly created, unformatted volume with two copies will initially have the two copies in an out-of-synchronization state. The primary copy will be defined as fresh and the secondary copy as stale. The synchronization process will update the secondary copy until it is fully synchronized. This is done at the default synchronization rate or at a rate defined when creating the volume or modifying it. The synchronization status for mirrored volumes is recorded on the quorum disk.
If a two-copy mirrored volume is created with the format parameter, both copies are formatted in parallel, and the volume comes online when both operations are complete and the copies are in sync. If mirrored volumes are expanded or shrunk, all of their copies are also expanded or shrunk. If it is known that the MDisk space that will be used for creating copies is already formatted, or if the user does not require read stability, a "no synchronization" option can be selected, which declares the copies as synchronized (even when they are not). To minimize the time required to resynchronize a copy that has gone out of sync, only the 256 KB grains that have been written to since synchronization was lost are copied. This approach is known as incremental synchronization; only the changed grains need to be copied to restore synchronization.

Important: An unmirrored volume can be migrated from one location to another by simply adding a second copy at the desired destination, waiting for the two copies to synchronize, and then removing the original copy 0. This operation can be stopped at any time, and the two copies can be in separate storage pools with separate extent sizes.

Where there are two copies of a volume, one copy is known as the primary copy. If the primary copy is available and synchronized, reads from the volume are directed to it. The user can select the primary when creating the volume, or change it later. Placing the primary copy on a high-performance controller maximizes the read performance of the volume. The write performance, however, is constrained if one copy is on a lower-performance controller, because writes must complete to both copies before the volume can acknowledge to the host that the write completed successfully.
Remember that writes to both copies must complete for a write to be considered successful, even if volume mirroring has one copy in a solid-state drive storage pool and the second copy in a storage pool containing resources from a disk subsystem. A volume with two copies can be checked to see whether all of the copies are identical or consistent. If a medium error is encountered while reading from one copy, it is repaired by using data from the other copy. This consistency check is performed asynchronously with host I/O.

Important: Mirrored volumes can be taken offline if there is no quorum disk available. This behavior occurs because the synchronization status for mirrored volumes is recorded on the quorum disk.

Mirrored volumes consume bitmap space at a rate of 1 bit per 256 KB grain, which translates to 1 MB of bitmap space supporting 2 TB of mirrored volumes. The default allocation of bitmap space is 20 MB, which supports 40 TB of mirrored volumes. If all 512 MB of variable bitmap space is allocated to mirrored volumes, 1 PB of mirrored volumes can be supported.
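The bitmap figures above follow from the 1-bit-per-256-KB-grain rule; a short Python check (illustrative arithmetic, not SVC code) confirms each value:

```python
GRAIN = 256 * 1024  # bytes per grain (256 KB)

def mirrored_capacity_bytes(bitmap_bytes: int) -> int:
    """Mirrored-volume capacity a bitmap allocation can track,
    at 1 bit per 256 KB grain."""
    return bitmap_bytes * 8 * GRAIN

MB = 1024 ** 2
TB = 1024 ** 4
PB = 1024 ** 5
assert mirrored_capacity_bytes(1 * MB) == 2 * TB     # 1 MB bitmap -> 2 TB
assert mirrored_capacity_bytes(20 * MB) == 40 * TB   # default 20 MB -> 40 TB
assert mirrored_capacity_bytes(512 * MB) == 1 * PB   # maximum 512 MB -> 1 PB
```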
virtual capacity available to the host. In a fully allocated volume, these two values will be the same. Thus, the real capacity will determine the quantity of MDisk extents that will be initially allocated to the volume. The virtual capacity will be the capacity of the volume reported to all other SVC components (for example, FlashCopy, Cache, and Remote Copy) and to the host servers. The real capacity is used to store both the user data and the metadata for the thin-provisioned volume. The real capacity can be specified as an absolute value or a percentage of the virtual capacity. Thin-provisioned volumes can be used as volumes assigned to the host; by FlashCopy to implement thin-provisioned FlashCopy targets; and also with the mirrored volumes feature. When a thin-provisioned volume is initially created, a small amount of the real capacity will be used for initial metadata. Write I/Os to grains of the thin volume that have not previously been written to will cause grains of the real capacity to be used to store metadata and the actual user data. Write I/Os to grains that have previously been written to will update the grain where data was previously written. The grain size is defined when the volume is created and can be 32 KB, 64 KB, 128 KB, or 256 KB. Figure 2-8 illustrates the thin-provisioning concept.
Thin-provisioned volumes store both user data and metadata, and each grain of data requires metadata to be stored. This means that the I/O rates obtained from thin-provisioned volumes are lower than those of fully allocated volumes. The metadata storage overhead will never be greater than 0.1% of the user data, and the overhead is independent of the virtual capacity of the volume. If you are using thin-provisioned volumes in a FlashCopy map, then for best performance use the same grain
size as the map grain size. If you are using the thin-provisioned volume directly with a host system, then use a small grain size. Thin-provisioned volume format: Thin-provisioned volumes do not need formatting. A read I/O, which requests data from unallocated data space, will return zeroes. When a write I/O causes space to be allocated, the grain will be zeroed prior to use. However, if the node is a CF8, space is not allocated for a host write that contains all zeros. The formatting flag will be ignored when a thin volume is created or when the real capacity is expanded; the virtualization component will never format the real capacity of a thin-provisioned volume. The real capacity of a thin volume can be changed if the volume is not in image mode. Increasing the real capacity allows a larger amount of data and metadata to be stored on the volume. Thin-provisioned volumes use the real capacity provided in ascending order as new data is written to the volume. If the user initially assigns too much real capacity to the volume, the real capacity can be reduced to free storage for other uses. A thin-provisioned volume can be configured to autoexpand. This feature causes the SVC to automatically add a fixed amount of additional real capacity to the thin volume as required. Autoexpand therefore attempts to maintain a fixed amount of unused real capacity for the volume. This amount is known as the contingency capacity. The contingency capacity is initially set to the real capacity that is assigned when the volume is created. If the user modifies the real capacity, the contingency capacity is reset to be the difference between the used capacity and real capacity. A volume that is created without the autoexpand feature, and thus has a zero contingency capacity, will go offline as soon as the real capacity is used and needs to expand. Autoexpand will not cause the real capacity to grow much beyond the virtual capacity. 
The real capacity can be manually expanded to more than the maximum that is required by the current virtual capacity, and the contingency capacity is then recalculated. To support the autoexpansion of thin-provisioned volumes, the storage pools from which they are allocated have a configurable capacity warning. When the used capacity of the pool exceeds the warning capacity, a warning event is logged. For example, if a warning of 80% has been specified, the event is logged when 20% of the free capacity remains.

Thin-provisioned volume performance: Thin-provisioned volumes require additional I/O operations to read and write metadata to the back-end storage, which also generates additional load on the SVC nodes. Therefore, avoid using thin-provisioned volumes for high-performance applications, or for any workload with a high write I/O component.

A thin-provisioned volume can be converted nondisruptively to a fully allocated volume, or vice versa, by using the volume mirroring function. For example, you can add a thin-provisioned copy to a fully allocated primary volume and then remove the fully allocated copy from the volume after they are synchronized. The fully-allocated-to-thin-provisioned migration procedure uses a zero-detection algorithm so that grains containing all zeros do not cause any real capacity to be used.

Note: Consider using thin-provisioned volumes as targets in FlashCopy relationships. Using them as targets in Metro Mirror or Global Mirror relationships makes no sense, because during the initial synchronization the target becomes fully allocated.
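The interplay of virtual capacity, real capacity, contingency capacity, and autoexpand described above can be modeled in a few lines. This is a deliberately simplified toy model of the capacity accounting, with hypothetical numbers, not the SVC implementation:

```python
class ThinVolume:
    """Toy model of thin-provisioned capacity accounting."""

    def __init__(self, virtual, real, autoexpand=True):
        self.virtual = virtual          # capacity reported to hosts
        self.real = real                # physically allocated capacity
        self.used = 0
        self.autoexpand = autoexpand
        self.contingency = real         # initially equal to the real capacity
        self.online = True

    def write(self, amount):
        """Consume real capacity for newly written grains."""
        self.used += amount
        if self.autoexpand:
            # Grow real capacity so that 'contingency' unused capacity
            # remains, never exceeding the virtual capacity.
            self.real = min(self.virtual,
                            max(self.real, self.used + self.contingency))
        if self.used > self.real:
            self.online = False         # offline when real capacity runs out
            raise RuntimeError("out of real capacity")

# Autoexpand keeps a cushion of unused real capacity:
v = ThinVolume(virtual=100, real=10)
v.write(50)
assert v.real == 60 and v.online       # 50 used + 10 contingency
```

Without autoexpand the contingency is effectively zero once consumed, so the volume goes offline as soon as writes exceed the assigned real capacity, matching the behavior described in the text.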
A pure SCSI architecture is based on the client/server model. A client (for example, a server or workstation) initiates read or write requests for data from a target server (for example, a data storage system). Commands, which are sent by the client and processed by the server, are put into the Command Descriptor Block (CDB). The server executes a command, and completion is indicated by a special signal alert. The major functions of iSCSI include encapsulation and the reliable delivery of CDB transactions between initiators and targets through the TCP/IP network, especially over a potentially unreliable IP network.

The concepts of names and addresses have been carefully separated in iSCSI:
- An iSCSI name is a location-independent, permanent identifier for an iSCSI node. An iSCSI node has one iSCSI name, which stays constant for the life of the node. The terms initiator name and target name also refer to an iSCSI name.
- An iSCSI address specifies not only the iSCSI name of an iSCSI node, but also a location of that node. The address consists of a host name or IP address, a TCP port number (for the target), and the iSCSI name of the node. An iSCSI node can have any number of addresses, which can change at any time, particularly if they are assigned by way of Dynamic Host Configuration Protocol (DHCP). An SVC node represents an iSCSI node and provides statically allocated IP addresses.

Each iSCSI node, that is, an initiator or target, has a unique iSCSI Qualified Name (IQN), which can have a size of up to 255 bytes. The IQN is formed according to the rules adopted for Internet nodes. The iSCSI qualified name format is defined in RFC 3720 and contains these elements, in order:
- The string iqn.
- A date code specifying the year and month in which the organization registered the domain or subdomain name used as the naming authority string.
- The organizational naming authority string, which consists of a valid, reversed domain or subdomain name.
Optionally, a colon (:) followed by a string of the assigning organization's choosing, which must make each assigned iSCSI name unique.

For the SVC, the IQN for its iSCSI target is specified as:
iqn.1986-03.com.ibm:2145.<clustername>.<nodename>
On a Windows server, the IQN, that is, the name for the iSCSI initiator, can be defined as:
iqn.1991-05.com.microsoft:<computer name>

The IQNs can be abbreviated by using a descriptive name, known as an alias. An alias can be assigned to an initiator or a target. The alias is independent of the name and does not have to be unique; because it is not unique, the alias must be used in a purely informational way. It cannot be used to specify a target at login or used during authentication. Both targets and initiators can have aliases. An iSCSI name provides the correct identification of an iSCSI device irrespective of its physical location. Remember, the IQN is an identifier, not an address.
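The SVC target IQN form shown above can be assembled mechanically. The following sketch builds it from a cluster name and node name; the example names are hypothetical, and lowercasing is an assumption here (IQNs are conventionally lowercase):

```python
def svc_target_iqn(cluster: str, node: str) -> str:
    """Build an SVC iSCSI target IQN in the documented form:
    iqn.1986-03.com.ibm:2145.<clustername>.<nodename>"""
    return f"iqn.1986-03.com.ibm:2145.{cluster.lower()}.{node.lower()}"

iqn = svc_target_iqn("ITSOCL1", "Node1")   # hypothetical cluster/node names
assert iqn == "iqn.1986-03.com.ibm:2145.itsocl1.node1"
assert len(iqn) <= 255                     # IQNs are limited to 255 bytes
```

Because the cluster and node names are embedded in the IQN, renaming either one changes the target name, which is exactly the hazard the warning that follows describes.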
Be careful: Before changing the cluster or node names for an SVC cluster that has servers connected to it by way of iSCSI, be aware that because the cluster and node names are part of the SVC's IQN, you can lose access to your data by changing these names. The SVC GUI displays a specific warning, but the CLI does not.

The iSCSI session, which consists of a login phase and a full-feature phase, is completed with a special command. The login phase of iSCSI is identical to the FC port login process (PLOGI). It is used to adjust various parameters between two network entities and to confirm the access rights of an initiator. If the iSCSI login phase is completed successfully, the target confirms the login for the initiator; otherwise, the login is not confirmed and the TCP connection breaks.

As soon as the login is confirmed, the iSCSI session enters the full-feature phase. If more than one TCP connection was established, iSCSI requires that each command/response pair go through one TCP connection. Thus, each separate read or write command is carried out without the necessity to trace each request across separate flows. However, separate transactions can be delivered through separate TCP connections within one session.

Figure 2-9 illustrates an overview of the various block-level storage protocols and shows where the iSCSI layer is positioned.
Only one node, the configuration node, presents a cluster management IP address at any one time. There can be two cluster management IP addresses, one for each of the two Ethernet ports, and configuration node failover is also supported.

Port IP address: This address is used to perform iSCSI I/O to the cluster. Each node can have a port IP address for each of its ports.

Figure 2-10 shows an overview of the IP addresses on an SVC node port and illustrates how these IP addresses are moved between the nodes of an I/O Group. The management IP addresses and the iSCSI target IP addresses fail over to the partner node N2 if node N1 fails (and vice versa). The iSCSI target IP addresses fail back to their corresponding ports on node N1 when node N1 is running again.
It is a best practice to keep all of the eth0 ports on all of the nodes in the cluster on the same subnet. The same applies to the eth1 ports, although they can be on a separate subnet from the eth0 ports. In an SVC cluster running V6.1 code, there is a maximum of 256 iSCSI sessions per SAN Volume Controller iSCSI target. You can find detailed examples of the SVC port configuration in Chapter 9, SAN Volume Controller operations using the command-line interface on page 439, and in Chapter 10, SAN Volume Controller operations using the GUI on page 579.
(up to 95 MBps user data) on each of the two 1 Gbps LAN ports. The use of jumbo frames (maximum transmission unit (MTU) sizes greater than 1,500 bytes) is a best practice.

Hosts can discover volumes through one of the following mechanisms:

- Internet Storage Name Service (iSNS): SVC can register itself with an iSNS name server; you set the IP address of this server by using the svctask chcluster command. A host can then query the iSNS server for available iSCSI targets.
- Service Location Protocol (SLP): The SVC node runs an SLP daemon, which responds to host requests. This daemon reports the available services on the node, such as the CIMOM service that runs on the configuration node; the iSCSI I/O service can now also be reported.
- iSCSI Send Target request: The host can also send a Send Target request using the iSCSI protocol to the iSCSI TCP/IP port (port 3260).
period of time and its volumes continue to be available for I/O. iSCSI allows failover without host multipathing. To achieve this, the partner node in the I/O Group takes over the port IP addresses and iSCSI names of a failed node.

Be aware: With the iSCSI implementation in SVC, an IP address failover/failback between partner nodes of an I/O Group takes place only when a node restarts or goes offline, whether planned or unplanned. When the partner node returns to online status, there is a delay of 5 minutes before the IP addresses and iSCSI names fail back.

A host multipathing driver for iSCSI is required if you want these capabilities:

- Protecting a server from network link failures
- Protecting a server from network failures, if the server is connected through two separate networks
- Providing load balancing on the server's network links
Synchronous remote copy ensures that updates are committed at both the primary and the secondary before the application considers the updates complete; therefore, the secondary is fully up-to-date if it is needed in a failover. However, the application is fully exposed to the latency and bandwidth limitations of the communication link to the secondary. In a truly remote situation, this extra latency can have a significant adverse effect on application performance.
Special configuration guidelines exist for SAN fabrics that are used for data replication. It is necessary to consider the distance and available bandwidth of the intersite links. The SVC Support Portal contains details regarding these guidelines:

http://www-947.ibm.com/support/entry/portal/Overview/Hardware/System_Storage/Storage_software/Storage_virtualization/SAN_Volume_Controller_%282145%29

Refer to 8.5, Metro Mirror on page 388 for more details about SVC's synchronous mirroring.
Chapter 2. IBM System Storage SAN Volume Controller
In asynchronous remote copy, the application receives acknowledgement that a write is complete before the write has been committed at the secondary. Thus, on a failover, certain updates (data) might be missing at the secondary. The application must have an external mechanism for recovering the missing updates, if possible. This mechanism can involve user intervention. Recovery on the secondary site involves bringing up the application on this recent backup and then rolling forward or backward to the most recent commit point.

The asynchronous remote copy must present at the secondary a view to the application that might not contain the latest updates, but that is always consistent. If consistency must be guaranteed at the secondary, applying updates in an arbitrary order is not an option. At the primary side, the application enforces an ordering implicitly by not scheduling an I/O until a previous dependent I/O has completed. By applying I/Os at the secondary in the order in which they were completed at the primary, the secondary always reflects a state that would have been seen at the primary if I/O had been frozen there at some instant.

The SVC Global Mirror protocol operates to identify small groups of I/Os that are known to be active concurrently in the primary cluster. The process to identify these groups of I/Os does not significantly contribute to the latency of these I/Os when they execute at the primary. These groups are applied at the secondary in the order in which they were executed at the primary.

The secondary data copy is not accessible for application I/O. However, the SVC allows read-only access to the secondary storage when it contains a consistent image. This capability is intended only to allow boot-time operating system discovery to complete without error, so that any hosts at the secondary site can be ready to start the applications with minimum delay, if required.
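The ordering property described above can be illustrated with a short sketch. This is a simplified model (the tuple layout is an assumption, not the Global Mirror wire format): replaying writes at the secondary in the order they completed at the primary yields a state the primary could have shown at some instant.

```python
def apply_at_secondary(writes):
    """Replay primary writes on an empty secondary image.

    writes: iterable of (completion_seq, lba, data) tuples recorded at
    the primary; completion_seq is the order in which they completed.
    Returns the resulting secondary image as a dict of lba -> data.
    """
    disk = {}
    for _seq, lba, data in sorted(writes):   # primary completion order
        disk[lba] = data
    return disk
```

Even if the tuples arrive out of order, sorting by completion sequence before applying them ensures that a later write to the same block always supersedes an earlier one, which is exactly the consistency guarantee the text describes.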
For example, many operating systems need to read logical block address (LBA) 0 to configure a logical unit. The underlying storage at the primary or secondary of a remote copy will normally be RAID storage, but it can be any storage that can be managed by the SVC. Refer to 8.7, Global Mirror on page 413 for more details about SVC's asynchronous mirroring.

Most clients aim to automate failover or recovery of the remote copy through failover management software. SVC provides Simple Network Management Protocol (SNMP) traps and interfaces to enable this automation. IBM Support for automation is provided by IBM Tivoli Storage Productivity Center for Replication. The Tivoli documentation can also be accessed online at the IBM Tivoli Storage Productivity Center information center:

http://publib.boulder.ibm.com/infocenter/tivihelp/v4r1/index.jsp
2.7.2 FlashCopy
FlashCopy makes a copy of a source volume on a target volume. The original content of the target volume is lost. After the copy operation has started, the target volume has the contents of the source volume as it existed at a single point in time. Although the copy operation takes time, the resulting data at the target appears as though the copy was made instantaneously. FlashCopy is sometimes described as an instance of a Time-Zero copy (T0) or a Point in Time (PiT) copy technology. Although the FlashCopy operation takes a finite time, this time is several orders of magnitude less than the time that is required to copy the data using conventional techniques.
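A common way to realize the T0 semantics just described is copy-on-write at grain level. The sketch below is a simplified, hypothetical model (class and method names are assumptions, and the source text does not state that SVC uses exactly this mechanism): after the mapping starts, the target reads as the source looked at that instant, even while the source keeps changing.

```python
class FlashCopyMapping:
    """Copy-on-write sketch: target preserves source contents as of T0."""

    def __init__(self, source: dict):
        self.source = source   # grain -> data, modified in place by host writes
        self.copied = {}       # grains preserved on the target so far

    def write_source(self, grain, data):
        """Host write to the source: copy the old grain to the target first."""
        if grain not in self.copied:
            self.copied[grain] = self.source.get(grain)
        self.source[grain] = data

    def read_target(self, grain):
        """Grains never overwritten still redirect to the unchanged source."""
        return self.copied[grain] if grain in self.copied else self.source.get(grain)
```

This shows why the copy appears instantaneous: no data moves at T0, and grains are copied only when the source is about to overwrite them.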
FlashCopy can be performed on multiple source and target volumes. FlashCopy permits the management operations to be coordinated so that a common single point in time is chosen for copying target volumes from their respective source volumes. This capability allows a consistent copy of data that spans multiple volumes.

SVC also permits multiple target volumes to be FlashCopied from the same source volume. This capability can be used to create images from separate points in time for the source volume, and to create multiple images from a source volume at a common point in time. Source and target volumes can be thin-provisioned volumes.

Reverse FlashCopy enables target volumes to become restore points for the source volume without breaking the FlashCopy relationship and without having to wait for the original copy operation to complete. SVC supports multiple targets and thus multiple rollback points.

Most clients aim to integrate the FlashCopy feature for point-in-time copies and quick recovery of their applications and databases. IBM Support is provided by Tivoli Storage FlashCopy Manager:

http://www-01.ibm.com/software/tivoli/products/storage-flashcopy-mgr/

You can read a detailed description of FlashCopy copy services in Chapter 8, Advanced Copy Services on page 363.
code upgrade), adding new nodes, or removing old nodes from a cluster; node failures therefore cannot impact the SVC's availability.

It is key for all active nodes of a cluster to know that they are members of the cluster. Especially in situations such as the split-brain scenario, where single nodes lose contact with other nodes, it is key to have a solid mechanism to decide which nodes form the active cluster. A worst-case scenario is a cluster that splits into two separate clusters.

Within an SVC cluster, the voting set and a quorum disk are responsible for the integrity of the cluster. If nodes are added to a cluster, they are added to the voting set; if nodes are removed, they are also quickly removed from the voting set. Over time, the voting set, and thus the nodes in the cluster, can completely change, so that the cluster can migrate onto a completely separate set of nodes from the set on which it started.

The SVC cluster implements a dynamic quorum. Following a loss of nodes, if the cluster can continue operation, it adjusts the quorum requirement so that further node failures can be tolerated.

The node with the lowest Node Unique ID in a cluster becomes the boss node for the group of nodes, and it determines (from the quorum rules) whether the nodes can operate as the cluster. This node also presents the maximum of two cluster IP addresses on one or both of its Ethernet ports to allow access for cluster management.
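A simplified, hypothetical model of a quorum rule like the one sketched above follows. The exact tie-breaking rules are an assumption for illustration, not SVC's documented algorithm: here a partition continues as the cluster if it holds a strict majority of the voting set, or exactly half of it plus the tie-breaking quorum disk.

```python
def partition_survives(voting_set_size: int,
                       reachable_nodes: int,
                       holds_quorum_disk: bool) -> bool:
    """Decide whether a partition of nodes may continue as the cluster."""
    if 2 * reachable_nodes > voting_set_size:
        return True                     # strict majority wins outright
    if 2 * reachable_nodes == voting_set_size:
        return holds_quorum_disk        # even split: quorum disk breaks the tie
    return False                        # minority partition must stop
```

Under this model, at most one side of a split-brain scenario can ever satisfy the condition, which is the property the voting set and quorum disk exist to guarantee.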
If possible, the SVC places the quorum candidates on separate disk subsystems. After the quorum disk has been selected, however, no attempt is made to ensure that the other quorum candidates are presented through separate disk subsystems.

Important: Verifying quorum disk placement, and adjusting it to separate storage systems where possible, reduces the dependency on a single storage system and can increase the quorum disk availability significantly.

Quorum disk candidates and the active quorum disk in a cluster can be listed by using the svcinfo lsquorum command. When the set of quorum disk candidates has been chosen, it is fixed. However, a new quorum disk candidate can be chosen in one of these conditions:

- When the administrator requests that a specific MDisk becomes a quorum disk by using the svctask setquorum command
- When an MDisk that is a quorum disk is deleted from a storage pool
- When an MDisk that is a quorum disk changes to image mode

An offline MDisk is not replaced as a quorum disk candidate.

For disaster recovery purposes, a cluster needs to be regarded as a single entity, so the cluster and the quorum disk need to be colocated. There are special considerations concerning the placement of the active quorum disk for stretched cluster, split cluster, and split I/O Group configurations. Details are available at this website:

http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003311

Important: Running an SVC cluster without a quorum disk can seriously affect your operation. A lack of available quorum disks for storing metadata prevents any migration operation (including a forced MDisk delete). Mirrored volumes can be taken offline if no quorum disk is available, because the synchronization status for mirrored volumes is recorded on the quorum disk.

During the normal operation of the cluster, the nodes communicate with each other.
If a node is idle for a few seconds, a heartbeat signal is sent to ensure connectivity with the cluster. If a node fails for any reason, the workload that is intended for it is taken over by another node until the failed node has been restarted and readmitted to the cluster (which happens automatically). If the microcode on a node becomes corrupted, resulting in a failure, the workload is transferred to another node. The code on the failed node is repaired, and the node is readmitted to the cluster (again, all automatically).
Generally, when the nodes in a cluster have been split across sites, the SVC cluster must be configured as listed here:

- Site 1 contains half of the SAN Volume Controller cluster nodes plus one quorum disk candidate.
- Site 2 contains half of the SAN Volume Controller cluster nodes plus one quorum disk candidate.
- Site 3 contains the active quorum disk.

This configuration ensures that a quorum disk is always available, even after a single-site failure.

All internode communication between SVC node ports in the same cluster must not cross ISLs; the same applies to traffic between the SVC and the back-end disk controllers. This means that the FC path between sites cannot use an inter-switch link (ISL) path. The remote node must have a direct path to the switch to which its partner and the other cluster nodes are connected. To reach the 10 km maximum distance, longwave SFPs must be used in the node.

Other SVC configuration rules also continue to apply; for example, the Ethernet port eth0 on every SVC node, at the local or remote site, must still be connected to the same subnet or subnets.

For more details about split-cluster configuration, see 3.3.6, Split-cluster configuration on page 77.
2.8.3 Cache
The primary benefit of storage cache is to improve I/O response time. Reads and writes to a magnetic disk drive suffer from both seek and latency time at the drive level, which can result in 1 to 10 ms of response time (for an enterprise-class disk).

The new 2145-CF8 nodes combined with SVC 6.1 provide 24 GB of memory per node: 48 GB per I/O Group, or 192 GB per SVC cluster. The SVC provides a flexible cache model, and the node's memory can be used as read or write cache. The size of the write cache is limited to a maximum of 12 GB of the node's memory. Depending on the current I/O conditions on a node, the entire 24 GB of memory can be fully used as read cache.

Cache is allocated in 4 KB segments. A segment holds part of one track. A track is the unit of locking and destage granularity in the cache. The cache virtual track size is 32 KB (eight segments). A track might be only partially populated with valid pages. The SVC coalesces writes up to the 32 KB track size if the writes reside in the same track prior to destage; for example, if 4 KB is written into a track and another 4 KB is written to another location in the same track, they are destaged together. Therefore, the blocks written from the SVC to the disk subsystem can be any size between 512 bytes and 32 KB.

When data is written by the host, the preferred node within the I/O Group saves the data in its cache. Before the cache returns completion to the host, the write must be mirrored to, that is, copied into the cache of, its partner node for availability reasons. After having a copy of the written data, the cache returns completion to the host.

A volume that has not received a write update during the last two minutes automatically has all modified data destaged to disk. If one node of an I/O Group is missing, due to a restart or a hardware failure, the remaining node empties all of its write cache and proceeds in an operation mode that is referred to as write-through mode.
A node operating in write-through mode writes data directly to the disk subsystem before sending an I/O complete status message back to the host. Running in this mode can degrade the performance of the specific I/O Group.
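The segment and track arithmetic described above can be sketched as follows. This is an illustrative model only (byte offsets and the grouping structure are assumptions): writes landing in the same 32 KB track coalesce into one destage unit, each track holding up to eight 4 KB segments.

```python
SEGMENT = 4 * 1024           # 4 KB cache allocation unit
TRACK = 8 * SEGMENT          # 32 KB virtual track (eight segments)

def group_by_track(write_offsets):
    """Map track number -> set of populated segment indices in that track.

    write_offsets: byte offsets of 4 KB-aligned writes. Writes that fall
    into the same track are coalesced into a single destage unit.
    """
    tracks = {}
    for off in write_offsets:
        track, offset_in_track = divmod(off, TRACK)
        tracks.setdefault(track, set()).add(offset_in_track // SEGMENT)
    return tracks
```

For instance, two 4 KB writes at offsets 0 and 12 KB populate segments 0 and 3 of track 0 and would be destaged together, whereas a write at offset 32 KB starts a new track.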
Write cache is partitioned by storage pool. This feature restricts the maximum amount of write cache that a single storage pool can allocate in a cluster. Table 2-3 shows the upper limit of write cache data that a single storage pool in a cluster can occupy.
Table 2-3 Upper limit of write cache per storage pool

Number of storage pools        Upper limit of write cache
One storage pool               100%
Two storage pools              66%
Three storage pools            40%
Four storage pools             33%
More than four storage pools   25%
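Table 2-3 can be expressed as a small lookup function. This is a sketch of the documented percentages only (the function name is an assumption, not an SVC API):

```python
def write_cache_limit_pct(num_storage_pools: int) -> int:
    """Upper limit, as a percentage, of the write cache that a single
    storage pool can occupy, per Table 2-3."""
    limits = {1: 100, 2: 66, 3: 40, 4: 33}
    return limits.get(num_storage_pools, 25)   # five or more pools: 25%
```

Note that the limits are deliberately not 100/n: with two pools, for example, each pool may use up to 66% of the write cache, so one busy pool cannot monopolize the cache, yet the cache is not left underused when pools are idle.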
For in-depth information about SVC cache partitioning, see IBM SAN Volume Controller 4.2.1 Cache Partitioning, REDP-4426, which is available at this website:

http://www.redbooks.ibm.com/abstracts/redp4426.html?Open

An SVC node treats part of its physical memory as non-volatile, meaning that its contents are preserved across power losses and resets. Bitmaps for FlashCopy and Remote Mirroring relationships, the virtualization table, and the write cache are items held in the non-volatile memory.

In the event of a disruption or external power loss, the physical memory is copied to a file in the file system on the node's internal disk drive, so that the contents can be recovered when external power is restored. The uninterruptible power supply units, which are delivered with each node's hardware, ensure that there is sufficient internal power to keep a node operational to perform this dump when external power is removed. After dumping the content of the non-volatile part of the memory to disk, the SVC node shuts down.
Management console
The management console for SVC is referred to as the IBM System Storage Productivity Center (SSPC). SSPC is a hardware and software solution that includes a suite of storage infrastructure management software that can centralize, automate, and simplify the management of complex and heterogeneous storage environments.
IBM System Storage Productivity Center contains the functions listed here:

- Tivoli Integrated Portal: IBM Tivoli Integrated Portal is a standards-based architecture for web administration. The installation of Tivoli Integrated Portal is required to enable single sign-on (SSO) for Tivoli Storage Productivity Center. Tivoli Storage Productivity Center now installs Tivoli Integrated Portal along with Tivoli Storage Productivity Center.

- Tivoli Storage Productivity Center: IBM Tivoli Storage Productivity Center Basic Edition is preinstalled on the IBM System Storage Productivity Center server. There are several other commercially available Tivoli Storage Productivity Center products that provide additional functionality beyond Basic Edition. You can activate these packages by adding the specific licenses to the preinstalled Basic Edition:
  - Tivoli Storage Productivity Center for Disk allows you to monitor storage systems for performance.
  - Tivoli Storage Productivity Center for Data allows you to collect and monitor file systems and databases.
  - Tivoli Storage Productivity Center Standard Edition is a bundle that includes all of the other packages, along with SAN planning tools that make use of information that is collected from the Tivoli Storage Productivity Center components.

- Tivoli Storage Productivity Center for Replication: The functions of Tivoli Storage Productivity Center for Replication provide the management of the IBM FlashCopy, Metro Mirror, and Global Mirror capabilities for the DS8000, IBM SAN Volume Controller, and others. This package can also be activated by installing the specific licenses.
- Web browser to access the GUI
- SSH client (PuTTY)
- DS CIM agents
- Windows Server 2008 Enterprise Edition
- Several base software packages that are required for Tivoli Productivity Center

Optional software packages, such as anti-virus software or DS3000/4000/5000 Storage Manager, can be installed on the IBM System Storage Productivity Center server by the client. Using Tivoli Storage Productivity Center or IBM Systems Director provides greater integration points and launch in-context capabilities.

Figure 2-11 on page 41 provides an overview of the SVC management components. We describe the details in Chapter 4, SAN Volume Controller initial configuration on page 95. You can obtain further details about the IBM System Storage Productivity Center in IBM System Storage Productivity Center User's Guide Version 1 Release 4, SC27-2336, and in IBM System Storage Productivity Center Introduction and Planning Guide, SC23-8824.
SVC superuser
There is a special local user called the superuser that always exists on every cluster; it cannot be deleted. Its password is set by the user during cluster initialization. The superuser password can be reset from the node's front panel. This function can be disabled, although doing so makes the cluster inaccessible if all of the users forget their passwords or lose their SSH keys.

To register an SSH key for the superuser to provide command-line access, use Service Assistant Configure CLI Access to assign a temporary key. However, this key is lost during a node restart, so the more permanent way is to add the key through the normal GUI, that is, by using the User Management superuser Properties panels.

The superuser is always a member of user group 0, which has the most privileged role within the SVC.
The access rights for a user belonging to a specific user group are defined by the role that is assigned to the user group. The role defines what a user can or cannot do on an SVC cluster. Table 2-5 on page 43 shows the roles, ordered from the least privileged Monitor role at the top down to the most privileged SecurityAdmin role.
Table 2-5 Commands permitted for each role

Monitor
  All svcinfo (informational) commands, plus: svctask finderr, dumperrlog, dumpinternallog, chcurrentuser, ping, and svcconfig backup

Service
  All commands allowed for the Monitor role, plus: applysoftware, setlocale, addnode, rmnode, cherrstate, writesernum, detectmdisk, includemdisk, clearerrlog, cleardumps, settimezone, stopcluster, startstats, stopstats, and settime

CopyOperator
  All commands allowed for the Monitor role, plus: prestartfcconsistgrp, startfcconsistgrp, stopfcconsistgrp, chfcconsistgrp, prestartfcmap, startfcmap, stopfcmap, chfcmap, startrcconsistgrp, stoprcconsistgrp, switchrcconsistgrp, chrcconsistgrp, startrcrelationship, stoprcrelationship, switchrcrelationship, chrcrelationship, and chpartnership

Administrator
  All commands, except: chauthservice, mkuser, rmuser, chuser, mkusergrp, rmusergrp, chusergrp, and setpwdreset

SecurityAdmin
  All commands
The authentication service supported by SVC is the Tivoli Embedded Security Services server component, level 6.2. The Tivoli Embedded Security Services server provides the following key features:

- Tivoli Embedded Security Services isolates the SVC from the actual directory protocol in use, which means that the SVC communicates only with Tivoli Embedded Security Services to get its authentication information. The type of protocol that is used to access the central directory, and the kind of directory system that is used, are transparent to SVC.

- Tivoli Embedded Security Services provides a secure token facility that is used to enable single sign-on (SSO). SSO means that users do not have to log in multiple times when using what appears to them to be a single system. It is used within Tivoli Productivity Center. When SVC access is launched from within Tivoli Productivity Center, the user does not have to log on to the SVC, because the user has already logged in to Tivoli Productivity Center.
2. Configure user groups on the cluster matching those user groups that are used by the authentication service. For each group of interest that is known to the authentication service, there must be an SVC user group with the same name and the remote setting enabled. For example, you can have a group called sysadmins, whose members require the SVC Administrator role. Configure this group using the following command:

   svctask mkusergrp -name sysadmins -remote -role Administrator

   If none of a user's groups match any of the SVC user groups, the user is not permitted to access the cluster.

3. Configure users that do not require SSH access. Any SVC users that are to be used with the remote authentication service and do not require SSH access need to be deleted from the system. The superuser cannot be deleted; it is a local user and cannot use the remote authentication service.

4. Configure users that do require SSH access. Any SVC users that are to be used with the remote authentication service and do require SSH access must have their remote setting enabled and the same password set on the cluster and the authentication service. The remote setting instructs SVC to consult the authentication service for group information after the SSH key authentication step to determine the user's role. The need to configure the user's password on the cluster in addition to the authentication service is due to a limitation in the Tivoli Embedded Security Services server software.

5. Configure the system time. For correct operation, both the SVC cluster and the system running the Tivoli Embedded Security Services server must have the exact same view of the current time; the easiest way is to have them both use the same Network Time Protocol (NTP) server. Failure to follow this step can lead to poor interactive performance of the SVC user interface or incorrect user-role assignments.
Also, Tivoli Productivity Center leverages the Tivoli Integrated Portal infrastructure and its underlying WebSphere Application Server capabilities to make use of an LDAP registry and enable single sign-on (SSO). You can obtain more information about implementing SSO within Tivoli Productivity Center 4.1 in the chapter about LDAP authentication support and single sign-on in IBM Tivoli Storage Productivity Center V4.1 Release Guide, SG24-7725, which is available at this website: http://www.redbooks.ibm.com/redpieces/abstracts/sg247725.html?Open
Note: Since the writing of this book, IBM has announced that the IBM System Storage SAN Volume Controller Storage Engine offers 10 Gigabit Ethernet connectivity. For more information about this topic, see:
http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=an&subtype=ca&appname=gpateam&supplier=897&letternum=ENUS111-083
The new SVC Storage Engine adds 10 Gigabit Ethernet connectivity to help improve throughput. This solution includes a Common Information Model (CIM) agent to enable unified storage management based on open standards for units that comply with CIM agent standards.

The new SVC 2145-CF8 Storage Engine has the following key hardware features:

- Intel Core i7 Xeon 5500 2.4 GHz quad-core processor (Nehalem)
- 24 GB memory, with future growth possibilities
- Four 8 Gbps FC ports
- Up to four solid-state drives, enabling scale-out high performance solid-state drive support with SVC V5.1
- Two redundant power supplies
- Double the bandwidth of its predecessor node (2145-8G4)
- Up to double the IOPS of its predecessor node (2145-8G4)
- A 19-inch rack-mounted enclosure
- IBM Systems Director Active Energy Manager-enabled

The 2145-CF8 nodes can be easily integrated within existing SVC clusters. The nodes can be intermixed in pairs within existing SVC clusters. Mixing node types in a cluster results in volume performance characteristics dependent on the node type in the volume's I/O Group. The standard nondisruptive cluster upgrade process can be used to replace older engines with new 2145-CF8 engines; see IBM SAN Volume Controller Software Installation and Configuration Guide, GC27-2286, for more information about this topic.

Integration into existing clusters requires that the cluster runs at least SVC V5.1 level code; the 2145-CF8 only runs SVC V5.1 or above. The nodes are 1U high, fit into 19-inch racks, and use the same uninterruptible power supply unit models as previous models. Figure 2-14 shows the front-side view of the SVC 2145-CF8 node.
Remember that several SVC features, such as iSCSI, are software features and are therefore available on all node types running SVC V5.1 or above.
Table 2-7 shows the rules that apply with respect to the number of interswitch link (ISL) hops allowed in a SAN fabric between SVC nodes or the cluster.
Table 2-7 Number of supported ISL hops

Connection                                Supported ISL hops
Between nodes in an I/O Group             0 (connect to the same switch)
Between nodes in separate I/O Groups      0 (connect to the same switch)
Between nodes and the disk subsystem      1 (recommended: 0, connect to the same switch)
Between nodes and the host server         Maximum 3
The actual times shown are not that important; instead, note the dramatic difference between accessing data that is located in cache and data that is located on external disk. We have added a second scale to Figure 2-15, which gives you an idea of how long it takes to access the data in a scenario where a single CPU cycle takes 1 second. This scale illustrates the importance of future storage technologies closing, or at least reducing, the gap between access times for data stored in cache/memory and access times for data stored on an external medium.

Since magnetic disks were first introduced by IBM in 1956 (RAMAC), they have shown remarkable progress in capacity growth, form factor/size reduction, price decrease ($/GB), and reliability. However, the number of I/Os that a disk can handle and the response time that it takes to process a single I/O have not improved at the same rate, although they have certainly improved. In actual environments, we can expect from today's enterprise-class FC or serial-attached SCSI (SAS) disk up to 200 IOPS per disk, with an average response time (latency) of approximately 6 ms per I/O.

To summarize, today's rotating disks continue to advance in capacity (several TB), form factor/footprint (3.5 inches, 2.5 inches, and 1.8 inches), and price ($/GB), but they are not getting much faster. The limiting factor is the number of revolutions per minute (RPM) that a disk can perform (say 15,000), which defines the time that is required to access a specific data block on a rotating device. There will likely be small improvements in the future, but a big step, such as doubling the RPM (if it is even technically possible), inevitably brings an associated increase in power consumption and a price that will be an inhibitor.
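The RPM limit described above can be made concrete with simple arithmetic: on average, a read must wait for half a platter revolution, so spindle speed caps responsiveness regardless of other improvements. The function name here is illustrative.

```python
def avg_rotational_latency_ms(rpm: int) -> float:
    """Average rotational latency in milliseconds: half a revolution."""
    ms_per_revolution = 60_000 / rpm   # 60,000 ms per minute
    return ms_per_revolution / 2
```

A 15,000 RPM drive therefore spends 2 ms per I/O on rotation alone, before any seek time; add a typical seek and you arrive in the region of the roughly 6 ms latency and ~200 IOPS per disk quoted above, which is why doubling RPM would only halve one component of the delay.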
Enterprise-class solid-state drives typically deliver 50,000 read and 20,000 write IOPS, with latencies of typically 50 µs for reads and 800 µs for writes. Their form factors (2.5 inches/3.5 inches) and their interfaces (FC/SAS/Serial Advanced Technology Attachment (SATA)) make them easy to integrate into existing disk shelves.
For a more effective use of SSDs, place the SSD MDisks into a multitiered storage pool combined with HDD MDisks (generic_hdd tier). Then, with Easy Tier turned on, the SVC automatically detects and migrates high-workload extents onto the solid-state MDisks.
Note: Since the writing of this book, IBM has announced IBM System Storage SAN Volume Controller Version 6.2. For more information about this topic, see:
http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=AN&subtype=CA&htmlfid=897/ENUS211-175&appname=USN
IBM System Storage SAN Volume Controller (SVC) V6.2.0 is designed to provide improved tracking by introducing real-time performance monitoring. Immediate performance information, including CPU utilization and I/O rates, can be received to monitor environmental changes and troubleshoot; when this information is paired with historical detailed data from Tivoli Storage Productivity Center, clients are better positioned to develop the best performance solutions.

VMware virtual environments can be improved with SVC by using the vStorage API for Array Integration (VAAI). This API delegates certain VMware functions to SVC to enhance performance. In the vSphere 4.1 release, this offload capability to SVC supports full copy, block zeroing, and hardware-assisted locking.

The introduction of 10 Gigabit Ethernet (GbE) hardware for the SAN Volume Controller allows clients to continue to focus on cost efficiency with higher network performance by offering 10 Gigabit iSCSI host attachment. SVC is scalable to manage up to 32 PB of storage by allowing managed disks to be as large as 256 TB on select storage systems.

IBM System Storage Easy Tier is designed to automate data placement throughout the SVC managed disk group onto two tiers of storage, to intelligently align the system with current workload requirements. It is now available for use with solid-state devices installed on SVC 2145 models CF8 and CG8.

SVC interoperability now supports additional storage products, including HP StorageWorks P9500 Disk Array, Hitachi Data Systems Virtual Storage Platform, Texas Memory Systems RamSan-620, and EMC VNX models.
2.12.1 SVC 6.1 supported hardware list, device driver, and firmware levels
With the SVC 6.1 release, as in every release, IBM offers functional enhancements and new hardware that can be integrated into existing or new SVC clusters and also interoperability enhancements or new support for servers, SAN switches, and disk subsystems. See the most current information at this website: http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html
New cluster capacities
SVC increases the flexibility of the storage it manages by raising the supported managed disk (MDisk) size to 1 PB. A new, larger extent size of 8 GB is also introduced, which increases the maximum managed storage per SVC cluster to 32 PB.

Increased WWNN support
The number of storage WWNNs that can attach to SVC is now 256 per cluster. This is especially important when attaching storage controllers that are designed to claim one WWNN per Fibre Channel port (WWPN).

Long object names - maximum 63 characters
All objects in a cluster (hosts, volumes, MDisks) have user-defined or system-generated names. When creating an object, you can now define a more meaningful name because the maximum length has been increased to 63 characters. SVC V4.3 or V5.1 clusters show truncated volume names when partnered for copy services functions with a V6.1 cluster.

Tiered storage
Deploying tiered storage is an important strategy for controlling storage cost: various types of storage with different performance and cost characteristics are used to match separate business requirements. To meet these requirements, the SAN Volume Controller now supports multiple tiers of storage (MDisks) and multitiered storage pools.

IBM System Storage Easy Tier
Easy Tier automates data placement throughout the SVC multitiered storage pool to intelligently align the placement of a volume's extents with current workload requirements. Easy Tier can automatically and nondisruptively relocate data (at the extent level) from one tier to another, in either direction, to achieve the best available storage performance.

New GUI user interface
A newly designed user interface that delivers many functional enhancements and greater ease of use is provided.
Enhancements to the user interface include greater flexibility of views, an increased number of characters allowed for naming objects, display of the command lines being executed, and improved user customization. Clients using Tivoli Storage Productivity Center and IBM Systems Director also have greater integration points and launch-in-context capabilities. The new GUI and its associated web server now run in the SVC cluster rather than on the SSPC console; thus, the GUI can be accessed directly from any web browser.

IBM Storwize V7000 - array support
When SVC V6.1 is running on an IBM Storwize V7000, users of solid-state drives (SSDs) can now safeguard their data beyond volume mirroring because the SVC provides RAID (0, 1, 5, 6, and 10) control. Therefore, you can create arrays on internally attached SAS disk or SSD.

New CLI commands
New functions are enabled through the CLI; for example, the ability to view and update the license values of the cluster. There is also an entire set of new CLI commands to support the drives, arrays, and enclosures of the Storwize V7000 product.

Events
Events are errors, warnings, and informational messages generated by SVC. If you are familiar with specific SVC error codes from previous releases, note that several numbers have changed in the V6.1 release.
Service Assistant Tool
SVC V6.1 introduces a new method for performing service tasks on the system. In addition to performing service tasks from the front panel, you can also service a node through an Ethernet connection by using either a web browser (GUI) or the command-line interface. This new function is called the Service Assistant Tool. The service assistant GUI is available through a new service assistant IP address that you assign on each node.

iSCSI enhancements
The enhancements include multisession iSCSI for improved failover performance, and persistent reserve support for MSCS over iSCSI. VMware iSCSI support has also been enhanced.

Additional support for back-end controllers
Several additional storage controllers are now supported, including EMC VMAX. To see the full list of supported controllers, visit the interoperability website listed in 2.13, Useful SVC web links on page 55.
Chapter 3. Planning and configuration
7. Determine the SVC service IP address and the IBM System Storage Productivity Center (SVC console) IP address.
8. Determine the IP addresses for the SVC cluster and for the hosts that connect through iSCSI.
9. Define a naming convention for the SVC nodes, the hosts, and the storage subsystems.
10. Define the managed disks (MDisks) in the disk subsystem.
11. Define the Storage Pools. The Storage Pools depend on the disk subsystem in place and the data migration requirements.
12. Plan the logical configuration of the volumes within the I/O Groups and the Storage Pools in such a way as to optimize the I/O load between the hosts and the SVC.
13. Plan the physical location of the equipment in the rack.

SVC planning can be categorized into two types:
- Physical planning
- Logical planning

We describe these planning types in more detail in the following sections.
2145 UPS-1U
The 2145 Uninterruptible Power Supply-1U (2145 UPS-1U) is one EIA unit high and is shipped with, and can operate only with, the following node types:
- SAN Volume Controller 2145-CF8
- SAN Volume Controller 2145-8A4
- SAN Volume Controller 2145-8G4
- SAN Volume Controller 2145-8F4

When configuring the 2145 UPS-1U, the voltage that is supplied to it must be 200 - 240 V, single phase.

Tip: The 2145 UPS-1U has an integrated circuit breaker and does not require external protection.
There are guidelines to follow for Fibre Channel (FC) cable connections. Occasionally, the introduction of a new SVC hardware model brings internal changes; one example is the worldwide port name (WWPN) to physical port mapping. The 2145-8G4 and 2145-CF8 have the same mapping. Figure 3-2 on page 62 shows the WWPN mapping.
Figure 3-3 on page 63 shows a sample layout where nodes within each I/O Group have been split between separate racks. This protects against power failures and other events that only affect a single rack.
- Volume configuration
- Host mapping (LUN masking)
- Advanced copy functions
- SAN boot support
- Data migration from non-virtualized storage subsystems
- SVC configuration backup procedure
Each node in an SVC cluster needs at least one Ethernet connection. With SVC 6.1, cluster management is now performed through an embedded GUI running on the nodes. A separate console, such as the traditional SVC Hardware Management Console (HMC) or IBM System Storage Productivity Center (SSPC), is no longer required to access the management interface. To access the management GUI, you direct a web browser to the system management IP address.

The cluster must first be created specifying either an IPv4 or an IPv6 cluster address for port 1. After the cluster is created, additional IP addresses can be created on port 1 and port 2 until both ports have an IPv4 and an IPv6 address defined. This allows the cluster to be managed on separate networks, which provides redundancy in the event of a network failure. Figure 3-4 on page 65 shows the IP configuration possibilities.
Support for iSCSI provides one additional IPv4 and one additional IPv6 address for each Ethernet port on every node. These IP addresses are independent of the cluster configuration IP addresses. When accessing the SVC through the GUI or Secure Shell (SSH), choose one of the available IP addresses to connect to. There is no automatic failover capability so if one network is down, use an IP address on the alternate network. Clients may be able to use intelligence in domain name servers (DNS) to provide partial failover.
The zoning capabilities of the SAN switch are used to create three distinct zones. SVC 6.1 supports 2 Gbps, 4 Gbps, or 8 Gbps FC fabric, depending on the hardware platform and on the switch where the SVC is connected. In an environment with a fabric containing switches of multiple speeds, the best practice is to connect the SVC and the disk subsystem to the switch operating at the highest speed.

All SVC nodes in the SVC cluster are connected to the same SANs, and they present volumes to the hosts. These volumes are created from Storage Pools that are composed of MDisks presented by the disk subsystems. The fabric must contain three distinct zones:
- SVC cluster zone: Create one zone per fabric with all of the SVC ports cabled to this fabric to allow SVC internode communication.
- Host zones: Create an SVC host zone for each server that accesses storage from the SVC cluster.
- Storage zone: Create one SVC storage zone for each storage subsystem that is virtualized by the SVC.
Configure your SAN so that FC traffic can be passed between the two clusters. To configure the SAN this way, you can connect the clusters to the same SAN, merge the SANs, or use routing technologies. Configure zoning to allow all of the nodes in the local fabric to communicate with all of the nodes in the remote fabric.
Optionally, modify the zoning so that the hosts that are visible to the local cluster can recognize the remote cluster. This capability allows a host to have access to data in both the local and remote clusters. Verify that cluster A cannot recognize any of the back-end storage that is owned by cluster B. A cluster cannot access logical units (LUs) that a host or another cluster can also access. Figure 3-5 shows the SVC zoning topology.
Figure 3-6 on page 68 shows an example of SVC, host, and storage subsystem connections.
Figure 3-6 Example of SVC, host, and storage subsystem connections

You must also observe the following additional guidelines:
- LUNs (MDisks) must have exclusive access to a single SVC cluster and cannot be shared between other SVC clusters or hosts.
- A storage controller can present LUNs both to the SVC (as MDisks) and to other hosts in the SAN. However, in this case it is better to avoid having the SVC and hosts share the same storage ports.
- Mixed port speeds are not permitted for intracluster communication. All node ports within a cluster must run at the same speed.
- ISLs are not to be used for intracluster node communication or node-to-storage controller access.
- The switch configuration in an SVC fabric must comply with the switch manufacturer's configuration rules, which can impose restrictions on the switch configuration. For example, a switch manufacturer might limit the number of supported switches in a SAN. Operation outside of the switch manufacturer's rules is not supported.
- Host bus adapters (HBAs) in dissimilar hosts, or dissimilar HBAs in the same host, need to be in separate zones. For example, if you have AIX and Microsoft hosts, they need to be in separate zones. In this case, "dissimilar" means that the hosts are running separate operating systems or are using separate hardware platforms; various levels of the same operating system are regarded as similar. Note that this requirement is a SAN interoperability issue rather than an SVC requirement.
- Host zones are to contain only one initiator (HBA) each, and as many SVC node ports as you need, depending on the high availability and performance that you want from your configuration.
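The single-initiator guideline above lends itself to a simple automated check. The following sketch is hypothetical (the zone names, WWPNs, and the `check_host_zones` helper are illustrative, not part of any SVC or switch tooling): given a set of known host initiator WWPNs and the member lists of your host zones, it flags any zone containing more or fewer than one initiator.

```python
def check_host_zones(zones, initiator_wwpns):
    """Return the names of host zones that violate the
    one-initiator-per-zone guideline."""
    violations = []
    for name, members in zones.items():
        initiators = [w for w in members if w in initiator_wwpns]
        if len(initiators) != 1:
            violations.append(name)
    return violations

# Illustrative WWPNs only
initiators = {"10:00:00:00:c9:aa:bb:01", "10:00:00:00:c9:aa:bb:02"}
zones = {
    "HOST_A_SVC": ["10:00:00:00:c9:aa:bb:01",     # one initiator: OK
                   "50:05:07:68:01:40:00:01",
                   "50:05:07:68:01:40:00:02"],
    "BAD_ZONE":   ["10:00:00:00:c9:aa:bb:01",     # two initiators: flagged
                   "10:00:00:00:c9:aa:bb:02",
                   "50:05:07:68:01:40:00:03"],
}
print(check_host_zones(zones, initiators))  # ['BAD_ZONE']
```

In practice, the zone member lists would come from your switch configuration export rather than being typed in by hand.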
Attention: Be aware of the following considerations:
- The use of ISLs for intracluster node communication can negatively impact the availability of the system due to the high dependency on the quality of these links to maintain heartbeat and other cluster management services. Therefore, it is strongly advised that they be used only as part of an interim configuration to facilitate SAN migrations, and not as part of the architected solution.
- The use of ISLs for SVC node to storage controller access can lead to port congestion, which can negatively impact the performance and resiliency of the SAN. Therefore, it is strongly advised that they be used only as part of an interim configuration to facilitate SAN migrations, and not as part of the architected solution.
- The use of mixed port speeds for intercluster communication can lead to port congestion, which can negatively impact the performance and resiliency of the SAN, and is therefore not supported.
You can use the svcinfo lsfabric command to generate a report that displays the connectivity between nodes and other controllers and hosts. This report is particularly helpful in diagnosing SAN problems.
Zoning examples
Figure 3-7 shows an SVC cluster zoning example.
You can set up the equivalent configuration with only IPv6 addresses.
Figure 3-11 shows the use of IPv4 management and iSCSI addresses in two separate subnets.
Figure 3-13 on page 73 shows the use of a redundant network and a third subnet for management.
Figure 3-14 shows the use of a redundant network for both iSCSI data and management.
Be aware of these considerations:
- All of the examples are valid using IPv4 and IPv6 addresses.
- It is valid to use IPv4 addresses on one port and IPv6 addresses on the other port.
- It is valid to have separate subnet configurations for IPv4 and IPv6 addresses.
In general, configure disk subsystems as though there is no SVC. However, we suggest the following specific guidelines:

Disk drives
- Exercise caution with large disk drives so that you do not have too few spindles to handle the load.
- RAID 5 is suggested for the vast majority of workloads.

Array sizes
- An array size of 8+P or 4+P is suggested for the DS4000 and DS5000 families, if possible.
- Use a DS4000 segment size of 128 KB or larger to help the sequential performance.
- Upgrade to EXP810 drawers, if possible.
- Create LUN sizes that are equal to the RAID array and rank size.
- When adding more disks to a subsystem, consider adding the new MDisks to existing Storage Pools rather than creating additional small Storage Pools. Scripts are available to restripe volume extents evenly across all MDisks in the Storage Pools if required. Go to the website http://www.ibm.com/alphaworks and search for svctools.

Maximum of 256 worldwide node names (WWNNs)
- EMC DMX/SYMM, all HDS, and SUN/HP HDS clones use one WWNN per port. Each WWNN appears as a separate controller to the SVC.
- IBM, EMC CLARiiON, and HP use one WWNN per subsystem. Each WWNN appears as a single controller with multiple ports/WWPNs, for a maximum of 16 ports/WWPNs per WWNN, using one out of the maximum of 256.

DS8000 using four or eight 4-port HA cards
- Use ports 1 and 3, or ports 2 and 4, on each card. This setup provides 8 or 16 ports for SVC use.
- Use a minimum of 8 ports for up to 40 ranks. Use 16 ports, which is the maximum, for 40 or more ranks.

DS4000/DS5000 and EMC CLARiiON/CX
- Both systems have the preferred controller architecture, and SVC supports this configuration.
- Use a minimum of 4 ports, and preferably 8 or more ports, up to a maximum of 16 ports, so that more ports equate to more concurrent I/O driven by the SVC.
- Mapping controller A ports to fabric A and controller B ports to fabric B, or cross-connecting ports to both fabrics from both controllers, is supported.
The cross-connecting approach is preferred to avoid AVT/Trespass occurring if a fabric or all paths to a fabric fail.

DS3400
- Use a minimum of 4 ports.

XIV requirements and restrictions
- The use of XIV extended functions, including snaps, thin provisioning, synchronous replication, and LUN expansion, on LUNs presented to the SVC is not supported.
- A maximum of 511 LUNs from one XIV system can be mapped to an SVC cluster.
Full 15-module XIV recommendations - 161 TB usable
- Use two interface host ports from each of the six interface modules.
- Use ports 1 and 3 from each interface module, and zone these 12 ports with all SVC node ports.
- Create 48 LUNs of equal size, each of which is a multiple of 17 GB (approximately 1632 GB each if using the entire full-frame XIV with the SVC).
- Map the LUNs to the SVC as 48 MDisks, and add all of them to the one XIV Storage Pool so that the SVC will drive I/O to four MDisks/LUNs for each of the 12 XIV FC ports. This design provides a good queue depth on the SVC to drive XIV adequately.

Six-module XIV recommendations - 55 TB usable
- Use two interface host ports from each of the two active interface modules.
- Use ports 1 and 3 from interface modules 4 and 5 (interface module 6 is inactive), and zone these four ports with all SVC node ports.
- Create 16 LUNs of equal size, each of which is a multiple of 17 GB (approximately 1632 GB each if using the entire XIV with the SVC).
- Map the LUNs to the SVC as 16 MDisks, and add all of them to the one XIV Storage Pool so that the SVC will drive I/O to four MDisks/LUNs for each of the four XIV FC ports. This design provides a good queue depth on the SVC to drive XIV adequately.

Nine-module XIV recommendations - 87 TB usable
- Use two interface host ports from each of the four active interface modules.
- Use ports 1 and 3 from interface modules 4, 5, 7, and 8 (interface modules 6 and 9 are inactive), and zone these eight ports with all of the SVC node ports.
- Create 26 LUNs of equal size, each of which is a multiple of 17 GB (approximately 1632 GB each if using the entire XIV with the SVC).
- Map the LUNs to the SVC as 26 MDisks, and add all of them to the one XIV Storage Pool so that the SVC will drive I/O to three MDisks/LUNs on each of six ports and four MDisks/LUNs on the other two XIV FC ports. This design provides a useful queue depth on the SVC to drive XIV adequately.
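The "multiple of 17 GB" sizing rule above follows from the XIV allocation granularity. As a hypothetical illustration (the helper name and the 1640 GB target are mine, not from the text), rounding a per-LUN target down to the nearest 17 GB multiple is a one-liner:

```python
def round_to_xiv_granularity(size_gb, granularity_gb=17):
    """Round a requested LUN size down to a multiple of the
    XIV 17 GB allocation unit."""
    return (size_gb // granularity_gb) * granularity_gb

# A per-LUN target of roughly 1.64 TB lands on 1632 GB (96 x 17 GB),
# which matches the approximate LUN size quoted above.
print(round_to_xiv_granularity(1640))  # 1632
```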
Configure XIV host connectivity for the SVC cluster as follows:
- Create one host definition on XIV, and include all SVC node WWPNs. You can create clustered host definitions (one per I/O Group), but the preceding method is easier.
- Map all LUNs to all SVC node WWPNs.
configuration. The remaining node operates in write-through mode, meaning that the data is written directly to the disk subsystem (the cache is disabled for writes).

Observe the following guidelines:
- The uninterruptible power supply unit must be in the same rack as the node to which it provides power, and each uninterruptible power supply unit can have only one node connected.
- The FC SAN connections between the SVC node and the switches are optical fiber. These connections can run at 2 Gbps, 4 Gbps, or 8 Gbps, depending on your SVC and switch hardware. The 2145-CF8, 2145-8A4, 2145-8G4, and 2145-8F4 SVC nodes auto-negotiate the connection speed with the switch.
- The SVC node ports must be connected to the FC fabric only. Direct connections between the SVC and the host, or the disk subsystem, are unsupported.
- Two SVC clusters cannot have access to the same LUNs within a disk subsystem. Configuring zoning such that two SVC clusters have access to the same LUNs (MDisks) can, and likely will, result in data corruption.
- The two nodes within an I/O Group can be co-located (within the same set of racks) or can be located in separate racks and separate rooms. See 3.3.6, Split-cluster configuration on page 77 for more information about this topic.
- The SVC uses three MDisks as quorum disks for the cluster. A best practice for redundancy is to locate each quorum disk in a separate storage subsystem where possible. The current locations of the quorum disks can be displayed using the svcinfo lsquorum command and relocated using the svctask chquorum command.
In this configuration, the connections between SAN Volume Controller nodes in the cluster are greater than 100 meters apart, and therefore must be longwave Fibre Channel connections.
In Figure 3-15, the storage system that hosts the third-site quorum disk is attached directly to a switch at both the primary and secondary sites using longwave Fibre Channel connections. If either the primary site or the secondary site fails, you must ensure that the remaining site has retained direct access to the storage system that hosts the quorum disks. Restriction: Do not connect a storage system in one site directly to a switch fabric in the other site. An alternative configuration can use an additional Fibre Channel switch at the third site with connections from that switch to the primary site and to the secondary site. A split-site configuration is supported only when the storage system that hosts the quorum disks supports extended quorum. Although SAN Volume Controller can use other types of storage systems for providing quorum disks, access to these quorum disks is always through a single path. For quorum disk configuration requirements, see the Guidance for Identifying and Changing Managed Disks Assigned as Quorum Disk Candidates technote at the following website: http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003311
There are several additional Storage Pool considerations:
- Maximum cluster capacity is related to the extent size: a 16 MB extent allows 64 TB, and the maximum doubles for each increment in extent size (for example, 32 MB allows 128 TB). We strongly advise a minimum extent size of 128 MB or 256 MB. The IBM Storage Performance Council (SPC) benchmarks used a 256 MB extent.
- Pick one extent size and use that size for all Storage Pools.
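The extent-size rule above can be sketched as a small calculator. This is an illustrative helper (the function name is mine), anchored at 64 TB for a 16 MB extent and scaling linearly from there:

```python
def max_cluster_capacity_tb(extent_mb):
    """Return the maximum managed capacity (TB) for a given extent
    size (MB): 64 TB at 16 MB, doubling with each extent doubling."""
    return 64 * (extent_mb // 16)

print(max_cluster_capacity_tb(32))    # 128 TB
print(max_cluster_capacity_tb(256))   # 1024 TB, that is, 1 PB
print(max_cluster_capacity_tb(8192))  # 32768 TB, the 32 PB maximum
```

Note how the 8 GB (8192 MB) extent introduced in V6.1 yields the 32 PB cluster maximum quoted earlier in the chapter.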
- You cannot migrate volumes between Storage Pools with different extent sizes. However, you can use volume mirroring to create copies between Storage Pools with different extent sizes.
- Storage Pool reliability, availability, and serviceability (RAS) considerations:
  - It might make sense to create multiple Storage Pools if you can ensure that a host only gets its volumes built from one of the Storage Pools. If that Storage Pool goes offline, it impacts only a subset of all the hosts using the SVC. However, this approach can lead to a high number of Storage Pools, approaching the SVC limits.
  - If you do not isolate hosts to Storage Pools, create one large Storage Pool. Creating one large Storage Pool assumes that the physical disks are all the same size, speed, and RAID level.
  - The Storage Pool goes offline if an MDisk is unavailable, even if the MDisk has no data on it. Do not put MDisks into a Storage Pool until they are needed.
  - Create at least one separate Storage Pool for all the image mode volumes.
  - Make sure that the LUNs that are given to the SVC have any host persistent reserves removed.
- Storage Pool performance considerations:
  - It might make sense to create multiple Storage Pools if you are attempting to isolate workloads to separate disk spindles.
  - Storage Pools with too few MDisks cause an MDisk overload, so it is better to have more spindles in a Storage Pool to meet workload requirements.
- The Storage Pool and SVC cache relationship: SVC employs cache partitioning to limit the potentially negative effect that a poorly performing storage controller can have on the cluster. The partition allocation size is defined based on the number of Storage Pools configured. This design protects against an individual controller overloading or failing, consuming write cache, and degrading the performance of the other Storage Pools in the cluster. More details are discussed in 2.8.3, Cache on page 38. Table 3-2 shows the limit of the write cache data.
Table 3-2 Limit of the cache data

  Number of Storage Pools   Upper limit
  1                         100%
  2                         66%
  3                         40%
  4                         30%
  5 or more                 25%
Consider the rule to be that no single partition can occupy more than its upper limit of cache capacity with write data. These limits are upper limits, and they are the points at which the SVC cache starts to limit incoming I/O rates for volumes created from the Storage Pool. If a particular partition reaches this upper limit, the net result is the same as a global cache resource that is full: host writes are serviced on a one-out-one-in basis as the cache destages writes to the back-end disks. However, only writes targeted at the full partition are limited; all I/O destined for other (non-limited) Storage Pools continues as normal. Read I/O requests for the limited partition also continue as normal. However, because the SVC is destaging write data at a rate that is obviously greater than the controller can sustain (otherwise the partition would not reach the upper limit), read response times are also likely to be impacted.
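The Table 3-2 relationship can be captured in a small lookup. This is only a sketch of the table (the function name is mine, not an SVC interface):

```python
def write_cache_upper_limit_pct(storage_pools):
    """Return the per-partition write-cache upper limit (%) for a
    given number of configured Storage Pools, per Table 3-2."""
    limits = {1: 100, 2: 66, 3: 40, 4: 30}
    return limits.get(storage_pools, 25)  # 5 or more pools: 25% each

print(write_cache_upper_limit_pct(2))  # 66
print(write_cache_upper_limit_pct(8))  # 25
```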
Thin-Provisioned volume considerations
When creating a Thin-Provisioned volume, you need to understand the utilization patterns of the applications or group users accessing this volume. You must take into consideration items such as the actual size of the data, the rate of creation of new data, and the rate at which existing data is modified or deleted. There are two operating modes for Thin-Provisioned volumes:
- Autoexpand volumes allocate storage from a Storage Pool on demand with minimal user intervention required. However, a misbehaving application can cause a volume to expand until it has consumed all of the storage in a Storage Pool.
- Non-autoexpand volumes have a fixed amount of storage assigned. In this case, the user must monitor the volume and assign additional capacity when required. A misbehaving application can only cause the volume that it is using to fill up.
Depending on the initial size for the real capacity, the grain size and a warning level can be set. If a volume goes offline, either through a lack of available physical storage for autoexpand or because a volume marked as non-autoexpand was not expanded in time, there is a danger of data being left in the cache until storage is made available. This situation is not a data integrity or data loss issue, but you must not rely on the SVC cache as a backup storage mechanism.

Important: Keep a warning level on the used capacity so that it provides adequate time to respond and provision more physical capacity. Warnings must not be ignored by an administrator. Use the autoexpand feature of Thin-Provisioned volumes.

The grain size, the allocation unit for the real capacity in the volume, can be set to 32 KB, 64 KB, 128 KB, or 256 KB. A smaller grain size uses space more effectively, but it results in a larger directory map, which can reduce performance.

Thin-Provisioned volumes require more I/Os because of directory accesses. For truly random workloads with 70% read and 30% write, a Thin-Provisioned volume requires approximately one directory I/O for every user I/O, so write performance can be up to 50% less than that of a normal volume. The directory is two-way write-back-cached (just like the SVC fastwrite cache), so certain applications perform better. Thin-Provisioned volumes also require more CPU processing, so the performance per I/O Group can be reduced.

A Thin-Provisioned volume feature called zero detect provides clients with the ability to reclaim unused allocated disk space (zeros) when converting a fully allocated volume to a Thin-Provisioned volume using volume mirroring.

Volume mirroring guidelines
- Create or identify two separate Storage Pools to allocate space for your mirrored volume.
- Allocate the Storage Pools containing the mirrors from separate storage controllers.
- If possible, use a Storage Pool with MDisks that share the same characteristics. Otherwise, the volume performance can be affected by the poorer-performing MDisk.
The port mask is an optional parameter of the svctask mkhost and chhost commands. The port mask is four binary bits. Valid mask values range from 0000 (no ports enabled) to 1111 (all ports enabled). For example, a mask of 0011 enables port 1 and port 2. The default value is 1111 (all ports enabled). The SVC supports connection to the Cisco MDS family and Brocade family. See the following website for the latest support information: http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html
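The mask-to-port mapping above can be made concrete with a short sketch. This helper is illustrative (the function name is mine, not an SVC command); as in the text, the rightmost bit of the mask corresponds to port 1:

```python
def ports_enabled(mask):
    """Return the list of SVC node ports enabled by a 4-bit mask string,
    with the rightmost bit mapping to port 1."""
    return [i + 1 for i, bit in enumerate(reversed(mask)) if bit == "1"]

print(ports_enabled("0011"))  # [1, 2] - the example mask from the text
print(ports_enabled("1111"))  # [1, 2, 3, 4] - the default (all ports)
print(ports_enabled("0000"))  # [] - no ports enabled
```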
FlashCopy guidelines
Consider these FlashCopy guidelines:
- Identify each application that must have a FlashCopy function implemented for its volume.
- FlashCopy is a relationship between volumes. Those volumes can belong to separate Storage Pools and separate storage subsystems.
- You can use FlashCopy for backup purposes, by interacting with the Tivoli Storage Manager Agent, or for cloning a particular environment.
- Define which FlashCopy type best fits your requirements: no copy, full copy, Thin-Provisioned, or incremental.
- Define which FlashCopy rate best fits your requirements in terms of performance and the time to complete the FlashCopy. The relationship of the background copy rate value to the attempted number of grains to be split per second is shown in Table 3-3.
- Define the grain size that you want to use. A grain is the unit of data represented by a single bit in the FlashCopy bitmap table. Larger grain sizes can cause a longer FlashCopy elapsed time and higher space usage in the FlashCopy target volume; smaller grain sizes can have the opposite effect. Remember that the data structure and the source data location can modify those effects. In an actual environment, check the results of your FlashCopy procedure in terms of the data copied at every run and the elapsed time, comparing them to the new SVC FlashCopy results, and if needed adapt the grain size and copy rate parameters to fit your environment's requirements.
Table 3-3 Grain splits per second

  User percentage   Data copied     256 KB grains   64 KB grains
                    per second      per second      per second
  1 - 10            128 KB          0.5             2
  11 - 20           256 KB          1               4
  21 - 30           512 KB          2               8
  31 - 40           1 MB            4               16
  41 - 50           2 MB            8               32
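Table 3-3 can be expressed as a small lookup for planning purposes. This is a sketch of the table only (the function name is mine, and the rates shown cover just the 1 - 50% bands listed above):

```python
def flashcopy_split_rate(copy_rate_pct, grain_kb=256):
    """Return (KB copied per second, grains split per second) for a
    background copy rate percentage, following Table 3-3."""
    bands = [(10, 128), (20, 256), (30, 512), (40, 1024), (50, 2048)]
    for upper_pct, kb_per_sec in bands:
        if copy_rate_pct <= upper_pct:
            return kb_per_sec, kb_per_sec / grain_kb
    raise ValueError("copy rate above the ranges shown in Table 3-3")

print(flashcopy_split_rate(25))               # (512, 2.0)
print(flashcopy_split_rate(25, grain_kb=64))  # (512, 8.0)
```

Note how the data rate depends only on the copy rate band, while the grain split rate also depends on the grain size you chose.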
Figure 3-16 contains two redundant fabrics. Part of each fabric exists at the local cluster and at the remote cluster. There is no direct connection between the two fabrics. Technologies for extending the distance between two SVC clusters can be broadly divided into two categories:
- FC extenders
- SAN multiprotocol routers

Due to the more complex interactions involved, IBM explicitly tests products of this class for interoperability with the SVC. The current list of supported SAN routers can be found in the supported hardware list on the SVC support website:
http://www.ibm.com/storage/support/2145
IBM has tested a number of FC extenders and SAN router technologies with the SVC. These technologies must be planned, installed, and tested so that the following requirements are met:
- The round-trip latency between sites must not exceed 80 ms (40 ms one way). For Global Mirror, this limit allows a distance between the primary and secondary sites of up to 8000 km (4970.96 miles), using a planning assumption of 100 km (62.13 miles) per 1 ms of round-trip link latency. The latency of long-distance links depends on the technology that is used to implement them. A point-to-point dark-fiber-based link typically provides a round-trip latency of 1 ms per 100 km (62.13 miles) or better. Other technologies provide longer round-trip latencies, which affect the maximum supported distance.
- The configuration must be tested with the expected peak workloads.
- When Metro Mirror or Global Mirror is used, a certain amount of bandwidth is required for SVC intercluster heartbeat traffic. The amount of traffic depends on how many nodes are in each of the two clusters. Figure 3-17 shows the amount of heartbeat traffic, in megabits per second, that is generated by various sizes of clusters.
These numbers represent the total traffic between the two clusters when no I/O is taking place to mirrored volumes. Half of the data is sent by one cluster, and half of the data is sent by the other cluster. The traffic is divided evenly over all available intercluster links; therefore, if you have two redundant links, half of this traffic is sent over each link during fault-free operation.
- The bandwidth between sites must, at a minimum, be sized to meet the peak workload requirements while maintaining the maximum latency specified previously. The peak workload requirement must be evaluated by considering the average write workload over a period of one minute or less, plus the required synchronization copy bandwidth. With no synchronization copies active and no write I/O for volumes in Metro Mirror or Global Mirror relationships, the SVC protocols operate with the bandwidth indicated in Figure 3-17. However, the true bandwidth required for the link can only be determined by considering the peak write bandwidth to volumes participating in Metro Mirror or Global Mirror relationships and adding to it the peak synchronization copy bandwidth.
- If the link between the sites is configured with redundancy so that it can tolerate single failures, the link must be sized so that the bandwidth and latency statements remain true even during single-failure conditions.
- The configuration must be tested to simulate the failure of the primary site (to test the recovery capabilities and procedures), including eventual failback to the primary site from the secondary site.
- The configuration must be tested to confirm that any failover mechanisms in the intercluster links interoperate satisfactorily with the SVC.
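The latency planning rule above reduces to simple arithmetic. As a hypothetical back-of-the-envelope helper (the function name and defaults are mine, taken from the figures in the text): with 1 ms of round-trip latency per 100 km and an 80 ms round-trip ceiling, the maximum planning distance works out to 8000 km.

```python
def max_planning_distance_km(max_round_trip_ms=80, km_per_ms_rtt=100):
    """Return the maximum site-to-site planning distance implied by a
    round-trip latency ceiling and a km-per-ms-of-RTT assumption."""
    return max_round_trip_ms * km_per_ms_rtt

print(max_planning_distance_km())  # 8000
```

A link technology with worse latency per kilometer (a smaller km-per-ms figure) shrinks the supportable distance proportionally.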
The FC extender must be treated as a normal link. The bandwidth and latency measurements must be made by, or on behalf of, the client; they are not part of the standard installation of the SVC by IBM. Make these measurements during installation and record them. Repeat the testing after any significant change to the equipment that provides the intercluster link.
If gmlinktolerance is disabled for the duration of the maintenance, it must be re-enabled after the maintenance is complete. Global Mirror volumes must have their preferred nodes evenly distributed between the nodes of the clusters. Each volume within an I/O Group has a preferred node property that can be used to balance the I/O load between nodes in that group. Figure 3-18 shows the correct relationship between volumes in a Metro Mirror or Global Mirror solution.
The storage controllers at the secondary cluster must be provisioned to handle the peak application workload to the Global Mirror volumes, plus the client-defined level of background copy, plus any other I/O being performed at the secondary site. Otherwise, the performance of applications at the primary cluster can be limited by the performance of the back-end storage controllers at the secondary cluster, because they constrain the amount of I/O that applications can perform to Global Mirror volumes. Do a complete review before using SATA for Metro Mirror or Global Mirror secondary volumes: using a slower disk subsystem behind high-performance primary volumes can mean that the SVC cache is unable to buffer all the writes, and flushing cache writes to SATA might slow I/O at the production site.

Global Mirror volumes at the secondary cluster must be in dedicated storage pools (which contain no non-Global Mirror volumes). Storage controllers must be configured to support the Global Mirror workload that is required of them. You can:
- Dedicate storage controllers to only Global Mirror volumes.
- Configure the controller to guarantee sufficient quality of service for the disks being used by Global Mirror.
- Ensure that physical disks are not shared between Global Mirror volumes and other I/O (for example, by not splitting an individual RAID array).

MDisks within a Global Mirror storage pool must be similar in their characteristics (for example, RAID level, physical disk count, and disk speed). This requirement applies to all storage pools, but it is particularly important for maintaining performance when using Global Mirror.

When a consistent relationship is stopped, for example, by a persistent I/O error on the intercluster link, the relationship enters the consistent_stopped state. I/O at the primary site continues, but the updates are not mirrored to the secondary site.
Restarting the relationship will begin the process of synchronizing new data to the secondary disk. While this synchronization is in progress, the relationship will be in the inconsistent_copying
state. Therefore, the Global Mirror secondary volume will not be in a usable state until the copy has completed and the relationship has returned to a Consistent state. For this reason it is highly advisable to create a FlashCopy of the secondary volume before restarting the relationship. When started, the FlashCopy will provide a consistent copy of the data, even while the Global Mirror relationship is copying. If the Global Mirror relationship does not reach the Synchronized state (if, for example, the intercluster link experiences further persistent I/O errors), the FlashCopy target can be used at the secondary site for disaster recovery purposes. If you are planning to use an FCIP intercluster link, it is extremely important to design and size the pipe correctly. Example 3-2 shows a best-guess bandwidth sizing formula.
Example 3-2 WAN link calculation example
Amount of write data within 24 hours, times 4 to allow for peaks
Translate into MB/s to determine the WAN link needed

Example: 250 GB per day
  250 GB * 4 = 1 TB
  24 hours * 3600 sec/hr = 86,400 sec
  1,000,000,000,000 / 86,400 = approximately 12 MB/s
  Which means an OC3 or higher is needed (155 Mbps or higher)

If compression is available on routers or WAN communication devices, smaller pipelines might be adequate. Note that the workload is probably not evenly spread across 24 hours; if there are extended periods of high data change rates, consider suspending Global Mirror during those time frames. If the network bandwidth is too small to handle the traffic, application write I/O response times might be elongated.

For the SVC, Global Mirror must support short-term peak write bandwidth requirements. Remember that SVC Global Mirror is much more sensitive to a lack of bandwidth than the DS8000. You also need to consider the initial sync and re-sync workload. The Global Mirror partnership's background copy rate must be set to a value that is appropriate to the link and to the secondary back-end storage; the more bandwidth that you give to the sync and re-sync operations, the less workload can be delivered by the SVC for regular data traffic. The Metro Mirror or Global Mirror background copy rate is predefined: the per-volume limit is 25 MBps, and the maximum per I/O Group is roughly 200 MBps.

Be careful using Thin-Provisioned secondary volumes at the disaster recovery site, because a Thin-Provisioned volume can perform up to 50% worse than a fully allocated volume and can affect the performance of the volumes at the primary site. Do not propose Global Mirror if the data change rate will exceed the communication bandwidth or if the round-trip latency exceeds 80 - 120 ms. Greater than 80 ms round-trip latency requires SCORE/RPQ submission.
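The best-guess formula in Example 3-2 translates directly into a few lines of arithmetic. This sketch simply restates that calculation; the 4x peak factor and the 250 GB/day change rate are the planning assumptions from the example, not fixed values.

```python
def wan_link_mbps(daily_write_gb, peak_factor=4):
    """Estimate the WAN link rate needed, in MB/s, from the daily write volume."""
    total_bytes = daily_write_gb * 1_000_000_000 * peak_factor  # allow for peaks
    seconds_per_day = 24 * 3600                                 # 86,400 s
    return total_bytes / seconds_per_day / 1_000_000            # bytes/s -> MB/s

# Example 3-2: 250 GB/day of changed data.
print(round(wan_link_mbps(250), 1))  # 11.6 -> roughly 12 MB/s, so OC3 (155 Mbps) or faster
```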
A best practice is to implement an automatic configuration backup by using the configuration backup command. We describe this command for the CLI and the GUI in Chapter 9, SAN Volume Controller operations using the command-line interface on page 439 and in Chapter 10, SAN Volume Controller operations using the GUI on page 579.
3.4.1 SAN
The SVC is now available in several models: 2145-8F4, 2145-8G4, 2145-8A4, and 2145-CF8. All of them can connect to 2 Gbps, 4 Gbps, or 8 Gbps switches. From a performance point of view, it is better to connect the SVC to 8 Gbps switches. Correct zoning on the SAN switch brings security and performance together. Implement a dual HBA approach at the host to access the SVC.
Using as many 15,000 RPM disks as possible will improve performance considerably, and creating one LUN per array will help in a sequential workload environment. In most cases, the SVC will be able to improve performance, especially on middle- to low-end disk subsystems, older disk subsystems with slow controllers, or uncached disk systems, for these reasons:
- The SVC can stripe across disk arrays, and it can do so across the entire set of supported physical disk resources.
- The SVC has a 4 GB, 8 GB, or (in the latest 2145-CF8 model) 24 GB cache, with an advanced caching mechanism.
- The SVC can provide automated performance optimization of hotspots through the use of Solid State Drives (SSDs) and Easy Tier.

The SVC's large cache and advanced cache management algorithms also allow it to improve upon the performance of many types of underlying disk technologies. The SVC's capability to manage, in the background, the destaging operations incurred by writes (while still maintaining full data integrity) can be particularly important in achieving good database performance.

Depending upon the size, age, and technology level of the disk storage system, the total cache available in the SVC can be larger, smaller, or about the same as that associated with the disk storage. Because hits to the cache can occur in either the upper (SVC) or the lower (disk controller) level of the overall system, the system as a whole can take advantage of the larger amount of cache wherever it is located. Thus, if the storage controller level of cache has the greater capacity, expect hits to this cache to occur in addition to hits in the SVC cache. Also, regardless of their relative capacities, both levels of cache tend to play an important role in allowing sequentially organized data to flow smoothly through the system.
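The two-level cache effect described above can be illustrated with a simple probability model: a read that misses the SVC cache still has a chance of hitting the disk controller cache. The hit ratios in this sketch are invented for illustration; real ratios depend on the workload and on the relative cache sizes.

```python
def combined_hit_ratio(svc_hit, controller_hit):
    """Fraction of reads served from either cache level.

    svc_hit        -- probability of a hit in the SVC cache
    controller_hit -- probability that an SVC miss hits the controller cache
    """
    return svc_hit + (1 - svc_hit) * controller_hit

# Hypothetical: 30% SVC cache hits, 20% controller cache hits on SVC misses.
print(round(combined_hit_ratio(0.30, 0.20), 2))  # 0.44 -> 44% of reads avoid the disks
```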
The SVC cannot increase the throughput potential of the underlying disks in all cases, because this depends upon both the underlying storage technology and the degree to which the workload exhibits hot spots or sensitivity to cache size or cache algorithms. IBM SAN Volume Controller 4.2.1 Cache Partitioning, REDP-4426, explains the SVC's cache partitioning capability: http://www.redbooks.ibm.com/abstracts/redp4426.html?Open
3.4.3 SVC
The SVC cluster is scalable up to eight nodes, and performance scales nearly linearly when adding nodes to an SVC cluster, until it becomes limited by other components in the storage infrastructure. Although virtualization with the SVC provides a great deal of flexibility, it does not diminish the need for a SAN and disk subsystems that can deliver the desired performance. Essentially, SVC performance improvements are gained by having as many MDisks as possible, thereby creating a greater level of concurrent I/O to the back-end without overloading a single disk or array. Assuming that there are no bottlenecks in the SAN or on the disk subsystem, specific guidelines must be followed when you perform these tasks:
- Creating a storage pool
- Creating volumes
- Connecting or configuring hosts that must receive disk space from an SVC cluster
You can obtain more detailed information about performance and best practices for the SVC in SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521: http://www.redbooks.ibm.com/abstracts/sg247521.html?Open
Chapter 4. SAN Volume Controller initial configuration
Note that you have full management control of the SVC regardless of which method you choose. IBM System Storage Productivity Center is supplied by default when you purchase your SVC cluster. If you already have a previously installed SVC cluster in your environment, it is possible that you are using the SVC Console (Hardware Management Console (HMC)). You can still use it together with IBM System Storage Productivity Center, but you can only log in to your SVC from one of them at a time. If you decide to manage your SVC cluster with the SVC CLI, it does not matter if you are using the SVC Console or IBM System Storage Productivity Center, because the SVC CLI is located on the cluster and accessed through Secure Shell (SSH), which can be installed anywhere.
Figure 4-2 shows the TCP/IP ports and services that are used by the SVC.
For more information about TCP/IP prerequisites, see Chapter 3, Planning and configuration on page 57 and also the IBM System Storage Productivity Center: Introduction and Planning Guide, SC23-8824. To assist you in starting an SVC initial configuration, Figure 4-3 shows a common flowchart that covers all of the types of management.
In the next sections, we describe each of the steps shown in Figure 4-3.
- Tivoli Storage Productivity Center for Replication is pre-installed. An additional license is required.
- IBM System Storage DS Storage Manager 10.70 is available for you to optionally install on the System Storage Productivity Center server, or on a remote server. DS Storage Manager 10.70 can manage the IBM DS3000, IBM DS4000, and IBM DS5000. With DS Storage Manager 10.70, when you use Tivoli Storage Productivity Center to add and discover a DS CIM Agent, you can launch the DS Storage Manager from the topology viewer, the Configuration Utility, or the Disk Manager of Tivoli Storage Productivity Center.
- IBM Java 1.6 is pre-installed and supports DS Storage Manager 10.70. You do not need to download Java from Sun Microsystems.
- The DS CIM Agent management commands (DSCIMCLI) for 5.5.0.3 are pre-installed on the System Storage Productivity Center.
- SSPC supports SVC 6.1 and the new Storwize V7000, and also supports a manual install of the 5.1 GUI (the SVC Console needed for SVC 5.1 or earlier releases is also available on the IBM website). With SVC 6.1, the GUI console is embedded in the SVC cluster, so there is no longer a need to install any SVC software directly on the SSPC.
- IBM DB2 Enterprise Server Edition is pre-installed.
- PuTTY (SSH client software) is pre-installed.

Figure 4-4 shows the product stack in the IBM System Storage Productivity Center Console 1.5.
IBM System Storage Productivity Center has all of the software components pre-installed and tested on a System x machine, model IBM System Storage Productivity Center 2805-MC5, with Windows installed. All of the software components installed on the IBM System Storage Productivity Center can also be ordered and installed on hardware that meets or exceeds the minimum requirements. For a detailed guide to the IBM System Storage Productivity Center, refer to IBM System Storage Productivity Center Software Installation and User's Guide, SC23-8823. For information pertaining to physical connectivity to the SVC, see Chapter 3, Planning and configuration on page 57.
4.2.2 SVC installation planning information for System Storage Productivity Center
Consider the following steps when planning the System Storage Productivity Center installation:
- Verify that the hardware and software prerequisites have been met.
- Determine the location of the rack where the System Storage Productivity Center is to be installed.
- Verify that the System Storage Productivity Center will be installed in line of sight to the SVC nodes.
- Verify that you have a keyboard, mouse, and monitor available to use.
- Determine the cabling required.
- Determine the network IP address.
- Determine the System Storage Productivity Center host name.
For detailed installation guidance, see IBM System Storage Productivity Center: Introduction and Planning Guide, SC23-8824:
https://www-304.ibm.com/systems/support/supportsite.wss/supportresources?brandind=5000033&familyind=5356448
Also see IBM Tivoli Storage Productivity Center and IBM Tivoli Storage Productivity Center for Replication Installation and Configuration Guide, SC27-2337:
http://www-01.ibm.com/support/docview.wss?rs=1181&uid=ssg1S7002597
Figure 4-5 shows the front view of the System Storage Productivity Center Console based on the 2805-MC5 hardware.
Figure 4-6 shows a rear view of System Storage Productivity Center Console based on the 2805-MC5 hardware.
Figure 4-7 SVC 8F2 node and SVC 8F4 node front and operator panel
Use Figure 4-8 for the SVC Node 2145-8G4 and 2145-8A4 models.
Use Figure 4-9 as a reference for the SVC Node 2145-CF8 model; the figure shows the CF8 model front panel.
SVC V6.1 introduces a new method for performing service tasks. In addition to being able to perform service tasks from the front panel, you can also service a node through an Ethernet connection using either a web browser or the command-line interface. An additional Service IP address for each node canister is required. For more details see 4.4.3, Configuring the Service IP Addresses on page 119 and 10.17, Service Assistant with the GUI on page 809.
4.3.2 Prerequisites
Ensure that the SVC nodes are physically installed and that Ethernet and Fibre Channel connectivity has been correctly configured. For information about physical connectivity to the SVC, see Chapter 3, Planning and configuration on page 57. Prior to configuring the cluster, ensure that the following information is available:
- License: The license indicates whether the client is permitted to use FlashCopy, Metro Mirror, or both. It also indicates how much capacity the client is licensed to virtualize.
- For IPv4 addressing:
  - Cluster IPv4 addresses: These addresses include one address for the cluster and another address for the service address.
  - IPv4 subnet mask.
  - Gateway IPv4 address.
- For IPv6 addressing:
  - Cluster IPv6 addresses: These addresses include one address for the cluster and another address for the service address.
  - IPv6 prefix.
  - Gateway IPv6 address.

You must create a cluster to use the SAN Volume Controller virtualized storage. The first phase of creating a cluster is performed from the front panel of the SAN Volume Controller. The second phase is performed from a web browser accessing the management GUI.
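Before walking the front panel, it can be worth sanity-checking the addresses you have gathered. This sketch uses Python's standard ipaddress module as an illustrative pre-flight check (it is not part of any IBM tooling, and the addresses shown are hypothetical lab values):

```python
import ipaddress

def validate_cluster_addressing(cluster_ip, service_ip, gateway, subnet_mask=None):
    """Raise ValueError if any of the collected addresses is malformed
    or if IPv4 and IPv6 entries are mixed."""
    addrs = [ipaddress.ip_address(a) for a in (cluster_ip, service_ip, gateway)]
    if len({a.version for a in addrs}) != 1:
        raise ValueError("cluster, service, and gateway addresses must share one IP version")
    if subnet_mask is not None:
        # Accepts dotted masks such as 255.255.255.0 (IPv4 clusters only).
        ipaddress.IPv4Network(f"0.0.0.0/{subnet_mask}")
    return True

# Hypothetical lab addresses.
print(validate_cluster_addressing("10.0.1.10", "10.0.1.11", "10.0.1.1", "255.255.255.0"))
```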
Figure 4-10 Cluster IPv4? and Cluster IPv6? options on the front panel display
If the New Cluster IPv4? or New Cluster IPv6? actions are displayed, move directly to step 5. If they are not displayed, this node is already a member of a cluster:
a. Press and release the up or down button until Actions is displayed.
b. Press and release the select button to return to the Main Options menu.
c. Press and release the up or down button until Cluster: is displayed. The name of the cluster that the node belongs to is displayed on line 2 of the panel. In this case, there are two options:
a. If you want to delete this node from the cluster:
   i. Press and release the up or down button until Actions is displayed.
   ii. Press and release the select button.
   iii. Press and release the up or down button until Remove Cluster? is displayed.
   iv. Press and hold the up button.
   v. Press and release the select button.
   vi. Press and release the up or down button until Confirm remove? is displayed.
   vii. Press and release the select button.
   viii. Release the up button, which deletes the cluster information from the node. Go back to step 1 on page 105 and start again.
b. If you do not want this node to be removed from an existing cluster, review the situation and determine the correct nodes to include in the new cluster.
5. Press and release the select button to create the new cluster.
6. Press and release the select button again to modify the IP.
7. Use the up or down navigation buttons to change the value of the first field of the IP address to the value that has been chosen.

Notes: For IPv4, pressing and holding the up or down buttons will increment or decrement the IP address field in units of 10. The field value rotates from 0 to 255 with the down button, and from 255 to 0 with the up button. For IPv6, the address and the gateway address consist of eight 4-digit hexadecimal values.
Enter the full address by working across a series of four panels to update each of the 4-digit hexadecimal values that make up the IPv6 addresses. The panels consist of eight fields, where each field is a 4-digit hexadecimal value.
8. Use the right navigation button to move to the next field. Use the up or down navigation buttons to change the value of this field.
9. Repeat step 7 for each of the remaining fields of the IP address.
10. When the last field of the IP address has been changed, press the select button.
11. Press the right arrow button:
   a. For IPv4, IPv4 Subnet: is displayed.
   b. For IPv6, IPv6 Prefix: is displayed.
12. Press the select button.
13. Change the fields for IPv4 Subnet in the same way that the IPv4 IP address fields were changed. There is only a single field for IPv6 Prefix.
14. When the last field of IPv4 Subnet/IPv6 Prefix has been changed, press the select button.
15. Press the right navigation button:
   a. For IPv4, IPv4 Gateway: is displayed.
   b. For IPv6, IPv6 Gateway: is displayed.
16. Press the select button.
17. Change the fields for the appropriate Gateway in the same way that the IPv4/IPv6 address fields were changed.
18. When the changes to all of the Gateway fields have been made, press the select button.
19. To review the settings before creating the cluster, use the right and left buttons. Make any necessary changes, then use the right and left buttons to reach Confirm Created?, and press the select button.
20. After you complete this task, the following information is displayed on the service display panel:
   - Cluster: is displayed on line 1.
   - A temporary, system-assigned cluster name that is based on the IP address is displayed on line 2.

If the cluster is not created, Create Failed: is displayed on line 1 of the service display, and line 2 contains an error code. Refer to the error codes that are documented in IBM System Storage SAN Volume Controller: Service Guide, GC26-7901, to identify the reason why the cluster creation failed and the corrective action to take.

After you have created the cluster on the front panel with the correct IP address format, you can finish the cluster configuration by accessing the management GUI, completing the Create Cluster wizard, and adding nodes to the cluster.

Important: At this time, do not repeat this procedure to add other nodes to the cluster. To add nodes to the cluster, follow the steps described in 9.9.2, Adding a node on page 495 and in 10.12.3, Adding a node to the cluster on page 750.
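The front-panel field behavior described in step 7 and its notes (single presses move by 1, press-and-hold moves in units of 10, and the value wraps between 0 and 255) can be modelled as simple modular arithmetic. This is an illustration only, not SVC code:

```python
def step_field(value, button, hold=False):
    """Next value of an IPv4 front-panel field after a button action.

    Single presses move by 1, press-and-hold moves in units of 10, and
    the field wraps: down from 0 gives 255, up from 255 gives 0.
    """
    delta = 10 if hold else 1
    if button == "down":
        delta = -delta
    return (value + delta) % 256

print(step_field(0, "down"))             # 255 (wraps downward)
print(step_field(255, "up"))             # 0   (wraps upward)
print(step_field(120, "up", hold=True))  # 130
```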
2. Enter the default superuser password: passw0rd (with a zero) and click Continue, as shown in Figure 4-12.
3. On the next page, read the license agreement carefully. To agree with it, select I agree with the terms in the license agreement and click Next as shown in Figure 4-13.
4. At the Name, Date, and Time window (Figure 4-14), fill in the following details:
   - A Cluster Name (System Name): This name is case sensitive and can consist of A to Z, a to z, 0 to 9, and the underscore (_). It cannot start with a number, and it must be between 1 and 60 characters long.
   - A Time Zone: You can select the time zone for the cluster here.
   - A Date and a Time: Here you can change the date and the time of your cluster. If you are using a Network Time Protocol (NTP) server, you can enter the IP address of the NTP server by selecting Set NTP Server IP Address.
Click Next to confirm your changes.
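The naming rule in step 4 is easy to encode as a regular expression. This is an illustrative validator, not part of the wizard; whether a leading underscore is accepted is an assumption here, since the stated rule only forbids a leading digit.

```python
import re

# 1-60 characters; letters, digits, underscore; must not start with a digit.
_CLUSTER_NAME = re.compile(r"[A-Za-z_][A-Za-z0-9_]{0,59}")

def valid_cluster_name(name):
    return bool(_CLUSTER_NAME.fullmatch(name))

print(valid_cluster_name("ITSO_CLS1"))  # True
print(valid_cluster_name("1cluster"))   # False (starts with a digit)
print(valid_cluster_name("a" * 61))     # False (longer than 60 characters)
```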
5. The Change Date and Time Settings window appears to complete updates on the cluster; see Figure 4-15. When the task is completed, click Close.
6. Next, the System License window is displayed, as shown in Figure 4-16. To continue, fill out the fields for Virtualization Limit, FlashCopy Limit, and Global and Metro Mirror Limit for the number of Terabytes that are licensed. If you do not have a license for any of these features, leave the value at 0. Click Next.
7. The Configure Email Event Notification window is displayed as shown in Figure 4-17.
To ensure that your system continues to run smoothly, you can enable email event notifications. Email event notifications send messages about error, warning, or informational events and inventory reports to an email address of local or remote support personnel. If you do not want to configure notifications, or if you want to do it later, click Next and go to step 8 on page 114. If you want to configure them, click Configure Email Event Notifications and a wizard appears. Ensure that all the information you enter is valid; otherwise, email notification is disabled.
a. On the first page, shown in Figure 4-18, fill in the information required to enable IBM Support personnel to contact this person to assist with problem resolution (Contact Name, Email Reply Address, Machine Location, and Phone). Ensure that all contact information is valid. Then, click Next.
b. On the next page, shown in Figure 4-19, configure at least one email server that is used by your site and optionally, enable inventory reporting. Enter a valid IP address and a server port for each server added. Ensure that the email servers are valid. Inventory reports allow IBM service personnel to proactively notify you of any known issues with your system. To activate it, enable inventory reporting and choose a Reporting Interval in this window.
c. Next, as shown on Figure 4-20, you can configure email addresses to receive notifications. It is a best practice to have one of the email addresses be a support user with the error event notification type enabled to notify IBM service personnel if an error condition occurs on your system. Ensure that all email addresses are valid.
d. The last window, Figure 4-20, is a summary of your Email Event Notification wizard. Click Finish to complete the setup.
e. The wizard is now closed and additional information has been added, as shown in Figure 4-22. You can edit or discard your changes from this window. Then, click Next.
Figure 4-22 Configure Email Event Notification window with configuration information
8. Next, you can add available nodes to your cluster; see Figure 4-23.
To complete this operation, click an empty node position to view the candidate nodes. Important: Keep in mind that you need at least two nodes per I/O Group. Add your available nodes in sequence. For an empty slot, select the node that you want to add to your cluster using the drop-down list, change its name, and click Add Node, as shown in Figure 4-24.
A pop-up window appears to inform you about the time required to add a node to the cluster; see Figure 4-25. If you want to add it, click the OK button.
The Add New Node window appears to complete the update on the cluster, as shown on Figure 4-26. When the task is completed, click Close.
After your node has been successfully added to the cluster, you see an updated view of the window from Figure 4-23, as shown in Figure 4-27.
Figure 4-27 Hardware window with a second node added to the cluster
When all your nodes have been added to your cluster, click Finish. 9. Several operations will be done to update the cluster configuration, as shown in Figure 4-28. When the task is completed, click Close.
10.Your cluster is now successfully created. However, there are several remaining tasks to be completed before you use the cluster, such as changing the default superuser password or defining an IP address for service. We guide you through these tasks in the following sections.
2. From the GUI, select User Management > Users, as shown in Figure 4-30.
3. Right-click the superuser user and select Modify, as shown in Figure 4-31.
5. Enter the new password twice and validate your change by clicking OK, as shown in Figure 4-33.
3. Select one node, then click the port to which you want to assign a service IP address; see Figure 4-36.
4. Depending on whether you have installed an IPv4 or an IPv6 cluster, there is different information to enter.
For IPv4:
- Type an IPv4 address in the IP Address field.
- Type an IPv4 gateway in the Gateway field.
- Type an IPv4 subnet mask in the Subnet Mask field.
For IPv6:
- Select the Show IPv6 button.
- Type an IPv6 prefix in the IPv6 Network Prefix field. The prefix can have a value of 0 to 127.
- Type an IPv6 address in the IP Address field.
- Type an IPv6 gateway in the Gateway field.
After the information has been entered, click OK to confirm the modification, as shown in Figure 4-37.
4.4.4 Postrequisites
Perform the following steps to complete the SVC cluster configuration. We explain all of these steps in greater detail in Chapter 9, SAN Volume Controller operations using the command-line interface on page 439, and in Chapter 10, SAN Volume Controller operations using the GUI on page 579.
a. Configure SSH keys for the command-line user, as shown in 4.5, Secure Shell overview on page 122.
b. Configure user authentication and authorization.
c. Set up event notifications and inventory reporting.
d. Create the storage pools.
e. Add an MDisk to the storage pool.
f. Identify and create volumes.
g. Create host objects and map volumes to them.
h. Identify and configure FlashCopy mappings and Metro Mirror relationships.
i. Back up configuration data.
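Several of the steps above can later be scripted over the SSH CLI described in 4.5. The sketch below only assembles command strings for review (a dry run, nothing is executed); the object names are hypothetical, and you should verify each command's options against your SVC release before running anything.

```python
def post_config_commands(pool, mdisk, volume, size_gb, host, wwpn):
    """Build the CLI commands for steps d-i as plain strings (dry run)."""
    return [
        f"svctask mkmdiskgrp -name {pool} -ext 256",        # d. create a storage pool
        f"svctask addmdisk -mdisk {mdisk} {pool}",          # e. add an MDisk to it
        f"svctask mkvdisk -mdiskgrp {pool} -iogrp 0 -size {size_gb} -unit gb -name {volume}",  # f. create a volume
        f"svctask mkhost -name {host} -hbawwpn {wwpn}",     # g. create a host object
        f"svctask mkvdiskhostmap -host {host} {volume}",    # g. map the volume to it
        "svcconfig backup",                                 # i. back up configuration data
    ]

# Hypothetical object names for illustration.
for cmd in post_config_commands("Pool1", "mdisk0", "vol0", 10, "host1", "210000E08B05ADFC"):
    print(cmd)
```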
4.5.1 Generating public and private SSH key pairs using PuTTY
Perform the following steps to generate SSH keys on the SSH client system. Start the PuTTY Key Generator to generate public and private SSH keys: from the client desktop, select Start > Programs > PuTTY > PuTTYgen.
6. On the PuTTY Key Generator GUI window (Figure 4-38), generate the keys:
   a. Select SSH2 RSA.
   b. Leave the number of bits in a generated key at 1024.
   c. Click Generate.
7. Move the cursor onto the blank area to generate the keys. The blank area is the large blank rectangle inside the section of the GUI labeled Key. Continue to move the mouse pointer over the blank area until the progress bar reaches the far right. This action generates random characters to create a unique key pair.
8. After the keys are generated, save them for later use:
   a. Click Save public key, as shown in Figure 4-39.
b. You are prompted for a name (for example, pubkey) and a location for the public key (for example, C:\Support Utils\PuTTY). Click Save. If another name or location is chosen, ensure that a record of the name or location is kept, because the name and location of this SSH public key must be specified in the steps that are documented in 4.5.2, Uploading the SSH public key to the SVC cluster on page 125. Tip: The PuTTY Key Generator saves the public key with no extension, by default. Use the string pub in naming the public key, for example, pubkey, to easily differentiate the SSH public key from the SSH private key. c. In the PuTTY Key Generator window, click Save private key. d. You are prompted with a warning message, as shown in Figure 4-40. Click Yes to save the private key without a passphrase.
e. When prompted, enter a name (for example, icat) and location for the private key (for example, C:\Support Utils\PuTTY). Click Save. We suggest that you use the default name icat.ppk, because in SVC clusters running on versions prior to SVC 5.1, this key has been used for icat application authentication and must have this default name. Private key extension: The PuTTY Key Generator saves the private key with the PPK extension. 9. Close the PuTTY Key Generator GUI. 10.Navigate to the directory where the private key was saved (for example, C:\Support Utils\PuTTY).
2. From the Create a User window, enter the user ID that you want to create and the password. Also select the access level that you want to assign to the user (remember that Security Administrator is the maximum level) and choose the location of the SSH public key file that you created for this user, as shown in Figure 4-42. Click OK.
3. You have completed the user creation process and uploaded the user's SSH public key, which will be paired later with the user's private .ppk key, as described in 4.5.3, Configuring the PuTTY session for the CLI on page 126. Figure 4-43 shows the successful upload of the SSH admin key.
You have now completed the basic setup requirements for the SVC cluster using the SVC cluster web interface.
Perform these steps to configure the PuTTY session on the SSH client system:
1. From the System Storage Productivity Center Windows desktop, select Start > Programs > PuTTY > PuTTY to open the PuTTY Configuration GUI window.
2. In the PuTTY Configuration window (Figure 4-44), from the Category pane on the left, click Session, if it is not already selected. Tip: The items selected in the Category pane affect the content that appears in the right pane.
3. In the right pane, under the Specify the destination you want to connect to section, select SSH. Under the Close window on exit section, select Only on clean exit, which ensures that if there are any connection errors, they will be displayed on the users window. 4. From the Category pane on the left side of the PuTTY Configuration window, click Connection SSH to display the PuTTY SSH Configuration window, as shown in Figure 4-45.
5. In the right pane, in the Preferred SSH protocol version section, select 2.
6. From the Category pane on the left side of the PuTTY Configuration window, select Connection > SSH > Auth.
7. In the right pane, in the Private key file for authentication field under the Authentication parameters section, either browse to or type the fully qualified directory path and file name of the SSH client private key file created earlier (for example, C:\Support Utils\PuTTY\icat.PPK), as shown in Figure 4-46.
8. From the Category pane on the left side of the PuTTY Configuration window, click Session.
9. In the right pane, follow these steps, as shown in Figure 4-47:
a. Under the Load, save, or delete a stored session section, select Default Settings, and click Save.
b. For the Host Name (or IP address), type the IP address of the SVC cluster.
c. In the Saved Sessions field, type a name (for example, SVC) to associate with this session.
d. Click Save.
You can now either close the PuTTY Configuration window or leave it open to continue.
Tips: When you enter the Host Name or IP address in PuTTY, prefix it with your SVC user name followed by @, as shown previously. This way, you do not have to enter your user name each time that you access your SVC cluster.
Normally, output that comes from the SVC is wider than the default PuTTY window size. To accommodate it, change the PuTTY font: click the Appearance item in the Category tree, as shown in Figure 4-47, click Font, and choose a font with a character size of 8.
4. If this is the first time that PuTTY has been used since you generated and uploaded the SSH key pair, a PuTTY Security Alert window opens, prompting you to confirm that you trust the host, as shown in Figure 4-49. Click Yes, which invokes the CLI.
5. As shown in Example 4-1, the private key used in this PuTTY session is now authenticated against the public key that was uploaded to the SVC cluster.
Example 4-1 Authenticating
Using username "admin".
Authenticating with public key "rsa-key-20100909"
IBM_2145:ITSO-CLS1:admin>

You have now completed the tasks that are required to configure the CLI for SVC administration from the SVC Console. You can close the PuTTY session.
Ethernet adapter IPv6:
   Connection-specific DNS Suffix . . :
   IP Address. . . . . . . . . . . . :
   Subnet Mask . . . . . . . . . . . :
   IP Address. . . . . . . . . . . . :
   IP Address. . . . . . . . . . . . :
   Default Gateway . . . . . . . . . :
To update a cluster, follow these steps:
1. Select Configuration → Network, as shown in Figure 4-50.
2. Select Management IP Addresses, and then click port 1 of one of the nodes, as shown in Figure 4-51.
3. In the window that is shown in Figure 4-52, follow these steps:
a. Select Show IPv6.
b. Type an IPv6 prefix in the IPv6 Network Prefix field. The prefix can have a value of 0 to 127.
c. Type an IPv6 address in the IP Address field.
d. Type an IPv6 gateway in the Gateway field.
e. Click OK.
5. The Change Management task is launched on the server as shown in Figure 4-54. Click Close when the task is completed.
6. Test the IPv6 connectivity using the ping command from a cmd.exe session on your local workstation (as shown in Example 4-3).
Example 4-3 Testing IPv6 connectivity to the SVC cluster
C:\Documents and Settings\Administrator>ping 2001:0610:0000:0000:0000:0000:0000:119

Pinging 2001:610::119 from 2001:610::115 with 32 bytes of data:

Reply from 2001:610::119: time=3ms
Reply from 2001:610::119: time<1ms
Reply from 2001:610::119: time<1ms
Reply from 2001:610::119: time<1ms
Ping statistics for 2001:610::119:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 3ms, Average = 0ms

7. Test the IPv6 connectivity to the cluster using an IPv6-compatible web browser on your local workstation; see Figure 4-55.
Figure 4-55 Testing IPv6 SVC GUI access using a compatible web browser
Tip: To access an IPv6 address in a web browser, you must enclose the IP address in square brackets, as shown at the top of Figure 4-55.
8. Finally, remove the IPv4 address in the SVC GUI by accessing the same window that is shown in Figure 4-52, and validate this change by clicking OK.
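The bracket rule from the tip above can be illustrated with a short sketch that builds a management URL from an IPv6 address. The address is the one from this example; the URL scheme and path are assumptions for illustration.

```shell
# Wrap an IPv6 address in square brackets before placing it in a URL,
# so its colons are not mistaken for a port separator.
addr="2001:610::119"
url="https://[${addr}]/"    # hypothetical management URL
echo "$url"                 # https://[2001:610::119]/
```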
Chapter 5. Host configuration
In this chapter we describe the basic host configuration procedures that are required to connect supported hosts to the IBM System Storage SAN Volume Controller (SVC).
5.1 Host attachment overview for IBM System Storage SAN Volume Controller
The IBM System Storage SAN Volume Controller supports a wide range of host types (both IBM and non-IBM), making it possible to consolidate storage in an open systems environment into a common pool of storage. The storage pool can then be utilized and managed more efficiently as a single entity from a central point on the SAN.
The ability to consolidate storage for open systems hosts provides the following benefits:
- Storage is easier to manage.
- The utilization of capacity is increased.
- Advanced Copy Services functions can be applied across storage systems from many vendors.
In this figure, the optical distance between SVC Node 1 and Host 2 is just over 40 km.
For high performance servers, the rule is to avoid ISL hops, that is, connect the servers to the same switch to which the SVC is connected, if possible.
Remember these limits when connecting host servers to an SVC:
- Up to 256 hosts per I/O Group are supported, which results in a total of 1,024 hosts per cluster. Note that if the same host is connected to multiple I/O Groups of a cluster, it counts as a host in each of these groups.
- A total of 512 distinct configured host worldwide port names (WWPNs) are supported per I/O Group. This limit is the sum of the FC host ports and the host iSCSI names (an internal WWPN is generated for each iSCSI name) associated with all of the hosts that are associated with a single I/O Group.
Access from a server to an SVC cluster through the SAN fabric is defined by the use of switch zoning. Consider these rules for zoning hosts with the SVC:
- Homogeneous HBA port zones: Switch zones containing HBAs must contain HBAs from similar host types and similar HBAs in the same host. For example, AIX and NT hosts must be in separate zones, and QLogic and Emulex adapters must also be in separate zones.
Important: A configuration that breaches this rule is unsupported because it can introduce instability into the environment.
- HBA to SVC port zones: Place each host HBA in a separate zone along with two SVC ports, one from each node in the I/O Group. Do not place more than two SVC ports in a zone with an HBA, because this will produce more than the recommended number of paths, as seen from the host multipath driver.
Recommended number of paths per volume (n+1 redundancy):
  - With 2 HBA ports: zone each HBA port to 2 SVC ports, for a total of 4 paths.
  - With 4 HBA ports: zone each HBA port to 1 SVC port, for a total of 4 paths.
Optional (n+2 redundancy):
  - With 4 HBA ports: zone each HBA port to 2 SVC ports, for a total of 8 paths.
Note: Here, the term HBA port is used to describe the SCSI initiator, and SVC port is used to describe the SCSI target.
- Maximum host paths per LU: For any volume, the number of paths through the SAN from the SVC nodes to a host must not exceed eight. For most configurations, four paths to an I/O Group (four paths to each volume that is provided by this I/O Group) are sufficient.
- Balanced host load across HBA ports: To obtain the best performance from a host with multiple ports, ensure that each host port is zoned with a separate group of SVC ports.
- Balanced host load across SVC ports: To obtain the best overall performance of the subsystem and to prevent overloading, the workload to each SVC port must be equal. You can achieve this balance by zoning approximately the same number of host ports to each SVC port.
Figure 5-3 on page 142 shows an overview of a configuration where each server contains two single-port HBAs. Attempt to distribute the attached hosts equally between two logical sets per I/O Group. Connect hosts from each set to the same group of SVC ports. This port group includes exactly one port from each SVC node in the I/O Group. The zoning defines the correct connections. The port groups are defined as follows:
- Hosts in host set one of an I/O Group are always zoned to the P1 and P4 ports on both nodes, for example, N1/N2 of I/O Group zero.
- Hosts in host set two of an I/O Group are always zoned to the P2 and P3 ports on both nodes of an I/O Group.
You can create aliases for these port groups (per I/O Group):
Fabric A: IOGRP0_PG1 N1_P1;N2_P1, IOGRP0_PG2 N1_P3;N2_P3
Fabric B: IOGRP0_PG1 N1_P4;N2_P4, IOGRP0_PG2 N1_P2;N2_P2
Create host zones by always using the host port WWPN, plus the PG1 alias for hosts in the first host set. Always use the host port WWPN, plus the PG2 alias for hosts from the
second host set. If a host has to be zoned to multiple I/O Groups, simply add the PG1 or PG2 aliases from the specific I/O Groups to the host zone. Using this schema provides four paths to one I/O Group for each host and helps to maintain an equal distribution of host connections on the SVC ports. Figure 5-3 shows an overview of this host zoning schema.
When possible, use the minimum number of paths necessary to achieve a sufficient level of redundancy. For an SVC environment, no more than four paths per I/O Group are required to accomplish this. Remember that all paths must be managed by the multipath driver on the host side. If we assume that a server is connected through four ports to the SVC, each volume is seen through eight paths. With 125 volumes mapped to this server, the multipath driver has to support handling up to 1,000 active paths (8 x 125). You can find configuration and operational details about the IBM Subsystem Device Driver (SDD) in the Multipath Subsystem Device Driver User's Guide, GC52-1309, at the following website: http://www-01.ibm.com/support/docview.wss?uid=ssg1S7000303&aid=1 For hosts using four HBAs/ports with eight connections to an I/O Group, use the zoning schema that is shown in Figure 5-4 on page 143. You can combine this schema with the previous four-path zoning schema.
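The path arithmetic above can be checked with a short calculation. The port and volume counts are the ones from this example, not fixed SVC limits:

```shell
# Active paths the host multipath driver must manage:
# paths per volume = host ports zoned x SVC ports zoned per host port.
hba_ports=4            # host ports connected to the SVC
svc_ports_per_hba=2    # SVC ports zoned with each host port
volumes=125            # volumes mapped to this server

paths_per_volume=$((hba_ports * svc_ports_per_hba))
total_paths=$((paths_per_volume * volumes))

echo "paths per volume: $paths_per_volume"    # 8
echo "total active paths: $total_paths"       # 1000
```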
For each login between an HBA port and an SVC node port, SVC allows access based on the port mask defined within the host object to which the HBA belongs. If access is denied, SVC responds to SCSI commands as though the HBA port is unknown to the SVC.
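The mask evaluation described above can be sketched as follows. The 4-bit mask string matches the "mask 1111" field shown in svcinfo lshost output later in this chapter, but the right-to-left bit ordering here is an assumption for illustration, not the documented SVC internals:

```shell
# Evaluate a 4-bit host port mask: a 1 bit permits logins to the
# corresponding SVC port, a 0 bit denies them.
mask="1011"    # hypothetical mask: port 3 denied; ports 1, 2, and 4 allowed
for port in 1 2 3 4; do
    # read the digit for this port, counting from the right of the string
    bit=$(echo "$mask" | rev | cut -c "$port")
    if [ "$bit" = "1" ]; then
        echo "SVC port $port: login allowed"
    else
        echo "SVC port $port: login denied"
    fi
done
```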
5.3 iSCSI
iSCSI is a block-level protocol that encapsulates SCSI commands into TCP/IP packets and, thereby, leverages an existing IP network instead of requiring FC HBAs and a SAN fabric infrastructure. The iSCSI standard is defined by RFC 3720. For the SAN Volume Controller, only connections from iSCSI-attached hosts to nodes are supported. The network interface controller (NIC) cards carry iSCSI traffic and are also used for the configuration of UI traffic.
Important: iSCSI connections from SAN Volume Controller nodes to storage systems are not supported.
5.3.2 Nodes
There are one or more iSCSI nodes within a network entity. The iSCSI node is accessible through one or more network portals. A network portal is a component of a network entity that has a TCP/IP network address and that can be used by an iSCSI node. An iSCSI node is identified by its unique iSCSI name, referred to as an iSCSI qualified name (IQN). Remember that this name serves only for the identification of the node; it is not the node's address. In iSCSI, the name is separated from the addresses. This separation allows multiple iSCSI nodes to use the same addresses or, as implemented in the SVC, the same iSCSI node to use multiple addresses.
5.3.3 IQN
An SVC cluster can provide up to eight iSCSI targets, one per node. Each SVC node has its own IQN, which by default is in this form:
iqn.1986-03.com.ibm:2145.<clustername>.<nodename>
An iSCSI host in SVC is defined by specifying its iSCSI initiator names. For example, this is the IQN of a Windows server:
iqn.1991-05.com.microsoft:itsoserver01
During the configuration of an iSCSI host in the SVC, you must specify the host's initiator IQNs. You can read about host creation in detail in Chapter 9, SAN Volume Controller operations using the command-line interface on page 439, and in Chapter 10, SAN Volume Controller operations using the GUI on page 579.
An alias string can also be associated with an iSCSI node. The alias allows an organization to associate a user-friendly string with the iSCSI name. However, the alias string is not a substitute for the iSCSI name. Figure 5-6 on page 146 shows an overview of the iSCSI implementation in the SVC.
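The default IQN format above can be assembled mechanically; the cluster and node names in this sketch are hypothetical:

```shell
# Build the default SVC node IQN from its cluster and node names,
# following the iqn.1986-03.com.ibm:2145.<clustername>.<nodename> pattern.
clustername="itso-cls1"    # hypothetical cluster name
nodename="node1"           # hypothetical node name
iqn="iqn.1986-03.com.ibm:2145.${clustername}.${nodename}"
echo "$iqn"    # iqn.1986-03.com.ibm:2145.itso-cls1.node1
```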
A host that is using iSCSI as the communication protocol to access its volumes on an SVC cluster uses its single or multiple Ethernet adapters to connect to an IP LAN. The nodes of the SVC cluster are connected to the LAN by the existing 1 Gbps Ethernet ports on each node. For iSCSI, both ports can be used. Note that Ethernet link aggregation (port trunking) or channel bonding for the SVC nodes' Ethernet ports is not supported for the 1 Gbps ports in this release. For each SVC node, that is, for each instance of an iSCSI target node in the SVC node, two IPv4 and two IPv6 addresses, or iSCSI network portals, can be defined. Figure 2-10 on page 31 shows one IPv4 and one IPv6 address per Ethernet port.
b. Configure the node Ethernet ports on each node in the cluster with the svctask cfgportip command.
c. Verify that you have configured the node and the clustered Ethernet ports correctly by reviewing the output of the svcinfo lsportip and svcinfo lsclusterip commands.
d. Use the svctask mkvdisk command to create volumes on the SAN Volume Controller cluster.
e. Use the svctask mkhost command to create a host object on the SAN Volume Controller that describes the iSCSI server initiator to which the volumes are to be mapped.
f. Use the svctask mkvdiskhostmap command to map the volume to the host object in the SAN Volume Controller.
2. Set up your host server:
a. Ensure that you have configured your IP interfaces on the server.
b. Install the software for the iSCSI software-based initiator on the server.
c. On the host server, run the configuration methods for iSCSI so that the host server iSCSI initiator logs in to the SAN Volume Controller cluster and discovers the SAN Volume Controller volumes. The host then creates host devices for the volumes.
3. After the host devices are created, you can use them with your host applications.
5.3.6 Authentication
Authentication of hosts is optional; by default, it is disabled. The user can choose to enable Challenge Handshake Authentication Protocol (CHAP) authentication, which involves sharing a CHAP secret between the cluster and the host. If the host does not provide the correct key, the SVC does not allow it to perform I/O to volumes. The cluster can also be assigned a CHAP secret.
A new feature with iSCSI is that you can move the IP addresses that are used to address an iSCSI target on the SVC node between the nodes of an I/O Group. IP addresses will only be moved from one node to its partner node if a node goes through a planned or unplanned restart. If the Ethernet link to the SVC cluster fails due to a cause outside of the SVC (such as the cable being disconnected or the Ethernet router failing), the SVC makes no attempt to fail over an IP address to restore IP access to the cluster. To enable validation of
the Ethernet access to the nodes, the SVC responds to ping with the standard one-per-second rate without frame loss.
There is a concept that is used for handling the iSCSI IP address failover called a clustered Ethernet port. A clustered Ethernet port consists of one physical Ethernet port on each node in the cluster. The clustered Ethernet port contains configuration settings that are shared by all of these ports. These clustered ports are referred to as Port 1 and Port 2 in the CLI or GUI on each node of an SVC cluster. Clustered Ethernet ports can be used for iSCSI or management ports.
Figure 5-7 shows an example of an iSCSI target node failover. It gives a simplified overview of what happens during a planned or unplanned node restart in an SVC I/O Group:
1. During normal operation, one iSCSI target node instance is running on each SVC node. All of the IP addresses (IPv4/IPv6) belonging to this iSCSI target, including the management addresses if the node acts as the configuration node, are presented on the two ports (P1/P2) of a node.
2. During a restart of an SVC node (N1), the iSCSI target node, including all of its network portal (IPv4/IPv6) IP addresses defined on Port1/Port2 and the management (IPv4/IPv6) IP addresses (if N1 acted as the configuration node), will fail over to Port1/Port2 of the partner node within the I/O Group, that is, node N2. An iSCSI initiator running on a server will reconnect to its iSCSI target, that is, the same IP addresses presented now by a new node of the SVC cluster.
3. As soon as the node (N1) has finished its restart, the iSCSI target node (including its IP addresses) running on N2 will fail back to N1. Again, the iSCSI initiator running on a server will reconnect to its iSCSI target. The management addresses will not fail back; N2 will remain in the role of the configuration node for this cluster.
From a server perspective, a multipathing driver (MPIO) is not required to handle an SVC node failover. In the case of a node restart, the server simply reconnects to the IP addresses of the iSCSI target node, which reappear after several seconds on the ports of the partner node.
Implementing the IBM System Storage SAN Volume Controller V6.1
A host multipathing driver for iSCSI is required in these situations:
- To protect a server from network link failures, including port failures on the SVC nodes
- To protect a server from a server HBA failure (if two HBAs are in use)
- To protect a server from network failures, if the server is connected through two HBAs to two separate networks
- To provide load balancing on the server's HBAs and the network links
The commands for the configuration of the iSCSI IP addresses have been separated from the configuration of the cluster IP addresses. The following commands are new for managing iSCSI IP addresses:
- The svcinfo lsportip command lists the iSCSI IP addresses assigned for each port on each node in the cluster.
- The svctask cfgportip command assigns an IP address to each node's Ethernet port for iSCSI I/O.
The following commands are new for managing the cluster IP addresses:
- The svcinfo lsclusterip command returns a list of the cluster management IP addresses configured for each port.
- The svctask chclusterip command modifies the IP configuration parameters for the cluster.
For a detailed description about how to use these commands, see Chapter 9, SAN Volume Controller operations using the command-line interface on page 439. The parameters for remote services (ssh and web services) remain associated with the cluster object. During a software upgrade, the configuration settings for the cluster will be used to configure clustered Ethernet Port 1.
For iSCSI-based access, using two separate networks and separating iSCSI traffic within the networks by using a dedicated VLAN path for storage traffic prevents any IP interface, switch, or target port failure from compromising the host server's access to the volumes.
149
3. Connect the AIX host system to the FC switches.
4. Configure the FC switch zoning.
5. Install the 2145 host attachment support package.
6. Install and configure the Subsystem Device Driver Path Control Module (SDDPCM).
7. Perform the logical configuration on the SAN Volume Controller to define the host, volumes, and host mapping.
8. Run cfgmgr to discover and configure the SVC volumes.
The following sections detail the current support information. It is vital that you check the listed websites regularly for any updates.
2. Issue the following command to enable dynamic tracking for each FC device: chdev -l fscsi0 -a dyntrk=yes The preceding example command was for adapter fscsi0. Example 5-2 shows the command for both adapters on our test system running AIX 5L V5.3.
Example 5-2 Enable dynamic tracking
Note: The fast fail and dynamic tracking attributes do not persist through an adapter delete and reconfigure. Thus, if the adapters are deleted and then configured back into the system, these attributes will be lost and will need to be reapplied.
# lsdev -Cc adapter | grep fcs
fcs0 Available 1Z-08 FC Adapter
fcs1 Available 1D-08 FC Adapter
You can display the worldwide port name (WWPN), along with other attributes including the firmware level, by using the command shown in Example 5-4. Note that the WWPN is represented as the Network Address.
Example 5-4 FC host adapter settings and WWPN
U0.1-P2-I4/Q1   FC Adapter

      Part Number.................00P4494
      EC Level....................A
      Serial Number...............1E3120A68D
      Manufacturer................001E
      Device Specific.(CC)........2765
      FRU Number..................00P4495
      Network Address.............10000000C932A7FB
      ROS Level and ID............02C03951
      Device Specific.(Z0)........2002606D
      Device Specific.(Z1)........00000000
      Device Specific.(Z2)........00000000
      Device Specific.(Z3)........03000909
PLATFORM SPECIFIC

Name:  fibre-channel
Model: LP9002
Node:  fibre-channel@1
Device Type:  fcp
Physical Location: U0.1-P2-I4/Q1
- Concurrent download of licensed machine code for supported storage devices
- Prevention of a single point of failure
The AIX MPIO device driver, along with SDDPCM, enhances the data availability and I/O load balancing of SVC volumes.
Note: For AIX hosts, use the Subsystem Device Driver Path Control Module (SDDPCM) as the multipath software rather than the legacy Subsystem Device Driver (SDD). Although SDD is still supported, a discussion of it is beyond the scope of this publication. For information regarding SDD, see the Multipath Subsystem Device Driver User's Guide, GC52-1309.
SDDPCM installation
Download the appropriate version of SDDPCM and install it using the standard AIX installation procedure. The latest SDDPCM software versions are available at the following website:
http://www-304.ibm.com/systems/support/supportsite.wss/supportresources?brandind=5000033&familyind=5329528&taskind=2
Check the driver readme file and make sure that your AIX system meets all prerequisites.
Example 5-5 shows the appropriate version of SDDPCM downloaded into the /tmp/sddpcm directory. From there, we extract it and initiate the inutoc command, which generates a .toc file that is needed by the installp command prior to installing SDDPCM. Finally, we initiate the installp command, which installs SDDPCM onto this AIX host.
Example 5-5 Installing SDDPCM on AIX
# ls -l
total 3232
-rw-r-----   1 root   system  1648640 Jul 15 13:24 devices.sddpcm.61.rte.tar
# tar -tvf devices.sddpcm.61.rte.tar
-rw-r----- 271001 449628 1638400 Oct 31 12:16:23 2007 devices.sddpcm.61.rte
# tar -xvf devices.sddpcm.61.rte.tar
x devices.sddpcm.61.rte, 1638400 bytes, 3200 media blocks.
# inutoc .
# ls -l
total 6432
-rw-r--r--   1 root   system      531 Jul 15 13:25 .toc
-rw-r-----   1 271001 449628 1638400 Oct 31 2007  devices.sddpcm.61.rte
-rw-r-----   1 root   system  1648640 Jul 15 13:24 devices.sddpcm.61.rte.tar
# installp -ac -d . all

Example 5-6 shows the lslpp command that can be used to check the version of SDDPCM currently installed.
Example 5-6 Checking SDDPCM device driver
devices.sddpcm.61.rte   2.2.0.0   COMMITTED   IBM SDD PCM for AIX V61
devices.sddpcm.61.rte   2.2.0.0   COMMITTED   IBM SDD PCM for AIX V61
Enabling the SDDPCM web interface is described in 5.14, Using SDDDSM, SDDPCM, and SDD web interface on page 224.
# lscfg -vl fcs* | egrep "fcs|Network"
fcs1    U0.1-P2-I4/Q1    FC Adapter
        Network Address.............10000000C932A865
        Physical Location: U0.1-P2-I4/Q1
fcs2    U0.1-P2-I5/Q1    FC Adapter
        Network Address.............10000000C94C8C1C
IBM_2145:ITSO-CLS2:admin>svcinfo lshost Atlantic
id 8
name Atlantic
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 10000000C94C8C1C
node_logged_in_count 2
state active
WWPN 10000000C932A865
node_logged_in_count 2
state active
IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Atlantic
id name SCSI_id vdisk_id wwpn vdisk_UID
8 Atlantic 0 14 10000000C94C8C1C 6005076801A180E90800000000000060
8 Atlantic 1 22 10000000C94C8C1C 6005076801A180E90800000000000061
8 Atlantic 2 23 10000000C94C8C1C 6005076801A180E90800000000000062
IBM_2145:ITSO-CLS2:admin>
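Mapping output like the list above lends itself to quick scripting. This hedged sketch pulls the vdisk_UID column from a captured copy of the output; the field positions assume the header shown:

```shell
# Extract the vdisk_UID (6th whitespace-separated column) from saved
# lshostvdiskmap output rows.
output='8 Atlantic 0 14 10000000C94C8C1C 6005076801A180E90800000000000060
8 Atlantic 1 22 10000000C94C8C1C 6005076801A180E90800000000000061
8 Atlantic 2 23 10000000C94C8C1C 6005076801A180E90800000000000062'

echo "$output" | awk '{print $6}'
```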
# lsdev -Cc disk
hdisk0 Available  16 Bit LVD SCSI Disk Drive
hdisk1 Available  16 Bit LVD SCSI Disk Drive
hdisk2 Available  16 Bit LVD SCSI Disk Drive
hdisk3 Available  MPIO FC 2145
hdisk4 Available  MPIO FC 2145
hdisk5 Available  MPIO FC 2145
The mkvg command can now be used to create a Volume Group with the three newly configured hdisks, as shown in Example 5-11.
Example 5-11 Running the mkvg command
# mkvg -y itsoaixvg hdisk3
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg
# mkvg -y itsoaixvg1 hdisk4
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg1
# mkvg -y itsoaixvg2 hdisk5
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg2

The lspv output now shows the new Volume Group label on each of the hdisks that were included in the Volume Groups, as seen in Example 5-12.
Example 5-12 Showing the vpath assignment into the Volume Group
# lspv
hdisk0   0009cdcaeb48d3a3   rootvg
hdisk1   0009cdcac26dbb7c   rootvg
# pcmpath query adapter

Active Adapters : 2

Adpt#    Name     State    Mode     Select  Errors  Paths  Active
    0    fscsi1   NORMAL   ACTIVE      407       0      6       6
    1    fscsi2   NORMAL   ACTIVE      425       0      6       6
The pcmpath query device command displays the current state of the devices and their paths. In Example 5-14, we can see the path State and Mode for each of the defined hdisks. Both adapters are in an optimal state, with State=NORMAL and Mode=ACTIVE. Additionally, an asterisk (*) displayed next to a path indicates an inactive path that is configured to the non-preferred SVC node in the I/O Group.
Example 5-14 SDDPCM commands that are used to check the availability of the devices
# pcmpath query device

Total Devices : 3

DEV#:   3  DEVICE NAME: hdisk3  TYPE: 2145  ALGORITHM: Load Balance
SERIAL: 6005076801A180E90800000000000060
==========================================================================
Path#      Adapter/Path Name        State     Mode     Select  Errors
    0      fscsi1/path0             OPEN      NORMAL      152       0
    1*     fscsi1/path1             OPEN      NORMAL       48       0
    2*     fscsi2/path2             OPEN      NORMAL       48       0
    3      fscsi2/path3             OPEN      NORMAL      160       0

DEV#:   4  DEVICE NAME: hdisk4  TYPE: 2145  ALGORITHM: Load Balance
SERIAL: 6005076801A180E90800000000000061
==========================================================================
Path#      Adapter/Path Name        State     Mode     Select  Errors
    0*     fscsi1/path0             OPEN      NORMAL       37       0
    1      fscsi1/path1             OPEN      NORMAL       66       0
    2      fscsi2/path2             OPEN      NORMAL       71       0
    3*     fscsi2/path3             OPEN      NORMAL       38       0
DEV#:   5  DEVICE NAME: hdisk5  TYPE: 2145  ALGORITHM: Load Balance
SERIAL: 6005076801A180E90800000000000062
==========================================================================
Path#      Adapter/Path Name        State     Mode     Select  Errors
    0      fscsi1/path0             OPEN      NORMAL       66       0
    1*     fscsi1/path1             OPEN      NORMAL       38       0
    2*     fscsi2/path2             OPEN      NORMAL       38       0
    3      fscsi2/path3             OPEN      NORMAL       70       0
#
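When many devices are present, summarizing the pcmpath output can be quicker than reading it device by device. This sketch tallies path states from a captured copy of the path lines; the column positions assume the layout shown above:

```shell
# Count paths by State (3rd column) in saved "pcmpath query device" lines;
# anything other than OPEN deserves a closer look.
paths='0 fscsi1/path0 OPEN NORMAL 66 0
1* fscsi1/path1 OPEN NORMAL 38 0
2* fscsi2/path2 OPEN NORMAL 38 0
3 fscsi2/path3 OPEN NORMAL 70 0'

echo "$paths" | awk '{count[$3]++} END {for (s in count) print s, count[s]}'
# → OPEN 4
```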
5.4.9 Creating and preparing volumes for use with AIX V6.1 and SDDPCM
The itsoaixvg Volume Group is created using hdisk3. A logical volume is then created in the Volume Group. Finally, the testlv1 file system is created and mounted on the /testlv1 mount point, as shown in Example 5-15.
Example 5-15 Host system new Volume Group and file system configuration
# lsvg -o
itsoaixvg2
itsoaixvg1
itsoaixvg
rootvg
# crfs -v jfs2 -g itsoaixvg -a size=3G -m /itsoaixvg -p rw -a agblksize=4096
File system created successfully.
3145428 kilobytes total disk space.
New File System size is 6291456
# lsvg -l itsoaixvg
itsoaixvg:
LV NAME    TYPE      LPs  PPs  PVs
loglv00    jfs2log     1    1    1
fslv00     jfs2      384  384    1
#
3. Display the current AIX configured capacity using the lspv hdisk command. The capacity is shown in the TOTAL PPs field in MB.
4. To expand the capacity of the SVC volume, use the svctask expandvdisksize command.
5. After the capacity of the volume has been expanded, AIX needs to update its configured capacity. To initiate the capacity update on AIX, use the chvg -g vg_name command, where vg_name is the Volume Group in which the expanded volume resides. If AIX does not return any messages, the command was successful and the volume changes in this Volume Group have been saved. If AIX cannot see any changes in the volumes, it returns an explanatory message.
6. Display the new AIX configured capacity using the lspv hdisk command; again, the capacity is shown in the TOTAL PPs field in MB.
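The capacity reported in TOTAL PPs converts to an absolute size by multiplying by the PP SIZE of the Volume Group (both values appear in the lspv output); the values below are hypothetical:

```shell
# lspv reports capacity as physical partitions; the usable size is
# TOTAL PPs x PP SIZE.
total_pps=639       # hypothetical TOTAL PPs value from lspv
pp_size_mb=16       # hypothetical PP SIZE in megabytes from lspv

capacity_mb=$((total_pps * pp_size_mb))
echo "configured capacity: ${capacity_mb} MB"   # 10224 MB, roughly 10 GB
```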
8. From the Configuration Settings menu, select Advanced Adapter Settings.
9. From the Advanced Adapter Settings menu, set the following parameters:
a. Execution throttle: 100
b. Luns per Target: 0
c. Enable LIP Reset: No
d. Enable LIP Full Login: Yes
e. Enable Target Reset: No
Note: If you are using a subsystem device driver (SDD) lower than 1.6, set Enable Target Reset to Yes.
f. Login Retry Count: 30
g. Port Down Retry Count: 15
h. Link Down Timeout: 30
i. Extended error logging: Disabled (might be enabled for debugging)
j. RIO Operation Mode: 0
k. Interrupt Delay Timer: 0
10. Press Esc to return to the Configuration Settings menu.
11. Press Esc.
12. From the Configuration settings modified window, select Save changes.
13. From the Fast!UTIL Options menu, select Select Host Adapter if more than one QLogic adapter is installed in your system.
14. Select the other host adapter and repeat steps 4 through 12.
15. Repeat this process for all installed QLogic adapters in your system. When you are done, press Esc to exit the QLogic BIOS and restart the server.
Note: The parameters that are shown in Table 5-1 correspond to the parameters in HBAnywhere.
See the following website for the latest information about SDD for Windows:
http://www-1.ibm.com/support/docview.wss?rs=540&context=ST52G7&dc=DA400&uid=ssg1S7001350&loc=en_US&cs=utf-8&lang=en
Important: Use SDD only on existing systems where you do not want to change from SDD to SDDDSM. New operating systems will only be supported with SDDDSM.
Before installing the SDD driver, the HBA driver has to be installed on your system. SDD requires the HBA SCSI port driver. After downloading the appropriate version of SDD from the website, extract the file and run setup.exe to install SDD. A command prompt window appears; answer Y (Figure 5-9) to install the driver.
After the setup has completed, answer Y again to reboot your system (Figure 5-10).
To check whether your SDD installation is complete, open the Windows Device Manager, expand SCSI and RAID Controllers, right-click Subsystem Device Driver Management, and click Properties (see Figure 5-11 on page 164).
The Subsystem Device Driver Management Properties window opens. Select the Driver tab, and make sure that you have installed the correct driver version (see Figure 5-12).
To check which levels are available, go to this website:
http://www-1.ibm.com/support/docview.wss?rs=540&context=ST52G7&dc=DA400&uid=ssg1S7001350&loc=en_US&cs=utf-8&langw=en#WindowsSDDDSM
To download SDDDSM, go to this website:
http://www-1.ibm.com/support/docview.wss?rs=540&context=ST52G7&dc=D430&uid=ssg1S4000350&loc=en_US&cs=utf-8&lang=en
The installation procedure for SDDDSM is the same as for SDD, but remember that you have to use the StorPort HBA driver instead of the SCSI driver. We describe the SDD installation in 5.5.6, Installing the SDD driver on Windows on page 162. After completing the installation, you will see the Microsoft MPIO entries in Device Manager (Figure 5-13 on page 166).
We describe the SDDDSM installation for Windows Server 2008 in 5.7, Example configuration - attaching an SVC to Windows Server 2008 host on page 176.
5.6 Discovering assigned volumes in Windows Server 2000 and Windows Server 2003
In this section, we describe how to discover assigned volumes in Windows Server 2000 and Windows Server 2003. The figures show a Windows Server 2003 host with SDDDSM installed. Discovering the disks in Windows Server 2000 or with SDD is the same procedure. Before adding a new volume from the SVC, the Windows Server 2003 host system had the configuration that is shown in Figure 5-14 on page 167, with only local disks.
Figure 5-14 Windows Server 2003 host system before adding a new volume from SVC
We can verify that the host's WWPNs are logged in to the SVC for the host named Senegal by entering the following command (Example 5-16):

svcinfo lshost Senegal
Example 5-16 Host information for Senegal
IBM_2145:ITSO-CLS2:admin>svcinfo lshost Senegal
id 1
name Senegal
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B89B9C0
node_logged_in_count 2
state active
WWPN 210000E08B89CCC2
node_logged_in_count 2
state active

The configuration of the Senegal host, the Senegal_bas0001 volume, and the mapping between the host and the volume are defined in the SVC, as shown in Example 5-17. In our example, the Senegal_bas0002 and Senegal_bas0003 volumes have the same configuration as the Senegal_bas0001 volume.
Example 5-17 Host mapping: Senegal
IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Senegal
id name    SCSI_id vdisk_id vdisk_name      wwpn             vdisk_UID
1  Senegal 0       7        Senegal_bas0001 210000E08B89B9C0 6005076801A180E9080000000000000F
1  Senegal 1       8        Senegal_bas0002 210000E08B89B9C0 6005076801A180E90800000000000010
1  Senegal 2       9        Senegal_bas0003 210000E08B89B9C0 6005076801A180E90800000000000011
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk Senegal_bas0001
id 7
name Senegal_bas0001
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_0_DS45
capacity 10.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801A180E9080000000000000F
throttling 0
preferred_node_id 3
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_0_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize

We can also obtain the serial number of the volumes by entering the following command (Example 5-18):

svcinfo lsvdiskhostmap Senegal_bas0001
Example 5-18 Volume serial number: Senegal_bas0001
id name            SCSI_id host_id host_name wwpn             vdisk_UID
7  Senegal_bas0001 0       1       Senegal   210000E08B89B9C0 6005076801A180E9080000000000000F
7  Senegal_bas0001 0       1       Senegal   210000E08B89CCC2 6005076801A180E9080000000000000F
After the necessary drivers are installed and the Rescan Disks operation completes, the new disks appear in the Computer Management window, as shown in Figure 5-15.
Figure 5-15 Windows Server 2003 host system with three new volumes from SVC
In Windows Device Manager, the disks are shown as IBM 2145 SCSI Disk Device (Figure 5-16 on page 170). The number of IBM 2145 SCSI Disk Devices that you see is equal to:

(number of volumes) x (number of paths per I/O Group per HBA) x (number of HBAs)

The IBM 2145 Multi-Path Disk Devices are the devices that are created by the multipath driver (Figure 5-16 on page 170). The number of these devices is equal to the number of volumes that are presented to the host.
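The device-count formula can be checked with simple shell arithmetic. This is only an illustrative sketch; the input values (three volumes, two paths per I/O Group per HBA, two HBAs) are taken from this chapter's example configuration.

```shell
# Expected number of IBM 2145 SCSI Disk Devices in Device Manager,
# using the formula from the text. The inputs match this chapter's example.
volumes=3         # volumes mapped to the host
paths_per_hba=2   # paths per I/O Group per HBA (recommended zoning)
hbas=2            # HBAs installed in the host

scsi_devices=$((volumes * paths_per_hba * hbas))
mpio_devices=$volumes   # one Multi-Path Disk Device per volume

echo "SCSI disk devices: $scsi_devices"   # 3 x 2 x 2 = 12
echo "Multipath devices: $mpio_devices"
```

With the three volumes mapped to the Senegal host, Device Manager therefore shows 12 IBM 2145 SCSI Disk Devices and 3 Multi-Path Disk Devices.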
Figure 5-16 Windows Server 2003 Device Manager with assigned volumes
When following the SAN zoning recommendation, this calculation gives us, for one volume and a host with two HBAs:

(number of volumes) x (number of paths per I/O Group per HBA) x (number of HBAs) = 1 x 2 x 2 = 4 paths

You can verify that all of the paths are available by selecting Start -> All Programs -> Subsystem Device Driver (DSM) -> Subsystem Device Driver (DSM). The SDD (DSM) command-line interface appears. Enter the following command to see which paths are available to your system (Example 5-19).
Example 5-19 Datapath query device
Microsoft Windows [Version 5.2.3790]
(C) Copyright 1985-2003 Microsoft Corp.

C:\Program Files\IBM\SDDDSM>datapath query device

Total Devices : 3

DEV#: 0  DEVICE NAME: Disk1 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002A
============================================================================
Path#    Adapter/Hard Disk            State   Mode    Select  Errors
    0    Scsi Port2 Bus0/Disk1 Part0  OPEN    NORMAL      47       0
    1    Scsi Port2 Bus0/Disk1 Part0  OPEN    NORMAL       0       0
    2    Scsi Port3 Bus0/Disk1 Part0  OPEN    NORMAL       0       0
    3    Scsi Port3 Bus0/Disk1 Part0  OPEN    NORMAL      28       0

DEV#: 1  DEVICE NAME: Disk2 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000010
============================================================================
Path#    Adapter/Hard Disk            State   Mode    Select  Errors
    0    Scsi Port2 Bus0/Disk2 Part0  OPEN    NORMAL       0       0
    1    Scsi Port2 Bus0/Disk2 Part0  OPEN    NORMAL     162       0
    2    Scsi Port3 Bus0/Disk2 Part0  OPEN    NORMAL     155       0
    3    Scsi Port3 Bus0/Disk2 Part0  OPEN    NORMAL       0       0

DEV#: 2  DEVICE NAME: Disk3 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000011
============================================================================
Path#    Adapter/Hard Disk            State   Mode    Select  Errors
    0    Scsi Port2 Bus0/Disk3 Part0  OPEN    NORMAL      51       0
    1    Scsi Port2 Bus0/Disk3 Part0  OPEN    NORMAL       0       0
    2    Scsi Port3 Bus0/Disk3 Part0  OPEN    NORMAL       0       0
    3    Scsi Port3 Bus0/Disk3 Part0  OPEN    NORMAL      25       0

C:\Program Files\IBM\SDDDSM>

Note: All path states have to be OPEN. The path state can be OPEN or CLOSE. If one path state is CLOSE, it means that the system is missing a path that it saw during startup. If you restart your system, the CLOSE paths are removed from this view.
To expand a volume in use on Windows Server 2000 and Windows Server 2003, we used DiskPart. The DiskPart tool is part of Windows Server 2003; for other Windows versions, you can download it free of charge from Microsoft. DiskPart was developed by Microsoft to ease the administration of storage. It is a command-line interface that you can use to manage disks, partitions, and volumes by using scripts or direct input on the command line. You can list disks and volumes, select them, and, after selecting them, get more detailed information, create partitions, extend volumes, and more. For more information, see the Microsoft website:

http://www.microsoft.com

Or see the following website:

http://support.microsoft.com/default.aspx?scid=kb;en-us;304736&sd=tech

The following discussion shows an example of how to expand an SVC volume on a Windows Server 2003 host. To list a volume's size, use the svcinfo lsvdisk <VDisk_name> command. For Senegal_bas0001, before expanding the volume, this command shows that the capacity is 10 GB, and it also shows the vdisk_UID (Example 5-17 on page 167). To find which disk this volume corresponds to on the Windows Server 2003 host, we use the datapath query device SDD command on the Windows host (Figure 5-17). We can see that the serial 6005076801A180E9080000000000000F of Disk1 on the Windows host (Figure 5-17) matches the volume ID of Senegal_bas0001 (Example 5-17 on page 167). To see the size of the volume on the Windows host, we use Disk Manager, as shown in Figure 5-17.
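Matching the vdisk_UID to the SDD serial can also be scripted against saved datapath query device output. The following is a minimal sketch; the embedded text is a trimmed sample based on this example, not live command output.

```shell
# Identify which Windows disk corresponds to an SVC volume by matching
# the volume's vdisk_UID against the SERIAL lines in saved SDD output.
# The sample below is a trimmed copy of "datapath query device" output.
vdisk_uid="6005076801A180E9080000000000000F"

datapath_output='DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000000F
DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000010'

# grep -B1 keeps the DEV# line that precedes the matching SERIAL line
disk=$(printf '%s\n' "$datapath_output" |
  grep -B1 "SERIAL: $vdisk_uid" | grep -o 'Disk[0-9]*' | head -n1)
echo "vdisk_UID $vdisk_uid is $disk"
```

In practice, you would capture the real output first, for example with datapath query device > paths.txt on the Windows host.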
This window shows that the volume size is 10 GB. To expand the volume on the SVC, we use the svctask expandvdisksize command to increase the capacity on the volume. In this example, we expand the volume by 1 GB (Example 5-20).
Example 5-20 svctask expandvdisksize command
IBM_2145:ITSO-CLS2:admin>svctask expandvdisksize -size 1 -unit gb Senegal_bas0001
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk Senegal_bas0001
id 7
name Senegal_bas0001
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_0_DS45
capacity 11.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801A180E9080000000000000F
throttling 0
preferred_node_id 3
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_0_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 11.00GB
real_capacity 11.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize

To check that the volume has been expanded, we use the svcinfo lsvdisk command. In Example 5-20, we can see that the Senegal_bas0001 volume has been expanded to 11 GB in capacity.
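The before-and-after capacity check can also be scripted against saved lsvdisk output. This is only a sketch; the two capacity lines are trimmed from the listings in this example.

```shell
# Compute how much a volume grew by comparing the "capacity" field of
# svcinfo lsvdisk output captured before and after the expansion.
# The two lines below are trimmed from this example's listings.
before='capacity 10.0GB'
after='capacity 11.0GB'

# Extract the numeric GB value from a "capacity <n>GB" line
get_gb() { printf '%s\n' "$1" | sed -n 's/^capacity \([0-9.]*\)GB$/\1/p'; }

grew=$(awk -v a="$(get_gb "$after")" -v b="$(get_gb "$before")" 'BEGIN{print a - b}')
echo "Senegal_bas0001 grew by $grew GB"
```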
Chapter 5. Host configuration
After performing a Disk Rescan in Windows, you will see the new unallocated space in Windows Disk Management, as shown in Figure 5-18.
This window shows that Disk1 now has 1 GB unallocated new capacity. To make this capacity available for the file system, use the following commands, as shown in Example 5-21:

diskpart        Starts DiskPart in a DOS prompt
list volume     Shows you all available volumes
select volume   Selects the volume to expand
detail volume   Displays details for the selected volume, including the unallocated capacity
extend          Extends the volume to the available unallocated space
C:\>diskpart
Microsoft DiskPart version 5.2.3790.3959
Copyright (C) 1999-2001 Microsoft Corporation.
On computer: SENEGAL

DISKPART> list volume

  Volume ###  Ltr  Label        Fs    Type       Size   Status   Info
  ----------  ---  -----------  ----  ---------  -----  -------  ------
  Volume 0    C                 NTFS  Partition  75 GB  Healthy  System
  Volume 1    S    SVC_Senegal  NTFS  Partition  10 GB  Healthy
  Volume 2    D                       DVD-ROM    0 B    Healthy

DISKPART> select volume 1

Volume 1 is the selected volume.

DISKPART> detail volume

  Disk ###  Status  Size   Free     Dyn  Gpt
  --------  ------  -----  -------  ---  ---
* Disk 1    Online  11 GB  1020 MB

Readonly               : No
Hidden                 : No
No Default Drive Letter: No
Shadow Copy            : No

DISKPART> extend

DiskPart successfully extended the volume.

DISKPART> detail volume

  Disk ###  Status  Size   Free  Dyn  Gpt
  --------  ------  -----  ----  ---  ---
* Disk 1    Online  11 GB  0 B

Readonly               : No
Hidden                 : No
No Default Drive Letter: No
Shadow Copy            : No

After extending the volume, the detail volume command shows that there is no free capacity on the volume anymore. The list volume command shows the file system size. The Disk Management window also shows the new disk size; see Figure 5-19.
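The interactive DiskPart session can also be scripted. The sketch below generates a script file; the file name extend_volume.txt and the volume number 1 are taken from this example and are assumptions. On the Windows host you would then run diskpart /s extend_volume.txt.

```shell
# Generate a DiskPart script that repeats the interactive session above.
# Volume number 1 matches this chapter's example; adjust it for your host.
cat > extend_volume.txt <<'EOF'
list volume
select volume 1
detail volume
extend
detail volume
EOF

echo "Wrote $(wc -l < extend_volume.txt) DiskPart commands to extend_volume.txt"
```

Scripting the extend is useful when the same expansion has to be repeated on many hosts or scheduled in a change window.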
This example used a Windows Basic Disk. Dynamic disks can be expanded by expanding the underlying SVC volume. The new space appears as unallocated space at the end of the disk.
In this case, you do not need to use the DiskPart tool. Instead, you can use the Windows Disk Management functions to allocate the new space. Expansion works regardless of the volume type (simple, spanned, mirrored, and so on) on the disk. Dynamic disks can be expanded without stopping I/O in most cases.

Important: Never try to upgrade your Basic Disk to a Dynamic Disk, or vice versa, without backing up your data. This operation is disruptive to the data due to a change in the position of the logical block address (LBA) on the disks.
5. Right-click the HBA, and select Update Driver Software (Figure 5-20).
7. Enter the path to the extracted QLogic driver, and click Next (Figure 5-22 on page 178).
9. When the driver update is complete, click Close to exit the wizard (Figure 5-24).
10. Repeat steps 1 to 8 for all of the HBAs that are installed in the system.
5. After the SDDDSM Setup is finished, type Y and press Enter to restart your system. After the reboot, the SDDDSM installation is complete. You can verify the installation completion in Device Manager, because the SDDDSM device will appear (Figure 5-26 on page 180), and the SDDDSM tools will have been installed (Figure 5-27 on page 180).
IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Diomede
id name    SCSI_id vdisk_id vdisk_name   wwpn             vdisk_UID
0  Diomede 0       20       Diomede_0001 210000E08B0541BC 6005076801A180E9080000000000002B
0  Diomede 1       21       Diomede_0002 210000E08B0541BC 6005076801A180E9080000000000002C
0  Diomede 2       22       Diomede_0003 210000E08B0541BC 6005076801A180E9080000000000002D
Perform the following steps to use the devices on your Windows Server 2008 host: 1. Click Start, and click Run. 2. Enter the diskmgmt.msc command, and click OK. The Disk Management window opens. 3. Select Action, and click Rescan Disks (Figure 5-28).
4. The SVC disks will now appear in the Disk Management window (Figure 5-29 on page 182).
After you have assigned the SVC disks, they are also available in Device Manager. The three assigned drives are represented by SDDDSM/MPIO as IBM-2145 Multipath disk devices in the Device Manager (Figure 5-30).
5. To check that the disks are available, select Start -> All Programs -> Subsystem Device Driver DSM, and click Subsystem Device Driver DSM (Figure 5-31). The SDDDSM Command Line Utility appears.
Figure 5-31 Windows Server 2008 Subsystem Device Driver DSM utility
6. Enter the datapath query device command and press Enter (Example 5-23). This command will display all of the disks and the available paths, including their states.
Example 5-23 Windows Server 2008 SDDDSM command-line utility
DEV#: 0  DEVICE NAME: Disk1 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002B
============================================================================
Path#    Adapter/Hard Disk            State   Mode    Select  Errors
    0    Scsi Port2 Bus0/Disk1 Part0  OPEN    NORMAL       0       0
    1    Scsi Port2 Bus0/Disk1 Part0  OPEN    NORMAL    1429       0
    2    Scsi Port3 Bus0/Disk1 Part0  OPEN    NORMAL    1456       0
    3    Scsi Port3 Bus0/Disk1 Part0  OPEN    NORMAL       0       0

DEV#: 1  DEVICE NAME: Disk2 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002C
============================================================================
Path#    Adapter/Hard Disk            State   Mode    Select  Errors
    0    Scsi Port2 Bus0/Disk2 Part0  OPEN    NORMAL    1520       0
    1    Scsi Port2 Bus0/Disk2 Part0  OPEN    NORMAL       0       0
    2    Scsi Port3 Bus0/Disk2 Part0  OPEN    NORMAL       0       0
    3    Scsi Port3 Bus0/Disk2 Part0  OPEN    NORMAL    1517       0

DEV#: 2  DEVICE NAME: Disk3 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002D
============================================================================
Path#    Adapter/Hard Disk            State   Mode    Select  Errors
    0    Scsi Port2 Bus0/Disk3 Part0  OPEN    NORMAL      27       0
    1    Scsi Port2 Bus0/Disk3 Part0  OPEN    NORMAL    1396       0
    2    Scsi Port3 Bus0/Disk3 Part0  OPEN    NORMAL    1459       0
    3    Scsi Port3 Bus0/Disk3 Part0  OPEN    NORMAL       0       0

C:\Program Files\IBM\SDDDSM>

SAN zoning: When following the SAN zoning guidance, we get this result, using one volume and a host with two HBAs: (number of volumes) x (number of paths per I/O Group per HBA) x (number of HBAs) = 1 x 2 x 2 = 4 paths.

7. Right-click the disk in Disk Management, and select Online to place the disk online (Figure 5-32).
8. Repeat step 7 for all of your attached SVC disks. 9. Right-click one disk again, and select Initialize Disk (Figure 5-33).
10. Mark all of the disks that you want to initialize, and click OK (Figure 5-34).
11. Right-click the unallocated disk space, and select New Simple Volume (Figure 5-35).
12. The New Simple Volume Wizard window opens. Click Next.
13. Enter a disk size, and click Next (Figure 5-36).
16. Click Finish, and repeat this step for every SVC disk on your host system (Figure 5-39).
Figure 5-17 on page 172 shows the Disk Manager before removing the disk. We will remove Disk 1. To find the correct volume information, we find the Serial/UID number using SDD (Example 5-24).
Example 5-24 Removing SVC disk from the Windows server
DEV#: 0  DEVICE NAME: Disk1 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000000F
============================================================================
Path#    Adapter/Hard Disk            State   Mode    Select  Errors
    0    Scsi Port2 Bus0/Disk1 Part0  OPEN    NORMAL    1471       0
    1    Scsi Port2 Bus0/Disk1 Part0  OPEN    NORMAL       0       0
    2    Scsi Port3 Bus0/Disk1 Part0  OPEN    NORMAL       0       0
    3    Scsi Port3 Bus0/Disk1 Part0  OPEN    NORMAL    1324       0

DEV#: 1  DEVICE NAME: Disk2 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000010
============================================================================
Path#    Adapter/Hard Disk            State   Mode    Select  Errors
    0    Scsi Port2 Bus0/Disk2 Part0  OPEN    NORMAL      20       0
    1    Scsi Port2 Bus0/Disk2 Part0  OPEN    NORMAL      94       0
    2    Scsi Port3 Bus0/Disk2 Part0  OPEN    NORMAL      55       0
    3    Scsi Port3 Bus0/Disk2 Part0  OPEN    NORMAL       0       0

DEV#: 2  DEVICE NAME: Disk3 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000011
============================================================================
Path#    Adapter/Hard Disk            State   Mode    Select  Errors
    0    Scsi Port2 Bus0/Disk3 Part0  OPEN    NORMAL     100       0
    1    Scsi Port2 Bus0/Disk3 Part0  OPEN    NORMAL       0       0
    2    Scsi Port3 Bus0/Disk3 Part0  OPEN    NORMAL       0       0
    3    Scsi Port3 Bus0/Disk3 Part0  OPEN    NORMAL      69       0
Knowing the Serial/UID of the volume and the host name Senegal, we find the host mapping to remove by using the lshostvdiskmap command on the SVC, and then we remove the actual host mapping (Example 5-25).
Example 5-25 Finding and removing the host mapping
IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Senegal
id name    SCSI_id vdisk_id vdisk_name      wwpn             vdisk_UID
1  Senegal 0       7        Senegal_bas0001 210000E08B89B9C0 6005076801A180E9080000000000000F
1  Senegal 1       8        Senegal_bas0002 210000E08B89B9C0 6005076801A180E90800000000000010
1  Senegal 2       9        Senegal_bas0003 210000E08B89B9C0 6005076801A180E90800000000000011

IBM_2145:ITSO-CLS2:admin>svctask rmvdiskhostmap -host Senegal Senegal_bas0001

IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Senegal
id name    SCSI_id vdisk_id vdisk_name      wwpn             vdisk_UID
1  Senegal 1       8        Senegal_bas0002 210000E08B89B9C0 6005076801A180E90800000000000010
1  Senegal 2       9        Senegal_bas0003 210000E08B89B9C0 6005076801A180E90800000000000011
Here, we can see that the mapping for the volume has been removed. On the server, we then perform a disk rescan in Disk Management, and we now see that the correct disk (Disk1) has been removed, as shown in Figure 5-40.
SDD also shows us that the status for all paths to Disk1 has changed to CLOSE, because the disk is not available (Example 5-26 on page 190).
DEV#: 0  DEVICE NAME: Disk1 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000000F
============================================================================
Path#    Adapter/Hard Disk            State   Mode    Select  Errors
    0    Scsi Port2 Bus0/Disk1 Part0  CLOSE   NORMAL    1471       0
    1    Scsi Port2 Bus0/Disk1 Part0  CLOSE   NORMAL       0       0
    2    Scsi Port3 Bus0/Disk1 Part0  CLOSE   NORMAL       0       0
    3    Scsi Port3 Bus0/Disk1 Part0  CLOSE   NORMAL    1324       0

DEV#: 1  DEVICE NAME: Disk2 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000010
============================================================================
Path#    Adapter/Hard Disk            State   Mode    Select  Errors
    0    Scsi Port2 Bus0/Disk2 Part0  OPEN    NORMAL      20       0
    1    Scsi Port2 Bus0/Disk2 Part0  OPEN    NORMAL     124       0
    2    Scsi Port3 Bus0/Disk2 Part0  OPEN    NORMAL      72       0
    3    Scsi Port3 Bus0/Disk2 Part0  OPEN    NORMAL       0       0

DEV#: 2  DEVICE NAME: Disk3 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000011
============================================================================
Path#    Adapter/Hard Disk            State   Mode    Select  Errors
    0    Scsi Port2 Bus0/Disk3 Part0  OPEN    NORMAL     134       0
    1    Scsi Port2 Bus0/Disk3 Part0  OPEN    NORMAL       0       0
    2    Scsi Port3 Bus0/Disk3 Part0  OPEN    NORMAL       0       0
    3    Scsi Port3 Bus0/Disk3 Part0  OPEN    NORMAL      82       0

The disk (Disk1) is now removed from the server. However, to remove the SDD information for the disk, we need to reboot the server; this reboot can wait until a more suitable time.
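Checking for CLOSE paths can be automated against saved datapath output. This is only a sketch; the sample text below is trimmed from this example, not a live command.

```shell
# Count paths that have gone to the CLOSE state, as happens after a volume
# is unmapped. The sample is a trimmed copy of "datapath query device"
# output from this example.
sample='DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED
0 Scsi Port2 Bus0/Disk1 Part0 CLOSE NORMAL 1471 0
1 Scsi Port2 Bus0/Disk1 Part0 CLOSE NORMAL 0 0
DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED
0 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 20 0'

closed=$(printf '%s\n' "$sample" | grep -c ' CLOSE ')
echo "Paths in CLOSE state: $closed"
```

A nonzero count after an intentional unmap is expected; a nonzero count at any other time indicates a missing path that warrants investigation.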
For more information about the CLI, see Chapter 9, SAN Volume Controller operations using the command-line interface on page 439.
Before you begin, you must have experience with, or knowledge of, administering a Windows operating system. And you must also have experience with, or knowledge of, administering a SAN Volume Controller. You will need to complete the following tasks:

- Verify that the system requirements are met.
- Install the SAN Volume Controller Console if it is not already installed.
- Install the IBM System Storage hardware provider.
- Verify the installation.
- Create a free pool of volumes and a reserved pool of volumes on the SAN Volume Controller.
5.9.2 System requirements for the IBM System Storage hardware provider
Ensure that your system satisfies the following requirements before you install the IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software on the Windows operating system:

- SAN Volume Controller and Master Console Version 2.1.0 or later with FlashCopy enabled. You must install the SAN Volume Controller Console before you install the IBM System Storage hardware provider.
- IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software Version 3.1 or later.
Figure 5-41 IBM System Storage Support for Microsoft Volume Shadow Copy installation
5. The License Agreement window opens (Figure 5-42). Read the license agreement information, select whether you accept the terms of the license agreement, and click Next. If you do not accept the terms, you cannot continue with the installation.
Figure 5-42 IBM System Storage Support for Microsoft Volume Shadow Copy installation
6. The Choose Destination Location window opens (Figure 5-43). Accept the default directory where the setup program will install the files, or click Change to select another directory. Then, click Next.
Figure 5-43 IBM System Storage Support for Microsoft Volume Shadow Copy installation
Figure 5-44 IBM System Storage Support for Microsoft Volume Shadow Copy installation
8. From the next window, select the required CIM server, or select Enter the CIM Server address manually, and click Next (Figure 5-45).
Figure 5-45 IBM System Storage Support for Microsoft Volume Shadow Copy installation
9. The Enter CIM Server Details window opens. Enter the following information in the fields (Figure 5-46):
a. In the CIM Server Address field, type the name of the server where the SAN Volume Controller Console is installed.
b. In the CIM User field, type the user name that the IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software will use to gain access to the server where the SAN Volume Controller Console is installed.
c. In the CIM Password field, type the password for that user name.
d. Click Next.
Figure 5-46 IBM System Storage Support for Microsoft Volume Shadow Copy installation
10. In the next window, click Finish. If necessary, the InstallShield Wizard prompts you to restart the system (Figure 5-47 on page 196).
Figure 5-47 IBM System Storage Support for Microsoft Volume Shadow Copy installation
Additional information: If these settings change after installation, you can use the ibmvcfg.exe tool to update the Microsoft Volume Shadow Copy and Virtual Disk Services software with the new settings. If you do not have the CIM Agent server, port, or user information, contact your CIM Agent administrator.
C:\Documents and Settings\Administrator>vssadmin list providers
vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
(C) Copyright 2001 Microsoft Corp.

Provider name: 'Microsoft Software Shadow Copy provider 1.0'
   Provider type: System
   Provider Id: {b5946137-7b9f-4925-af80-51abd60b20d5}
   Version: 1.0.0.7
Provider name: 'IBM System Storage Volume Shadow Copy Service Hardware Provider'
   Provider type: Hardware
   Provider Id: {d90dd826-87cf-42ce-a88d-b32caa82025b}
   Version: 3.1.0.1108

If you can perform all of these verification tasks successfully, the IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software was successfully installed on the Windows server.
IBM_2145:ITSO-CLS2:admin>svctask mkhost -name VSS_FREE -hbawwpn 5000000000000000 -force
Host, id [2], successfully created

2. Create a virtual host for the reserved pool of volumes. You can use the default name VSS_RESERVED or specify another name. Associate the host with the WWPN 5000000000000001 (14 zeroes); see Example 5-29.
Example 5-29 Creating an mkhost for the reserved pool
IBM_2145:ITSO-CLS2:admin>svctask mkhost -name VSS_RESERVED -hbawwpn 5000000000000001 -force
Host, id [3], successfully created

3. Map the logical units (volumes) to the free pool of volumes. The volumes cannot be mapped to any other hosts. If you already have volumes created for the free pool of volumes, you must assign the volumes to the free pool.

4. Create host mappings between the volumes selected in step 3 and the VSS_FREE host to add the volumes to the free pool. Alternatively, you can use the ibmvcfg add command to add volumes to the free pool; see Example 5-30 on page 198.
IBM_2145:ITSO-CLS2:admin>svctask mkvdiskhostmap -host VSS_FREE msvc0001
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO-CLS2:admin>svctask mkvdiskhostmap -host VSS_FREE msvc0002
Virtual Disk to Host map, id [1], successfully created

5. Verify that the volumes have been mapped. If you do not use the default WWPNs 5000000000000000 and 5000000000000001, you must configure the IBM System Storage hardware provider with the WWPNs; see Example 5-31.
Example 5-31 Verify hosts
IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap VSS_FREE
id name     SCSI_id vdisk_id vdisk_name wwpn             vdisk_UID
2  VSS_FREE 0       10       msvc0001   5000000000000000 6005076801A180E90800000000000012
2  VSS_FREE 1       11       msvc0002   5000000000000000 6005076801A180E90800000000000013
C:\Program Files\IBM\Hardware Provider for VSS-VDS>ibmvcfg.exe
IBM System Storage VSS Provider Configuration Tool Commands
----------------------------------------
ibmvcfg.exe <command> <command arguments>
Commands:
   /h | /help | -? | /?
   showcfg
   listvols <all|free|unassigned>
   add <volume serial number list> (separated by spaces)
   rem <volume serial number list> (separated by spaces)
Configuration:
   set user <CIMOM user name>
   set password <CIMOM password>
   set trace [0-7]
   set trustpassword <trustpassword>
   set truststore <truststore location>
   set usingSSL <YES | NO>
   set vssFreeInitiator <WWPN>
   set vssReservedInitiator <WWPN>
   set FlashCopyVer <1 | 2> (only applies to ESS)
   set cimomPort <PORTNUM>
   set cimomHost <Hostname>
   set namespace <Namespace>
   set targetSVC <svc_cluster_ip>
   set backgroundCopy <0-100>

Table 5-4 lists the available commands.
Table 5-4 Available ibmvcfg.util commands

ibmvcfg showcfg
   Lists the current settings. Example: ibmvcfg showcfg
ibmvcfg set username <username>
   Sets the user name to access the SAN Volume Controller Console. Example: ibmvcfg set username Dan
ibmvcfg set password <password>
   Sets the password of the user name that will access the SAN Volume Controller Console.
ibmvcfg set targetSVC <svc_cluster_ip>
   Specifies the IP address of the SAN Volume Controller on which the volumes are located when volumes are moved to and from the free pool with the ibmvcfg add and ibmvcfg rem commands. The IP address is overridden if you use the -s flag with the ibmvcfg add and ibmvcfg rem commands.
ibmvcfg set backgroundCopy <0-100>
   Sets the background copy rate for FlashCopy.
ibmvcfg set usingSSL <YES | NO>
   Specifies whether to use the Secure Sockets Layer protocol to connect to the SAN Volume Controller Console.
ibmvcfg set cimomPort <PORTNUM>
   Specifies the SAN Volume Controller Console port number. The default value is 5999.
ibmvcfg set cimomHost <Hostname>
   Sets the name of the server where the SAN Volume Controller Console is installed.
ibmvcfg set namespace <Namespace>
   Specifies the namespace value that the Master Console is using. The default value is \root\ibm.
ibmvcfg set vssFreeInitiator <WWPN>
   Specifies the WWPN of the host. The default value is 5000000000000000. Modify this value only if there is a host already in your environment with a WWPN of 5000000000000000.
ibmvcfg set vssReservedInitiator <WWPN>
   Specifies the WWPN of the host. The default value is 5000000000000001. Modify this value only if there is a host already in your environment with a WWPN of 5000000000000001.
ibmvcfg listvols
   Lists all volumes, including information about the size, location, and host mappings.
ibmvcfg listvols all
   Lists all volumes, including information about the size, location, and host mappings.
ibmvcfg listvols free
   Lists the volumes that are currently in the free pool.
ibmvcfg listvols unassigned
   Lists the volumes that are currently not mapped to any hosts.
ibmvcfg add
   Adds one or more volumes to the free pool of volumes. Use the -s parameter to specify the IP address of the SAN Volume Controller where the volumes are located. The -s parameter overrides the default IP address that is set with the ibmvcfg set targetSVC command.
ibmvcfg rem
   Removes one or more volumes from the free pool of volumes. Use the -s parameter to specify the IP address of the SAN Volume Controller where the volumes are located. The -s parameter overrides the default IP address that is set with the ibmvcfg set targetSVC command.
3. Install the supported HBA driver/firmware and upgrade the kernel if required, as described in 5.10.2, Configuration information on page 201. 4. Connect the Linux server FC host adapters to the switches. 5. Configure the switches (zoning) if needed. 6. Install SDD for Linux, as described in 5.10.5, Multipathing in Linux on page 202. 7. Configure the host, volumes, and host mapping in the SAN Volume Controller. 8. Rescan for LUNs on the Linux server to discover the volumes that were created on the SVC.
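Step 8's rescan can be performed by writing to the sysfs scan files. The following is a hedged sketch: the sysfs root is a parameter so the loop can be exercised outside a live host, and on many distributions the rescan-scsi-bus.sh script from the sg3_utils package performs the same task.

```shell
# Rescan SCSI hosts so Linux discovers newly mapped SVC volumes.
# The sysfs root is parameterized (default /sys/class/scsi_host) so the
# function can be tested against a scratch directory.
rescan_scsi_hosts() {
  root="${1:-/sys/class/scsi_host}"
  for host in "$root"/host*; do
    [ -e "$host/scan" ] || continue
    # "- - -" is the wildcard for channel, target ID, and LUN
    echo "- - -" > "$host/scan"
    echo "rescanned ${host##*/}"
  done
}
```

On a live host, you would simply run rescan_scsi_hosts with no argument (as root) after the volumes are mapped on the SVC.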
2. Rebuild the RAM disk that is associated with the kernel being used by using one of the following commands:
- If you are running on a SUSE Linux Enterprise Server operating system, run the mk_initrd command.
- If you are running on a Red Hat Enterprise Linux operating system, run the mkinitrd command, and then restart.
Installing SDD
This section describes how to install SDD for older distributions. Before performing these steps, always check for the currently supported levels, as described in 5.10.2, Configuration information on page 201. The cat /proc/scsi/scsi command displayed in Example 5-33 shows the devices that the SCSI driver has probed. In our configuration, we have two HBAs installed in our server, and we configured the zoning to access our volume from four paths.
Example 5-33 cat /proc/scsi/scsi command example
[root@diomede sdd]# cat /proc/scsi/scsi
Attached devices:
Host: scsi4 Channel: 00 Id: 00 Lun: 00
  Vendor: IBM      Model: 2145             Rev: 0000
  Type:   Unknown                          ANSI SCSI revision: 04
Host: scsi5 Channel: 00 Id: 00 Lun: 00
  Vendor: IBM      Model: 2145             Rev: 0000
  Type:   Unknown                          ANSI SCSI revision: 04
[root@diomede sdd]#
The rpm -ivh IBMsdd-1.6.3.0-5.i686.rhel4.rpm command installs the package, as shown in Example 5-34.
Example 5-34 rpm command example
[root@Palau sdd]# rpm -ivh IBMsdd-1.6.3.0-5.i686.rhel4.rpm
Preparing...                ########################################### [100%]
   1:IBMsdd                 ########################################### [100%]
Added following line to /etc/inittab:
srv:345:respawn:/opt/IBMsdd/bin/sddsrv > /dev/null 2>&1
[root@Palau sdd]#

To manually load and configure SDD on Linux, use the service sdd start command (SUSE Linux users can use the sdd start command). If you are not running a supported kernel, you will get an error message. If your kernel is supported, you see an OK success message, as shown in Example 5-35 on page 203.
[root@Palau sdd]# sdd start
Starting IBMsdd driver load:                      [  OK  ]
Issuing killall sddsrv to trigger respawn...
Starting IBMsdd configuration:                    [  OK  ]
Issue the cfgvpath query command to view the name and serial number of the volume that is configured in the SAN Volume Controller, as shown in Example 5-36.
Example 5-36 cfgvpath query example
[root@Palau ~]# cfgvpath query
RTPG command = a3 0a 00 00 00 00 00 00 08 20 00 00 total datalen=52 datalen_str=0x00 00 00 30
RTPG succeeded: sd_name=/dev/sda df_ctlr=0
/dev/sda ( 8, 0) host=0 ch=0 id=0 lun=0 vid=IBM pid=2145
serial=60050768018201bee000000000000035 lun_id=60050768018201bee000000000000035 ctlr_flag=1 ctlr_nbr=1 df_ctlr=0
RTPG command = a3 0a 00 00 00 00 00 00 08 20 00 00 total datalen=52 datalen_str=0x00 00 00 30
RTPG succeeded: sd_name=/dev/sdb df_ctlr=0
/dev/sdb ( 8, 16) host=0 ch=0 id=1 lun=0 vid=IBM pid=2145
serial=60050768018201bee000000000000035 lun_id=60050768018201bee000000000000035 ctlr_flag=1 ctlr_nbr=0 df_ctlr=0
RTPG command = a3 0a 00 00 00 00 00 00 08 20 00 00 total datalen=52 datalen_str=0x00 00 00 30
RTPG succeeded: sd_name=/dev/sdc df_ctlr=0
/dev/sdc ( 8, 32) host=1 ch=0 id=0 lun=0 vid=IBM pid=2145
serial=60050768018201bee000000000000035 lun_id=60050768018201bee000000000000035 ctlr_flag=1 ctlr_nbr=0 df_ctlr=0
RTPG command = a3 0a 00 00 00 00 00 00 08 20 00 00 total datalen=52 datalen_str=0x00 00 00 30
RTPG succeeded: sd_name=/dev/sdd df_ctlr=0
/dev/sdd ( 8, 48) host=1 ch=0 id=1 lun=0 vid=IBM pid=2145
serial=60050768018201bee000000000000035 lun_id=60050768018201bee000000000000035 ctlr_flag=1 ctlr_nbr=1 df_ctlr=0
[root@Palau ~]#

The cfgvpath command configures the SDD vpath devices, as shown in Example 5-37.
Example 5-37 cfgvpath command example
[root@Palau ~]# cfgvpath
c--------- 1 root root 253, 0 Jun 5 09:04 /dev/IBMsdd
WARNING: vpatha path sda has already been configured.
WARNING: vpatha path sdb has already been configured.
WARNING: vpatha path sdc has already been configured.
WARNING: vpatha path sdd has already been configured.
Writing out new configuration to file /etc/vpath.conf
[root@Palau ~]#
The configuration information is saved by default in the /etc/vpath.conf file. You can save the configuration information to a specified file name by entering the following command:
cfgvpath -f file_name.cfg
Chapter 5. Host configuration
Issue the chkconfig command to enable SDD to run at system startup:
chkconfig sdd on
To verify the setting, enter the following command:
chkconfig --list sdd
This verification is shown in Example 5-38.
Example 5-38 sdd run level example
[root@Palau sdd]# chkconfig --list sdd
sdd             0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@Palau sdd]#
If necessary, you can disable the startup option by entering this command:
chkconfig sdd off
Run the datapath query commands to display the online adapters and the paths to the adapters. Notice that the preferred paths are used from one of the nodes, that is, path 0 and path 2. Path 1 and path 3 connect to the other node and are used as alternate or backup paths for high availability, as shown in Example 5-39.
Example 5-39 datapath query command example
[root@Palau ~]# datapath query adapter

Active Adapters :2
Adpt#      Name           State     Mode       Select  Errors  Paths  Active
    0  Host0Channel0      NORMAL    ACTIVE          1       0      2       0
    1  Host1Channel0      NORMAL    ACTIVE          0       0      2       0
[root@Palau ~]#
[root@Palau ~]# datapath query device

Total Devices : 1
DEV#: 0  DEVICE NAME: vpatha  TYPE: 2145  POLICY: Optimized Sequential
SERIAL: 60050768018201bee000000000000035
============================================================================
Path#      Adapter/Hard Disk    State     Mode      Select     Errors
    0      Host0Channel0/sda    CLOSE     NORMAL         1          0
    1      Host0Channel0/sdb    CLOSE     NORMAL         0          0
    2      Host1Channel0/sdc    CLOSE     NORMAL         0          0
    3      Host1Channel0/sdd    CLOSE     NORMAL         0          0
[root@Palau ~]#
SDD has three path-selection policy algorithms:
Failover only (fo): All I/O operations for the device are sent to the same (preferred) path unless the path fails because of I/O errors. Then, an alternate path is chosen for subsequent I/O operations.
Load balancing (lb): The path to use for an I/O operation is chosen by estimating the load on the adapter to which each path is attached. The load is a function of the number of I/O operations currently in process. If multiple paths have the same load, a path is chosen at
random from those paths. Load-balancing mode also incorporates failover protection. The load-balancing policy is also known as the optimized policy.
Round-robin (rr): The path to use for each I/O operation is chosen at random from the paths that were not used for the last I/O operation. If a device has only two paths, SDD alternates between them.
You can dynamically change the SDD path-selection policy by using the datapath set device policy command. The datapath query device command shows which path-selection policy is active on the device. Example 5-39 on page 204 shows that the active policy is optimized, which means that the active SDD path-selection policy algorithm is Optimized Sequential. Example 5-40 shows the volume information from the SVC command-line interface.
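As an illustration only (not SDD source code), the three path-selection policies can be sketched as follows; the path dictionaries and their field names are hypothetical:

```python
import random

def failover_only(paths, preferred):
    # fo: send all I/O down the preferred path while it is healthy;
    # switch to an alternate path only after the preferred path fails.
    healthy = [p for p in paths if p["state"] == "OPEN"]
    return preferred if preferred in healthy else healthy[0]

def load_balancing(paths):
    # lb ("optimized"): estimate the load as the number of I/Os in
    # process per path, and break ties between equally loaded paths
    # at random. This mode also gives failover protection.
    healthy = [p for p in paths if p["state"] == "OPEN"]
    least = min(p["inflight"] for p in healthy)
    return random.choice([p for p in healthy if p["inflight"] == least])

def round_robin(paths, last_used):
    # rr: pick at random from the paths not used for the last I/O;
    # with exactly two paths this alternates between them.
    candidates = [p for p in paths if p["state"] == "OPEN" and p is not last_used]
    return random.choice(candidates)
```

The sketch mirrors the behavior described above: fo pins I/O to one path, lb spreads it by queue depth, and rr rotates across recently unused paths.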
Example 5-40 svcinfo redhat1
IBM_2145:ITSOSVC42A:admin>svcinfo lshost linux2
id 6
name linux2
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B89C1CD
node_logged_in_count 2
state active
WWPN 210000E08B054CAA
node_logged_in_count 2
state active
IBM_2145:ITSOSVC42A:admin>
IBM_2145:ITSOSVC42A:admin>svcinfo lshostvdiskmap linux2
id name   SCSI_id vdisk_id vdisk_name wwpn             vdisk_UID
6  linux2 0       33       linux_vd1  210000E08B89C1CD 60050768018201BEE000000000000035
IBM_2145:ITSOSVC42A:admin>
IBM_2145:ITSOSVC42A:admin>svcinfo lsvdisk linux_vd1
id 33
name linux_vd1
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG0
capacity 1.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018201BEE000000000000035
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
IBM_2145:ITSOSVC42A:admin>
[root@Palau ~]# fdisk /dev/vpatha
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): m
Command action
   a   toggle a bootable flag
   b   edit bsd disklabel
   c   toggle the dos compatibility flag
   d   delete a partition
   l   list known partition types
   m   print this menu
   n   add a new partition
   o   create a new empty DOS partition table
   p   print the partition table
   q   quit without saving changes
   s   create a new empty Sun disklabel
   t   change a partition's system id
   u   change display/entry units
   v   verify the partition table
   w   write table to disk and exit
   x   extra functionality (experts only)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
e
Partition number (1-4): 1
First cylinder (1-1011, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1011, default 1011):
Using default value 1011

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@Palau ~]#
2. Create a file system on the vpath, as shown in Example 5-42.
Example 5-42 mkfs command example
[root@Palau ~]# mkfs -t ext3 /dev/vpatha
mke2fs 1.35 (28-Feb-2004)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
131072 inodes, 262144 blocks
13107 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 27 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@Palau ~]#
3. Create the mount point, and mount the vpath drive, as shown in Example 5-43.
Example 5-43 Mount point
[root@Palau ~]# mkdir /itsosvc
[root@Palau ~]# mount -t ext3 /dev/vpatha /itsosvc
4. The drive is now ready for use. The df command shows us the mounted disk /itsosvc, and the datapath query command shows that four paths are available; see Example 5-44.
Example 5-44 Display mounted drives
[root@Palau ~]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      74699952   2564388  68341032   4% /
/dev/hda1               101086     13472     82395  15% /boot
none                   1033136         0   1033136   0% /dev/shm
/dev/vpatha            1032088     34092    945568   4% /itsosvc
[root@Palau ~]#
[root@Palau ~]# datapath query device

Total Devices : 1

DEV#: 0  DEVICE NAME: vpatha  TYPE: 2145  POLICY: Optimized Sequential
SERIAL: 60050768018201bee000000000000035
============================================================================
Path#      Adapter/Hard Disk    State     Mode      Select     Errors
    0      Host0Channel0/sda    OPEN      NORMAL         1          0
    1      Host0Channel0/sdb    OPEN      NORMAL      6296          0
    2      Host1Channel0/sdc    OPEN      NORMAL      6178          0
    3      Host1Channel0/sdd    OPEN      NORMAL         0          0
[root@Palau ~]#
Tip: Run insserv boot.multipath multipathd to automatically load the multipath driver and multipathd daemon during startup.
2. Enable MPIO for RHEL5 by running the following commands:
modprobe dm-multipath
modprobe dm-round-robin
service multipathd start
chkconfig multipathd on
Example 5-45 shows the commands issued on a Red Hat Enterprise Linux 5.1 operating system.
Example 5-45 Starting MPIO daemon on Red Hat Enterprise Linux
~]# modprobe dm-round-robin
~]# multipathd start
~]# chkconfig multipathd on
~]#
3. Open the multipath.conf file and follow the instructions to enable multipathing for IBM devices. The file is located in the /etc directory. Example 5-46 shows editing using vi.
Example 5-46 Editing the multipath.conf file
[root@palau etc]# vi multipath.conf
4. Add the following entry to the multipath.conf file:
device {
        vendor "IBM"
        product "2145"
        path_grouping_policy group_by_prio
        prio_callout "/sbin/mpath_prio_alua /dev/%n"
}
5. Restart the multipath daemon; see Example 5-47.
Example 5-47 Stopping and starting the multipath daemon
[root@palau ~]# service multipathd stop
Stopping multipathd daemon:                                [  OK  ]
[root@palau ~]# service multipathd start
Starting multipathd daemon:                                [  OK  ]
6. Type the multipath -dl command to see the MPIO configuration. You will see two groups, each with two paths. All paths must have the state [active][ready], and one group will be [enabled].
7. Use the fdisk command to create a partition on the SVC disk, as shown in Example 5-48.
Example 5-48 fdisk
[root@palau scsi]# fdisk -l

Disk /dev/hda: 80.0 GB, 80032038912 bytes
255 heads, 63 sectors/track, 9730 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1          13      104391   83  Linux
/dev/hda2              14        9730    78051802+  8e  Linux LVM

Disk /dev/sda: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sda doesn't contain a valid partition table

Disk /dev/sdb: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/sdc: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdc doesn't contain a valid partition table

Disk /dev/sdd: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdd doesn't contain a valid partition table

Disk /dev/sde: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sde doesn't contain a valid partition table

Disk /dev/sdf: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdf doesn't contain a valid partition table

Disk /dev/sdg: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdg doesn't contain a valid partition table
Disk /dev/sdh: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdh doesn't contain a valid partition table

Disk /dev/dm-2: 4244 MB, 4244635648 bytes
255 heads, 63 sectors/track, 516 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/dm-2 doesn't contain a valid partition table

Disk /dev/dm-3: 4244 MB, 4244635648 bytes
255 heads, 63 sectors/track, 516 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/dm-3 doesn't contain a valid partition table

[root@palau scsi]# fdisk /dev/dm-2
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
e
Partition number (1-4): 1
First cylinder (1-516, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-516, default 516):
Using default value 516

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 22: Invalid argument.
The kernel still uses the old table.
The new table will be used at the next reboot.
[root@palau scsi]# shutdown -r now
[root@palau ~]# mkfs -t ext3 /dev/dm-2
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
518144 inodes, 1036288 blocks
51814 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1061158912
32 block groups
32768 blocks per group, 32768 fragments per group
16192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@palau ~]#
9. Create a mount point, and mount the drive, as shown in Example 5-50.
Example 5-50 Mount point
[root@palau ~]# mkdir /svcdisk_0
[root@palau ~]# cd /svcdisk_0/
[root@palau svcdisk_0]# mount -t ext3 /dev/dm-2 /svcdisk_0
[root@palau svcdisk_0]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      73608360   1970000  67838912   3% /
/dev/hda1               101086     15082     80785  16% /boot
tmpfs                   967984         0    967984   0% /dev/shm
/dev/dm-2              4080064     73696   3799112   2% /svcdisk_0
IBM_2145:ITSO-CLS1:admin>svcinfo lshost Nile
id 1
name Nile
port_count 2
type generic
mask 1111
iogrp_count 2
WWPN 210000E08B892BCD
node_logged_in_count 4
state active
WWPN 210000E08B89B8C0
node_logged_in_count 4
state active
Then, we have to set the SCSI Controller Type in VMware. By default, ESX Server disables SCSI bus sharing and does not allow multiple virtual machines to access the same VMFS file at the same time; see Figure 5-48 on page 216. But in many configurations, such as those for high availability, the virtual machines have to share the same VMFS file to share a disk.
To set the SCSI Controller Type in VMware:
1. Log on to your Infrastructure Client, shut down the virtual machine, right-click it, and select Edit settings.
2. Highlight the SCSI Controller, and select one of the three available settings, depending on your configuration:
None: Disks cannot be shared by other virtual machines.
Virtual: Disks can be shared by virtual machines on the same server.
Physical: Disks can be shared by virtual machines on any server.
Click OK to apply the setting.
3. Create your volumes on the SVC, then map them to the ESX hosts.
Tips: If you want to use features such as VMotion, the volumes that own the VMFS file have to be visible to every ESX host that can host the virtual machine. In the SVC, select Allow the virtual disks to be mapped even if they are already mapped to a host. The volume must have the same SCSI ID on each ESX host.
For this configuration, we created one volume and mapped it to our ESX host, as shown in Example 5-53.
Example 5-53 Mapped volume to ESX host Nile
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap Nile
id name SCSI_id vdisk_id vdisk_name wwpn             vdisk_UID
1  Nile 0       12       VMW_pool   210000E08B892BCD 60050768018301BF2800000000000010
ESX does not automatically scan for SAN changes (except when rebooting the entire ESX server). If you have made any changes to your SVC or SAN configuration, perform the following steps:
1. Open your VMware Infrastructure Client.
2. Select the host.
3. In the Hardware window, choose Storage Adapters.
4. Click Rescan.
To configure a storage device for use in VMware, perform the following steps:
1. Open your VMware Infrastructure Client.
2. Select the host for which you want to see the assigned volumes, and click the Configuration tab.
3. In the Hardware window on the left side, click Storage.
4. To create a new storage pool, select click here to create a datastore, or Add storage if the yellow field does not appear (Figure 5-49).
5. The Add storage wizard appears.
6. Select Create Disk/Lun, and click Next.
7. Select the SVC volume that you want to use for the datastore, and click Next.
8. Review the disk layout, and click Next.
9. Enter a datastore name, and click Next.
10. Select a block size, enter the size of the new partition, and then click Next.
11. Review your selections, and click Finish.
Now, the created VMFS datastore appears in the Storage window (Figure 5-50). You will see the details for the highlighted datastore. Check whether all of the paths are available and that the Path Selection is set to Round Robin.
If not all of the paths are available, check your SAN and storage configuration. After fixing the problem, select Refresh to perform a path rescan. The view is updated to the new configuration.
The best practice is to use the Round Robin multipath policy for the SVC. If you have to edit this policy, perform the following steps:
1. Highlight the datastore.
2. Click Properties.
3. Click Managed Paths.
4. Click Change (see Figure 5-50 on page 217).
5. Select Round Robin.
6. Click OK.
7. Click Close.
Now, your VMFS datastore has been created, and you can start using it for your guest operating systems. Round Robin distributes the I/O load across all available paths. If you want to use a fixed path, the policy setting Fixed is supported as well.
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk VMW_pool
id 12
name VMW_pool
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 60.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000010
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 60.00GB
real_capacity 60.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
IBM_2145:ITSO-CLS1:admin>svctask expandvdisksize -size 5 -unit gb VMW_pool
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk VMW_pool
id 12
name VMW_pool
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 65.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000010
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 65.00GB
real_capacity 65.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
IBM_2145:ITSO-CLS1:admin>
2. Open the Virtual Infrastructure Client.
3. Select the host.
4. Select Configuration.
5. Select Storage Adapters.
6. Click Rescan.
7. Make sure that the Scan for new Storage Devices check box is marked, and click OK. After the scan has completed, the new capacity is displayed in the Details section.
8. Click Storage.
9. Right-click the VMFS volume, and click Properties.
10. Click Add Extent.
11. Select the new free space, and click Next.
12. Click Next.
13. Click Finish.
The VMFS volume has now been extended, and the new space is ready for use.
OS cluster support
Solaris with Symantec Cluster V4.1, Symantec SFHA and SFRAC V4.1/5.0, and Solaris with Sun Cluster V3.1/3.2 are supported at the time of writing.
Flat Space Addressing mode is used rather than the Peripheral Device Addressing Mode.
When an inquiry command for any page is sent to LUN 0 using Peripheral Device Addressing, it is reported as Peripheral Device Type 0Ch (controller).
When any command other than an inquiry is sent to LUN 0 using Peripheral Device Addressing, the SVC responds as an unmapped LUN 0 normally responds.
When an inquiry is sent to LUN 0 using Flat Space Addressing, it is reported as Peripheral Device Type 00h (Direct Access Device) if a LUN is mapped at LUN 0, or 1Fh (unknown device type) otherwise.
When an inquiry is sent to an unmapped LUN that is not LUN 0 using Peripheral Device Addressing, the Peripheral Qualifier returned is 001b and the Peripheral Device Type is 1Fh (unknown or no device type). This response is in contrast to the behavior for generic hosts, where Peripheral Device Type 00h is returned.
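These inquiry rules can be condensed into a small sketch; the helper function is hypothetical and returns only the Peripheral Device Type byte (00h direct access, 0Ch controller, 1Fh unknown), ignoring the Peripheral Qualifier:

```python
def inquiry_device_type(lun, mapped, flat_space_addressing):
    # Peripheral Device Type that SVC reports for an INQUIRY, per the
    # addressing rules above (simplified illustration, not SVC code).
    if not flat_space_addressing:          # Peripheral Device Addressing
        if lun == 0:
            return 0x0C                    # reported as a controller device
        return 0x00 if mapped else 0x1F    # unmapped LUNs: unknown device type
    # Flat Space Addressing: direct access only if a LUN is mapped there
    return 0x00 if mapped else 0x1F
```

For example, an inquiry to LUN 0 under Peripheral Device Addressing always reports a controller, whereas under Flat Space Addressing the answer depends on whether a LUN is mapped at LUN 0.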
It is also possible to configure the multipath driver so that it offers a web interface to run the commands. Before this configuration can work, we need to configure the web interface. The sddsrv daemon does not bind to any TCP/IP port by default, but it allows port binding to be dynamically enabled or disabled. For all platforms except Linux, the multipath driver package ships a template file named sample_sddsrv.conf. On all UNIX platforms except Linux, sample_sddsrv.conf is located in the /etc directory. On Windows platforms, it is in the directory where SDD is installed. Create the sddsrv.conf file by copying sample_sddsrv.conf into the same directory under the name sddsrv.conf. You can then dynamically change the port binding by modifying the parameters in the sddsrv.conf file and setting the values of Enableport and Loopbackbind to True. Figure 5-52 shows the start window of the multipath driver web interface.
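The copy-and-edit step can also be scripted. This sketch assumes a simple `key = value` layout for the file, which may differ from the actual sample_sddsrv.conf format; the parameter names Enableport and Loopbackbind come from the text above:

```python
import re

def enable_sddsrv_web(conf_text):
    # Set Enableport and Loopbackbind to true so that sddsrv binds to
    # its TCP/IP port (the key=value layout here is an assumption).
    for key in ("enableport", "loopbackbind"):
        conf_text = re.sub(rf"(?im)^({key}\s*=\s*)\S+", r"\g<1>true", conf_text)
    return conf_text

sample = "enableport = false\nloopbackbind = false\n"
print(enable_sddsrv_web(sample))
```

In practice you would read the copied sddsrv.conf, pass its contents through a function like this, and write the result back before restarting sddsrv.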
Chapter 6.
Data migration
In this chapter, we explain how to migrate from a conventional storage infrastructure to a virtualized storage infrastructure by using the IBM System Storage SAN Volume Controller (SVC). We also explain how the SVC can be phased out of a virtualized storage infrastructure, for example, after a trial period or after using the SVC as a data migration tool. Next, we describe how to migrate from a fully allocated volume to a thin-provisioned volume by using the volume mirroring feature and the thin-provisioned volume together. Finally, we provide examples of using intracluster Metro Mirror to migrate data.
If the type of the volume is image, then the volume type transitions to striped when the first extent is migrated. The MDisk access mode transitions from image to managed.
Using the -force flag: If the -force flag is not used and if volumes occupy extents on one or more of the MDisks that are specified, the command fails. When the -force flag is used and if volumes occupy extents on one or more of the MDisks that are specified, all extents on the MDisks will be migrated to the other MDisks in the storage pool if there are enough free extents in the storage pool. The deletion of the MDisks is postponed until all extents are migrated, which can take time. In the case where there are insufficient free extents in the storage pool, the command fails.
Rule: For the migration to be acceptable, the source and destination storage pool must have the same extent size. Note that volume mirroring can also be used to migrate a volume between storage pools. This method can be used if the extent sizes of the two pools are not the same.
In Figure 6-1, we illustrate volume V3 migrating from Pool 2 to Pool 3. Extents are allocated to the migrating volume from the set of MDisks in the target storage pool, using the extent allocation algorithm. The process can be prioritized by specifying the number of threads that are used in parallel (from 1 to 4) while migrating; using only one thread puts the least background load on the system.
The offline rules apply to both storage pools. Therefore, referring back to Figure 6-1, if any of the M4, M5, M6, or M7 MDisks goes offline, then the V3 volume goes offline. If the M4 MDisk goes offline, then V3 and V5 go offline, but V1, V2, V4, and V6 remain online.
If the type of the volume is image, then the volume type transitions to striped when the first extent is migrated. The MDisk access mode transitions from image to managed.
For the duration of the move, the volume is listed as being a member of the original storage pool. For the purposes of configuration, the volume moves to the new storage pool instantaneously at the end of the migration.
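A toy model of the pool-to-pool move can make these rules concrete. This sketch (hypothetical names, not SVC code) enforces the equal-extent-size rule and spreads the volume's extents across the target pool's MDisks round-robin, as a stand-in for the real extent allocation algorithm:

```python
def migrate_between_pools(extents, src_extent_size, dst_extent_size, target_mdisks):
    # The source and destination storage pool must use the same extent size;
    # otherwise, use volume mirroring to move the volume instead.
    if src_extent_size != dst_extent_size:
        raise ValueError("extent sizes differ; use volume mirroring instead")
    # Round-robin placement across the target pool's MDisks (simplified).
    return [(target_mdisks[i % len(target_mdisks)], ext)
            for i, ext in enumerate(extents)]

placement = migrate_between_pools(
    ["e0", "e1", "e2", "e3", "e4"], 256, 256, ["M4", "M5", "M6", "M7"])
```

Because every extent of the migrated volume ends up on the target pool's MDisks (M4 through M7 here), the offline rules of the target pool apply to the volume once the move completes.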
Migrate image mode-to-image mode between storage pools.
Migrate managed mode-to-image mode between storage pools.
These conditions must apply to be able to migrate:
The destination MDisk must be greater than or equal to the size of the volume.
The MDisk that is specified as the target must be in an unmanaged state at the time that the command is run.
If the migration is interrupted by a cluster recovery, the migration resumes after the recovery completes.
If the migration involves moving between storage pools, the volume behaves as described in 6.2.3, Migrating a volume between storage pools on page 229.
Regardless of the mode in which the volume starts, it is reported as being in managed mode during the migration. Also, both of the MDisks involved are reported as being in image mode during the migration. Upon completion of the command, the volume is classified as an image mode volume.
6.3.1 Parallelism
You can perform several of the following activities in parallel.
Per cluster
An SVC cluster supports up to 32 active concurrent instances of members of the set of migration activities:
Migrate multiple extents
Migrate between storage pools
Migrate off of a deleted MDisk
Migrate to image mode
These high-level migration tasks operate by scheduling single extent migrations:
Up to 256 single extent migrations can run concurrently. This number is made up of the single extent migrates that result from the operations previously listed.
The Migrate Multiple Extents and Migrate Between Storage Pools commands support a flag that allows you to specify the number of parallel threads to use, between 1 and 4. This parameter affects the number of extents that are concurrently migrated for that migration operation. Thus, if the thread value is set to 4, up to four extents can be migrated concurrently for that operation, subject to other resource constraints.
Per MDisk
The SVC supports up to four concurrent single extent migrates per MDisk. This limit does not take into account whether the MDisk is the source or the destination. If more than four single extent migrates are scheduled for a particular MDisk, further migrations are queued pending the completion of one of the currently running migrations.
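The cluster-wide and per-MDisk limits can be modeled with a small scheduler sketch. The data structures are hypothetical; the real queuing logic inside SVC is not described beyond the limits stated above:

```python
from collections import Counter

CLUSTER_LIMIT = 256    # concurrent single extent migrations per cluster
PER_MDISK_LIMIT = 4    # concurrent single extent migrations per MDisk

def start_migrations(pending, running):
    # Start queued single extent migrations until a limit is reached;
    # each migration touches a source and a destination MDisk, and the
    # rest stay queued until a running migration completes.
    per_mdisk = Counter()
    for mig in running:
        per_mdisk[mig["src"]] += 1
        per_mdisk[mig["dst"]] += 1
    started = []
    for mig in pending:
        if len(running) + len(started) >= CLUSTER_LIMIT:
            break
        if (per_mdisk[mig["src"]] >= PER_MDISK_LIMIT
                or per_mdisk[mig["dst"]] >= PER_MDISK_LIMIT):
            continue   # queued pending completion of a running migration
        started.append(mig)
        per_mdisk[mig["src"]] += 1
        per_mdisk[mig["dst"]] += 1
    return started

started = start_migrations([{"src": "M1", "dst": "M2"}] * 10, running=[])
```

With ten extent migrations all targeting the same MDisk pair, only four start immediately; the remaining six wait, matching the per-MDisk limit described above.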
Chunks
Regardless of the extent size for the storage pool, data is migrated in units of 16 MB. In this description, this unit is referred to as a chunk. We describe the algorithm that is used to migrate an extent:
1. Pause (pause means to queue all new I/O requests in the virtualization layer in SVC and to wait for all outstanding requests to complete) all I/O on the source MDisk on all nodes in the SVC cluster. The I/O to other extents is unaffected.
2. Unpause (resume) I/O on all of the source MDisk extents, apart from writes to the specific chunk that is being migrated. Writes to the extent are mirrored to the source and destination.
3. On the node that is performing the migration, for each 256 KB section of the chunk:
a. Synchronously read 256 KB from the source.
b. Synchronously write 256 KB to the target.
4. After the entire chunk has been copied to the destination, repeat the process for the next chunk within the extent.
5. After the entire extent has been migrated, pause all I/O to the extent being migrated, perform a checkpoint on the extent move to on-disk metadata, redirect all further reads to the destination, and stop mirroring writes (writes go only to the destination).
6. If the checkpoint fails, the I/O is unpaused.
During the migration, the extent can be divided into three regions, as shown in Figure 6-2. Region B is the chunk that is being copied. Writes to Region B are queued (paused) in the virtualization layer, waiting for the chunk to be copied. Reads to Region A are directed to the destination, because this data has already been copied. Writes to Region A are written to both the source and the destination extent to maintain the integrity of the source extent. Reads and writes to Region C are directed to the source, because this region has yet to be migrated.
The migration of a chunk requires 64 synchronous reads and 64 synchronous writes. During this time, all writes to the chunk from higher layers in the software stack (such as cache destages) are held back. If the back-end storage is operating with significant latency, it is possible that this operation might take minutes to complete, which can have an adverse effect on the overall performance of the SVC. To avoid this situation, if the migration of a particular chunk is still active after one minute, the migration is paused for 30 seconds. During this time, writes to the chunk are allowed to proceed. After 30 seconds, the migration of the chunk is resumed. This algorithm is repeated as many times as necessary to complete the migration of the chunk.
Figure 6-2 Migrating an extent (16 MB chunk, not to scale)
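The copy loop of steps 3 and 4 above can be sketched as follows; the pausing, write mirroring, and checkpointing of steps 1, 2, 5, and 6 are omitted, so this is an illustration of the chunking arithmetic only:

```python
CHUNK = 16 * 1024 * 1024     # data is migrated in 16 MB chunks
SECTION = 256 * 1024         # each chunk moves as 64 x 256 KB sections

def migrate_extent(source, dest):
    # Copy an extent chunk by chunk; within each chunk, perform a
    # synchronous 256 KB read followed by a synchronous 256 KB write.
    sections = 0
    for chunk_off in range(0, len(source), CHUNK):
        chunk_end = min(chunk_off + CHUNK, len(source))
        for off in range(chunk_off, chunk_end, SECTION):
            data = source[off:off + SECTION]          # synchronous read
            dest[off:off + len(data)] = data          # synchronous write
            sections += 1
    return sections

extent = bytearray(b"\xab") * CHUNK    # a one-chunk extent for the demo
target = bytearray(len(extent))
count = migrate_extent(extent, target)
```

A single 16 MB chunk decomposes into exactly 64 sections, which matches the 64 synchronous reads and 64 synchronous writes per chunk described above.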
SVC guarantees read stability during data migrations even if the data migration is stopped by a node reset or a cluster shutdown. This read stability is possible because SVC disallows writes on all nodes to the area being copied, and upon a failure, the extent migration is restarted from the beginning.
At the conclusion of the operation, we have these results:
Extents are migrated in 16 MB chunks, one chunk at a time.
Chunks are either copied, in progress, or not copied.
When the extent is finished, its new location is saved.
Figure 6-3 shows the data migration and write operation relationship.
MDisk modes
There are three MDisk modes:
Unmanaged MDisk
An MDisk is reported as unmanaged when it is not a member of any storage pool. An unmanaged MDisk is not associated with any volumes and has no metadata stored on it. The SVC does not write to an MDisk that is in unmanaged mode, except when it attempts to change the mode of the MDisk to one of the other modes.
Image mode MDisk
Image mode provides a direct block-for-block translation from the MDisk to the volume with no virtualization. Image mode volumes have a minimum size of one block (512 bytes) and always occupy at least one extent. An image mode MDisk is associated with exactly one volume.
Managed mode MDisk
Managed mode MDisks contribute extents to the pool of available extents in the storage pool. Zero or more managed mode volumes might use these extents.
Managed mode to unmanaged mode
This transition occurs when an MDisk is removed from a storage pool.
Unmanaged mode to image mode
This transition occurs when an image mode MDisk is created on an MDisk that was previously unmanaged. It also occurs when an MDisk is used as the target for a migration to image mode.
Image mode to unmanaged mode
There are two distinct ways in which this transition can happen:
When an image mode volume is deleted, the MDisk that supported the volume becomes unmanaged.
When an image mode volume is migrated in image mode to another MDisk, the MDisk that is being migrated from remains in image mode until all data has been moved off of it. It then transitions to unmanaged mode.
Image mode to managed mode
This transition occurs when the image mode volume that is using the MDisk is migrated into managed mode.
Managed mode to image mode is impossible
There is no operation that takes an MDisk directly from managed mode to image mode. You can achieve this transition by performing operations that convert the MDisk to unmanaged mode and then to image mode.
The accompanying state diagram shows the Not in group, Managed mode, and Image mode states, linked by the add to group, remove from group, and complete migrate operations.
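The allowed transitions form a small state machine. This sketch (hypothetical labels, not SVC code) makes explicit that managed mode to image mode has no direct edge:

```python
# Allowed MDisk mode transitions and the operations that cause them.
TRANSITIONS = {
    ("managed", "unmanaged"): "remove MDisk from storage pool",
    ("unmanaged", "managed"): "add MDisk to storage pool",
    ("unmanaged", "image"): "create image mode volume / target of migrate to image mode",
    ("image", "unmanaged"): "delete or migrate away the image mode volume",
    ("image", "managed"): "migrate the image mode volume into managed mode",
}

def change_mode(current, target):
    # Reject any mode change that is not one of the listed transitions,
    # such as going directly from managed mode to image mode.
    if (current, target) not in TRANSITIONS:
        raise ValueError(f"no direct transition from {current} to {target}")
    return target
```

To go from managed mode to image mode, two calls are needed: first to unmanaged mode, then to image mode, mirroring the two-step procedure described above.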
Image mode volumes have the special property that the last extent in the volume can be a partial extent. Managed mode disks do not have this property. To perform any type of migration activity on an image mode volume, the image mode disk must first be converted into a managed mode disk. If the image mode disk has a partial last
extent, this last extent in the image mode volume must be the first extent to be migrated. This migration is handled as a special case. After this special migration operation has occurred, the volume becomes a managed mode volume and is treated in the same way as any other managed mode volume. If the image mode disk does not have a partial last extent, no special processing is performed. The image mode volume is simply changed into a managed mode volume and is treated in the same way as any other managed mode volume. After data is migrated off a partial extent, there is no way to migrate data back onto the partial extent.
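The partial-extent property can be illustrated with simple arithmetic. This is a hedged sketch; the function and example sizes are our own, using the 512-byte block size and typical pool extent sizes from this chapter:

```python
import math

def extent_usage(volume_bytes: int, extent_bytes: int):
    """Return (number of extents occupied, whether the last extent is partial).

    Only image mode volumes can end in a partial extent; managed mode
    volumes always occupy whole extents.
    """
    extents = math.ceil(volume_bytes / extent_bytes)
    partial_last = volume_bytes % extent_bytes != 0
    return extents, partial_last

MIB = 1024 * 1024
# A 1000 MiB image mode volume in a pool with 256 MiB extents occupies
# 4 extents, and the fourth extent is only partially used.
print(extent_usage(1000 * MIB, 256 * MIB))  # (4, True)
# A 5 GiB volume divides evenly: 20 whole extents, no partial extent.
print(extent_usage(5 * 1024 * MIB, 256 * MIB))  # (20, False)
```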
Migrating your volume to an image mode volume: Perform this activity if you are removing the SVC from your SAN environment after a trial period. We describe this step in detail in 6.5.5, Migrating a volume from managed mode to image mode on page 260.

Moving an image mode volume to another image mode volume: Use this procedure to migrate data from one storage subsystem to another storage subsystem. We describe this step in detail in 6.6.6, Migrating the volumes to image mode volumes on page 294.

You can use these activities individually or together to migrate your server's LUNs from one storage subsystem to another storage subsystem using the SVC as your migration tool. The only downtime that is required for these activities is the time that it takes you to remask and remap the LUNs between the storage subsystems and your SVC.
6.5.1 Windows Server 2008 host system connected directly to the DS4700
In our example configuration, we use a Windows Server 2008 host, a DS4700, and a DS4500. The host has two LUNs (drives X and Y), which are part of one DS4700 array. Before the migration, LUN masking is defined in the DS4700 to give the Windows Server 2008 host system access to the two volumes labeled X and Y (see Figure 6-6 on page 239). Figure 6-5 shows the starting zoning scenario.
Figure 6-6 on page 239 shows the two LUNs (drive X and Y).
Figure 6-7 shows the properties of one of the DS4700 disks using the Subsystem Device Driver DSM (SDDDSM). The disk appears as an IBM 1814 FAStT Multipath Disk Device.
6.5.2 Adding the SVC between the host system and the DS4700
Figure 6-8 shows the new environment with the SVC and a second storage subsystem attached to the SAN. The second storage subsystem is not required to migrate to the SVC, but in the following examples, we show that it is possible to move data across storage subsystems without any host downtime.
To add the SVC between the host system and the DS4700 storage subsystem, perform the following steps: 1. Check that you have installed supported device drivers on your host system. 2. Check that your SAN environment fulfills the supported zoning configurations. 3. Shut down the host. 4. Change the LUN masking in the DS4700. Mask the LUNs to the SVC, and remove the masking for the host. Figure 6-9 on page 241 shows the two LUNs with LUN IDs 16 and 33 remapped to SVC ITSO-CLS1.
Attention: To avoid potential data loss, back up all the data stored on your external storage before using the wizard. 5. Log on to your SVC Console and open Physical Storage > Migration; see Figure 6-10.
6. Click Start New Migration; this will start a wizard as shown in Figure 6-11 on page 242.
Chapter 6. Data migration
7. Follow the Storage Migration Wizard as shown in Figure 6-12, then click Next.
8. Figure 6-13 on page 243 shows the Prepare Environment for Migration information; click Next.
Figure 6-13 Migration Wizard - Step 2 of 8 - preparing the environment for migration
11.Figure 6-16 shows the available MDisks for Migration; click Next.
12.Mark both MDisks for migrating as shown in Figure 6-17 on page 245, and then click Next.
13.Figure 6-18 shows the MDisk import process. During the import process a new storage pool is automatically created, in our case Migrationpool_8192. You can see that the command issued by the wizard creates an image mode volume with a one-to-one mapping to mdisk5. Click Close to continue.
14.Now we create a new host object that we will later map the volume to. Click New Host as shown in Figure 6-19 on page 246.
15.Figure 6-20 shows the empty fields that we need to complete to match our host requirements.
16.Here you type the name you want to use for the Host, add the Fibre Channel port, and then select a Host Type. In our case, the name is W2k8_Server. Click Create Host as shown in Figure 6-21 on page 247.
18.Figure 6-23 on page 248 shows that the host was created successfully. Click Next to continue.
19.Figure 6-24 shows all the available volumes to map to a host. Click Next to continue.
20.Mark both volumes and click Map to Host as shown in Figure 6-25 on page 249.
21.Modify Mapping by choosing the host using the drop-down menu as shown in Figure 6-26, and then click Next.
22.The rightmost side of Figure 6-27 on page 250 shows the volumes that can be marked to map to your host. Mark both volumes and click OK.
23.Figure 6-28 shows the progress of the volume mapping to host. Click Close when finished.
24.After the volume to host mapping task is completed, notice that beneath the column heading Host Mapping a host is shown marked Yes; see Figure 6-29 on page 251. Click Next.
25.Select the storage pool you want to use for migration, in our case DS4700_2 as shown in Figure 6-30, and click Next.
Figure 6-30 Migration Wizard - Step 7 - selecting a storage pool to use for migration
26.Migration starts automatically by doing a volume copy, as shown in Figure 6-31 on page 252.
27.Figure 6-32 then appears, advising that migration has begun. Click Finish.
28.The window in Figure 6-33 on page 253 will appear automatically to show the progress of the migration.
29.Go to Volumes > Volumes by host as shown in Figure 6-34 to see all the volumes served by the newly created host for this migration step.
30.Figure 6-35 on page 254 shows all the volumes (copy0* and copy1) served by the created host.
You can see in Figure 6-35 that the migrated volume is actually a mirrored volume with one copy on the image mode pool and another copy in a managed mode storage pool. The administrator can choose to leave the volume like this or split the initial copy from the mirror.
6.5.3 Importing the migrated disks into an online Windows Server 2008 host
To import the migrated disks into an online Windows 2008 Server host, perform these steps: 1. Start the Windows Server 2008 host system again, and expand Computer Management to see the new disk properties changed to a 2145 Multi-Path Disk Device (Figure 6-36 on page 255).
3. Select Start > All Programs > Subsystem Device Driver DSM > Subsystem Device Driver DSM to open the SDDDSM command-line utility; see Figure 6-38.
4. Enter the datapath query device command to check whether all paths are available, as planned in your SAN environment; see Example 6-1.
Example 6-1 The datapath query device command
DEV#:   0  DEVICE NAME: Disk0 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A680E90800000000000007
============================================================================
Path#    Adapter/Hard Disk          State   Mode    Select  Errors
    0  Scsi Port2 Bus0/Disk0 Part0  OPEN    NORMAL     180       0
    1  Scsi Port2 Bus0/Disk0 Part0  OPEN    NORMAL       0       0
    2  Scsi Port2 Bus0/Disk0 Part0  OPEN    NORMAL     145       0
    3  Scsi Port2 Bus0/Disk0 Part0  OPEN    NORMAL       0       0

DEV#:   1  DEVICE NAME: Disk1 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A680E90800000000000005
============================================================================
Path#    Adapter/Hard Disk          State   Mode    Select  Errors
    0  Scsi Port2 Bus0/Disk1 Part0  OPEN    NORMAL      25       0
    1  Scsi Port2 Bus0/Disk1 Part0  OPEN    NORMAL     164       0
    2  Scsi Port2 Bus0/Disk1 Part0  OPEN    NORMAL       0       0
    3  Scsi Port2 Bus0/Disk1 Part0  OPEN    NORMAL     136       0

C:\Program Files\IBM\SDDDSM>
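When many hosts and disks are involved, checking path health by eye gets tedious. A small sketch that counts healthy paths in datapath query device output; the parsing is ours and assumes the layout shown above:

```python
import re

# A trimmed sample of datapath query device output (one device shown).
SAMPLE = """\
DEV#:   0  DEVICE NAME: Disk0 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A680E90800000000000007
Path#    Adapter/Hard Disk          State   Mode    Select  Errors
    0  Scsi Port2 Bus0/Disk0 Part0  OPEN    NORMAL     180       0
    1  Scsi Port2 Bus0/Disk0 Part0  OPEN    NORMAL       0       0
    2  Scsi Port2 Bus0/Disk0 Part0  OPEN    NORMAL     145       0
    3  Scsi Port2 Bus0/Disk0 Part0  OPEN    NORMAL       0       0
"""

def healthy_paths(output: str) -> int:
    """Count paths reported as OPEN and NORMAL in datapath query device output."""
    return len(re.findall(r"\bOPEN\s+NORMAL\b", output))

print(healthy_paths(SAMPLE))  # 4
```

Four healthy paths per device matches the zoning planned for this SAN environment.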
6.5.4 Adding the SVC between the host and DS4700 using the CLI
In this section we use only CLI commands to add direct-attached storage to the SVC's managed storage. To read about our preparation of the environment, see 6.5.1, Windows Server 2008 host system connected directly to the DS4700 on page 238.
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning easy_tier easy_tier_status 0 DS4700_1 online 2 0 49.50GB 256 49.50GB 0.00MB 0.00MB 0.00MB 0 80 auto inactive 1 DS4700_2 online 2 0 50.00GB 256 50.00GB 0.00MB 0.00MB 0.00MB 0 80 auto inactive
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name imagepool -tier generic_hdd -ext 256 MDisk Group, id [2], successfully created IBM_2145:ITSO-CLS1:admin>
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning easy_tier easy_tier_status 0 DS4700_1 online 2 0 49.50GB 256 49.50GB 0.00MB 0.00MB 0.00MB 0 80 auto inactive 1 DS4700_2 online 2 0 50.00GB 256 50.00GB 0.00MB 0.00MB 0.00MB 0 80 auto inactive 2 imagepool online 0 0 0 256 0 0.00MB 0.00MB 0.00MB 0 0 auto inactive IBM_2145:ITSO-CLS1:admin>
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp imagepool -iogrp 0 -vtype image -mdisk mdisk4 -name image1
Virtual Disk, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp imagepool -iogrp 0 -vtype image -mdisk mdisk5 -name image2
Virtual Disk, id [1], successfully created
IBM_2145:ITSO-CLS1:admin>
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_wri te_state se_copy_count 0 image1 0 io_grp0 online 2 imagepool 5.00GB image 6005076801910281A000000000000022 0 1 empty 0 1 image2 0 io_grp0 online 2 imagepool 5.00GB image 6005076801910281A000000000000023 0 1 empty 0 IBM_2145:ITSO-CLS1:admin>
IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host W2K8_Server -scsi 0 -force image1 Virtual Disk to Host map, id [0], successfully created IBM_2145:ITSO-CLS1:admin> IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host W2K8_Server -scsi 1 -force image2 Virtual Disk to Host map, id [1], successfully created IBM_2145:ITSO-CLS1:admin>
IBM_2145:ITSO-CLS1:admin>svctask addvdiskcopy -mdiskgrp DS4700_2 image1 Vdisk [0] copy [1] successfully created IBM_2145:ITSO-CLS1:admin> IBM_2145:ITSO-CLS1:admin>svctask addvdiskcopy -mdiskgrp DS4700_2 image2 Vdisk [1] copy [1] successfully created IBM_2145:ITSO-CLS1:admin>
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state se_copy_count 0 image1 0 io_grp0 online many many 5.00GB many 6005076801910281A000000000000022 0 2 empty 0 1 image2 0 io_grp0 online many many 5.00GB many 6005076801910281A000000000000023 0 2 empty 0 IBM_2145:ITSO-CLS1:admin>
3. To create an empty storage pool for migration, perform Step 1 and Step 2 as shown in Figure 6-40 on page 261 and Figure 6-41 on page 261.
4. Figure 6-42 reminds you that an empty storage pool has been created. Click OK.
5. Figure 6-43 on page 262 shows the progress status of creating a storage pool for migration. Click Close to continue.
6. From the Volumes > All Volumes panel, select the volume that you want to migrate to image mode and select Export to Image Mode from the drop-down menu as shown in Figure 6-44.
7. Select the MDisk to migrate the volume onto, as shown in Figure 6-45 on page 263, and then click Next.
8. Select a storage pool in which the image mode volume will be placed after migration is completed (in our case, the For Migration pool), and click Finish; see Figure 6-46.
9. The volume is exported to image mode and placed in the For Migration pool; see Figure 6-47. Click Close.
10.Navigate to the Physical Storage > MDisks section; notice that MDisk5 is now an image mode MDisk as shown in Figure 6-48.
11.Repeat these steps for every volume that you want to migrate to an image mode volume. 12.Delete the image mode data from the SVC by using the procedure described in 6.5.7, Removing image mode data from the SVC on page 274.
An image mode to image mode migration moves a volume from one storage subsystem to another without converting it into the SVC fully managed mode. The data stays available for the applications during this migration. This procedure is nearly the same as the procedure described in 6.5.5, Migrating a volume from managed mode to image mode on page 260. In our example, we migrate the Windows server W2k8_Log volume to another disk subsystem as an image mode volume. The second storage subsystem is a DS4500; a new LUN is configured on the storage and mapped to the SVC cluster. The LUN is available to the SVC as an unmanaged MDisk5 as shown in Figure 6-49.
To migrate the image mode volume to another image mode volume, perform the following steps: 1. Mark the unmanaged MDisk5, click Actions (or right-click), and select Import from the list as shown in Figure 6-50.
2. The Introduction window opens describing the process of importing the MDisk and mapping an image mode volume to it, as shown in Figure 6-51. Click Next.
3. Do not select a target pool because you do not want to migrate into an SVC managed volume pool. Instead, simply click Finish; see Figure 6-52 on page 267.
4. Figure 6-53 shows a warning message indicating a storage pool has not been selected and the volume will remain in the temporary pool. Click OK to continue.
5. The import process starts, as shown in Figure 6-54, by creating a temporary storage pool Migrationpool_8192 (8 GB) and an image volume. Click Close to continue.
Figure 6-54 Import of MDisk and creation of temporary storage pool Migrationpool_8192
6. As shown in Figure 6-55, there is now an image mode mdisk5 with the import controller name and SCSI ID as its name.
7. Now create a new storage pool Migration_out (with the same extent size (8 GB) as the automatically created storage pool Migrationpool_8192) for transferring the image mode disk. Go to Physical Storage > Pools, as shown in Figure 6-56.
8. Click New Pool to create an empty storage pool, as shown in Figure 6-57.
9. Give your new storage pool the meaningful name Migration_out and click the Advanced Settings drop-down menu. Choose 8 GB as the extent size for your new storage pool, as shown in Figure 6-58.
Figure 6-58 Step 1 of 2 - create an empty storage pool with extent size 8 GB
10.Figure 6-59 shows a storage pool window without any disks. Click Finish to continue to create an empty storage pool.
11.The warning in Figure 6-60 on page 271 pops up to remind you that an empty storage pool will be created. Click OK to continue.
12.Figure 6-61 shows the progress of creating the storage pool Migration_out. Click Close to continue.
13.The empty storage pool for image to image migration has been created. Go to Volumes > Volumes by Pool as shown in Figure 6-62.
14.Select the storage pool of the imported disk, Migrationpool_8192 in the left panel. Then mark the image disk you want to migrate out and select Actions. From the drop-down menu select Export to Image Mode, as shown in Figure 6-63.
15.Select the target MDisk on the new disk controller that you want to migrate to. Click Next, as shown in Figure 6-64.
16.Select the target migrate out (empty) storage pool, as shown in Figure 6-65. Click Finish.
17.Figure 6-66 shows the progress status of the Export Volume to Image process. Click Close to continue.
18.Figure 6-67 on page 274 shows that the MDisk location has changed as expected to the new storage pool Migration_out.
19.Repeat these steps for all image mode volumes that you want to migrate. 20.If you want to delete the data from the SVC, use the procedure described in 6.5.7, Removing image mode data from the SVC on page 274.
If the command succeeds on an image mode volume, the underlying back-end storage controller will be consistent with the data that a host might previously have read from the image mode volume; that is, all fast write data will have been flushed to the underlying LUN. Deleting an image mode volume causes the MDisk that is associated with the volume to be ejected from the storage pool. The mode of the MDisk will be returned to unmanaged. Note: This situation only applies to image mode volumes. If you delete a normal volume, all of the data will also be deleted. As shown in Example 6-1 on page 257, the SAN disks currently reside on the SVC 2145 device. Check that you have installed the supported device drivers on your host system. To switch back to the storage subsystem, perform the following steps: 1. Shut down your host system. 2. Edit the LUN masking on your storage subsystem. Remove the SVC from the LUN masking, and add the host to the masking. 3. Open the view Volumes by Host window to see which volumes are currently mapped to your host as shown in Figure 6-68.
4. Check your Host and select your volume. Then, show the drop-down menu by clicking the right mouse button and select Unmap all Hosts as shown in Figure 6-69 on page 276.
5. Verify your unmap process, as shown in Figure 6-70, and click Unmap.
6. Figure 6-71 shows that the volume has been removed from the SVC.
7. Repeat steps 3 to 5 for every image mode volume that you want to remove from the SVC.
6.5.8 Map the free disks onto the Windows Server 2008
To detect and map the disks which have been freed from SVC management, go to the Windows Server 2008: 1. Using your DS4500 Storage Manager interface, now remap the two LUNs that were MDisks back to your Windows Server 2008 server. 2. Open your Computer Management window. Figure 6-72 shows that the LUNs are now back to an IBM 1814 type.
3. Open your Disk Management window and notice that the disks have appeared. You might need to reactivate your disk by using the right-click option on each disk.
You can use these three activities individually, or together, to migrate your Linux server's LUNs from one storage subsystem to another storage subsystem using the SVC as your migration tool. If you do not use all three activities, you can introduce or remove the SVC from your environment. The only downtime required for these activities is the time that it takes to remask and remap the LUNs between the storage subsystems and your SVC. In Figure 6-74, we show our Linux environment.
Figure 6-74 The Linux server SAN environment (Green Zone)
Figure 6-74 shows our Linux server connected to our SAN infrastructure. It has two LUNs that are masked directly to it from our storage subsystem: The LUN with SCSI ID 0 has the host operating system (our host is Red Hat Enterprise Linux V5.1), and this LUN is used to boot the system directly from the storage subsystem. The operating system identifies it as /dev/mapper/VolGroup00-LogVol00. SCSI LUN ID 0: To successfully boot a host off of the SAN, you must have assigned the LUN as SCSI LUN ID 0. Linux sees this LUN as our /dev/sda disk. We have also mapped a second disk (SCSI LUN ID 1) to the host. It is 5 GB in size, and it is mounted in the /data folder on the /dev/dm-2 disk. Example 6-11 on page 280 shows our disks that are directly attached to the Linux hosts.
[root@Palau data]# df
Filesystem            1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       10093752   1971344   7601400  21% /
/dev/sda1                101086     12054     83813  13% /boot
tmpfs                   1033496         0   1033496   0% /dev/shm
/dev/dm-2               5160576    158160   4740272   4% /data
[root@Palau data]#
Our Linux server represents a typical SAN environment with a host directly using LUNs that were created on a SAN storage subsystem, as shown in Figure 6-74 on page 279: The Linux server's host bus adapter (HBA) cards are zoned so that they are in the Green Zone with our storage subsystem. The two LUNs that have been defined on the storage subsystem, using LUN masking, are directly available to our Linux server.
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name Palau_Pool1 -ext 512 MDisk Group, id [2], successfully created IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning easy_tier easy_tier_status 2 Palau_Pool1 online 0 0 0 512 0 0.00MB 0.00MB 0.00MB 0 0 auto inactive
IBM_2145:ITSO-CLS1:admin>svcinfo lshbaportcandidate id 210000E08B89C1CD 210000E08B054CAA 210000E08B0548BC 210000E08B0541BC 210000E08B89CCC2 IBM_2145:ITSO-CLS1:admin> If you do not know the WWN of your Linux server, you can look at which WWNs are currently configured on your storage subsystem for this host. Figure 6-76 shows our configured ports on an IBM DS4700 storage subsystem.
After verifying that the SVC can see our host (linux2), we create the host entry and assign the WWN to this entry. Example 6-14 shows these commands.
Example 6-14 Create the host entry
IBM_2145:ITSO-CLS1:admin>svctask mkhost -name Palau -hbawwpn 210000E08B054CAA:210000E08B89C1CD Host, id [0], successfully created IBM_2145:ITSO-CLS1:admin>svcinfo lshost Palau id 0 name Palau port_count 2 type generic mask 1111 iogrp_count 4 WWPN 210000E08B89C1CD node_logged_in_count 4 state inactive WWPN 210000E08B054CAA node_logged_in_count 4 state inactive IBM_2145:ITSO-CLS1:admin>
IBM_2145:ITSO-CLS1:admin>svcinfo lscontroller
id  controller_name  ctrl_s/n  vendor_id  product_id_low  product_id_high
0   DS4500                     IBM        1742-900
1   DS4700                     IBM        1814            FAStT
IBM_2145:ITSO-CLS1:admin>

You can rename the storage subsystem to a more meaningful name (if we had multiple storage subsystems that were connected to our SAN fabric, renaming them makes it considerably easier to identify them) with the svctask chcontroller -name command.
Before we move the LUNs to the SVC, we must configure the host multipath configuration for the SVC. Add the following entry to your multipath.conf file, as shown in Example 6-16, and add the content of Example 6-17 to the file.
Example 6-16 Edit the multipath.conf file

[root@Palau ~]# vi /etc/multipath.conf
[root@Palau ~]# service multipathd stop
Stopping multipathd daemon:                                [  OK  ]
[root@Palau ~]# service multipathd start
Starting multipathd daemon:                                [  OK  ]
[root@Palau ~]#

Example 6-17 Data to add to the multipath.conf file

# SVC device
device {
        vendor "IBM"
        product "2145CF8"
        path_grouping_policy group_by_serial
}

We are now ready to move the ownership of the disks to the SVC, discover them as MDisks, and give them back to the host as volumes.
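A quick sanity check that the device stanza made it into the file before restarting multipathd can be sketched as follows. This is a hypothetical helper of our own, not part of the multipath tools:

```python
def has_svc_stanza(conf_text: str) -> bool:
    """Check for an IBM 2145 device stanza like the one shown in Example 6-17."""
    return 'vendor "IBM"' in conf_text and 'product "2145' in conf_text

# Sample stanza matching the one added to /etc/multipath.conf above.
sample = '''
# SVC device
device {
    vendor "IBM"
    product "2145CF8"
    path_grouping_policy group_by_serial
}
'''
print(has_svc_stanza(sample))  # True
```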
3. Using Storage Manager (our storage subsystem management tool), we can unmap and unmask the disks from the Linux server and remap and remask the disks to the SVC. LUN IDs: Even though we are using boot from SAN, you can map the boot disk to the SVC with any LUN number; SCSI ID 0 only matters later, when we configure the volume mapping from the SVC to the host. 4. From the SVC, discover the new disks with the svctask detectmdisk command. The disks will be discovered and named mdiskN, where N is the next available MDisk number (starting from 0). Example 6-18 shows the commands that we used to discover our MDisks and to verify that we have the correct MDisks.
Example 6-18 Discover the new MDisks
IBM_2145:ITSO-CLS1:admin>svctask detectmdisk IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID tier 26 mdisk26 online unmanaged 12.0GB 0000000000000008 DS4700 600a0b800026b2820000428f48739bca00000000000000000000000000000000 generic_hdd 27 mdisk27 online unmanaged 5.0GB 0000000000000009 DS4700 600a0b800026b282000042f84873c7e100000000000000000000000000000000 generic_hdd IBM_2145:ITSO-CLS1:admin> Important: Match your discovered MDisk serial numbers (UID on the svcinfo lsmdisk task display) with the serial number that you recorded earlier (in Figure 6-77 and Figure 6-78 on page 284). 5. After we have verified that we have the correct MDisks, we rename them to avoid confusion in the future when we perform other MDisk-related tasks (Example 6-19).
Example 6-19 Rename the MDisks
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name md_palauS mdisk26 IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name md_palauD mdisk27 IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID tier 26 md_palauS online unmanaged 12.0GB 0000000000000008 DS4700 600a0b800026b2820000428f48739bca00000000000000000000000000000000 generic_hdd 27 md_palauD online unmanaged 5.0GB 0000000000000009 DS4700 600a0b800026b282000042f84873c7e100000000000000000000000000000000 generic_hdd IBM_2145:ITSO-CLS1:admin>
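The Important note above, matching discovered MDisk UIDs against the serial numbers recorded from the storage subsystem, can also be scripted. A sketch using the UIDs from Example 6-18; the recorded LUN names are hypothetical:

```python
# LUN serial numbers recorded from the storage subsystem (names are ours).
recorded = {
    "palau_boot_lun": "600a0b800026b2820000428f48739bca",
    "palau_data_lun": "600a0b800026b282000042f84873c7e1",
}
# UIDs reported by svcinfo lsmdisk (zero-padded to 64 hex digits).
discovered = {
    "mdisk26": "600a0b800026b2820000428f48739bca" + "0" * 32,
    "mdisk27": "600a0b800026b282000042f84873c7e1" + "0" * 32,
}

def match_mdisks(recorded: dict, discovered: dict) -> dict:
    """Map each recorded LUN to the MDisk whose UID starts with its serial."""
    return {
        lun: next(md for md, uid in discovered.items() if uid.startswith(serial))
        for lun, serial in recorded.items()
    }

print(match_mdisks(recorded, discovered))
# {'palau_boot_lun': 'mdisk26', 'palau_data_lun': 'mdisk27'}
```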
6. We create our image mode volumes with the svctask mkvdisk command and the -vtype image option (Example 6-20). This command virtualizes the disks in the exact same layout as though they were not virtualized.
Example 6-20 Create the image mode volumes
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp Palau_Pool1 -iogrp 0 -vtype image -mdisk md_palauS -name palau_SANB Virtual Disk, id [29], successfully created IBM_2145:ITSO-CLS1:admin> IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp Palau_Pool2 -iogrp 0 -vtype image -mdisk md_palauD -name palau_Data Virtual Disk, id [30], successfully create IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID tier 26 md_palauS online image 2 Palau_Pool1 12.0GB 0000000000000008 DS4700 600a0b800026b2820000428f48739bca00000000000000000000000000000000 generic_hdd 27 md_palauD online image 3 Palau_Pool2 5.0GB 0000000000000009 DS4700 600a0b800026b282000042f84873c7e100000000000000000000000000000000 generic_hdd IBM_2145:ITSO-CLS1:admin> IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_wri te_state se_copy_count 29 palau_SANB 0 io_grp0 online 4 Palau_Pool1 12.0GB image 60050768018301BF280000000000002B 0 1 empty 0 30 palau_Data 0 io_grp0 online 4 Palau_Pool2 5.0GB image 60050768018301BF280000000000002C 0 1 empty 0 7. Map the new image mode volumes to the host (Example 6-21). Important: Make sure that you map the boot volume with SCSI ID 0 to your host. The host must be able to identify the boot volume during the boot process.
Example 6-21 Map the volumes to the host
IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Palau -scsi 0 palau_SANB Virtual Disk to Host map, id [0], successfully created IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Palau -scsi 1 palau_Data Virtual Disk to Host map, id [1], successfully created IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap Palau id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID 0 Palau 0 29 palau_SANB 210000E08B89C1CD 60050768018301BF280000000000002B
palau_Data
FlashCopy: While the application is in a quiescent state, you can choose to use FlashCopy to copy the new image volumes onto other volumes. You do not need to wait until the FlashCopy process has completed before starting your application.

8. Power on your host server and enter your Fibre Channel (FC) HBA adapter BIOS before booting the operating system, and make sure that you change the boot configuration so that it points to the SVC. In our example, we performed the following steps on a QLogic HBA:
a. Press Ctrl+Q to enter the HBA BIOS.
b. Open Configuration Settings.
c. Open Selectable Boot Settings.
d. Change the entry from your storage subsystem to the SVC 2145 LUN with SCSI ID 0.
e. Exit the menu and save your changes.
9. Boot up your Linux operating system. If you only moved the application LUN to the SVC and left your Linux server running, you only need to follow these steps to see the new volume:
a. Load your HBA driver with the modprobe DRIVER_NAME command. If you did not (and cannot) unload your HBA driver, you can issue commands to the kernel to rescan the SCSI bus to see the new volumes (these details are beyond the scope of this book).
b. Check your syslog, and verify that the kernel found the new volumes. On Red Hat Enterprise Linux, the syslog is stored in the /var/log/messages directory.
c. If your application and data are on an LVM volume, rediscover the VG and then run the vgchange -a y VOLUME_GROUP command to activate the VG.
10.Mount your file systems with the mount /MOUNT_POINT command (Example 6-22). The df output shows us that all of the disks are available again.
Example 6-22 Mount data disk
[root@Palau data]# mount /dev/dm-2 /data [root@Palau data]# df Filesystem 1K-blocks Used Available Use% Mounted on /dev/mapper/VolGroup00-LogVol00 10093752 1938056 7634688 21% / /dev/sda1 101086 12054 83813 13% /boot tmpfs 1033496 0 1033496 0% /dev/shm /dev/dm-2 5160576 158160 4740272 4% /data [root@Palau data]# 11.You are now ready to start your application.
IBM_2145:ITSO-CLS1:admin>svctask detectmdisk IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MD_palauVD -ext 512 MDisk Group, id [8], successfully created IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID tier 26 md_palauS online image 2 Palau_Pool1 12.0GB 0000000000000008 DS4700 600a0b800026b2820000428f48739bca00000000000000000000000000000000 generic_hdd 27 md_palauD online image 3 Palau_Pool2 5.0GB 0000000000000009 DS4700 600a0b800026b282000042f84873c7e100000000000000000000000000000000 generic_hdd 28 mdisk28 online unmanaged 8.0GB 0000000000000010 DS4500 600a0b8000174233000000b9487778ab00000000000000000000000000000000 generic_hdd 29 mdisk29 online unmanaged 8.0GB 0000000000000011 DS4500 600a0b80001744310000010f48776bae00000000000000000000000000000000 generic_hdd 30 mdisk30 online unmanaged 8.0GB 0000000000000012 DS4500 600a0b8000174233000000bb487778d900000000000000000000000000000000 generic_hdd IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name palau-md1 mdisk28 IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name palau-md2 mdisk29 IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name palau-md3 mdisk30 IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk palau-md1 MD_palauVD IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk palau-md2 MD_palauVD IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk palau-md3 MD_palauVD IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID tier 26 md_palauS online image 2 Palau_Pool1 12.0GB 0000000000000008 DS4700 600a0b800026b2820000428f48739bca00000000000000000000000000000000 generic_hdd 27 md_palauD online image 3 Palau_Pool2 5.0GB 0000000000000009 DS4700 600a0b800026b282000042f84873c7e100000000000000000000000000000000 generic_hdd 28 palau-md1 online unmanaged 8 MD_palauVD 8.0GB 0000000000000010 DS4500 600a0b8000174233000000b9487778ab00000000000000000000000000000000 generic_hdd
29 palau-md2 online unmanaged 8 MD_palauVD 8.0GB 0000000000000011 DS4500 600a0b80001744310000010f48776bae00000000000000000000000000000000 generic_hdd 30 palau-md3 online unmanaged 8 MD_palauVD 8.0GB 0000000000000012 DS4500 600a0b8000174233000000bb487778d900000000000000000000000000000000 generic_hdd IBM_2145:ITSO-CLS1:admin>
IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -vdisk palau_SANB -mdiskgrp MD_palauVD IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -vdisk palau_Data -mdiskgrp MD_palauVD IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate migrate_type MDisk_Group_Migration progress 25 migrate_source_vdisk_index 29 migrate_target_mdisk_grp 8 max_thread_count 4 migrate_source_vdisk_copy_id 0 migrate_type MDisk_Group_Migration progress 70 migrate_source_vdisk_index 30 migrate_target_mdisk_grp 8 max_thread_count 4 migrate_source_vdisk_copy_id 0 IBM_2145:ITSO-CLS1:admin> After this task has completed, Example 6-25 shows that the volumes are now spread over three MDisks.
Example 6-25 Migration complete
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp MD_palauVD
id 8
name MD_palauVD
status online
mdisk_count 3
vdisk_count 2
capacity 24.0GB
extent_size 512
free_capacity 7.0GB
virtual_capacity 17.00GB
used_capacity 17.00GB
real_capacity 17.00GB
overallocation 70
warning 0
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskmember palau_SANB
id
28
29
30
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskmember palau_Data
id
28
29
30
IBM_2145:ITSO-CLS1:admin>

Our migration to striped volumes on another storage subsystem (the DS4500) is now complete. The original image mode MDisks (md_palauS and md_palauD) can now be removed from the SVC, and their LUNs can be removed from the storage subsystem. If these LUNs are the last LUNs in use on our DS4700 storage subsystem, we can remove it from our SAN fabric.
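The overallocation value of 70 reported by svcinfo lsmdiskgrp follows directly from the pool numbers: virtual capacity as a percentage of pool capacity (17 GB in a 24 GB pool is about 70%). A minimal sketch of that calculation; the function name is ours, not an SVC API:

```python
def overallocation_pct(virtual_capacity_gb: float, pool_capacity_gb: float) -> int:
    """Overallocation as reported by lsmdiskgrp: virtual capacity over
    pool capacity, truncated to a whole percentage."""
    return int(virtual_capacity_gb / pool_capacity_gb * 100)

# MD_palauVD: 17.00 GB of virtual capacity in a 24.0 GB pool
print(overallocation_pct(17.0, 24.0))  # 70
```

The same arithmetic explains other values seen later in this chapter, for example 130 GB of virtual capacity in a 165 GB pool reporting 78.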
IBM_2145:ITSO-CLS1:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0 controller0 IBM 1814 FAStT
IBM_2145:ITSO-CLS1:admin>
It is also a good idea to rename the new storage subsystem's controller to a more meaningful name, which can be done with the svctask chcontroller -name command, as shown in Example 6-27 on page 293.
IBM_2145:ITSO-CLS1:admin>svctask chcontroller -name ITSO-4700 0
IBM_2145:ITSO-CLS1:admin>

Also verify that the controller name was changed as intended, as shown in Example 6-28.
Example 6-28 Recheck controller name
IBM_2145:ITSO-CLS1:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0 ITSO-4700 IBM 1814 FAStT
IBM_2145:ITSO-CLS1:admin>
IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
0 mdisk0 online managed 600a0b800026b282000042f84873c7e100000000000000000000000000000000
28 palau-md1 online managed 8 MD_palauVD 8.0GB 0000000000000010 DS4500 600a0b8000174233000000b9487778ab00000000000000000000000000000000
29 palau-md2 online managed 8 MD_palauVD 8.0GB 0000000000000011 DS4500 600a0b80001744310000010f48776bae00000000000000000000000000000000
30 palau-md3 online managed 8 MD_palauVD 8.0GB 0000000000000012 DS4500 600a0b8000174233000000bb487778d900000000000000000000000000000000
31 mdisk31 online unmanaged 6.0GB 0000000000000013 DS4500 600a0b8000174233000000bd4877890f00000000000000000000000000000000
32 mdisk32 online unmanaged 12.5GB 0000000000000014 DS4500 600a0b80001744310000011048777bda00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>

Even though the MDisks will not stay in the SVC for long, we suggest that you rename them to more meaningful names so that they do not get confused with other MDisks that are used by other activities. We also create the storage pool to hold our new MDisks, as shown in Example 6-30 on page 294.
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name mdpalau_ivd mdisk32
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_Palauivd -ext 512
MDisk Group, id [9], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_Palauivd -ext 512
CMMVC5758E Object name already exists.
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning easy_tier easy_tier_status
8 MD_palauVD online 3 2 24.0GB 512 7.0GB 17.00GB 17.00GB 17.00GB 70 0 auto inactive
9 MDG_Palauivd online 0 0 0 512 0 0.00MB 0.00MB 0.00MB 0 0 auto inactive
IBM_2145:ITSO-CLS1:admin>
Our SVC environment is now ready for the volume migration to image mode volumes.
IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk palau_SANB -mdisk mdpalau_ivd -mdiskgrp MD_palauVD
IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk palau_Data -mdisk mdpalau_ivd1 -mdiskgrp MD_palauVD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
28 palau-md1 online managed 8 MD_palauVD 8.0GB 0000000000000010 DS4500 600a0b8000174233000000b9487778ab00000000000000000000000000000000
29 palau-md2 online managed 8 MD_palauVD 8.0GB 0000000000000011 DS4500 600a0b80001744310000010f48776bae00000000000000000000000000000000
30 palau-md3 online managed 8 MD_palauVD 8.0GB 0000000000000012 DS4500 600a0b8000174233000000bb487778d900000000000000000000000000000000
31 mdpalau_ivd1 online image 8 MD_palauVD 6.0GB 0000000000000013 DS4500 600a0b8000174233000000bd4877890f00000000000000000000000000000000
32 mdpalau_ivd online image 8 MD_palauVD 12.5GB 0000000000000014 DS4500 600a0b80001744310000011048777bda00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type Migrate_to_Image
progress 4
migrate_source_vdisk_index 29
migrate_target_mdisk_index 32
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type Migrate_to_Image
progress 30
migrate_source_vdisk_index 30
migrate_target_mdisk_index 31
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>

During the migration, our Linux server is unaware that its data is being physically moved between storage subsystems. After the migration has completed, the image mode volumes are ready to be removed from the Linux server, and the real LUNs can be mapped and masked directly to the host by using the storage subsystem's tools.
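The svcinfo lsmigrate command prints one block of attribute/value pairs per active migration, each block beginning with a migrate_type line, so a monitoring script can split the output into per-migration records. A sketch under the assumption that the raw command output has already been captured into a string; the parsing helper is ours, not part of the SVC CLI:

```python
def parse_lsmigrate(output: str) -> list[dict]:
    """Split svcinfo lsmigrate output into one dict per migration.

    Each migration block starts with a 'migrate_type' line; every line
    is a space-separated attribute/value pair.
    """
    migrations = []
    for line in output.strip().splitlines():
        key, _, value = line.strip().partition(" ")
        if key == "migrate_type":
            migrations.append({})
        if migrations:
            migrations[-1][key] = value
    return migrations

sample = """migrate_type Migrate_to_Image
progress 4
migrate_source_vdisk_index 29
migrate_target_mdisk_index 32
migrate_type Migrate_to_Image
progress 30
migrate_source_vdisk_index 30
migrate_target_mdisk_index 31"""

for m in parse_lsmigrate(sample):
    print(m["migrate_source_vdisk_index"], m["progress"])
```

Polling this in a loop until lsmigrate returns no blocks is one way to know when it is safe to proceed to the next step.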
rediscover the disks later). It is possible to tell the Linux SCSI subsystem to rescan for new disks without requiring you to unload the HBA driver; however, we do not provide these details here. 3. Remove the volumes from the host by using the svctask rmvdiskhostmap command (Example 6-32). To double-check that you have removed the volumes, use the svcinfo lshostvdiskmap command, which shows that these disks are no longer mapped to the Linux server.
Example 6-32 Remove the volumes from the host
IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Palau palau_SANB
IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Palau palau_Data
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap Palau
IBM_2145:ITSO-CLS1:admin>
4. Remove the volumes from the SVC by using the svctask rmvdisk command. This step makes them unmanaged, as seen in Example 6-33.

Cached data: When you run the svctask rmvdisk command, the SVC will first double-check that there is no outstanding dirty cached data for the volume that is being removed. If there is still uncommitted cached data, the command fails with the following error message:

CMMVC6212E The command failed because data in the cache has not been committed to disk

You will have to wait for this cached data to be committed to the underlying storage subsystem before you can remove the volume. The SVC will automatically destage uncommitted cached data two minutes after the last write activity for the volume. How much data there is to destage, and how busy the I/O subsystem is, determine how long this command takes to complete. You can check whether the volume has uncommitted data in the cache by using the svcinfo lsvdisk <VDISKNAME> command and checking the fast_write_state attribute. This attribute has the following meanings:
empty: No modified data exists in the cache.
not_empty: Modified data might exist in the cache.
corrupt: Modified data might have existed in the cache, but any data has been lost.
IBM_2145:ITSO-CLS1:admin>svctask rmvdisk palau_SANB
IBM_2145:ITSO-CLS1:admin>svctask rmvdisk palau_Data
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
31 mdpalau_ivd1 online unmanaged 6.0GB 0000000000000013 DS4500 600a0b8000174233000000bd4877890f00000000000000000000000000000000
32 mdpalau_ivd online unmanaged 12.5GB 0000000000000014 DS4500 600a0b80001744310000011048777bda00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>
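Because svctask rmvdisk fails with CMMVC6212E while dirty cached data remains, a wrapper script can poll the fast_write_state attribute described above until it reports empty before attempting the removal. A sketch under the assumption that run_svc is a hypothetical helper that executes an SVC CLI command and returns its text output; it is not part of the SVC CLI itself:

```python
import time

def wait_for_cache_destage(run_svc, vdisk: str,
                           timeout_s: int = 300, poll_s: int = 10) -> bool:
    """Poll fast_write_state until the SVC reports no modified data in cache.

    run_svc is a hypothetical callable that runs an SVC CLI command and
    returns its text output (for example over SSH).
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        out = run_svc("svcinfo lsvdisk %s" % vdisk)
        for line in out.splitlines():
            if line.startswith("fast_write_state"):
                if line.split()[1] == "empty":
                    return True  # safe to run svctask rmvdisk now
        time.sleep(poll_s)
    return False
```

The two-minute automatic destage after the last write means a timeout of a few minutes is usually enough on an otherwise idle volume.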
5. Using Storage Manager (our storage subsystem management tool), unmap and unmask the disks from the SVC back to the Linux server.

Important: If one of the disks is used to boot your Linux server, you must make sure that it is presented back to the host as SCSI ID 0 so that the FC adapter BIOS finds that disk during its initialization.

6. Power on your host server and enter your FC HBA BIOS before booting the OS. Make sure that you change the boot configuration so that it points to your storage subsystem. In our example, we performed the following steps on a QLogic HBA:
a. Pressed Ctrl+Q to enter the HBA BIOS.
b. Opened Configuration Settings.
c. Opened Selectable Boot Settings.
d. Changed the entry from the SVC to your storage subsystem LUN with SCSI ID 0.
e. Exited the menu and saved the changes.

Important: This is the last step that you can perform and still safely back out everything that you have done so far. Up to this point, you can reverse all of the actions that you have performed to get the server back online without data loss:
- Remap and remask the LUNs back to the SVC.
- Run the svctask detectmdisk command to rediscover the MDisks.
- Recreate the volumes with the svctask mkvdisk command.
- Remap the volumes back to the server with the svctask mkvdiskhostmap command.
After you start the next step, you might not be able to turn back without the risk of data loss.

7. We now restart the Linux server. If all of the zoning and LUN masking and mapping were done successfully, the Linux server boots as though nothing has happened. However, if you only moved the application LUN to the SVC and left your Linux server running, you must follow these steps to see the new volume:
a. Load your HBA driver with the modprobe DRIVER_NAME command. If you did not (and cannot) unload your HBA driver, you can issue commands to the kernel to rescan the SCSI bus to see the new volumes (describing these details is beyond the scope of this book).
b. Check your syslog and verify that the kernel found the new volumes. On Red Hat Enterprise Linux, the syslog is stored in the /var/log/messages file.
c. If your application and data are on an LVM volume, run the vgscan command to rediscover the VG, and then run the vgchange -a y VOLUME_GROUP command to activate the VG.

8. Mount your file systems with the mount /MOUNT_POINT command (Example 6-34 on page 298). The df output shows that all of the disks are available again.
[root@Palau ~]# mount /dev/dm-2 /data
[root@Palau ~]# df
Filesystem                      1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00  10093752 1938124   7634620  21% /
/dev/sda1                          101086   12054     83813  13% /boot
tmpfs                             1033496       0   1033496   0% /dev/shm
/dev/dm-2                         5160576  158160   4740272   4% /data
[root@Palau ~]#

9. You are ready to start your application.

10. Finally, to make sure that the MDisks are removed from the SVC, run the svctask detectmdisk command. The MDisks will first be discovered as offline, and then they will automatically be removed when the SVC determines that there are no volumes associated with these MDisks.
Figure 6-80 shows our ESX server connected to the SAN infrastructure. It has two LUNs that are masked directly to it from our storage subsystem, representing a typical SAN environment with a host directly using LUNs that were created on a SAN storage subsystem:
- The ESX server's HBA cards are zoned so that they are in the Green Zone with our storage subsystem.
- The two LUNs that have been defined on the storage subsystem, using LUN masking, are directly available to our ESX server.
Attention: Be extremely careful when connecting the SVC to your storage area network, because this requires you to connect cables to your SAN switches and to alter your switch zone configuration. Performing these activities incorrectly can render your SAN inoperable, so make sure that you fully understand the effect of your actions.

You must perform these tasks to connect the SVC to your SAN fabric:
- Assemble your SVC components (nodes, uninterruptible power supply unit, and SSPC), cable the SVC correctly, power the SVC on, and verify that it is visible on your SAN.
- Create and configure your SVC cluster.
- Create these additional zones:
  - An SVC node zone (the Black Zone; see Example 6-57 on page 322)
  - A storage zone (our Red Zone)
  - A host zone (our Blue Zone)
For more detailed information about how to configure the zones correctly, see Chapter 3, Planning and configuration on page 57. Figure 6-81 shows the environment that we set up.
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_Nile_VM -ext 512
MDisk Group, id [3], successfully created
Figure 6-82 Obtain your WWN using the VMware Management Console
Use the svcinfo lshbaportcandidate command on the SVC to list the WWPNs that the SVC can see on the SAN fabric but that have not yet been allocated to a host. Example 6-36 on page 302 shows the WWPNs that the SVC found on our SAN fabric. (If your host's port does not show up, you have a zone configuration problem.)
IBM_2145:ITSO-CLS1:admin>svcinfo lshbaportcandidate
id
210000E08B89B8C0
210000E08B892BCD
210000E08B0548BC
210000E08B0541BC
210000E08B89CCC2
IBM_2145:ITSO-CLS1:admin>

After verifying that the SVC can see our host, we create the host entry and assign the WWN to this entry. Example 6-37 shows these commands.
Example 6-37 Create the host entry
IBM_2145:ITSO-CLS1:admin>svctask mkhost -name Nile -hbawwpn 210000E08B89B8C0:210000E08B892BCD
Host, id [1], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lshost Nile
id 1
name Nile
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B892BCD
node_logged_in_count 4
state active
WWPN 210000E08B89B8C0
node_logged_in_count 4
state active
IBM_2145:ITSO-CLS1:admin>
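The -hbawwpn argument of svctask mkhost takes the port WWPNs as a colon-separated list of 16-hex-digit values, while management consoles often display them with byte separators. It can therefore be convenient to validate and normalize WWPNs before building the command line. A sketch; the helper is ours, not part of the SVC CLI:

```python
import re

def build_hbawwpn_arg(wwpns: list[str]) -> str:
    """Normalize WWPNs to uppercase 16-hex-digit form and join them
    into a single -hbawwpn argument."""
    cleaned = []
    for wwpn in wwpns:
        hexonly = re.sub(r"[^0-9A-Fa-f]", "", wwpn).upper()
        if len(hexonly) != 16:
            raise ValueError("not a 16-hex-digit WWPN: %r" % wwpn)
        cleaned.append(hexonly)
    return ":".join(cleaned)

# WWPNs as they might be copied from a console, with and without separators
print(build_hbawwpn_arg(["21:00:00:e0:8b:89:b8:c0", "210000E08B892BCD"]))
# 210000E08B89B8C0:210000E08B892BCD
```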
IBM_2145:ITSO-CLS1:admin>svcinfo lscontroller
id controller_name ctrl_s/n product_id_low product_id_high
0 DS4500 1742-900
1 DS4700 1814 FAStT
If you also use a DS4000 family storage subsystem, Storage Manager provides the LUN serial numbers. Right-click your logical drive, and choose Properties. The following figures show our serial numbers. Figure 6-83 shows disk serial number VM_W2k3.
We are ready to move the ownership of the disks to the SVC, discover them as MDisks, and give them back to the host as volumes.
The virtual machines are located on these LUNs. Therefore, to move these LUNs under the control of the SVC, we do not need to reboot the entire ESX server, but we do have to shut down or suspend all VMware guests that are using these LUNs.
2. Identify all of the VMware guests that are using this LUN and shut them down. One way to identify them is to highlight the virtual machine and open the Summary tab. The datastore that is used is displayed under Datastore. Figure 6-87 on page 305 shows a Linux virtual machine using the datastore named SLES_Costa_Rica.
Figure 6-87 Identify the LUNs that are used by virtual machines
3. If you have several ESX hosts, also check the other ESX hosts to make sure that no guest operating system is running and using this datastore.
4. Repeat steps 1 to 3 for every datastore that you want to migrate.
5. After the guests are suspended, we use Storage Manager (our storage subsystem management tool) to unmap and unmask the disks from the ESX server and to remap and remask the disks to the SVC.
6. From the SVC, discover the new disks with the svctask detectmdisk command. The disks are discovered and named mdiskN, where N is the next available MDisk number (starting from 0). Example 6-39 shows the commands that we used to discover our MDisks and to verify that we have the correct MDisks.
Example 6-39 Discover the new MDisks
IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
21 mdisk21 online unmanaged 60.0GB 0000000000000008 DS4700 600a0b800026b282000041ca486d14a500000000000000000000000000000000
22 mdisk22 online unmanaged 70.0GB 0000000000000009 DS4700 600a0b80002904de0000447a486d14cd00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>
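The MDisk UID reported by svcinfo lsmdisk begins with the LUN serial number that Storage Manager shows, so the check of matching discovered MDisks against recorded serial numbers can be automated by comparing prefixes. A sketch; the helper function is ours, and the recorded serials are shown as truncated prefixes of the UIDs in this example:

```python
def match_mdisks(mdisk_uids: dict[str, str],
                 recorded_serials: dict[str, str]) -> dict[str, str]:
    """Map each recorded LUN name to the discovered MDisk whose UID
    starts with that LUN's serial number."""
    matches = {}
    for lun, serial in recorded_serials.items():
        for mdisk, uid in mdisk_uids.items():
            if uid.lower().startswith(serial.lower()):
                matches[lun] = mdisk
    return matches

# UIDs as discovered by svcinfo lsmdisk in this example
discovered = {
    "mdisk21": "600a0b800026b282000041ca486d14a500000000000000000000000000000000",
    "mdisk22": "600a0b80002904de0000447a486d14cd00000000000000000000000000000000",
}
# Serials as read from Storage Manager (hypothetical names, truncated prefixes)
recorded = {
    "VM_SLES": "600a0b800026b282000041ca",
    "VM_W2k3": "600a0b80002904de0000447a",
}
print(match_mdisks(discovered, recorded))
```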
Important: Match your discovered MDisk serial numbers (the UID column in the svcinfo lsmdisk command output) with the serial numbers that you obtained earlier (in Figure 6-83 and Figure 6-84 on page 303).

7. After we have verified that we have the correct MDisks, we rename them to avoid confusion in the future when we perform other MDisk-related tasks; see Example 6-40.
Example 6-40 Rename the MDisks
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name ESX_W2k3 mdisk22
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name ESX_SLES mdisk21
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
21 ESX_SLES online unmanaged 60.0GB 0000000000000008 DS4700 600a0b800026b282000041ca486d14a500000000000000000000000000000000
22 ESX_W2k3 online unmanaged 70.0GB 0000000000000009 DS4700 600a0b80002904de0000447a486d14cd00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>

8. We create our image mode volumes with the svctask mkvdisk command; see Example 6-41. The -vtype image parameter ensures that image mode volumes are created, which means that the virtualized disks have the exact same layout as though they were not virtualized.
Example 6-41 Create the image mode volumes
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MDG_Nile_VM -iogrp 0 -vtype image -mdisk ESX_W2k3 -name ESX_W2k3_IVD
Virtual Disk, id [29], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MDG_Nile_VM -iogrp 0 -vtype image -mdisk ESX_SLES -name ESX_SLES_IVD
Virtual Disk, id [30], successfully created
IBM_2145:ITSO-CLS1:admin>

9. Finally, we can map the new image mode volumes to the host. Use the same SCSI LUN IDs as on the storage subsystem for the mapping; see Example 6-42.
Example 6-42 Map the volumes to the host
IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Nile -scsi 0 ESX_SLES_IVD
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Nile -scsi 1 ESX_W2k3_IVD
Virtual Disk to Host map, id [1], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
1 Nile 0 30 ESX_SLES_IVD 210000E08B892BCD 60050768018301BF280000000000002A
1 Nile 1 29 ESX_W2k3_IVD 210000E08B892BCD 60050768018301BF2800000000000029
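Because the host expects its LUNs back under the same SCSI IDs, the lshostvdiskmap output can be checked against the IDs recorded from the storage subsystem before the guests are restarted. A minimal sketch; the comparison helper is ours, not an SVC command:

```python
def scsi_ids_preserved(original: dict[str, int], mapped: dict[str, int]) -> bool:
    """True when every volume kept the SCSI ID it had on the storage subsystem."""
    return all(mapped.get(name) == scsi_id for name, scsi_id in original.items())

# SCSI IDs as they were on the storage subsystem vs. the new SVC host mapping
original = {"ESX_SLES_IVD": 0, "ESX_W2k3_IVD": 1}
mapped = {"ESX_SLES_IVD": 0, "ESX_W2k3_IVD": 1}
print(scsi_ids_preserved(original, mapped))  # True
```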
10. Using the VMware management console, rescan to discover the new volumes. Open the Configuration tab, select Storage Adapters, and click Rescan. During the rescan, you can receive geometry errors when ESX discovers that the old disk has disappeared. Your volumes will appear with new vmhba devices.
11. We are ready to restart the VMware guests again. At this point, you have migrated the VMware LUNs successfully to the SVC.
We also need a Green Zone for our host to use when we are ready for it to directly access the disk, after it has been removed from the SVC. We assume that you have created the necessary zones. In our environment, we have performed these tasks:
- Created three LUNs on another storage subsystem and mapped them to the SVC
- Discovered them as MDisks
- Created a new storage pool
- Renamed these MDisks to more meaningful names
- Put all of these MDisks into this storage pool
You can see the output of our commands in Example 6-43.

Example 6-43 Create a new storage pool

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
21 ESX_SLES online image 3 MDG_Nile_VM 60.0GB 0000000000000008 DS4700 600a0b800026b282000041ca486d14a500000000000000000000000000000000
22 ESX_W2k3 online image 3 MDG_Nile_VM 70.0GB 0000000000000009 DS4700 600a0b80002904de0000447a486d14cd00000000000000000000000000000000
23 mdisk23 online unmanaged 55.0GB 000000000000000D DS4500 600a0b8000174233000000b4486d250300000000000000000000000000000000
24 mdisk24 online unmanaged 55.0GB 000000000000000E DS4500 600a0b800017443100000108486d182c00000000000000000000000000000000
25 mdisk25 online unmanaged 55.0GB 000000000000000F DS4500 600a0b8000174233000000b5486d255b00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_ESX_VD -ext 512
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name IBMESX-MD1 mdisk23
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name IBMESX-MD2 mdisk24
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name IBMESX-MD3 mdisk25
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk IBMESX-MD1 MDG_ESX_VD
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk IBMESX-MD2 MDG_ESX_VD
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk IBMESX-MD3 MDG_ESX_VD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
21 ESX_SLES online image 3 MDG_Nile_VM 60.0GB 0000000000000008 DS4700 600a0b800026b282000041ca486d14a500000000000000000000000000000000
22 ESX_W2k3 online image 3 MDG_Nile_VM 70.0GB 0000000000000009 DS4700 600a0b80002904de0000447a486d14cd00000000000000000000000000000000
23 IBMESX-MD1 online managed 4 MDG_ESX_VD 55.0GB 000000000000000D DS4500 600a0b8000174233000000b4486d250300000000000000000000000000000000
24 IBMESX-MD2 online managed 4 MDG_ESX_VD 55.0GB 000000000000000E DS4500 600a0b800017443100000108486d182c00000000000000000000000000000000
25 IBMESX-MD3 online managed 4 MDG_ESX_VD 55.0GB 000000000000000F DS4500 600a0b8000174233000000b5486d255b00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>
IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -vdisk ESX_SLES_IVD -mdiskgrp MDG_ESX_VD
IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -vdisk ESX_W2k3_IVD -mdiskgrp MDG_ESX_VD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 30
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 29
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 1
migrate_source_vdisk_index 30
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 29
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning
3 MDG_Nile_VM online 2 2 130.0GB 512 0 130.00GB 130.00GB 130.00GB 100 0
4 MDG_ESX_VD online 3 0 165.0GB 512 165.0GB 0.00MB 0.00MB 0.00MB 0 0
IBM_2145:ITSO-CLS1:admin>
If you compare the svcinfo lsmdiskgrp output after the migration, as shown in Example 6-45, you can see that all of the virtual capacity has now been moved from the old storage pool (MDG_Nile_VM) to the new storage pool (MDG_ESX_VD). The mdisk_count column shows that the capacity is now spread over three MDisks.
Example 6-45 List MDisk group
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp
id name status capacity extent_size free_capacity real_capacity overallocation warning
3 MDG_Nile_VM online 130.0GB 512 130.0GB 0.00MB 0 0
4 MDG_ESX_VD online 165.0GB 512 35.0GB 130.00GB 78 0
IBM_2145:ITSO-CLS1:admin>
The migration to the SVC is complete. The original MDisks can now be removed from the SVC, and these LUNs can be removed from the storage subsystem. If these LUNs are the last LUNs in use on the storage subsystem, we can remove it from our SAN fabric.
There are also other preparatory activities that we can perform before we shut down the host and reconfigure the LUN masking and mapping. This section describes those activities. In our example, we will move volumes that are located on a DS4500 to image mode volumes that are located on a DS4700. If you are moving the data to a new storage subsystem, it is assumed that this storage subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches. Your environment must look similar to our environment, as described in Adding a new storage subsystem to SVC on page 307 and Make fabric zone changes on page 307.
IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
23 IBMESX-MD1 online managed 4 MDG_ESX_VD 55.0GB 000000000000000D DS4500 600a0b8000174233000000b4486d250300000000000000000000000000000000
24 IBMESX-MD2 online managed 4 MDG_ESX_VD 55.0GB 000000000000000E DS4500 600a0b800017443100000108486d182c00000000000000000000000000000000
25 IBMESX-MD3 online managed 4 MDG_ESX_VD 55.0GB 000000000000000F DS4500 600a0b8000174233000000b5486d255b00000000000000000000000000000000
26 mdisk26 online unmanaged 120.0GB 000000000000000A DS4700 600a0b800026b282000041f0486e210100000000000000000000000000000000
27 mdisk27 online unmanaged 100.0GB 000000000000000B DS4700 600a0b800026b282000041e3486e20cf00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>
Even though the MDisks will not stay in the SVC for long, we suggest that you rename them to more meaningful names so that they do not get confused with other MDisks being used by other activities. We also create a storage pool to hold our new MDisks. Example 6-47 shows these tasks.
Example 6-47 Rename the MDisks
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name ESX_IVD_SLES mdisk26
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name ESX_IVD_W2K3 mdisk27
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_IVD_ESX -ext 512
MDisk Group, id [5], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning
4 MDG_ESX_VD online 3 2 165.0GB 512 35.0GB 130.00GB 130.00GB 130.00GB 78 0
5 MDG_IVD_ESX online 0 0 0 512 0 0.00MB 0.00MB 0.00MB 0 0
IBM_2145:ITSO-CLS1:admin>
Our SVC environment is ready for the volume migration to image mode volumes.
IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk ESX_SLES_IVD -mdisk ESX_IVD_SLES -mdiskgrp MDG_IVD_ESX
IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk ESX_W2k3_IVD -mdisk ESX_IVD_W2K3 -mdiskgrp MDG_IVD_ESX
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
23 IBMESX-MD1 online managed 4 MDG_ESX_VD 55.0GB 000000000000000D DS4500 600a0b8000174233000000b4486d250300000000000000000000000000000000
24 IBMESX-MD2 online managed 4 MDG_ESX_VD 55.0GB 000000000000000E DS4500 600a0b800017443100000108486d182c00000000000000000000000000000000
25 IBMESX-MD3 online managed 4 MDG_ESX_VD 55.0GB 000000000000000F DS4500 600a0b8000174233000000b5486d255b00000000000000000000000000000000
26 ESX_IVD_SLES online image 5 MDG_IVD_ESX 120.0GB 000000000000000A DS4700 600a0b800026b282000041f0486e210100000000000000000000000000000000
27 ESX_IVD_W2K3 online image 5 MDG_IVD_ESX 100.0GB 000000000000000B DS4700 600a0b800026b282000041e3486e20cf00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>
During the migration, our ESX server is unaware that its data is being physically moved between storage subsystems. We can continue to run and use the virtual machines that are running on the server. You can check the migration status with the svcinfo lsmigrate command, as shown in Example 6-49 on page 313.
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type Migrate_to_Image
progress 2
migrate_source_vdisk_index 29
migrate_target_mdisk_index 27
migrate_target_mdisk_grp 5
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type Migrate_to_Image
progress 12
migrate_source_vdisk_index 30
migrate_target_mdisk_index 26
migrate_target_mdisk_grp 5
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>

After the migration has completed, the image mode volumes are ready to be removed from the ESX server, and the real LUNs can be mapped and masked directly to the host by using the storage subsystem's tools.
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap
id name SCSI_id vdisk_id wwpn vdisk_UID
1 Nile 0 30 210000E08B892BCD 60050768018301BF280000000000002A
1 Nile 1 29 210000E08B892BCD 60050768018301BF2800000000000029
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_name RC_id RC_name vdisk_UID copy_count
0 vdisk_A 0 io_grp0 online 2 MDG_Image 36.0GB
29 ESX_W2k3_IVD 0 io_grp0 online 4 MDG_ESX_VD 70.0GB striped 60050768018301BF2800000000000029 1
2. Shut down or suspend all guests using the LUNs. You can use the same method that is used in Moving VMware guest LUNs on page 304 to identify the guests that are using this LUN.
3. Remove the volumes from the host by using the svctask rmvdiskhostmap command (Example 6-51). To double-check that the volumes have been removed, use the svcinfo lshostvdiskmap command, which shows that these volumes are no longer mapped to the ESX server.
Example 6-51 Remove the volumes from the host
IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Nile ESX_W2k3_IVD
IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Nile ESX_SLES_IVD

4. Remove the volumes from the SVC by using the svctask rmvdisk command, which makes the MDisks unmanaged, as shown in Example 6-52.

Cached data: When you run the svctask rmvdisk command, the SVC first double-checks that there is no outstanding dirty cached data for the volume that is being removed. If there is still uncommitted cached data, the command fails with this error message:

CMMVC6212E The command failed because data in the cache has not been committed to disk

You have to wait for this cached data to be committed to the underlying storage subsystem before you can remove the volume. The SVC will automatically destage uncommitted cached data two minutes after the last write activity for the volume. How much data there is to destage, and how busy the I/O subsystem is, determine how long this command takes to complete. You can check whether the volume has uncommitted data in the cache by using the svcinfo lsvdisk <VDISKNAME> command and checking the fast_write_state attribute. This attribute has the following meanings:
empty: No modified data exists in the cache.
not_empty: Modified data might exist in the cache.
corrupt: Modified data might have existed in the cache, but the data has been lost.
IBM_2145:ITSO-CLS1:admin>svctask rmvdisk ESX_W2k3_IVD
IBM_2145:ITSO-CLS1:admin>svctask rmvdisk ESX_SLES_IVD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
26 ESX_IVD_SLES online unmanaged 120.0GB 000000000000000A DS4700 600a0b800026b282000041f0486e210100000000000000000000000000000000
27 ESX_IVD_W2K3 online unmanaged 100.0GB 000000000000000B DS4700 600a0b800026b282000041e3486e20cf00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>

5. Using Storage Manager (our storage subsystem management tool), unmap and unmask the disks from the SVC back to the ESX server. Remember that in Example 6-50 on page 313, we recorded the SCSI LUN IDs. To map your LUNs on the storage subsystem, use the same SCSI LUN IDs that you used in the SVC.

Important: This is the last step that you can perform and still safely back out of everything you have done so far. Up to this point, you can reverse all of the actions that you have performed to get the server back online without data loss:
- Remap and remask the LUNs back to the SVC.
- Run the svctask detectmdisk command to rediscover the MDisks.
- Recreate the volumes with the svctask mkvdisk command.
- Remap the volumes back to the server with the svctask mkvdiskhostmap command.
After you start the next step, you might not be able to turn back without the risk of data loss.

6. Using the VMware management console, rescan to discover the new volume. Figure 6-89 shows the view before the rescan. Figure 6-90 on page 316 shows the view after the rescan. Note that the size of the LUN has changed, because we have moved to another LUN on another storage subsystem.
During the rescan, you can receive geometry errors when ESX discovers that the old disk has disappeared. Your volume will appear with a new vmhba address, and VMware will recognize it as our VMWARE-GUESTS disk. 7. We are now ready to restart the VMware guests. 8. Finally, to make sure that the MDisks are removed from the SVC, run the svctask detectmdisk command. The MDisks are discovered as offline and then automatically removed when the SVC determines that there are no volumes associated with these MDisks.
type of data that is stored on those LUNs, taking into account availability, performance, and redundancy. We describe this step in 6.8.4, Migrating image mode volumes to volumes on page 326.
- Move your AIX server's LUNs back to image mode volumes so that they can be remapped and remasked directly back to the AIX server. This step starts in 6.8.5, Preparing to migrate from the SVC on page 328.
Use these activities individually or together to migrate your AIX server's LUNs from one storage subsystem to another storage subsystem by using the SVC as your migration tool. If you do not use all three activities, you can introduce the SVC into, or remove it from, your environment. The only downtime that is required for these activities is the time that it takes you to remask and remap the LUNs between the storage subsystems and your SVC. We show our AIX environment in Figure 6-91.
Figure 6-91 shows our AIX server connected to our SAN infrastructure. It has two LUNs (hdisk3 and hdisk4) that are masked directly to it from our storage subsystem. The hdisk3 disk makes up the itsoaixvg LVM group, and the hdisk4 disk makes up the itsoaixvg1 LVM group, as shown in Example 6-53 on page 318.
#lsdev
hdisk0 16 Bit LVD SCSI Disk Drive
hdisk1 16 Bit LVD SCSI Disk Drive
hdisk2 16 Bit LVD SCSI Disk Drive
hdisk3 1814 DS4700 Disk Array Device
hdisk4 1814 DS4700 Disk Array Device
#lspv
hdisk0 rootvg active
hdisk1 rootvg active
hdisk2 rootvg active
hdisk3 itsoaixvg active
hdisk4 itsoaixvg1 active
#
Our AIX server represents a typical SAN environment with a host directly using LUNs that were created on a SAN storage subsystem, as shown in Figure 6-91 on page 317:
- The AIX server's HBA cards are zoned so that they are in the Green (dotted line) Zone with our storage subsystem.
- The two LUNs, hdisk3 and hdisk4, have been defined on the storage subsystem. Using LUN masking, they are directly available to our AIX server.
IBM_2145:ITSO-CLS2:admin>svctask mkmdiskgrp -name aix_imgmdg -ext 512
MDisk Group, id [7], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning
7 aix_imgmdg online 0 0 0 512 0 0.00MB 0.00MB 0.00MB 0 0
IBM_2145:ITSO-CLS2:admin>
#lsdev -Ccadapter|grep fcs
fcs0 Available 1Z-08 FC Adapter
fcs1 Available 1D-08 FC Adapter
#lscfg -vpl fcs0
fcs0 U0.1-P2-I4/Q1 FC Adapter

Part Number.................00P4494
EC Level....................A
Serial Number...............1E3120A68D
Manufacturer................001E
Device Specific.(CC)........2765
FRU Number..................00P4495
Network Address.............10000000C932A7FB
ROS Level and ID............02C03951
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FF401210
Device Specific.(Z5)........02C03951
Device Specific.(Z6)........06433951
Device Specific.(Z7)........07433951
Device Specific.(Z8)........20000000C932A7FB
Device Specific.(Z9)........CS3.91A1
Device Specific.(ZA)........C1D3.91A1
Device Specific.(ZB)........C2D3.91A1
Device Specific.(YL)........U0.1-P2-I4/Q1
PLATFORM SPECIFIC
Name: fibre-channel
Model: LP9002
Node: fibre-channel@1
Device Type: fcp
Physical Location: U0.1-P2-I4/Q1

#lscfg -vpl fcs1
fcs1 U0.1-P2-I5/Q1 FC Adapter

Part Number.................00P4494
EC Level....................A
Serial Number...............1E3120A67B
Manufacturer................001E
Device Specific.(CC)........2765
FRU Number..................00P4495
Network Address.............10000000C932A800
ROS Level and ID............02C03891
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........02000909
Device Specific.(Z4)........FF401050
Device Specific.(Z5)........02C03891
Device Specific.(Z6)........06433891
Device Specific.(Z7)........07433891
Device Specific.(Z8)........20000000C932A800
Device Specific.(Z9)........CS3.82A1
Device Specific.(ZA)........C1D3.82A1
Device Specific.(ZB)........C2D3.82A1
Device Specific.(YL)........U0.1-P2-I5/Q1
PLATFORM SPECIFIC

Name: fibre-channel
Model: LP9000
Node: fibre-channel@1
Device Type: fcp
Physical Location: U0.1-P2-I5/Q1
#

The svcinfo lshbaportcandidate command on the SVC lists all of the WWNs, which have not yet been allocated to a host, that the SVC can see on the SAN fabric. Example 6-56 shows the output of the nodes that it found in our SAN fabric. (If the port did not show up, it indicates a zone configuration problem.)
Example 6-56 Add the host to the SVC
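The body of this example appears to have been lost in this copy of the book. Based on the WWPNs of the Kanaga host adapters shown in Example 6-57, the output would have resembled the following transcript (the exact formatting is our assumption, not taken from the original):

```text
IBM_2145:ITSO-CLS2:admin>svcinfo lshbaportcandidate
id
10000000C932A7FB
10000000C932A800
IBM_2145:ITSO-CLS2:admin>
```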
After verifying that the SVC can see our host (Kanaga), we create the host entry and assign the WWN to this entry, as shown with the commands in Example 6-57.
Example 6-57 Create the host entry
IBM_2145:ITSO-CLS2:admin>svctask mkhost -name Kanaga -hbawwpn 10000000C932A7FB:10000000C932A800
Host, id [5], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lshost Kanaga
id 5
name Kanaga
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 10000000C932A800
node_logged_in_count 2
state inactive
WWPN 10000000C932A7FB
node_logged_in_count 2
state inactive
IBM_2145:ITSO-CLS2:admin>
IBM_2145:ITSO-CLS2:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0 DS4500 IBM 1742-900
1 DS4700 IBM 1814
IBM_2145:ITSO-CLS2:admin>

Names: The svctask chcontroller command enables you to change the discovered storage subsystem name in the SVC. In complex SANs, we suggest that you rename your storage subsystem to a more meaningful name.
We are ready to move the ownership of the disks to the SVC, discover them as MDisks, and give them back to the host as volumes.
#varyoffvg itsoaixvg
#varyoffvg itsoaixvg1
#lsvg
rootvg
itsoaixvg
itsoaixvg1
#lsvg -o
rootvg

3. Using Storage Manager (our storage subsystem management tool), we can unmap and unmask the disks from the AIX server and remap and remask the disks to the SVC.
4. From the SVC, discover the new disks with the svctask detectmdisk command. The disks are discovered and named mdiskN, where N is the next available MDisk number (starting from 0). Example 6-60 shows the commands that we used to discover our MDisks and to verify that we have the correct MDisks.
Example 6-60 Discover the new MDisks
IBM_2145:ITSO-CLS2:admin>svctask detectmdisk
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
24 mdisk24 online unmanaged 5.0GB 0000000000000008 DS4700 600a0b800026b282000043224874f41900000000000000000000000000000000
25 mdisk25 online unmanaged 8.0GB 0000000000000009 DS4700 600a0b800026b2820000432f4874f57c00000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>

Important: Match your discovered MDisk serial numbers (the UID in the svcinfo lsmdisk command display) with the serial numbers that you recorded earlier (in Figure 6-93 and Figure 6-94 on page 323).

5. After we have verified that we have the correct MDisks, we rename them to avoid confusion in the future when we perform other MDisk-related tasks (Example 6-61).
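The UID check described in the Important note can also be scripted. The following Python sketch is ours, not part of the SVC CLI; it assumes that you have already parsed the svcinfo lsmdisk output into a name-to-UID mapping, and it uses the UIDs shown in this section:

```python
# Hypothetical helper: verify that discovered MDisk UIDs match the serial
# numbers recorded from the storage subsystem before renaming anything.

# Names we intend to assign, mapped to the UIDs recorded earlier.
expected = {
    "Kanaga_AIX":  "600a0b800026b282000043224874f41900000000000000000000000000000000",
    "Kanaga_AIX1": "600a0b800026b2820000432f4874f57c00000000000000000000000000000000",
}

# Parsed rows from `svcinfo lsmdisk` (mdisk name -> UID); in practice these
# values would come from splitting the CLI output.
discovered = {
    "mdisk24": "600a0b800026b282000043224874f41900000000000000000000000000000000",
    "mdisk25": "600a0b800026b2820000432f4874f57c00000000000000000000000000000000",
}

def match_mdisks(expected, discovered):
    """Return {new_name: current_mdisk_name} for every UID that matches."""
    by_uid = {uid: name for name, uid in discovered.items()}
    return {new: by_uid[uid] for new, uid in expected.items() if uid in by_uid}

print(match_mdisks(expected, discovered))
# e.g. {'Kanaga_AIX': 'mdisk24', 'Kanaga_AIX1': 'mdisk25'}
```

Only MDisks whose UIDs match your records should then be renamed with svctask chmdisk.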
Example 6-61 Rename the MDisks
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name Kanaga_AIX mdisk24
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name Kanaga_AIX1 mdisk25
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
24 Kanaga_AIX online unmanaged 5.0GB 0000000000000008 DS4700 600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 online unmanaged 8.0GB 0000000000000009 DS4700 600a0b800026b2820000432f4874f57c00000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>

6. We create our image mode volumes with the svctask mkvdisk command and the option -vtype image (Example 6-62). This command virtualizes the disks in the exact same layout as though they were not virtualized.
Example 6-62 Create the image mode volumes
IBM_2145:ITSO-CLS2:admin>svctask mkvdisk -mdiskgrp aix_imgmdg -iogrp 0 -vtype image -mdisk Kanaga_AIX -name IVD_Kanaga
Virtual Disk, id [8], successfully created
IBM_2145:ITSO-CLS2:admin>svctask mkvdisk -mdiskgrp aix_imgmdg -iogrp 0 -vtype image -mdisk Kanaga_AIX1 -name IVD_Kanaga1
Virtual Disk, id [9], successfully created
IBM_2145:ITSO-CLS2:admin>

7. Finally, we can map the new image mode volumes to the host (Example 6-63).
Example 6-63 Map the volumes to the host
IBM_2145:ITSO-CLS2:admin>svctask mkvdiskhostmap -host Kanaga IVD_Kanaga
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO-CLS2:admin>svctask mkvdiskhostmap -host Kanaga IVD_Kanaga1
Virtual Disk to Host map, id [1], successfully created
IBM_2145:ITSO-CLS2:admin>

FlashCopy: While the application is in a quiescent state, you can choose to use FlashCopy to copy the new image volumes onto other volumes. You do not need to wait until the FlashCopy process has completed before starting your application.
Now, we are ready to perform the following steps to put the image mode volumes online: 1. Remove the old disk definitions, if you have not done so already. 2. Run the cfgmgr -vs command to rediscover the available LUNs. 3. If your application and data are on an LVM volume, rediscover the VG, and then, run the varyonvg VOLUME_GROUP command to activate the VG. 4. Mount your file systems with the mount /MOUNT_POINT command. 5. You are ready to start your application.
IBM_2145:ITSO-CLS2:admin>svctask mkmdiskgrp -name aix_vd -ext 512
IBM_2145:ITSO-CLS2:admin>svctask detectmdisk
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
24 Kanaga_AIX online image 7 aix_imgmdg 5.0GB 0000000000000008 DS4700 600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 online image 7 aix_imgmdg 8.0GB 0000000000000009 DS4700 600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 mdisk26 online unmanaged 6.0GB 000000000000000A DS4700 600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 mdisk27 online unmanaged 6.0GB 000000000000000B DS4700 600a0b800026b2820000438448751da900000000000000000000000000000000
28 mdisk28 online unmanaged 6.0GB 000000000000000C DS4700 600a0b800026b2820000439048751dc200000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name aix_vd0 mdisk26
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name aix_vd1 mdisk27
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name aix_vd2 mdisk28
IBM_2145:ITSO-CLS2:admin>
IBM_2145:ITSO-CLS2:admin>svctask addmdisk -mdisk aix_vd0 aix_vd
IBM_2145:ITSO-CLS2:admin>svctask addmdisk -mdisk aix_vd1 aix_vd
IBM_2145:ITSO-CLS2:admin>svctask addmdisk -mdisk aix_vd2 aix_vd
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
24 Kanaga_AIX online image 7 aix_imgmdg 5.0GB 0000000000000008 DS4700 600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 online image 7 aix_imgmdg 8.0GB 0000000000000009 DS4700 600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 aix_vd0 online managed 6 aix_vd 6.0GB 000000000000000A DS4700 600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 aix_vd1 online managed 6 aix_vd 6.0GB 000000000000000B DS4700 600a0b800026b2820000438448751da900000000000000000000000000000000
28 aix_vd2 online managed 6 aix_vd 6.0GB 000000000000000C DS4700 600a0b800026b2820000439048751dc200000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>
IBM_2145:ITSO-CLS2:admin>svctask migratevdisk -vdisk IVD_Kanaga -mdiskgrp aix_vd
IBM_2145:ITSO-CLS2:admin>svctask migratevdisk -vdisk IVD_Kanaga1 -mdiskgrp aix_vd
IBM_2145:ITSO-CLS2:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 10
migrate_source_vdisk_index 8
migrate_target_mdisk_grp 6
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 9
migrate_target_mdisk_grp 6
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS2:admin>

After this task has completed, Example 6-66 on page 328 shows that the volumes are spread over three MDisks in the aix_vd storage pool. The old storage pool is empty.
Chapter 6. Data migration
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdiskgrp aix_vd
id 6
name aix_vd
status online
mdisk_count 3
vdisk_count 2
capacity 18.0GB
extent_size 512
free_capacity 5.0GB
virtual_capacity 13.00GB
used_capacity 13.00GB
real_capacity 13.00GB
overallocation 72
warning 0
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdiskgrp aix_imgmdg
id 7
name aix_imgmdg
status online
mdisk_count 2
vdisk_count 0
capacity 13.0GB
extent_size 512
free_capacity 13.0GB
virtual_capacity 0.00MB
used_capacity 0.00MB
real_capacity 0.00MB
overallocation 0
warning 0
IBM_2145:ITSO-CLS2:admin>

Our migration to the SVC is complete. You can now remove the original MDisks from the SVC, and you can remove these LUNs from the storage subsystem. If these were the last LUNs in use on our storage subsystem, we can also remove the subsystem from our SAN fabric.
There are other preparatory activities to be performed before we shut down the host and reconfigure the LUN masking and mapping. This section covers those activities. If you are moving the data to a new storage subsystem, it is assumed that this storage subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches. Your environment must look similar to our environment, as shown in Figure 6-95.
IBM_2145:ITSO-CLS2:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0 DS4500 IBM 1742-900
1 DS4700 IBM 1814 FAStT
IBM_2145:ITSO-CLS2:admin>
IBM_2145:ITSO-CLS2:admin>svctask detectmdisk
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
24 Kanaga_AIX offline managed 7 aix_imgmdg 5.0GB 0000000000000008 DS4700 600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 offline managed 7 aix_imgmdg 8.0GB 0000000000000009 DS4700 600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 aix_vd0 online managed 6 aix_vd 6.0GB 000000000000000A DS4700 600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 aix_vd1 online managed 6 aix_vd 6.0GB 000000000000000B DS4700 600a0b800026b2820000438448751da900000000000000000000000000000000
28 aix_vd2 online managed 6 aix_vd 6.0GB 000000000000000C DS4700 600a0b800026b2820000439048751dc200000000000000000000000000000000
29 mdisk29 online unmanaged 10.0GB 0000000000000010 DS4500 600a0b8000174233000000b84876512f00000000000000000000000000000000
30 mdisk30 online unmanaged 10.0GB 0000000000000011 DS4500 600a0b80001744310000010e4876444600000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>

Even though the MDisks will not stay in the SVC for long, we suggest that you rename them to more meaningful names so that they are not confused with other MDisks that are used by other activities. We also create the storage pool to hold our new MDisks, as shown in Example 6-69 on page 331.
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name AIX_MIG mdisk29
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name AIX_MIG1 mdisk30
IBM_2145:ITSO-CLS2:admin>svctask mkmdiskgrp -name KANAGA_AIXMIG -ext 512
MDisk Group, id [3], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning
3 KANAGA_AIXMIG online 0 0 0 512 0 0.00MB 0.00MB 0.00MB 0 0
6 aix_vd online 3 2 18.0GB 512 5.0GB 13.00GB 13.00GB 13.00GB 72 0
7 aix_imgmdg offline 2 0 13.0GB 512 13.0GB 0.00MB 0.00MB 0.00MB 0 0
IBM_2145:ITSO-CLS2:admin>
At this point, our SVC environment is ready for the volume migration to image mode volumes.
IBM_2145:ITSO-CLS2:admin>svctask migratetoimage -vdisk IVD_Kanaga -mdisk AIX_MIG -mdiskgrp KANAGA_AIXMIG
IBM_2145:ITSO-CLS2:admin>svctask migratetoimage -vdisk IVD_Kanaga1 -mdisk AIX_MIG1 -mdiskgrp KANAGA_AIXMIG
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
24 Kanaga_AIX offline managed 7 aix_imgmdg 5.0GB 0000000000000008 DS4700 600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 offline managed 7 aix_imgmdg 8.0GB 0000000000000009 DS4700 600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 aix_vd0 online managed 6 aix_vd 6.0GB 000000000000000A DS4700 600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 aix_vd1 online managed 6 aix_vd 6.0GB 000000000000000B DS4700 600a0b800026b2820000438448751da900000000000000000000000000000000
28 aix_vd2 online managed 6 aix_vd 6.0GB 000000000000000C DS4700 600a0b800026b2820000439048751dc200000000000000000000000000000000
29 AIX_MIG online image 3 KANAGA_AIXMIG 10.0GB 0000000000000010 DS4500 600a0b8000174233000000b84876512f00000000000000000000000000000000
30 AIX_MIG1 online image 3 KANAGA_AIXMIG 10.0GB 0000000000000011 DS4500 600a0b80001744310000010e4876444600000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>svcinfo lsmigrate
migrate_type Migrate_to_Image
progress 50
migrate_source_vdisk_index 9
migrate_target_mdisk_index 30
migrate_target_mdisk_grp 3
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type Migrate_to_Image
progress 50
migrate_source_vdisk_index 8
migrate_target_mdisk_index 29
migrate_target_mdisk_grp 3
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS2:admin>
During the migration, our AIX server is unaware that its data is being physically moved between storage subsystems. After the migration is complete, the image mode volumes are ready to be removed from the AIX server, and the real LUNs can be mapped and masked directly to the host by using the storage subsystem's tool.
3. Remove the volumes from the host by using the svctask rmvdiskhostmap command (Example 6-71). To double-check that you have removed the volumes, use the svcinfo lshostvdiskmap command, which shows that these disks are no longer mapped to the AIX server.
Example 6-71 Remove the volumes from the host
IBM_2145:ITSO-CLS2:admin>svctask rmvdiskhostmap -host Kanaga IVD_Kanaga
IBM_2145:ITSO-CLS2:admin>svctask rmvdiskhostmap -host Kanaga IVD_Kanaga1
IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Kanaga
IBM_2145:ITSO-CLS2:admin>

4. Remove the volumes from the SVC by using the svctask rmvdisk command, which will make the MDisks unmanaged, as shown in Example 6-72.

Cached data: When you run the svctask rmvdisk command, the SVC first double-checks that there is no outstanding dirty cached data for the volume being removed. If uncommitted cached data still exists, the command fails with the following error message:
CMMVC6212E The command failed because data in the cache has not been committed to disk
You will have to wait for this cached data to be committed to the underlying storage subsystem before you can remove the volume. The SVC automatically destages uncommitted cached data two minutes after the last write activity for the volume. How much data there is to destage, and how busy the I/O subsystem is, determine how long this command takes to complete.
You can check whether the volume has uncommitted data in the cache by using the svcinfo lsvdisk <VDISKNAME> command and checking the fast_write_state attribute. This attribute has the following meanings:
empty      No modified data exists in the cache.
not_empty  Modified data might exist in the cache.
corrupt    Modified data might have existed in the cache, but any modified data has been lost.
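The wait-for-destage behavior described in the Cached data note can be automated with a simple polling loop. This Python sketch is our illustration, not SVC code: get_fast_write_state is a stand-in for running svcinfo lsvdisk and parsing the fast_write_state attribute.

```python
import time

def wait_for_destage(get_fast_write_state, timeout_s=300, poll_s=10):
    """Poll until fast_write_state is 'empty' (safe to run svctask rmvdisk).

    Returns True when the cache is clean, False on timeout, and raises if
    the state reports 'corrupt' (modified cache data was lost).
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        state = get_fast_write_state()
        if state == "empty":
            return True
        if state == "corrupt":
            raise RuntimeError("modified cache data was lost")
        time.sleep(poll_s)
    return False

# Simulated state sequence for illustration only; a real caller would
# query the SVC each time.
states = iter(["not_empty", "not_empty", "empty"])
print(wait_for_destage(lambda: next(states), poll_s=0))  # True
```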
IBM_2145:ITSO-CLS2:admin>svctask rmvdisk IVD_Kanaga
IBM_2145:ITSO-CLS2:admin>svctask rmvdisk IVD_Kanaga1
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
29 AIX_MIG online unmanaged 10.0GB 0000000000000010 DS4500 600a0b8000174233000000b84876512f00000000000000000000000000000000
30 AIX_MIG1 online unmanaged 10.0GB 0000000000000011 DS4500 600a0b80001744310000010e4876444600000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>
5. Using Storage Manager (our storage subsystem management tool), unmap and unmask the disks from the SVC back to the AIX server.

Important: This step is the last step that you can perform and still safely back out of everything you have done so far. Up to this point, you can reverse all of the actions that you have performed to get the server back online without data loss:
- Remap and remask the LUNs back to the SVC.
- Run the svctask detectmdisk command to rediscover the MDisks.
- Recreate the volumes with the svctask mkvdisk command.
- Remap the volumes back to the server with the svctask mkvdiskhostmap command.
After you start the next step, you might not be able to turn back without the risk of data loss.

We are ready to access the LUNs from the AIX server. If all of the zoning, LUN masking, and mapping were done successfully, our AIX server will boot as though nothing has happened:
1. Run the cfgmgr -S command to discover the storage subsystem.
2. Use the lsdev -Cc disk command to verify the discovery of the new disks.
3. Remove the references to all of the old disks. Example 6-73 shows the removal using SDD, and Example 6-74 on page 335 shows the removal using SDDPCM.
Example 6-73 Remove references to old paths using SDD
#lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Available 1Z-08-02 1742-900 (900) Disk Array Device
hdisk4 Available 1Z-08-02 1742-900 (900) Disk Array Device
hdisk5 Defined 1Z-08-02 SAN volume Controller Device
hdisk6 Defined 1Z-08-02 SAN volume Controller Device
hdisk7 Defined 1D-08-02 SAN volume Controller Device
hdisk8 Defined 1D-08-02 SAN volume Controller Device
hdisk10 Defined 1Z-08-02 SAN volume Controller Device
hdisk11 Defined 1Z-08-02 SAN volume Controller Device
hdisk12 Defined 1D-08-02 SAN volume Controller Device
hdisk13 Defined 1D-08-02 SAN volume Controller Device
vpath0 Defined Data Path Optimizer Pseudo Device Driver
vpath1 Defined Data Path Optimizer Pseudo Device Driver
vpath2 Defined Data Path Optimizer Pseudo Device Driver
# for i in 5 6 7 8 10 11 12 13; do rmdev -dl hdisk$i -R;done
hdisk5 deleted
hdisk6 deleted
hdisk7 deleted
hdisk8 deleted
hdisk10 deleted
hdisk11 deleted
hdisk12 deleted
hdisk13 deleted
#for i in 0 1 2; do rmdev -dl vpath$i -R;done
vpath0 deleted
vpath1 deleted
vpath2 deleted
#lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Available 1Z-08-02 1742-900 (900) Disk Array Device
hdisk4 Available 1Z-08-02 1742-900 (900) Disk Array Device

Example 6-74 Remove references to old paths using SDDPCM

# lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Defined 1D-08-02 MPIO FC 2145
hdisk4 Defined 1D-08-02 MPIO FC 2145
hdisk5 Available 1D-08-02 MPIO FC 2145
# for i in 3 4; do rmdev -dl hdisk$i -R;done
hdisk3 deleted
hdisk4 deleted
# lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk5 Available 1D-08-02 MPIO FC 2145
4. If your application and data are on an LVM volume, rediscover the VG and then run the varyonvg VOLUME_GROUP command to activate the VG. 5. Mount your file systems with the mount /MOUNT_POINT command. 6. You are ready to start your application. Finally, to make sure that the MDisks are removed from the SVC, run the svctask detectmdisk command. The MDisks will first be discovered as offline and then they will automatically be removed after the SVC determines that there are no volumes associated with these MDisks.
3. Depending on your operating system, unmount the selected LUNs or shut down the host.
4. Add the SVC between your storage and the host.
5. Mount the LUNs or start the host again.
6. Start the migration.
7. After the migration process is complete, unmount the selected LUNs or shut down the host.
8. Remove the SVC from your SAN.
9. Mount the LUNs, or start the host again.
10. The migration is complete.

As you can see, extremely little downtime is required. If you prepare everything correctly, you can reduce your downtime to a few minutes. The copy process is handled by the SVC, so the host does not hinder the performance while the migration progresses.

To use the SVC for storage migrations, perform the steps that are described in the following sections:
- 6.5.2, Adding the SVC between the host system and the DS4700 on page 240
- 6.5.6, Migrating the volume from image mode to image mode on page 264
- 6.5.7, Removing image mode data from the SVC on page 274
As shown in Figure 6-97, a thin-provisioned volume has these components: Used capacity This term specifies the portion of real capacity that is being used to store data. For non-thin-provisioned copies, this value is the same as the volume capacity. If the volume
copy is thin-provisioned, the value increases from zero to the real capacity value as more of the volume is written to.
Real capacity This capacity is the real allocated space in the storage pool. In a thin-provisioned volume, this value can differ from the total capacity.
Free capacity This value specifies the difference between the real capacity and the used capacity values. The SVC continuously tries to keep this capacity available as a contingency. If the free capacity is used up and the volume has been configured with the -autoexpand option, the SVC automatically expands the allocated space for this volume to maintain the contingency.
Grains This value is the smallest unit into which the allocated space can be divided.
Metadata This value is allocated in the real capacity, and it tracks the used capacity, real capacity, and free capacity.
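The relationship between these values can be illustrated with a minimal model. This is our sketch, not SVC code; the contingency size and expansion policy are simplified assumptions, and the 323.57 MB starting value is taken from the -rsize 2% example later in this section. Values are in MB.

```python
# Minimal model of the -autoexpand behavior described above (our
# simplification, not the actual SVC algorithm).

class ThinCopy:
    def __init__(self, virtual_mb, rsize_mb, contingency_mb):
        self.virtual = virtual_mb        # volume (virtual) capacity
        self.real = rsize_mb             # allocated space in the storage pool
        self.used = 0.0                  # written data plus metadata
        self.contingency = contingency_mb

    @property
    def free(self):
        # free capacity = real capacity - used capacity
        return self.real - self.used

    def write(self, mb):
        self.used += mb
        # autoexpand: grow real capacity to restore the contingency
        # buffer, but never beyond the virtual capacity
        if self.free < self.contingency and self.real < self.virtual:
            self.real = min(self.virtual, self.used + self.contingency)

copy = ThinCopy(virtual_mb=15 * 1024, rsize_mb=323.57, contingency_mb=100)
copy.write(200)              # buffer still intact: no expansion
assert copy.real == 323.57
copy.write(200)              # buffer breached: real capacity grows
print(round(copy.real, 2))   # 500.0
```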
IBM_2145:ITSO-CLS2:admin>svctask mkvdisk -mdiskgrp 0 -iogrp 0 -mdisk 0:1:2:3:4:5 -node 1 -vtype striped -size 15 -unit gb -fmtdisk -name VD_Full
Virtual Disk, id [2], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_Full
id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status offline
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
capacity 15.00GB
type striped
formatted yes
.
.
vdisk_UID 60050768018401BF280000000000000B
mdisk_grp_name MDG_DS47
used_capacity 15.00GB
real_capacity 15.00GB
free_capacity 0.00MB
overallocation 100

2. We then add a thin-provisioned volume copy with the volume mirroring option by using the addvdiskcopy command and the autoexpand parameter, as shown in Example 6-76 on page 339.
IBM_2145:ITSO-CLS2:admin>svctask addvdiskcopy -mdiskgrp 1 -mdisk 6:7:8:9 -vtype striped -rsize 2% -autoexpand -grainsize 32 -unit gb VD_Full
VDisk [2] copy [1] successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_Full
id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id many
mdisk_grp_name many
capacity 15.00GB
type many
formatted yes
mdisk_id many
mdisk_name many
vdisk_UID 60050768018401BF280000000000000B
sync_rate 50
copy_count 2

copy_id 0
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
used_capacity 15.00GB
real_capacity 15.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize

copy_id 1
status online
sync no
primary no
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32

As you can see in Example 6-76, the VD_Full has a copy_id 1 where the used_capacity is 0.41 MB, which is equal to the metadata, because only zeros exist on the disk.
The real_capacity is 323.57 MB, which corresponds to the -rsize 2% value that is specified in the addvdiskcopy command. The free_capacity is 323.17 MB, which is equal to the real capacity minus the used capacity. If zeros are written to the disk, the thin-provisioned volume does not consume space. Example 6-77 shows that the thin-provisioned copy still does not consume space even when the copies are in sync.
Example 6-77 Thin-provisioned volume display
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisksyncprogress 2
vdisk_id vdisk_name copy_id progress estimated_completion_time
2 VD_Full 0 100
2 VD_Full 1 100
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_Full
id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id many
mdisk_grp_name many
capacity 15.00GB
type many
formatted yes
mdisk_id many
mdisk_name many
vdisk_UID 60050768018401BF280000000000000B
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 2

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 15.00GB
real_capacity 15.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize

copy_id 1
status online
sync yes
primary no
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32

3. We can split the volume mirror or remove one of the copies, keeping the thin-provisioned copy as our valid copy, by using the splitvdiskcopy command or the rmvdiskcopy command:
- If you need your copy as a thin-provisioned clone, we suggest that you use the splitvdiskcopy command, because that command generates a new volume that you can map to any server that you want.
- If you are migrating from a previously fully allocated volume to a thin-provisioned volume without any effect on the server operations, we suggest that you use the rmvdiskcopy command. In this case, the original volume name is kept and it remains mapped to the same server.
Example 6-78 shows the splitvdiskcopy command.
Example 6-78 splitvdiskcopy command
IBM_2145:ITSO-CLS2:admin>svctask splitvdiskcopy -copy 1 -name VD_SEV VD_Full
Virtual Disk, id [7], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk -filtervalue name=VD*
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state
2 VD_Full 0 io_grp0 online 0 MDG_DS47 15.00GB striped 60050768018401BF280000000000000B 0 1 empty
7 VD_SEV 0 io_grp0 online 1 MDG_DS83 15.00GB striped 60050768018401BF280000000000000D 0 1 empty
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_SEV
id 7
name VD_SEV
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
capacity 15.00GB
type striped
formatted no
vdisk_UID 60050768018401BF280000000000000D
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32

Example 6-79 shows the rmvdiskcopy command.
Example 6-79 rmvdiskcopy command
IBM_2145:ITSO-CLS2:admin>svctask rmvdiskcopy -copy 0 VD_Full
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk -filtervalue name=VD*
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state
2 VD_Full 0 io_grp0 online 1 MDG_DS83 15.00GB striped 60050768018401BF280000000000000B 0 1 empty
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk 2
id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
capacity 15.00GB
type striped
formatted no
vdisk_UID 60050768018401BF280000000000000B
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 1
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32
Chapter 7.
Easy Tier
In this chapter we describe the function provided by the Easy Tier disk performance optimization feature of the SAN Volume Controller. We also explain how to activate the Easy Tier process for both evaluation purposes and for automatic extent migration.
MDisks that are used in a single tier storage pool should have the same hardware characteristics: for example, the same RAID type, RAID array size, disk type, disk revolutions per minute (RPM), and controller performance characteristics.
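The guidance above can be expressed as a simple validation step. The sketch below is illustrative only: the MDisk dictionaries and their field names (raid_type, array_size, disk_type, rpm) are hypothetical, not actual SVC CLI output fields.

```python
def check_single_tier_pool(mdisks):
    """Return the names of attributes on which the pool's MDisks differ.

    `mdisks` is a list of dicts describing the backing arrays; the keys used
    here are illustrative names, not SVC CLI fields.
    """
    attrs = ("raid_type", "array_size", "disk_type", "rpm")
    mismatches = []
    for attr in attrs:
        values = {m[attr] for m in mdisks}
        if len(values) > 1:          # more than one distinct value: mixed pool
            mismatches.append(attr)
    return mismatches

pool = [
    {"raid_type": "raid5", "array_size": 8, "disk_type": "sas", "rpm": 15000},
    {"raid_type": "raid5", "array_size": 8, "disk_type": "sas", "rpm": 10000},
]
print(check_single_tier_pool(pool))   # ['rpm']: the pool mixes 15k and 10k drives
```

An empty result means the pool is homogeneous on the checked attributes; any listed attribute flags a mixed single tier pool worth reviewing.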
Figure 7-2 shows a scenario in which a storage pool is populated with two different MDisk types: one belonging to an SSD array, and one belonging to an HDD array. Although this example shows RAID5 arrays, other RAID types can be used.
Adding SSDs to the pool also makes additional space available for new volumes or volume expansion.
3. Data Migration Planner
Using the extents previously identified, the Data Migration Planner step builds the extent migration plan for the storage pool.
4. Data Migrator
The Data Migrator step involves the actual movement, or migration, of the volume's extents up to, or down from, the high disk tier. The extent migration rate is capped at a maximum of 30 MBps, which equates to around 3 TB per day migrated between disk tiers.
When relocating volume extents, Easy Tier performs these actions:
It attempts to migrate the most active volume extents up to SSD first.
To ensure that a free extent is available, a less frequently accessed extent might first need to be migrated back to HDD.
A previous migration plan and any queued extents that are not yet relocated are abandoned.
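The relationship between the 30 MBps migration cap and the quoted daily figure is simple arithmetic, sketched here as a sanity check:

```python
# Sanity check of the Easy Tier migration cap described above: a 30 MBps
# ceiling corresponds to roughly 3 TB of extent migration per day.
MIGRATION_CAP_MBPS = 30          # maximum extent migration rate (MB per second)
SECONDS_PER_DAY = 24 * 60 * 60

mb_per_day = MIGRATION_CAP_MBPS * SECONDS_PER_DAY   # 2,592,000 MB
tb_per_day = mb_per_day / 1_000_000                 # decimal terabytes

print(f"{tb_per_day:.2f} TB/day")   # 2.59 TB/day, i.e. "around 3 TB a day"
```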
Examples of the use of these parameters are shown in 7.5, Using Easy Tier with the SVC CLI on page 353 and 7.6, Using Easy Tier with the SVC GUI on page 359.
7.3.1 Prerequisites
No Easy Tier license is required for the SVC; the function is included in the V6.1 code. For Easy Tier to migrate extents, you need disk storage available in different tiers, for example, a mix of SSD and HDD.
Automatic data placement and extent I/O activity monitors are supported on each copy of a mirrored volume. Easy Tier works with each copy independently of the other copy.

Note: Volume mirroring can have different workload characteristics on each copy of the data because reads are normally directed to the primary copy and writes occur to both copies. Thus, the number of extents that Easy Tier migrates to the SSD tier will probably differ for each copy.

If possible, the SAN Volume Controller creates new volumes or volume expansions by using extents from MDisks in the HDD tier, but it uses extents from MDisks in the SSD tier if necessary.

When a volume is migrated out of a storage pool that is managed with Easy Tier, Easy Tier automatic data placement mode is no longer active on that volume. Automatic data placement is also turned off while a volume is being migrated, even when it is between pools that both have Easy Tier automatic data placement enabled. Automatic data placement for the volume is re-enabled when the migration completes.
7.3.3 Limitations
Limitations exist when using IBM System Storage Easy Tier on the SAN Volume Controller.

Limitations when removing an MDisk by using the -force parameter
When an MDisk is deleted from a storage pool with the -force parameter, extents in use are migrated to MDisks in the same tier as the MDisk that is being removed, if possible. If insufficient extents exist in that tier, extents from the other tier are used.

Limitations when migrating extents
When Easy Tier automatic data placement is enabled for a volume, the svctask migrateexts command-line interface (CLI) command cannot be used on that volume.

Limitations when migrating a volume to another storage pool
When the SAN Volume Controller migrates a volume to a new storage pool, Easy Tier automatic data placement between the two tiers is temporarily suspended. After the volume is migrated to its new storage pool, Easy Tier automatic data placement between the generic SSD tier and the generic HDD tier resumes for the moved volume, if appropriate. When the SAN Volume Controller migrates a volume from one storage pool to another, it attempts to migrate each extent to an extent in the new storage pool from the same tier as the original extent. In several cases, such as when a target tier is unavailable, the other tier is used. For example, the generic SSD tier might be unavailable in the new storage pool.

Limitations when migrating a volume to image mode
Easy Tier automatic data placement does not support image mode. When a volume with Easy Tier automatic data placement mode active is migrated to image mode, Easy Tier automatic data placement mode is no longer active on that volume. Image mode and sequential volumes cannot be candidates for automatic data placement; however, Easy Tier does support evaluation mode for image mode volumes.
Best practices
Always set the storage pool -easytier value to on rather than to the default value auto. This makes it easier to turn on evaluation mode for existing single tier pools, and no further changes will be needed when you move to multitier pools. See Easy Tier activation on page 350 for more information about the mix of pool and volume settings.
Using Easy Tier can make it more appropriate to use smaller storage pool extent sizes.
Offloading statistics
To extract the summary performance data, use one of these methods.
The distribution of hot data and cold data for each volume is shown in the volume heat distribution report. The report displays the portion of the capacity of each volume on SSD (red), and HDD (blue), as shown in Figure 7-5.
Deleted lines: Many non-Easy Tier-related lines have been deleted in the command output or responses in examples shown in the following sections to enable you to focus on Easy Tier-related information only.
IBM_2145:ITSO-CLS5:admin>svcinfo lsmdiskgrp -filtervalue "name=Single*"
id name                     status mdisk_count vdisk_count easy_tier easy_tier_status
27 Single_Tier_Storage_Pool online 3           1           off       inactive
IBM_2145:ITSO-CLS5:admin>svcinfo lsmdiskgrp Single_Tier_Storage_Pool
id 27
name Single_Tier_Storage_Pool
status online
mdisk_count 3
vdisk_count 1
.
easy_tier off
easy_tier_status inactive
.
tier generic_ssd
tier_mdisk_count 0
.
tier generic_hdd
tier_mdisk_count 3
tier_capacity 200.25GB
IBM_2145:ITSO-CLS5:admin>svctask chmdiskgrp -easytier on Single_Tier_Storage_Pool
IBM_2145:ITSO-CLS5:admin>svcinfo lsmdiskgrp Single_Tier_Storage_Pool
id 27
name Single_Tier_Storage_Pool
status online
mdisk_count 3
vdisk_count 1
.
easy_tier on
easy_tier_status active
.
tier generic_ssd
tier_mdisk_count 0
.
tier generic_hdd
tier_mdisk_count 3
tier_capacity 200.25GB
------------ Now Repeat for the Volume ------------
IBM_2145:ITSO-CLS5:admin>svcinfo lsvdisk -filtervalue "mdisk_grp_name=Single*"
id name          status mdisk_grp_id mdisk_grp_name           capacity type
27 ITSO_Volume_1 online 27           Single_Tier_Storage_Pool 10.00GB  striped
IBM_2145:ITSO-CLS5:admin>svcinfo lsvdisk ITSO_Volume_1
id 27
name ITSO_Volume_1
.
easy_tier off
easy_tier_status inactive
.
tier generic_ssd
tier_capacity 0.00MB
.
tier generic_hdd
tier_capacity 10.00GB
IBM_2145:ITSO-CLS5:admin>svctask chvdisk -easytier on ITSO_Volume_1
IBM_2145:ITSO-CLS5:admin>svcinfo lsvdisk ITSO_Volume_1
id 27
name ITSO_Volume_1
.
easy_tier on
easy_tier_status measured
.
tier generic_ssd
tier_capacity 0.00MB
IBM_2145:ITSO-CLS5:admin>svcinfo lsmdisk
mdisk_id mdisk_name        status mdisk_grp_name          capacity raid_level tier
299      SSD_Array_RAID5_1 online Multi_Tier_Storage_Pool 203.6GB  raid5      generic_hdd
300      SSD_Array_RAID5_2 online Multi_Tier_Storage_Pool 203.6GB  raid5      generic_hdd
IBM_2145:ITSO-CLS5:admin>svcinfo lsmdisk SSD_Array_RAID5_2
mdisk_id 300
mdisk_name SSD_Array_RAID5_2
status online
mdisk_grp_id 28
mdisk_grp_name Multi_Tier_Storage_Pool
capacity 203.6GB
IBM_2145:ITSO-CLS5:admin>svcinfo lsmdiskgrp -filtervalue "name=Multi*"
id name                    mdisk_count vdisk_count capacity easy_tier easy_tier_status
28 Multi_Tier_Storage_Pool 5           1           606.00GB auto      inactive
IBM_2145:ITSO-CLS5:admin>svcinfo lsmdiskgrp Multi_Tier_Storage_Pool
id 28
name Multi_Tier_Storage_Pool
status online
mdisk_count 5
vdisk_count 1
.
easy_tier auto
easy_tier_status inactive
.
tier generic_ssd
tier_mdisk_count 0
.
tier generic_hdd
tier_mdisk_count 5
IBM_2145:ITSO-CLS5:admin>svcinfo lsmdisk SSD_Array_RAID5_1
id 299
name SSD_Array_RAID5_1
status online
.
tier generic_hdd
IBM_2145:ITSO-CLS5:admin>svctask chmdisk -tier generic_ssd SSD_Array_RAID5_1
IBM_2145:ITSO-CLS5:admin>svctask chmdisk -tier generic_ssd SSD_Array_RAID5_2
IBM_2145:ITSO-CLS5:admin>svcinfo lsmdisk SSD_Array_RAID5_1
id 299
name SSD_Array_RAID5_1
status online
.
tier generic_ssd
IBM_2145:ITSO-CLS5:admin>svcinfo lsmdiskgrp Multi_Tier_Storage_Pool
id 28
name Multi_Tier_Storage_Pool
status online
mdisk_count 5
vdisk_count 1
.
easy_tier auto
easy_tier_status active
.
tier generic_ssd
tier_mdisk_count 2
tier_capacity 407.00GB
.
tier generic_hdd
tier_mdisk_count 3
IBM_2145:ITSO-CLS5:admin>svcinfo lsvdisk ITSO_Volume_10
id 28
name ITSO_Volume_10
mdisk_grp_name Multi_Tier_Storage_Pool
capacity 10.00GB
type striped
.
easy_tier on
easy_tier_status active
.
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 10.00GB

Easy Tier will measure the volume in this example and perform a hot extent migration from the generic_hdd tier MDisks to the generic_ssd tier MDisks. Also note that the generic_hdd tier still holds the entire capacity of the volume, because the generic_ssd tier_capacity value is 0.00 MB. The allocated capacity on the generic_hdd tier will gradually change as Easy Tier optimizes performance by moving extents into the generic_ssd tier.
tier generic_ssd
tier_capacity 407.00GB
tier_free_capacity 100.00GB
tier generic_hdd
tier_capacity 18.85TB
tier_free_capacity 10.40TB

As you can see, two tiers are now available in our SVC cluster: generic_ssd and generic_hdd. Extents are in use on both tiers; compare the tier_capacity and tier_free_capacity values. However, this command does not show whether the SSD storage is being used by the Easy Tier process. To determine whether Easy Tier is actively measuring or migrating extents within the cluster, view the volume status as shown previously in Example 7-5 on page 358.
This is because, by default, all MDisks are initially discovered as Hard Disk Drives (HDDs); see the MDisk properties panel Figure 7-7 on page 360.
Therefore, for Easy Tier to take effect, you need to change the disk tier. Right-click the selected MDisk and choose Select Tier, as shown in Figure 7-8.
Now set the MDisk Tier to Solid-State Drive, as shown in Figure 7-9 on page 361.
The MDisk now has the correct tier and so the properties value is correct for a multidisk tier pool, as shown in Figure 7-10.
Chapter 8.
8.1 FlashCopy
The FlashCopy function of the IBM System Storage SAN Volume Controller (SVC) provides the capability to perform a point-in-time copy of one or more volumes. In this section we describe the inner workings of FlashCopy and provide details of its configuration and use.

You can use FlashCopy to help you solve the critical but challenging task of creating consistent copies of data sets while they remain online and actively in use. Because the copy is performed at the block level, it operates below the host operating system and cache and is therefore transparent to the host.

While the FlashCopy operation is performed, the source volume is frozen briefly to initialize the FlashCopy bitmap, and then I/O is allowed to resume. Although several FlashCopy options require the data to be copied from the source to the target in the background, which can take some time to complete, the resulting data on the target volume is presented so that the copy appears to have completed immediately.
8.1.2 Backup
FlashCopy does not reduce the time it takes to perform a backup. However, it can be used to minimize and, under certain conditions, eliminate application downtime associated with performing backups. After the FlashCopy is performed, the resulting image of the data can be backed up to tape. After the copy to tape has been completed, the image data is redundant and the target volumes can be discarded. Usually when FlashCopy is used for backup purposes, the target data is managed as read-only.
8.1.3 Restore
FlashCopies can be taken periodically and the targets left online so that they can be rapidly restored from. The target can be used to perform a restore of individual files, or the entire source volume can be restored if required.
Reverse FlashCopy enables FlashCopy targets to become restore points for the source without breaking the FlashCopy relationship and without having to wait for the original copy operation to complete. The maximum number of supported FlashCopy mappings is 8192 per SVC cluster. The size of the source and target volumes cannot be altered (increased or decreased) while a FlashCopy mapping is defined.
Note that regardless of whether the initial FlashCopy map (volume X to volume Y) is incremental, the Reverse FlashCopy operation copies only the modified data. Consistency Groups are reversed by creating a set of new reverse FlashCopy maps and adding them to a new reverse Consistency Group. Consistency Groups cannot contain more than one FlashCopy map with the same target volume.
Figure 8-2 Tivoli Storage Manager for Advanced Copy Services features
Tivoli FlashCopy Manager provides many of the features of Tivoli Storage Manager for Advanced Copy Services without the requirement to use Tivoli Storage Manager. With Tivoli FlashCopy Manager, you can coordinate and automate host preparation steps before issuing FlashCopy start commands to ensure that a consistent backup of the application is made. You can put databases into hot backup mode and flush filesystem cache prior to starting the FlashCopy. FlashCopy Manager also allows for easier management of on-disk backups using FlashCopy, and provides a simple interface to perform the reverse operation. Figure 8-3 on page 369 shows the FlashCopy Manager feature.
Describing Tivoli Storage Manager FlashCopy Manager is beyond the scope of this publication.
Figure 8-6 shows four targets and mappings taken from a single source, along with their interdependencies. In this example Target 1 is the oldest (as measured from the time it was started) through to Target 4, which is the newest. The ordering is important because of the way in which data is copied when multiple target volumes are defined and because of the dependency chain that results. A write to the source volume does not cause its data to be copied to all of the targets. Instead, it is copied to the newest target volume only (Target 4 in Figure 8-6). The older targets will refer to new targets first before referring to the source. From the point of view of an intermediate target disk (neither the oldest or the newest), it treats the set of newer target volumes and the true source volume as a type of composite source. It treats all older volumes as a kind of target (and behaves like a source to them). If the mapping for an intermediate target volume shows 100% progress, its target volume contains a complete set of data. In this case, mappings treat the set of newer target volumes, up to and including the 100% progress target, as a form of composite source. A dependency relationship exists between a particular target and all newer targets (up to and including a target that shows 100% progress) that share the same source until all data has been copied to this target and all older targets. You can read more about Multiple Target FlashCopy in 8.4.6, Interaction and dependency between Multiple Target FlashCopy mappings on page 375.
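The copy-to-newest-target behavior and the resulting read dependency chain can be sketched in a few lines. This is a conceptual model only (grain-level maps, not the SVC data structures or bitmaps): a source write first preserves the old grain in the newest target, and a target read walks from that target through newer targets toward the source.

```python
class MultiTargetFlashCopy:
    """Toy model of one source with multiple FlashCopy targets.

    targets[-1] is the newest. Each target keeps a grain map of the grains it
    physically holds. Conceptual sketch only, not the SVC implementation.
    """

    def __init__(self, source_grains, n_targets):
        self.source = dict(source_grains)              # grain index -> data
        self.targets = [{} for _ in range(n_targets)]  # oldest .. newest

    def write_source(self, grain, data):
        newest = self.targets[-1]
        if grain not in newest:      # copy-on-write goes to the newest target only
            newest[grain] = self.source[grain]
        self.source[grain] = data

    def read_target(self, index, grain):
        # Check this target, then every newer target, then the source.
        for t in self.targets[index:]:
            if grain in t:
                return t[grain]
        return self.source[grain]

fc = MultiTargetFlashCopy({0: "A", 1: "B"}, n_targets=3)
fc.write_source(0, "A2")        # the old "A" is preserved in the newest target
print(fc.read_target(0, 0))     # oldest target still sees "A" via the newest target
print(fc.read_target(0, 1))     # grain 1 was never written: falls through to source
```

The oldest target never received grain 0 itself, yet still reads the point-in-time data, which is exactly the dependency on newer targets described above.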
When Consistency Groups are used, the FlashCopy commands are issued to the FlashCopy Consistency Group, which performs the operation on all FlashCopy mappings contained within the Consistency Group. Figure 8-7 illustrates a Consistency Group consisting of two FlashCopy mappings.
Note: After an individual FlashCopy mapping has been added to a Consistency Group, it can only be managed as part of the group. Operations such as prepare, start, and stop are no longer allowed on the individual mapping.
Dependent writes
To illustrate why it is crucial to use Consistency Groups when a data set spans multiple volumes, consider the following typical sequence of writes for a database update transaction:
1. A write is executed to update the database log, indicating that a database update is about to be performed.
2. A second write is executed to perform the actual update to the database.
3. A third write is executed to update the database log, indicating that the database update has completed successfully.
The database ensures the correct ordering of these writes by waiting for each step to complete before starting the next step. However, if the database log (updates 1 and 3) and the database itself (update 2) are on separate volumes, it is possible for the FlashCopy of the database volume to occur before the FlashCopy of the database log. This can result in the target volumes seeing writes (1) and (3) but not (2), because the FlashCopy of the database volume occurred before the write completed. In this case, if the database was restarted by using the backup made from the FlashCopy target volumes, the database log would indicate that the transaction completed successfully when in fact it did not, because the FlashCopy of the volume with the database file was started (the bitmap was created) before the write completed to the volume. Therefore, the transaction is lost and the integrity of the database is in question.
To overcome the issue of dependent writes across volumes and to create a consistent image of the client data, it is necessary to perform a FlashCopy operation on multiple volumes as an atomic operation. To accomplish this the SVC supports the concept of Consistency Groups. A FlashCopy Consistency Group can contain up to 512 FlashCopy mappings (this is the maximum number of FlashCopy mappings supported by the SVC cluster). FlashCopy commands can then be issued to the FlashCopy Consistency Group and thereby simultaneously for all of the FlashCopy mappings that are defined in the Consistency Group. For example, when issuing a FlashCopy start command to the Consistency Group, all of the FlashCopy mappings in the Consistency Group are started at the same time, resulting in a point-in-time copy that is consistent across all of the FlashCopy mappings that are contained in the Consistency Group.
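The dependent-write problem can be demonstrated with a small simulation (hypothetical variable names; plain Python lists stand in for volumes, not SVC code). Copying the log and database volumes at different instants can capture writes 1 and 3 without write 2, which is precisely the inconsistency a Consistency Group's atomic start prevents:

```python
import copy

log, db = [], []     # two "volumes": a database log and the database itself

def snapshot(vol):
    """Stand-in for a FlashCopy of a single volume at one instant."""
    return copy.deepcopy(vol)

# Dependent write sequence for one transaction:
log.append("begin update")        # write 1
snap_db = snapshot(db)            # copy of the DB volume taken too early...
db.append("row updated")          # write 2
log.append("update committed")    # write 3
snap_log = snapshot(log)          # ...and of the log volume taken later

# The non-atomic copies are inconsistent: the log image claims a commit
# that the database image never received.
inconsistent = "update committed" in snap_log and "row updated" not in snap_db
print(inconsistent)   # True
```

An atomic snapshot of both lists between any two steps could never produce this combination, which is the guarantee the Consistency Group start provides.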
Maximum configurations
Table 8-1 lists the FlashCopy properties and maximum configurations.
Table 8-1 FlashCopy properties and maximum configurations

FlashCopy targets per source: 256
This maximum is the maximum number of FlashCopy mappings that can exist with the same source volume.

FlashCopy mappings per cluster: 4,096
The number of mappings is no longer limited by the number of volumes in the cluster, so the FlashCopy component limit applies.

FlashCopy Consistency Groups per cluster: 127
This maximum is an arbitrary limit that is policed by the software.

FlashCopy volumes per I/O Group: 1,024
This maximum is a limit on the quantity of FlashCopy mappings using bitmap space from this I/O Group. This maximum configuration will consume all 512 MB of bitmap space for the I/O Group and allow no Metro and Global Mirror bitmap space. The default is 40 TB.

FlashCopy mappings per Consistency Group: 512
This limit is due to the time that is taken to prepare a Consistency Group with a large number of mappings.
Source reads
Reads are performed from the source volume. This is the same as for non-FlashCopy volumes.
Source writes
Writes to the source will cause the grain to be copied to the target if it has not already been copied, then the write will be performed to the source.
Target reads
Reads are performed from the target if the grain has already been copied. Otherwise, the read is performed from the source and no copy is performed.
Target writes
Writes to the target cause the grain to be copied from the source to the target first, unless the entire grain is being written, in which case the write completes to the target without the copy.
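The four I/O rules above (source reads, copy-on-write for source writes, redirected target reads, and target writes) can be condensed into one grain-level sketch. This is a conceptual model, not the SVC indirection layer itself:

```python
class FlashCopyMapping:
    """Grain-level model of the four FlashCopy I/O rules (conceptual only)."""

    def __init__(self, source):
        self.source = dict(source)   # grain index -> data
        self.target = {}             # grains copied (or written) so far

    def read_source(self, g):
        return self.source[g]        # source reads are unaffected by FlashCopy

    def write_source(self, g, data):
        if g not in self.target:     # copy the grain to the target first
            self.target[g] = self.source[g]
        self.source[g] = data        # then perform the write to the source

    def read_target(self, g):
        # Copied grains come from the target; others still come from the source.
        return self.target.get(g, self.source[g])

    def write_target(self, g, data):
        # A whole-grain write needs no copy first. (A partial write would copy
        # the grain from the source before merging; omitted in this sketch.)
        self.target[g] = data

m = FlashCopyMapping({0: "x", 1: "y"})
m.write_source(0, "x2")                    # grain 0 is preserved on the target
print(m.read_target(0), m.read_target(1))  # x y: the target keeps the point in time
```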
Target 0 is not dependent on a source because it has completed copying, but it has two dependent mappings (Target 1 and Target 2). Target 1 is dependent upon Target 0 and will remain dependent until all of Target 1 has been copied; Target 2 is also dependent on it, because Target 2 is only 20% copy complete. After all of Target 1 has been copied, it can move to the Idle_copied state. Target 2 is dependent upon Target 0 and Target 1 and will remain dependent until all of Target 2 has been copied. No target is dependent on Target 2, so when all of the data has been copied to Target 2, it can move to the Idle_copied state. Target 3 has completed copying, so it is not dependent on any other maps.
When reading a grain from a target volume: if the grain has already been copied to this target, read it from the target. Otherwise, if any newer targets exist for this source in which this grain has already been copied, read from the oldest of these targets; if not, read from the source volume.
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk Image_volume_A
id 8
name Image_volume_A
IO_group_id 0
IO_group_name io_grp0
status online
storage_pool_id 2
storage_pool_name Storage_Pool_Image
capacity 36.0GB
type image
.
.
.
autoexpand
warning
grainsize
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -size 36 -unit gb -name volume_A_copy -mdiskgrp Storage_Pool_DS47 -vtype striped -iogrp 1
Virtual Disk, id [19], successfully created
Tip: Alternatively, you can use the expandvdisksize and shrinkvdisksize commands to modify the size of the volume. See 9.5.10, Expanding a volume on page 471 and 9.5.16, Shrinking a volume on page 476 for more information. You can use an image mode volume as either a FlashCopy source volume or a target volume.
Prepare
The prestartfcmap or prestartfcconsistgrp command is directed either to a Consistency Group for FlashCopy mappings that are members of a normal Consistency Group, or to the mapping name for FlashCopy mappings that are stand-alone mappings. The command places the FlashCopy mapping into the Preparing state.
Important: The prestartfcmap or prestartfcconsistgrp command can corrupt any data that previously resided on the target volume because cached writes are discarded. Even if the FlashCopy mapping is never started, the data from the target might have logically changed during the act of preparing to start the FlashCopy mapping.

Flush done
The FlashCopy mapping automatically moves from the Preparing state to the Prepared state after all cached data for the source is flushed and all cached data for the target is no longer valid.

Start
When all of the FlashCopy mappings in a Consistency Group are in the Prepared state, the FlashCopy mappings can be started. To preserve the cross-volume Consistency Group, the start of all of the FlashCopy mappings in the Consistency Group must be synchronized correctly with respect to I/Os that are directed at the volumes by using the startfcmap or startfcconsistgrp command. The following actions occur when the startfcmap or startfcconsistgrp command runs:
1. New reads and writes to all source volumes in the Consistency Group are paused in the cache layer until all ongoing reads and writes beneath the cache layer are completed.
2. After all FlashCopy mappings in the Consistency Group are paused, the internal cluster state is set to allow FlashCopy operations.
3. After the cluster state is set for all FlashCopy mappings in the Consistency Group, read and write operations continue on the source volumes.
4. The target volumes are brought online.
As part of the startfcmap or startfcconsistgrp command, read and write caching is enabled for both the source and target volumes.

Modify
You can modify the following FlashCopy mapping properties: the FlashCopy mapping name, the clean rate, the Consistency Group, the copy rate (for background copy), and automatic deletion of the mapping when the background copy is complete.

Stop
There are two separate mechanisms by which a FlashCopy mapping can be stopped: you have issued a command, or an I/O error has occurred.

Delete
This command requests that the specified FlashCopy mapping be deleted. If the FlashCopy mapping is in the Stopped state, the force flag must be used.

Flush failed
If the flush of data from the cache cannot be completed, the FlashCopy mapping enters the Stopped state.

Copy complete
After all of the source data has been copied to the target and there are no dependent mappings, the state is set to Copied. If the option to automatically delete the mapping after the background copy completes is specified, the FlashCopy mapping is deleted automatically. If this option is not specified, the FlashCopy mapping is not deleted automatically and can be reactivated by preparing and starting again.

Bitmap online/offline
The node has failed.
Idle_or_copied
Read and write caching is enabled for both the source and the target. A FlashCopy mapping exists between the source and target but the source and target behave as independent volumes in this state.
Copying
The FlashCopy indirection layer governs all I/O to the source and target volumes while the background copy is running. The background copy process is copying grains from the source to the target. Reads and writes are executed on the target as though the contents of the source were instantaneously copied to the target during the startfcmap or startfcconsistgrp command. The source and target can be independently updated. Internally, the target depends on the source for certain tracks. Read and write caching is enabled on the source and the target.
Stopped
The FlashCopy was stopped either by a user command or by an I/O error. When a FlashCopy mapping is stopped, the integrity of the data on the target volume is lost. Therefore, while the FlashCopy mapping is in this state, the target volume is in the Offline state. To regain access to the target, the mapping must be started again (the previous point-in-time will be lost) or the FlashCopy mapping must be deleted. The source volume is accessible, and read/write caching is enabled for the source. In the Stopped state, a mapping can either be prepared again or deleted.
Stopping
The mapping is in the process of transferring data to a dependent mapping. The behavior of the target volume depends on whether the background copy process had completed while the mapping was in the Copying state. If the copy process had completed, the target volume remains online while the stopping copy process completes. If the copy process had not completed, data in the cache is discarded for the target volume. The target volume is taken offline, and the stopping copy process runs. After the data has been copied, a stop complete asynchronous event notification is issued. The mapping moves to the Idle/Copied state if the background copy has completed, or to the Stopped state if it has not. The source volume remains accessible for I/O.
Suspended
The FlashCopy was in the Copying or Stopping state when access to the metadata was lost. As a result both the source and target volumes are offline and the background copy process has been halted. When the metadata becomes available again, the FlashCopy mapping will return to the Copying or Stopping state. Access to the source and target volumes will be restored, and the background copy or stopping process will resume. Unflushed data that was written to the source or target before the FlashCopy was suspended is pinned in cache until the FlashCopy mapping leaves the Suspended state.
Preparing
The FlashCopy is in the process of preparing the mapping. While in this state, data from the cache is destaged to disk so that a consistent copy of the source exists on disk. At this time the cache is operating in write-through mode, and therefore writes to the source volume experience additional latency. The target volume is reported as online but will not perform reads or writes; these reads and writes are failed by the SCSI front end.
Before starting the FlashCopy mapping, it is important that any cache at the host level, for example, buffers in the host operating system or application, is also instructed to flush any outstanding writes to the source volume. Performing the cache flush required as part of the startfcmap or startfcconsistgrp command causes I/Os to be delayed while they wait for the cache flush to complete. To overcome this problem, SVC FlashCopy supports the prestartfcmap or prestartfcconsistgrp commands, which prepare for a FlashCopy start while still allowing I/Os to continue to the source volume.
In the Preparing state, the FlashCopy mapping is prepared by the following steps:
1. Flushing any modified write data associated with the source volume from the cache. Read data for the source is left in the cache.
2. Placing the cache for the source volume into write-through mode, so that subsequent writes wait until data has been written to disk before completing the write command that is received from the host.
3. Discarding any read or write data that is associated with the target volume from the cache.
Prepared
When in the Prepared state, the FlashCopy mapping is ready to perform a start. While the FlashCopy mapping is in this state, the target volume is in the Offline state. In the Prepared state, writes to the source volume experience additional latency because the cache is operating in write-through mode.
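The states and events described above form a small state machine. The transition table below is a sketch assembled from the descriptions in this section (event names as used above; it is not exhaustive, and any pair not listed is treated as invalid):

```python
# Minimal FlashCopy mapping state machine assembled from the state and event
# descriptions in this section (conceptual sketch only).
TRANSITIONS = {
    ("Idle_or_copied", "prepare"):       "Preparing",
    ("Preparing",      "flush_done"):    "Prepared",
    ("Prepared",       "start"):         "Copying",
    ("Copying",        "copy_complete"): "Idle_or_copied",
    ("Copying",        "stop"):          "Stopping",
    ("Stopping",       "copy_complete"): "Idle_or_copied",  # background copy had finished
    ("Stopping",       "stop_complete"): "Stopped",         # background copy had not finished
    ("Stopped",        "prepare"):       "Preparing",
}

def step(state, event):
    """Apply one event; raise if the event is not valid in this state."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"event {event!r} not valid in state {state!r}")

state = "Idle_or_copied"
for event in ("prepare", "flush_done", "start", "copy_complete"):
    state = step(state, event)
print(state)   # Idle_or_copied: one full prepare/start/copy cycle
```

Driving the table with an invalid event, such as starting a mapping that has not been prepared, raises an error, mirroring the rule that mappings must reach the Prepared state before they can be started.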
Table 8-4 FlashCopy mapping state summary

Idling/Copied: source Online, write-back cache; target Online, write-back cache.
Copying: source Online, write-back cache; target Online, write-back cache.
Stopped: source Online, write-back cache; target Offline, cache N/A.
Stopping: source Online, write-back cache; target Online if copy complete, Offline if copy not complete, cache N/A.
Suspended: source Offline, write-back cache; target Offline, cache N/A.
Preparing: source Online, write-through cache; target Online but not accessible, cache N/A.
Prepared: source Online, write-through cache; target Online but not accessible, cache N/A.
A fully allocated source volume can be incrementally copied by using FlashCopy to another fully allocated volume, at the same time as it is being copied to multiple thin-provisioned targets (taken at separate points in time). This combination allows a single full backup to be kept for recovery purposes, separates the backup workload from the production workload, and at the same time allows older thin-provisioned backups to be retained.
The grains per second numbers represent the maximum number of grains that the SVC will copy per second, assuming that the bandwidth to the managed disks (MDisks) can accommodate this rate. If the SVC is unable to achieve these copy rates because of insufficient bandwidth from the SVC nodes to the MDisks, then background copy I/O contends for resources on an equal basis with the I/O that is arriving from the hosts. Both background copy I/O and I/O that is arriving from the hosts tend to see an increase in latency and a consequential reduction in throughput. Both background copy and foreground I/O continue to make forward progress, and do not stop, hang, or cause the node to fail. The background copy is performed by both nodes of the I/O Group in which the source volume resides.
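The grains-per-second figures referred to above derive from the mapping's copyrate property. The band table is not reproduced in this excerpt, so the values below are stated as an assumption based on commonly documented SVC behavior (copyrate 1-10 gives 128 KiB/s, doubling for each band of 10 up to 64 MiB/s at 91-100; copyrate 0 disables background copy):

```python
def background_copy_rate(copyrate, grain_kib=256):
    """Approximate background copy throughput for an SVC copyrate value.

    Assumed band table (an assumption; not reproduced in the text above):
    copyrate 1-10 -> 128 KiB/s, doubling per band of 10, 91-100 -> 64 MiB/s.
    Returns (KiB per second, grains per second) for the given grain size.
    """
    if copyrate == 0:
        return 0.0, 0.0              # background copy disabled
    band = (copyrate - 1) // 10      # 0..9
    kib_per_sec = 128 * (2 ** band)
    grains_per_sec = kib_per_sec / grain_kib
    return kib_per_sec, grains_per_sec

print(background_copy_rate(50))    # (2048, 8.0): 2 MiB/s, 8 grains/s at 256 KiB grains
print(background_copy_rate(100))   # (65536, 256.0): 64 MiB/s
```

Under this assumption, the default copyrate of 50 sustains about 2 MiB/s per mapping, which is the figure the MDisk bandwidth must accommodate for the behavior described above.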
8.4.14 Synthesis
The FlashCopy functionality in SVC simply creates copy volumes. All of the data in the source volume is copied to the destination volume, including operating system, logical volume manager, and application metadata. Note: Certain operating systems are unable to use FlashCopy without an additional step, which is termed synthesis. In summary, synthesis performs a type of transformation on the operating system metadata on the target volume so that the operating system can use the disk.
Node failure
Normally, two copies of the FlashCopy bitmaps are maintained, one on each of the two nodes that make up the I/O Group of the source volume. When a node fails, one copy of the bitmaps, for all FlashCopy mappings whose source volume is a member of the failing node's I/O Group, becomes inaccessible. FlashCopy continues with a single copy of the FlashCopy bitmap, stored as non-volatile in the remaining node of the source I/O Group. The cluster metadata is updated to indicate that the missing node no longer holds a current bitmap. When the failing node recovers, or a replacement node is added to the I/O Group, the bitmap redundancy is restored.
FlashCopy destination: Not supported
Snapshot
Options:
- If Auto-Create Target: thin-provisioned target with rsize = 0, autoexpand = on, and the target pool is the pool of the source volume's primary copy
- No background copy

Use case:
The user wants to produce a copy of a volume without impacting the availability of the volume. The user does not anticipate many changes to be made to the source or target volume; a significant proportion of the volume will remain unchanged. Because only changed data requires a copy to be made, the total amount of disk space required for the copy is significantly reduced, which allows many such snapshot copies to be used in the environment. Snapshots are therefore useful for providing protection against corruption or similar issues with the validity of the data, but they do not provide protection from physical controller failures.
Snapshots can also provide a vehicle for performing repeatable testing including what-if modeling based on production data without requiring a full copy of the data to be provisioned.
Clone
Options:
- If auto-create target: the created volume is identical to the primary copy of the source volume (including storage pool)
- Auto-Delete
- Clean rate = 0
- Background copy rate = 50

Use case:
Users want a copy of the volume that they can modify without impacting the original. After the clone is established, there is no expectation that it will be refreshed or that there will be any further need to reference the original production data again. If the source is thin-provisioned and auto-create target is used, the target will also be thin-provisioned.
Backup
Options:
- If auto-create target: the created volume is identical to the primary copy of the source volume
- Incremental
- Clean rate = 0
- Background copy rate = 50

Use case:
The user wants to create a copy of the volume that can be used as a backup in the event that the source becomes unavailable, as in the case of the loss of the underlying physical controller. The user plans to periodically update the secondary copy and does not want to suffer the overhead of creating a completely new copy each time (incremental FlashCopy is faster than a full copy, which helps reduce the window in which the new backup is not yet fully effective). If the source is thin-provisioned and auto-create target is used, the target will also be thin-provisioned. Another use case, which the name does not suggest, is to create and maintain (periodically refresh) an independent image that can be subjected to intensive I/O (for example, data mining) without impacting source volume performance.
Tips:
- Intracluster Metro Mirror consumes more resources within the cluster than an intercluster Metro Mirror relationship, where resource allocation is shared between the clusters. Use intercluster Metro Mirror when possible.
- A typical application of this function is to set up a dual-site solution using two SVC clusters. The first site is considered the primary or production site, and the second site is considered the backup or failover site, which is activated when a failure at the first site is detected.
volume and allows I/O to the master volume to continue, to avoid impacting the operation of the master volumes. Figure 8-11 illustrates how a write to the master volume is mirrored to the cache of the auxiliary volume before an acknowledgement of the write is sent back to the host that issued the write. This process ensures that the auxiliary is synchronized in real time, in case it is needed in a failover situation. However, this process also means that the application is exposed to the latency and bandwidth limitations (if any) of the communication link between the master and auxiliary volumes. This process might lead to unacceptable application performance, particularly when placed under peak load. Therefore, using Metro Mirror has distance limitations.
SVC implements a configuration model that maintains the Metro Mirror configuration and state through major events, such as failover, recovery, and resynchronization, to minimize user configuration action through these events. SVC allows resynchronization of changed data so that write failures occurring on either the master or auxiliary volumes do not require a complete resynchronization of the relationship.
Software level restrictions for Multiple Cluster Mirroring:
- A partnership between a cluster running 6.1.0 and a cluster running a version earlier than 4.3.1 is not supported.
- Clusters in a partnership where one cluster is running 6.1.0 and the other is running 4.3.1 cannot participate in additional partnerships with other clusters.
- Clusters that are all running either 6.1.0 or 5.1.0 can participate in up to three cluster partnerships.
Note: SVC 6.1 supports object names up to 63 characters. Previous levels only supported up to 15 characters. When SVC 6.1 clusters are partnered with 4.3.1 and 5.1.0 clusters, various object names will be truncated at 15 characters when displayed from 4.3.1 and 5.1.0 clusters.
Example: A-B, A-C, and A-D
Figure 8-13 shows four clusters in a star topology, with cluster A at the center. Cluster A can be a central DR site for the three other locations. Using a star topology, you can migrate applications by using a process like the one described in the following example:
1. Suspend the application at A.
2. Remove the A-B relationship.
3. Create the A-C relationship (or alternatively, the B-C relationship).
4. Synchronize to cluster C, and ensure that the A-C relationship is established.
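A migration of this kind might be driven from the CLI along the following lines. This is only a sketch: the cluster, volume, and relationship names (CLUSTER_C, DB_VOL_A, DB_VOL_C, REL_AB, REL_AC) are hypothetical placeholders, and your environment will differ.

```shell
# Sketch: migrate replication of a volume at cluster A from partner B
# to partner C (all object names are hypothetical).
# 1. With the application at A suspended, remove the A-B relationship:
svctask rmrcrelationship REL_AB
# 2. Create the A-C relationship (a partnership with cluster C must
#    already exist, created with mkpartnership on both clusters):
svctask mkrcrelationship -master DB_VOL_A -aux DB_VOL_C \
    -cluster CLUSTER_C -name REL_AC
# 3. Start the background copy and wait for the relationship to reach
#    the ConsistentSynchronized state:
svctask startrcrelationship REL_AC
```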
Example: A-B, A-C, and B-C
Example: A-B, A-C, A-D, B-C, B-D, and C-D
Figure 8-15 shows a fully connected mesh topology, in which every cluster has a partnership with each of the three other clusters. This topology allows volumes to be replicated between any pair of clusters.
Example: A-B, B-C, and C-D
Figure 8-16 shows a daisy-chain topology.
Note that although clusters can have up to three partnerships, a volume can be part of only one Remote Copy relationship, for example, A-B.

Upgrade restriction: Upgrading a cluster to 6.1.0 requires that the partner cluster be running 4.3.1 or later. If the partner cluster is running 4.3.0, it must first be upgraded to 4.3.1.
Certain uses of Metro Mirror require manipulation of more than one relationship. Metro Mirror Consistency Groups provide the ability to group relationships so that they are manipulated in unison. Consider the following points:
- Metro Mirror relationships can be part of a Consistency Group, or they can be stand-alone and therefore handled as single instances.
- A Consistency Group can contain zero or more relationships. An empty Consistency Group, with zero relationships in it, has little purpose until it is assigned its first relationship, except that it has a name.
- All relationships in a Consistency Group must have corresponding master and auxiliary volumes.

Although it is possible to use Consistency Groups to manipulate sets of relationships that do not need to satisfy these strict rules, this manipulation can lead to undesired side effects. The rules behind a Consistency Group mean that certain configuration commands are prohibited. These configuration commands are not prohibited if the relationship is not part of a Consistency Group.

For example, consider the case of two applications that are completely independent, yet they are placed into a single Consistency Group. In the event of an error, there is a loss of synchronization, and a background copy process is required to recover synchronization. While this process is in progress, Metro Mirror rejects attempts to enable access to the auxiliary volumes of either application. If one application finishes its background copy much more quickly than the other, Metro Mirror still refuses to grant access to its auxiliary volumes, even though it is safe in this case, because the Metro Mirror policy is to refuse access to the entire Consistency Group if any part of it is inconsistent.
Chapter 8. Advanced Copy Services
Stand-alone relationships and Consistency Groups share a common configuration and state model. All of the relationships in a non-empty Consistency Group have the same state as the Consistency Group.
Zoning
SVC node ports on each SVC cluster must be able to communicate with each other for the partnership creation to be performed. Switch zoning is critical to facilitating intercluster communication. See Chapter 3, Planning and configuration on page 57 for critical information regarding proper zoning for intercluster communication.
Intercluster links
All SVC nodes maintain a database of other devices that are visible on the fabric. This database is updated as devices appear and disappear. Devices that advertise themselves as SVC nodes are categorized according to the SVC cluster to which they belong. SVC nodes that belong to the same cluster establish communication channels between themselves and begin to exchange messages to implement clustering and the functional protocols of SVC. Nodes that are in separate clusters do not exchange messages after initial discovery is complete, unless they have been configured together to perform a remote copy relationship. The intercluster link carries control traffic to coordinate activity between two clusters. It is formed between one node in each cluster. The traffic between the designated nodes is distributed among logins that exist between those nodes. If the designated node fails (or all of its logins to the remote cluster fail), then a new node is chosen to carry control traffic. This node change causes the I/O to pause, but it does not put the relationships in a ConsistentStopped state.
When creating the Metro Mirror relationship, you can specify whether the auxiliary volume is already in sync with the master volume, in which case the background copy process is skipped. This capability is especially useful when creating Metro Mirror relationships for volumes that have been created with the format option. The step identifiers in Figure 8-18 are described here.

Step 1:
a. The Metro Mirror relationship is created with the -sync option, and the Metro Mirror relationship enters the ConsistentStopped state.
b. The Metro Mirror relationship is created without specifying that the master and auxiliary volumes are in sync, and the Metro Mirror relationship enters the InconsistentStopped state.
Step 2:
a. When starting a Metro Mirror relationship in the ConsistentStopped state, the Metro Mirror relationship enters the ConsistentSynchronized state, provided that no updates (write I/O) have been performed on the master volume while in the ConsistentStopped state. Otherwise, the -force option must be specified, and the Metro Mirror relationship then enters the InconsistentCopying state while the background copy is started.
b. When starting a Metro Mirror relationship in the InconsistentStopped state, the Metro Mirror relationship enters the InconsistentCopying state while the background copy is started.

Step 3:
When the background copy completes, the Metro Mirror relationship transitions from the InconsistentCopying state to the ConsistentSynchronized state.

Step 4:
a. When stopping a Metro Mirror relationship in the ConsistentSynchronized state while specifying the -access option, which enables write I/O on the auxiliary volume, the Metro Mirror relationship enters the Idling state.
b. To enable write I/O on the auxiliary volume when the Metro Mirror relationship is in the ConsistentStopped state, issue the svctask stoprcrelationship command with the -access option, and the Metro Mirror relationship enters the Idling state.

Step 5:
a. When starting a Metro Mirror relationship that is in the Idling state, you must specify the -primary argument to set the copy direction. If no write I/O has been performed (to either the master or auxiliary volume) while in the Idling state, the Metro Mirror relationship enters the ConsistentSynchronized state.
b. If write I/O has been performed to either the master or auxiliary volume, the -force option must be specified, and the Metro Mirror relationship then enters the InconsistentCopying state while the background copy is started.
Stop or Error: When a Metro Mirror relationship is stopped (either intentionally or due to an error), a state transition is applied. For example, Metro Mirror relationships in the ConsistentSynchronized state enter the ConsistentStopped state, and Metro Mirror relationships in the InconsistentCopying state enter the InconsistentStopped state. If the connection is broken between the SVC clusters in a partnership, then all (intercluster) Metro Mirror relationships enter a Disconnected state. For further information, refer to Connected versus disconnected on page 399.

Common states: Stand-alone relationships and Consistency Groups share a common configuration and state model. All Metro Mirror relationships in a Consistency Group that is not empty have the same state as the Consistency Group.
State overview
In the following sections, we provide an overview of the Metro Mirror states.
When the two clusters can communicate, the clusters and the relationships spanning them are described as connected. When they cannot communicate, the clusters and the relationships spanning them are described as disconnected. In this state, both clusters are left with fragmented relationships and will be limited regarding the configuration commands that can be performed. The disconnected relationships are portrayed as having a changed state. The new states describe what is known about the relationship and what configuration commands are permitted. When the clusters can communicate again, the relationships become connected again. Metro Mirror automatically reconciles the two state fragments, taking into account any configuration or other event that took place while the relationship was disconnected. As a result, the relationship can either return to the state that it was in when it became disconnected or enter a new state. Relationships that are configured between volumes in the same SVC cluster (intracluster) will never be described as being in a disconnected state.
The application might work without a problem. However, because of the risk of data corruption, and in particular undetected data corruption, Metro Mirror strongly enforces the concept of consistency and prohibits access to inconsistent data.

Consistency as a concept can be applied to a single relationship or to a set of relationships in a Consistency Group. Write ordering is a concept that an application can maintain across a number of disks accessed through multiple systems; therefore, consistency must operate across all of those disks. When deciding how to use Consistency Groups, the administrator must consider the scope of an application's data, taking into account all of the interdependent systems that communicate and exchange information. If two programs or systems communicate and store details as a result of the information exchanged, one of the following actions must be taken:
- All of the data accessed by the group of systems must be placed into a single Consistency Group.
- The systems must be recovered independently (each within its own Consistency Group). Then, each system must perform recovery with the other applications to become consistent with them.
Detailed states
The following sections detail the states that are portrayed to the user, for either Consistency Groups or relationships, and the additional information that is available in each state. The major states are designed to provide guidance about the configuration commands that are available.
InconsistentStopped
InconsistentStopped is a connected state. In this state, the master is accessible for read and write I/O, but the auxiliary is not accessible for either read or write I/O. A copy process needs to be started to make the auxiliary consistent. This state is entered when the relationship or Consistency Group was InconsistentCopying and has either suffered a persistent error or received a stop command that has caused the copy process to stop. A start command causes the relationship or Consistency Group to move to the InconsistentCopying state. A stop command is accepted, but has no effect. If the relationship or Consistency Group becomes disconnected, the auxiliary side transitions to InconsistentDisconnected. The master side transitions to IdlingDisconnected.
InconsistentCopying
InconsistentCopying is a connected state. In this state, the master is accessible for read and write I/O, but the auxiliary is not accessible for either read or write I/O. This state is entered after a start command is issued to an InconsistentStopped relationship or a Consistency Group. It is also entered when a forced start is issued to an Idling or ConsistentStopped relationship or Consistency Group. In this state, a background copy process runs that copies data from the master to the auxiliary volume. In the absence of errors, an InconsistentCopying relationship is active, and the copy progress increases until the copy process completes. In certain error situations, the copy progress might freeze or even regress. A persistent error or stop command places the relationship or Consistency Group into an InconsistentStopped state. A start command is accepted but has no effect. If the background copy process completes on a stand-alone relationship, or on all relationships for a Consistency Group, the relationship or Consistency Group transitions to the ConsistentSynchronized state. If the relationship or Consistency Group becomes disconnected, the auxiliary side transitions to InconsistentDisconnected. The master side transitions to IdlingDisconnected.
ConsistentStopped
ConsistentStopped is a connected state. In this state, the auxiliary contains a consistent image, but it might be out-of-date with respect to the master. This state can arise when a relationship was in a ConsistentSynchronized state and suffers an error that forces a Consistency Freeze. It can also arise when a relationship is created with a CreateConsistentFlag set to TRUE. Normally, following an I/O error, subsequent write activity causes updates to the master, and the auxiliary is no longer synchronized (the synchronized attribute is set to false). In this case, to reestablish synchronization, consistency must be given up for a period. You must use a start command with the -force option to acknowledge this condition, and the relationship or Consistency Group transitions to InconsistentCopying. Enter this command only after all outstanding events have been repaired.

In the unusual case where the master and the auxiliary are still synchronized (perhaps following a user stop, and no further write I/O was received), a start command takes the relationship to ConsistentSynchronized. No -force option is required. Also, in this case, you can enter a switch command that moves the relationship or Consistency Group to ConsistentSynchronized and reverses the roles of the master and the auxiliary. If the relationship or Consistency Group becomes disconnected, the auxiliary transitions to ConsistentDisconnected. The master transitions to IdlingDisconnected.

An informational status log is generated whenever a relationship or Consistency Group enters the ConsistentStopped state with a status of Online. You can configure this event to generate an SNMP trap that can be used to trigger automation or manual intervention to issue a start command following a loss of synchronization.
ConsistentSynchronized
ConsistentSynchronized is a connected state. In this state, the master volume is accessible for read and write I/O, and the auxiliary volume is accessible for read-only I/O. Writes that are sent to the master volume are sent to both the master and auxiliary volumes. Before a write is completed to the host, one of the following must happen: successful completion is received for both writes, the write is failed to the host, or the relationship transitions out of the ConsistentSynchronized state. A stop command takes the relationship to the ConsistentStopped state. A stop command with the -access parameter takes the relationship to the Idling state. A switch command leaves the relationship in the ConsistentSynchronized state, but it reverses the master and auxiliary roles. A start command is accepted, but it has no effect. If the relationship or Consistency Group becomes disconnected, the same transitions are made as for ConsistentStopped.
Idling
Idling is a connected state. Both master and auxiliary volumes operate in the master role. Consequently, both master and auxiliary volumes are accessible for write I/O. In this state, the relationship or Consistency Group accepts a start command. Metro Mirror maintains a record of regions on each disk that received write I/O while idling. This record is used to determine what areas need to be copied following a start command. The start command must specify the new copy direction. A start command can cause a loss of consistency if either volume in any relationship has received write I/O, which is indicated by the Synchronized status. If the start command leads to loss of consistency, you must specify the -force parameter. Following a start command, the relationship or Consistency Group transitions to ConsistentSynchronized if there is no loss of consistency, or to InconsistentCopying if there is a loss of consistency.
Also, while in this state, the relationship or Consistency Group accepts a -clean option on the start command. If the relationship or Consistency Group becomes disconnected, both sides change their state to IdlingDisconnected.
IdlingDisconnected
IdlingDisconnected is a disconnected state. The volumes in this half of the relationship or Consistency Group are all in the master role and accept read or write I/O. The priority in this state is to recover the link to restore the relationship or consistency. No configuration activity is possible (except for deletes or stops) until the relationship becomes connected again. At that point, the relationship transitions to a connected state. The exact connected state that is entered depends on the state of the other half of the relationship or Consistency Group, which depends on these factors:
- The state when it became disconnected
- The write activity since it was disconnected
- The configuration activity since it was disconnected

If both halves are IdlingDisconnected, the relationship becomes Idling when reconnected. While IdlingDisconnected, if a write I/O is received that causes loss of synchronization (the synchronized attribute transitions from true to false) and the relationship was not already stopped (either through a user stop or a persistent error), an event is raised to notify you of the condition. This same event is also raised when this condition occurs for the ConsistentSynchronized state.
InconsistentDisconnected
InconsistentDisconnected is a disconnected state. The volumes in this half of the relationship or Consistency Group are all in the auxiliary role and do not accept read or write I/O. No configuration activity, except for deletes, is permitted until the relationship becomes connected again. When the relationship or Consistency Group becomes connected again, the relationship becomes InconsistentCopying automatically unless either of the following conditions is true:
- The relationship was InconsistentStopped when it became disconnected.
- The user issued a stop command while disconnected.

In either case, the relationship or Consistency Group becomes InconsistentStopped.
ConsistentDisconnected
ConsistentDisconnected is a disconnected state. The volumes in this half of the relationship or Consistency Group are all in the auxiliary role and accept read I/O but not write I/O. This state is entered from ConsistentSynchronized or ConsistentStopped when the auxiliary side of a relationship becomes disconnected. In this state, the relationship or Consistency Group displays an attribute of FreezeTime, which is the point in time that Consistency was frozen. When entered from ConsistentStopped, it retains the time that it had in that state. When entered from ConsistentSynchronized, the
FreezeTime shows the last time at which the relationship or Consistency Group was known to be consistent. This time corresponds to the time of the last successful heartbeat to the other cluster. A stop command with the -access flag set to true transitions the relationship or Consistency Group to the IdlingDisconnected state. This state allows write I/O to be performed to the auxiliary volume and is used as part of a DR scenario. When the relationship or Consistency Group becomes connected again, it becomes ConsistentSynchronized only if this action does not lead to a loss of consistency. These conditions must be true:
- The relationship was ConsistentSynchronized when it became disconnected.
- No writes received successful completion at the master while disconnected.

Otherwise, the relationship becomes ConsistentStopped. The FreezeTime setting is retained.
Empty
This state only applies to Consistency Groups. It is the state of a Consistency Group that has no relationships and no other state information to show. It is entered when a Consistency Group is first created. It is exited when the first relationship is added to the Consistency Group, at which point, the state of the relationship becomes the state of the Consistency Group.
Background copy
Metro Mirror paces the rate at which background copy is performed by the appropriate relationships. Background copy takes place on relationships that are in the InconsistentCopying state with a status of Online. The quota of background copy (configured on the intercluster link) is divided evenly between all of the nodes that are performing background copy for one of the eligible relationships. This allocation is made irrespective of the number of disks for which the node is responsible. Each node in turn divides its allocation evenly between the multiple relationships performing a background copy. For intracluster relationships, each node is assigned a static quota of 25 MBps.
on the auxiliary volumes cannot be read by a host, because most operating systems write a dirty bit to the file system when it is mounted. Because this write operation is not allowed on the auxiliary volume, the volume cannot be mounted. This access is only provided where consistency can be guaranteed. However, there is no way in which coherency can be maintained between reads that are performed at the auxiliary and later write I/Os that are performed at the master. To enable access to the auxiliary volume for host operations, you must stop the Metro Mirror relationship by specifying the -access parameter. While access to the auxiliary volume for host operations is enabled, the host must be instructed to mount the volume and related tasks before the application can be started, or instructed to perform a recovery process. For example, the Metro Mirror requirement to enable the auxiliary copy for access differentiates it from third-party mirroring software on the host, which aims to emulate a single, reliable disk regardless of what system is accessing it. Metro Mirror retains the property that there are two volumes in existence, but it suppresses one volume while the copy is being maintained. Using an auxiliary copy demands a conscious policy decision by the administrator that a failover is required and that the tasks to be performed on the host involved in establishing operation on the auxiliary copy are substantial. The goal is to make this rapid (much faster when compared to recovering from a backup copy) but not seamless. The failover process can be automated through failover management software. The SVC provides Simple Network Management Protocol (SNMP) traps and programming (or scripting) for the command-line interface (CLI) to enable this automation.
Parameter                                                     Value
Number of Metro Mirror relationships per Consistency Group    8192
Total volume size per I/O Group                               1024 TB
There is a per I/O Group limit of 1024 TB on the quantity of master and auxiliary volume address space that can participate in Metro Mirror and Global Mirror relationships. This maximum configuration will consume all 512 MB of bitmap space for the I/O Group and allow no FlashCopy bitmap space.
svcinfo lsclustercandidate
The svcinfo lsclustercandidate command is used to list the clusters that are available for setting up a two-cluster partnership. This command is a prerequisite for creating Metro Mirror relationships.
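As a minimal sketch, the command takes no mandatory parameters and is run on the local cluster:

```shell
# List clusters visible on the fabric that are candidates for a
# partnership with the local cluster.
svcinfo lsclustercandidate
```

The output indicates, for each candidate, whether a partnership is already configured.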
svctask mkpartnership
The svctask mkpartnership command is used to establish a one-way Metro Mirror partnership between the local cluster and a remote cluster. To establish a fully functional Metro Mirror partnership, you must issue this command to both clusters. This step is a prerequisite to creating Metro Mirror relationships between volumes on the SVC clusters. When creating the partnership, you can specify the bandwidth to be used by the background copy process between the local and the remote SVC cluster, and if it is not specified, the bandwidth defaults to 50 MBps. The bandwidth must be set to a value that is less than or equal to the bandwidth that can be sustained by the intercluster link.
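A minimal sketch follows; ITSO_CLUSTER_B is a hypothetical remote cluster name, and the bandwidth value is only an example:

```shell
# On the local cluster: create a one-way partnership to the remote
# cluster, capping background copy traffic at 100 MBps.
svctask mkpartnership -bandwidth 100 ITSO_CLUSTER_B
# The equivalent command must also be run on ITSO_CLUSTER_B (naming
# the local cluster) to make the partnership fully functional.
```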
svctask chpartnership
If you need to change the bandwidth that is available for background copy in an SVC cluster partnership, use the svctask chpartnership command to specify the new bandwidth.
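For example (a sketch; the partnership name and bandwidth value are hypothetical):

```shell
# Lower the background copy bandwidth of an existing partnership to
# 30 MBps, for example during production peak hours.
svctask chpartnership -bandwidth 30 ITSO_CLUSTER_B
```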
svctask mkrcconsistgrp
The svctask mkrcconsistgrp command is used to create a new empty Metro Mirror Consistency Group.
The Metro Mirror Consistency Group name must be unique across all of the Consistency Groups that are known to the clusters owning this Consistency Group. If the Consistency Group involves two clusters, the clusters must be in communication throughout the creation process. The new Consistency Group does not contain any relationships and will be in the Empty state. Metro Mirror relationships can be added to the group either upon creation or afterward by using the svctask chrelationship command.
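A minimal sketch, with hypothetical object names:

```shell
# Create an empty Metro Mirror Consistency Group that spans the
# local cluster and the remote cluster ITSO_CLUSTER_B.
svctask mkrcconsistgrp -cluster ITSO_CLUSTER_B -name CG_W2K_MM
```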
svctask mkrcrelationship
The svctask mkrcrelationship command is used to create a new Metro Mirror relationship. This relationship persists until it is deleted. The auxiliary volume must be equal in size to the master volume or the command will fail, and if both volumes are in the same cluster, they must both be in the same I/O Group. The master and auxiliary volume cannot be in an existing relationship and cannot be the target of a FlashCopy mapping. This command returns the new relationship (relationship_id) when successful. When creating the Metro Mirror relationship, it can be added to an already existing Consistency Group, or it can be a stand-alone Metro Mirror relationship if no Consistency Group is specified. To check whether the master or auxiliary volumes comply with the prerequisites to participate in a Metro Mirror relationship, use the svcinfo lsrcrelationshipcandidate command.
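A hedged sketch, assuming the partnership and Consistency Group already exist (all object names are hypothetical):

```shell
# Create a Metro Mirror relationship between master volume MM_DB_Pri
# (local cluster) and auxiliary volume MM_DB_Sec (remote cluster),
# adding it to an existing Consistency Group.
svctask mkrcrelationship -master MM_DB_Pri -aux MM_DB_Sec \
    -cluster ITSO_CLUSTER_B -consistgrp CG_W2K_MM -name MMREL1
# Add -sync if the volumes are already known to be identical, which
# skips the background copy.
```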
svcinfo lsrcrelationshipcandidate
The svcinfo lsrcrelationshipcandidate command is used to list available volumes that are eligible for a Metro Mirror relationship. When issuing the command, you can specify the source volume name and secondary cluster to list candidates that comply with prerequisites to create a Metro Mirror relationship. If the command is issued with no flags, all volumes that are not disallowed by another configuration state, such as being a FlashCopy target, are listed.
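For example (a sketch with hypothetical names):

```shell
# List auxiliary volume candidates on the remote cluster that are
# eligible to pair with the local master volume MM_DB_Pri.
svcinfo lsrcrelationshipcandidate -master MM_DB_Pri -aux ITSO_CLUSTER_B
```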
svctask chrcrelationship
The svctask chrcrelationship command is used to modify the following properties of a Metro Mirror relationship:
- Change the name of a Metro Mirror relationship.
- Add a relationship to a group.
- Remove a relationship from a group using the -force flag.

Adding a Metro Mirror relationship: When adding a Metro Mirror relationship to a Consistency Group that is not empty, the relationship must have the same state and copy direction as the group in order to be added to it.
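For example (a sketch with hypothetical names; the -noconsistgrp form for removal is an assumption based on the CLI conventions described here):

```shell
# Add the stand-alone relationship MMREL1 to a Consistency Group:
svctask chrcrelationship -consistgrp CG_W2K_MM MMREL1
# Remove it from the group again (the -force flag is required):
svctask chrcrelationship -force -noconsistgrp MMREL1
```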
Chapter 8. Advanced Copy Services
svctask chrcconsistgrp
The svctask chrcconsistgrp command is used to change the name of a Metro Mirror Consistency Group.
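For example (names are hypothetical):

```
# Rename an existing Metro Mirror Consistency Group:
svctask chrcconsistgrp -name CG_W2K_MM_new CG_W2K_MM
```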
svctask startrcrelationship
The svctask startrcrelationship command is used to start the copy process of a Metro Mirror relationship. When issuing the command, you can set the copy direction if it is undefined and, optionally, mark the auxiliary volume of the relationship as clean. The command fails if it is used to attempt to start a relationship that is part of a Consistency Group. This command can only be issued to a relationship that is connected. For a relationship that is idling, this command assigns a copy direction (master and auxiliary roles) and begins the copy process. Otherwise, this command restarts a previous copy process that was stopped either by a stop command or by an I/O error.

If the resumption of the copy process leads to a period when the relationship is inconsistent, you must specify the -force flag when restarting the relationship. This situation can arise if, for example, the relationship was stopped and then further writes were performed on the original master of the relationship. The use of the -force flag here is a reminder that the data on the auxiliary becomes inconsistent while resynchronization (background copying) occurs, and therefore the data is not usable for DR purposes before the background copy has completed.

In the Idling state, you must specify the master volume to indicate the copy direction. In other connected states, you can provide the -primary argument, but it must match the existing setting.
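The following sketch illustrates these cases (the relationship name MM_Rel1 is hypothetical):

```
# Restart a stopped stand-alone relationship:
svctask startrcrelationship MM_Rel1

# From the Idling state, the copy direction must be set; here the
# master role is assigned to the original master volume:
svctask startrcrelationship -primary master MM_Rel1

# If restarting leads to a period of inconsistency, -force is required:
svctask startrcrelationship -primary master -force MM_Rel1
```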
svctask stoprcrelationship
The svctask stoprcrelationship command is used to stop the copy process for a relationship. It can also be used to enable write access to a consistent auxiliary volume by specifying the -access flag. This command applies to a stand-alone relationship. It is rejected if it is addressed to a relationship that is part of a Consistency Group. You can issue this command to stop a relationship that is copying from master to auxiliary.
If the relationship is in an inconsistent state, any copy operation stops and does not resume until you issue an svctask startrcrelationship command. Write activity is no longer copied from the master to the auxiliary volume. For a relationship in the ConsistentSynchronized state, this command causes a consistency freeze. When a relationship is in a consistent state (that is, in the ConsistentStopped, ConsistentSynchronized, or ConsistentDisconnected state), you can use the -access parameter with the stoprcrelationship command to enable write access to the auxiliary volume.
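These two usages can be sketched as follows (the relationship name is hypothetical):

```
# Stop the copy process for a stand-alone relationship:
svctask stoprcrelationship MM_Rel1

# Stop and enable write access to a consistent auxiliary volume:
svctask stoprcrelationship -access MM_Rel1
```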
svctask stoprcconsistgrp
The svctask stoprcconsistgrp command is used to stop the copy process for a Metro Mirror Consistency Group. It can also be used to enable write access to the auxiliary volumes in the group if the group is in a consistent state. If the Consistency Group is in an inconsistent state, any copy operation stops and does not resume until you issue the svctask startrcconsistgrp command. Write activity is no longer copied from the master to the auxiliary volumes belonging to the relationships in the group. For a Consistency Group in the ConsistentSynchronized state, this command causes a consistency freeze. When a Consistency Group is in a consistent state (for example, in the ConsistentStopped, ConsistentSynchronized, or ConsistentDisconnected state), the -access argument can be used with the svctask stoprcconsistgrp command to enable write access to the auxiliary volumes within that group.
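The group-level equivalents can be sketched as follows (the group name is hypothetical):

```
# Stop the copy process for all relationships in the group:
svctask stoprcconsistgrp CG_W2K_MM

# Stop and enable write access to the auxiliary volumes in the group:
svctask stoprcconsistgrp -access CG_W2K_MM
```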
svctask rmrcrelationship
The svctask rmrcrelationship command is used to delete the relationship that is specified. Deleting a relationship only deletes the logical relationship between the two volumes. It does not affect the volumes themselves. If the relationship is disconnected at the time that the command is issued, the relationship is only deleted on the cluster on which the command is being run. When the clusters reconnect, then the relationship is automatically deleted on the other cluster.
Alternatively, if the clusters are disconnected, and you still want to remove the relationship on both clusters, you can issue the rmrcrelationship command independently on both of the clusters. If you delete an inconsistent relationship, the auxiliary volume becomes accessible even though it is still inconsistent. This situation is the one case in which Metro Mirror does not inhibit access to inconsistent data.
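A minimal sketch (the relationship name is hypothetical):

```
# Delete the relationship; the volumes themselves are unaffected:
svctask rmrcrelationship MM_Rel1

# If the clusters are disconnected, run the same command on the partner
# cluster as well to remove the relationship on both sides.
```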
svctask rmrcconsistgrp
The svctask rmrcconsistgrp command is used to delete a Metro Mirror Consistency Group. This command deletes the specified Consistency Group. You can issue this command for any existing Consistency Group. If the Consistency Group is disconnected at the time that the command is issued, the Consistency Group is only deleted on the cluster on which the command is being run. When the clusters reconnect, the Consistency Group is automatically deleted on the other cluster. Alternatively, if the clusters are disconnected, and you still want to remove the Consistency Group on both clusters, you can issue the svctask rmrcconsistgrp command separately on both of the clusters. If the Consistency Group is not empty, the relationships within it are removed from the Consistency Group before the group is deleted. These relationships then become stand-alone relationships. The state of these relationships is not changed by the action of removing them from the Consistency Group.
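For example (the group name is hypothetical):

```
# Delete the Consistency Group; any relationships it contains are first
# removed from the group and become stand-alone relationships:
svctask rmrcconsistgrp CG_W2K_MM
```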
svctask switchrcrelationship
The svctask switchrcrelationship command is used to reverse the roles of the master and auxiliary volumes when a stand-alone relationship is in a consistent state. When issuing the command, the desired master is specified.
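For example (the relationship name is hypothetical):

```
# Make the current auxiliary volume the new master for a consistent
# stand-alone relationship:
svctask switchrcrelationship -primary aux MM_Rel1
```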
svctask switchrcconsistgrp
The svctask switchrcconsistgrp command is used to reverse the roles of the master and auxiliary volumes when a Consistency Group is in a consistent state. This change is applied to all of the relationships in the Consistency Group, and when issuing the command, the desired master is specified.
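For example (the group name is hypothetical):

```
# Reverse the master and auxiliary roles for every relationship in a
# consistent Consistency Group:
svctask switchrcconsistgrp -primary aux CG_W2K_MM
```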
secondary site at a later stage, which provides the capability to perform remote copy over distances exceeding the limitations of synchronous remote copy. The Global Mirror function provides the same function as Metro Mirror Remote Copy, but over long distance links with higher latency, without requiring the hosts to wait for the full round-trip delay of the long distance link. Figure 8-19 shows that a write operation to the master volume is acknowledged back to the host issuing the write before the write operation is mirrored to the cache for the auxiliary volume.
The Global Mirror algorithms maintain a consistent image on the auxiliary at all times. They achieve this consistent image by identifying sets of I/Os that are active concurrently at the master, assigning an order to those sets, and applying those sets of I/Os in the assigned order at the secondary. As a result, Global Mirror maintains the features of Write Ordering and Read Stability that are described in this chapter. The multiple I/Os within a single set are applied concurrently.

The process that marshals the sequential sets of I/Os operates at the secondary cluster, and is therefore not subject to the latency of the long distance link. These two elements of the protocol ensure that the throughput of the total cluster can be grown by increasing cluster size, while maintaining consistency across a growing data set.

In a failover scenario, where the secondary site needs to become the master source of data, certain updates might be missing at the secondary site. Therefore, any applications that will use this data must have an external mechanism for recovering the missing updates and reapplying them, such as a transaction log replay.
SVC implements the Global Mirror relationship between a volume pair, with each volume in the pair being managed by an SVC cluster. The following characteristics apply:
- SVC supports intracluster Global Mirror, where both volumes belong to the same cluster (and I/O Group), although, as stated earlier, this functionality is better suited to Metro Mirror.
- SVC supports intercluster Global Mirror, where each volume belongs to its own separate SVC cluster. A given SVC cluster can be configured for partnership with between one and three other clusters.
- Intercluster and intracluster Global Mirror can be used concurrently within a cluster for separate relationships.
- SVC does not require a control network or fabric to be installed to manage Global Mirror. For intercluster Global Mirror, the SVC maintains a control link between the two clusters. This control link is used to control the state and to coordinate the updates at either end. The control link is implemented on top of the same FC fabric connection that the SVC uses for Global Mirror I/O.
- SVC implements a configuration model that maintains the Global Mirror configuration and state through major events, such as failover, recovery, and resynchronization, to minimize user configuration action through these events.
- SVC implements flexible resynchronization support, enabling it to resynchronize volume pairs that have experienced write I/Os to both disks and to resynchronize only those regions that are known to have changed.
- Colliding writes are supported.
- An optional feature for Global Mirror permits a delay simulation to be applied on writes that are sent to auxiliary volumes.
Colliding writes
Prior to V4.3.1, the Global Mirror algorithm required that only a single write be active on any given 512-byte LBA of a volume. If a further write is received from a host while the auxiliary write is still active, even though the master write might have completed, the new host write is delayed until the auxiliary write completes. This restriction is needed in case a series of writes to the auxiliary has to be retried (called reconstruction). Conceptually, the data for reconstruction comes from the master volume. If multiple writes were allowed to be applied to the master for a given sector, only the most recent write would have the correct data during reconstruction, and if reconstruction were interrupted for any reason, the intermediate state of the auxiliary would be inconsistent.

Applications that deliver such write activity will not achieve the performance that Global Mirror is intended to support. A volume statistic is maintained about the frequency of these collisions.

From V4.3.1 onward, an attempt is made to allow multiple writes to a single location to be outstanding in the Global Mirror algorithm. There is still a need for master writes to be serialized, and the intermediate states of the master data must be kept in a non-volatile journal while the writes are outstanding to maintain the correct write ordering during reconstruction. Reconstruction must never overwrite data on the auxiliary with an earlier version. The volume statistic monitoring colliding writes is now limited to those writes that are not affected by this change. Figure 8-20 on page 416 shows a colliding write sequence example.
415
These numbers correspond to the numbers in Figure 8-20:
1. The first write is performed from the host to LBA X.
2. The host is provided acknowledgment that the write is complete, even though the mirrored write to the auxiliary volume has not yet completed. Steps (1) and (2) occur asynchronously with the first write.
3. A second write is performed from the host, also to LBA X. If this write occurs before (2) completes, the write is written to the journal file.
4. The host is provided acknowledgment that the second write is complete.
Delay simulation
An optional feature for Global Mirror permits a delay simulation to be applied on writes that are sent to auxiliary volumes. This feature allows testing to be performed that detects colliding writes, and therefore, this feature can be used to test an application before the full deployment of the feature. The feature can be enabled separately for intracluster or intercluster Global Mirror. You specify the delay setting by using the chcluster command and view it by using the lscluster command. The gm_intra_delay_simulation field expresses the amount of time that intracluster auxiliary I/Os are delayed. The gm_inter_delay_simulation field expresses the amount of time that intercluster auxiliary I/Os are delayed. A value of zero (0) disables the feature.
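A minimal sketch follows; the delay value of 20 ms is an arbitrary example:

```
# Delay intercluster auxiliary writes by 20 ms:
svctask chcluster -gminterdelaysimulation 20

# A value of 0 disables the simulation again:
svctask chcluster -gminterdelaysimulation 0

# View the current gm_inter_delay_simulation and gm_intra_delay_simulation fields:
svcinfo lscluster <clustername>
```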
Notes:
- A volume can only be part of one Global Mirror relationship at a time.
- A volume that is a FlashCopy target cannot be part of a Global Mirror relationship.
Certain uses of Global Mirror require the manipulation of more than one relationship. Global Mirror Consistency Groups provide the ability to group relationships so that they are manipulated in unison. Global Mirror relationships can be part of a Consistency Group, or they can be stand-alone and therefore handled as single instances.

A Consistency Group can contain zero or more relationships. An empty Consistency Group, with zero relationships in it, has little purpose until it is assigned its first relationship, except that it has a name.

All of the relationships in a Consistency Group must have matching master and auxiliary clusters. Although it is possible to use Consistency Groups to manipulate sets of relationships that do not need to satisfy these strict rules, such manipulation can lead to undesired side effects. The rules behind a Consistency Group mean that certain configuration commands are prohibited. These configuration commands are not prohibited if the relationship is not part of a Consistency Group.

For example, consider the case of two applications that are completely independent, yet they are placed into a single Consistency Group. If a loss of synchronization occurs, and a background copy process is required to recover synchronization, then while this process is in progress, Global Mirror rejects attempts to enable access to the auxiliary volumes of either application. If one application finishes its background copy before the other, Global Mirror still refuses to grant access to its auxiliary volume. Even though it is safe in this case, Global Mirror policy refuses access to the entire Consistency Group if any part of it is inconsistent.
Stand-alone relationships and Consistency Groups share a common configuration and state model. All of the relationships in a Consistency Group that is not empty have the same state as the Consistency Group.
With this technique, do not allow I/O on either the master or auxiliary volume before the relationship is established. Then, the administrator must ensure that the following commands are issued:
- A new relationship is created (mkrcrelationship is issued) with the -sync flag.
- A new relationship is started (startrcrelationship is issued) without the -clean flag.
Attention: Failure to perform these steps correctly can cause Global Mirror to report the relationship as consistent when it is not, thereby creating a data loss or data integrity exposure for hosts accessing data on the auxiliary volume.
When creating the Global Mirror relationship, you can specify whether the auxiliary volume is already in sync with the master volume, and the background copy process is then skipped. This capability is especially useful when creating Global Mirror relationships for volumes that have been created with the format option.
The following steps explain the Global Mirror state diagram (these numbers correspond to the numbers in Figure 8-23 on page 421):
Step 1:
a. The Global Mirror relationship is created with the -sync option, and the Global Mirror relationship enters the ConsistentStopped state.
b. The Global Mirror relationship is created without specifying that the master and auxiliary volumes are in sync, and the Global Mirror relationship enters the InconsistentStopped state.
Step 2:
a. When starting a Global Mirror relationship in the ConsistentStopped state, it enters the ConsistentSynchronized state. This state implies that no updates (write I/O) have been performed on the master volume while in the ConsistentStopped state. Otherwise, you must specify the -force option, and the Global Mirror relationship then enters the InconsistentCopying state while the background copy is started.
b. When starting a Global Mirror relationship in the InconsistentStopped state, it enters the InconsistentCopying state while the background copy is started.
Step 3:
a. When the background copy completes, the Global Mirror relationship transitions from the InconsistentCopying state to the ConsistentSynchronized state.
Step 4:
a. When stopping a Global Mirror relationship in the ConsistentSynchronized state, where specifying the -access option enables write I/O on the auxiliary volume, the Global Mirror relationship enters the Idling state.
b. To enable write I/O on the auxiliary volume when the Global Mirror relationship is in the ConsistentStopped state, issue the svctask stoprcrelationship command, specifying the -access option, and the Global Mirror relationship enters the Idling state.
Step 5:
a. When starting a Global Mirror relationship that is in the Idling state, you must specify the -primary argument to set the copy direction.
Because no write I/O has been performed (to either the master or auxiliary volume) while in the Idling state, the Global Mirror relationship enters the ConsistentSynchronized state.
b. If write I/O has been performed to either the master or the auxiliary volume, you must specify the -force option. The Global Mirror relationship then enters the InconsistentCopying state while the background copy is started.

If the Global Mirror relationship is intentionally stopped or experiences an error, a state transition is applied. For example, Global Mirror relationships in the ConsistentSynchronized state enter the ConsistentStopped state, and Global Mirror relationships in the InconsistentCopying state enter the InconsistentStopped state.

If the connection is broken between the SVC clusters in a partnership, all of the (intercluster) Global Mirror relationships enter a Disconnected state. For further information, refer to Connected versus disconnected on page 423.

Common configuration and state model: Stand-alone relationships and Consistency Groups share a common configuration and state model. All of the Global Mirror relationships in a Consistency Group that is not empty have the same state as the Consistency Group.
State overview
The SVC-defined concepts of state are key to understanding the configuration concepts. We explain them in more detail here.
From the point of view of an application, consistency means that an auxiliary volume contains the same data as the master volume at the recovery point (the time at which the imaginary power failure occurred). If an application is designed to cope with an unexpected power failure, this guarantee of consistency means that the application will be able to use the auxiliary volume and begin operation just as though it had been restarted after the hypothetical power failure. Again, the application depends on the key properties of consistency for correct operation at the auxiliary:
- Write ordering
- Read stability

If a relationship, or a set of relationships, is inconsistent and an attempt is made to start an application using the data in the secondaries, a number of outcomes are possible:
- The application might decide that the data is corrupt and crash or exit with an error code.
- The application might fail to detect that the data is corrupt and return erroneous data.
- The application might work without a problem.

Because of the risk of data corruption, and in particular undetected data corruption, Global Mirror strongly enforces the concept of consistency and prohibits access to inconsistent data. You can apply consistency as a concept to a single relationship or to a set of relationships in a Consistency Group. Write ordering is a concept that an application can maintain across a number of disks that are accessed through multiple systems, and therefore, consistency must operate across all of those disks.

When deciding how to use Consistency Groups, the administrator must consider the scope of an application's data, taking into account all of the interdependent systems that communicate and exchange information. If two programs or systems communicate and store details as a result of the information exchanged, either of the following actions might be required:
- All of the data that is accessed by the group of systems must be placed into a single Consistency Group.
- The systems must be recovered independently (each within its own Consistency Group). Then, each system must perform recovery with the other applications to become consistent with them.
You can use two policies to cope with this situation:
- Make a point-in-time copy of the consistent auxiliary volume before allowing the auxiliary to become inconsistent. In the event of a disaster, before consistency is achieved again, the point-in-time copy target provides a consistent, though out-of-date, image.
- Accept the loss of consistency, and the loss of a useful auxiliary, while making it synchronized.
Detailed states
The following sections detail the states that are portrayed to the user, for either Consistency Groups or relationships, and the extra information that is available in each state. We describe the major states to provide guidance regarding the available configuration commands.
InconsistentStopped
InconsistentStopped is a connected state. In this state, the master is accessible for read and write I/O, but the auxiliary is inaccessible for either read or write I/O. A copy process needs to be started to make the auxiliary consistent. This state is entered when the relationship or Consistency Group was InconsistentCopying and has either suffered a persistent error or received a stop command that has caused the copy process to stop. A start command causes the relationship or Consistency Group to move to the InconsistentCopying state. A stop command is accepted, but has no effect. If the relationship or Consistency Group becomes disconnected, the auxiliary side transitions to InconsistentDisconnected. The master side transitions to IdlingDisconnected.
InconsistentCopying
InconsistentCopying is a connected state. In this state, the master is accessible for read and write I/O, but the auxiliary is inaccessible for either read or write I/O. This state is entered after a start command is issued to an InconsistentStopped relationship or Consistency Group. It is also entered when a forced start is issued to an Idling or ConsistentStopped relationship or Consistency Group. In this state, a background copy process runs, which copies data from the master to the auxiliary volume. In the absence of errors, an InconsistentCopying relationship is active, and the copy progress increases until the copy process completes. In certain error situations, the copy progress might freeze or even regress. A persistent error or stop command places the relationship or Consistency Group into the InconsistentStopped state. A start command is accepted, but has no effect. If the background copy process completes on a stand-alone relationship, or on all relationships for a Consistency Group, the relationship or Consistency Group transitions to the ConsistentSynchronized state. If the relationship or Consistency Group becomes disconnected, the auxiliary side transitions to InconsistentDisconnected. The master side transitions to IdlingDisconnected.
ConsistentStopped
ConsistentStopped is a connected state. In this state, the auxiliary volume contains a consistent image, but it might be out-of-date with respect to the master volume. This state can arise when a relationship is in the ConsistentSynchronized state and experiences an error that forces a consistency freeze. It can also arise when a relationship is created with the CreateConsistentFlag set to true.

Normally, following an I/O error, subsequent write activity causes updates to the master volume, and the auxiliary volume is no longer synchronized (the synchronized attribute is set to false). In this case, to reestablish synchronization, consistency must be given up for a period. A start command with the -force option must be used to acknowledge this situation, and the relationship or Consistency Group transitions to InconsistentCopying. Issue this command only after all of the outstanding events are repaired.

In the unusual case where the master and auxiliary volumes are still synchronized (perhaps following a user stop, and no further write I/O was received), a start command takes the relationship to ConsistentSynchronized. No -force option is required. Also, in this unusual case, a switch command is permitted that moves the relationship or Consistency Group to ConsistentSynchronized and reverses the roles of the master and the auxiliary.

If the relationship or Consistency Group becomes disconnected, the auxiliary side transitions to ConsistentDisconnected. The master side transitions to IdlingDisconnected.

An informational status log is generated every time a relationship or Consistency Group enters the ConsistentStopped state with a status of Online. This log can be configured to enable an SNMP trap and provide a trigger to automation software to consider issuing a start command following a loss of synchronization.
ConsistentSynchronized
ConsistentSynchronized is a connected state. In this state, the master volume is accessible for read and write I/O, and the auxiliary volume is accessible for read-only I/O. Writes that are sent to the master volume are sent to both the master and auxiliary volumes. Either successful completion must be received for both writes, the write must be failed to the host, or a state transition out of the ConsistentSynchronized state must occur before the write is completed to the host.

A stop command takes the relationship to the ConsistentStopped state. A stop command with the -access parameter takes the relationship to the Idling state. A switch command leaves the relationship in the ConsistentSynchronized state, but reverses the master and auxiliary roles. A start command is accepted, but has no effect. If the relationship or Consistency Group becomes disconnected, the same transitions are made as for ConsistentStopped.
Idling
Idling is a connected state. Both master and auxiliary disks are operating in the master role. Consequently, both master and auxiliary disks are accessible for write I/O. In this state, the relationship or Consistency Group accepts a start command. Global Mirror maintains a record of regions on each disk that received write I/O while Idling. This record is used to determine what areas need to be copied following a start command.
The start command must specify the new copy direction. A start command can cause a loss of consistency if either volume in any relationship has received write I/O, which is indicated by the synchronized status. If the start command leads to loss of consistency, you must specify a -force parameter. Following a start command, the relationship or Consistency Group transitions to ConsistentSynchronized if there is no loss of consistency, or to InconsistentCopying if there is a loss of consistency. Also, while in this state, the relationship or Consistency Group accepts a -clean option on the start command. If the relationship or Consistency Group becomes disconnected, both sides change their state to IdlingDisconnected.
IdlingDisconnected
IdlingDisconnected is a disconnected state. The volumes in this half of the relationship or Consistency Group are all in the master role and accept read or write I/O. The major priority in this state is to recover the link and reconnect the relationship or Consistency Group.

No configuration activity is possible (except for deletes or stops) until the relationship is reconnected. At that point, the relationship transitions to a connected state. The exact connected state that is entered depends on the state of the other half of the relationship or Consistency Group, which depends on these factors:
- The state when it became disconnected
- The write activity since it was disconnected
- The configuration activity since it was disconnected

If both halves are IdlingDisconnected, the relationship becomes Idling when reconnected. While IdlingDisconnected, if a write I/O is received that causes loss of synchronization (the synchronized attribute transitions from true to false) and the relationship was not already stopped (either through a user stop or a persistent error), an event is raised. The same event is also raised when this condition occurs in the ConsistentSynchronized state.
InconsistentDisconnected
InconsistentDisconnected is a disconnected state. The volumes in this half of the relationship or Consistency Group are all in the auxiliary role and do not accept read or write I/O. No configuration activity, except for deletes, is permitted until the relationship reconnects.

When the relationship or Consistency Group reconnects, it becomes InconsistentCopying automatically unless either of these conditions exists:
- The relationship was InconsistentStopped when it became disconnected.
- The user issued a stop command while disconnected.

In either case, the relationship or Consistency Group becomes InconsistentStopped.
ConsistentDisconnected
ConsistentDisconnected is a disconnected state. The volumes in this half of the relationship or Consistency Group are all in the auxiliary role and accept read I/O but not write I/O. This state is entered from ConsistentSynchronized or ConsistentStopped when the auxiliary side of a relationship becomes disconnected.
In this state, the relationship or Consistency Group displays an attribute of FreezeTime, which is the point in time that consistency was frozen. When entered from ConsistentStopped, it retains the time that it had in that state. When entered from ConsistentSynchronized, the FreezeTime shows the last time at which the relationship or Consistency Group was known to be consistent. This time corresponds to the time of the last successful heartbeat to the other cluster.

A stop command with the -access flag set to true transitions the relationship or Consistency Group to the IdlingDisconnected state. This state allows write I/O to be performed to the auxiliary volume and is used as part of a DR scenario.

When the relationship or Consistency Group reconnects, it becomes ConsistentSynchronized only if this state does not lead to a loss of consistency, which is the case provided that both of these conditions are true:
- The relationship was ConsistentSynchronized when it became disconnected.
- No writes received successful completion at the master while disconnected.

Otherwise, the relationship becomes ConsistentStopped. The FreezeTime setting is retained.
Empty
This state only applies to Consistency Groups. It is the state of a Consistency Group that has no relationships and no other state information to show. It is entered when a Consistency Group is first created. It is exited when the first relationship is added to the Consistency Group, at which point, the state of the relationship becomes the state of the Consistency Group.
This access is only provided where consistency can be guaranteed. However, there is no way in which coherency can be maintained between reads that are performed at the auxiliary and later write I/Os that are performed at the master.

To enable access to the auxiliary volume for host operations, you must stop the Global Mirror relationship by specifying the -access parameter. While access to the auxiliary volume for host operations is enabled, you must instruct the host to mount the volume and perform other related tasks before the application can be started or instructed to perform a recovery process.

Using an auxiliary copy demands a conscious policy decision by the administrator that a failover is required, and the tasks to be performed on the host that is involved in establishing operation on the auxiliary copy are substantial. The goal is to make this failover rapid (much faster than recovering from a backup copy), but it is not seamless. You can automate the failover process by using failover management software. The SVC provides Simple Network Management Protocol (SNMP) traps and programming (or scripting) for the command-line interface (CLI) to enable this automation.
A per I/O Group limit of 1024 TB exists on the quantity of master and auxiliary volume address space that can participate in Metro Mirror and Global Mirror relationships. This maximum configuration consumes all 512 MB of bitmap space for the I/O Group, leaving no bitmap space available for FlashCopy.
svcinfo lsclustercandidate
Use the svcinfo lsclustercandidate command to list the clusters that are available for setting up a two-cluster partnership. This command is a prerequisite for creating Global Mirror relationships. To display the characteristics of the cluster, use the svcinfo lscluster command, specifying the name of the cluster.
svctask chcluster
The svctask chcluster command provides the following parameters for Global Mirror:

-gmlinktolerance link_tolerance
This parameter specifies the maximum period of time that the system tolerates delay before stopping Global Mirror relationships. Specify values between 60 and 86400 seconds in increments of 10 seconds. The default value is 300. Do not change this value except under the direction of IBM Support.

-relationshipbandwidthlimit cluster_relationship_bandwidth_limit
This parameter controls the maximum rate at which any one remote copy relationship can synchronize. The default value for the relationship bandwidth limit is 25 MBps, but this value can be specified from 1 MBps to 1000 MBps. Note that the overall limit is controlled by the -bandwidth parameter of each cluster partnership, so the partnership bandwidth might need to be raised accordingly.
Attention: Do not set this value higher than the default without first establishing that the higher bandwidth can be sustained without impacting host performance.

-gminterdelaysimulation link_tolerance
This parameter specifies the number of milliseconds that I/O activity (intercluster copying to an auxiliary volume) is delayed. This parameter permits you to test performance implications before deploying Global Mirror and obtaining a long-distance link. Specify a value from 0 to 100 milliseconds in 1 millisecond increments. The default value is 0. Use this argument to test each intercluster Global Mirror relationship separately.

-gmintradelaysimulation link_tolerance
This parameter specifies the number of milliseconds that I/O activity (intracluster copying to an auxiliary volume) is delayed. This parameter permits you to test performance implications before deploying Global Mirror and obtaining a long-distance link. Specify a value from 0 to 100 milliseconds in 1 millisecond increments. The default value is 0. Use this argument to test each intracluster Global Mirror relationship separately.

Use the svctask chcluster command to adjust these values; see the following example:
svctask chcluster -gmlinktolerance 300
You can view all of these parameter values with the svcinfo lscluster <clustername> command.
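For instance, assuming a cluster named ITSO-CLS1, the delay simulation can be enabled for testing and the settings then verified as follows (the 20 ms value is purely illustrative and is not a recommendation):

```
IBM_2145:ITSO-CLS1:admin>svctask chcluster -gminterdelaysimulation 20
IBM_2145:ITSO-CLS1:admin>svctask chcluster -gmintradelaysimulation 0
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster ITSO-CLS1
```

Remember to set the simulation values back to 0 after testing is complete.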
gmlinktolerance
The gmlinktolerance parameter warrants particular attention. If poor response extends past the specified tolerance, a 1920 event is logged and one or more Global Mirror relationships are automatically stopped, which protects the application hosts at the primary site. During normal operation, application hosts experience a minimal effect from the response times, because the Global Mirror feature uses asynchronous replication. However, if Global Mirror operations experience degraded response times from the secondary cluster for an extended period of time, I/O operations begin to queue at the primary cluster. This queue results in an extended response time to application hosts. In this situation, the gmlinktolerance feature stops Global Mirror relationships, and the application hosts' response time returns to normal. After a 1920 event has occurred, the Global Mirror auxiliary volumes are no longer in the consistent_synchronized state until you fix the cause of the event and restart your Global Mirror relationships. For this reason, ensure that you monitor the cluster to track when these 1920 events occur. You can disable the gmlinktolerance feature by setting the gmlinktolerance value to 0 (zero). However, the gmlinktolerance feature cannot protect applications from extended response times if it is disabled. It might be appropriate to disable the gmlinktolerance feature under the following circumstances:
- During SAN maintenance windows, where degraded performance is expected from SAN components and application hosts can withstand extended response times from Global Mirror volumes.
Chapter 8. Advanced Copy Services
- During periods when application hosts can tolerate extended response times and it is expected that the gmlinktolerance feature might stop the Global Mirror relationships. For example, if you test by using an I/O generator that is configured to stress the back-end storage, the gmlinktolerance feature might detect the high latency and stop the Global Mirror relationships. Disabling the gmlinktolerance feature prevents this result at the risk of exposing the test host to extended response times.
We suggest using a script to periodically monitor the Global Mirror status. Example 8-2 shows an example of a ksh script that checks the Global Mirror status.
Example 8-2 Script example
[AIX1@root] /usr/GMC > cat checkSVCgm
#!/bin/ksh
#
# Description: check the Global Mirror Consistency Group status periodically
#              and try to restart the group after it stops.
#
# GM_STATUS        GM status variable
# HOSTsvcNAME      SVC cluster IP address
# PARA_TEST        consistent_synchronized variable
# PARA_TESTSTOPIN  inconsistent_stopped variable
# PARA_TESTSTOP    consistent_stopped variable
# IDCONS           Consistency Group ID variable

# variable definition
HOSTsvcNAME="128.153.3.237"
IDCONS=255
PARA_TEST="consistent_synchronized"
PARA_TESTSTOP="consistent_stopped"
PARA_TESTSTOPIN="inconsistent_stopped"
FLOG="/usr/GMC/log/gmtest.log"
VAR=0

# start program: loop continuously unless an argument is given
if [[ $1 == "" ]]
then
    CICLI="true"
else
    CICLI="false"
fi

while $CICLI
do
    GM_STATUS=`ssh -l admin $HOSTsvcNAME svcinfo lsrcconsistgrp -delim : | awk -F: 'NR==2 {print $8}'`
    echo "`date` Global Mirror STATUS <$GM_STATUS>" >> $FLOG
    if [[ $GM_STATUS = $PARA_TEST ]]
    then
        sleep 600
    else
        sleep 600
        GM_STATUS=`ssh -l admin $HOSTsvcNAME svcinfo lsrcconsistgrp -delim : | awk -F: 'NR==2 {print $8}'`
        if [[ $GM_STATUS = $PARA_TESTSTOP || $GM_STATUS = $PARA_TESTSTOPIN ]]
        then
            ssh -l admin $HOSTsvcNAME svctask startrcconsistgrp -force $IDCONS
            TESTEX=$?
            echo "`date` Global Mirror RESTARTED with RC=$TESTEX" >> $FLOG
        fi
        GM_STATUS=`ssh -l admin $HOSTsvcNAME svcinfo lsrcconsistgrp -delim : | awk -F: 'NR==2 {print $8}'`
        if [[ $GM_STATUS = $PARA_TESTSTOP || $GM_STATUS = $PARA_TESTSTOPIN ]]
        then
            echo "`date` ERROR Global Mirror restart failed <$GM_STATUS>" >> $FLOG
        else
            echo "`date` Global Mirror restarted <$GM_STATUS>" >> $FLOG
        fi
        sleep 600
    fi
    ((VAR+=1))
done

The script in Example 8-2 on page 432 performs these functions:
- Check the Global Mirror status every 600 seconds.
- If the status is ConsistentSynchronized, wait another 600 seconds and test again.
- If the status is ConsistentStopped or InconsistentStopped, wait another 600 seconds and then try to restart Global Mirror.
If the status remains ConsistentStopped or InconsistentStopped after the restart attempt, it is likely that an associated 1920 event exists, which means that we might have a performance problem. Waiting 600 seconds before restarting Global Mirror can give the SVC enough time to deliver the high workload that is requested by the server. Because Global Mirror has been stopped for 10 minutes (600 seconds), the auxiliary copy is now out of date by this amount of time and must be resynchronized. Sample script: The script described in Example 8-2 on page 432 is supplied as is. A 1920 event indicates that one or more of the SAN components is unable to provide the performance that is required by the application hosts. This situation can be temporary (for example, a result of a maintenance activity) or permanent (for example, a result of a hardware failure or an unexpected host I/O workload). If 1920 events are occurring, it might be necessary to use a performance monitoring and analysis tool, such as the IBM Tivoli Storage Productivity Center, to assist in identifying and resolving the problem.
svctask mkpartnership
Use the svctask mkpartnership command to establish a one-way Global Mirror partnership between the local cluster and a remote cluster. To establish a fully functional Global Mirror partnership, you must issue this command on both clusters. This step is a prerequisite for creating Global Mirror relationships between volumes on the SVC clusters. When creating the partnership, you can specify the bandwidth to be used by the background copy process between the local and the remote SVC cluster; if it is not specified, the bandwidth defaults to 50 MBps. The bandwidth must be set to a value that is less than or equal to the bandwidth that can be sustained by the intercluster link.
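As a sketch, with hypothetical cluster names ITSO-CLS1 and ITSO-CLS2 and an intercluster link that can sustain at least 100 MBps, the two-way partnership is built by issuing the command once from each side:

```
IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 100 ITSO-CLS2
IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 100 ITSO-CLS1
```

Until the second command runs, the partnership remains one-way (partially configured).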
svctask chpartnership
To change the bandwidth that is available for background copy in an SVC cluster partnership, use the svctask chpartnership command to specify the new bandwidth.
svctask mkrcconsistgrp
Use the svctask mkrcconsistgrp command to create a new, empty Global Mirror Consistency Group. The Global Mirror Consistency Group name must be unique across all Consistency Groups that are known to the clusters owning this Consistency Group. If the Consistency Group involves two clusters, the clusters must be in communication throughout the creation process. The new Consistency Group does not contain any relationships and will be in the Empty state. You can add Global Mirror relationships to the group, either upon creation or afterward, by using the svctask chrelationship command.
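For example, a new Consistency Group that spans two clusters might be created as follows; the group name CG_W2K3 and the remote cluster name ITSO-CLS2 are hypothetical:

```
IBM_2145:ITSO-CLS1:admin>svctask mkrcconsistgrp -name CG_W2K3 -cluster ITSO-CLS2
```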
svctask mkrcrelationship
Use the svctask mkrcrelationship command to create a new Global Mirror relationship. This relationship persists until it is deleted. The auxiliary volume must be equal in size to the master volume or the command will fail, and if both volumes are in the same cluster, they must both be in the same I/O Group. The master and auxiliary volume cannot be in an existing relationship, and they cannot be the target of a FlashCopy mapping. This command returns the new relationship (relationship_id) when successful. When creating the Global Mirror relationship, you can add it to a Consistency Group that already exists, or it can be a stand-alone Global Mirror relationship if no Consistency Group is specified. To check whether the master or auxiliary volumes comply with the prerequisites to participate in a Global Mirror relationship, use the svcinfo lsrcrelationshipcandidate command, as shown in svcinfo lsrcrelationshipcandidate on page 435.
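A sketch of the command follows. The volume, relationship, and cluster names are hypothetical; the -global parameter is what distinguishes a Global Mirror relationship from a Metro Mirror relationship:

```
IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master GM_DB_Pri -aux GM_DB_Sec -cluster ITSO-CLS2 -global -name GMREL1
```

Adding -consistgrp CG_W2K3 (for an existing group) would place the relationship in a Consistency Group instead of leaving it stand-alone.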
svcinfo lsrcrelationshipcandidate
Use the svcinfo lsrcrelationshipcandidate command to list the available volumes that are eligible to form a Global Mirror relationship. When issuing the command, you can specify the master volume name and auxiliary cluster to list candidates that comply with the prerequisites to create a Global Mirror relationship. If the command is issued with no parameters, all volumes that are not disallowed by another configuration state, such as being a FlashCopy target, are listed.
svctask chrcrelationship
Use the svctask chrcrelationship command to modify the following properties of a Global Mirror relationship:
- Change the name of a Global Mirror relationship.
- Add a relationship to a group.
- Remove a relationship from a group using the -force flag.
Adding a Global Mirror relationship: When adding a Global Mirror relationship to a Consistency Group that is not empty, the relationship must have the same state and copy direction as the group to be added to it.
svctask chrcconsistgrp
Use the svctask chrcconsistgrp command to change the name of a Global Mirror Consistency Group.
svctask startrcrelationship
Use the svctask startrcrelationship command to start the copy process of a Global Mirror relationship. When issuing the command, you can set the copy direction if it is undefined, and, optionally, you can mark the auxiliary volume of the relationship as clean. The command fails if it is used as an attempt to start a relationship that is already a part of a Consistency Group. You can only issue this command to a relationship that is connected. For a relationship that is idling, this command assigns a copy direction (master and auxiliary roles) and begins the copy process. Otherwise, this command restarts a previous copy process that was stopped either by a stop command or by an I/O error. If the resumption of the copy process leads to a period when the relationship is inconsistent, you must specify the -force parameter when restarting the relationship. This situation can arise if, for example, the relationship was stopped and then further writes were performed on the original master of the relationship. The use of the -force parameter here is a reminder that the data on the auxiliary will become inconsistent while resynchronization (background copying) takes place and, therefore, is unusable for DR purposes before the background copy has completed. In the Idling state, you must specify the master volume to indicate the copy direction. In other connected states, you can provide the -primary argument, but it must match the existing setting.
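For instance, an idling stand-alone relationship (hypothetically named GMREL1) can be started with the master volume as the copy source; the -force parameter is included here only because a restart after writes to the master leads to a period of inconsistency:

```
IBM_2145:ITSO-CLS1:admin>svctask startrcrelationship -primary master -force GMREL1
```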
svctask stoprcrelationship
Use the svctask stoprcrelationship command to stop the copy process for a relationship. You can also use this command to enable write access to a consistent auxiliary volume by specifying the -access parameter. This command applies to a stand-alone relationship. It is rejected if it is addressed to a relationship that is part of a Consistency Group. You can issue this command to stop a relationship that is copying from master to auxiliary. If the relationship is in an inconsistent state, any copy operation stops and does not resume until you issue an svctask startrcrelationship command. Write activity is no longer copied from the master to the auxiliary volume. For a relationship in the ConsistentSynchronized state, this command causes a Consistency Freeze. When a relationship is in a consistent state (that is, in the ConsistentStopped, ConsistentSynchronized, or ConsistentDisconnected state), you can use the -access parameter with the svctask stoprcrelationship command to enable write access to the auxiliary volume.
svctask startrcconsistgrp
Use the svctask startrcconsistgrp command to start a Global Mirror Consistency Group. You can only issue this command to a Consistency Group that is connected. For a Consistency Group that is idling, this command assigns a copy direction (master and auxiliary roles) and begins the copy process. Otherwise, this command restarts a previous copy process that was stopped either by a stop command or by an I/O error.
svctask stoprcconsistgrp
Use the svctask stoprcconsistgrp command to stop the copy process for a Global Mirror Consistency Group. You can also use this command to enable write access to the auxiliary volumes in the group if the group is in a consistent state. If the Consistency Group is in an inconsistent state, any copy operation stops and does not resume until you issue the svctask startrcconsistgrp command. Write activity is no longer copied from the master to the auxiliary volumes that belong to the relationships in the group. For a Consistency Group in the ConsistentSynchronized state, this command causes a Consistency Freeze. When a Consistency Group is in a consistent state (for example, in the ConsistentStopped, ConsistentSynchronized, or ConsistentDisconnected state), you can use the -access parameter with the svctask stoprcconsistgrp command to enable write access to the auxiliary volumes within that group.
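For example, to freeze a group (hypothetically named CG_W2K3) and enable write access to its auxiliary volumes in a single step:

```
IBM_2145:ITSO-CLS1:admin>svctask stoprcconsistgrp -access CG_W2K3
```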
svctask rmrcrelationship
Use the svctask rmrcrelationship command to delete the relationship that is specified. Deleting a relationship only deletes the logical relationship between the two volumes. It does not affect the volumes themselves. If the relationship is disconnected at the time that the command is issued, the relationship is only deleted on the cluster on which the command is being run. When the clusters reconnect, the relationship is automatically deleted on the other cluster. Alternatively, if the clusters are disconnected, and you still want to remove the relationship on both clusters, you can issue the rmrcrelationship command independently on both of the clusters. A relationship cannot be deleted if it is part of a Consistency Group. You must first remove the relationship from the Consistency Group. If you delete an inconsistent relationship, the auxiliary volume becomes accessible even though it is still inconsistent. This situation is the one case in which Global Mirror does not inhibit access to inconsistent data.
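In the disconnected case described above, the relationship (hypothetically named GMREL1) must be removed on each cluster separately:

```
IBM_2145:ITSO-CLS1:admin>svctask rmrcrelationship GMREL1
IBM_2145:ITSO-CLS2:admin>svctask rmrcrelationship GMREL1
```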
svctask rmrcconsistgrp
Use the svctask rmrcconsistgrp command to delete a Global Mirror Consistency Group. This command deletes the specified Consistency Group. You can issue this command for any existing Consistency Group. If the Consistency Group is disconnected at the time that the command is issued, the Consistency Group is only deleted on the cluster on which the command is being run. When the clusters reconnect, the Consistency Group is automatically deleted on the other cluster. Alternatively, if the clusters are disconnected, and you still want to remove the Consistency Group on both clusters, you can issue the svctask rmrcconsistgrp command separately on both of the clusters. If the Consistency Group is not empty, the relationships within it are removed from the Consistency Group before the group is deleted. These relationships then become stand-alone relationships. The state of these relationships is not changed by the action of removing them from the Consistency Group.
svctask switchrcrelationship
Use the svctask switchrcrelationship command to reverse the roles of the master volume and the auxiliary volume when a stand-alone relationship is in a consistent state; when issuing the command, the desired master needs to be specified.
svctask switchrcconsistgrp
Use the svctask switchrcconsistgrp command to reverse the roles of the master volume and the auxiliary volume when a Consistency Group is in a consistent state. This change is applied to all of the relationships in the Consistency Group, and when issuing the command, the desired master needs to be specified.
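For example, to make the current auxiliary volumes the new source of the copy for a hypothetical group named CG_W2K3:

```
IBM_2145:ITSO-CLS1:admin>svctask switchrcconsistgrp -primary aux CG_W2K3
```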
Chapter 9.
Help: You can also use -h instead of -?, for example, svcinfo -h or svctask commandname -h. If you look at the syntax of a command by typing svcinfo commandname -?, you often see -filter listed as a parameter. Be aware that the correct parameter is -filtervalue. Tip: You can use the up and down arrow keys on your keyboard to recall commands that were recently issued. Then, you can use the left and right, Backspace, and Delete keys to edit commands before you resubmit them.
Using reverse-i-search
If you work on your SVC with the same PuTTY session for many hours and enter many commands, scrolling back to find a previous or similar command can be a time-intensive task. In this case, reverse-i-search can help you quickly and easily find any command that you already issued in your command history. Pressing Ctrl+R allows you to interactively search through the command history as you type. Pressing Ctrl+R at an empty command prompt gives you a prompt as shown in Example 9-1.
Example 9-1 Using reverse-i-search
IBM_2145:ITSO-CLS5:admin>svcinfo lsarray mdisk_id mdisk_name status mdisk_grp_id mdisk_grp_name capacity raid_status raid_level redundancy strip_size tier 298 SDD-Array_1 online 0 ITSO-Storage_Pool-Multi_Tier 135.7GB online raid10 1 256 generic_ssd (reverse-i-search)`sv': svcinfo lsarray
As shown in Example 9-1 on page 440, we had executed an svcinfo lsarray command. Pressing Ctrl+R and typing sv then recalled the command we needed from the history.
IBM_2145:ITSO_SVC_4:admin>svcinfo lscontroller 0
id 0
controller_name ITSO_XIV_01
WWNN 50017380022C0000
mdisk_link_count 10
max_mdisk_link_count 10
degraded no
vendor_id IBM
product_id_low 2810XIV
product_id_high LUN-0
product_revision 10.1
ctrl_s/n
allow_quorum yes
WWPN 50017380022C0170
path_count 2
max_path_count 4
WWPN 50017380022C0180
path_count 2
max_path_count 2
WWPN 50017380022C0190
path_count 4
max_path_count 6
WWPN 50017380022C0182
path_count 4
max_path_count 12
WWPN 50017380022C0192
path_count 4
max_path_count 6
WWPN 50017380022C0172
path_count 4
max_path_count 6
IBM_2145:ITSO-CLS1:admin>svctask chcontroller -name DS4500 controller0 IBM_2145:ITSO-CLS1:admin>svcinfo lscontroller -delim , id,controller_name,ctrl_s/n,vendor_id,product_id_low,product_id_high 0,DS4500,,IBM ,1742-900, 1,DS4700,,IBM ,1814 , FAStT This command renames the controller named controller0 to DS4500. Choosing a new name: The chcontroller command specifies the new name first. You can use letters A to Z, a to z, numbers 0 to 9, the dash (-), and the underscore (_). The new name can be between one and 63 characters in length. However, the new name cannot start with a number, dash, or the word controller (because this prefix is reserved for SVC assignment only).
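The naming rules above can be checked before issuing the rename. The following helper is purely illustrative (it is not part of the SVC CLI); it assumes a POSIX shell with grep available and simply encodes the rules stated in the note:

```shell
# Illustrative helper, not an SVC command: check a proposed object name
# against the naming rules above before running svctask chcontroller.
valid_controller_name() {
    name="$1"
    # reject the reserved "controller" prefix, a leading digit, or a leading dash
    case "$name" in
        controller*|[0-9]*|-*) return 1 ;;
    esac
    # allow only A-Z, a-z, 0-9, dash, and underscore; 1 to 63 characters
    echo "$name" | grep -Eq '^[A-Za-z0-9_-]{1,63}$'
}

valid_controller_name DS4500 && echo "DS4500 is acceptable"
valid_controller_name controller0 || echo "controller0 is reserved"
```

The same character and prefix rules recur for other SVC objects in this chapter (MDisks, storage pools), so a helper of this kind can be reused with the appropriate reserved prefix.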
IBM_2145:ITSO-CLS5:admin>svcinfo lsdiscoverystatus
id scope IO_group_id IO_group_name status
0 fc_fabric inactive
1 sas_iogrp 0 io_grp0 inactive
This command displays the state of all discoveries in the cluster. During discovery, the system updates the drive and MDisk records. You must wait until the discovery has finished and is inactive before you attempt to use the system. This command displays one of the following results:
- active: There is a discovery operation in progress at the time that the command is issued.
- inactive: There are no discovery operations in progress at the time that the command is issued.
Use the svctask detectmdisk command to scan for newly added MDisks (Example 9-5).
Example 9-5 svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
To check whether any newly added MDisks were successfully detected, run the svcinfo lsmdisk command and look for new unmanaged MDisks. If the disks do not appear, check that the disks are appropriately assigned to the SVC in the disk subsystem and that the zones are set up properly.
Note: If you have assigned a large number of logical unit numbers (LUNs) to your SVC, the discovery process can take time. Check several times by using the svcinfo lsmdisk command to verify that all of the MDisks that you were expecting are present.
When all of the disks allocated to the SVC are seen from the SVC cluster, the following procedure is a useful way to verify which MDisks are unmanaged and ready to be added to a storage pool. Perform the following steps to display MDisks:
1. Enter the svcinfo lsmdiskcandidate command, as shown in Example 9-6. This command displays all detected MDisks that are not currently part of a storage pool.
Example 9-6 svcinfo lsmdiskcandidate command IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskcandidate id 0 1 2 . .
Alternatively, you can list all MDisks (managed or unmanaged) by issuing the svcinfo lsmdisk command, as shown in Example 9-7.
Example 9-7 svcinfo lsmdisk command IBM_2145:ITSO-CLS5:admin>svcinfo lsmdisk -filtervalue controller_name=ITSO-4700 id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID tier 43 mdisk43 online managed 1 ITSO-Storage_Pool-Single_Tier 66.8GB 0000000000000000 ITSO-4700 600a0b8000510b8a000003f54b 1fc84b00000000000000000000000000000000 generic_hdd 61 mdisk61 online managed 0 ITSO-Storage_Pool-Multi_Tier 66.8GB 0000000000000001 ITSO-4700 600a0b8000510f3a000003ee4b 1fc8c900000000000000000000000000000000 generic_hdd 73 mdisk73 online managed 2 STGPool_DS4700 66.8GB 0000000000000002 ITSO-4700 600a0b8000510b8a000003f84b 1fc8db00000000000000000000000000000000 generic_hdd 80 mdisk80 online managed 2 STGPool_DS4700 66.8GB 0000000000000003 ITSO-4700 600a0b8000510f3a000003f34b 1fc96700000000000000000000000000000000 generic_hdd 93 mdisk93 online unmanaged 66.8GB 0000000000000004 ITSO-4700 600a0b8000510b8a0000049e4b
From this output, you can see additional information about each MDisk (such as the current status). For the purpose of our current task, we are only interested in the unmanaged disks, because they are candidates for a storage pool (in our case all MDisks). Tip: The -delim parameter collapses output instead of wrapping text over multiple lines. 2. If not all of the MDisks that you expected are visible, rescan the available FC network by entering the svctask detectmdisk command, as shown in Example 9-8.
Example 9-8 svctask detectmdisk IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
3. If you run the svcinfo lsmdiskcandidate command again and your MDisk or MDisks are still not visible, check that the LUNs from your subsystem have been properly assigned to the SVC and that appropriate zoning is in place (for example, the SVC can see the disk subsystem). See Chapter 3, Planning and configuration on page 57 for details about setting up your storage area network (SAN) fabric.
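When only the unmanaged candidates are of interest, the filtering syntax shown earlier in this chapter can also be applied to the mode column (assuming that mode is accepted as a filter attribute at your code level):

```
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -filtervalue mode=unmanaged
```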
IBM_2145:ITSO-CLS5:admin>svcinfo lsmdisk -delim : id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_nam e:UID:tier 0:mdisk0:online:unmanaged:::47.0GB:0000000000000000:controller2:60050768017f0000a8 0000000000000000000000000000000000000000000000:generic_hdd . the remaining line has been removed for brevity . 61:mdisk61:online:managed:0:ITSO-Storage_Pool-Multi_Tier:66.8GB:0000000000000001:I TSO-4700:600a0b8000510f3a000003ee4b1fc8c900000000000000000000000000000000:generic_ hdd . the remaining line has been removed for brevity . 298:SSD-Array_1:online:array:0:ITSO-Storage_Pool-Multi_Tier:135.7GB::::generic_ssd Example 9-10 shows a summary for a single MDisk.
Example 9-10 Usage of the command svcinfo lsmdisk (ID)
IBM_2145:ITSO-CLS5:admin>svcinfo lsmdisk 61
id 61
name mdisk61
status online
mode managed
mdisk_grp_id 0
mdisk_grp_name ITSO-Storage_Pool-Multi_Tier
capacity 66.8GB
quorum_index 1
block_size 512
controller_name ITSO-4700
ctrl_type 4
ctrl_WWNN 200600A0B8510B8A
controller_id 12
path_count 2
max_path_count 2
ctrl_LUN_# 0000000000000001
UID 600a0b8000510f3a000003ee4b1fc8c900000000000000000000000000000000
preferred_WWPN 200700A0B8510B8B
active_WWPN 200700A0B8510B8B
fast_write_state empty
raid_status
raid_level
redundancy
strip_size
spare_goal
spare_protection_min
balanced
tier generic_hdd
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name mdisk_6 mdisk6 This command renamed the MDisk named mdisk6 to mdisk_6. The chmdisk command: The chmdisk command specifies the new name first. You can use letters A to Z, a to z, numbers 0 to 9, the dash (-), and the underscore (_). The new name can be between one and 63 characters in length. However, the new name cannot start with a number, dash, or the word mdisk (because this prefix is reserved for SVC assignment only).
and you can undertake preventive maintenance. If not, the hosts that were using virtual disks (VDisks), which used the excluded MDisk, now have I/O errors. By running the svcinfo lsmdisk command, you can see that mdisk61 is excluded in Example 9-12.
Example 9-12 svcinfo lsmdisk command: Excluded MDisk
IBM_2145:ITSO-CLS5:admin>svcinfo lsmdisk -delim : id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_nam e:UID:tier 0:mdisk0:online:unmanaged:::47.0GB:0000000000000000:controller2:60050768017f0000a8 0000000000000000000000000000000000000000000000:generic_hdd . the remaining line has been removed for brevity . 61:mdisk61:excluded:managed:0:ITSO-Storage_Pool-Multi_Tier:66.8GB:0000000000000001 :ITSO-4700:600a0b8000510f3a000003ee4b1fc8c900000000000000000000000000000000:generi c_hdd After taking the necessary corrective action to repair the MDisk (for example, replace the failed disk, repair the SAN zones, and so on), we need to include the MDisk again by issuing the svctask includemdisk command (Example 9-13), because the SVC cluster does not include the MDisk automatically.
Example 9-13 svctask includemdisk
IBM_2145:ITSO-CLS5:admin>svctask includemdisk mdisk61 Running the svcinfo lsmdisk command again shows mdisk61 online again; see Example 9-14.
Example 9-14 svcinfo lsmdisk command: Verifying that MDisk is included
IBM_2145:ITSO-CLS5:admin>svcinfo lsmdisk -delim : id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_nam e:UID:tier 0:mdisk0:online:unmanaged:::47.0GB:0000000000000000:controller2:60050768017f0000a8 0000000000000000000000000000000000000000000000:generic_hdd . the remaining line has been removed for brevity . 61:mdisk61:online:managed:0:ITSO-Storage_Pool-Multi_Tier:66.8GB:0000000000000001:I TSO-4700:600a0b8000510f3a000003ee4b1fc8c900000000000000000000000000000000:generic_ hdd
You can only add unmanaged MDisks to a storage pool. This command adds the MDisk named mdisk61 to the storage pool named ITSO-Storage_Pool-Multi_Tier. Important: Do not add this MDisk to a storage pool if you want to create an image mode volume from the MDisk that you are adding. As soon as you add an MDisk to a storage pool it becomes managed, and extent mapping is not necessarily one-to-one anymore.
IBM_2145:ITSO-CLS5:admin>svcinfo lsmdisk -filtervalue mdisk_grp_name=ITSO-Storage_* id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID tier 43 mdisk43 online managed 1 ITSO-Storage_Pool-Single_Tier 66.8GB 0000000000000000 ITSO-4700 600a0b8000510b8a000003f54b1fc84b00000000000000000000000000000000 generic_hdd 61 mdisk61 online managed 0 ITSO-Storage_Pool-Multi_Tier 66.8GB 0000000000000001 ITSO-4700 600a0b8000510f3a000003ee4b1fc8c900000000000000000000000000000000 generic_hdd 298 SSD-Array_1 online array 0 ITSO-Storage_Pool-Multi_Tier 135.7GB generic_ssd As you can see in Example 9-16, this command displays all the MDisks present in the storage pools named ITSO-Storage_*, where the asterisk (*) is a wildcard.
IBM_2145:ITSO-CLS5:admin>svctask mkmdiskgrp -name ITSO-Storage_Pool-Single_Tier -ext 256 MDisk Group, id [1], successfully created
This command creates a storage pool called ITSO-Storage_Pool-Single_Tier. The extent size that is used within this group is 256 MB. We have not added any MDisks to the storage pool yet, so it is an empty storage pool. You can add unmanaged MDisks and create the storage pool in the same command. Use the command svctask mkmdiskgrp with the -mdisk parameter and enter the IDs or names of the MDisks. This will add the MDisks immediately after the storage pool is created. Prior to the creation of the storage pool, enter the svcinfo lsmdisk command as shown in Example 9-18. This lists all of the available MDisks that are seen by the SVC cluster.
Example 9-18 Listing available MDisks
IBM_2145:ITSO-CLS5:admin>svcinfo lsmdisk -filtervalue controller_name=ITSO-4700 -delim : id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_nam e:UID:tier 43:mdisk43:online:managed:1:ITSO-Storage_Pool-Single_Tier:66.8GB:0000000000000000: ITSO-4700:600a0b8000510b8a000003f54b1fc84b00000000000000000000000000000000:generic _hdd 61:mdisk61:online:managed:0:ITSO-Storage_Pool-Multi_Tier:66.8GB:0000000000000001:I TSO-4700:600a0b8000510f3a000003ee4b1fc8c900000000000000000000000000000000:generic_ hdd 73:mdisk73:online:unmanaged:::66.8GB:0000000000000002:ITSO-4700:600a0b8000510b8a00 0003f84b1fc8db00000000000000000000000000000000:generic_hdd 80:mdisk80:online:unmanaged:::66.8GB:0000000000000003:ITSO-4700:600a0b8000510f3a00 0003f34b1fc96700000000000000000000000000000000:generic_hdd Using the same command as before (svctask mkmdiskgrp) and knowing the MDisk IDs that we are using, we can add multiple MDisks to the storage pool at the same time. We now add the unmanaged MDisks to the storage pool that we created, as shown in Example 9-19.
Example 9-19 Creating a storage pool and adding available MDisks
IBM_2145:ITSO-CLS5:admin>svctask mkmdiskgrp -name STGPool_DS4700 -ext 512 -mdisk 73:80 MDisk Group, id [2], successfully created This command creates a storage pool called STGPool_DS4700. The extent size that is used within this group is 512 MB, and two MDisks (73 and 80) are added to the storage pool. Storage pool name: The -name and -mdisk parameters are optional. If you do not enter a -name, the default is MDiskgrpx, where x is the ID sequence number that is assigned by the SVC internally. If you do not enter the -mdisk parameter, an empty storage pool is created. If you want to provide a name, you can use letters A to Z, a to z, numbers 0 to 9, and the underscore. The name can be between one and 63 characters in length, but it cannot start with a number or the word MDiskgrp (because this prefix is reserved for SVC assignment only). By running the svcinfo lsmdisk command, you now see the MDisks as managed and as part of the STGPool_DS4700, as shown in Example 9-20 on page 449.
IBM_2145:ITSO-CLS5:admin>svcinfo lsmdisk -filtervalue controller_name=ITSO-4700 -delim :
id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID:tier
43:mdisk43:online:managed:1:ITSO-Storage_Pool-Single_Tier:66.8GB:0000000000000000:ITSO-4700:600a0b8000510b8a000003f54b1fc84b00000000000000000000000000000000:generic_hdd
61:mdisk61:online:managed:0:ITSO-Storage_Pool-Multi_Tier:66.8GB:0000000000000001:ITSO-4700:600a0b8000510f3a000003ee4b1fc8c900000000000000000000000000000000:generic_hdd
73:mdisk73:online:managed:2:STGPool_DS4700:66.8GB:0000000000000002:ITSO-4700:600a0b8000510b8a000003f84b1fc8db00000000000000000000000000000000:generic_hdd
80:mdisk80:online:managed:2:STGPool_DS4700:66.8GB:0000000000000003:ITSO-4700:600a0b8000510f3a000003f34b1fc96700000000000000000000000000000000:generic_hdd

At this point, you have completed the tasks that are required to create a new storage pool.
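The colon-delimited output above lends itself to scripting. As a hedged illustration (this is our own sketch, not part of the SVC toolset), the following Python fragment parses an abbreviated lsmdisk listing, picks out the unmanaged MDisks, and assembles the mkmdiskgrp invocation; the truncated UIDs in the sample are placeholders:

```python
# Sketch: find unmanaged MDisks in `svcinfo lsmdisk -delim :` output and
# build the colon-separated -mdisk argument for svctask mkmdiskgrp.
# SAMPLE is abbreviated from the listing above (UIDs shortened).
SAMPLE = """\
id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID:tier
43:mdisk43:online:managed:1:ITSO-Storage_Pool-Single_Tier:66.8GB:0000000000000000:ITSO-4700:600a0b8a:generic_hdd
73:mdisk73:online:unmanaged:::66.8GB:0000000000000002:ITSO-4700:600a0db:generic_hdd
80:mdisk80:online:unmanaged:::66.8GB:0000000000000003:ITSO-4700:600a967:generic_hdd
"""

def unmanaged_mdisk_ids(listing: str) -> list:
    """Return the IDs of MDisks whose mode column reads 'unmanaged'."""
    lines = listing.strip().splitlines()
    header = lines[0].split(":")
    rows = [dict(zip(header, line.split(":"))) for line in lines[1:]]
    return [row["id"] for row in rows if row["mode"] == "unmanaged"]

ids = unmanaged_mdisk_ids(SAMPLE)
print(f"svctask mkmdiskgrp -name STGPool_DS4700 -ext 512 -mdisk {':'.join(ids)}")
```

Run against the sample, this prints the same mkmdiskgrp command that Example 9-19 issues by hand.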
IBM_2145:ITSO-CLS5:admin>svcinfo lsmdiskgrp -delim :
id:name:status:mdisk_count:vdisk_count:capacity:extent_size:free_capacity:virtual_capacity:used_capacity:real_capacity:overallocation:warning:easy_tier:easy_tier_status
0:ITSO-Storage_Pool-Multi_Tier:online:2:5:200.50GB:256:150.50GB:50.00GB:50.00GB:50.00GB:24:0:auto:active
1:ITSO-Storage_Pool-Single_Tier:online:1:2:66.25GB:256:46.25GB:20.00GB:20.00GB:20.00GB:30:0:on:active
2:STGPool_DS4700:online:2:0:132.50GB:512:132.50GB:0.00MB:0.00MB:0.00MB:0:0:auto:inactive
IBM_2145:ITSO-CLS5:admin>svctask chmdiskgrp -name STGPool_DS4700_new 2
IBM_2145:ITSO-CLS5:admin>svcinfo lsmdiskgrp -delim :
id:name:status:mdisk_count:vdisk_count:capacity:extent_size:free_capacity:virtual_capacity:used_capacity:real_capacity:overallocation:warning:easy_tier:easy_tier_status
0:ITSO-Storage_Pool-Multi_Tier:online:2:5:200.50GB:256:150.50GB:50.00GB:50.00GB:50.00GB:24:0:auto:active
1:ITSO-Storage_Pool-Single_Tier:online:1:2:66.25GB:256:46.25GB:20.00GB:20.00GB:20.00GB:30:0:on:active
2:STGPool_DS4700_new:online:2:0:132.50GB:512:132.50GB:0.00MB:0.00MB:0.00MB:0:0:auto:inactive

This command renamed the storage pool STGPool_DS4700 shown in Example 9-21 on page 449 to STGPool_DS4700_new, as shown in Example 9-22 on page 449.

Changing the storage pool: The chmdiskgrp command specifies the new name first. You can use the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the underscore (_). The new name can be between one and 63 characters in length. However, the new name cannot start with a number, a dash, or the word mdiskgrp (because this prefix is reserved for SVC assignment only).
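The naming rules described in the note above can be expressed as a small check. This is an illustrative sketch of the documented rules only, not the SVC's own validation code:

```python
import re

# Illustrative sketch of the documented pool-naming rules:
# letters, digits, dash, underscore; 1 to 63 characters; the name must
# not start with a digit or a dash; the mdiskgrp prefix is reserved.
NAME_RE = re.compile(r"^[A-Za-z_][A-Za-z0-9_-]{0,62}$")

def valid_pool_name(name: str) -> bool:
    if not NAME_RE.match(name):
        return False  # bad characters, too long, or starts with digit/dash
    return not name.lower().startswith("mdiskgrp")  # reserved prefix

print(valid_pool_name("STGPool_DS4700_new"))  # True
```

Checking names up front in scripts avoids a round trip to the cluster only to receive a CLI error.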
This command removes storage pool STGPool_DS4700_new from the SVC cluster configuration.

Removing a storage pool from the SVC cluster configuration: If there are MDisks within the storage pool, you must use the -force flag to remove the storage pool from the SVC cluster configuration, for example:

svctask rmmdiskgrp STGPool_DS4700_new -force

Be certain that you want to use this flag, because it destroys all mapping information and all data held on the volumes, and that data cannot be recovered.
IBM_2145:ITSO-CLS1:admin>svctask rmmdisk -mdisk 80 -force 2

This command removes the MDisk with ID 80 from the storage pool with ID 2. The -force flag is set because volumes are using this storage pool.

Sufficient space: The removal takes place only if there is sufficient space to migrate the volume data to other extents on other MDisks that remain in the storage pool. After you remove the MDisk from the storage pool, the mode change from managed to unmanaged can take time, depending on the size of the MDisk that you are removing.
IBM_2145:ITSO-CLS5:admin>svcinfo lshbaportcandidate
id
210000E08B89C1CD
210000E08B054CAA

After you verify that the displayed WWPNs match your host (use host or SAN switch utilities to check), use the svctask mkhost command to create your host.

Name: If you do not provide the -name parameter, the SVC automatically generates the name hostx (where x is the ID sequence number that is assigned by the SVC internally). You can use the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the underscore (_). The name can be between one and 63 characters in length. However, the name cannot start with a number, a dash, or the word host (because this prefix is reserved for SVC assignment only).

The command to create a host is shown in Example 9-26.
Example 9-26 svctask mkhost
IBM_2145:ITSO-CLS5:admin>svctask mkhost -name Almaden -hbawwpn 210000E08B89C1CD:210000E08B054CAA
Host, id [0], successfully created

This command creates a host called Almaden using WWPNs 21:00:00:E0:8B:89:C1:CD and 21:00:00:E0:8B:05:4C:AA.

Ports: You can define from one to eight ports per host, or you can use the addport command, which we show in 9.3.5, "Adding ports to a defined host" on page 455.
IBM_2145:ITSO-CLS5:admin>svctask mkhost -name Almaden -hbawwpn 210000E08B89C1CD:210000E08B054CAA -force Host, id [0], successfully created This command forces the creation of a host called Almaden using WWPN 210000E08B89C1CD:210000E08B054CAA. Note: WWPNs are not case sensitive in the CLI.
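Because the CLI accepts WWPNs with or without colons and in either case, scripts that compare WWPNs from different sources (switch zoning output, host utilities, SVC listings) should canonicalize them first. A minimal sketch of such a helper (our own convenience function, not an SVC facility):

```python
# Hypothetical helper: canonicalize a WWPN so that the colon-separated
# and plain forms compare equal, matching the case-insensitive CLI
# behavior described in the note above.
def normalize_wwpn(wwpn: str) -> str:
    hex_digits = wwpn.replace(":", "").lower()
    if len(hex_digits) != 16 or any(c not in "0123456789abcdef" for c in hex_digits):
        raise ValueError(f"not a valid WWPN: {wwpn!r}")
    return hex_digits

print(normalize_wwpn("21:00:00:E0:8B:89:C1:CD"))  # 210000e08b89c1cd
```

With this in place, "21:00:00:E0:8B:05:4C:AA" from a switch and "210000e08b054caa" from a host report compare equal.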
We create the host by issuing the mkhost command, as shown in Example 9-28. When the command completes successfully, we display our newly created host. It is important to know that when the host is initially configured, the default authentication method is set to no authentication and no Challenge Handshake Authentication Protocol (CHAP) secret is set. To set a CHAP secret for authenticating the iSCSI host with the SVC cluster, use the svctask chhost command with the chapsecret parameter.
Example 9-28 mkhost command
IBM_2145:ITSO-CLS1:admin>svctask mkhost -name Baldur -iogrp 0 -iscsiname iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com Host, id [4], successfully created IBM_2145:ITSO-CLS1:admin>svcinfo lshost 4 id 4 name Baldur port_count 1 type generic mask 1111 iogrp_count 1 iscsiname iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com node_logged_in_count 0 state offline We have now created our host definition. We map a volume to our new iSCSI server, as shown in Example 9-29 on page 454. We have already created the volume, as shown in 9.5.1, Creating a volume on page 458. In our scenario, our volume has ID 21 and the host name is Baldur. We map it to our iSCSI host.
Chapter 9. SAN Volume Controller operations using the command-line interface
IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Baldur 21 Virtual Disk to Host map, id [0], successfully created After the volume has been mapped to the host, we display the host information again, as shown in Example 9-30.
Example 9-30 svcinfo lshost
IBM_2145:ITSO-CLS1:admin>svcinfo lshost 4 id 4 name Baldur port_count 1 type generic mask 1111 iogrp_count 1 iscsiname iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com node_logged_in_count 1 state online Note: FC hosts and iSCSI hosts are handled in the same way operationally after they have been created. If you need to display a CHAP secret for an already defined server, use the svcinfo lsiscsiauth command. The lsiscsiauth command lists the Challenge Handshake Authentication Protocol (CHAP) secret configured for authenticating an entity to the SAN Volume Controller cluster.
IBM_2145:ITSO-CLS1:admin>svctask chhost -name Angola Guinea
IBM_2145:ITSO-CLS1:admin>svcinfo lshost
id name   port_count iogrp_count
0  Palau  2          4
1  Nile   2          1
2  Kanaga 2          1
3  Siam   2          2
4  Angola 1          4
This command renamed the host from Guinea to Angola. Note: The chhost command specifies the new name first. You can use letters A to Z and a to z, numbers 0 to 9, the dash (-), and the underscore (_). The new name can be between one and 63 characters in length. However, it cannot start with a number, dash, or the word host (because this prefix is reserved for SVC assignment only).
Note: If you use Hewlett-Packard UNIX (HP-UX), you use the -type option. See IBM System Storage Open Software Family SAN Volume Controller: Host Attachment Guide, SC26-7563, for more information about the hosts that require the -type parameter.
IBM_2145:ITSO-CLS1:admin>svctask rmhost Angola

Deleting a host: If there are any volumes assigned to the host, you must use the -force flag, for example: svctask rmhost -force Angola.
IBM_2145:ITSO-CLS1:admin>svcinfo lshbaportcandidate id 210000E08B054CAA If the WWPN matches your information (use host or SAN switch utilities to verify), use the svctask addhostport command to add the port to the host. Example 9-34 shows the command to add a host port.
Example 9-34 svctask addhostport
IBM_2145:ITSO-CLS1:admin>svctask addhostport -hbawwpn 210000E08B054CAA Palau

This command adds the WWPN 210000E08B054CAA to the Palau host.

Adding multiple ports: You can add multiple ports all at one time by using a colon (:) as the separator between WWPNs, for example:

svctask addhostport -hbawwpn 210000E08B054CAA:210000E08B89C1CD Palau
If the new HBA is not connected or zoned, the svcinfo lshbaportcandidate command does not display your WWPN. In this case, you can manually type the WWPN of your HBA or HBAs and use the -force flag to add the port to the host, as shown in Example 9-35.
Example 9-35 svctask addhostport
IBM_2145:ITSO-CLS1:admin>svctask addhostport -hbawwpn 210000E08B054CAA -force Palau This command forces the addition of the WWPN named 210000E08B054CAA to the host called Palau. WWPNs: WWPNs are not case sensitive within the CLI. If you run the svcinfo lshost command again, you see your host with an updated port count of 2 in Example 9-36.
Example 9-36 svcinfo lshost command: Port count
If your host currently uses iSCSI as a connection method, you must have the new iSCSI IQN ID before you add the port. Unlike FC-attached hosts, you cannot check for available candidates with iSCSI. After you have acquired the additional iSCSI IQN, use the svctask addhostport command, as shown in Example 9-37.
Example 9-37 Adding an iSCSI port to an already configured host
IBM_2145:ITSO-CLS1:admin>svcinfo lshost Palau id 0 name Palau port_count 2 type generic mask 1111
iogrp_count 4 WWPN 210000E08B054CAA node_logged_in_count 2 state active WWPN 210000E08B89C1CD node_logged_in_count 2 state offline When you know the WWPN or iSCSI IQN, use the svctask rmhostport command to delete a host port, as shown in Example 9-39.
Example 9-39 svctask rmhostport
To remove a WWPN:

IBM_2145:ITSO-CLS1:admin>svctask rmhostport -hbawwpn 210000E08B89C1CD Palau

To remove an iSCSI IQN:

IBM_2145:ITSO-CLS1:admin>svctask rmhostport -iscsiname iqn.1991-05.com.microsoft:baldur Baldur

These commands remove the WWPN 210000E08B89C1CD from the Palau host and the iSCSI IQN iqn.1991-05.com.microsoft:baldur from the Baldur host.

Removing multiple ports: You can remove multiple ports at one time by using a colon (:) as the separator between the port names, for example:

svctask rmhostport -hbawwpn 210000E08B054CAA:210000E08B892BCD Angola
IBM_2145:ITSO-CLS5:admin>svcinfo lsportip
id node_id node_name IP_address mask gateway IP_address_6 prefix_6 gateway_6 MAC duplex state speed failover
1 1 node1 00:1a:64:95:2f:cc Full unconfigured 1Gb/s
1 1 node1 00:1a:64:95:2f:cc Full unconfigured 1Gb/s
2 1 node1 10.44.36.64 10.44.36.254 00:1a:64:95:2f:ce Full online 1Gb/s
2 1 node1 00:1a:64:95:2f:ce Full online 1Gb/s
1 2 node2 00:1a:64:95:3f:4c Full unconfigured 1Gb/s
1 2 node2 00:1a:64:95:3f:4c Full
2 2 node2 10.44.36.254 00:1a:64:95:3f:4e Full
2 2 node2 00:1a:64:95:3f:4e Full
1 3 node3 00:21:5e:41:53:18 Full
1 3 node3 00:21:5e:41:53:18 Full
2 3 node3 10.44.36.254 00:21:5e:41:53:1a Full
2 3 node3 00:21:5e:41:53:1a Full
1 4 node4 00:21:5e:41:56:8c Full
1 4 node4 00:21:5e:41:56:8c Full
2 4 node4 10.44.36.254 00:21:5e:41:56:8e Full
2 4 node4 00:21:5e:41:56:8e Full
Example 9-41 shows how the cfgportip command assigns an IP address to each node Ethernet port for iSCSI I/O.
Example 9-41 cfgportip command
IBM_2145:ITSO-CLS5:admin>svctask cfgportip -node 4 -ip 10.44.36.63 -gw 10.44.36.254 -mask 255.255.255.0 2
IBM_2145:ITSO-CLS5:admin>svctask cfgportip -node 1 -ip 10.44.36.64 -gw 10.44.36.254 -mask 255.255.255.0 2
IBM_2145:ITSO-CLS5:admin>svctask cfgportip -node 2 -ip 10.44.36.65 -gw 10.44.36.254 -mask 255.255.255.0 2
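When many node ports share the same gateway and netmask, the cfgportip invocations can be generated rather than typed. A sketch under the addressing used in Example 9-41 (the helper name and the node-to-IP map are ours):

```python
# Sketch: build the cfgportip commands shown above from a node-to-IP map.
# Gateway, mask, and port number are taken from the example listing.
GATEWAY, MASK, PORT = "10.44.36.254", "255.255.255.0", 2

def cfgportip_cmd(node: int, ip: str) -> str:
    return (f"svctask cfgportip -node {node} -ip {ip} "
            f"-gw {GATEWAY} -mask {MASK} {PORT}")

for node, ip in {4: "10.44.36.63", 1: "10.44.36.64", 2: "10.44.36.65"}.items():
    print(cfgportip_cmd(node, ip))
```

Generating the commands keeps the shared parameters consistent across nodes and avoids transcription errors in the IP addresses.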
Creating an image mode disk: If you do not specify the -size parameter when you create an image mode disk, the entire MDisk capacity is used.

When you are ready to create a volume, you must know the following information before you start:
- In which storage pool the volume is going to have its extents
- From which I/O Group the volume will be accessed
- Which SVC node will be the preferred node for the volume
- The size of the volume
- The name of the volume
- The type of the volume
- Whether this volume will be managed by Easy Tier to optimize its performance

When you are ready to create your striped volume, use the svctask mkvdisk command (we discuss sequential and image mode volumes later). In Example 9-42, this command creates a 10 GB striped volume with ID 7 within the storage pool STGPool_DS4700 and assigns it to the io_grp0 I/O Group. Its preferred node will be node 1.
Example 9-42 svctask mkvdisk command
IBM_2145:ITSO-CLS5:admin>svctask mkvdisk -mdiskgrp STGPool_DS4700 -iogrp io_grp0 -node 1 -size 10 -unit gb -name Tiger Virtual Disk, id [7], successfully created
To verify the results use the svcinfo lsvdisk command, as shown in Example 9-43.
Example 9-43 svcinfo lsvdisk command
IBM_2145:ITSO-CLS5:admin>svcinfo lsvdisk 7 id 7 name Tiger IO_group_id 0 IO_group_name io_grp0 status online mdisk_grp_id 2 mdisk_grp_name STGPool_DS4700 capacity 10.00GB type striped formatted no mdisk_id mdisk_name FC_id FC_name RC_id RC_name vdisk_UID 6005076801820000100000000000000D throttling 0 preferred_node_id 1 fast_write_state empty cache readwrite udid fc_map_count 0 sync_rate 50
copy_count 1 se_copy_count 0 copy_id 0 status online sync yes primary yes mdisk_grp_id 2 mdisk_grp_name STGPool_DS4700 type striped mdisk_id mdisk_name fast_write_state empty used_capacity 10.00GB real_capacity 10.00GB free_capacity 0.00MB overallocation 100 autoexpand warning grainsize se_copy no easy_tier on easy_tier_status inactive tier generic_ssd tier_capacity 0.00MB tier generic_hdd tier_capacity 10.00GB At this point, you have completed the required tasks to create a volume.
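The detailed lsvdisk output above is a simple key-value listing, with a per-copy stanza introduced by copy_id. As an illustration of how a script might consume it (abbreviated sample, helper name ours; this simple version keeps only the top-level fields before the first copy stanza):

```python
# Sketch: turn the key-value detail output of svcinfo lsvdisk into a dict.
# Copy stanzas (starting at copy_id) repeat keys such as status, so this
# version stops at the first copy_id line and keeps top-level fields only.
def parse_lsvdisk_detail(output: str) -> dict:
    fields = {}
    for line in output.strip().splitlines():
        key, _, value = line.partition(" ")
        if key == "copy_id":
            break  # per-copy stanza begins; stop at top-level fields
        fields[key] = value
    return fields

sample = """\
id 7
name Tiger
IO_group_name io_grp0
status online
capacity 10.00GB
copy_id 0
status online
"""
print(parse_lsvdisk_detail(sample)["name"])  # Tiger
```

A fuller parser would collect each copy stanza into its own dictionary rather than discarding it.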
IBM_2145:ITSO-CLS5:admin>svcinfo lsvdisk -delim : id:name:IO_group_id:IO_group_name:status:mdisk_grp_id:mdisk_grp_name:capacity:type :FC_id:FC_name:RC_id:RC_name:vdisk_UID:fc_map_count:copy_count:fast_write_state:se _copy_count 0:Volume_measured_only:0:io_grp0:online:1:ITSO-Storage_Pool-Single_Tier:10.00GB:st riped:::::60050768018200001000000000000003:0:1:empty:0 1:Volume_EasyTier_active:0:io_grp0:online:0:ITSO-Storage_Pool-Multi_Tier:10.00GB:s triped:::::60050768018200001000000000000005:0:1:empty:0 2:Volume_EasyTier_active1:0:io_grp0:online:0:ITSO-Storage_Pool-Multi_Tier:10.00GB: striped:::::60050768018200001000000000000009:0:1:empty:0 3:Volume_EasyTier_active2:0:io_grp0:online:0:ITSO-Storage_Pool-Multi_Tier:10.00GB: striped:::::6005076801820000100000000000000A:0:1:empty:0 4:Volume_EasyTier_active3:0:io_grp0:online:0:ITSO-Storage_Pool-Multi_Tier:10.00GB: striped:::::60050768018200001000000000000008:0:1:empty:0
5:Volume_not_measured:0:io_grp0:online:1:ITSO-Storage_Pool-Single_Tier:10.00GB:str iped:::::6005076801820000100000000000000B:0:1:empty:0 6:Volume_EasyTier_active4:0:io_grp0:online:0:ITSO-Storage_Pool-Multi_Tier:10.00GB: striped:::::6005076801820000100000000000000C:0:1:empty:0 7:Tiger:0:io_grp0:online:2:STGPool_DS4700:10.00GB:striped:::::60050768018200001000 00000000000D:0:1:empty:0 IBM_2145:ITSO-CLS4:admin>svcinfo lsvdisk Volume_measured_only id 0 name Volume_measured_only IO_group_id 0 IO_group_name io_grp0 status online mdisk_grp_id 1 mdisk_grp_name ITSO-Storage_Pool-Single_Tier capacity 10.00GB type striped formatted no mdisk_id mdisk_name FC_id FC_name RC_id RC_name vdisk_UID 60050768018281BEE000000000000000 throttling 0 preferred_node_id 3 fast_write_state empty cache readwrite udid fc_map_count 0 sync_rate 50 copy_count 1 se_copy_count 0 copy_id 0 status online sync yes primary yes mdisk_grp_id 1 mdisk_grp_name ITSO-Storage_Pool-Single_Tier type striped mdisk_id mdisk_name fast_write_state empty used_capacity 10.00GB real_capacity 10.00GB free_capacity 0.00MB overallocation 100 autoexpand warning grainsize se_copy no easy_tier on easy_tier_status measured tier generic_ssd
IBM_2145:ITSO-CLS5:admin>svctask mkvdisk -mdiskgrp STGPool_DS4700 -iogrp 0 -vtype striped -size 10 -unit gb -rsize 50% -autoexpand -grainsize 32 Virtual Disk, id [8], successfully created
This command creates a space-efficient 10 GB volume. The volume belongs to the storage pool named STGPool_DS4700 and is owned by the io_grp0 I/O Group. The real capacity automatically expands until the volume size of 10 GB is reached. The grain size is set to 32 KB, which is the default.

Disk size: When using the -rsize parameter, you have the following options: disk_size, disk_size_percentage, and auto. Specify the disk_size_percentage value using an integer, or an integer immediately followed by the percent (%) symbol. Specify the units for a disk_size integer using the -unit parameter; the default is MB. The -rsize value can be greater than, equal to, or less than the size of the volume. The auto option creates a volume copy that uses the entire size of the MDisk. If you specify the -rsize auto option, you must also specify the -vtype image option. An entry of 1 GB uses 1024 MB.
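To illustrate the -rsize arithmetic described in the note above: with -rsize 50% on a 10 GB volume, the initial real capacity is 5 GB. A hedged sketch of that calculation (it deliberately ignores the rounding to extent and grain boundaries that the SVC performs internally):

```python
# Sketch of how -rsize determines the initial real capacity of a
# space-efficient volume: either a percentage of the virtual size or an
# absolute amount in MB (the default unit). Illustrative arithmetic only.
def real_capacity_mb(virtual_gb: float, rsize: str) -> float:
    if rsize.endswith("%"):
        return virtual_gb * 1024 * float(rsize[:-1]) / 100.0  # 1 GB = 1024 MB
    return float(rsize)  # absolute size in MB

print(real_capacity_mb(10, "50%"))  # 5120.0 MB, i.e. 5 GB for a 10 GB volume
```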
As soon as the first MDisk extent has been migrated, the volume is no longer an image mode volume. You can add an image mode volume to a storage pool that is already populated with other types of volumes, such as striped or sequential volumes.

Size: An image mode volume must be at least 512 bytes (the capacity cannot be 0), and the minimum size that can be specified for an image mode volume must be the same as the extent size of the storage pool to which it is added, with a minimum of 16 MB. You must use the -mdisk parameter to specify an MDisk that has a mode of unmanaged. The -fmtdisk parameter cannot be used to create an image mode volume.

Capacity: If you create a mirrored volume from two image mode MDisks without specifying a -capacity value, the capacity of the resulting volume is the smaller of the two MDisks, and the remaining space on the larger MDisk is inaccessible. If you do not specify the -size parameter when you create an image mode disk, the entire MDisk capacity is used.

Use the svctask mkvdisk command to create an image mode volume, as shown in Example 9-46.
Example 9-46 svctask mkvdisk (image mode)
IBM_2145:ITSO-CLS5:admin>svctask mkvdisk -mdiskgrp STGPool_DS4700 -iogrp 0 -mdisk mdisk93 -vtype image -name Image_Volume_A Virtual Disk, id [9], successfully created
This command creates an image mode volume called Image_Volume_A using the mdisk93 MDisk. The volume belongs to the storage pool STGPool_DS4700 and is owned by the io_grp0 I/O Group. If we run the svcinfo lsvdisk command again, notice that the volume named Image_Volume_A has a type of image, as shown in Example 9-47.
Example 9-47 svcinfo lsvdisk
IBM_2145:ITSO-CLS5:admin>svcinfo lsvdisk -filtervalue type=image id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state se_copy_count 9 Image_Volume_A 0 io_grp0 online 2 STGPool_DS4700 66.80GB image 6005076801820000100000000000000F 0 1 empty 0
In addition, you can use volume mirroring as an alternative method of migrating volumes between storage pools. For example, if you have a non-mirrored volume in one storage pool and want to migrate that volume to another storage pool, you can add a new copy of the volume and specify the second storage pool. After the copies are synchronized, you can delete the copy in the first storage pool. The volume is copied to the second storage pool while remaining online during the copy. To create a mirrored copy of a volume, use the addvdiskcopy command. This command adds a copy of the chosen volume to the selected storage pool, which changes a non-mirrored volume into a mirrored volume. In the following scenario, we mirror a volume from one storage pool to another storage pool. As you can see in Example 9-48, the volume currently has a single copy with copy_id 0.
Example 9-48 svcinfo lsvdisk
IBM_2145:ITSO-CLS4:admin>svcinfo lsvdisk Volume_no_mirror id 2 name Volume_no_mirror IO_group_id 0 IO_group_name io_grp0 status online mdisk_grp_id 2 mdisk_grp_name STGPool_DS4700_1 capacity 1.00GB type striped formatted no mdisk_id mdisk_name FC_id FC_name RC_id RC_name vdisk_UID 60050768018281BEE000000000000004 throttling 0 preferred_node_id 2 fast_write_state empty cache readwrite udid fc_map_count 0 sync_rate 50 copy_count 1 se_copy_count 0 copy_id 0 status online sync yes primary yes mdisk_grp_id 2 mdisk_grp_name STGPool_DS4700_1 type striped mdisk_id mdisk_name 464
fast_write_state empty used_capacity 1.00GB real_capacity 1.00GB free_capacity 0.00MB overallocation 100 autoexpand warning grainsize se_copy no easy_tier on easy_tier_status inactive tier generic_ssd tier_capacity 0.00MB tier generic_hdd tier_capacity 1.00GB In Example 9-49, we add the volume copy mirror by using the svctask addvdiskcopy command.
Example 9-49 svctask addvdiskcopy
IBM_2145:ITSO-CLS4:admin>svctask addvdiskcopy -mdiskgrp STGPool_DS4700_2 -vtype striped -unit gb Volume_no_mirror Vdisk [2] copy [1] successfully created During the synchronization process, you can see the status by using the svcinfo lsvdisksyncprogress command. As shown in Example 9-50, the first time that the status is checked, the synchronization progress is at 26%, and the estimated completion time is 11:12:44. The second time that the command is run, the progress status is at 100%, and the synchronization is complete.
Example 9-50 Synchronization
IBM_2145:ITSO-CLS4:admin>svcinfo lsvdisksyncprogress vdisk_id vdisk_name copy_id progress estimated_completion_time 2 Volume_no_mirror 1 26 100920111244 IBM_2145:ITSO-CLS4:admin>svcinfo lsvdisksyncprogress vdisk_id vdisk_name copy_id progress estimated_completion_time 2 Volume_no_mirror 1 100 As you can see in Example 9-51, the new mirrored volume copy (copy_id 1) has been added and can be seen by using the svcinfo lsvdisk command.
Example 9-51 svcinfo lsvdisk
IBM_2145:ITSO-CLS4:admin>svcinfo lsvdisk 2 id 2 name Volume_no_mirror IO_group_id 0 IO_group_name io_grp0 status online mdisk_grp_id many mdisk_grp_name many capacity 1.00GB type many formatted no
mdisk_id many mdisk_name many FC_id FC_name RC_id RC_name vdisk_UID 60050768018281BEE000000000000004 throttling 0 preferred_node_id 2 fast_write_state empty cache readwrite udid fc_map_count 0 sync_rate 50 copy_count 2 se_copy_count 0 copy_id 0 status online sync yes primary yes mdisk_grp_id 2 mdisk_grp_name STGPool_DS4700_1 type striped mdisk_id mdisk_name fast_write_state empty used_capacity 1.00GB real_capacity 1.00GB free_capacity 0.00MB overallocation 100 autoexpand warning grainsize se_copy no easy_tier on easy_tier_status inactive tier generic_ssd tier_capacity 0.00MB tier generic_hdd tier_capacity 1.00GB copy_id 1 status online sync yes primary no mdisk_grp_id 3 mdisk_grp_name STGPool_DS4700_2 type striped mdisk_id mdisk_name fast_write_state empty used_capacity 1.00GB real_capacity 1.00GB free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 1.00GB

While adding a volume copy mirror, you can define the new copy with parameters that differ from the existing volume copy. For example, you can define a thin-provisioned volume copy for a volume whose existing copy is fully allocated, and vice versa, which is one way to migrate a fully allocated volume to a thin-provisioned volume.

Note: To change the parameters of a volume copy mirror, you must delete the volume copy and redefine it with the new values.

Now we can change the name of the volume just mirrored from Volume_no_mirror to Volume_mirrored, as shown in Example 9-52.
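The estimated_completion_time field shown in Example 9-50 is a packed YYMMDDHHMMSS value (100920111244 decodes to 2010-09-20 11:12:44) and is empty once synchronization completes. A small decoding sketch (our own helper, using only the format visible in the output):

```python
from datetime import datetime

# Sketch: decode the estimated_completion_time field of
# svcinfo lsvdisksyncprogress, which is packed as YYMMDDHHMMSS.
# The field is empty when the copy is fully synchronized.
def parse_completion_time(value: str):
    if not value:
        return None  # synchronization already complete
    return datetime.strptime(value, "%y%m%d%H%M%S")

print(parse_completion_time("100920111244"))  # 2010-09-20 11:12:44
```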
Example 9-52 volume name changing
IBM_2145:ITSO-CLS4:admin>svctask splitvdiskcopy -copy 1 -iogrp 0 -name Volume_new Volume_mirrored
Virtual Disk, id [3], successfully created

As you can see in Example 9-54, the new volume named Volume_new has been created as an independent volume.
Example 9-54 svcinfo lsvdisk
status online mdisk_grp_id 3 mdisk_grp_name STGPool_DS4700_2 capacity 1.00GB type striped formatted no mdisk_id mdisk_name FC_id FC_name RC_id RC_name vdisk_UID 60050768018281BEE000000000000005 throttling 0 preferred_node_id 1 fast_write_state empty cache readwrite udid fc_map_count 0 sync_rate 50 copy_count 1 se_copy_count 0 copy_id 0 status online sync yes primary yes mdisk_grp_id 3 mdisk_grp_name STGPool_DS4700_2 type striped mdisk_id mdisk_name fast_write_state empty used_capacity 1.00GB real_capacity 1.00GB free_capacity 0.00MB overallocation 100 autoexpand warning grainsize se_copy no easy_tier on easy_tier_status inactive tier generic_ssd tier_capacity 0.00MB tier generic_hdd tier_capacity 1.00GB By issuing the command in Example 9-53 on page 467, vdisk_B will no longer have its mirrored copy and a new volume will be created automatically.
IBM_2145:ITSO-CLS4:admin>svctask chvdisk -rate 20 -unitmb vdisk_C IBM_2145:ITSO-CLS4:admin>svctask chvdisk -warning 85% vdisk7
New name: The chvdisk command specifies the new name first. The name can consist of the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the underscore (_). It can be between one and 63 characters in length. However, it cannot start with a number, a dash, or the word vdisk (because this prefix is reserved for SVC assignment only).

The first command changes the volume throttling of vdisk_C to 20 MBps. The second command changes the thin-provisioned volume warning of vdisk7 to 85%. To verify the changes, issue the svcinfo lsvdisk command, as shown in Example 9-56.
Example 9-56 svcinfo lsvdisk command: Verifying throttling
IBM_2145:ITSO-CLS4:admin>svcinfo lsvdisk vdisk7 id 7 name vdisk7 IO_group_id 1 IO_group_name io_grp1 status online mdisk_grp_id 0 mdisk_grp_name STGPool_DS4700 capacity 10.0GB type striped formatted no mdisk_id mdisk_name FC_id FC_name RC_id RC_name vdisk_UID 60050768018301BF280000000000000A virtual_disk_throttling (MB) 20 preferred_node_id 6 fast_write_state empty cache readwrite udid fc_map_count 0 sync_rate 50 copy_count 1 copy_id 0 status online sync yes primary yes mdisk_grp_id 0 mdisk_grp_name STGPool_DS4700 type striped mdisk_id mdisk_name fast_write_state empty used_capacity 0.41MB real_capacity 5.02GB free_capacity 5.02GB overallocation 199 autoexpand on warning 85 grainsize 32
IBM_2145:ITSO-CLS4:admin>svctask rmvdisk volume_A This command deletes the volume_A volume from the SVC configuration. If the volume is assigned to a host, you need to use the -force flag to delete the volume (Example 9-58).
Example 9-58 svctask rmvdisk (-force)
IBM_2145:ITSO-CLS4:admin>svctask expandvdisksize -size 5 -unit gb volume_C This command expands the volume_C volume, which was 35 GB before, by another 5 GB to give it a total size of 40 GB. To expand a thin-provisioned volume, you can use the -rsize option, as shown in Example 9-60 on page 472. This command changes the real size of the volume_B volume to a real capacity of 55 GB. The capacity of the volume remains unchanged.
IBM_2145:ITSO-CLS4:admin>svcinfo lsvdisk volume_B id 1 name volume_B capacity 100.0GB mdisk_name fast_write_state empty used_capacity 0.41MB real_capacity 50.00GB free_capacity 50.00GB overallocation 200 autoexpand off warning 40 grainsize 32 IBM_2145:ITSO-CLS1:admin>svctask expandvdisksize -rsize 5 -unit gb volume_B IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk volume_B id 1 name vdisk_B capacity 100.0GB mdisk_grp_name STGPool_DS4700 type striped mdisk_id mdisk_name fast_write_state empty used_capacity 0.41MB real_capacity 55.00GB free_capacity 55.00GB overallocation 181 autoexpand off warning 40 grainsize 32 Important: If a volume is expanded, its type will become striped even if it was previously sequential or in image mode. If there are not enough extents to expand your volume to the specified size, you receive the following error message: CMMVC5860E Ic_failed_vg_insufficient_virtual_extents
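The overallocation value reported for volume_B follows directly from the capacities: virtual capacity divided by real capacity, shown as a truncated percentage (100 GB over 50 GB gives 200; after the expansion, 100 GB over 55 GB gives 181). The arithmetic as a sketch:

```python
# Sketch: overallocation as reported by lsvdisk is the virtual capacity
# divided by the real capacity, truncated to a whole percentage. The
# figures below match the volume_B listing above.
def overallocation(virtual_gb: float, real_gb: float) -> int:
    return int(virtual_gb / real_gb * 100)

print(overallocation(100, 50))  # 200, before expandvdisksize -rsize
print(overallocation(100, 55))  # 181, after adding 5 GB of real capacity
```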
If you do not specify a SCSI LUN ID, the cluster automatically assigns the next available SCSI LUN ID, given any mappings that already exist with that host. Using the volume and host definitions that we created in the previous sections, we assign volumes to hosts so that they are ready for use. We use the svctask mkvdiskhostmap command (see Example 9-61).
Example 9-61 svctask mkvdiskhostmap
IBM_2145:ITSO-CLS4:admin>svctask mkvdiskhostmap -host Tiger volume_B Virtual Disk to Host map, id [2], successfully created IBM_2145:ITSO-CLS4:admin>svctask mkvdiskhostmap -host Tiger volume_C Virtual Disk to Host map, id [1], successfully created This command assigns volume_B and volume_C to host Tiger as shown in Example 9-62.
Example 9-62 svcinfo lshostvdiskmap -delim, command
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap -delim , id,name,SCSI_id,vdisk_id,vdisk_name,wwpn,vdisk_UID 1,Tiger,2,1,volume_B,210000E08B892BCD,60050768018301BF2800000000000001 1,Tiger,1,2,volume_C,210000E08B892BCD,60050768018301BF2800000000000002 Assigning a specific LUN ID to a volume: The optional -scsi scsi_num parameter can help assign a specific LUN ID to a volume that is to be associated with a given host. The default (if nothing is specified) is to increment based on what is already assigned to the host. Be aware that certain HBA device drivers stop when they find a gap in the SCSI LUN IDs. For example: Volume 1 is mapped to Host 1 with SCSI LUN ID 1. Volume 2 is mapped to Host 1 with SCSI LUN ID 2. Volume 3 is mapped to Host 1 with SCSI LUN ID 4. When the device driver scans the HBA, it might stop after discovering Volumes 1 and 2, because there is no SCSI LUN mapped with ID 3. Important: Ensure that the SCSI LUN ID allocation is contiguous. It is not possible to map a volume to a host more than one time at separate LUNs (Example 9-63).
Example 9-63 svctask mkvdiskhostmap
IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Siam volume_A Virtual Disk to Host map, id [0], successfully created This command maps the volume called volume_A to the host called Siam. At this point, you have completed all tasks that are required to assign a volume to an attached host.
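To honor the contiguity advice above when scripting explicit -scsi values, a mapping tool can always pick the lowest unused SCSI LUN ID for the host, filling any gap before extending the range. An illustrative sketch (the cluster's own auto-assignment behaves similarly, but this is not its code):

```python
# Sketch: choose the next SCSI LUN ID for a new volume mapping so that
# the IDs stay contiguous, per the warning that certain HBA device
# drivers stop scanning when they find a gap in the SCSI LUN IDs.
def next_scsi_id(existing_ids: set) -> int:
    candidate = 0
    while candidate in existing_ids:
        candidate += 1
    return candidate  # lowest unused ID: fills a gap before extending

print(next_scsi_id({1, 2}))     # 0, filling the gap below the existing IDs
print(next_scsi_id({0, 1, 2}))  # 3, extending the contiguous range
```

The chosen value would then be passed as the -scsi parameter of svctask mkvdiskhostmap.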
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap -delim , Siam id,name,SCSI_id,vdisk_id,vdisk_name,wwpn,vdisk_UID 3,Siam,0,0,volume_A,210000E08B18FF8A,60050768018301BF280000000000000C From this command, you can see that the host Siam has only one assigned volume called volume_A. The SCSI LUN ID is also shown, which is the ID by which the volume is presented to the host. If no host is specified, all defined host to volume mappings will be returned. Specifying the flag before the host name: Although the -delim flag normally comes at the end of the command string, in this case, you must specify this flag before the host name. Otherwise, it returns the following message: CMMVC6070E An invalid or duplicated parameter, unaccompanied argument, or incorrect argument sequence has been detected. Ensure that the input is as per the help.
IBM_2145:ITSO-CLS4:admin>svctask rmvdiskhostmap -host Tiger volume_D This command unmaps the volume called volume_D from the host called Tiger.
After you know these details you can issue the migratevdisk command, as shown in Example 9-66.
Example 9-66 svctask migratevdisk
IBM_2145:ITSO-CLS4:admin>svctask migratevdisk -mdiskgrp STGPool_DS4700_2 -vdisk volume_C

This command moves volume_C to the storage pool named STGPool_DS4700_2.

Tips: If insufficient extents are available within your target storage pool, you receive an error message. Make sure that the source and target storage pools have the same extent size. The optional -threads parameter allows you to assign a priority to the migration process. The default is 4, which is the highest priority setting. However, if you want the process to take a lower priority over other types of I/O, you can specify 3, 2, or 1.

You can run the svcinfo lsmigrate command at any time to see the status of the migration process (Example 9-67).
Example 9-67 svcinfo lsmigrate command
IBM_2145:ITSO-CLS4:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 12
migrate_source_vdisk_index 2
migrate_target_mdisk_grp 1
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS4:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 16
migrate_source_vdisk_index 2
migrate_target_mdisk_grp 1
max_thread_count 4
migrate_source_vdisk_copy_id 0

Progress: The progress is given as percent complete. If you receive no more replies, it means that the process has finished.
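When polling migrations from a script, you typically want just the percent-complete value. The following sketch (our own parsing, applied to a captured copy of the Example 9-67 output) extracts it with awk; an empty result would indicate that lsmigrate no longer reports the migration, that is, it has finished:

```shell
# Captured lsmigrate output from Example 9-67; the parsing is our own sketch.
output='migrate_type MDisk_Group_Migration
progress 12
migrate_source_vdisk_index 2
migrate_target_mdisk_grp 1
max_thread_count 4
migrate_source_vdisk_copy_id 0'

# Pull out the percent-complete value from the "progress" line.
progress=$(printf '%s\n' "$output" | awk '$1 == "progress" { print $2 }')
printf '%s\n' "$progress"
```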
Both of the MDisks involved are reported as being image mode during the migration. If the migration is interrupted by a cluster recovery or by a cache problem, the migration resumes after the recovery completes. Example 9-68 shows an example of the command.
Example 9-68 svctask migratetoimage
IBM_2145:ITSO-CLS4:admin>svctask migratetoimage -vdisk volume_A -mdisk mdisk8 -mdiskgrp STGPool_Image

In this example, you migrate the data from volume_A onto mdisk8, and the MDisk must be put into the STGPool_Image storage pool.
Assuming your operating system supports it, you can use the svctask shrinkvdisksize command to decrease the capacity of a given volume. Example 9-69 shows an example of this command.
Example 9-69 svctask shrinkvdisksize
IBM_2145:ITSO-CLS4:admin>svctask shrinkvdisksize -size 44 -unit gb volume_A

This command shrinks a volume called volume_A from a total size of 80 GB, by 44 GB, to a new total size of 36 GB.
IBM_2145:ITSO-CLS4:admin>svcinfo lsmdiskmember mdisk1
id copy_id
0  0
2  0
3  0
4  0
5  0

This command displays a list of all of the volume IDs that correspond to the volume copies that use mdisk1. To correlate the IDs displayed in this output to volume names, we can run the svcinfo lsvdisk command, which we discuss in more detail in 9.5, Working with volumes on page 458.
IBM_2145:ITSO-CLS4:admin>svcinfo lsvdisk -filtervalue mdisk_grp_name=STGPool_DS4700_1 -delim ,
id,name,IO_group_id,IO_group_name,status,mdisk_grp_id,mdisk_grp_name,capacity,type,FC_id,FC_name,RC_id,RC_name,vdisk_UID,fc_map_count,copy_count,fast_write_state,se_copy_count
2,Volume_mirrored,0,io_grp0,online,2,STGPool_DS4700_1,1.00GB,striped,,,,,60050768018281BEE000000000000004,0,1,empty,0
IBM_2145:ITSO-CLS4:admin>svcinfo lsvdiskmember 0
id
4
5
6
7

If you want to know more about these MDisks, you can run the svcinfo lsmdisk command, as explained in 9.2, Working with managed disks and disk controller systems on page 441 (using the ID displayed in Example 9-72 rather than the name).
9.5.20 Showing from which storage pool a volume has its extents
Use the svcinfo lsvdisk command as shown in Example 9-73 to show to which storage pool a specific volume belongs.
Example 9-73 svcinfo lsvdisk command: storage pool name
IBM_2145:ITSO-CLS4:admin>svcinfo lsvdisk vdisk_D
id 3
name vdisk_D
IO_group_id 1
IO_group_name io_grp1
status online
mdisk_grp_id 0
mdisk_grp_name STGPool_DS4700_1
capacity 80.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000003
throttling 0
preferred_node_id 6
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name STGPool_DS4700_1
type striped
mdisk_id
mdisk_name
Implementing the IBM System Storage SAN Volume Controller V6.1
fast_write_state empty
used_capacity 80.00GB
real_capacity 80.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize

To learn more about these storage pools, you can run the svcinfo lsmdiskgrp command, as explained in 9.2.10, Working with a storage pool on page 447.
IBM_2145:ITSO-CLS4:admin>svcinfo lsvdiskhostmap -delim , volume_B
id,name,SCSI_id,host_id,host_name,wwpn,vdisk_UID
1,vdisk_B,2,1,Nile,210000E08B892BCD,60050768018301BF2800000000000001
1,vdisk_B,2,1,Nile,210000E08B89B8C0,60050768018301BF2800000000000001

This command shows the host or hosts to which the volume_B volume was mapped. It is normal to see duplicate entries, because there are multiple paths between the cluster and the host. To be sure that the operating system on the host sees the disk only one time, you must install and configure a multipath software application, such as the IBM Subsystem Device Driver (SDD).

Specifying the -delim flag: Although the optional -delim flag normally comes at the end of the command string, in this case, you must specify this flag before the volume name. Otherwise, the command does not return any data.
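Because each path produces its own row, counting rows overstates the number of volumes. The vdisk_UID column identifies the underlying volume uniquely, so deduplicating on it gives the true count. This is our own post-processing sketch, applied to the duplicate rows shown above:

```shell
# Duplicate rows from the lsvdiskhostmap example (two paths, one volume);
# deduplicate by the vdisk_UID column to count distinct volumes.
output='id,name,SCSI_id,host_id,host_name,wwpn,vdisk_UID
1,vdisk_B,2,1,Nile,210000E08B892BCD,60050768018301BF2800000000000001
1,vdisk_B,2,1,Nile,210000E08B89B8C0,60050768018301BF2800000000000001'

unique_uids=$(printf '%s\n' "$output" | awk -F, 'NR > 1 { print $7 }' | sort -u)
count=$(printf '%s\n' "$unique_uids" | grep -c .)
printf 'distinct volumes: %s\n' "$count"
```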
IBM_2145:ITSO-CLS4:admin>svcinfo lshostvdiskmap -delim , Siam
id,name,SCSI_id,vdisk_id,vdisk_name,wwpn,vdisk_UID
3,Siam,0,5,MM_DB_Pri,210000E08B18FF8A,60050768018301BF2800000000000005
3,Siam,1,4,MM_DBLog_Pri,210000E08B18FF8A,60050768018301BF2800000000000004
3,Siam,2,6,MM_App_Pri,210000E08B18FF8A,60050768018301BF2800000000000006

This command shows which volumes are mapped to the host called Siam.

Specifying the -delim flag: Although the optional -delim flag normally comes at the end of the command string, in this case you must specify this flag before the host name. Otherwise, the command does not return any data.
DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 60050768018301BF2800000000000005
============================================================================
Path#   Adapter/Hard Disk             State   Mode    Select  Errors
0       Scsi Port2 Bus0/Disk1 Part0   OPEN    NORMAL  20      0
1       Scsi Port3 Bus0/Disk1 Part0   OPEN    NORMAL  2343    0

DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 60050768018301BF2800000000000004
============================================================================
Path#   Adapter/Hard Disk             State   Mode    Select  Errors
0       Scsi Port2 Bus0/Disk2 Part0   OPEN    NORMAL  2335    0
1       Scsi Port3 Bus0/Disk2 Part0   OPEN    NORMAL  0       0

DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 60050768018301BF2800000000000006
============================================================================
Path#   Adapter/Hard Disk             State   Mode    Select  Errors
0       Scsi Port2 Bus0/Disk3 Part0   OPEN    NORMAL  2331    0
1       Scsi Port3 Bus0/Disk3 Part0   OPEN    NORMAL  0       0

State: In Example 9-76, the state of each path is OPEN. Sometimes you will see the state CLOSED. This does not necessarily indicate a problem, because it might be a result of the path's processing stage.

2. Run the svcinfo lshostvdiskmap command to return a list of all assigned volumes (Example 9-77).
Example 9-77 svcinfo lshostvdiskmap IBM_2145:ITSO-CLS4:admin>svcinfo lshostvdiskmap -delim , Siam id,name,SCSI_id,vdisk_id,vdisk_name,wwpn,vdisk_UID 3,Siam,0,5,MM_DB_Pri,210000E08B18FF8A,60050768018301BF2800000000000005 3,Siam,1,4,MM_DBLog_Pri,210000E08B18FF8A,60050768018301BF2800000000000004 3,Siam,2,6,MM_App_Pri,210000E08B18FF8A,60050768018301BF2800000000000006
Look for the disk serial number that matches your datapath query device output. This host was defined in our SVC as Siam. 3. Run the svcinfo lsvdiskmember vdiskname command for a list of the MDisk or MDisks that make up the specified volume (Example 9-78).
Example 9-78 svcinfo lsvdiskmember IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskmember MM_DBLog_Pri id 0 1 2 3 4 10 11 13 15 16 17
4. Query the MDisks with the svcinfo lsmdisk mdiskID command to find their controller and LUN number information, as shown in Example 9-79. The output displays the controller name and the controller LUN ID to help you (provided you gave your controller a unique name, such as a serial number) track back to a LUN within the disk subsystem.
Example 9-79 svcinfo lsmdisk command

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk 3
id 3
name mdisk3
status online
mode managed
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 36.0GB
quorum_index
block_size 512
controller_name DS4500
ctrl_type 4
ctrl_WWNN 200400A0B8174431
controller_id 0
path_count 4
max_path_count 4
ctrl_LUN_# 0000000000000003
UID 600a0b8000174431000000e44713575400000000000000000000000000000000
preferred_WWPN 200400A0B8174433
active_WWPN 200400A0B8174433
You can create your own customized scripts to automate a large number of tasks for completion at a variety of times and run them through the CLI. In large SAN environments where scripting with svctask commands is used, we suggest keeping the scripting as simple as possible, because fallback, documentation, and verification of a successful script prior to execution are harder to manage in a large SAN environment. In this section we present an overview of how to automate various tasks by creating scripts using the IBM System Storage SAN Volume Controller (SVC) command-line interface (CLI).
Perform logging
command. You can download it from the SVC documentation page for each SVC code level at this website:

http://www-947.ibm.com/support/entry/portal/Documentation/Hardware/System_Storage/Storage_software/Storage_virtualization/SAN_Volume_Controller_%282145%29

Performing logging

When using the CLI, not all commands provide a response to determine the status of the invoked command. Therefore, always create checks that can be logged for monitoring and troubleshooting purposes.
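One way to build such checks is a small wrapper that records each command, its output, and its exit status. The helper name `log_run` and the log location are our own choices for this sketch; a real script would wrap the plink or ssh invocation instead of the local stand-in used here:

```shell
# Hypothetical logging helper: record each command, its output, and its
# exit status so that script runs can be audited afterward.
LOG=$(mktemp)

log_run() {
    printf 'CMD: %s\n' "$*" >>"$LOG"
    out=$("$@" 2>&1)
    rc=$?
    printf 'OUT: %s\nRC: %d\n' "$out" "$rc" >>"$LOG"
    return $rc
}

# Local stand-in for a real CLI invocation such as:
#   log_run plink ITSO-CLS4 svcinfo lscluster
log_run echo "svcinfo lscluster"
```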
The private key for authentication (for example, icat.ppk). This key is the private key that you have already created. This parameter is set under the Connection > SSH > Auth category, as shown in Figure 9-4 on page 484.
The IP address of the SVC cluster. This parameter is set under the Session category as shown in Figure 9-5.
A session name. Our example uses ITSO-CLS4. Our PuTTY version is 0.60.
To use this predefined PuTTY session, use the following syntax:

plink ITSO-CLS4

If a predefined PuTTY session is not used, use this syntax:

plink admin@<your cluster ip add> -i "C:\DirectoryPath\KeyName.PPK"

Various limited scripts can be run directly in the SVC shell. Examples can be found at the following websites:

http://www.db94.net/wtfwiki/pmwiki.php?n=Main.HandySVCMiniScripts
http://www.db94.net/wtfwiki/pmwiki.php?n=Main.SVCMiniscriptStorage
http://www.db94.net/wtfwiki/pmwiki.php?n=Main.SVCMiniscriptTesting

Additionally, IBM provides a suite of scripting tools that is based on Perl. You can download these scripting tools from this website:

http://www.alphaworks.ibm.com/tech/svctools
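In a shell script, the plink invocation can be hidden behind a small wrapper function so that every CLI call goes through one place. The function name `svc_cli` is our own; ITSO-CLS4 is the saved PuTTY session from the text, and the call is guarded so the sketch is harmless on a workstation without plink:

```shell
# Hypothetical wrapper: run one SVC CLI command over the saved PuTTY
# session ITSO-CLS4 and pass its output through.
svc_cli() {
    plink ITSO-CLS4 "$@"
}

# Only attempt a real call when plink is actually installed.
if command -v plink >/dev/null 2>&1; then
    svc_cli svcinfo lscluster -delim ,
fi
```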
Help: You can also use -h instead of -?, for example, svcinfo -h or svctask commandname -h. If you look at the syntax of the command by typing svcinfo command name -?, you often see -filter listed as a parameter. Be aware that the correct parameter is -filtervalue. Tip: You can use the up and down arrow keys on your keyboard to recall commands that were recently issued. Then, you can use the left and right, Backspace, and Delete keys to edit commands before you resubmit them.
Chapter 9. SAN Volume Controller operations using the command-line interface
Filtering
To reduce the output that is displayed by an svcinfo command, you can specify a number of filters, depending on which svcinfo command you are running. To see which filters are available, type the command followed by the -filtervalue? flag, as shown in Example 9-80.
Example 9-80 svcinfo lsvdisk -filtervalue? command
IBM_2145:ITSO-CLS4:admin>svcinfo lsvdisk -filtervalue?
Filters for this view are:
name
id
IO_group_id
IO_group_name
status
mdisk_grp_name
mdisk_grp_id
capacity
type
FC_id
FC_name
RC_id
RC_name
vdisk_name
vdisk_id
vdisk_UID
fc_map_count
copy_count
fast_write_state
se_copy_count
When you know the filters, you can be more selective in generating output:
- Multiple filters can be combined to create specific searches.
- You can use an asterisk (*) as a wildcard when using names.
- When capacity is used, the units must also be specified using -u b | kb | mb | gb | tb | pb.

For example, if we issue the svcinfo lsvdisk command with no filters but with the -delim parameter, we see the output that is shown in Example 9-81.
Example 9-81 svcinfo lsvdisk command: No filters
IBM_2145:ITSO-CLS4:admin>svcinfo lsvdisk -delim ,
id,name,IO_group_id,IO_group_name,status,mdisk_grp_id,mdisk_grp_name,capacity,type,FC_id,FC_name,RC_id,RC_name,vdisk_UID,fc_map_count,copy_count,fast_write_state,se_copy_count
0,Volume_measured_only,0,io_grp0,online,1,ITSO-Storage_Pool-Single_Tier,10.00GB,striped,,,,,60050768018281BEE000000000000000,0,1,not_empty,0
1,Volume_EasyTier_active,0,io_grp0,online,0,ITSO-Storage_Pool-Multi_Tier,10.00GB,striped,,,,,60050768018281BEE000000000000003,0,1,not_empty,0
2,Volume_mirrored,0,io_grp0,online,2,STGPool_DS4700_1,1.00GB,striped,,,,,60050768018281BEE000000000000004,0,1,empty,0
3,Volume_new,0,io_grp0,online,3,STGPool_DS4700_2,1.00GB,striped,,,,,60050768018281BEE000000000000005,0,1,empty,0

Tip: The -delim parameter condenses the output and separates data fields with the specified character (a comma in this example) instead of wrapping each field onto its own line. This parameter is normally used in cases where you need to get reports during script execution.

If we now add a filter to our svcinfo command (mdisk_grp_name), we can reduce the output, as shown in Example 9-82.
Example 9-82 svcinfo lsvdisk command: With a filter
IBM_2145:ITSO-CLS4:admin>svcinfo lsvdisk -filtervalue mdisk_grp_name=STGPool_DS4700_1 -delim ,
id,name,IO_group_id,IO_group_name,status,mdisk_grp_id,mdisk_grp_name,capacity,type,FC_id,FC_name,RC_id,RC_name,vdisk_UID,fc_map_count,copy_count,fast_write_state,se_copy_count
2,Volume_mirrored,0,io_grp0,online,2,STGPool_DS4700_1,1.00GB,striped,,,,,60050768018281BEE000000000000004,0,1,empty,0
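The asterisk wildcard behaves like shell-style (glob) name matching. The SVC applies it server-side with -filtervalue (for example, -filtervalue name=Volume*); the local sketch below, using the volume names from Example 9-81, only illustrates the matching semantics and is our own addition:

```shell
# Volume names from Example 9-81; the case pattern mimics glob-style
# wildcard matching locally (the SVC applies the wildcard server-side).
names='Volume_measured_only
Volume_EasyTier_active
Volume_mirrored
vdisk_D'

matched=$(printf '%s\n' "$names" | while IFS= read -r n; do
    case $n in
        Volume_*) printf '%s\n' "$n" ;;
    esac
done)
printf '%s\n' "$matched"
```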
IBM_2145:ITSO-CLS4:admin>svcinfo lscluster
id               name      location partnership      bandwidth id_alias
0000020060A06FB8 ITSO-CLS4 local                               0000020060A06FB8
000002006440A068 ITSO-CLS1 remote   fully_configured 20        000002006440A068
0000020061806FCA ITSO-CLS2 remote   fully_configured 20        0000020061806FCA
All command parameters are optional; however, you must specify at least one parameter.

Important: Be aware of the following points:
- Only a user with administrator authority can change the password.
- As mentioned, if the cluster IP address is changed, the open command-line shell closes during the processing of the command and you must reconnect to the new IP address.
- Changing the speed on a running cluster breaks I/O service to the attached hosts. Before changing the fabric speed, stop I/O from the active hosts and force these hosts to flush any cached data by unmounting volumes (for UNIX host types) or by removing drive letters (for Windows host types). Certain hosts might need to be rebooted to detect the new fabric speed.
IBM_2145:ITSO-CLS1:admin>svctask chcluster -servicepwd
Enter a value for -password :
Enter password:
Confirm password:
IBM_2145:ITSO-CLS1:admin>

See 9.11.1, Managing users using the CLI on page 500 for more information about managing users.
We configured our nodes to use the primary and secondary Ethernet ports for iSCSI and to contain the cluster IP. When we configured our nodes to be used with iSCSI, we did not affect our cluster IP. The cluster IP is changed as shown in 9.8.2, Changing cluster settings on page 487. It is important to know that the relationship between IP addresses and physical connections does not have to be one to one: each physical port on each node can carry up to four addresses (two IPv4 plus two IPv6).

Tip: When reconfiguring IP ports, be aware that already configured iSCSI connections will need to reconnect if changes are made to the IP addresses of the nodes.

There are two ways to perform iSCSI authentication (CHAP): for the whole cluster or per host connection. Example 9-85 shows configuring CHAP for the whole cluster.
Example 9-85 Setting a CHAP secret for the entire cluster to passw0rd
IBM_2145:ITSO-CLS1:admin>svctask chcluster -iscsiauthmethod chap -chapsecret passw0rd
IBM_2145:ITSO-CLS1:admin>

In our scenario we have a cluster IP of 9.64.210.64, which is not affected during our configuration of the node IP addresses. We start by listing our ports using the svcinfo lsportip command. We see that we have two ports per node with which to work. Both ports can have two IP addresses that can be used for iSCSI. We configure the secondary port in both nodes in our I/O Group as shown in Example 9-86.
Example 9-86 Configuring secondary Ethernet port on SVC nodes
While both nodes are online, each node will be available to iSCSI hosts on the IP address that we have configured. Note that iSCSI failover between nodes is enabled automatically. Therefore, if a node goes offline for any reason, its partner node in the I/O Group will become available on the failed node's port IP address. This ensures that hosts will continue to be able to perform I/O. The svcinfo lsportip command will display which port IP addresses are currently active on each node.
List the IP address of the cluster by issuing the svcinfo lsclusterip command. Modify the IP address by issuing the svctask chclusterip command. You can either specify a static IP address or have the system assign a dynamic IP address, as shown in Example 9-87.
Example 9-87 svctask chclusterip -clusterip
IBM_2145:ITSO-CLS1:admin>svctask chclusterip -clusterip 10.20.133.5 -gw 10.20.135.1 -mask 255.255.255.0 -port 1

This command changes the current IP address of the cluster to 10.20.133.5.

Important: If you specify a new cluster IP address, the existing communication with the cluster through the CLI is broken and the PuTTY application automatically closes. You must relaunch the PuTTY application and point it to the new IP address, but your SSH key will still work.
At this point, we have completed the tasks that are required to change the IP addresses (cluster and service) of the SVC environment.
IBM_2145:ITSO-CLS1:admin>svcinfo showtimezone
id timezone
522 UTC

2. To find the time zone code that is associated with your time zone, enter the svcinfo lstimezones command, as shown in Example 9-89. A truncated list is provided for this example. If this setting is correct (for example, 522 UTC), go to Step 4. If not, continue with Step 3.
Example 9-89 svcinfo lstimezones command
IBM_2145:ITSO-CLS1:admin>svcinfo lstimezones
id timezone
.
.
507 Turkey
508 UCT
509 Universal
510 US/Alaska
511 US/Aleutian
512 US/Arizona
513 US/Central
514 US/Eastern
515 US/East-Indiana
516 US/Hawaii
517 US/Indiana-Starke
518 US/Michigan
519 US/Mountain
520 US/Pacific
521 US/Samoa
522 UTC
.
.

3. Now that you know which time zone code is correct for you, set the time zone by issuing the svctask settimezone command (Example 9-90).
Example 9-90 svctask settimezone command
IBM_2145:ITSO-CLS1:admin>svctask settimezone -timezone 520

4. Set the cluster time by issuing the svctask setclustertime command (Example 9-91).
Example 9-91 svctask setclustertime command
IBM_2145:ITSO-CLS1:admin>svctask setclustertime -time 061718402008

The format of the time is MMDDHHmmYYYY. You have completed the necessary tasks to set the cluster time zone and time.
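When setting the cluster time from a script, the MMDDHHmmYYYY string can be generated rather than typed by hand. A minimal sketch, assuming GNU date (the -d option is not portable to all systems):

```shell
# Build the MMDDHHmmYYYY string for 17 June 2008, 18:40 UTC, matching the
# value used in Example 9-91. Assumes GNU date for the -d option.
ts=$(date -u -d '2008-06-17 18:40' +%m%d%H%M%Y)
printf '%s\n' "$ts"
```

The resulting string would then be passed to svctask setclustertime -time.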
Use the svctask startstats command to start the collection of statistics, as shown in Example 9-92.
Example 9-92 svctask startstats command
IBM_2145:ITSO-CLS1:admin>svctask startstats -interval 15

The interval that we specify (minimum 1, maximum 60) is in minutes. This command starts statistics collection and gathers data at 15-minute intervals.

Statistics collection: To verify that statistics collection is set, display the cluster properties again, as shown in Example 9-93.
Example 9-93 Statistics collection status and frequency
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster ITSO-CLS1
statistics_status on
statistics_frequency 15

Note that the output has been shortened for easier reading.

At this point, we have completed the required tasks to start statistics collection on the cluster.
IBM_2145:ITSO-CLS1:admin>svctask stopstats

This command stops the statistics collection. Do not expect any prompt message from this command. To verify that the statistics collection is stopped, display the cluster properties again, as shown in Example 9-95.
Example 9-95 Statistics collection status and frequency
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster ITSO-CLS1
statistics_status off
statistics_frequency 15

Note that the output has been shortened for easier reading.

Notice that the interval parameter is not changed, but the status is off. At this point, we have completed the required tasks to stop statistics collection on our cluster.
IBM_2145:ITSO-CLS1:admin>svctask stopcluster
Are you sure that you want to continue with the shut down?

This command shuts down the SVC cluster. All data is flushed to disk before the power is removed. At this point, you lose administrative contact with your cluster, and the PuTTY application automatically closes.

2. You will be presented with the following message:

Warning: Are you sure that you want to continue with the shut down? Ensure that you have stopped all FlashCopy mappings, Metro Mirror (Remote Copy) relationships, data migration operations, and forced deletions before continuing.

Entering y executes the command; no feedback is then displayed. Entering anything other than y or Y results in the command not executing; again, no feedback is displayed.

Important: Before shutting down a cluster, ensure that all I/O operations destined for this cluster are stopped, because you will lose access to all volumes being provided by this cluster. Failure to do so can result in failed I/O operations being reported to the host operating systems. Begin the process of quiescing all I/O to the cluster by stopping the applications on the hosts that are using the volumes provided by the cluster.
3. We have completed the tasks that are required to shut down the cluster. To shut down the uninterruptible power supply units, press the power button on the front panel of each uninterruptible power supply unit.

Restarting the cluster: To restart the cluster, you must first restart the uninterruptible power supply units by pressing the power button on their front panels. Then press the power on button on the service panel of one of the nodes within the cluster. After the node is fully booted (for example, displaying Cluster: on line 1 and the cluster name on line 2 of the panel), you can start the other nodes in the same way. As soon as all of the nodes are fully booted, you can reestablish administrative contact using PuTTY, and your cluster will be fully operational again.
9.9 Nodes
This section details the tasks that can be performed at an individual node level.
IBM_2145:ITSO-CLS1:admin>svcinfo lsnode -delim ,
id,name,UPS_serial_number,WWNN,status,IO_group_id,IO_group_name,config_node,UPS_unique_id,hardware
1,node1,1000739007,50050768010037E5,online,0,io_grp0,yes,20400001C3240007,8G4
2,node2,1000739004,50050768010037DC,online,0,io_grp0,no,20400001C3240004,8G4
3,node3,100066C107,5005076801001D1C,online,1,io_grp1,no,20400001864C1007,8G4
4,node4,100066C108,50050768010027E2,online,1,io_grp1,no,20400001864C1008,8G4
IBM_2145:ITSO-CLS1:admin>svcinfo lsnode node1
id 1
name node1
UPS_serial_number 1000739007
WWNN 50050768010037E5
status online
IO_group_id 0
IO_group_name io_grp0
partner_node_id 2
partner_node_name node2
config_node yes
UPS_unique_id 20400001C3240007
port_id 50050768014037E5
port_status active
port_speed 4Gb
port_id 50050768013037E5
port_status active
port_speed 4Gb
port_id 50050768011037E5
port_status active
port_speed 4Gb
port_id 50050768012037E5
port_status active
port_speed 4Gb
hardware 8G4
Tip: The node that you want to add must have a separate uninterruptible power supply unit serial number from the uninterruptible power supply unit on the first node.
Example 9-100 svcinfo lsnode command
id,name,UPS_serial_number,WWNN,status,IO_group_id,IO_group_name,config_node,UPS_un ique_id,hardware,iscsi_name,iscsi_alias 1,ITSO_CLS1_0,100089J040,50050768010059E7,online,0,io_grp0,yes,2040000209680100,8G 4,iqn.1986-03.com.ibm:2145.ITSO_CLS1_0.ITSO_CLS1_0_N0, Now that we know the available nodes, we can use the svctask addnode command to add the node to the SVC cluster configuration. Example 9-101 shows the command to add a node to the SVC cluster.
Example 9-101 svctask addnode (wwnodename) command
IBM_2145:ITSO-CLS1:admin>svctask addnode -wwnodename 50050768010027E2 -name Node2 -iogrp io_grp0
Node, id [2], successfully added

This command adds the candidate node with the wwnodename of 50050768010027E2 to the I/O Group called io_grp0.
We used the -wwnodename parameter (50050768010027E2). However, we can also use the -panelname parameter (108283) instead, as shown in Example 9-102. If standing in front of the node, it is easier to read the panel name than it is to get the WWNN.
Example 9-102 svctask addnode (panelname) command
IBM_2145:ITSO-CLS1:admin>svctask addnode -panelname 108283 -name Node2 -iogrp io_grp0

We also used the optional -name parameter (Node2). If you do not provide the -name parameter, the SVC automatically generates the name nodeX (where X is the ID sequence number that is assigned internally by the SVC).

Name: If you want to provide a name, you can use letters A to Z and a to z, numbers 0 to 9, the dash (-), and the underscore (_). The name can be between one and 63 characters in length. However, the name cannot start with a number, a dash, or the word node (because this prefix is reserved for SVC assignment only).

If the svctask addnode command returns no information, your second node is powered on, and the zones are correctly defined, then preexisting cluster configuration data might be stored in the node. If you are sure that this node is not part of another active SVC cluster, you can use the service panel to delete the existing cluster information. After this action is complete, reissue the svcinfo lsnodecandidate command and you will see the node listed.
IBM_2145:ITSO-CLS1:admin>svctask chnode -name ITSO_CLS1_Node1 4

This command renames node ID 4 to ITSO_CLS1_Node1.

Name: The chnode command specifies the new name first. You can use letters A to Z and a to z, numbers 0 to 9, the dash (-), and the underscore (_). The name can be between one and 63 characters in length. However, the name cannot start with a number, a dash, or the word node (because this prefix is reserved for SVC assignment only).
IBM_2145:ITSO-CLS1:admin>svctask rmnode node4

This command removes node4 from the SVC cluster. Because node4 was also the configuration node, the SVC transfers the configuration node responsibilities to a surviving node within the I/O Group. Unfortunately, the PuTTY session cannot be dynamically passed to the surviving node. Therefore, the PuTTY application loses communication and closes automatically.
We must restart the PuTTY application to establish a secure session with the new configuration node. Important: If this node is the last node in an I/O Group, and there are volumes still assigned to the I/O Group, the node is not deleted from the cluster. If this node is the last node in the cluster, and the I/O Group has no volumes remaining, the cluster is destroyed and all virtualization information is lost. Any data that is still required must be backed up or migrated prior to destroying the cluster.
IBM_2145:ITSO-CLS1:admin>svctask stopcluster -node n4
Are you sure that you want to continue with the shut down?

This command shuts down node n4 in a graceful manner. When this node has been shut down, the other node in the I/O Group will destage the contents of its cache and will go into write-through mode until the node is powered up and rejoins the cluster.

Important: There is no need to stop FlashCopy mappings, Remote Copy relationships, and data migration operations. The remaining node will handle these activities, but be aware that it is now a single point of failure. If this is the last node in an I/O Group, all access to the volumes in the I/O Group will be lost. Verify that you want to shut down this node before executing this command. You must specify the -force flag.

By reissuing the svcinfo lsnode command (Example 9-106), we can see that the node is now offline.
Example 9-106 svcinfo lsnode command
IBM_2145:ITSO-CLS1:admin>svcinfo lsnode -delim ,
id,name,UPS_serial_number,WWNN,status,IO_group_id,IO_group_name,config_node,UPS_unique_id,hardware
1,n1,1000739007,50050768010037E5,online,0,io_grp0,yes,20400001C3240007,8G4
2,n2,1000739004,50050768010037DC,online,0,io_grp0,no,20400001C3240004,8G4
3,n3,100066C107,5005076801001D1C,online,1,io_grp1,no,20400001864C1007,8G4
6,n4,100066C108,0000000000000000,offline,1,io_grp1,no,20400001864C1008,unknown
IBM_2145:ITSO-CLS1:admin>svcinfo lsnode n4
CMMVC5782E The object specified is offline.

Restart: To restart the node manually, press the power on button on the service panel of the node.
At this point we have completed the tasks that are required to view, add, delete, rename, and shut down a node within an SVC environment.
vdisk_count host_count
3           3
4           3
0           2
0           2
0           0
As shown, the SVC predefines five I/O Groups. In a four-node cluster (similar to our example), only two I/O Groups are actually in use. The other I/O Groups (io_grp2 and io_grp3) are for a six- or eight-node cluster. The recovery I/O Group is a temporary home for volumes when all nodes in the I/O Group that normally owns them have suffered multiple failures. This design allows us to move the volumes to the recovery I/O Group and then into a working I/O Group. Note that while temporarily assigned to the recovery I/O Group, I/O access is not possible.
IBM_2145:ITSO-CLS1:admin>svctask chiogrp -name io_grpA io_grp1

This command renames the I/O Group io_grp1 to io_grpA.

Name: The chiogrp command specifies the new name first. If you want to provide a name, you can use letters A to Z, letters a to z, numbers 0 to 9, the dash (-), and the underscore (_). The name can be between one and 63 characters in length. However, the name cannot start with a number, a dash, or the word iogrp (because this prefix is reserved for SVC assignment only).

To see whether the renaming was successful, issue the svcinfo lsiogrp command again to see the change. At this point, we have completed the tasks that are required to rename an I/O Group.
IBM_2145:ITSO-CLS1:admin>svctask addhostiogrp -iogrp 1 Kanaga

Parameters:

-iogrp iogrp_list | -iogrpall
Specify a list of one or more I/O Groups that must be mapped to the host. This parameter is mutually exclusive with the -iogrpall option. The -iogrpall option specifies that all the I/O Groups must be mapped to the specified host. This parameter is mutually exclusive with -iogrp.

-host host_id_or_name
Identify the host, either by ID or name, to which the I/O Groups must be mapped.

Use the svctask rmhostiogrp command to unmap a specific host from a specific I/O Group, as shown in Example 9-110.
Example 9-110 svctask rmhostiogrp command
IBM_2145:ITSO-CLS1:admin>svctask rmhostiogrp -iogrp 0 Kanaga

Parameters:

-iogrp iogrp_list | -iogrpall
Specify a list of one or more I/O Groups that must be unmapped from the host. This parameter is mutually exclusive with the -iogrpall option. The -iogrpall option specifies that all of the I/O Groups must be unmapped from the specified host. This parameter is mutually exclusive with -iogrp.

-force
If the removal of a host to I/O Group mapping will result in the loss of volume-to-host mappings, the command fails if the -force flag is not used. The -force flag, however, overrides this behavior and forces the deletion of the host to I/O Group mapping.

host_id_or_name
Identify the host, either by ID or name, to which the I/O Groups must be mapped.
To list all of the host objects that are mapped to the specified I/O Group, use the svcinfo lsiogrphost command, as shown in Example 9-112.
Example 9-112 svcinfo lsiogrphost command
IBM_2145:ITSO-CLS1:admin>svcinfo lsiogrphost io_grp1
id name
1 Nile
2 Kanaga
3 Siam

In Example 9-113, iogrp_1 is the I/O Group name.
IBM_2145:ITSO-CLS2:admin>svcinfo lsusergrp
id name          role          remote
0  SecurityAdmin SecurityAdmin no
1  Administrator Administrator no
2  CopyOperator  CopyOperator  no
3  Service       Service       no
4  Monitor       Monitor       no
Example 9-114 is a simple example of creating a user. User John is added to the user group Monitor with the password m0nitor.
Example 9-114 svctask mkuser called John with password m0nitor
IBM_2145:ITSO-CLS1:admin>svctask mkuser -name John -usergrp Monitor -password m0nitor User, id [2], successfully created
Local users are users that are not authenticated by a remote authentication server. Remote users are users that are authenticated by a remote central registry server. The user groups already have a defined authority role, as listed in Table 9-2.
Table 9-2 Authority roles

Security admin (superusers):
   All commands.

Administrator (administrators that control the SVC):
   All commands except these svctask commands: chauthservice, mkuser, rmuser, chuser, mkusergrp, rmusergrp, chusergrp, and setpwdreset.

Copy operator (for users that control all of the copy functionality of the cluster):
   All svcinfo commands and the following svctask commands: prestartfcconsistgrp, startfcconsistgrp, stopfcconsistgrp, chfcconsistgrp, prestartfcmap, startfcmap, stopfcmap, chfcmap, startrcconsistgrp, stoprcconsistgrp, switchrcconsistgrp, chrcconsistgrp, startrcrelationship, stoprcrelationship, switchrcrelationship, chrcrelationship, and chpartnership.

Service (for users that perform service maintenance and other hardware tasks on the cluster):
   All svcinfo commands and the following svctask commands: applysoftware, setlocale, addnode, rmnode, cherrstate, writesernum, detectmdisk, includemdisk, clearerrlog, cleardumps, settimezone, stopcluster, startstats, stopstats, and settime.

Monitor:
   All svcinfo commands and the following svctask commands: finderr, dumperrlog, dumpinternallog, and chcurrentuser; and the svcconfig backup command.
IBM_2145:ITSO-CLS2:admin>svcinfo lsusergrp
id name          role          remote
0  SecurityAdmin SecurityAdmin no
1  Administrator Administrator no
2  CopyOperator  CopyOperator  no
3  Service       Service       no
4  Monitor       Monitor       no
To view our currently defined users and the user groups to which they belong, we use the svcinfo lsuser command, as shown in Example 9-116.
Example 9-116 svcinfo lsuser command
svctask dumperrlog
svctask dumpinternallog

The audit log contains approximately 1 MB of data, which can hold about 6,000 average-length commands. When this log is full, the cluster copies it to a new file in the /dumps/audit directory on the configuration node and resets the in-memory audit log.

To display entries from the audit log, use the svcinfo catauditlog -first 5 command to return a list of five in-memory audit log entries, as shown in Example 9-117.
Example 9-117 catauditlog command
IBM_2145:ITSO-CLS1:admin>svcinfo catauditlog -first 5 -delim ,
291,090904200329,superuser,10.64.210.231,0,,svctask mkvdiskhostmap -host 1 21
292,090904201238,admin,10.64.210.231,0,,svctask chvdisk -name swiss_cheese 21
293,090904204314,superuser,10.64.210.231,0,,svctask chhost -name ITSO_W2008 1
294,090904204314,superuser,10.64.210.231,0,,svctask chhost -mask 15 1
295,090904204410,admin,10.64.210.231,0,,svctask chvdisk -name SwissCheese 21
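Audit log entries such as those above are easy to post-process off the cluster. The sketch below parses the comma-delimited output; the field names we assign (sequence number, timestamp, user, source IP, result code, object ID, command) are an assumption inferred from the sample output, so verify them against the CLI documentation.

```python
from typing import Dict, List

def parse_catauditlog(lines: List[str]) -> List[Dict[str, str]]:
    """Split the output of 'svcinfo catauditlog -delim ,' into records.
    The command text itself may contain spaces (but no commas), so we
    split each line at most six times and keep the remainder whole."""
    records = []
    for line in lines:
        seq, ts, user, ip, result, obj_id, command = line.split(",", 6)
        records.append({
            "seq": seq, "timestamp": ts, "user": user, "ip": ip,
            "result": result, "object_id": obj_id, "command": command,
        })
    return records

# Two entries copied from the listing above.
sample = [
    "291,090904200329,superuser,10.64.210.231,0,,svctask mkvdiskhostmap -host 1 21",
    "295,090904204410,admin,10.64.210.231,0,,svctask chvdisk -name SwissCheese 21",
]
for rec in parse_catauditlog(sample):
    print(rec["seq"], rec["user"], rec["command"])
```

A script such as this one can, for example, report which user renamed a volume and when, without paging through the raw log on the configuration node.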
If you need to dump the contents of the in-memory audit log to a file on the current configuration node, use the svctask dumpauditlog command. This command does not provide any feedback; it simply returns you to the prompt. To obtain a list of the audit log dumps, use the svcinfo lsauditlogdumps command, as shown in Example 9-118.
Example 9-118 svctask dumpauditlog/svcinfo lsauditlogdumps command
Scenario description
We use the following scenario in both the command-line section and the GUI section. In this scenario, we want to FlashCopy the following volumes:

DB_Source   Database files
Log_Source  Database log files
App_Source  Application files
We create Consistency Groups to handle the FlashCopy of DB_Source and Log_Source, because data integrity must be maintained across those two volumes.
In our scenario, the application files are independent of the database, so we create a single FlashCopy mapping for App_Source. We make two FlashCopy targets each for DB_Source and Log_Source and, therefore, two Consistency Groups. Figure 9-6 shows the scenario.
IBM_2145:ITSO-CLS1:admin>svctask mkfcconsistgrp -name FCCG1
FlashCopy Consistency Group, id [1], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkfcconsistgrp -name FCCG2
FlashCopy Consistency Group, id [2], successfully created

In Example 9-120, we check the status of the Consistency Groups. Each Consistency Group has a status of empty.
Example 9-120 Checking the status
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
id name  status
1  FCCG1 empty
2  FCCG2 empty

If you want to change the name of a Consistency Group, you can use the svctask chfcconsistgrp command. Type svctask chfcconsistgrp -h for help with this command.
The background copy rate specifies the priority that is given to completing the copy. If 0 is specified, the copy does not proceed in the background. The default is 50.

Tip: A parameter is available to delete FlashCopy mappings automatically after the completion of a background copy (when the mapping reaches the idle_or_copied state):

svctask mkfcmap -autodelete

This option does not delete a mapping that is in a cascade with dependent mappings, because such a mapping cannot reach the idle_or_copied state.

In Example 9-121, the first FlashCopy mappings for DB_Source, Log_Source, and App_Source are created.
Example 9-121 Create the first FlashCopy mapping for DB_Source, Log_Source, and App_Source
IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source DB_Source -target DB_Target_1 -name DB_Map1 -consistgrp FCCG1
FlashCopy Mapping, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source Log_Source -target Log_Target_1 -name Log_Map1 -consistgrp FCCG1
FlashCopy Mapping, id [1], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source App_Source -target App_Target_1 -name App_Map1
FlashCopy Mapping, id [2], successfully created

Example 9-122 shows the commands to create a second FlashCopy mapping for the DB_Source and Log_Source volumes.
Example 9-122 Create additional FlashCopy mappings
IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source DB_Source -target DB_Target2 -name DB_Map2 -consistgrp FCCG2
FlashCopy Mapping, id [3], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source Log_Source -target Log_Target2 -name Log_Map2 -consistgrp FCCG2
FlashCopy Mapping, id [4], successfully created

Example 9-123 shows the result of these FlashCopy mappings. The status of each mapping is idle_or_copied.
Example 9-123 Check the result of Multiple Target FlashCopy mappings
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap
id name     source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id group_name status         progress copy_rate clean_progress incremental partner_FC_id partner_FC_name restoring
0  DB_Map1  0               DB_Source         6               DB_Target_1       1        FCCG1      idle_or_copied 0        50        100            off                                       no
1  Log_Map1 1               Log_Source        4               Log_Target_1      1        FCCG1      idle_or_copied 0        50        100            off                                       no
2  App_Map1 2               App_Source        3               App_Target_1                          idle_or_copied 0        50        100            off                                       no
3  DB_Map2  0               DB_Source         7               DB_Target_2       2        FCCG2      idle_or_copied 0        50        100            off                                       no
4  Log_Map2 1               Log_Source        5               Log_Target_2      2        FCCG2      idle_or_copied 0        50        100            off                                       no
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
id name  status
1  FCCG1 idle_or_copied
2  FCCG2 idle_or_copied
If you want to change the FlashCopy mapping, you can use the svctask chfcmap command. Type svctask chfcmap -h to get help with this command.
IBM_2145:ITSO-CLS1:admin>svctask prestartfcmap App_Map1
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap App_Map1
id 2
name App_Map1
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 3
target_vdisk_name App_Target_1
group_id
group_name
status prepared
progress 0
Chapter 9. SAN Volume Controller operations using the command-line interface
copy_rate 50
start_time
dependent_mappings 0
autodelete off
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
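The copy_rate 50 shown in the listing above translates to a fixed per-mapping background copy throughput. The sketch below reproduces the doubling scale that IBM publishes for the SVC (128 KBps at rates 1-10, doubling per band of ten up to 64 MBps at rates 91-100); treat the exact banding as an assumption to check against your code level's documentation.

```python
def background_copy_kbps(copy_rate: int) -> int:
    """Map an SVC background copy rate (0-100) to the per-mapping
    background copy throughput in KB per second. A rate of 0 means
    that no background copy takes place at all."""
    if not 0 <= copy_rate <= 100:
        raise ValueError("copy rate must be 0-100")
    if copy_rate == 0:
        return 0
    band = (copy_rate - 1) // 10      # bands 0..9, one per ten rate values
    return 128 * (2 ** band)          # 128 KBps, doubling per band

print(background_copy_kbps(50))    # the default rate used in this chapter
print(background_copy_kbps(100))   # the fastest band
```

This makes it easy to estimate how long the background copy of a given volume will take before you pick a rate with chfcmap.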
IBM_2145:ITSO-CLS1:admin>svctask prestartfcconsistgrp FCCG1
IBM_2145:ITSO-CLS1:admin>svctask prestartfcconsistgrp FCCG2
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp 1
id 1
name FCCG1
status prepared
autodelete off
FC_mapping_id 0
FC_mapping_name DB_Map1
FC_mapping_id 1
FC_mapping_name Log_Map1
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
id name  status
1  FCCG1 prepared
2  FCCG2 prepared
When the FlashCopy mapping is triggered, it enters the Copying state. How the copy proceeds depends on the background copy rate attribute of the mapping. If the mapping is set to 0 (NOCOPY), only data that is subsequently updated on the source is copied to the destination. We suggest this approach for a backup copy that is needed only while the mapping exists in the Copying state; if the copy is stopped, the destination is unusable. If you want to end up with a duplicate copy of the source at the destination, set the background copy rate greater than 0. This way, the system copies all of the data (even unchanged data) to the destination, and the mapping eventually reaches the idle_or_copied state. After the data is copied, you can delete the mapping and retain a usable point-in-time copy of the source at the destination. In Example 9-126, after the FlashCopy is started, App_Map1 changes to copying status.
Example 9-126 Start App_Map1
IBM_2145:ITSO-CLS1:admin>svctask startfcmap App_Map1
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap
id name     source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id group_name status   progress copy_rate clean_progress incremental partner_FC_id partner_FC_name restoring
0  DB_Map1  0               DB_Source         6               DB_Target_1       1        FCCG1      prepared 0        50        100            off                                       no
1  Log_Map1 1               Log_Source        4               Log_Target_1      1        FCCG1      prepared 0        50        100            off                                       no
2  App_Map1 2               App_Source        3               App_Target_1                          copying  0        50        100            off                                       no
3  DB_Map2  0               DB_Source         7               DB_Target_2       2        FCCG2      prepared 0        50        100            off                                       no
4  Log_Map2 1               Log_Source        5               Log_Target_2      2        FCCG2      prepared 0        50        100            off                                       no
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap App_Map1
id 2
name App_Map1
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 3
target_vdisk_name App_Target_1
group_id
group_name
status copying
progress 29
copy_rate 50
start_time 090826171647
dependent_mappings 0
autodelete off
clean_progress 100
clean_rate 50
incremental off
difference 100
IBM_2145:ITSO-CLS1:admin>svctask startfcconsistgrp FCCG1
IBM_2145:ITSO-CLS1:admin>svctask startfcconsistgrp FCCG2
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp 1
id 1
name FCCG1
status copying
autodelete off
FC_mapping_id 0
FC_mapping_name DB_Map1
FC_mapping_id 1
FC_mapping_name Log_Map1
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
id name  status
1  FCCG1 copying
2  FCCG2 copying
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress DB_Map1
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress Log_Map1
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress Log_Map2
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress DB_Map2
id progress
3  23
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress App_Map1
id progress
2  53

When the background copy has completed, the FlashCopy mapping enters the idle_or_copied state. When all of the FlashCopy mappings in a Consistency Group enter this state, the Consistency Group is at idle_or_copied status. In this state, the FlashCopy mapping can be deleted, and the target disk can be used independently if, for example, another target disk is to be used for the next FlashCopy of the particular source volume.
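The mapping life cycle walked through in these examples can be summarized as a small state table. This is a simplified model of our own; the real product defines additional states (preparing, stopping, suspended) and transitions that this sketch deliberately omits.

```python
# Only the transitions exercised by the scenario in this section.
TRANSITIONS = {
    ("idle_or_copied", "prestartfcmap"): "prepared",
    ("prepared", "startfcmap"): "copying",
    ("copying", "background copy completes"): "idle_or_copied",
    ("copying", "stopfcmap"): "stopped",
}

def next_state(state: str, event: str) -> str:
    """Return the state a mapping enters after the given event."""
    return TRANSITIONS[(state, event)]

# Walk a mapping through the prepare/start/complete sequence used above.
state = "idle_or_copied"
for event in ("prestartfcmap", "startfcmap", "background copy completes"):
    state = next_state(state, event)
print(state)
```

The walk ends back at idle_or_copied, which is exactly the point at which the target disk holds a usable point-in-time copy and the mapping can be deleted.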
IBM_2145:ITSO-CLS1:admin>svctask stopfcmap App_Map1
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap App_Map1
id 2
name App_Map1
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 3
target_vdisk_name App_Target_1
group_id
group_name
status idle_or_copied
progress 100
copy_rate 50
start_time 090826171647
dependent_mappings 0
autodelete off
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
IBM_2145:ITSO-CLS1:admin>svctask stopfcconsistgrp FCCG1
IBM_2145:ITSO-CLS1:admin>svctask stopfcconsistgrp FCCG2
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
id name  status
1  FCCG1 stopped
2  FCCG2 stopped
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap -delim ,
id,name,source_vdisk_id,source_vdisk_name,target_vdisk_id,target_vdisk_name,group_id,group_name,status,progress,copy_rate,clean_progress,incremental,partner_FC_id,partner_FC_name,restoring
0,DB_Map1,0,DB_Source,6,DB_Target_1,1,FCCG1,idle_or_copied,100,50,100,off,,,no
1,Log_Map1,1,Log_Source,4,Log_Target_1,1,FCCG1,idle_or_copied,100,50,100,off,,,no
2,App_Map1,2,App_Source,3,App_Target_1,,,idle_or_copied,100,50,100,off,,,no
3,DB_Map2,0,DB_Source,7,DB_Target_2,2,FCCG2,idle_or_copied,100,50,100,off,,,no
4,Log_Map2,1,Log_Source,5,Log_Target_2,2,FCCG2,idle_or_copied,100,50,100,off,,,no
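Output in the -delim form shown above (a header row followed by data rows) is convenient to turn into records in a script. The helper below is our own illustrative sketch; it works for any svcinfo listing that follows this header-plus-rows layout.

```python
from typing import Dict, List

def parse_delim_output(lines: List[str]) -> List[Dict[str, str]]:
    """Turn the '-delim ,' output of an svcinfo listing (one header
    row, then one row per object) into one dict per object, keyed
    by the column names in the header."""
    header = lines[0].split(",")
    return [dict(zip(header, row.split(","))) for row in lines[1:]]

# A trimmed-down version of the lsfcmap columns above.
sample = [
    "id,name,source_vdisk_name,status,progress",
    "0,DB_Map1,DB_Source,idle_or_copied,100",
    "2,App_Map1,App_Source,idle_or_copied,100",
]
for fcmap in parse_delim_output(sample):
    print(fcmap["name"], fcmap["status"])
```

Using -delim with a script in this way avoids fragile parsing of the default space-aligned columns.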
FlashCopy mapping is stopped, the command fails unless the -force flag is specified. If the mapping is active (copying), it must first be stopped before it can be deleted. Deleting a mapping only deletes the logical relationship between the two volumes. However, when issued on an active FlashCopy mapping using the -force flag, the delete renders the data on the FlashCopy mapping target volume as inconsistent. Tip: If you want to use the target volume as a normal volume, monitor the background copy progress until it is complete (100% copied) and, then, delete the FlashCopy mapping. Another option is to set the -autodelete option when creating the FlashCopy mapping. As shown in Example 9-131, we delete App_Map1.
Example 9-131 Delete App_Map1
IBM_2145:ITSO-CLS1:admin>svctask rmfcmap DB_Map1
IBM_2145:ITSO-CLS1:admin>svctask rmfcmap DB_Map2
IBM_2145:ITSO-CLS1:admin>svctask rmfcmap Log_Map1
IBM_2145:ITSO-CLS1:admin>svctask rmfcmap Log_Map2
IBM_2145:ITSO-CLS1:admin>svctask rmfcconsistgrp FCCG1
IBM_2145:ITSO-CLS1:admin>svctask rmfcconsistgrp FCCG2
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap
IBM_2145:ITSO-CLS1:admin>
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk 8
id 8
name App_Source_SE
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
capacity 1.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AB813F100000000000000B
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 221.17MB
free_capacity 220.77MB
overallocation 462
autoexpand on
warning 80
grainsize 32
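The overallocation 462 reported above follows directly from the capacity figures in the same listing: the virtual capacity expressed as a percentage of the real (allocated) capacity. The small sketch below reproduces that number; the truncation to a whole number is our assumption based on the reported value.

```python
def overallocation_pct(virtual_mb: float, real_mb: float) -> int:
    """Overallocation as reported by svcinfo lsvdisk: virtual capacity
    as a percentage of the real capacity, truncated to an integer."""
    return int(virtual_mb / real_mb * 100)

# The thin-provisioned volume above: 1.00 GB (1024 MB) virtual,
# 221.17 MB real.
print(overallocation_pct(1024.0, 221.17))   # matches the reported 462
```

Tracking this ratio (together with the warning threshold) tells you how far a thin-provisioned volume can grow before autoexpand must allocate more real capacity.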
2. Define a FlashCopy mapping in which the non-thin-provisioned volume is the source and the thin-provisioned volume is the target. Specify a copy rate as high as possible and activate the -autodelete option for the mapping; see Example 9-134.

Example 9-134 svctask mkfcmap

IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source App_Source -target App_Source_SE -name MigrtoThinProv -copyrate 100 -autodelete
FlashCopy Mapping, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap 0
id 0
name MigrtoThinProv
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 8
target_vdisk_name App_Source_SE
group_id
group_name
status idle_or_copied
progress 0
copy_rate 100
start_time
dependent_mappings 0
autodelete on
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no

3. Run the svctask prestartfcmap command and the svcinfo lsfcmap MigrtoThinProv command, as shown in Example 9-135.

Example 9-135 svctask prestartfcmap

IBM_2145:ITSO-CLS1:admin>svctask prestartfcmap MigrtoThinProv
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap MigrtoThinProv
id 0
name MigrtoThinProv
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 8
target_vdisk_name App_Source_SE
group_id
group_name
status prepared
progress 0
copy_rate 100
start_time
dependent_mappings 0
autodelete on
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no

4. Run the svctask startfcmap command, as shown in Example 9-136.
Example 9-136 svctask startfcmap command
IBM_2145:ITSO-CLS1:admin>svctask startfcmap MigrtoThinProv

5. Monitor the copy process using the svcinfo lsfcmapprogress command, as shown in Example 9-137.
Example 9-137 svcinfo lsfcmapprogress command
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress MigrtoThinProv
id progress
0  63

6. When the background copy completes, the FlashCopy mapping is deleted automatically because -autodelete is on. Example 9-138 shows the mapping while the copy is still in progress.

Example 9-138 svcinfo lsfcmap command

IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap MigrtoThinProv
id 0
name MigrtoThinProv
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 8
target_vdisk_name App_Source_SE
group_id
group_name
status copying
progress 73
copy_rate 100
start_time 090827095354
dependent_mappings 0
autodelete on
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
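The monitoring in step 5 is typically scripted as a polling loop. The sketch below abstracts the progress source as a callable so that the loop itself is testable; in practice, that callable would run svcinfo lsfcmapprogress over an SSH session and parse the progress column, which we only describe here rather than implement.

```python
import time
from typing import Callable

def wait_for_copy(get_progress: Callable[[], int],
                  poll_seconds: float = 0.0) -> int:
    """Poll a progress source until the background copy reports 100%.
    Returns the number of polls it took. get_progress is any callable
    returning 0-100; on a real cluster it would wrap an SSH call to
    'svcinfo lsfcmapprogress <mapping>'."""
    polls = 0
    while True:
        polls += 1
        if get_progress() >= 100:
            return polls
        time.sleep(poll_seconds)

# Simulate a copy that advances 25% per poll.
readings = iter([25, 50, 75, 100])
print(wait_for_copy(lambda: next(readings)))   # completes on the fourth poll
```

On a real cluster, a poll interval of several seconds is appropriate; the background copy rate determines how long the loop runs.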
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap MigrtoThinProv
CMMVC5754E The specified object does not exist, or the name supplied does not meet the naming rules.

An independent copy of the source volume (App_Source) has been created on the thin-provisioned volume, and the migration is complete, as shown in Example 9-139.

Example 9-139 svcinfo lsvdisk

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk App_Source_SE
id 8
name App_Source_SE
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
capacity 1.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AB813F100000000000000B
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.77MB
overallocation 99
autoexpand on
warning 80
grainsize 32

Real size: Regardless of the real size that you defined for the target thin-provisioned volume, after the copy completes, the real size will be at least the capacity of the source volume.
To migrate a thin-provisioned volume to a fully allocated volume, you can follow the same scenario.
IBM_2145:ITSO-CLS4:admin>svcinfo lsvdisk
id name           IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name   capacity type    vdisk_UID                        fc_map_count copy_count fast_write_state se_copy_count
4  Volume_FC_S    0           io_grp0       online 2            STGPool_DS4700_1 1.00GB   striped 60050768018281BEE000000000000006 0            1          empty            0
5  Volume_FC_T_S1 0           io_grp0       online 2            STGPool_DS4700_1 1.00GB   striped 60050768018281BEE000000000000007 0            1          empty            0
6  Volume_FC_T1   0           io_grp0       online 2            STGPool_DS4700_1 1.00GB   striped 60050768018281BEE000000000000008 0            1          empty            0
IBM_2145:ITSO-CLS4:admin>svctask mkfcmap -source Volume_FC_S -target Volume_FC_T_S1 -name FCMAP_1 -copyrate 50
FlashCopy Mapping, id [0], successfully created
IBM_2145:ITSO-CLS4:admin>svctask mkfcmap -source Volume_FC_T_S1 -target Volume_FC_S -name FCMAP_rev_1 -copyrate 50
FlashCopy Mapping, id [1], successfully created
IBM_2145:ITSO-CLS4:admin>svctask mkfcmap -source Volume_FC_T_S1 -target Volume_FC_T1 -name FCMAP_2 -copyrate 50
FlashCopy Mapping, id [2], successfully created
IBM_2145:ITSO-CLS4:admin>svctask startfcmap -prep FCMAP_1
IBM_2145:ITSO-CLS4:admin>svcinfo lsfcmap
id name        source_vdisk_name target_vdisk_name status         progress copy_rate clean_progress incremental partner_FC_id partner_FC_name restoring
0  FCMAP_1     Volume_FC_S       Volume_FC_T_S1    copying        0        50        100            off         1             FCMAP_rev_1     no
1  FCMAP_rev_1 Volume_FC_T_S1    Volume_FC_S       idle_or_copied 0        50        100            off         0             FCMAP_1         no
2  FCMAP_2     Volume_FC_T_S1    Volume_FC_T1      idle_or_copied 0        50        100            off                                       no
IBM_2145:ITSO-CLS4:admin>svctask startfcmap -prep FCMAP_2
IBM_2145:ITSO-CLS4:admin>svcinfo lsfcmap
id name        source_vdisk_name target_vdisk_name status         progress copy_rate clean_progress incremental partner_FC_id partner_FC_name restoring
0  FCMAP_1     Volume_FC_S       Volume_FC_T_S1    copying        8        50        91             off         1             FCMAP_rev_1     no
1  FCMAP_rev_1 Volume_FC_T_S1    Volume_FC_S       idle_or_copied 0        50        100            off         0             FCMAP_1         no
2  FCMAP_2     Volume_FC_T_S1    Volume_FC_T1      copying        0        50        100            off                                       no
IBM_2145:ITSO-CLS4:admin>svctask startfcmap -prep FCMAP_rev_1
CMMVC6298E The command failed because a target VDisk has dependent FlashCopy mappings.
IBM_2145:ITSO-CLS4:admin>svctask startfcmap -prep -restore FCMAP_rev_1
IBM_2145:ITSO-CLS4:admin>svcinfo lsfcmap
id name        source_vdisk_name target_vdisk_name status         progress copy_rate clean_progress incremental partner_FC_id partner_FC_name restoring
0  FCMAP_1     Volume_FC_S       Volume_FC_T_S1    copying        20       50        86             off         1             FCMAP_rev_1     no
1  FCMAP_rev_1 Volume_FC_T_S1    Volume_FC_S       copying        0        50        100            off         0             FCMAP_1         yes
2  FCMAP_2     Volume_FC_T_S1    Volume_FC_T1      copying        0        50        100            off                                       no
As you can see in Example 9-140, FCMAP_rev_1 shows a restoring value of yes while the FlashCopy mapping is copying. After it finishes copying, the restoring value changes to no.
Table 9-3 Volume details

Content of volume   Volume at primary site  Volume at secondary site
Database files      MM_DB_Pri               MM_DB_Sec
Database log files  MM_DBLog_Pri            MM_DBLog_Sec
Application files   MM_App_Pri              MM_App_Sec
Because data consistency is needed across the MM_DB_Pri and MM_DBLog_Pri volumes, a Consistency Group, CG_W2K3_MM, is created to handle their Metro Mirror relationships. Because the application files in this scenario are independent of the database, a stand-alone Metro Mirror relationship is created for the MM_App_Pri volume. Figure 9-7 illustrates the Metro Mirror setup.
2. Create a Metro Mirror Consistency Group:
   Name: CG_W2K3_MM
3. Create the Metro Mirror relationship for MM_DB_Pri:
   Master: MM_DB_Pri
   Auxiliary: MM_DB_Sec
   Auxiliary SVC cluster: ITSO-CLS4
   Name: MMREL1
   Consistency Group: CG_W2K3_MM
4. Create the Metro Mirror relationship for MM_DBLog_Pri:
   Master: MM_DBLog_Pri
   Auxiliary: MM_DBLog_Sec
   Auxiliary SVC cluster: ITSO-CLS4
   Name: MMREL2
   Consistency Group: CG_W2K3_MM
5. Create the Metro Mirror relationship for MM_App_Pri:
   Master: MM_App_Pri
   Auxiliary: MM_App_Sec
   Auxiliary SVC cluster: ITSO-CLS4
   Name: MMREL3
Preverification
To verify that both clusters can communicate with each other, use the svcinfo lsclustercandidate command. As shown in Example 9-141, ITSO-CLS4 is an eligible SVC cluster candidate at ITSO-CLS1 for the SVC cluster partnership, and vice versa. Therefore, both clusters are communicating with each other.
Example 9-141 Listing the available SVC cluster for partnership
IBM_2145:ITSO-CLS1:admin>svcinfo lsclustercandidate
id               configured name
0000020069E03A42 no         ITSO-CLS3
0000020063E03A38 no         ITSO-CLS4
0000020061006FCA no         ITSO-CLS2
IBM_2145:ITSO-CLS4:admin>svcinfo lsclustercandidate
id               configured name
0000020069E03A42 no         ITSO-CLS3
000002006AE04FC4 no         ITSO-CLS1
0000020061006FCA no         ITSO-CLS2
Example 9-142 shows the output of the svcinfo lscluster command before the Metro Mirror partnership is set up. We show it so that you can compare the cluster configuration before and after the partnership is created.
Example 9-142 Pre-verification of cluster configuration
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster
id               name      location partnership bandwidth id_alias
000002006AE04FC4 ITSO-CLS1 local                          000002006AE04FC4
IBM_2145:ITSO-CLS4:admin>svcinfo lscluster
id               name      location partnership bandwidth id_alias
0000020063E03A38 ITSO-CLS4 local                          0000020063E03A38
IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster
id               name      location partnership      bandwidth id_alias
000002006AE04FC4 ITSO-CLS1 local                               000002006AE04FC4
0000020063E03A38 ITSO-CLS4 remote   fully_configured 50        0000020063E03A38

In Example 9-144, the partnership is created from ITSO-CLS4 back to ITSO-CLS1, specifying a bandwidth of 50 MBps for the background copy. After creating the partnership, verify that it is fully configured on both clusters by reissuing the svcinfo lscluster command.
Example 9-144 Creating the partnership from ITSO-CLS4 to ITSO-CLS1 and verifying the partnership
IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1
IBM_2145:ITSO-CLS4:admin>svcinfo lscluster
id               name      location partnership      bandwidth id_alias
0000020063E03A38 ITSO-CLS4 local                               0000020063E03A38
000002006AE04FC4 ITSO-CLS1 remote   fully_configured 50        000002006AE04FC4
IBM_2145:ITSO-CLS1:admin>svctask mkrcconsistgrp -cluster ITSO-CLS4 -name CG_W2K3_MM
RC Consistency Group, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp
id name       master_cluster_id master_cluster_name aux_cluster_id   aux_cluster_name primary state relationship_count copy_type
0  CG_W2K3_MM 000002006AE04FC4  ITSO-CLS1           0000020063E03A38 ITSO-CLS4                empty 0                  empty_group
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk -filtervalue name=MM*
id name       IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type    vdisk_UID                        fc_map_count copy_count fast_write_state
13 MM_DB_Pri  0           io_grp0       online 0            MDG_DS47       1.00GB   striped 6005076801AB813F1000000000000010 0            1          empty
14 MM_Log_Pri 0           io_grp0       online 0            MDG_DS47       1.00GB   striped 6005076801AB813F1000000000000011 0            1          empty
15 MM_App_Pri 0           io_grp0       online 0            MDG_DS47       1.00GB   striped 6005076801AB813F1000000000000012 0            1          empty
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationshipcandidate
id vdisk_name
0  DB_Source
1  Log_Source
2  App_Source
3  App_Target_1
4  Log_Target_1
5  Log_Target_2
6  DB_Target_1
7  DB_Target_2
8  App_Source_SE
9  FC_A
13 MM_DB_Pri
14 MM_Log_Pri
15 MM_App_Pri
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationshipcandidate -aux ITSO-CLS4 -master MM_DB_Pri
id vdisk_name
0  MM_DB_Sec
1  MM_Log_Sec
2  MM_App_Sec
IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master MM_DB_Pri -aux MM_DB_Sec -cluster ITSO-CLS4 -consistgrp CG_W2K3_MM -name MMREL1
RC Relationship, id [13], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master MM_Log_Pri -aux MM_Log_Sec -cluster ITSO-CLS4 -consistgrp CG_W2K3_MM -name MMREL2
RC Relationship, id [14], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship
id name   master_cluster_id master_cluster_name master_vdisk_id master_vdisk_name aux_cluster_id   aux_cluster_name aux_vdisk_id aux_vdisk_name primary consistency_group_id consistency_group_name state                bg_copy_priority progress copy_type
13 MMREL1 000002006AE04FC4  ITSO-CLS1           13              MM_DB_Pri         0000020063E03A38 ITSO-CLS4        0            MM_DB_Sec      master  0                    CG_W2K3_MM             inconsistent_stopped 50               0        metro
14 MMREL2 000002006AE04FC4  ITSO-CLS1           14              MM_Log_Pri        0000020063E03A38 ITSO-CLS4        1            MM_Log_Sec     master  0                    CG_W2K3_MM             inconsistent_stopped 50               0        metro
Tip: The -sync option is only used when the target volume has already mirrored all of the data from the source volume. By using this option, there is no initial background copy between the primary volume and the secondary volume. MMREL2 and MMREL1 are in the inconsistent_stopped state because they were not created with the -sync option, so their auxiliary volumes need to be synchronized with their primary volumes.
Example 9-147 Creating a stand-alone relationship and verifying it
IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master MM_App_Pri -aux MM_App_Sec -sync -cluster ITSO-CLS4 -name MMREL3
RC Relationship, id [15], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship 15
id 15
name MMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 15
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type metro
IBM_2145:ITSO-CLS1:admin>svctask startrcrelationship MMREL3
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL3
id 15
name MMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 15
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type metro
IBM_2145:ITSO-CLS1:admin>
IBM_2145:ITSO-CLS1:admin>svctask startrcconsistgrp CG_W2K3_MM
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary master
state inconsistent_copying
relationship_count 2
freeze_time
status
sync
copy_type metro
RC_rel_id 13
RC_rel_name MMREL1
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL1
id 13
name MMREL1
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 13
master_vdisk_name MM_DB_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 0
aux_vdisk_name MM_DB_Sec
primary master
consistency_group_id 0
consistency_group_name CG_W2K3_MM
state consistent_synchronized
bg_copy_priority 50
progress 35
freeze_time
status online
sync
copy_type metro
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL2
id 14
name MMREL2
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 14
master_vdisk_name MM_Log_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 1
aux_vdisk_name MM_Log_Sec
primary master
consistency_group_id 0
consistency_group_name CG_W2K3_MM
state consistent_synchronized
bg_copy_priority 50
progress 37
freeze_time
status online
sync
copy_type metro

When all of the Metro Mirror relationships complete the background copy, the Consistency Group enters the consistent_synchronized state, as shown in Example 9-151.
Example 9-151 Listing the Metro Mirror Consistency Group
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type metro
RC_rel_id 13
RC_rel_name MMREL1
RC_rel_id 14
RC_rel_name MMREL2
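The Metro Mirror states observed in this section also form a small transition table. The sketch below is a deliberately simplified model covering only the transitions exercised by these examples; the product defines further states and connectivity-related variants that it omits.

```python
# Only the Metro Mirror transitions exercised in this section.
MM_TRANSITIONS = {
    ("inconsistent_stopped", "startrcconsistgrp"): "inconsistent_copying",
    ("inconsistent_copying", "background copy completes"): "consistent_synchronized",
    ("consistent_synchronized", "stoprcconsistgrp"): "consistent_stopped",
    ("consistent_synchronized", "stoprcconsistgrp -access"): "idling",
}

# Walk a newly created Consistency Group through start and completion.
state = "inconsistent_stopped"
for event in ("startrcconsistgrp", "background copy completes"):
    state = MM_TRANSITIONS[(state, event)]
print(state)
```

Note that stopping with -access (which enables write I/O to the secondary volumes) leads to the idling state rather than consistent_stopped, matching the examples that follow.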
IBM_2145:ITSO-CLS1:admin>svctask stoprcrelationship -access MMREL3
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL3
id 15
name MMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 15
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary
consistency_group_id
consistency_group_name
state idling
bg_copy_priority 50
progress
freeze_time
status
sync in_sync
copy_type metro
IBM_2145:ITSO-CLS1:admin>svctask stoprcconsistgrp CG_W2K3_MM
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary master
state consistent_stopped
relationship_count 2
freeze_time
status
sync in_sync
copy_type metro
RC_rel_id 13
RC_rel_name MMREL1
RC_rel_id 14
RC_rel_name MMREL2

If, afterwards, we want to enable access (write I/O) to the secondary volumes, we reissue the svctask stoprcconsistgrp command with the -access flag. The Consistency Group transitions to the Idling state, as shown in Example 9-154.
Example 9-154 Stopping a Metro Mirror Consistency Group and enabling access to the secondary
IBM_2145:ITSO-CLS1:admin>svctask stoprcconsistgrp -access CG_W2K3_MM
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary
state idling
relationship_count 2 freeze_time status sync in_sync copy_type metro RC_rel_id 13 RC_rel_name MMREL1 RC_rel_id 14 RC_rel_name MMREL2
IBM_2145:ITSO-CLS1:admin>svctask startrcrelationship -primary master -force MMREL3 IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL3 id 15 name MMREL3 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 15 master_vdisk_name MM_App_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 2 aux_vdisk_name MM_App_Sec primary master consistency_group_id consistency_group_name state consistent_synchronized bg_copy_priority 50 progress freeze_time status online sync copy_type metro
In Example 9-156, we change the copy direction by specifying the auxiliary volumes to become the primaries.
Example 9-156 Restarting a Metro Mirror Consistency Group while changing the copy direction
IBM_2145:ITSO-CLS1:admin>svctask startrcconsistgrp -force -primary aux CG_W2K3_MM IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM id 0 name CG_W2K3_MM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary aux state consistent_synchronized relationship_count 2 freeze_time status sync copy_type metro RC_rel_id 13 RC_rel_name MMREL1 RC_rel_id 14 RC_rel_name MMREL2
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL3 id 15 name MMREL3 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1
master_vdisk_id 15 master_vdisk_name MM_App_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 2 aux_vdisk_name MM_App_Sec primary master consistency_group_id consistency_group_name state consistent_synchronized bg_copy_priority 50 progress freeze_time status online sync copy_type metro IBM_2145:ITSO-CLS1:admin>svctask switchrcrelationship -primary aux MMREL3 IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL3 id 15 name MMREL3 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 15 master_vdisk_name MM_App_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 2 aux_vdisk_name MM_App_Sec primary aux consistency_group_id consistency_group_name state consistent_synchronized bg_copy_priority 50 progress freeze_time status online sync copy_type metro
Important: When the copy direction is switched, it is crucial that there is no outstanding I/O to the volume that transitions from primary to secondary, because all of the I/O will be inhibited when that volume becomes the secondary. Therefore, careful planning is required prior to using the svctask switchrcconsistgrp command.
Example 9-158 Switching the copy direction for a Metro Mirror Consistency Group
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM id 0 name CG_W2K3_MM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary master state consistent_synchronized relationship_count 2 freeze_time status sync copy_type metro RC_rel_id 13 RC_rel_name MMREL1 RC_rel_id 14 RC_rel_name MMREL2 IBM_2145:ITSO-CLS1:admin>svctask switchrcconsistgrp -primary aux CG_W2K3_MM IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM id 0 name CG_W2K3_MM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary aux state consistent_synchronized relationship_count 2 freeze_time status sync copy_type metro RC_rel_id 13 RC_rel_name MMREL1 RC_rel_id 14 RC_rel_name MMREL2
IBM_2145:ITSO-CLS1:admin>svcinfo lsclustercandidate id configured name 0000020069E03A42 no ITSO-CLS3 0000020063E03A38 no ITSO-CLS4 0000020061006FCA no ITSO-CLS2 IBM_2145:ITSO-CLS2:admin>svcinfo lsclustercandidate id configured cluster_name 000002006AE04FC4 no ITSO-CLS1 0000020069E03A42 no ITSO-CLS3 0000020063E03A38 no ITSO-CLS4 IBM_2145:ITSO-CLS3:admin>svcinfo lsclustercandidate id configured name 000002006AE04FC4 no ITSO-CLS1 0000020063E03A38 no ITSO-CLS4 0000020061006FCA no ITSO-CLS2 IBM_2145:ITSO-CLS4:admin>svcinfo lsclustercandidate id configured name 0000020069E03A42 no ITSO-CLS3 000002006AE04FC4 no ITSO-CLS1 0000020061006FCA no ITSO-CLS2
Example 9-160 shows the sequence of mkpartnership commands that you run to create a star configuration.
Example 9-160 Creating a star configuration using the mkpartnership command
From ITSO-CLS1 to multiple clusters IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2 IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3 IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4 From ITSO-CLS2 to ITSO-CLS1 IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1 From ITSO-CLS3 to ITSO-CLS1 IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1 From ITSO-CLS4 to ITSO-CLS1 IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1 From ITSO-CLS1 IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 000002006AE04FC4:ITSO-CLS1:local:::000002006AE04FC4 0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA 0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42 0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38 From ITSO-CLS2 IBM_2145:ITSO-CLS2:admin>svcinfo lscluster -delim :
Implementing the IBM System Storage SAN Volume Controller V6.1
id:name:location:partnership:bandwidth:id_alias 0000020061006FCA:ITSO-CLS2:local:::0000020061006FCA 000002006AE04FC4:ITSO-CLS1:remote:fully_configured::000002006AE04FC4 0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42 0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38 From ITSO-CLS3 IBM_2145:ITSO-CLS3:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 0000020069E03A42:ITSO-CLS3:local:::0000020069E03A42 000002006AE04FC4:ITSO-CLS1:remote:fully_configured::000002006AE04FC4 0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA 0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38 From ITSO-CLS4 IBM_2145:ITSO-CLS4:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 0000020063E03A38:ITSO-CLS4:local:::0000020063E03A38 000002006AE04FC4:ITSO-CLS1:remote:fully_configured:50:000002006AE04FC4 0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA 0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42 After the SVC partnership has been configured, you can configure any rcrelationship or rcconsistgrp that you need. Make sure that a single volume is only in one relationship.
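The star layout is regular enough to generate the command sequence mechanically. The following Python sketch (our illustration, not part of the SVC CLI; the cluster names are the ones used in this chapter) emits (cluster, command) pairs, one mkpartnership call from each side of every hub-spoke pair:

```python
def star_partnership_commands(hub, spokes, bandwidth=50):
    """Emit (cluster_to_run_on, command) pairs for a hub-and-spoke setup.

    Each SVC partnership must be created from both sides before it
    reports fully_configured, so N spokes need 2 * N commands.
    """
    commands = []
    for spoke in spokes:
        # Run on the hub, naming the spoke.
        commands.append((hub, f"svctask mkpartnership -bandwidth {bandwidth} {spoke}"))
        # Run on the spoke, naming the hub.
        commands.append((spoke, f"svctask mkpartnership -bandwidth {bandwidth} {hub}"))
    return commands

cmds = star_partnership_commands("ITSO-CLS1", ["ITSO-CLS2", "ITSO-CLS3", "ITSO-CLS4"])
```

The generated pairs match the star example: three commands issued on the hub and one on each spoke.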
Triangle configuration
Figure 9-9 shows the triangle configuration.
Example 9-161 shows the sequence of mkpartnership commands that you run to create a triangle configuration.
Example 9-161 Creating a triangle configuration
From ITSO-CLS1 to ITSO-CLS2 and ITSO-CLS3 IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2 IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3 From ITSO-CLS2 to ITSO-CLS1 and ITSO-CLS3 IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1 IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3 From ITSO-CLS3 to ITSO-CLS1 and ITSO-CLS2 IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1 IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2 From ITSO-CLS1 IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 000002006AE04FC4:ITSO-CLS1:local:::000002006AE04FC4 0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA 0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42 From ITSO-CLS2 IBM_2145:ITSO-CLS2:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 0000020061006FCA:ITSO-CLS2:local:::0000020061006FCA 000002006AE04FC4:ITSO-CLS1:remote:fully_configured:50:000002006AE04FC4 0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42 From ITSO-CLS3 IBM_2145:ITSO-CLS3:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 0000020069E03A42:ITSO-CLS3:local:::0000020069E03A42 000002006AE04FC4:ITSO-CLS1:remote:fully_configured:50:000002006AE04FC4 0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA After the SVC partnership has been configured, you can configure any rcrelationship or rcconsistgrp that you need. Make sure that a single volume is only in one relationship.
Example 9-162 shows the sequence of mkpartnership commands that you run to create a fully connected configuration.
Example 9-162 Creating a fully connected configuration
From ITSO-CLS1 to ITSO-CLS2, ITSO-CLS3 and ITSO-CLS4 IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2 IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3 IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4 From ITSO-CLS2 to ITSO-CLS1, ITSO-CLS3 and ITSO-CLS4 IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1 IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3 IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4 From ITSO-CLS3 to ITSO-CLS1, ITSO-CLS2 and ITSO-CLS4 IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1 IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2 IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4 From ITSO-CLS4 to ITSO-CLS1, ITSO-CLS2 and ITSO-CLS3 IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1 IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2 IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3 From ITSO-CLS1 IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 000002006AE04FC4:ITSO-CLS1:local:::000002006AE04FC4 0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA 0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42 0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38
Chapter 9. SAN Volume Controller operations using the command-line interface
From ITSO-CLS2 IBM_2145:ITSO-CLS2:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 0000020061006FCA:ITSO-CLS2:local:::0000020061006FCA 000002006AE04FC4:ITSO-CLS1:remote:fully_configured::000002006AE04FC4 0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42 0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38 From ITSO-CLS3 IBM_2145:ITSO-CLS3:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 0000020069E03A42:ITSO-CLS3:local:::0000020069E03A42 000002006AE04FC4:ITSO-CLS1:remote:fully_configured::000002006AE04FC4 0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA 0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38 From ITSO-CLS4 IBM_2145:ITSO-CLS4:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 0000020063E03A38:ITSO-CLS4:local:::0000020063E03A38 000002006AE04FC4:ITSO-CLS1:remote:fully_configured:50:000002006AE04FC4 0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA 0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42
After the SVC partnership has been configured, you can configure any rcrelationship or rcconsistgrp that you need. Make sure that a single volume is only in one relationship.
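A fully connected configuration needs one mkpartnership call for every ordered pair of clusters, so four clusters require 4 x 3 = 12 commands. A small Python sketch (ours, purely illustrative) makes the count explicit:

```python
from itertools import permutations

def mesh_partnership_commands(clusters, bandwidth=50):
    """Emit (cluster_to_run_on, command) pairs for a fully connected
    configuration: every ordered pair needs its own mkpartnership call,
    so N clusters need N * (N - 1) commands."""
    return [
        (src, f"svctask mkpartnership -bandwidth {bandwidth} {dst}")
        for src, dst in permutations(clusters, 2)
    ]

clusters = ["ITSO-CLS1", "ITSO-CLS2", "ITSO-CLS3", "ITSO-CLS4"]
cmds = mesh_partnership_commands(clusters)
```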
Daisy-chain configuration
Figure 9-11 shows the daisy-chain configuration.
Example 9-163 shows the sequence of mkpartnership commands that you run to create a daisy-chain configuration.
Example 9-163 Creating a daisy-chain configuration
From ITSO-CLS1 to ITSO-CLS2 IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2 From ITSO-CLS2 to ITSO-CLS1 and ITSO-CLS3 IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1 IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3 From ITSO-CLS3 to ITSO-CLS2 and ITSO-CLS4 IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2 IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4 From ITSO-CLS4 to ITSO-CLS3 IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3 From ITSO-CLS1 IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 000002006AE04FC4:ITSO-CLS1:local:::000002006AE04FC4 0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA
From ITSO-CLS2 IBM_2145:ITSO-CLS2:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 0000020061006FCA:ITSO-CLS2:local:::0000020061006FCA 000002006AE04FC4:ITSO-CLS1:remote:fully_configured::000002006AE04FC4 0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42 From ITSO-CLS3 IBM_2145:ITSO-CLS3:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 0000020069E03A42:ITSO-CLS3:local:::0000020069E03A42 000002006AE04FC4:ITSO-CLS1:remote:fully_configured::000002006AE04FC4 0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA From ITSO-CLS4 IBM_2145:ITSO-CLS4:admin>svcinfo lscluster -delim : id:name:location:partnership:bandwidth:id_alias 0000020063E03A38:ITSO-CLS4:local:::0000020063E03A38 After the SVC partnership has been configured, you can configure any rcrelationship or rcconsistgrp that you need. Make sure that a single volume is only in one relationship.
Note: This example is for an intercluster Global Mirror operation only. If you want to set up an intracluster operation, we highlight the parts of the following procedure that you do not need to perform.

Table 9-4 shows the details of the volumes.
Table 9-4 Details of volumes for Global Mirror relationship scenario

Content of volume      Volumes at primary site    Volumes at secondary site
Database files         GM_DB_Pri                  GM_DB_Sec
Database log files     GM_DBLog_Pri               GM_DBLog_Sec
Application files      GM_App_Pri                 GM_App_Sec
Because data consistency is needed across GM_DB_Pri and GM_DBLog_Pri, we create a Consistency Group to handle Global Mirror relationships for them. Because in this scenario the application files are independent of the database, we create a stand-alone Global Mirror relationship for GM_App_Pri. Figure 9-12 illustrates the Global Mirror relationship setup.
Figure 9-12 Global Mirror relationship setup: the Consistency Group CG_W2K3_GM contains GM Relationship 1 (GM_DB_Pri to GM_DB_Sec) and GM Relationship 2 (GM_DBLog_Pri to GM_DBLog_Sec); GM Relationship 3 (GM_App_Pri to GM_App_Sec) is stand-alone.
2. Create a Global Mirror Consistency Group:
   Name: CG_W2K3_GM
3. Create the Global Mirror relationship for GM_DB_Pri:
   Master: GM_DB_Pri
   Auxiliary: GM_DB_Sec
   Auxiliary SVC cluster: ITSO-CLS4
   Name: GMREL1
   Consistency Group: CG_W2K3_GM
4. Create the Global Mirror relationship for GM_DBLog_Pri:
   Master: GM_DBLog_Pri
   Auxiliary: GM_DBLog_Sec
   Auxiliary SVC cluster: ITSO-CLS4
   Name: GMREL2
   Consistency Group: CG_W2K3_GM
5. Create the stand-alone Global Mirror relationship for GM_App_Pri:
   Master: GM_App_Pri
   Auxiliary: GM_App_Sec
   Auxiliary SVC cluster: ITSO-CLS4
   Name: GMREL3
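The plan above maps directly onto mkrcrelationship commands. The following Python sketch builds the command strings; the flag names (-master, -aux, -cluster, -consistgrp, -global, -name) follow the SVC 6.1 CLI, but verify them against your release before use:

```python
def mkrcrelationship_cmd(master, aux, name, consistgrp=None,
                         aux_cluster="ITSO-CLS4"):
    """Build the mkrcrelationship command for one Global Mirror pair.

    -global selects Global Mirror; a stand-alone relationship simply
    omits the -consistgrp flag.
    """
    cmd = (f"svctask mkrcrelationship -master {master} -aux {aux} "
           f"-cluster {aux_cluster} -global -name {name}")
    if consistgrp:
        cmd += f" -consistgrp {consistgrp}"
    return cmd

plan = [
    ("GM_DB_Pri", "GM_DB_Sec", "GMREL1", "CG_W2K3_GM"),
    ("GM_DBLog_Pri", "GM_DBLog_Sec", "GMREL2", "CG_W2K3_GM"),
    ("GM_App_Pri", "GM_App_Sec", "GMREL3", None),  # stand-alone
]
commands = [mkrcrelationship_cmd(*row) for row in plan]
```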
Preverification
To verify that both clusters can communicate with each other, use the svcinfo lsclustercandidate command. Example 9-164 confirms that our clusters are communicating: ITSO-CLS4 is an eligible SVC cluster candidate at ITSO-CLS1 for the SVC cluster partnership, and vice versa.
Example 9-164 Listing the available SVC clusters for partnership
In Example 9-165 on page 544, we show the output of the svcinfo lscluster command before setting up the SVC cluster partnership for Global Mirror, so that it can be compared with the output after the partnership has been set up.
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:cluster_IP_address:cluster_service_IP_address:cluster_IP_address_6:cluster_service_IP_address_6:id_alias
0000020060C06FCA:ITSO-CLS1:local:::10.64.210.240:10.64.210.241:::0000020060C06FCA
IBM_2145:ITSO-CLS4:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:cluster_IP_address:cluster_service_IP_address:cluster_IP_address_6:cluster_service_IP_address_6:id_alias
0000020063E03A38:ITSO-CLS4:local:::10.64.210.246.119:10.64.210.247:::0000020063E03A38
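Because the -delim : form prints a header line followed by delimited rows, it is easy to post-process. A minimal Python sketch (ours, not from the book):

```python
def parse_delim_output(text, delim=":"):
    """Turn 'svcinfo ... -delim :' output (header line plus data rows)
    into a list of dicts keyed by the header fields."""
    lines = text.strip().splitlines()
    header = lines[0].split(delim)
    return [dict(zip(header, row.split(delim))) for row in lines[1:]]

sample = (
    "id:name:location:partnership:bandwidth:id_alias\n"
    "000002006AE04FC4:ITSO-CLS1:local:::000002006AE04FC4\n"
    "0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38"
)
rows = parse_delim_output(sample)
```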
IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 10 ITSO-CLS4
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster
id name location partnership bandwidth id_alias
000002006AE04FC4 ITSO-CLS1 local 000002006AE04FC4
0000020063E03A38 ITSO-CLS4 remote partially_configured_local 10 0000020063E03A38

In Example 9-167, we create the partnership from ITSO-CLS4 back to ITSO-CLS1, specifying a 10 MBps bandwidth to be used for the background copy. After creating the partnership, verify that the partnership is fully configured by reissuing the svcinfo lscluster command.
Example 9-167 Creating the partnership from ITSO-CLS4 to ITSO-CLS1 and verifying the partnership
IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 10 ITSO-CLS1
IBM_2145:ITSO-CLS4:admin>svcinfo lscluster
id name location partnership bandwidth id_alias
0000020063E03A38 ITSO-CLS4 local 0000020063E03A38
000002006AE04FC4 ITSO-CLS1 remote fully_configured 10 000002006AE04FC4
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster
id name location partnership bandwidth id_alias
000002006AE04FC4 ITSO-CLS1 local 000002006AE04FC4
0000020063E03A38 ITSO-CLS4 remote fully_configured 10 0000020063E03A38
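A partnership reports partially_configured_local until the matching mkpartnership command is issued on the other cluster. The following Python sketch (ours; the command runner is injected so the logic can be tested offline, and a live script would call the CLI and sleep between polls) waits until the remote side shows fully_configured:

```python
def wait_for_partnership(run_lscluster, remote_name, attempts=30):
    """Poll colon-delimited lscluster output until the named remote
    cluster reports fully_configured."""
    for _ in range(attempts):
        for line in run_lscluster().splitlines()[1:]:
            fields = line.split(":")
            if len(fields) >= 4 and fields[1] == remote_name \
                    and fields[3] == "fully_configured":
                return True
    return False

# Simulated successive outputs of 'svcinfo lscluster -delim :'.
outputs = iter([
    "id:name:location:partnership:bandwidth:id_alias\n"
    "0000020063E03A38:ITSO-CLS4:local:::0000020063E03A38\n"
    "000002006AE04FC4:ITSO-CLS1:remote:partially_configured_local:10:000002006AE04FC4",
    "id:name:location:partnership:bandwidth:id_alias\n"
    "0000020063E03A38:ITSO-CLS4:local:::0000020063E03A38\n"
    "000002006AE04FC4:ITSO-CLS1:remote:fully_configured:10:000002006AE04FC4",
])
configured = wait_for_partnership(lambda: next(outputs), "ITSO-CLS1", attempts=2)
```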
IBM_2145:ITSO-CLS1:admin>svctask chcluster -gmlinktolerance 200 IBM_2145:ITSO-CLS1:admin>svctask chcluster -gminterdelaysimulation 20 IBM_2145:ITSO-CLS1:admin>svctask chcluster -gmintradelaysimulation 40 IBM_2145:ITSO-CLS1:admin>svcinfo lscluster ITSO-CLS1 id 000002006AE04FC4 name ITSO-CLS1 location local partnership bandwidth total_mdisk_capacity 160.0GB
space_in_mdisk_grps 160.0GB space_allocated_to_vdisks 19.00GB total_free_space 141.0GB statistics_status off statistics_frequency 15 required_memory 8192 cluster_locale en_US time_zone 520 US/Pacific code_level 5.1.0.0 (build 17.1.0908110000) FC_port_speed 2Gb console_IP id_alias 000002006AE04FC4 gm_link_tolerance 200 gm_inter_cluster_delay_simulation 20 gm_intra_cluster_delay_simulation 40 email_reply email_contact email_contact_primary email_contact_alternate email_contact_location email_state invalid inventory_mail_interval 0 total_vdiskcopy_capacity 19.00GB total_used_capacity 19.00GB total_overallocation 11 total_vdisk_capacity 19.00GB cluster_ntp_IP_address cluster_isns_IP_address iscsi_auth_method none iscsi_chap_secret auth_service_configured no auth_service_enabled no auth_service_url auth_service_user_name auth_service_pwd_set no auth_service_cert_set no relationship_bandwidth_limit 25
IBM_2145:ITSO-CLS1:admin>svctask mkrcconsistgrp -cluster ITSO-CLS4 -name CG_W2K3_GM RC Consistency Group, id [0], successfully created IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp id name master_cluster_id master_cluster_name aux_cluster_id aux_cluster_name primary state relationship_count copy_type
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship
id name master_cluster_id master_cluster_name master_vdisk_id master_vdisk_name aux_cluster_id aux_cluster_name aux_vdisk_id aux_vdisk_name primary consistency_group_id consistency_group_name state bg_copy_priority progress copy_type
17 GMREL1 000002006AE04FC4 ITSO-CLS1 17 GM_DB_Pri 0000020063E03A38 ITSO-CLS4 4 GM_DB_Sec master 0 CG_W2K3_GM inconsistent_stopped 50 0 global
18 GMREL2 000002006AE04FC4 ITSO-CLS1 18 GM_DBLog_Pri 0000020063E03A38 ITSO-CLS4 5 GM_DBLog_Sec master 0 CG_W2K3_GM inconsistent_stopped 50 0 global
IBM_2145:ITSO-CLS1:admin>svctask startrcrelationship GMREL3 IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL3 id 16 name GMREL3 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 16 master_vdisk_name GM_App_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 3 aux_vdisk_name GM_App_Sec primary master consistency_group_id consistency_group_name state consistent_synchronized bg_copy_priority 50 progress freeze_time status online sync copy_type global
IBM_2145:ITSO-CLS1:admin>svctask startrcconsistgrp CG_W2K3_GM IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM id 0 name CG_W2K3_GM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary master state inconsistent_copying relationship_count 2 freeze_time
status sync copy_type global RC_rel_id 17 RC_rel_name GMREL1 RC_rel_id 18 RC_rel_name GMREL2
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL1 id 17 name GMREL1 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 17 master_vdisk_name GM_DB_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 4 aux_vdisk_name GM_DB_Sec primary master consistency_group_id 0 consistency_group_name CG_W2K3_GM state inconsistent_copying bg_copy_priority 50 progress 38 freeze_time status online sync copy_type global IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL2 id 18 name GMREL2 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 18 master_vdisk_name GM_DBLog_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 5 aux_vdisk_name GM_DBLog_Sec primary master consistency_group_id 0
consistency_group_name CG_W2K3_GM state inconsistent_copying bg_copy_priority 50 progress 40 freeze_time status online sync copy_type global

When all of the Global Mirror relationships complete the background copy, the Consistency Group enters the Consistent synchronized state, as shown in Example 9-175.
Example 9-175 Listing the Global Mirror Consistency Group
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM id 0 name CG_W2K3_GM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary master state consistent_synchronized relationship_count 2 freeze_time status sync copy_type global RC_rel_id 17 RC_rel_name GMREL1 RC_rel_id 18 RC_rel_name GMREL2
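Output such as Example 9-175 is line-oriented key/value text, so a monitoring script can parse it directly. A minimal Python sketch (ours, not part of the product):

```python
def parse_lsrcconsistgrp(text):
    """Parse the 'key value' lines printed by svcinfo lsrcconsistgrp.

    Repeated RC_rel_name keys (one per member relationship) are
    collected under 'relationships'; for other repeated keys only the
    first occurrence is kept.
    """
    info = {"relationships": []}
    for line in text.splitlines():
        parts = line.split(None, 1)
        if not parts:
            continue
        key, value = parts[0], parts[1] if len(parts) > 1 else ""
        if key == "RC_rel_name":
            info["relationships"].append(value)
        elif key not in info:
            info[key] = value
    return info

sample = """\
id 0
name CG_W2K3_GM
primary master
state consistent_synchronized
relationship_count 2
RC_rel_id 17
RC_rel_name GMREL1
RC_rel_id 18
RC_rel_name GMREL2"""
group = parse_lsrcconsistgrp(sample)
```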
IBM_2145:ITSO-CLS1:admin>svctask stoprcrelationship -access GMREL3 IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL3 id 16 name GMREL3 master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1 master_vdisk_id 16 master_vdisk_name GM_App_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 3 aux_vdisk_name GM_App_Sec primary consistency_group_id consistency_group_name state idling bg_copy_priority 50 progress freeze_time status sync in_sync copy_type global
IBM_2145:ITSO-CLS1:admin>svctask stoprcconsistgrp CG_W2K3_GM IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM id 0 name CG_W2K3_GM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary master state consistent_stopped relationship_count 2 freeze_time status sync in_sync copy_type global RC_rel_id 17 RC_rel_name GMREL1 RC_rel_id 18 RC_rel_name GMREL2

If, afterwards, we want to enable access (write I/O) for the secondary volume, we can reissue the svctask stoprcconsistgrp command specifying the -access parameter. The Consistency Group transits to the Idling state, as shown in Example 9-178.
Example 9-178 Stopping a Global Mirror Consistency Group
name CG_W2K3_GM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary state idling relationship_count 2 freeze_time status sync in_sync copy_type global RC_rel_id 17 RC_rel_name GMREL1 RC_rel_id 18 RC_rel_name GMREL2
IBM_2145:ITSO-CLS1:admin>svctask startrcrelationship -primary master -force GMREL3 IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL3 id 16 name GMREL3 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 16 master_vdisk_name GM_App_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 3 aux_vdisk_name GM_App_Sec primary master consistency_group_id consistency_group_name state consistent_synchronized bg_copy_priority 50 progress freeze_time status online sync copy_type global
IBM_2145:ITSO-CLS1:admin>svctask startrcconsistgrp -primary aux CG_W2K3_GM IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM id 0 name CG_W2K3_GM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary aux state consistent_synchronized relationship_count 2 freeze_time status sync copy_type global RC_rel_id 17 RC_rel_name GMREL1 RC_rel_id 18 RC_rel_name GMREL2
Important: When the copy direction is switched, it is crucial that there is no outstanding I/O to the volume that transits from primary to secondary, because all I/O will be inhibited to that volume when it becomes the secondary. Therefore, careful planning is required prior to using the svctask switchrcrelationship command.
Example 9-181 Switching the copy direction for a Global Mirror relationship
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL3 id 16 name GMREL3 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 16 master_vdisk_name GM_App_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 3 aux_vdisk_name GM_App_Sec primary master consistency_group_id consistency_group_name state consistent_synchronized bg_copy_priority 50 progress freeze_time status online sync copy_type global IBM_2145:ITSO-CLS1:admin>svctask switchrcrelationship -primary aux GMREL3 IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL3 id 16 name GMREL3 master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 master_vdisk_id 16 master_vdisk_name GM_App_Pri aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 aux_vdisk_id 3 aux_vdisk_name GM_App_Sec primary aux consistency_group_id consistency_group_name state consistent_synchronized bg_copy_priority 50 progress freeze_time status online sync copy_type global
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM id 0 name CG_W2K3_GM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary master state consistent_synchronized relationship_count 2 freeze_time status sync copy_type global RC_rel_id 17 RC_rel_name GMREL1 RC_rel_id 18 RC_rel_name GMREL2 IBM_2145:ITSO-CLS1:admin>svctask switchrcconsistgrp -primary aux CG_W2K3_GM IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM id 0 name CG_W2K3_GM master_cluster_id 000002006AE04FC4 master_cluster_name ITSO-CLS1 aux_cluster_id 0000020063E03A38 aux_cluster_name ITSO-CLS4 primary aux state consistent_synchronized relationship_count 2 freeze_time status sync copy_type global RC_rel_id 17 RC_rel_name GMREL1 RC_rel_id 18 RC_rel_name GMREL2
#datapath query adapter
Active Adapters :2
Adpt#  Name    State   Mode    Select  Errors  Paths  Active
0      fscsi0  NORMAL  ACTIVE  1445    0       4      4
1      fscsi1  NORMAL  ACTIVE  1888    0       4      4
#datapath query device
Total Devices : 2
DEV#: 0 DEVICE NAME: vpath0 TYPE: 2145 POLICY: Optimized
SERIAL: 60050768018201BF2800000000000000
==========================================================================
Path#  Adapter/Hard Disk  State  Mode    Select  Errors
0      fscsi0/hdisk3      OPEN   NORMAL  0       0
1      fscsi1/hdisk7      OPEN   NORMAL  972     0
DEV#: 1 DEVICE NAME: vpath1 TYPE: 2145 POLICY: Optimized
SERIAL: 60050768018201BF2800000000000002
==========================================================================
Path#  Adapter/Hard Disk  State  Mode    Select  Errors
0      fscsi0/hdisk4      OPEN   NORMAL  784     0
1      fscsi1/hdisk8      OPEN   NORMAL  0       0

Write-through mode: During a software upgrade, there are periods when not all of the nodes in the cluster are operational and, as a result, the cache operates in write-through mode. Note that write-through mode has an effect on the throughput, latency, and bandwidth aspects of performance.

Verify that your uninterruptible power supply unit configuration is also set up correctly (even if your cluster is running without problems). Specifically, make sure that the following conditions are true:
- Your uninterruptible power supply units all get their power from an external source, and they are not daisy chained. Make sure that each uninterruptible power supply unit is not supplying power to another node's uninterruptible power supply unit.
- The power cable and the serial cable that come from each node go back to the same uninterruptible power supply unit. If the cables are crossed and go back to separate uninterruptible power supply units, then during the upgrade, while one node is shut down, another node might also mistakenly be shut down.

Important: Do not share the SVC uninterruptible power supply unit with any other devices.

You must also ensure that all I/O paths are working for each host that runs I/O operations to the SAN during the software upgrade. You can check the I/O paths by using the datapath query commands.
You do not need to check for hosts that have no active I/O operations to the SAN during the software upgrade.
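The path check can be automated by parsing the datapath query device output. The following Python sketch (ours; a real script would capture the command output instead of the inline sample) flags devices with fewer than two OPEN paths:

```python
def count_open_paths(datapath_output):
    """Count OPEN paths per vpath device in 'datapath query device' output."""
    counts, dev = {}, None
    for line in datapath_output.splitlines():
        if "DEVICE NAME:" in line:
            # e.g. 'DEV#: 0 DEVICE NAME: vpath0 TYPE: 2145 ...'
            dev = line.split("DEVICE NAME:")[1].split()[0]
            counts[dev] = 0
        elif dev is not None and " OPEN " in line:
            counts[dev] += 1
    return counts

sample = """\
DEV#: 0 DEVICE NAME: vpath0 TYPE: 2145 POLICY: Optimized
Path# Adapter/Hard Disk State Mode Select Errors
0 fscsi0/hdisk3 OPEN NORMAL 0 0
1 fscsi1/hdisk7 OPEN NORMAL 972 0
DEV#: 1 DEVICE NAME: vpath1 TYPE: 2145 POLICY: Optimized
0 fscsi0/hdisk4 OPEN NORMAL 784 0
1 fscsi1/hdisk8 OPEN NORMAL 0 0"""
paths = count_open_paths(sample)
degraded = [d for d, n in paths.items() if n < 2]
```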
Procedure
To upgrade the SVC cluster software, perform the following steps: 1. Before starting the upgrade, you must back up the configuration (see 9.16, Backing up the SVC cluster configuration on page 572) and save the backup config file in a safe place. 2. Save the data collection for support diagnosis in case of problems, as shown in Example 9-184.
Example 9-184 svc_snap command
IBM_2145:ITSO-CLS1:admin>svc_snap -dumpall Collecting system information... Copying files, please wait... Copying files, please wait... Listing files, please wait... Copying files, please wait... Listing files, please wait... Copying files, please wait... Listing files, please wait... Dumping error log... Creating snap package... Snap data collected in /dumps/snap..100921.151239.tgz 3. List the dump that was generated by the previous command, as shown in Example 9-185.
Example 9-185 svcinfo ls2145dumps command
IBM_2145:ITSO-CLS4:admin>svcinfo ls2145dumps id 2145_filename 0 dump.104603.080801.161333 1 snap.104603.080815.153527.tgz 2 svc.config.cron.bak_n1 3 svc.config.cron.bak_node3 ... (rows above and below removed for brevity) ... 47 svc.config.cron.bak_104603 48 svc.config.cron.log_104603 49 svc.config.cron.xml_104603 50 svc.config.cron.sh_104603 51 snap..100921.151239.tgz 52 ups_log.b 53 ups_log.a 4. Save the generated dump in a safe place using the pscp command, as shown in Example 9-186 on page 560. Note: The pscp command will not work if you have not loaded your PuTTY SSH private key into the PuTTY Pageant agent, as shown in Figure 9-13.
C:\>pscp -load ITSOCL4 admin@10.18.229.84:/dumps/snap..100921.151239.tgz c:\dumps\snap..100921.151239.tgz | 597 kB | 597.7 kB/s | ETA: 00:00:00 | 100%

5. Upload the new software package using PuTTY Secure Copy. Enter the command as shown in Example 9-187.
Example 9-187 pscp -load command
C:\>pscp -load ITSOCL4 c:\IBM2145_INSTALL_6.1.0.0 admin@10.18.229.84:/home/admin/upgrade 100921a.tgz.gpg.gz | 300143 kB | 1389.6 kB/s | ETA: 00:00:00 | 100% c. Upload the SAN Volume Controller Software Upgrade Test Utility by using PuTTY Secure Copy. Enter the command as shown in Example 9-188.
Example 9-188 Upload utility
C:\>pscp -load ITSOCL4 IBM2145_INSTALL_svcupgradetest_4.1 admin@10.18.229.84:/home/admin/upgrade IBM2145_INSTALL_svcupgrad | 11 kB | 12.0 kB/s | ETA: 00:00:00 | 100% 6. Verify that the packages were successfully delivered through the PuTTY command-line application by entering the svcinfo lssoftwaredumps command, as shown in Example 9-189.
Example 9-189 svcinfo lssoftwaredumps command
IBM_2145:ITSO-CLS4:admin>svcinfo lssoftwaredumps id software_filename 0 IBM2145_INSTALL_6.1.0.0 1 IBM2145_INSTALL_svcupgradetest_4.1 7. Now that the packages are uploaded, install the SAN Volume Controller Software Upgrade Test Utility, as shown in Example 9-190 on page 561.
IBM_2145:ITSO-CLS4:admin>svctask applysoftware -file IBM2145_INSTALL_svcupgradetest_4.1 CMMVC6227I The package installed successfully. 8. Using the following command, test the upgrade for known issues that might prevent a software upgrade from completing successfully, as shown in Example 9-191.
Example 9-191 svcupgradetest command
IBM_2145:ITSO-CLS4:admin>svcupgradetest
svcupgradetest version 4.1
Please wait while the tool tests for issues that may prevent a software upgrade from completing successfully. The test will take approximately one minute to complete.
The test has not found any problems with the 2145 cluster.
Please proceed with the software upgrade.
Important: If the svcupgradetest command produces any errors, troubleshoot the errors using the maintenance procedures before continuing.
9. Use the svctask command set to apply the software upgrade, as shown in Example 9-192.
Example 9-192 Apply upgrade command example
IBM_2145:ITSO-CLS4:admin>svctask applysoftware -file IBM2145_INSTALL_6.1.0.0
While the upgrade runs, you can check the status as shown in Example 9-193.
Example 9-193 Check update status
IBM_2145:ITSO-CLS4:admin>svcinfo lssoftwareupgradestatus
status
upgrading
10. The new code is distributed and applied to each node in the SVC cluster. After installation, each node is automatically restarted one at a time. If a node does not restart automatically during the upgrade, you must repair it manually.
11. Eventually, both nodes display Cluster: on line one of the SVC front panel and the name of your cluster on line two of the panel. Be prepared for a wait (in our case, we waited approximately 40 minutes).
Performance: During this process, both the CLI and the GUI vary from sluggish (slow) to unresponsive. The important thing is that I/O to the hosts can continue throughout this process.
12. To verify that the upgrade was successful, you can perform either of the following options: You can run the svcinfo lscluster and svcinfo lsnodevpd commands as shown in Example 9-194. (We truncated the lscluster and lsnodevpd information for this example.)
Example 9-194 svcinfo lscluster and svcinfo lsnodevpd commands
IBM_2145:ITSO-CLS4:admin>svcinfo lscluster ITSO-CLS4
id 0000020060A06FB8
name ITSO-CLS4
location local
partnership
bandwidth
total_mdisk_capacity 251.0GB
space_in_mdisk_grps 251.0GB
space_allocated_to_vdisks 27.00GB
total_free_space 224.0GB
statistics_status on
statistics_frequency 15
required_memory 0
cluster_locale en_US
time_zone 344 Etc/GMT-7
code_level 6.1.0.0 (build 47.6.1009140000)
FC_port_speed 2Gb
...
tier_free_capacity 199.75GB
email_contact2
email_contact2_primary
email_contact2_alternate
total_allocated_extent_capacity 28.50GB
IBM_2145:ITSO-CLS4:admin>svcinfo lsnodevpd 1
id 1
system board: 21 fields
part_number 31P1090
...
software: 4 fields
id 1
node_name n104603
WWNN 0x50050768010037DC
code_level 6.1.0.0 (build 47.6.1009140000)
Or you can check whether the code installation has completed without error by copying the log to your management workstation, as explained in 9.15.2, Running maintenance procedures on page 562. Open the event log in WordPad and search for the Software Install completed. message. At this point, you have completed the required tasks to upgrade the SVC software.
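The status check from Example 9-193 can be scripted while you wait for the upgrade to finish. The sketch below is illustrative only: the plink invocation in the comment reuses the ITSOCL4 profile and 10.18.229.84 address from the earlier examples, and the query itself is simulated so the loop logic can run without a cluster.

```shell
#!/bin/sh
# Sketch: poll until the cluster no longer reports "upgrading".
# In a live run the status would come from PuTTY's plink, for example:
#   plink -load ITSOCL4 admin@10.18.229.84 "svcinfo lssoftwareupgradestatus"
# Here the query is simulated: the upgrade "finishes" on the third poll.
POLLS=0
status=upgrading
while [ "$status" = "upgrading" ]; do
    POLLS=$((POLLS + 1))
    if [ "$POLLS" -lt 3 ]; then
        status=upgrading      # simulated: still running
    else
        status=inactive       # simulated: upgrade complete
    fi
    # in a real script: sleep 60 between polls
done
echo "status $status after $POLLS polls"
```

In a real script, replace the simulated branch with the plink call and keep the sleep between polls so you do not hammer the configuration node while it is busy.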
IBM_2145:ITSO-CLS4:admin>svctask dumperrlog
This command generates an errlog_timestamp file, such as errlog_107662_100921_170547, where:
errlog is part of the default prefix for all event log files.
107662 is the panel name of the current configuration node.
100921 is the date (YYMMDD).
170547 is the time (HHMMSS).
You can add the -prefix parameter to your command to change the default prefix of errlog to something else (Example 9-196).
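The fixed underscore-delimited layout described above makes these file names easy to split on the management workstation with nothing but POSIX parameter expansion. A minimal sketch using the sample name:

```shell
#!/bin/sh
# Split an event-log dump file name of the form
#   <prefix>_<panel>_<YYMMDD>_<HHMMSS>
# into its components using POSIX parameter expansion.
f="errlog_107662_100921_170547"

time=${f##*_}          # text after the last underscore -> 170547
rest=${f%_*}           # drop the time                  -> errlog_107662_100921
date=${rest##*_}       # date field                     -> 100921
rest=${rest%_*}        #                                -> errlog_107662
panel=${rest##*_}      # panel name                     -> 107662
prefix=${rest%_*}      # prefix                         -> errlog

echo "prefix=$prefix panel=$panel date=$date time=$time"
```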
Example 9-196 svctask dumperrlog -prefix command
IBM_2145:ITSO-CLS4:admin>svctask dumperrlog -prefix ITSO-SVC4_errlog
This command creates a file called ITSO-SVC4_errlog_timestamp. To see the file name, enter the following command (Example 9-197).
Example 9-197 svcinfo lserrlogdumps command
IBM_2145:ITSO-CLS4:admin>svcinfo lserrlogdumps
id filename
0 errlog_107662_100921_170547
1 ITSO-SVC4_errlog_107662_100921_170648
Maximum number of event log dump files: A maximum of ten event log dump files per node will be kept on the cluster. When the eleventh dump is made, the oldest existing dump file for that node is overwritten. Note that the directory might also hold log files retrieved from other nodes; these files are not counted. The SVC deletes the oldest file (when necessary) for this node to maintain the maximum number of files, but it does not delete files from other nodes unless you issue the cleandumps command.
After you generate your event log, you can issue the svctask finderr command to scan the event log for any unfixed events, as shown in Example 9-198.
Example 9-198 svctask finderr command
IBM_2145:ITSO-CLS4:admin>svctask finderr
Highest priority unfixed error code is [1550]
As you can see, we have one unfixed event on our system. To analyze this event in more detail, download the event log onto your PC. Use the PuTTY Secure Copy process to copy the file from the cluster to your local management workstation, as shown in Example 9-199.
Example 9-199 pscp command: Copy event logs off of the SVC
In W2K3, select Start -> Run and enter cmd.
C:\Program Files\PuTTY>pscp -load ITSO-CLS4 admin@10.18.229.84:/dumps/elogs/ITSO-SVC4_errlog_107662_100921_170648 c:\ITSO-SVC4_errlog_107662_100921_170648
ITSO-SVC4_errlog_107662_100921_170648 | 147 kB | 147.8 kB/s | ETA: 00:00:00 | 100%
To use the Run option, you must know where your pscp.exe file is located. In this case, it is in the C:\Program Files\PuTTY\ folder. This command copies the file called ITSO-SVC4_errlog_107662_100921_170648 to the C:\ directory on our local workstation, keeping the same file name. Open the file in WordPad (Notepad does not format the output as well). You will see information similar to what is shown in Example 9-200. (We truncated this list for the purposes of this example.)
Example 9-200 errlog in WordPad
Error Log Entry 0
Node Identifier       : n104603
Object Type           : node
Object ID             : 1
Copy ID               :
Sequence Number       : 101
Root Sequence Number  : 101
First Error Timestamp : Fri Sep 17 01:20:06 2010
                      : Epoch + 1284661206
Last Error Timestamp  : Fri Sep 17 01:20:06 2010
                      : Epoch + 1284661206
Error Count           : 1
Error ID              : 980221 : Error log cleared
Error Code            :
Status Flag           : SNMP trap raised
Type Flag             : INFORMATION
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
(remaining sense-data bytes removed for brevity)
By scrolling through, or by searching for the term unfixed, you can find more detail about the problem. You might see more entries in the error log that have a status of unfixed. After rectifying the problem, you can mark the event as fixed in the log by issuing the svctask cherrstate command against its sequence number; see Example 9-201.
Example 9-201 svctask cherrstate command
IBM_2145:ITSO-CLS4:admin>svctask cherrstate -sequencenumber 37404
If you accidentally mark the wrong event as fixed, you can mark it as unfixed again by entering the same command and appending the -unfix flag to the end, as shown in Example 9-202 on page 565.
IBM_2145:ITSO-CLS4:admin>svctask mksnmpserver -error on -warning on -info on -ip 9.43.86.160 -community SVC
SNMP Server id [1] successfully created
This command sends all events and warnings to the SVC community on the SNMP manager with the IP address 9.43.86.160.
IBM_2145:ITSO-CLS4:admin>svctask mksyslogserver -ip 10.64.210.231 -name Syslogserv1
Syslog Server id [1] successfully created
When we have configured our syslog server, we can display the current syslog server configurations in our cluster, as shown in Example 9-205.
Example 9-205 svcinfo lssyslogserver command
IBM_2145:ITSO-CLS4:admin>svcinfo lssyslogserver
id name        IP_address    facility error warning info
0  Syslogsrv   10.64.210.230 4        on    on      on
1  Syslogserv1 10.64.210.231 0        on    on      on
IBM_2145:ITSO-CLS4:admin>svctask mkemailserver -ip 192.168.1.1
Email Server id [0] successfully created
IBM_2145:ITSO-CLS4:admin>svcinfo lsemailserver 0
id 0
name emailserver0
IP_address 192.168.1.1
port 25
We can configure an email user that will receive email notifications from the SVC cluster. We can define up to 12 users to receive emails from our SVC. Using the svcinfo lsemailuser command, we can verify who is already registered and what type of information is sent to that user, as shown in Example 9-207.
Example 9-207 svcinfo lsemailuser command
IBM_2145:ITSO-CLS4:admin>svcinfo lsemailuser
id name               address              user_type error warning info inventory
0  IBM_Support_Center callhome0@de.ibm.com support   on    off     off  on
We can also create a new user for a SAN administrator, as shown in Example 9-208.
Example 9-208 svctask mkemailuser command
IBM_2145:ITSO-CLS4:admin>svctask mkemailuser -address SANadmin@ibm.com -error on -warning on -info on -inventory on
User, id [1], successfully created
Critical: errors that put the node into service state and prevent the node from joining the cluster (error codes 500 - 699).
Note: Deleting a node from a cluster also causes the node to enter service state.
Non-critical: partial hardware faults, for example, one PSU failed in a 2145-CF8 (error codes 800 - 899).
To display the event log, use the svcinfo lserrlog command or the svcinfo caterrlog command, as shown in Example 9-209 (the output is the same for either command).
Example 9-209 svcinfo caterrlog command
IBM_2145:ITSO-CLS4:admin>svcinfo caterrlog -first 10 -delim :
id:type:fixed:SNMP_trap_raised:error_type:node_name:sequence_number:root_sequence_number:first_timestamp:last_timestamp:number_of_errors:error_code:copy_id
0:cluster:no:yes:6:n104603:147:143:100922010005:100922010005:1:00981004:
0:cluster:no:no:5:n107662:0:0:100922010002:100922010002:1:00990203:
0:cluster:no:no:5:n107662:0:0:100921170859:100921170859:1:00990219:
0:cluster:no:no:5:n107662:0:0:100921170648:100921170648:1:00990220:
0:cluster:no:no:5:n107662:0:0:100921170547:100921170547:1:00990220:
0:cluster:no:no:5:n107662:0:0:100921170515:100921170515:1:00990219:
0:cluster:no:yes:6:n104603:146:143:100921165929:100921165929:1:00981003:
0:cluster:no:no:5:n107662:0:0:100921165909:100921165909:1:00990415:
1:node:no:yes:6:n104603:145:145:100921165904:100921165904:1:00987102:
1:node:no:yes:6:n107662:144:144:100921165904:100921165904:1:00980349:
These commands allow you to view the last 10 events that were generated. Use the method described in 9.15.2, Running maintenance procedures on page 562 to upload and analyze the event log in more detail. To clear the event log, you can issue the svctask clearerrlog command, as shown in Example 9-210.
Example 9-210 svctask clearerrlog command
IBM_2145:ITSO-CLS4:admin>svctask clearerrlog
Do you really want to clear the log? y
Using the -force flag stops any confirmation requests from appearing. When executed, this command clears all of the entries from the event log. This process proceeds even if there are unfixed errors in the log. It also clears any status events that are in the log.
Note: This command is destructive to the event log. Only use it when you have rebuilt the cluster, or when you have fixed a major problem that caused many entries in the event log that you do not want to fix manually.
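Because caterrlog emits one colon-delimited record per line, output like Example 9-209 is easy to post-process on the management workstation. The sketch below counts unfixed events per error code (field 3 is fixed and field 12 is error_code, per the header); the sample rows are copied from that example's output.

```shell
#!/bin/sh
# Count unfixed events per error code in colon-delimited caterrlog output.
result=$(awk -F: '$3 == "no" { count[$12]++ }
                  END { for (c in count) print c, count[c] }' <<'EOF' | sort
0:cluster:no:yes:6:n104603:147:143:100922010005:100922010005:1:00981004:
0:cluster:no:no:5:n107662:0:0:100922010002:100922010002:1:00990203:
0:cluster:no:no:5:n107662:0:0:100921170859:100921170859:1:00990219:
1:node:no:yes:6:n104603:145:145:100921165904:100921165904:1:00987102:
EOF
)
echo "$result"
```

In practice you would feed the real listing in through a pipe (for example, from plink) instead of the here-document used here for an offline run.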
IBM_2145:ITSO-CLS4:admin>svcinfo lslicense
used_flash 0.01
used_remote 0.01
used_virtualization 0.25
license_flash 5
license_remote 5
license_virtualization 5
license_physical_disks 0
license_physical_flash off
license_physical_remote off
The current license settings for the cluster are displayed in the viewing license settings log window. These settings show whether you are licensed to use the FlashCopy, Metro Mirror, Global Mirror, or Virtualization features. They also show the storage capacity that is licensed for virtualization. Typically, the license settings log contains entries, because feature options must be set as part of the web-based cluster creation process. Consider, for example, that you have purchased an additional 5 TB of Metro Mirror and Global Mirror licensing on top of your existing 20 TB license. Example 9-212 shows the command that you enter.
Example 9-212 svctask chlicense command
IBM_2145:ITSO-CLS1:admin>svctask chlicense -remote 25 To turn a feature off, add 0 TB as the capacity for the feature that you want to disable. To verify that the changes you have made are reflected in your SVC configuration, you can issue the svcinfo lslicense command as before (see Example 9-213).
Example 9-213 svcinfo lslicense command: Verifying changes
IBM_2145:ITSO-CLS4:admin>svcinfo lslicense
used_flash 0.01
used_remote 0.01
used_virtualization 0.25
license_flash 5
license_remote 25
license_virtualization 5
license_physical_disks 0
license_physical_flash off
license_physical_remote off
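The key/value layout of lslicense output also lends itself to scripting. A minimal sketch that pairs each used_* value with its license_* counterpart, using the sample values from Example 9-213 (the license_physical_* lines are omitted here for simplicity):

```shell
#!/bin/sh
# Summarize used versus licensed capacity per feature from lslicense output.
result=$(awk '
    /^used_/    { used[substr($1, 6)] = $2 }   # strip the "used_" prefix
    /^license_/ { lic[substr($1, 9)]  = $2 }   # strip the "license_" prefix
    END {
        for (f in used)
            if (f in lic)
                printf "%s %.2f/%s TB\n", f, used[f], lic[f]
    }' <<'EOF' | sort
used_flash 0.01
used_remote 0.01
used_virtualization 0.25
license_flash 5
license_remote 25
license_virtualization 5
EOF
)
echo "$result"
```

A monitoring script could extend this with a numeric comparison to warn when any used value approaches its licensed limit.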
lssoftwaredumps
ls2145dumps
If no node is specified, these commands list the dumps that are available on the configuration node.
The command to list all of the dumps in the /dumps/iostats directory is the svcinfo lsiostatsdumps command (Example 9-216).
Example 9-216 svcinfo lsiostatsdumps command
IBM_2145:ITSO-CLS4:admin>svcinfo lsiostatsdumps
id iostat_filename
0 Nd_stats_107662_100922_061303
1 Nv_stats_107662_100922_061303
2 Nm_stats_107662_100922_061303
3 Nn_stats_107662_100922_061303
4 Nd_stats_107662_100922_062801
5 Nm_stats_107662_100922_062801
........
Software dumps
The svcinfo lssoftwaredumps command lists the contents of the /home/admin/upgrade directory. Any files in this directory were copied there when you performed a software upgrade. Example 9-218 shows the command.
Example 9-218 svcinfo lssoftwaredumps
However, files can only be copied from the current configuration node (using PuTTY Secure Copy). Therefore, you must issue the svctask cpdumps command to copy the files from a non-configuration node to the current configuration node. Subsequently, you can copy them to the management workstation using PuTTY Secure Copy.
For example, suppose you discover a dump file and want to copy it to your management workstation for further analysis. In this case, you must first copy the file to your current configuration node.
To copy dumps from other nodes to the configuration node, use the svctask cpdumps command. In addition to the directory, you can specify a file filter. For example, if you specify /dumps/elogs/*.txt, all of the files in the /dumps/elogs directory that end in .txt are copied.
Wildcards: The following rules apply to the use of wildcards with the SAN Volume Controller CLI:
The wildcard character is an asterisk (*).
The command can contain a maximum of one wildcard.
When you use a wildcard, you must surround the filter entry with double quotation marks (""), for example:
>svctask cleardumps -prefix "/dumps/elogs/*.txt"
Example 9-219 shows an example of the cpdumps command.
Example 9-219 svctask cpdumps command
IBM_2145:ITSO-CLS4:admin>svctask cpdumps -prefix /dumps/configs n4
Now that you have copied the configuration dump file from node n4 to your configuration node, you can use PuTTY Secure Copy to copy the file to your management workstation for further analysis.
To clear the dumps, you can run the svctask cleardumps command. Again, you can append the node name if you want to clear dumps from a node other than the current configuration node (the default for the svctask cleardumps command). The commands in Example 9-220 clear all logs and dumps from the SVC node n1.
Example 9-220 svctask cleardumps command
IBM_2145:ITSO-CLS4:admin>svctask cleardumps -prefix /dumps n1
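The double-quotation-mark rule for wildcards exists because an unquoted pattern can be expanded by the local shell on the management workstation before it ever reaches the SVC CLI. This small demonstration uses echo as a stand-in for the CLI command:

```shell
#!/bin/sh
# Show when the local shell expands a wildcard before the command sees it.
d=$(mktemp -d)
cd "$d" || exit 1
touch a.txt b.txt                # local files on the "workstation"

quoted=$(echo "missing/*.txt")   # quotes suppress expansion: literal pattern
literal=$(echo missing/*.txt)    # no local match: passed through unchanged
expanded=$(echo *.txt)           # local match: becomes "a.txt b.txt"

echo "quoted:   $quoted"
echo "no match: $literal"
echo "expanded: $expanded"
cd / && rm -r "$d"
```

The last case is the dangerous one: an unquoted `*.txt` that happens to match local files sends those file names, not the filter, to the cluster.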
In addition to the dump file, trace files can be written to this directory. These trace files are named NNNNNN.trc. The command to list all of the dumps in the /dumps directory is the svcinfo ls2145dumps command; see Example 9-221.
Example 9-221 svcinfo ls2145dumps command
IBM_2145:ITSO-CLS4:admin>svcinfo ls2145dumps
id 2145_filename
0 107662.100917.062357.ups_log.tar.gz
1 107662.100917.141406.ups_log.tar.gz
2 107662.100917.144108.ups_log.tar.gz
3 dump.107662.100917.144514
4 000000.trc
5 ethernet.000000.trc
6 dump.107662.100917.145246
7 107662.100917.151723.ups_log.tar.gz
8 svc.config.cron.bak_104603
9 107662.100921.160923.ups_log.tar.gz
10 endd.trc.old
11 dpa_log_107662_20100921164920_00000000.xml.gz
12 ethernet.107662.trc
13 endd.trc
14 107662.trc
15 svc.config.cron.sh_107662
16 svc.config.cron.log_107662
17 svc.config.cron.xml_107662
18 dpa_heat.107662.100922.084254.data
19 ups_log.a
20 ups_log.b
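Dump directories like this accumulate files over time. As noted in the event-log section, the cluster keeps at most ten event-log dumps per node and overwrites the oldest when an eleventh is made. That pruning rule can be sketched offline in shell; this simulation relies only on the zero-padded YYMMDD_HHMMSS fields, which make the names sort chronologically:

```shell
#!/bin/sh
# Simulate "keep the 10 newest dumps for one node, delete the oldest".
d=$(mktemp -d)
i=1
while [ "$i" -le 11 ]; do
    # create 11 dumps with increasing timestamps (170501 .. 170511)
    touch "$d/errlog_107662_100921_1705$(printf '%02d' "$i")"
    i=$((i + 1))
done

count=$(ls "$d" | grep -c '')          # 11 files created
excess=$((count - 10))
if [ "$excess" -gt 0 ]; then
    ls "$d" | sort | head -n "$excess" | while read -r old; do
        rm "$d/$old"                   # delete the oldest dumps first
    done
fi

remaining=$(ls "$d" | grep -c '')
oldest=$(ls "$d" | sort | head -n 1)
echo "remaining=$remaining oldest=$oldest"
rm -r "$d"
```

This is only an illustration of the cluster's behavior; on the SVC itself the pruning is automatic and you clear files explicitly with svctask cleardumps.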
Also, an object that had a default name is restored under its original name with _r appended. The underscore (_) prefix is reserved for backup and restore command usage; do not use this prefix in any object names.
Important: The tool backs up logical configuration data only, not client data. It does not replace a traditional data backup and restore tool, but supplements one with a way to back up and restore the client's configuration. To provide a complete backup and disaster recovery solution, you must back up both user (non-configuration) data and configuration (non-user) data. After the restoration of the SVC configuration, you must fully restore user (non-configuration) data to the cluster's disks.
9.16.1 Prerequisites
You must have the following prerequisites in place:
All nodes must be online.
No object name can begin with an underscore.
All objects must have non-default names, that is, names that were not assigned by the SVC.
Although we advise that objects have non-default names at the time that the backup is taken, this prerequisite is not mandatory. Objects with default names are renamed when they are restored.
Example 9-222 shows an example of the svcconfig backup command.
Example 9-222 svcconfig backup command
IBM_2145:ITSO-CLS4:admin>svcconfig backup
............
CMMVC6130W Cluster ITSO-CLS1 with inter-cluster partnership fully_configured will not be restored
CMMVC6130W Cluster ITSO-CLS2 with inter-cluster partnership fully_configured will not be restored
.........................................................................................................
CMMVC6155I SVCCONFIG processing completed successfully
As you can see in Example 9-222, we received a CMMVC6130W Cluster ITSO-CLS1 with inter-cluster partnership fully_configured will not be restored message. This message indicates that individual clusters in a multicluster environment must be backed up individually. If recovery is required, it is performed only on the cluster where the recovery commands are executed. Example 9-223 shows the pscp command.
Example 9-223 pscp command
C:\Program Files\PuTTY>pscp -load ITSO-CLS4 admin@10.18.229.84:/tmp/svc.config.backup.xml c:\temp\clibackup.xml
clibackup.xml | 97 kB | 97.2 kB/s | ETA: 00:00:00 | 100%
The following scenario illustrates the value of configuration backup: 1. Use the svcconfig command to create a backup file on the cluster that contains details about the current cluster configuration. 2. Store the backup configuration on a form of tertiary storage. You must copy the backup file from the cluster or it becomes lost if the cluster crashes. 3. If a sufficiently severe failure occurs, the cluster might be lost. Both the configuration data (for example, the cluster definitions of hosts, I/O Groups, MDGs, and MDisks) and the application data on the virtualized disks are lost. In this scenario, it is assumed that the application data can be restored from normal client backup procedures. However, before you can perform this restoration, you must reinstate the cluster as it was configured at the time of the failure. Therefore, you restore the same MDGs, I/O Groups, host definitions, and volumes that existed prior to the failure. Then you can copy the application data back onto these volumes and resume operations. 4. Recover the hardware: hosts, SVCs, disk controller systems, disks, and SAN fabric. The hardware and SAN fabric must physically be the same as the hardware and SAN fabric that were used before the failure. 5. Reinitialize the cluster with the configuration node; the other nodes will be recovered when restoring the configuration. 6. Restore your cluster configuration using the backup configuration file that was generated prior to the failure. 7. Restore the data on your volumes using your preferred restoration solution or with help from IBM Service. 8. Resume normal operations.
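Steps 1 and 2 of this scenario are natural candidates for a scheduled script. The sketch below stubs plink and pscp with echo so it can run offline; the ITSO-CLS4 profile, the 10.18.229.84 address, and the dated target file name are placeholders taken from the earlier examples, not a prescribed layout.

```shell
#!/bin/sh
# Sketch: create the configuration backup on the cluster, then copy it
# off with a dated name for tertiary storage. plink/pscp are stubbed so
# the flow runs without a cluster; remove the "echo " for a real run.
PLINK="echo plink"   # real: PLINK="plink"
PSCP="echo pscp"     # real: PSCP="pscp"

STAMP=$(date -u +%y%m%d)
step1=$($PLINK -load ITSO-CLS4 admin@10.18.229.84 "svcconfig backup")
step2=$($PSCP -load ITSO-CLS4 \
    admin@10.18.229.84:/tmp/svc.config.backup.xml \
    "svc.config.backup.$STAMP.xml")
echo "$step1"
echo "$step2"
```

The dated file name matters for step 2 of the scenario: each run leaves an independent copy off the cluster, so a later cluster loss cannot take the only backup with it.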
The svcconfig clear command clears all of the configuration backup files that are stored in the /tmp directory; see Example 9-224.
Example 9-224 svcconfig clear command
IBM_2145:ITSO-CLS4:admin>svcinfo lsquorum
quorum_index status id name   controller_id active
0            online 0  mdisk0 0             yes
1            online 1  mdisk1 0             no
2            online 2  mdisk2 0             no
IBM_2145:ITSO-CLS4:admin>svcinfo lsquorum 0
quorum_index 0
status online
id 0
name mdisk0
controller_id 0
controller_name ITSO-4700
active yes
object_type mdisk
IBM_2145:ITSO-CLS4:admin>svctask chquorum -mdisk 9 2
IBM_2145:ITSO-CLS4:admin>svcinfo lsquorum
quorum_index status id name controller_id controller_name active object_type
Chapter 9. SAN Volume Controller operations using the command-line interface
0            online 0  mdisk0 0 ITSO-4700 yes mdisk
1            online 1  mdisk1 0 ITSO-4700 no  mdisk
2            online 9  mdisk9 1 ITSO-XIV  no  mdisk
As you can see in Example 9-226 on page 575, quorum index 2 has been moved from mdisk2 on the ITSO-4700 controller to mdisk9 on the ITSO-XIV controller.
Example 9-227 shows the two new command sets that were introduced with the Service Assistant.
Example 9-227 sainfo and satask command
IBM_2145:ITSO-CLS4:admin>sainfo -h
The following actions are available with this command:
 lscmdstatus
 lsfiles
 lsservicenodes
 lsservicerecommendation
 lsservicestatus
IBM_2145:ITSO-CLS4:admin>satask -h
The following actions are available with this command:
 chenclosurevpd
 chnodeled
 chserviceip
 chwwnn
 cpfiles
 installsoftware
 leavecluster
 mkcluster
 rescuenode
 setlocale
 setpacedccu
 settempsshkey
 snap
 startservice
 stopnode
 stopservice
 t3recovery
Attention: Use the sainfo and satask command sets only under the direction of IBM Support. Incorrect use of these commands can lead to unexpected results.
IBM_2145:ITSO-CLS4:admin>svcinfo lsfabric
remote_wwpn      remote_nportid id node_name local_wwpn       local_nportid state    name    cluster_name type
50050768012027E2 620900         1  n104603   50050768011037DC 170600        active   n108283 ITSO-CLS1    node
50050768012027E2 620900         1  n104603   50050768012037DC 170700        active   n108283 ITSO-CLS1    node
50050768012027E2 620900         2  n107662   5005076801101D1C 170400        active   n108283 ITSO-CLS1    node
...rows above and below removed for brevity...
200700A0B84858A1 171B00         1  n104603   50050768014037DC 170500        inactive ITSO-4700            controller
200700A0B84858A1 171B00         1  n104603   50050768013037DC 170400        inactive ITSO-4700            controller
...rows removed for brevity; the full listing also shows logins to the ITSO-CLS3 nodes (remote_wwpn 5005076801101D22, remote_nportid 620200) and to host W2K3 (remote_wwpn 10000000C92B7F90, remote_nportid 171300)...
For more detail about the lsfabric command, see IBM System Storage SAN Volume Controller and Storwize V7000 Command-Line Interface User's Guide Version 6.1.0, GC27-2287.
Chapter 10.
The Welcome panel includes a dynamic menu in the left panel.
Dynamic menu
This new version of the SVC GUI includes a new dynamic menu located in the left column of the window. To navigate using this menu, move the mouse over the various icons and choose a page that you want to display, as shown in Figure 10-2 on page 581.
A non-dynamic version of this menu exists for slow connections. To access the non-dynamic menu, select Low graphics mode as shown in Figure 10-3.
Figure 10-4 on page 582 shows the non-dynamic version of the menu.
In this case, tabs in the upper part of the web page let you navigate between submenus. For example, in Figure 10-4, All Volumes, Volumes by Pool, and Volumes by Host are submenus (tabs) of the Volumes menu.
If there are issues on your cluster nodes, external storage, or remote partnerships, you will be informed here, as shown in Figure 10-7.
You will be able to fix the error using the Fix Error button, which will direct you to the troubleshooting panel.
The following information is displayed in this window. To view all of them, you need to use the left and right arrows: Allocated Capacity Free Capacity Physical Capacity Virtual Capacity Over-allocation
It also provides information about the recently completed tasks, as shown in Figure 10-10.
Table filtering
In most pages, in the upper right corner of the window, there is a search field to filter the elements, which is useful if the list of entries is too large to work with. Perform these steps to use search filtering: 1. Enter a value in the search box in the upper right corner of the window, as shown in Figure 10-11 on page 585.
2. Click the filter icon.
3. This function enables you to filter your table based on the column names. In this example, a volume list is displayed containing names that include DB2 somewhere in the name, as shown in Figure 10-12.
4. You can remove this filtered view by clicking Reset, as shown in Figure 10-13 on page 586.
Table information
With SVC 6.1, you are able to add or remove additional information in the tables available on most pages. As an example, in the All Volumes page we will add a column to our table. 1. Right-click the top part of the table; see Figure 10-14. A menu with all available columns appears.
2. Select the column that you want to add (or remove) from this table. In our example, we added the volume ID column as shown in Figure 10-15 on page 587.
3. You can repeat this process several times to create custom tables that meet your requirements.
Sorting
Regardless of whether you use filter options, you can sort the displayed data by clicking a column header, as shown in Figure 10-17. In this example, we sort the table by volume ID.
After we click the volume ID column, the table is sorted by volume ID as shown in Figure 10-18 on page 589.
Note: By repeatedly clicking a column, you can sort this table based on that column in ascending or descending order.
10.1.3 Help
To access online help, click the Help link in the upper right corner of any panel, as shown in Figure 10-19.
This action opens a new window where you can find help on different topics (see Figure 10-20).
2. Type the new name that you want to assign to the controller, and press Enter as shown in Figure 10-23.
Controller name: The name can consist of the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the underscore (_) character. The name can be between one and 63 characters in length. However, the name cannot start with a number, the dash or the underscore. 3. A task is launched to change the name of this Storage System. When it is completed, you can close this window. 4. The new name of your controller is displayed on the Disk Controller Systems panel.
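The naming rule in the note above can be checked before you type a name into the GUI. This sketch expresses the rule as a grep -E pattern (letters, digits, dash, underscore; 1 to 63 characters; no leading number, dash, or underscore); the sample names are arbitrary:

```shell
#!/bin/sh
# Validate an SVC object name against the documented naming rule.
valid_name() {
    printf '%s\n' "$1" | grep -Eq '^[A-Za-z][A-Za-z0-9_-]{0,62}$'
}

for n in ITSO-4700 _hidden 9front DS4500_pool; do
    if valid_name "$n"; then echo "valid:   $n"; else echo "invalid: $n"; fi
done
```

The same rule (minus the dash, per the Storage Pool note later in this chapter) applies to several object types, so a helper like this is reusable across rename scripts.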
3. The Discover devices task runs. 4. When the task is completed, click Close and see the new MDisks available.
You can add information (new columns) to the table, as explained in Table information on page 586. To retrieve more detailed information about a specific Storage Pool, select any Storage Pool in the left column. The top left corner of the panel, shown in Figure 10-26, contains the following information about this pool: Status Number of MDisks Number of volume copies Whether Easy Tier is active on this pool
The top right corner of this panel, shown in Figure 10-27 on page 594, contains the following information about the pool: Volume Allocation Used Capacity Virtual Capacity Capacity
The main part of this panel displays the MDisks that are present in this Storage Pool, as shown in Figure 10-28.
3. The Discover Device window is displayed. 4. Click Close to see the newly discovered MDisks.
2. The Create Storage Pools wizard opens. 3. On this first page, complete the following elements as shown in Figure 10-31 on page 596: a. You can specify a name for the Storage Pool as we have in Figure 10-31 on page 596. If you do not provide a name, the SVC automatically generates the name mdiskgrpx, where x is the ID sequence number that is assigned by the SVC internally. Storage Pool name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The name can be between one and 63 characters in length and is case sensitive, but it cannot start with a number or the word MDiskgrp because this prefix is reserved for SVC assignment only. b. You can also change the icon associated with this Storage Pool as shown in Figure 10-31 on page 596. c. If you expand the Advanced Settings box, you can specify: The Extent Size (256 MB by default) The Warning threshold, which sends a warning to the event log when the capacity is first exceeded (80% by default).
d. Click Next.
4. On this page (Figure 10-32), you are able to detect new MDisks by using Detect MDisks. For more information about this topic, see 10.4.3, Discovering MDisks on page 602. a. Select the MDisks that you want to add to this Storage Pool. Tip: To add multiple MDisks, hold down Ctrl and use your mouse to select the entries you want to add. b. Click Finish to complete the creation.
5. In the Storage Pools panel (Figure 10-33 on page 597), the new Storage Pool is displayed.
At this point, you have completed the tasks that are required to create a Storage Pool.
2. Type the new name that you want to assign to the Storage Pool and press Enter (Figure 10-35).
Storage Pools name: The name can consist of the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the underscore (_) character. The name can be between one and 63 characters in length. However, the name cannot start with a number, the dash or the underscore.
3. A task is launched to change the name of this pool. When it is completed, you can close this window. 4. From the Storage Pools panel, the new Storage Pool name is displayed.
2. In the Delete Pool window, click Delete to confirm that you want to delete the Storage Pool (Figure 10-37 on page 599). If there are MDisks and volumes within the Storage Pool that you are deleting, you must select the Delete all volumes, host mappings, and MDisks that are associated with this pool. option.
Attention: If you delete a Storage Pool by using the Delete all volumes, host mappings, and MDisks that are associated with this pool option, and volumes were associated with that Storage Pool, you will lose the data on your volumes because they are deleted before the Storage Pool. If you want to save your data, then migrate or mirror the volumes to another Storage Pool before you delete the Storage Pool previously assigned to the volumes.
10.3.7 Showing the volumes that are associated with a Storage Pool
To show the volumes that are associated with a Storage Pool, click Volumes and then click Volumes by Pool. For more information about this feature, see 10.7, Working with volumes on page 630.
To retrieve more detailed information about a specific MDisk, perform the following steps: 1. In the MDisks panel (Figure 10-38), right-click an MDisk. 2. As shown in Figure 10-39, click Properties.
3. For the selected MDisk, an overview is displayed showing its various parameters and dependent volumes; see Figure 10-40 on page 601.
Note: To obtain all information about the MDisk, select Show Details as shown in Figure 10-40.
4. Clicking Dependent Volumes displays information about volumes that reside on this MDisk, as shown in Figure 10-41. The volume panel is discussed in more detail in 10.7, Working with volumes on page 630.
Note: You can also right-click this MDisk as shown in Figure 10-39 on page 600 and select Rename from the list.
3. In the Rename MDisk window (Figure 10-43), type the new name that you want to assign to the MDisk and click Rename.
MDisk name: The name can consist of the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the underscore (_) character. The name can be between one and 63 characters in length.
The Discover Device window is displayed.
3. When the task is completed, click Close.
4. Newly assigned MDisks are displayed as Unmanaged, as shown in Figure 10-45.
Troubleshooting: If your MDisks are still not visible, check that the logical unit numbers (LUNs) from your subsystem are properly assigned to the SVC (for example, using storage partitioning with a DS5000) and that appropriate zoning is in place (for example, the SVC can see the disk subsystem).
Note: You can also access the Add to Pool action by right-clicking an unmanaged MDisk.
3. From the Add MDisk to Pool window, select the pool into which you want to integrate this MDisk and then click Add to Pool, as shown in Figure 10-47.
Note: You can also access the Remove from Pool action by right-clicking a managed MDisk.
3. From the Remove from Pool window (Figure 10-49), you must verify the number of MDisks that you want to remove from this pool. This verification helps protect against inadvertently deleting data. If volumes are using the MDisks that you are removing from the Storage Pool, you must select the option Remove the MDisk from the storage pool even if it has data on it. The system migrates the data to other MDisks in the pool to confirm the removal of the MDisk.
4. Click Delete as shown in Figure 10-49.
An error message is displayed, as shown in Figure 10-50 on page 606, if there is insufficient space to migrate the volume data to other extents on other MDisks in that Storage Pool.
Note: For more detailed information about Easy Tier, see Chapter 7, Easy Tier on page 345.
Easy Tier is still inactive for the storage pool (Figure 10-51) because we do not yet have a true multitier pool. To activate it, we must set the SSD MDisks to their correct generic_ssd tier. To set an MDisk as SSD in a Storage Pool, perform the following steps (repeat this action for each of your SSD MDisks):
1. Select the MDisk.
2. Click Select Tier in the Actions menu, as shown in Figure 10-52.
Note: You can also access the Select Tier action by right-clicking an MDisk.
3. In the Select MDisk Tier window, shown in Figure 10-53 on page 608, select Solid-State Drive using the drop-down list and then click OK.
4. Easy Tier is now activated for this multitier pool (Hard Disk Drive and Solid-State Drive), as shown in Figure 10-54.
10.5 Migration
See Chapter 6, Data migration on page 227 for a comprehensive description of data migration.
from several hosts to one host object to make a simpler configuration. A host object can have both WWPNs and iSCSI names. There are three ways to visualize and manage your hosts: By using the All Hosts panel, as shown in Figure 10-55
By using the Host Mapping panel, as shown in Figure 10-57 on page 610
Important: Several actions on the hosts are specific to the Ports by Host or the Host Mapping panels, but all these actions and others are accessible from the All Hosts panel. For this reason, all actions on hosts will be executed from the All Hosts panel.
Note: You can also access the Properties action by right-clicking a host.
3. The Overview window presents information about the selected host, as shown in Figure 10-59.
Note: To obtain more information about the hosts, select Show Details (Figure 10-59).
4. On the Mapped Volumes tab (Figure 10-60), you will see the volumes that are mapped to this host.
5. The Port Definitions tab (Figure 10-61) displays attachment information, such as the worldwide port names (WWPNs) that are defined for this host or the iSCSI qualified name (IQN) that is defined for this host.
When you are finished viewing the details, click Close to return to the previous window.
3. Select Fibre-Channel Host from the two types of connection available (Figure 10-63).
4. In the Creating Hosts window (Figure 10-64 on page 614), type a name for your host (Host Name).
Host name: If you do not provide a name, the SVC automatically generates the name hostx (where x is the ID sequence number that is assigned by the SVC internally). If you want to provide a name, you can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The host name can be between one and 63 characters in length.
5. Fibre-Channel Ports section: Use the drop-down list to select the WWPNs that correspond to your HBA or HBAs and click Add Port to List. To add additional ports, repeat this action.
Note: If you added a wrong Fibre-Channel port, you can delete it from the list by clicking the red cross. If your WWPNs are not displayed, click Rescan to rediscover new WWPNs that have become available since the last scan.
Note: In certain cases your WWPNs still might not be displayed, even though you are sure that your adapter is functioning (for example, you see the WWPN in the switch name server) and your zones are correctly set up. To rectify this, type the WWPN of your HBA or HBAs into the drop-down list and click Add Port to List. It will be displayed as unverified.
6. Advanced Settings section: If you need to modify the I/O Group, the Port Mask, or the Host Type, select Advanced to access these settings, as shown in Figure 10-64 on page 614.
Select one or more I/O Groups from which the host can access volumes. By default, all I/O Groups are selected.
You can use a port mask to control the node target ports that a host can access. The port mask applies to the logins from the host initiator port that is associated with the host object.
Note: For each login between a host bus adapter (HBA) port and a node port, the node examines the port mask that is associated with the host object of which the HBA is a member and determines whether access is allowed or denied. If access is denied, the node responds to SCSI commands as though the HBA port is unknown.
Select the Host Type. The default type is Generic. Use Generic for all hosts, unless you use Hewlett-Packard UNIX (HP-UX) or Sun. For these, select HP_UX (to have more than eight LUNs supported for HP-UX machines) or TPGS for Sun hosts that use MPxIO.
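The port-mask check described in the note can be modeled with a short sketch. The mask format used here (a string of bits, rightmost bit for node port 1) is an assumption for illustration only; check the SVC documentation for the exact format on your release:

```python
def login_allowed(port_mask: str, node_port: int) -> bool:
    """Simplified model of the port-mask check for a host login.

    port_mask: '0'/'1' characters, rightmost character = node port 1
               (assumed layout, for illustration).
    node_port: 1-based node port number the HBA is logging in through.
    A '1' bit means the login is accepted; with a '0' bit the node
    responds to SCSI commands as though the HBA port is unknown.
    """
    return port_mask[-node_port] == '1'
```

For example, with a mask of `0101`, logins through node ports 1 and 3 are accepted and logins through ports 2 and 4 are denied.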
7. Click the Create Host button as shown in Figure 10-64. This action brings you back to the All Hosts panel (Figure 10-65 on page 614) where you can see the newly added FC host.
iSCSI-attached hosts
To create a new host that uses the iSCSI connection type, perform the following steps:
1. Go to the All Hosts panel from the SVC Welcome panel on Figure 10-1 on page 580 and click Hosts → All Hosts (Figure 10-55 on page 609).
2. Click New Host, as shown in Figure 10-66.
3. Select iSCSI Host from the two types of connection (Figure 10-67).
4. In the Creating Hosts window (Figure 10-68 on page 617), type a name for your host (Host Name).
Host name: If you do not provide a name, the SVC automatically generates the name hostx (where x is the ID sequence number that is assigned by the SVC internally). If you want to provide a name, you can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The host name can be between one and 63 characters in length.
5. iSCSI Ports section: Enter the iSCSI initiator name or IQN as an iSCSI port, and then click Add Port to List. This IQN is obtained from the server and serves generally the same purpose as the WWPN. To add additional ports, repeat this action.
Note: If you add the wrong iSCSI port, you can delete it from the list by clicking the red cross.
If needed, select Use CHAP authentication (all ports) and enter the CHAP secret, as shown in Figure 10-68 on page 617. The CHAP secret is the authentication method that is used to restrict access so that other iSCSI hosts cannot use the same connection. You can set the CHAP for the whole cluster under cluster properties or for each host definition. The CHAP secret must be identical on the server and on the cluster or host definition. You can create an iSCSI host definition without using a CHAP.
6. Advanced Settings section: If you need to modify the I/O Group, the Port Mask, or the Host Type, select Advanced to access these settings, as shown in Figure 10-64 on page 614. Select one or more I/O Groups from which the host can access volumes. By default, all I/O Groups are selected. You can use a port mask to control the node target ports that a host can access. The port mask applies to the logins from the host initiator port that is associated with the host object.
Note: For each login between a host bus adapter (HBA) port and a node port, the node examines the port mask that is associated with the host object of which the HBA is a member and determines whether access is allowed or denied. If access is denied, the node responds to SCSI commands as though the HBA port is unknown.
Select the Host Type. The default type is Generic. Use Generic for all hosts, unless you use Hewlett-Packard UNIX (HP-UX) or Sun. For these, select HP_UX (to have more than eight LUNs supported for HP-UX machines) or TPGS for Sun hosts that use MPxIO.
7. Click Create Host as shown in Figure 10-68. This action brings you back to the All Hosts panel (Figure 10-69) where you can see the newly added iSCSI host.
Note: There are two other ways to rename a host. You can right-click a host and select Rename from the list, or use the method described in 10.6.4, Modifying a host on page 618.
3. In the Rename Host window, type the new name that you want to assign and click Rename (Figure 10-71).
Host name: If you do not provide a name, the SVC automatically generates the name hostx (where x is the ID sequence number that is assigned by the SVC internally). If you want to provide a name, you can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The host name can be between one and 63 characters in length.
Note: You can also right-click a host and select Properties from the list.
3. In the Overview tab, click Edit to modify parameters for this host. You can modify:
The Host Name: If you do not provide a name, the SVC automatically generates the name hostx (where x is the ID sequence number that is assigned by the SVC internally). If you want to provide a name, you can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The host name can be between one and 63 characters in length.
The Host Type: The default type is Generic. Use Generic for all hosts, unless you use Hewlett-Packard UNIX (HP-UX) or Sun. For these, select HP_UX (to have more than eight LUNs supported for HP-UX machines) or TPGS for Sun hosts that use MPxIO.
Advanced Settings: If you need to modify the I/O Group, the Port Mask, or the iSCSI CHAP Secret (in case you want to convert it to an iSCSI host), select Advanced to access these settings, as shown in Figure 10-73 on page 620.
4. Save the changes by clicking Save. 5. You can close the Host Details window by clicking Close.
Note: You can also right-click a host and select Delete from the list.
3. The Delete Host window opens as shown in Figure 10-75 on page 621. In the Verify the number of hosts that you are deleting field, enter a value matching the number of hosts that you want to remove. This verification helps protect against inadvertently deleting the wrong hosts. If you still have volumes associated with the host and you are sure that you want to delete it even though these volumes will no longer be accessible, select the Delete the host even if volumes are mapped to them. These volumes will no longer be accessible to the hosts option.
4. Click Delete to complete the operation (Figure 10-75).
Note: You can also right-click a host and select Properties from the list.
3. On the Properties window, click Port Definitions (Figure 10-77).
4. Click Add and select the type of port that you want to add to your host (Fibre Channel Port or iSCSI Port) as shown in Figure 10-78. In this example, we selected a Fibre-Channel Port.
5. In the Add Fibre-Channel Ports window (Figure 10-79 on page 623), use the drop-down list to select the WWPNs that correspond to your HBA or HBAs and click Add Port to List. To add additional ports, repeat this action.
Note: If you added the wrong Fibre-Channel port, you can delete it from the list by clicking the red cross. If your WWPNs are not displayed, click Rescan to rediscover any new WWPNs available since the last scan.
Note: In certain cases your WWPNs might still not be displayed, even though you are sure your adapter is functioning (for example, you see the WWPN in the switch name server) and your zones are correctly set up. To rectify this, type the WWPN of your HBA or HBAs into the drop-down list and click Add Port to List. It will be displayed as unverified.
6. To finish, click Add Ports to Host.
7. This action takes you back to the Port Definitions window (Figure 10-80), where you can see the newly added ports.
Note: This action is exactly the same for iSCSI Ports, except that you have to add iSCSI ports.
Tip: You can also right-click a host and select Properties from the list.
6. In the Delete Port window (Figure 10-84), in the Verify the number of ports to delete field, enter a value matching the number of ports that you want to remove. This verification helps protect against inadvertently deleting the wrong ports.
7. Click Delete to remove the port or ports. 8. This action brings you back to the Port Definitions window.
3. On the Modify Mappings window select the volume or volumes that you want to map to this host and move each of them to the right table using the right arrow, as shown in Figure 10-86. If you need to remove them, use the left arrow.
In the right table you can edit the SCSI ID by selecting a mapping that is highlighted in yellow, indicating that the mapping is new, and clicking Edit SCSI ID (Figure 10-86).
Note: Only new mappings can have their SCSI ID changed. To edit an existing mapping's SCSI ID, you must unmap the volume and re-create the mapping to the volume.
In the Edit SCSI ID window, change the SCSI ID and then click OK (Figure 10-87 on page 627).
4. After all the volumes you wanted to map to this host have been added, click OK to create the Host mapping relationships.
Tip: You can also right-click a host and select Properties from the list.
3. On the opened window, click Mapped Volumes (Figure 10-89 on page 628).
4. Select the host mapping or mappings that you want to remove.
5. Click Unmap (Figure 10-90).
In the Unmap from Host window (Figure 10-91 on page 629), in the Verify the number of mappings that this operation affects: field, enter a value matching the number of mappings that you want to remove. This verification helps protect against inadvertently removing the wrong mappings.
6. Click Unmap to remove the host mapping or mappings. This action brings you back to the Mapped volumes window.
Tip: You can also right-click a host and select Unmap All volumes from the list.
From the Unmap from Host window (Figure 10-93 on page 630), in the Verify the number of mappings that this operation affects: field, enter a value matching the number of mappings that you want to remove. This verification helps protect against inadvertently removing the wrong mappings.
3. Click Unmap to remove the host mapping or mappings. This action brings you back to the All Hosts window.
Or you can use the Volumes by Pool panel, as shown in Figure 10-95 on page 631.
Or you can use the Volumes by Host panel, as shown in Figure 10-96.
Important: Several actions on the volumes are specific to the Volumes by Pool or the Volumes by Host panels. However, all these actions and others are accessible from the All Volumes panel. All actions in the following sections are executed from the All Volumes panel.
You can add information (new columns) to the table in the All Volumes panel as shown in Figure 10-94 on page 630; see Table information on page 586. To retrieve more information about a specific volume, perform the following steps: 1. Select a volume in the table. 2. Click Properties in the Actions menu (Figure 10-97).
Tip: You can also access the Properties action by right-clicking a volume. 3. The Overview tab shows information about a given volume (Figure 10-98).
Note: To obtain more information about the volume, select Show Details (Figure 10-98 on page 632).
4. The Host Maps tab (Figure 10-99) displays the hosts that are mapped to this volume.
5. The Member MDisks tab (Figure 10-100 on page 634) displays the used MDisks for this volume. You can perform actions on the MDisks such as removing them from a pool, adding them to a tier, renaming them, showing their dependent volumes, or seeing their properties.
6. When you have finished viewing the details, click Close to return to the All Volumes panel.
3. Select one of the following presets, as shown in Figure 10-102 on page 635:
Generic: Create volumes that use a set amount of capacity from the selected storage pool.
Thin Provision: Create volumes whose virtual capacity is large, but which use only the capacity that is written by the host application from the pool.
Mirror: Create volumes with two physical copies that provide data protection. Each copy can belong to a different storage pool to protect data from storage failures.
Thin Mirror: Create volumes with two physical copies to protect data from failures while using only the capacity that is written by the host application.
Note: For our example we chose the Generic preset. However, whatever preset you select, you can afterwards reconsider your decision by customizing the volume using the Advanced... button.
4. After selecting a preset, in our example Generic, you must select the Storage Pool on which the data will be striped (Figure 10-103).
5. After the Storage Pool has been selected, the window will be updated automatically and you will have to select a volume name and size as shown in Figure 10-104 on page 636. Enter a name if you want to create a single volume, or a naming prefix if you want to create multiple volumes.
Volume name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The volume name can be between one and 63 characters in length.
Enter the size of the volume that you want to create and select the capacity measurement (bytes, KB, MB, GB, or TB) from the list.
Note: An entry of 1 GB uses 1024 MB.
An updated summary automatically appears at the bottom of the window to give you an idea of the space that will be used and the space that remains in the pool.
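The note that 1 GB uses 1024 MB means the capacity units are binary (1024-based). A minimal sketch of the conversion (helper names are our own):

```python
# Binary capacity units, matching the note above (1 GB = 1024 MB).
UNITS = {"bytes": 1, "KB": 1024, "MB": 1024 ** 2, "GB": 1024 ** 3, "TB": 1024 ** 4}

def to_bytes(size: float, unit: str) -> int:
    """Convert a capacity entry from the volume-size field to bytes."""
    return int(size * UNITS[unit])
```

So a 1 GB entry allocates 1 073 741 824 bytes, which is exactly 1024 times a 1 MB entry.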
Various optional actions are available from this window:
You can modify the Storage Pool by clicking Edit. In this case, you can select another storage pool.
You can create additional volumes by clicking the button. This action can be repeated as many times as necessary. You can remove them by clicking the button.
Note: When you create more than one volume, the wizard does not ask you for a name for each volume to be created. Instead, the name that you use here becomes the prefix and has a number, starting at zero, appended to it as each volume is created.
6. You can activate and customize advanced features such as thin provisioning or mirroring, depending on the preset you selected. To access these settings, click Advanced...
On the Characteristics tab (Figure 10-105 on page 637), you can set the following options:
General: Format the new volume by selecting the Format Before Use check box (formatting writes zeros to the volume before it can be used; that is, it writes zeros to its MDisk extents).
Locality: Choose an I/O Group and then select a preferred node.
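The multi-volume naming behavior described in the note can be sketched as follows. The exact separator, if any, between the prefix and the number is an assumption; the text only states that a number, starting at zero, is appended:

```python
def generated_volume_names(prefix: str, count: int) -> list:
    """Sketch of the wizard's multi-volume naming: the entered name
    becomes a prefix with a number, starting at zero, appended for
    each volume created (separator assumed to be none)."""
    return [prefix + str(i) for i in range(count)]
```

For example, entering the name `vol` and requesting three volumes would yield names of the form `vol0`, `vol1`, `vol2` under this assumption.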
OpenVMS only: Enter the UDID (OpenVMS). This field needs to be completed only for an OpenVMS system.
Note: Each OpenVMS fibre-attached volume requires a user-defined identifier or unit device identifier (UDID). A UDID is a nonnegative integer that is used when an OpenVMS device name is created. To recognize volumes, OpenVMS uses the UDID value, which must be unique.
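A trivial check for the UDID constraint stated in the note (a nonnegative integer); this helper is our own illustration and does not cover any platform-specific range limits:

```python
def is_valid_udid(value: str) -> bool:
    """Per the note above, a UDID must be a nonnegative integer.
    (Upper-bound range limits, if any, are not modeled here.)"""
    return value.isdigit()
```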
On the Thin Provisioning tab (Figure 10-106 on page 638), after you activate thin provisioning by selecting the Thin Provisioning check box, you can set the following options:
Real: Type the real size that you want to allocate. This size is the amount of disk space that will actually be allocated. It can be either a percentage of the virtual size or a specific number in GB.
Automatically Expand: Select auto expand, which allows the real disk size to grow as required.
Warning Threshold: Type a percentage or select a specific size for the usage threshold warning. It generates a warning when the used disk capacity on the space-efficient copy first exceeds the specified threshold.
Thin-Provisioned Grain Size: Select the grain size (32 KB, 64 KB, 128 KB, or 256 KB). Smaller grain sizes save space and larger grain sizes produce better performance. Try to match the FlashCopy grain size if the volume will be used for FlashCopy.
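The Real and Warning Threshold options, when given as percentages, amount to simple arithmetic on the virtual capacity. A sketch (function names are our own):

```python
def real_capacity(virtual_gb: float, real_pct: float) -> float:
    """Real size allocated when it is entered as a percentage of the
    virtual size (one of the two input modes described above)."""
    return virtual_gb * real_pct / 100.0

def warning_raised(used_gb: float, virtual_gb: float, warn_pct: float) -> bool:
    """True once used capacity first exceeds the warning threshold,
    expressed as a percentage of the virtual capacity."""
    return used_gb > virtual_gb * warn_pct / 100.0
```

For a 100 GB volume with a real size of 2% and an 80% warning threshold, 2 GB is allocated up front and the warning fires once more than 80 GB has been written.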
Important: If the Thin Provision or Thin Mirror preset is selected on the first page (Figure 10-102 on page 635), the Thin Provisioning check box is already selected and the parameter presets are the following:
Real: 2% of Virtual Capacity
Automatically Expand: Selected
Warning Threshold: Selected, with a value of 80% of Virtual Capacity
Thin-Provisioned Grain Size: 32 KB
On the Mirroring tab (Figure 10-107 on page 639), after you activate mirroring by selecting the Create Mirrored Copy check box, you can set the following option:
Mirror Sync Rate: Enter the mirror synchronization rate. It is the I/O governing rate, as a percentage, that determines how quickly copies are synchronized. A zero value disables synchronization.
Important: If you activate this feature from the Advanced menu, you will have to select a secondary pool on the main window (Figure 10-104 on page 636). The Primary Pool is used as the primary and preferred copy for read operations. The secondary pool is used as the secondary copy.
Important: If the Mirror or Thin Mirror preset is selected on the first page (Figure 10-102 on page 635), the Mirroring check box is already selected and the parameter preset is the following:
Mirror Sync Rate: 80% of Maximum
7. After all the advanced settings have been set, click OK to return to the main menu (Figure 10-104 on page 636).
8. You then have the choice to only create the volume using the Create button, or to create and map it using the Create and Map to Host button. If you choose to only create the volume, you return to the main All Volumes panel, where you can see your volume created but not mapped (Figure 10-108). You can map it later.
If you want to create and map it on the volume creation window, click the Continue button and another window opens. In the Modify Mappings window, select on which host you want to map this volume by using the drop-down button and then clicking Next (Figure 10-109 on page 640).
In the Modify Mappings window, verify the mapping. If you want to modify it, select the volume or volumes that you want to map to a host and move each of them to the right table using the right arrow, as shown in Figure 10-110. If you need to remove them, use the left arrow.
In the right table, you can edit the SCSI ID by selecting a mapping that is highlighted in yellow, indicating that the mapping is new, and then clicking Edit SCSI ID (shown in Figure 10-86 on page 626).
Note: Only new mappings can have their SCSI ID changed. To edit an existing mapping's SCSI ID, you must unmap the volume and re-create the mapping to the volume.
In the Edit SCSI ID window, change the SCSI ID and then click OK (Figure 10-111 on page 641).
After all volumes that you wanted to map to this host have been added, click OK to create the Host mapping relationships and finalize the volume creation. You will return to the main All Volume window and see your volume created and mapped as shown in Figure 10-112.
Tip: There are two other ways to rename a volume. You can right-click a volume and select Rename from the list, or you can use the method explained in 10.7.4 on page 642.
3. In the Rename Volume window, type the new name that you want to assign to the volume, and click OK (Figure 10-114).
Volume name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The volume name can be between one and 63 characters in length.
Tip: You can also right-click a volume and select Properties from the list.
3. In the Overview tab, click Edit to modify parameters for this volume (Figure 10-116 on page 644). From this window, you can modify the following parameters:
Volume Name: You can modify the volume name. You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The volume name can be between one and 63 characters in length.
I/O Group: You can select an alternate I/O Group from the list to alter the I/O Group to which the volume is assigned. You can also select the Force check box. This option changes the I/O Group even when the cache state is not empty, which can corrupt data, and it stops synchronization for mirrored volumes.
Preferred Node: You can change the preferred node for this volume. Hosts try to access the volume through the preferred node. By default, the system automatically balances the load between nodes.
Mirror Sync Rate: Change the mirror sync rate. It is the I/O governing rate, as a percentage, that determines how quickly copies are synchronized. A zero value disables synchronization.
Cache Mode: By clearing the check box, you disable the SVC cache (read/write cache is disabled).
OpenVMS: Enter the UDID (OpenVMS). This field needs to be completed only for an OpenVMS system.
Note: Each OpenVMS fibre-attached volume requires a user-defined identifier or unit device identifier (UDID). A UDID is a nonnegative integer that is used when an OpenVMS device name is created. To recognize volumes, OpenVMS uses the UDID value, which must be unique.
4. Save the changes by clicking Save.
5. You can close the Volume Details window by clicking Close.
Tip: You can also right-click the volume and select Volume Copy Actions → Thin Provisioned → Edit Properties from the list.
For a mirrored volume: Select the thin-provisioned copy of the mirrored volume that you want to modify. In the Actions menu, click Thin Provisioned → Edit Properties, as shown in Figure 10-118.
Tip: You can also right-click the thin-provisioned copy and select Thin Provisioned → Edit Properties from the list.
2. The Edit Properties: volumename (where volumename is the volume that you selected in the previous step) window opens (Figure 10-119). From this window, you can modify:
Warning Threshold: Type a percentage. It generates a warning when the used disk capacity on the thin-provisioned copy first exceeds the specified threshold.
Automatically Expand: Auto expand allows the real disk size to grow automatically as required.
Note: You can modify the real size of your thin-provisioned volume by using the GUI. Refer to 10.7.12, Shrinking the real capacity of a thin-provisioned volume on page 657 or 10.7.13, Expanding the real capacity of a thin provisioned volume on page 659, depending on your needs.
Tip: You can also right-click a volume and select Delete from the list.
3. The Delete Volume window opens as shown in Figure 10-121 on page 647. In the Verify the number of volumes that you are deleting field, enter a value matching the number of volumes that you want to remove. This verification helps protect against deleting the wrong volumes.
Important: Deleting a volume is a destructive action for the user data residing in that volume.
If you still have a volume (or volumes) associated with a host (or hosts), or used with FlashCopy or remote copy, and you definitely want to delete the volume (or volumes), select the Delete the volume even if it has host mappings or is used in FlashCopy mappings or remote-copy relationships option.
Click Delete to complete the operation (Figure 10-121 on page 647).
3. On the Modify Mappings window, select the host on which you want to map this volume using the drop-down button and then click Next (Figure 10-109 on page 640).
Figure 10-123 Select the host to which you want to map your volume
4. On the Modify Mappings window, verify the mapping. If you want to modify it, select the volume or volumes that you want to map to a host and move each of them to the right table using the right arrow as shown in Figure 10-124. If you need to remove them, use the left arrow.
In the right table, you can edit the SCSI ID. Select a mapping that is highlighted in yellow, which indicates that the mapping is new, and click Edit SCSI ID (shown in Figure 10-86 on page 626).
Note: Only new mappings can have their SCSI ID changed. To edit an existing mapping's SCSI ID, you must unmap the volume and re-create the mapping to the volume.
In the Edit SCSI ID window, change the SCSI ID and then click OK (Figure 10-125 on page 649).
5. After all the volumes you want to map to this host have been added, click OK. You will return to the main All Volumes panel.
Tip: You can also right-click a volume and select Properties from the list. 3. On the Properties window, click the Host Maps tab (Figure 10-127).
Note: You can also access this window by selecting the volume in the table and clicking View Mapped Hosts in the Actions menu (Figure 10-128).
4. Select the host mapping or mappings that you want to remove. 5. Click Unmap from Host (Figure 10-129).
In the Unmap Host window (Figure 10-130 on page 652), in the Verify the number of hosts that this operation affects: field, enter a value matching the number of mappings that you want to remove. This verification helps protect against removing the wrong mappings.
6. Click Unmap to remove the host mapping or mappings. This action returns you to the Host Maps window. 7. Click Close to return to the main All Volumes panel.
Tip: You can also right-click a volume and select Unmap All Hosts from the list.
3. In the Unmap from Hosts window (Figure 10-132), in the Verify the number of mappings that this operation affects: field, enter a value matching the number of mappings that you want to remove. This verification helps protect against removing the wrong mappings.
4. Click Unmap to remove the host mapping or mappings. This action returns you to the All Volumes panel.
Assuming your operating system supports it, perform the following steps to shrink a volume: 1. Perform any necessary steps on your host to ensure that you are not using the space that you are about to remove. 2. Select the volume that you want to shrink in the table. 3. Click Shrink in the Actions menu (Figure 10-133).
Tip: You can also right-click a volume and select Shrink from the list.
4. The Shrink Volume: volumename window (where volumename is the volume that you selected in the previous step) opens; see Figure 10-134 on page 655. You can either enter how much you want to shrink the volume by using the Shrink By field, or directly enter the final size that you want for the volume by using the Final Size field. The other field is computed automatically. For example, if you have a 20 GB disk and you want it to become 15 GB, you can specify 5 GB in the Shrink By field or directly specify 15 GB in the Final Size field, as shown in Figure 10-134 on page 655.
5. When you are finished, click Shrink as shown in Figure 10-134 on page 655, and the changes become visible on your host.
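The relationship between the two fields is a simple complement against the current size; the same logic applies to the Expand By and Final Size fields when expanding. A sketch of how the GUI fills in the other field (function name and signature are our own):

```python
def resize_fields(current_gb, shrink_by=None, final_size=None):
    """Compute whichever of Shrink By / Final Size was left blank,
    mirroring how the GUI derives the other field automatically."""
    if shrink_by is None and final_size is None:
        raise ValueError("enter either Shrink By or Final Size")
    if shrink_by is None:
        shrink_by = current_gb - final_size
    else:
        final_size = current_gb - shrink_by
    return shrink_by, final_size
```

With the 20 GB example above, entering either a Shrink By of 5 GB or a Final Size of 15 GB yields the same result.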
Tip: You can also right-click a volume and select Expand from the list.
3. The Expand Volume: volumename window (where volumename is the volume that you selected in the previous step) opens; see Figure 10-136 on page 657. You can either enter how much you want to enlarge the volume in the Expand By field, or directly enter the final size that you want for the volume in the Final Size field. The other field is computed automatically. For example, if you have a 10 GB disk and you want it to become 20 GB, you can specify 10 GB in the Expand By field or directly specify 20 GB in the Final Size field, as shown in Figure 10-136 on page 657.
Volume expansion notes:
- No support exists for the expansion of image mode volumes.
- If there are insufficient extents to expand your volume to the specified size, you receive an error message.
- If you use volume mirroring, all copies must be synchronized before expanding.
4. When you are finished, click Expand (see Figure 10-136 on page 657).
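The expand action also has a CLI counterpart. The following sketch composes the expandvdisksize invocation for the 10 GB example above; the volume name is a placeholder, and the syntax should be verified against your SVC release.

```python
def expandvdisksize_cmd(volume, expand_by, unit="gb"):
    # Sketch of the V6.1 CLI equivalent of the GUI Expand action:
    # grow the volume by the given amount. Volume name is a placeholder.
    return f"svctask expandvdisksize -size {expand_by} -unit {unit} {volume}"

print(expandvdisksize_cmd("vdisk7", 10))
# svctask expandvdisksize -size 10 -unit gb vdisk7
```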
Tip: You can also right-click the volume and select Volume Copy Actions → Thin Provisioned Shrink from the list.
For a mirrored volume: Select the thin-provisioned copy of the mirrored volume that you want to modify and in the Actions menu, click Thin Provisioned Shrink as shown in Figure 10-138.
Tip: You can also right-click the thin provisioned copy and select Thin Provisioned Shrink from the list.
2. The Shrink Volume: volumename window (where volumename is the volume that you selected in the previous step) opens; see Figure 10-139. You can either enter how much you want to shrink the volume in the Shrink By field, or directly enter the final real capacity that you want for the volume in the Final Real Capacity field. The other field is computed automatically. For example, if the current real capacity is 118.8 MB and you want a final real capacity of 10 MB, you can specify 108.8 MB in the Shrink By field, or directly specify 10 MB in the Final Real Capacity field, as shown in Figure 10-139.
3. When you are finished, click Shrink (Figure 10-139) and the changes become visible on your host.
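Reducing real capacity also maps to a CLI form. This sketch assumes the V6.1 shrinkvdisksize syntax with the -rsize flag acting on one copy; the volume name and copy ID are placeholders, so verify the flags against your release.

```python
def shrink_real_capacity_cmd(volume, copy_id, shrink_by, unit="mb"):
    # Sketch of the CLI counterpart of the GUI "Thin Provisioned Shrink"
    # action: -rsize reduces the real (allocated) capacity of one copy.
    return (f"svctask shrinkvdisksize -rsize {shrink_by} "
            f"-unit {unit} -copy {copy_id} {volume}")

# The example from the text: real capacity 118.8 MB -> 10 MB,
# so the copy is shrunk by 108.8 MB.
print(shrink_real_capacity_cmd("vdisk_B", 0, 108.8))
```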
Tip: You can also right-click the volume and select Volume Copy Actions → Thin Provisioned Expand from the list.
For a mirrored volume: Select the thin-provisioned copy of the mirrored volume that you want to modify and, in the Actions menu, click Thin Provisioned Expand (Figure 10-141).
Tip: You can also right-click the thin provisioned copy and select Thin Provisioned Expand from the list.
2. The Expand Volume: volumename window (where volumename is the volume that you selected in the previous step) opens (Figure 10-142). You can either enter how much you want to expand the volume in the Expand By field, or directly enter the final real capacity that you want for the volume in the Final Real Capacity field. The other field is computed automatically. For example, if the current real capacity is 10 MB and you want a final real capacity of 100 MB, you can specify 90 MB in the Expand By field or directly specify 100 MB in the Final Real Capacity field, as shown in Figure 10-142.
3. When you are finished, click Expand (Figure 10-142) and the changes become visible on your host.
Tip: You can also right-click a volume and select Migrate to Another Pool from the list.
3. The Migrate Volume Copy window opens (Figure 10-144). Select the Storage Pool to which you want to reassign the volume. You are only presented with a list of Storage Pools with the same extent size.
4. When you have finished making your selections, click Migrate to begin the migration process.
Important: After a migration starts, you cannot stop it. Migration continues until it is complete unless it is stopped or suspended by an error condition, or the volume that is being migrated is deleted.
5. You can check the migration using the Running Tasks menu (Figure 10-145 on page 662).
To expand this area, click the icon and then click Migration. Figure 10-146 shows a detailed view of the running tasks.
6. When the migration is finished, the volume will be part of the new pool.
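The migration steps above correspond to a single CLI command. This sketch composes a migratevdisk invocation; the volume and pool names are hypothetical, and you should verify the syntax against your SVC release.

```python
def migratevdisk_cmd(volume, target_pool):
    # Sketch of the CLI counterpart of the GUI migration: move all
    # extents of the volume to another storage pool with the same
    # extent size. Names are placeholders.
    return f"svctask migratevdisk -vdisk {volume} -mdiskgrp {target_pool}"

print(migratevdisk_cmd("vdisk_C", "STGPool_DS4700"))
```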
You can use a volume mirror for any operation for which you can use a volume. It is transparent to higher-level operations such as Metro Mirror, Global Mirror, or FlashCopy. Creating a volume mirror from an existing volume is not restricted to the same Storage Pool, so it is an ideal method to protect your data from a disk system or an array failure. If one copy of the mirror fails, the mirror provides continuous data access through the other copy. When the failed copy is repaired, the copies automatically resynchronize. You can also use a volume mirror as an alternative migration tool, where you synchronize the mirror before splitting off the original side of the mirror. The volume stays online, and can be used normally, while the data is being synchronized. The copies can also have separate structures (that is, striped, image, sequential, or space-efficient) and separate extent sizes. To create a mirror copy of a volume, perform the following steps:
1. Select the volume in the table.
2. In the Actions menu, click Volume Copy Actions → Add Mirrored Copy (Figure 10-147).
Tip: You can also right-click a volume and select Volume Copy Actions and then Add Mirrored Copy from the list.
3. The Add Volume Copy: volumename window (where volumename is the volume that you selected in the previous step) opens (Figure 10-148 on page 664). You can perform the following steps separately or in combination:
- Select the Storage Pool in which you want to put the copy. To maintain higher availability, choose a separate group.
- Select the Enable Thin Provisioning check box to make the copy space-efficient. The following parameters are used for this thin-provisioned copy:
  Real Size: 2% of Virtual Capacity
  Automatically Expand: Active
  Warning Threshold: 80% of Virtual Capacity
  Thin-Provisioned Grain Size: 32 KB
Note: Real Size, Auto Expand, and Warning Threshold can be changed only after the thin-provisioned volume copy has been added. For information about modifying the real size of your thin-provisioned volume, see 10.7.12, Shrinking the real capacity of a thin-provisioned volume on page 657 and 10.7.13, Expanding the real capacity of a thin-provisioned volume on page 659. For information about modifying the Auto Expand and Warning Threshold settings of your thin-provisioned volume, see 10.7.5, Modifying thin-provisioning volume properties on page 644.
4. Click Add Copy (Figure 10-148).
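The thin-provisioned defaults listed above are simple functions of the virtual capacity. This small sketch captures them for a given volume size; the dictionary keys are hypothetical names for illustration only.

```python
def thin_copy_defaults(virtual_capacity_gb):
    # Defaults the GUI applies to a new thin-provisioned copy, as
    # listed in the text: real size 2% of virtual capacity, autoexpand
    # on, warning at 80%, 32 KB grain.
    return {
        "real_size_gb": virtual_capacity_gb * 0.02,
        "autoexpand": True,
        "warning_threshold_pct": 80,
        "grain_size_kb": 32,
    }

# For a 100 GB volume the copy starts with 2 GB of real capacity:
print(thin_copy_defaults(100)["real_size_gb"])  # 2.0
```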
5. You can check the migration using the Running Tasks menu (see Figure 10-145 on page 662). To expand this Status Area, click the icon and click Volume Synchronization. Figure 10-149 on page 665 shows a detailed view of the running tasks.
Note: You can change the Mirror Sync Rate (the default is 50%) by modifying the volume properties. For more information, see 10.7.4 on page 642.
6. When synchronization is finished, the volume will be part of the new pool (Figure 10-150).
Note: As shown in Figure 10-150, the primary copy is identified with an asterisk (*). In this example, Copy 0 is the primary copy.
Tip: You can also right-click a volume and select Delete this Copy from the list.
2. The Warning window opens (Figure 10-152). Click OK to confirm your choice.
Note: If you try to remove the primary copy before it has been synchronized with the other one, you will receive the message: The command failed because the copy specified is the only synchronized copy. You must wait until the synchronization completes before you can remove this copy.
3. The copy is now deleted.
Tip: You can also right-click a volume and select Split into New Volume from the list.
2. The Split Volume Copy window opens (Figure 10-154). In this window, type a name for the new volume.
Volume name: If you do not provide a name, the SVC automatically generates the name vdiskx (where x is the ID sequence number that is assigned by the SVC internally). If you want to provide a name, you can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore. The volume name can be between one and 63 characters in length.
3. Click Split Volume Copy (Figure 10-154).
4. This new volume is now available to be mapped to a host.
Important: After you split a volume mirror, you cannot resynchronize or recombine the two copies. You must create a volume copy from scratch.
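The split action corresponds to the splitvdiskcopy CLI command. The sketch below composes it; the copy ID, volume, and new name are placeholders, and the syntax should be checked against your release.

```python
def splitvdiskcopy_cmd(volume, copy_id, new_name):
    # Sketch of the CLI equivalent of Split into New Volume: the given
    # copy of the mirrored volume becomes a new, independent volume.
    return f"svctask splitvdiskcopy -copy {copy_id} -name {new_name} {volume}"

print(splitvdiskcopy_cmd("vdisk_M", 1, "vdisk_M_split"))
```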
2. The Validate Volume Copies window opens (Figure 10-156). In this window, select one of the following options:
- Generate Event of differences: Use this option if you only want to verify that the mirrored volume copies are identical. If any difference is found, the command stops and logs an error that includes the logical block address (LBA) and the length of the first difference. You can use this option, starting at a different LBA each time, to count the number of differences on a volume.
- Overwrite differences: Use this option to overwrite contents from the primary volume copy to the other volume copy. The command corrects any differing sectors by copying the sectors from the primary copy to the copies being compared. Upon completion, the command process logs an event, which indicates the number of differences that were corrected. Use this option if you are sure that either the primary volume copy data is correct, or that your host applications can handle incorrect data.
- Return Media Error to Host: Use this option to convert sectors on all volume copies that contain different contents into virtual medium errors. Upon completion, the command logs an event, which indicates the number of differences that were found, the number that were converted into medium errors, and the number that were not converted. Use this option if you are unsure what the correct data is, and you do not want an incorrect version of the data to be used.
3. Click Validate (Figure 10-156 on page 668).
4. The volume is now checked.
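The three GUI validation options plausibly map onto the flags of the repairvdiskcopy CLI command. The mapping below is a sketch under that assumption; verify the flag names against your SVC release before relying on them.

```python
# Hypothetical mapping of the GUI validation options to
# repairvdiskcopy CLI flags (verify against your release):
REPAIR_FLAGS = {
    "Generate Event of differences": "-validate",
    "Overwrite differences": "-resync",
    "Return Media Error to Host": "-medium",
}

def repairvdiskcopy_cmd(option, volume):
    # Compose the CLI command for the chosen validation option.
    return f"svctask repairvdiskcopy {REPAIR_FLAGS[option]} {volume}"

print(repairvdiskcopy_cmd("Overwrite differences", "vdisk_C"))
```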
Tip: You can also right-click a volume and select Volume Copy Actions and then Add Mirrored Copy from the list.
3. The Add Volume Copy: volumename window (where volumename is the volume that you selected in the previous step) opens (Figure 10-158 on page 670). You can perform the following steps separately or in combination:
- Select the Storage Pool in which you want to put the copy. To maintain higher availability, choose a separate group.
- Select the Enable Thin Provisioning check box to make the copy space-efficient. The following parameters are used for this thin-provisioned copy:
  Real Size: 2% of Virtual Capacity
  Automatically Expand: Active
  Warning Threshold: 80% of Virtual Capacity
  Thin-Provisioned Grain Size: 32 KB
Note: Real Size, Auto Expand, and Warning Threshold can be changed after the volume copy has been added in the GUI. For the Thin-Provisioned Grain Size, you need to use the CLI.
4. Click Add Copy.
5. You can check the migration using the Running Tasks menu in the Status Area, as shown in Figure 10-145 on page 662. To expand this Status Area, click the icon and click Volume Synchronization. Figure 10-159 shows the detailed view of the running tasks.
Note: You can change the Mirror Sync Rate (by default 50%) by modifying the volume properties. For more information, see 10.7.4 on page 642.
6. When the synchronization is finished, select the non-thin-provisioned copy that you want to remove in the table and, in the Actions menu, click Delete this Copy (Figure 10-160).
Tip: You can also right-click a volume and select Delete this Copy from the list.
7. The Warning window opens (Figure 10-161). Click OK to confirm your choice.
Note: If you try to remove the primary copy before it has been synchronized with the other one, you will receive the following message: The command failed because the copy specified is the only synchronized copy. You must wait until the synchronization completes before you can remove this copy.
8. When the copy is deleted, your thin-provisioned volume is ready to be used. At this point, you have completed the required tasks to manage volumes within an SVC environment.
By using the Consistency Groups panel (Figure 10-163 on page 673). A Consistency Group is a container for mappings. You can add many mappings to a Consistency Group.
By using the FlashCopy Mappings panel (Figure 10-164 on page 674). A FlashCopy mapping defines the relationship between a source volume and a target volume.
2. Select the volume that you want to create the FlashCopy relationship for (Figure 10-166).
Note: To create many FlashCopy mappings at one time, select multiple volumes by holding down the Ctrl key and using the mouse to select the entries that you want.
Depending on whether or not you have already created the target volumes for your FlashCopy mappings, there are two options:
- If you have already created the target volumes, see Using existing target volumes on page 676.
- If you want the SVC to create the target volumes for you, see Creating new target volumes on page 680.
2. The New FlashCopy Mapping window opens (see Figure 10-168). In this window, you create the relationship between the source volume (the disk that is copied) and the target volume (the disk that receives the copy). A mapping can be created between any two volumes in a cluster. Select a volume in the Target Volumes column using the drop-down list for your selected Source Volume, and then click the Add button (Figure 10-195 on page 692). To create more relationships, repeat this action.
Important: The source and target volumes must be of equal size. So, for a given source volume, only targets of the appropriate size are visible.
Note: The volumes do not have to be in the same I/O group or storage pool.
3. Click Next after all the relationships that you want to create are registered (Figure 10-169).
4. On the next window, select one FlashCopy preset. The GUI provides three presets (Snapshot, Clone, Backup) to simplify the more common FlashCopy operations (Figure 10-170). The presets and their use cases are described here:
- Snapshot: Creates a copy-on-write point-in-time copy.
- Clone: Creates an exact replica of the source volume on a target volume. The copy can be changed without impacting the original volume.
- Backup: Creates a FlashCopy mapping that can be used to recover data or objects if the system experiences data loss. These backups can be copied multiple times from the source and target volumes.
For whichever preset you select, you can customize various advanced options. You access these settings by clicking Advanced Settings (Figure 10-171 on page 678). If you prefer not to customize these settings, go directly to step 5 on page 678.
You can customize the following options, as shown in Figure 10-171:
- Background Copy Rate: This determines the priority that is given to the copy process. A faster rate increases the priority of the process, which can affect the performance of other operations.
- Incremental: This copies only the parts of the source or target volumes that have changed since the last copy. Incremental copies reduce the completion time of the copy operation.
Note: Even if the type of the FlashCopy mapping is incremental, the first copy process copies all of the data from the source to the target volume.
- Delete after completion: This automatically deletes a FlashCopy mapping after the background copy is completed. Do not use this option when the background copy rate is set to zero (0).
- Cleaning Rate: This minimizes the amount of time that a mapping is in the stopping state. If the mapping has not completed, the target volume is offline while the mapping is stopping.
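These advanced options correspond to flags of the mkfcmap CLI command. The following sketch composes such a command; the flag names follow the V6.1 CLI as the author understands it, so verify them against your release, and the volume names are placeholders.

```python
def mkfcmap_cmd(source, target, copyrate=50, cleanrate=50,
                incremental=False, autodelete=False):
    # Sketch of the CLI command behind the New FlashCopy Mapping
    # dialog: -copyrate and -cleanrate mirror the two rate settings,
    # -incremental and -autodelete mirror the two check boxes.
    cmd = (f"svctask mkfcmap -source {source} -target {target} "
           f"-copyrate {copyrate} -cleanrate {cleanrate}")
    if incremental:
        cmd += " -incremental"
    if autodelete:
        cmd += " -autodelete"
    return cmd

print(mkfcmap_cmd("vol_S", "vol_T", incremental=True))
```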
5. If you want to include this FlashCopy mapping in a Consistency Group, in the window shown in Figure 10-172 on page 679, select Yes, add the mappings to a Consistency Group and also select the Consistency Group from the drop-down list.
If you do not want to include this FlashCopy mapping in a Consistency Group, select No, do not add the mappings to a Consistency Group (Figure 10-173).
6. Click Finish as shown in Figure 10-172 and Figure 10-173.
7. Check the result of this FlashCopy mapping (Figure 10-174 on page 680). For each FlashCopy mapping relationship created, a mapping name is automatically generated starting with fcmapX, where X is an available number. If needed, you can rename these mappings; see 10.7.4 on page 642 for more information about this topic.
2. On the New FlashCopy Mapping window (Figure 10-176 on page 681), you need to select one FlashCopy preset. The GUI interface provides three presets (Snapshot, Clone, Backup) to simplify the more common FlashCopy operations.
The presets and their use cases are described here:
- Snapshot: Creates a copy-on-write point-in-time copy.
- Clone: Creates an exact replica of the source volume on a target volume. The copy can be changed without impacting the original volume.
- Backup: Creates a FlashCopy mapping that can be used to recover data or objects if the system experiences data loss. These backups can be copied multiple times from the source and target volumes.
Figure 10-176
Whichever preset you select, you can customize various advanced options. To access these settings, click Advanced Settings (Figure 10-177 on page 682). If you prefer not to customize these settings, go directly to step 3 on page 682. You can customize the following options, as shown in Figure 10-177 on page 682:
- Background Copy Rate: This determines the priority that is given to the copy process. A faster rate increases the priority of the process, which can affect the performance of other operations.
- Incremental: This copies only the parts of the source or target volumes that have changed since the last copy. Incremental copies reduce the completion time of the copy operation.
Note: Even if the type of the FlashCopy mapping is incremental, the first copy process copies all of the data from the source to the target volume.
- Delete after completion: This automatically deletes a FlashCopy mapping after the background copy is completed. Do not use this option when the background copy rate is set to zero (0).
- Cleaning Rate: This minimizes the amount of time that a mapping is in the stopping state. If the mapping has not completed, the target volume is offline while the mapping is stopping.
3. If you want to include this FlashCopy mapping in a Consistency Group, in the next window select Yes, add the mappings to a Consistency Group and select the Consistency Group in the drop-down list (Figure 10-178). If you do not want to include this FlashCopy mapping in a Consistency Group, select No, do not add the mappings to a Consistency Group. Choose whichever option you prefer, then click Next (Figure 10-178).
4. In the next window (Figure 10-179 on page 683), select the storage pool that is used to automatically create new targets. You can choose to use the same storage pool that is used by the source volume, or you can select it from a list. In that case, select one storage pool and then click Next.
5. Select whether you want the target volume to use thin provisioning. There are three choices available, as shown in Figure 10-180 on page 684:
- Yes, in which case enter the following parameters:
  Real: Type the real size that you want to allocate. This size is the amount of disk space that is actually allocated. It can either be a percentage of the virtual size or a specific number in GB.
  Automatically Expand: Select auto expand, which allows the real disk size to grow as required.
  Warning Threshold: Type a percentage or select a specific size for the usage threshold warning. A warning is generated when the used disk capacity on the space-efficient copy first exceeds the specified threshold.
- No
- Inherit properties from source volume
Click Finish to complete the FlashCopy Mapping operation.
6. Check the result of this FlashCopy mapping, as shown in Figure 10-181. For each FlashCopy mapping relationship created, a mapping name is automatically generated starting with fcmapX, where X is an available number. If needed, you can rename these mappings; see 10.7.4 on page 642.
At this point, the FlashCopy mapping is ready to be used. Tip: You can invoke FlashCopy from the SVC GUI, but using the SVC GUI might be impractical if you plan to handle a large number of FlashCopy mappings or Consistency Groups periodically, or at varying times. In such cases, creating a script by using the CLI might be more convenient.
4. A volume is created as a target volume for this snapshot in the same pool as the source volume. The FlashCopy mapping is created and it is started. You can check the FlashCopy progress in the Progress column or in the Running Tasks column as shown in Figure 10-183 on page 686.
4. A volume is created as a target volume for this clone in the same pool as the source volume. The FlashCopy mapping is created and started as shown in Figure 10-185. You can check the FlashCopy progress in the Progress column or in the Running Tasks column.
Note: The backup preset creates a point-in-time replica of the production data. After the copy completes, the backup view can be refreshed from the production data, with minimal copying of data from the production volume to the backup volume.
Backup preset parameters:
- Background Copy Rate: 50
- Incremental: Yes
- Delete after completion: No
- Cleaning Rate: 50
- Target pool is the primary copy source pool
1. From the SVC Welcome panel, click Copy Services in the left menu and then click the FlashCopy panel.
2. Select the volume that you want to back up.
3. Click New Backup in the Actions menu (Figure 10-186).
4. A volume is created as a target volume for this backup in the same pool as the source volume. The FlashCopy mapping is created and started. You can check the FlashCopy progress in the Progress column or in the Running Tasks column (Figure 10-187 on page 689).
3. Enter the desired FlashCopy Consistency Group name and click Create (Figure 10-190).
Consistency Group name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The name can be between one and 63 characters in length.
4. Figure 10-191 on page 691 shows the result.
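Creating the Consistency Group corresponds to the mkfcconsistgrp CLI command. The sketch below also enforces the naming rule quoted above before composing the command; the group name is a placeholder, and the syntax should be verified against your SVC release.

```python
import re

def mkfcconsistgrp_cmd(name):
    # Enforce the naming rule quoted in the text (A-Z, a-z, 0-9,
    # underscore, 1 to 63 characters) before composing the command.
    if not re.fullmatch(r"[A-Za-z0-9_]{1,63}", name):
        raise ValueError("invalid Consistency Group name")
    return f"svctask mkfcconsistgrp -name {name}"

print(mkfcconsistgrp_cmd("FCCG_1"))
```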
3. If you select a Consistency Group, click New FlashCopy Mapping in the Actions menu (Figure 10-193).
If you did not select a Consistency Group, click New FlashCopy Mapping (Figure 10-194).
Consistency Groups: If no Consistency Group is defined, the mapping is a stand-alone mapping, and it can be prepared and started without affecting other mappings. All mappings in the same Consistency Group must have the same status to maintain the consistency of the group.
4. The New FlashCopy Mapping window opens (Figure 10-195). In this window, you create the relationships between the source volumes (the disks that are copied) and the target volumes (the disks that receive the copy). A mapping can be created between any two volumes in a cluster.
Important: The source and target volumes must be of equal size.
Note: The volumes do not have to be in the same I/O group or storage pool.
5. Select a volume in the Source Volumes column using the drop-down list, then select a volume in the Target Volumes column using the drop-down list, and click Add, as shown in Figure 10-195 on page 692. Repeat this action to create other relationships. To remove a relationship that has been created, use the button next to it.
Important: The source and target volumes must be of equal size. So, for a given source volume, only the targets with the appropriate size are shown.
6. Click Next after all the relationships that you want to create are registered (Figure 10-196).
7. In the next window, you need to select one FlashCopy preset. The GUI provides three presets (Snapshot, Clone, Backup) to simplify the more common FlashCopy operations (Figure 10-197). The presets and their use cases are described here:
- Snapshot: Creates a copy-on-write point-in-time copy.
- Clone: Creates an exact replica of the source volume on a target volume. The copy can be changed without impacting the original volume.
- Backup: Creates a FlashCopy mapping that can be used to recover data or objects if the system experiences data loss. These backups can be copied multiple times from the source and target volumes.
Whichever preset you select, you can customize various advanced options. To access these settings, click the Advanced Settings button.
If you prefer not to customize these settings, go directly to step 8. You can customize the following options, as shown in Figure 10-198:
- Background Copy Rate: This determines the priority that is given to the copy process. A faster rate increases the priority of the process, which might affect the performance of other operations.
- Incremental: This copies only the parts of the source or target volumes that have changed since the last copy. Incremental copies reduce the completion time of the copy operation.
Note: Even if the type of the FlashCopy mapping is incremental, the first copy process copies all of the data from the source to the target volume.
- Delete after completion: This automatically deletes a FlashCopy mapping after the background copy is completed. Do not use this option when the background copy rate is set to zero (0).
- Cleaning Rate: This minimizes the amount of time that a mapping is in the stopping state. If the mapping has not completed, the target volume is offline while the mapping is stopping.
8. If you did not create these FlashCopy mappings from a Consistency Group (see step 3 on page 692), you will have to confirm your choice by selecting No, do not add the mappings to a Consistency Group (Figure 10-199 on page 695).
9. Click Finish as shown in Figure 10-198 on page 694.
10. Check the result of this FlashCopy mapping in the Consistency Groups window, as shown in Figure 10-200. For each FlashCopy mapping relationship created, a mapping name is automatically generated starting with fcmapX, where X is an available number. If needed, you can rename these mappings; see 10.7.4 on page 642.
Tip: You can invoke FlashCopy from the SVC GUI, but using the SVC GUI might be impractical if you plan to handle a large number of FlashCopy mappings or Consistency Groups periodically, or at varying times. In this case, creating a script by using the CLI might be more convenient.
In the Dependent Mappings window (Figure 10-202), you can see the dependent mapping for a given volume or a FlashCopy mapping. If you click one of these volumes, you can see its properties. For more information about volume properties, see 10.7.1, Volume information on page 631.
3. Click Move to Consistency Group in the Actions menu (Figure 10-203).
Tip: You can also right-click a FlashCopy mapping and select Move to Consistency Group from the list.
4. In the Move a FlashCopy Mapping to a Consistency Group window, select the Consistency Group for this FlashCopy mapping using the drop-down list (Figure 10-204).
In the Remove FlashCopy Mapping from Consistency Group window, click Remove (Figure 10-206).
Tip: You can also right-click a FlashCopy mapping and select Edit Properties from the list.
4. In the Edit Properties window, you can modify the following parameters for a selected FlashCopy mapping, as shown in Figure 10-208:
- Background Copy Rate: This determines the priority that is given to the copy process. A faster rate increases the priority of the process, which might affect the performance of other operations.
- Cleaning Rate: This minimizes the amount of time that a mapping is in the stopping state. If the mapping has not completed, the target volume is offline while the mapping is stopping.
4. In the Rename Mapping window, type the new name that you want to assign to the FlashCopy mapping and click Rename (Figure 10-210 on page 700).
FlashCopy name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The mapping name can be between one and 63 characters in length.
3. Type the new name that you want to assign to the Consistency Group and press Enter (Figure 10-212).
Consistency Group name: The name can consist of the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the underscore (_) character. The name can be between one and 15 characters in length. However, the name cannot start with a number, the dash or the underscore.
4. From the Consistency Group panel, the new Consistency Group name is displayed.
4. The Delete Mapping window opens as shown in Figure 10-214 on page 702. In the field Verify the number of FlashCopy mappings you are deleting, enter a value that matches the number of mappings that you want to remove. This verification has been added to guard against deleting the wrong mappings. If you still have target volumes that are inconsistent with the source volumes and you definitely want to delete these FlashCopy mappings, select the option Delete the FlashCopy mapping even when the data on the target volume is inconsistent with the source volume. Click Delete to complete the operation (Figure 10-214).
4. The Warning window opens (Figure 10-216). Click OK to complete the operation.
4. You can check the FlashCopy progress in the Progress column of the table or in the Running Tasks section (Figure 10-218).
5. After the task is completed, the FlashCopy status is in a Copied state (Figure 10-219).
To start the FlashCopy Consistency Group, perform these steps:
1. From the SVC Welcome window, click Copy Services and then click the Consistency Groups panel.
2. From the left panel, select the Consistency Group that you want to start (Figure 10-220).
3. Click Start in the Actions menu (Figure 10-221) to start the FlashCopy Consistency Group.
4. You can check the FlashCopy Consistency Group progress in the Progress column or in the Running Tasks section (Figure 10-222 on page 706).
5. After the task is completed, the FlashCopy status is in a Copied state (Figure 10-223).
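Starting the group corresponds to the startfcconsistgrp CLI command. The sketch below composes it; the -prep flag (which prepares the mappings before starting) and the group name are stated as assumptions to verify against your SVC release.

```python
def startfcconsistgrp_cmd(group, prep=True):
    # Sketch of the CLI behind the GUI Start action for a Consistency
    # Group; -prep prepares (flushes) the mappings before starting.
    flag = "-prep " if prep else ""
    return f"svctask startfcconsistgrp {flag}{group}"

print(startfcconsistgrp_cmd("FCCG_1"))
```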
Perform the following steps to stop a FlashCopy mapping:
1. From the SVC Welcome panel, click Copy Services and then click the FlashCopy, Consistency Groups, or FlashCopy Mappings panel.
2. Select the FlashCopy mapping that you want to stop in the table.
3. Click Stop in the Actions menu (Figure 10-224) to stop the FlashCopy mapping.
4. Notice that the FlashCopy mapping status has changed to Stopped (Figure 10-225).
5. The target volume is now shown as Offline in the Volumes menu (Figure 10-226).
Perform the following steps to stop a FlashCopy Consistency Group:
1. From the SVC Welcome panel, click Copy Services and then click the Consistency Groups panel.
2. In the left side of this panel, select the Consistency Group that you want to stop.
3. Click Stop in the Actions menu (Figure 10-227) to stop the FlashCopy Consistency Group.
4. Notice that the FlashCopy Consistency Group status has now changed to Stopped (Figure 10-228 on page 709).
This capability enables you to reverse the direction of a FlashCopy map without having to remove existing maps, and without losing the data from the target as shown in Figure 10-230.
2. The Partnerships panel is shown in Figure 10-232 on page 712. Partnerships can be used to create a disaster recovery environment, or to migrate data between clusters that are in different locations. Partnerships define an association between a local cluster and a remote cluster.
10.9.2 Creating the SVC partnership between two remote SVC Clusters
We perform this operation to create the partnership on both clusters.
Note: If you are creating an intracluster Metro Mirror, do not perform this next step to create the SVC cluster Metro Mirror partnership. Instead, go to 10.9.3, Creating stand-alone remote copy relationships on page 716.
To create a partnership between the SVC clusters using the GUI, follow these steps:
1. From the SVC Welcome panel, click Copy Services → Partnerships. The Partnerships panel opens as shown in Figure 10-237.
2. Click the New Partnership button to create a new partnership with another cluster, as shown in Figure 10-238.
3. On the New Partnership window (Figure 10-239 on page 715), complete the following elements:
- Select an available cluster in the drop-down list. If there is no candidate, you will receive the following error message: This cluster does not have any candidates.
- Enter a bandwidth (MBps) to be used by the background copy process between the clusters in the partnership. Set this value so that it is less than or equal to the bandwidth that can be sustained by the communication link between the clusters. The link must be able to sustain any host requests in addition to the rate of background copy.
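As a rough illustration of this sizing rule, the following sketch (with assumed example figures, not values from this book) computes the largest background copy bandwidth that still leaves the link enough headroom for host write traffic.

```python
# Sketch: choose a background copy bandwidth (MBps) that leaves room on the
# inter-cluster link for host write traffic. All figures are assumed examples.

def max_background_copy_mbps(link_mbps: float, peak_host_write_mbps: float) -> float:
    """Background copy must fit in whatever the link can sustain
    after peak host writes are accounted for (never negative)."""
    return max(0.0, link_mbps - peak_host_write_mbps)

# A 400 MBps link carrying up to 150 MBps of host writes leaves 250 MBps:
print(max_background_copy_mbps(400, 150))  # 250.0
```

If the result is 0, the link is already saturated by host traffic and a larger link is needed before enabling background copy.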
4. Click the Create button to confirm the partnership relation. As shown in Figure 10-240, our partnership is in the Partially Configured state, because we have only performed the work on one side of the partnership so far.
To fully configure the cluster partnership, we must perform the same steps on the other SVC cluster (ITSO-CLS2) as we did on this one (ITSO-CLS1). For simplicity and brevity, only the two most significant windows are shown when the partnership is fully configured.

5. Launching the SVC GUI for ITSO-CLS2, we select ITSO-CLS1 for the cluster partnership and specify the available bandwidth for the background copy, again 200 MBps, and then click Create.

Now that both sides of the SVC cluster partnership are defined, the resulting windows shown in Figure 10-241 and Figure 10-242 on page 716 confirm that our cluster partnership is now in the Fully Configured state. Figure 10-241 shows cluster ITSO-CLS1.
3. In the New Relationship window, select the type of relationship that you want to create (Figure 10-244 on page 717):
- Metro Mirror: This type of remote copy creates a synchronous copy of data from a primary volume to a secondary volume. A secondary volume can be located either on the same cluster or on another cluster.
- Global Mirror: This type provides a consistent copy of a source volume on a target volume. Data is written to the target volume asynchronously, so that the copy is continuously updated, but the copy might not contain the last few updates if a disaster recovery operation is performed.
Then, click Next.
Figure 10-244 Select the type of relation that you want to create
4. In the next window, select where the auxiliary volumes are located, as shown in Figure 10-245:
- On this system: the volumes are located locally.
- On another system: in this case, select the remote system from the drop-down list.
5. In this window you can create new relationships. Select a volume in the Master drop-down list, then select a volume in the Auxiliary drop-down list for this master and click Add (Figure 10-246 on page 718). If needed, repeat this action to create other relationships.

Important: The master and auxiliary volumes must be of equal size. Therefore, for a given source volume, only the targets with the appropriate size are returned.
To remove a relationship that you created, use the button shown in Figure 10-246. After all the relationships that you want to create are registered, click Next.
6. Select whether the volumes are already synchronized, as shown in Figure 10-247, then click Next.
7. Finally, on the last window, select whether you want to start copying the data, as shown in Figure 10-248, and then click Finish.
The relationships are visible in the Remote Copy panel. If you selected to copy the data, you can see that their status is Inconsistent Copying. You can check the copying progress in the Running tasks as shown in Figure 10-249 on page 719.
After the copy is finished, the relationship status changes to Consistent Synchronized.
3. Enter a name for the Consistency Group and then click Next (Figure 10-251). Note: If you do not provide a name, the SVC automatically generates the name rccstgrpX, where X is the ID sequence number that is assigned by the SVC internally. You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The Consistency Group can be between 1 and 15 characters in length.
4. In the next window, select where the auxiliary volumes are located, as shown in Figure 10-252:
- On this system: the volumes are located locally.
- On another system: in that case, select the remote system in the drop-down list.
After you make a selection, click Next.
5. Select whether you want to add relationships to this group, as shown in Figure 10-253. There are two options:
- If you answer Yes, click Next to continue the wizard and go to step 6.
- If you answer No, click Finish to create an empty Consistency Group that can be used later.
6. Select the type of relationship that you want to create (Figure 10-254):
- Metro Mirror: This type of remote copy creates a synchronous copy of data from a primary volume to a secondary volume. A secondary volume can be located either on the same cluster or on another cluster.
- Global Mirror: This type provides a consistent copy of a source volume on a target volume. Data is written to the target volume asynchronously, so that the copy is continuously updated, but the copy might not contain the last few updates if a disaster recovery operation is performed.
Click Next.
Figure 10-254 Select the type of relation that you want to create
7. As shown in Figure 10-255, you can optionally select existing relationships to add to the group, then click Next. Note: To select multiple relationships, hold down Ctrl and use your mouse to select the entries you want to include.
8. In this window, you can create new relationships. Select a volume in the Master drop-down list, then select a volume in the Auxiliary drop-down list for this master. Click Add, as shown in Figure 10-256. Repeat this action to create other relationships if needed.

Important: The master and auxiliary volumes must be of equal size. Therefore, for a given source volume, only the targets with the appropriate size are included.

To remove a relationship that you created, use the button shown in Figure 10-256. After all the relationships that you want to create are registered, click Next.
9. Select whether the volumes are already synchronized, as shown in Figure 10-257, then click Next.
10. Finally, on the last window, select whether you want to start copying the data, as shown in Figure 10-258 on page 723, and then click Finish.
11. The relationships are visible in the Remote Copy panel. If you selected to copy the data, you can see that their status is Inconsistent Copying. You can check the copying progress in the Running tasks, as shown in Figure 10-259.
After the copies are completed, the relationships and the Consistency Group change to the Consistent Synchronized status.
3. Type the new name that you want to assign to the Consistency Group and press Enter (Figure 10-261).
Consistency Group name: The Consistency Group name can consist of the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the underscore (_) character. The name can be between one and 15 characters in length. However, the name cannot start with a number, the dash, or the underscore.
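The naming rules in the note above can be captured in a short validation sketch. This helper is our own illustration, not part of the product:

```python
import re

# Consistency Group name rules from the note above: letters, digits,
# the dash, and the underscore; 1 to 15 characters; the name must not
# start with a digit, a dash, or an underscore.
_NAME_RE = re.compile(r"^[A-Za-z][A-Za-z0-9_-]{0,14}$")

def is_valid_cg_name(name: str) -> bool:
    return bool(_NAME_RE.fullmatch(name))

print(is_valid_cg_name("CG_Test-1"))  # True
print(is_valid_cg_name("1stGroup"))   # False (starts with a number)
print(is_valid_cg_name("A" * 16))     # False (longer than 15 characters)
```

The same pattern, with the length and character set adjusted, applies to the other object-naming rules quoted throughout this chapter.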
4. The new Consistency Group name is displayed in the Remote Copy panel.
4. In the Rename Relationship window, type the new name that you want to assign to the Remote Copy relationship and click OK (Figure 10-263).
Remote Copy relationship name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The Remote Copy name can be between one and 63 characters in length.
5. In the Add Relationship to Consistency Group window, select the Consistency Group for this Remote Copy relationship using the drop-down list (Figure 10-265).
5. In the Remove Relationship From Consistency Group window, click Remove (Figure 10-267).
5. If the relationship was not consistent, the Remote Copy progress can be checked in the Running tasks (Figure 10-269).
6. After the task is completed, the Remote Copy relationship is in the Consistent Synchronized state (Figure 10-219 on page 704).
3. Click Start in the Actions menu (Figure 10-272) to start the Remote Copy Consistency Group.
4. You can check the Remote Copy Consistency Group progress in the Running tasks as shown in Figure 10-273 on page 730.
5. After the task is completed, the Consistency Group and all its relationship statuses are in a Consistent Synchronized state (Figure 10-274).
Important: When the copy direction is switched, it is crucial that no outstanding I/O exists to the volume that transitions from primary to secondary, because all I/O to that volume will be inhibited when it becomes the secondary. Therefore, careful planning is required before switching the copy direction of a Remote Copy relationship.

Perform the following steps to switch a Remote Copy relationship:
1. From the SVC Welcome panel, click Copy Services and then Remote Copy.
2. In the left column, select Not in a Group.
3. Select the Remote Copy relationship that you want to switch in the table.
4. Click Switch in the Actions menu (Figure 10-275) to switch the copy direction.

Tip: You can also right-click a relationship and select Switch from the list.
5. A Warning window opens (Figure 10-276). A confirmation is needed to switch the Remote Copy relationship direction. As shown in Figure 10-276, the Remote Copy is switched from the master volume to the auxiliary volume. Click OK to confirm your choice.
6. The copy direction is now switched, as shown in Figure 10-269 on page 728. The auxiliary volume is now accessible and is indicated as the primary volume. Synchronization now runs from the auxiliary to the master volume.
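The behavior described in this procedure can be sketched as a tiny state model. The class and attribute names below are our own invention for illustration; they are not an SVC API:

```python
# Minimal model of switching a Remote Copy relationship's direction.
# Writes are only allowed on the current primary; the secondary is
# inhibited until the direction is switched back.

class RemoteCopyRelationship:
    def __init__(self, master: str, auxiliary: str):
        self.master, self.auxiliary = master, auxiliary
        self.primary = master  # the master is the primary at creation

    def switch(self) -> None:
        """Reverse the copy direction (the GUI Switch action)."""
        self.primary = self.auxiliary if self.primary == self.master else self.master

    def can_write(self, volume: str) -> bool:
        return volume == self.primary

rel = RemoteCopyRelationship("MM_Master_1", "MM_Aux_1")
rel.switch()
print(rel.primary)                   # MM_Aux_1
print(rel.can_write("MM_Master_1"))  # False: the master is now the secondary
```

This is exactly why the Important note above demands that host I/O to the former primary be quiesced first: the moment the switch completes, writes to it are rejected.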
Tip: You can also right-click a relationship and select Switch from the list.
4. A Warning window opens (Figure 10-279 on page 733). A confirmation is needed to switch the Consistency Group direction. In the example shown in Figure 10-279 on page 733, the Consistency Group is switched from the master group to the auxiliary group. Click OK to confirm your choice.
5. The Remote Copy direction is now switched, as shown in Figure 10-280. The auxiliary volume is now accessible and is indicated as the primary volume. Synchronization now runs from the auxiliary to the master volume.
5. The Stop Remote Copy Relationship window opens (Figure 10-282). To allow secondary read/write access, select Allow secondary read/write access, then click Stop Relationship to confirm your choice.
6. The new relationship status can be checked as shown in Figure 10-283. The relationship is now stopped.
4. Click Stop in the Actions menu (Figure 10-284) to stop the Remote Copy Consistency Group. Tip: You can also right-click a relationship and select Stop from the list.
5. The Stop Remote Copy Consistency Group window opens (Figure 10-285). To allow secondary read/write access, select Allow secondary read/write access, then click Stop Consistency Group to confirm your choice.
6. The new relationship status can be checked as shown in Figure 10-286. The relationship is now stopped.
4. The Delete Relationship window opens (Figure 10-288 on page 737). In the Verify the number of relationships you are deleting field, enter a value matching the number of relationships that you want to remove. This verification was added to prevent deleting the wrong relationships. Click Delete to complete the operation (Figure 10-288 on page 737).
4. A Warning window opens as shown in Figure 10-290. Click OK to complete the operation.
By moving the mouse over the tower in the left part of the panel, you can view the global storage usage, as shown in Figure 10-292 on page 739. Using this method, you can monitor the Physical Capacity and the Used Capacity of your cluster.
2. When you click the Info tab, the following information is displayed:
- General information: Name, ID, Location
- Capacity information: Total MDisk Capacity, Space in MDisk Groups, Space Allocated to Volumes, Total Free Space, Total Volume Capacity, Total Volume Copy Capacity, Total Used Capacity, Total Over Allocation
Cluster name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The cluster name can be between one and 63 characters in length.

4. Click Save.
5. A Warning window opens, as shown in Figure 10-296. If you are using the iSCSI protocol, changing either name also changes the iSCSI Qualified Name (IQN) of all of the nodes in the cluster and might require reconfiguration of all iSCSI-attached hosts. This is because the IQN for each node is generated using the cluster and node names.
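To illustrate why a rename can affect iSCSI hosts, the sketch below derives a node IQN from the cluster and node names. The prefix shown (iqn.1986-03.com.ibm:2145) is the form commonly used by SVC nodes, but treat the exact format and the lowercasing here as assumptions for illustration; check your cluster's actual IQNs before reconfiguring hosts.

```python
# Sketch: how a node IQN can be derived from cluster and node names.
# The prefix and the lowercasing are assumptions for illustration only.

def node_iqn(cluster_name: str, node_name: str) -> str:
    return f"iqn.1986-03.com.ibm:2145.{cluster_name.lower()}.{node_name.lower()}"

before = node_iqn("ITSO-CLS1", "Node1")
after = node_iqn("ITSO-NEW", "Node1")  # renaming the cluster changes every node's IQN
print(before)           # iqn.1986-03.com.ibm:2145.itso-cls1.node1
print(before != after)  # True
```

Because the cluster name is embedded in every node's IQN, a single cluster rename invalidates the target names that all iSCSI-attached hosts have configured.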
SVC uninterruptible power supply units are designed to survive at least two power failures in a short time; after that, the nodes refuse to start until the batteries have sufficient power to survive another immediate power failure. If, during your maintenance activities, the uninterruptible power supply unit detected a loss of power multiple times (and thus the nodes started and shut down more than one time in a short time frame), you might find that you have unknowingly drained the uninterruptible power supply unit batteries. In that case, you must wait until they are sufficiently charged before the nodes will start.

Important: Before shutting down a cluster, quiesce all I/O operations that are destined for this cluster, because you will lose access to all of the volumes that are provided by this cluster. Failure to do so might result in failed I/O operations being reported to your host operating systems. There is no need to quiesce all I/O operations if you are only shutting down one SVC node.

Begin the process of quiescing all I/O to the cluster by stopping the applications on your hosts that are using the volumes that are provided by the cluster. If you are unsure which hosts are using the volumes that are provided by the cluster, follow the procedure explained in 9.5.21, Showing the host to which the volume is mapped on page 479, and repeat this procedure for all volumes.

From the System Status panel, perform the following steps to shut down your cluster:
1. Click the cluster name as shown in Figure 10-297.
2. Click the Manage tab and then click Shut Down Cluster as shown in Figure 10-298 on page 743.
3. The Confirm Cluster Shutdown cluster window (Figure 10-299) opens. You will receive a message asking you to confirm whether you want to shut down the cluster. Ensure that you have stopped all FlashCopy mappings, Remote Copy relationships, data migration operations, and forced deletions before continuing. Click Yes to begin the shutdown process. Important: At this point, you will lose administrative contact with your cluster.
You have now completed the required tasks to shut down the cluster. At this point you can shut down the uninterruptible power supply units by pressing the power buttons on their front panels. Tip: When you shut down the cluster, it will not automatically start. You must manually start the cluster. If the cluster shuts down because the uninterruptible power supply unit has detected a loss of power, it will automatically restart when the uninterruptible power supply unit detects that the power has been restored (and the batteries have sufficient power to survive another immediate power failure).
Note: To restart the SVC cluster, you must first restart the uninterruptible power supply units by pressing the power buttons on their front panels. After they are on, go to the service panel of one of the nodes within your SVC cluster and press the power on button, releasing it quickly. After it is fully booted (for example, displaying Cluster: on line 1 and the cluster name on line 2 of the SVC front panel), you can start the other nodes in the same way. As soon as all nodes are fully booted and you have reestablished administrative contact using the GUI, your cluster is fully operational again.
2. Click the Manage tab and then click Upgrade Cluster as shown in Figure 10-301.
2. Click the Info tab to obtain the following information:
- General information: Name, ID, Number of Nodes, Number of Hosts, Number of Volumes
- Memory information: FlashCopy, Global Mirror and Metro Mirror, Volume Mirroring, RAID
2. Click the Manage tab.
3. From this tab, as shown in Figure 10-304 on page 747, you can modify:
- The I/O Group name.
  I/O Group name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The I/O Group name can be between one and 63 characters in length.
- The amount of memory for the following features:
  - FlashCopy (default 20 MB, maximum 512 MB)
  - Global Mirror and Metro Mirror (default 20 MB, maximum 512 MB)
  - Volume Mirroring (default 20 MB, maximum 512 MB)
  - RAID (default 40 MB, maximum 512 MB)
Important: For Volume mirroring, Copy Services (FlashCopy, Metro Mirror, and Global Mirror) and RAID operations, memory is traded against memory that is available to the cache. The amount of memory can be decreased or increased. The maximum combined memory size across all features is 552 MB.
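These limits can be expressed as a small validation sketch. The helper is our own illustration; the feature names and figures are taken from the list above:

```python
# Sketch: validate per-I/O-group bitmap memory settings against the stated
# limits: each feature at most 512 MB, combined total at most 552 MB.

PER_FEATURE_MAX_MB = 512
COMBINED_MAX_MB = 552

def memory_settings_valid(settings_mb: dict) -> bool:
    if any(v < 0 or v > PER_FEATURE_MAX_MB for v in settings_mb.values()):
        return False
    return sum(settings_mb.values()) <= COMBINED_MAX_MB

defaults = {"FlashCopy": 20, "Mirror": 20, "VolumeMirroring": 20, "RAID": 40}
print(memory_settings_valid(defaults))                        # True (100 MB total)
print(memory_settings_valid({**defaults, "FlashCopy": 512}))  # False (592 MB total)
```

Note that each feature individually may be set to 512 MB, but not all of them at once: the combined 552 MB ceiling is the binding constraint, and whatever you allocate here is taken away from the cache.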
2. Click the Info tab to obtain the following information:
- General information: Name, ID, Status, Hardware, WWNN, I/O Group, Configuration node, Failover Partner node, iSCSI Name (IQN), iSCSI Alias, Failover iSCSI Name, Failover iSCSI Alias (if iSCSI failover is active), Serial Number, Unique ID
- Ports information: WWPNs, Status, Speed
- Redundancy information
- iSCSI information
- UPS information
3. Click the VPD tab to display the vital product data (VPD) for this node. Note: The amount of information in the vital product data (VPD) tab is extensive, so we do not describe it in this section. For the list of these elements, refer to Command-Line Interface User's Guide - Version 6.1.0 and search for the lsnodevpd command.
2. Click the Manage tab. 3. Specify a new name for the node as shown in Figure 10-307.
Node name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The node name can be between one and 63 characters in length.

4. Click Save.
5. A Warning window opens, as shown in Figure 10-308 on page 750. This is because the iSCSI Qualified Name (IQN) for each node is generated using the cluster and node names. If you are using the iSCSI protocol, changing either name also changes the IQN of all of the nodes in the cluster and might require reconfiguration of all iSCSI-attached hosts.
6. To confirm that you want to change the node name, click OK.
Important: Keep in mind that you need at least two nodes in an I/O group. Add your available nodes in sequence.

2. Select the node that you want to add to your cluster using the drop-down list. Change its name, if needed, and click Add Node as shown in Figure 10-310 on page 751.
3. As shown in Figure 10-311, a window appears to inform you about the time required to add a node to the cluster.
4. If you want to add it, click OK. Important: When a node is added to a cluster, it displays a state of adding and a yellow color, as shown in Figure 10-293 on page 739. It can take as long as 30 minutes for the node to be added to the cluster, particularly if the software version of the node has changed.
2. Click the Manage tab and then click Remove node as shown in Figure 10-313.
3. A Warning window opens, as shown in Figure 10-314 on page 753. By default, the cache is flushed before the node is deleted to prevent data loss if a failure occurs on the other node in the I/O group. In certain circumstances, such as when the system is already degraded, you can take the specified node offline immediately without flushing the cache or ensuring that data loss does not occur. To do so, select the Bypass check for volumes that will go offline, and remove the node immediately without flushing its cache check box.
If this node is the last node in the cluster, the warning message is different, as shown in Figure 10-315. Before you delete the last node in the cluster, ensure that you really want to destroy the cluster: removing the last node destroys the cluster, and the user interface and any open CLI sessions are lost.
4. If you want to remove it, click OK. This makes the node a candidate to be added back into this cluster or into another cluster.
10.13 Troubleshooting
Events detected by the system are saved in an event log. When an entry is made in this event log, the condition is analyzed and classified to help you diagnose problems.
The highest-priority event is indicated, along with information about how long ago the event occurred. It is important to note that if an event is reported, you must select the event and run a fix procedure.
Event properties
To retrieve properties and sense about a specific event, perform the following steps: 1. Select an event in the table. 2. Click Properties in the Actions menu (Figure 10-317 on page 755).
Tip: You can also obtain access to the Properties action by right-clicking an event. 3. The Properties and Sense Data for Event sequence_number window (where sequence_number is the sequence number of the event that you selected in the previous step) opens, as shown in Figure 10-318 on page 756.
Tip: From the Properties and Sense Data for Event window, you can use the Previous and Next buttons to navigate between events. 4. Click Close to return to the Recommended Actions panel.
Tip: You can also obtain access to the Run Fix Procedure action by right-clicking an event.
3. The Directed Maintenance Procedure window opens as shown in Figure 10-320. Follow the wizard and its steps to fix the event.
Note: We do not describe all the possible steps here, because the steps depend on the event.
To access this panel, from the Welcome panel shown in Figure 10-1 on page 580, select Troubleshooting and then Event Log.
Certain alerts have a four-digit error code and a fix procedure that helps you fix the problem. Other alerts also require action, but do not have a fix procedure. Messages are fixed when you acknowledge reading them.
Filtering events
You can filter events in different ways. Filtering can be based on event status (see Basic filtering), or over a period of time (see Time filtering on page 759). Certain events require a certain number of occurrences in 25 hours before they are displayed as unfixed. If they do not reach this threshold in 25 hours, they are flagged as expired. Monitoring events are below the coalesce threshold and are usually transient. You can also sort events by time or error code. When you sort by error code, the most serious events (those with the lowest numbers) are displayed first.
Basic filtering
The event log display can be filtered in three ways using the drop-down menu in the upper right corner of the panel (see Figure 10-322 on page 759):
- Default (events requiring attention): displays all unfixed alerts and messages
- Expanded (include fixed events): displays all alerts and messages
- Show all (include below-threshold events): displays all events: alerts, messages, monitoring, and expired
Time filtering
There are two ways to perform time filtering: by selecting a start date and time and an end date and time, or by selecting an event and showing the entries within a certain period of time of this event. In this section we demonstrate both methods.

By selecting a start date and time, and an end date and time

To use this time frame filter, perform the following steps: Click Filter by Date in the Actions menu (Figure 10-323).
Tip: You can also obtain access to the Filter by Date action by right-clicking an event. The Date/Time Filter window opens (Figure 10-324). From this window, select a start date and time and an end date and time.
Click Filter and Close. Your panel is now filtered based on the time frame. To disable this time frame filter, click Reset Date Filter in the Actions menu (Figure 10-325 on page 760).
Select an event and show the entries within a certain period of time of this event

To use this time frame filter, perform the following steps:
a. Select an event in the table.
b. In the Actions menu, click Show entries within..., select minutes, hours, or days, and finally select a value (Figure 10-326).
Tip: You can also access the Show entries within... action by right-clicking an event. c. Your window is now filtered based on the time frame (Figure 10-327).
To disable this time frame filter, click Reset Date Filter in the Actions menu (Figure 10-328).
Event properties
To retrieve properties and sense about a specific event, perform the following steps: 1. Select an event in the table. 2. Click Properties in the Actions menu (Figure 10-329).
Tip: You can also access the Properties action by right-clicking an event.
3. The Properties and Sense Data for Event sequence_number window (where sequence_number is the sequence number of the event that you selected in the previous step) opens, as shown in Figure 10-318 on page 756.
Tip: From the Properties and Sense Data for Event window, you can use the Previous and Next buttons to navigate between events. 4. Click Close to return to the Event log.
Tip: You can also access the Mark as fixed action by right-clicking an event. 3. The Warning window opens (Figure 10-332).
4. Click OK to confirm your choice. Note: To be able to see fixed events, you need to filter the event log panel using the Expanded (include fixed events) filter profile or the Show all (include below-threshold events) filter profile.
Tip: You can also access the Mark as unfixed action by right-clicking an event.
Tip: You can also access the Run Fix Procedure action by right-clicking an alert.
3. The Directed Maintenance Procedure window opens (Figure 10-336 on page 765). You must follow the wizard and its steps to fix the event. Note: We do not describe all the various steps, because they depend on the alert.
Clear log
To clear the logs, perform the following steps: 1. Click Clear Log (Figure 10-337).
2. A Warning window opens (Figure 10-338). From this window, you must confirm that you want to delete the logs.
2. A Download Support Packages window opens (Figure 10-341 on page 767). From there, select which kind of logs you want to download:
- Standard logs: contain the most recent logs that have been collected for the cluster. These logs are the most commonly used by support to diagnose and solve problems.
- Standard logs plus one existing statesave: contain the standard logs for the cluster and the most recent statesave from any of the nodes in the cluster. Statesaves are also known as dumps or livedumps.
- Standard logs plus most recent statesave from each node: contain the standard logs for the cluster and the most recent statesave from each node in the cluster.
- Standard logs plus new statesaves: generates a new statesave (livedump) for all the nodes in the cluster and packages them with the most recent logs.
Note: Depending on your choice, this action can take several minutes to complete.
3. Click Download to confirm your choice (Figure 10-341). 4. Finally, select where you want to save these logs (Figure 10-342).
2. On the detailed view, select the node from which you want to download logs using the drop-down menu in the upper right corner of the panel (Figure 10-344 on page 768).
3. Select the package or packages that you want to download (Figure 10-345).
Tip: To select multiple packages, hold down the Ctrl key and use the mouse to select the entries you want to include.
Tip: You can also access the Download action by right-clicking a package.
5. Finally, select where you want to save these logs on your workstation.
Tip: You can also delete packages by clicking Delete in the Actions menu.
Each user account has a name, a role, and a password assigned to it, which differs from the Secure Shell (SSH) key-based approach that is used by the CLI. We describe authentication in detail in 2.8.6, User authentication on page 41. The role-based security feature organizes the SVC administrative functions into groups, which are known as roles, so that permissions to execute the various functions can be granted differently to the separate administrative users. There are four major roles and one special role. Table 10-1 on page 771 lists the user roles.
Table 10-1 Authority roles

- Security Admin (superusers): All commands.
- Administrator (administrators that control the SVC): All commands except the following svctask commands: chauthservice, mkuser, rmuser, chuser, mkusergrp, rmusergrp, chusergrp, and setpwdreset.
- Copy Operator: All svcinfo commands and the following svctask commands: prestartfcconsistgrp, startfcconsistgrp, stopfcconsistgrp, chfcconsistgrp, prestartfcmap, startfcmap, stopfcmap, chfcmap, startrcconsistgrp, stoprcconsistgrp, switchrcconsistgrp, chrcconsistgrp, startrcrelationship, stoprcrelationship, switchrcrelationship, chrcrelationship, and chpartnership.
- Service (users that perform service maintenance and other hardware tasks on the cluster): All svcinfo commands and the following svctask commands: applysoftware, setlocale, addnode, rmnode, cherrstate, writesernum, detectmdisk, includemdisk, clearerrlog, cleardumps, settimezone, stopcluster, startstats, stopstats, and settime.
- Monitor: All svcinfo commands and the following svctask commands: finderr, dumperrlog, dumpinternallog, and chcurrentuser; and the svcconfig command: backup.
The superuser user is a built-in account that has the Security Admin user role permissions. You cannot change permissions or delete this superuser account; you can only change the password. You can also change this password manually on the front panels of the cluster nodes. An audit log keeps track of actions that are issued through the management GUI or the command-line interface. For more information about this topic, see 10.14.9, Audit log information on page 783.
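The role model in Table 10-1 can be sketched as a simple authorization check. This is our own illustration using only a subset of the commands from the table, not a complete or authoritative list:

```python
# Sketch: role-based command authorization, using a subset of the
# svctask commands listed in Table 10-1 for each role.

ROLE_COMMANDS = {
    "Monitor": {"finderr", "dumperrlog", "dumpinternallog", "chcurrentuser"},
    "Service": {"applysoftware", "addnode", "rmnode", "stopcluster", "settime"},
    "Copy Operator": {"startfcmap", "stopfcmap", "startrcrelationship", "chpartnership"},
}

def is_authorized(role: str, command: str) -> bool:
    if role == "Security Admin":
        return True  # Security Admin may run all commands
    return command in ROLE_COMMANDS.get(role, set())

print(is_authorized("Service", "stopcluster"))    # True
print(is_authorized("Monitor", "stopcluster"))    # False
print(is_authorized("Security Admin", "mkuser"))  # True
```

In the real product, svcinfo (view) commands are additionally open to every role; only the svctask (change) commands are partitioned as the table shows.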
User name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The user name can be between one and 128 characters in length.
From this window, you can change the authentication mode and local credentials.

Authentication Mode. There are two types of authentication available in this section:
- Local: The authentication method is located on the system. Users must be part of a user group that authorizes them to specific sets of operations. If you select this type of authentication, use the drop-down list to select the user group (Table 10-1 on page 771) that you want the user to be part of.
- Remote: Remote authentication allows users of SAN management applications, such as IBM Tivoli Storage Productivity Center, to authenticate to the cluster using the authentication service provided by the SAN management application. Ensure that the remote authentication service is configured for the SAN management application.
To complete this task, you need the following information regarding the remote authentication service:
- The web address for the remote authentication service.
- The user name and password for HTTP basic authentication. These credentials are created by and obtained from the administrator of the remote authentication service.

Local Credentials. There are two types of local credentials that can be configured in this section, depending on your needs:
- GUI authentication: The password authenticates users to the management GUI. Enter the password in the Password field.
  Password: The password can be between 6 and 64 characters in length, and it cannot begin or end with a space.
- CLI authentication: The SSH key authenticates users to the command-line interface. The SSH public key needs to be uploaded using the Browse... button in the SSH Public Key field.

6. To confirm the changes, click OK (see Figure 10-352 on page 774).
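The password rule quoted above (6 to 64 characters, no leading or trailing space) can be checked with a short sketch of our own:

```python
# Sketch: validate a GUI password against the stated rules:
# 6 to 64 characters, and it cannot begin or end with a space.

def is_valid_password(password: str) -> bool:
    if not 6 <= len(password) <= 64:
        return False
    return password == password.strip(" ")  # no leading or trailing spaces

print(is_valid_password("s3cret!"))  # True
print(is_valid_password(" secret"))  # False (leading space)
print(is_valid_password("abc"))      # False (too short)
```

Note that spaces inside the password are acceptable; only a space at either end disqualifies it.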
4. The Warning window opens (Figure 10-354). Click OK to complete the operation.
4. The Warning window opens (Figure 10-356). Click OK to complete the operation.
4. The Delete User window opens (Figure 10-358). Click Delete to complete the operation.
Enter a name for the group in the Group Name field.

Group name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The group name can be between one and 63 characters in length.

Role section
Select one of the roles Monitor, Copy Operator, Service, Administrator, or Security Administrator. See Table 10-1 on page 771 for more information about these roles.

Remote authentication section
Select this option if you want to enable remote authentication for the group.
Note: Remote authentication allows users of SAN management applications, such as IBM Tivoli Storage Productivity Center, to authenticate to the cluster using the authentication service provided by the SAN management application.
4. To create the group, click Create (Figure 10-360 on page 779).
5. You can verify the creation in the Users panel (Figure 10-361).
From this window, you can change the role and remote authentication:

Role
Select one of the roles Monitor, Copy Operator, Service, Administrator, or Security Administrator. See Table 10-1 on page 771 for more information about these roles.

Remote Authentication
Select this option if you want to enable remote authentication for the group.

Note: Remote authentication allows users of SAN management applications, such as IBM Tivoli Storage Productivity Center, to authenticate to the cluster using the authentication service provided by the SAN management application.
4. There are two options: If you do not have any users in this group, the Delete User Group window opens as shown in Figure 10-358 on page 778. Click Delete to complete the operation.
If you have users in this group, the Delete User Group window opens as shown in Figure 10-366 on page 783. The users of this group will be moved to the Monitor user group.
Time filtering
There are two ways to perform time filtering: by selecting a start date and time and an end date and time, or by selecting an event and showing the entries within a certain period of time of this event. In this section we demonstrate both methods. By selecting a start date and time and an end date and time To use this time frame filter, perform the following steps: Click Filter by Date in the Actions menu (Figure 10-368).
Tip: You can also access the Filter by Date action by right-clicking an entry.
The Date/Time Filter window opens (Figure 10-369). From this window, select a start date and time and an end date and time.
Click Filter and Close. Your panel is now filtered based on its time frame. To disable this time frame filter, click Reset Date Filter in the Actions menu (Figure 10-370).
By selecting an entry and showing the entries within a certain period of time of this event To use this time frame filter, perform the following steps: Select an entry in the table. In the Actions menu, click Show entries within... and select minutes, hours, or days, and finally select a value (Figure 10-371).
Tip: You can also access the Show entries within... action by right-clicking an entry. Your panel is now filtered based on the time frame (Figure 10-327 on page 760).
To disable this time frame filter, click Reset Date Filter in the Actions menu (Figure 10-373).
10.15 Configuration
In this section we describe how to configure different aspects of the SVC.
Management IP addresses
In this section, we discuss the modification of management IP addresses. Management IP addresses can be defined for the system, which supports one to four IP addresses. You can assign these addresses to two Ethernet ports and their backup ports. Multiple ports and IP addresses provide redundancy for the system in the event of connection interruptions. At any point in time, the system has an active management interface.

Ethernet Port 1 must always be configured; the use of Port 2 is optional. Configuring both ports provides redundancy for the Ethernet connections. If you have configured both ports and you cannot connect through one IP address, attempt to access the system through the alternate IP address. Both IPv4 and IPv6 address formats are supported. Ethernet ports can have either IPv4 addresses or IPv6 addresses, or both.

Important: If you specify a new cluster IP address, the existing communication with the cluster through the GUI is lost. You need to relaunch the SAN Volume Controller Application from the GUI Welcome panel, using the new IP address to reconnect to the management GUI. When you reconnect, accept the new site certificate.

Modifying the IP address of the cluster, although quite simple, requires reconfiguration of other items within the SVC environment, including reconfiguring the central administration GUI by adding the cluster again with its new IP address. Perform the following steps to modify the cluster IP addresses of our SVC configuration:
1. From the SVC Welcome panel, select Configuration and then Network.
2. In the left column, select Management IP Addresses.
3. The Management IP Addresses window opens (Figure 10-374 on page 787).
4. Click a port to configure the cluster's management IP address. Notice that you can configure both ports on the SVC node (Figure 10-375).
5. Depending on whether you select to configure an IPv4 or IPv6 cluster, there is different information to enter.
For IPv4:
- Type an IPv4 address in the IP Address field.
- Type an IPv4 gateway in the Gateway field.
- Type an IPv4 subnet mask in the Subnet Mask field.
For IPv6:
- Select the Show IPv6 button.
- Type an IPv6 prefix in the IPv6 Network Prefix field. The Prefix field can have a value of 0 to 127.
- Type an IPv6 address in the IP Address field.
- Type an IPv6 gateway in the Gateway field.
6. After the information is filled in, click OK to confirm the modification (Figure 10-375 on page 787).
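The address rules above (a valid IPv4 or IPv6 address, a gateway of the same IP version, and an IPv6 prefix between 0 and 127) can be pre-checked before typing them into the window. A small sketch using Python's standard ipaddress module; the helper name and return value are ours, not an SVC API:

```python
import ipaddress

def check_management_address(ip, gateway, prefix=None):
    """Pre-check values for the Management IP Addresses window.

    Returns the IP version (4 or 6) on success; raises ValueError on
    malformed input, mismatched versions, or a bad IPv6 prefix.
    """
    addr = ipaddress.ip_address(ip)       # raises ValueError if malformed
    gw = ipaddress.ip_address(gateway)
    if addr.version != gw.version:
        raise ValueError("address and gateway must be the same IP version")
    # Per the text above, the IPv6 network prefix must be 0 to 127.
    if addr.version == 6 and (prefix is None or not 0 <= prefix <= 127):
        raise ValueError("IPv6 network prefix must be between 0 and 127")
    return addr.version

print(check_management_address("10.18.229.81", "10.18.229.1"))      # 4
print(check_management_address("2001:db8::10", "2001:db8::1", 64))  # 6
```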
3. Select one node, and then click the port to which you want to assign a service IP address (Figure 10-377 on page 789).
4. Depending on whether you installed an IPv4 or IPv6 cluster, there is different information to enter.
For IPv4:
- Type an IPv4 address in the IP Address field.
- Type an IPv4 gateway in the Gateway field.
- Type an IPv4 subnet mask in the Subnet Mask field.
For IPv6:
- Select the Show IPv6 button.
- Type an IPv6 prefix in the IPv6 Network Prefix field. The Prefix field can have a value of 0 to 127.
- Type an IPv6 address in the IP Address field.
- Type an IPv6 gateway in the Gateway field.
5. After the information is filled in, click OK to confirm the modification (Figure 10-378).
The following parameters can be updated:

Cluster Name
It is important to set the cluster name correctly because it is part of the iSCSI qualified name (IQN) for the node.

Important: If you change the name of the cluster after iSCSI is configured, iSCSI hosts might need to be reconfigured.

To change the cluster name, click the cluster name and specify the new name.

Cluster name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The cluster name can be between one and 63 characters in length.

iSCSI Ethernet Ports
iSCSI configuration can be set for each Ethernet port. Perform the following steps to change an iSCSI IP: Click a port and, depending on whether you installed an IPv4 or IPv6 cluster, enter the appropriate information. For IPv4: enter an IP address, a gateway, and a subnet mask. For IPv6: enter an IP prefix, an IP address, and a gateway.
After the information is filled in, click OK to confirm the modification.

Important: When reconfiguring IP ports, be aware that already configured iSCSI connections must be reconnected if changes are made to the IP addresses of the nodes.
iSCSI Aliases
An iSCSI alias is a user-defined name that identifies the node to the host. To change an iSCSI alias, click the alias and specify a name for it. Each node has a unique iSCSI name associated with two IP addresses. After the host has initiated the iSCSI connection to a target node, this IQN from the target node will be visible in the iSCSI configuration tool on the host.

iSNS and CHAP
You can specify the IP address for the iSCSI Storage Name Service (iSNS). Host systems use the iSNS server to manage iSCSI targets and for iSCSI discovery. You can also enable CHAP to authenticate the system and iSCSI-attached hosts with the specified shared secret. The CHAP secret is the authentication method that is used to restrict access for other iSCSI hosts that use the same connection. You can set the CHAP secret for the whole cluster under the cluster properties, or for each host definition. The CHAP secret must be identical on the server and in the cluster/host definition. You can create an iSCSI host definition without using CHAP.
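Because the cluster and node names are embedded in the node IQN, renaming either can force iSCSI hosts to be reconfigured, as noted above. The sketch below illustrates the general shape of an SVC node IQN; treat the exact format as an assumption and confirm the real IQN for your nodes with the GUI or CLI on your own system:

```python
def svc_node_iqn(cluster_name, node_name):
    """Illustrative shape of an SVC node IQN.

    ASSUMPTION: the iqn.1986-03.com.ibm:2145.<cluster>.<node> layout
    (lowercased names) is shown for illustration only; verify the
    actual IQN on your own cluster before configuring hosts.
    """
    return "iqn.1986-03.com.ibm:2145.{}.{}".format(
        cluster_name.lower(), node_name.lower())

print(svc_node_iqn("SVCCluster1", "Node1"))
# iqn.1986-03.com.ibm:2145.svccluster1.node1
```

This makes the reconfiguration warning concrete: changing the cluster name changes every node IQN, so every host session targeting the old IQNs must be redone.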
4. A wizard appears (Figure 10-382). You must enter contact information to enable IBM Support personnel to contact this person to assist with problem resolution (Contact Name, Email reply Address, Machine Location and Phone numbers). Ensure that all contact information is valid, then click Next.
5. On the next page (Figure 10-383), configure at least one email server that is used by your site and optionally enable inventory reporting. Enter a valid IP address and a server port for each server added. Ensure that the email servers are valid. Inventory reports allow IBM service personnel to proactively notify you of any known issues with your system. To activate it, enable the inventory reporting and choose a reporting interval in this window.
6. Next (Figure 10-384), you can configure email addresses to receive notifications. It is advisable to configure an email address belonging to a support user with the error event notification type enabled, to notify IBM service personnel if an error condition occurs on your system. Ensure that all email addresses are valid.
7. The last window (Figure 10-385 on page 794) displays a summary of your Email Event Notification wizard. Click Finish to complete the setup.
8. The wizard is now closed. Additional information has been added to the panel, as shown in Figure 10-386. You can edit or disable email notification from this window.
Server port
The remote port number for the SNMP server. The remote port number must be a value between 1 and 65535.

Community
The SNMP community is the name of the group to which devices and management stations that run SNMP belong.

Event notifications
- Select Error if you want the user to receive messages about problems, such as hardware failures, that must be resolved immediately.
  Important: Navigate to Recommended Actions to run fix procedures on these notifications.
- Select Warning if you want the user to receive messages about problems and unexpected conditions. Investigate the cause immediately to determine any corrective action.
  Important: Navigate to Recommended Actions to run fix procedures on these notifications.
- Select Info if you want the user to receive messages about expected events. No action is required for these events.
To add another SNMP server or to remove one, use the corresponding add and remove buttons.
Syslog notifications
The syslog protocol is a standard protocol for forwarding log messages from a sender to a receiver on an IP network. The IP network can be either IPv4 or IPv6. The system can send syslog messages that notify personnel about an event.
Chapter 10. SAN Volume Controller operations using the GUI
You can configure a syslog server to receive log messages from various systems and store them in a central repository by entering the following information (see Figure 10-388):

IP Address
The IP address of the syslog server.

Facility
The facility determines the format for the syslog messages and can be used to determine the source of the message.

Message format
The message format depends on the facility. The system can transmit syslog messages in two formats: the concise message format provides standard detail about the event, and the expanded format provides more details about the event.

Event notifications
- Select Error if you want the user to receive messages about problems, such as hardware failures, that must be resolved immediately.
  Important: Navigate to Recommended Actions to run fix procedures on these notifications.
- Select Warning if you want the user to receive messages about problems and unexpected conditions. Investigate the cause immediately to determine any corrective action.
  Important: Navigate to Recommended Actions to run fix procedures on these notifications.
- Select Info if you want the user to receive messages about expected events. No action is required for these events.
To add another syslog server or to remove one, use the corresponding add and remove buttons.
The syslog messages can be sent in either compact message format or expanded message format. Example 10-1 on page 797 shows a compact format syslog message.
IBM2145 #NotificationType=Error #ErrorID=077001 #ErrorCode=1070 #Description=Node CPU fan failed #ClusterName=SVCCluster1 #Timestamp=Wed Jul 02 08:00:00 2008 BST #ObjectType=Node #ObjectName=Node1 #CopyID=0 #ErrorSequenceNumber=100

Example 10-2 shows an expanded format syslog message.
Example 10-2 Full format syslog message example
IBM2145 #NotificationType=Error #ErrorID=077001 #ErrorCode=1070 #Description=Node CPU fan failed #ClusterName=SVCCluster1 #Timestamp=Wed Jul 02 08:00:00 2008 BST #ObjectType=Node #ObjectName=Node1 #CopyID=0 #ErrorSequenceNumber=100 #ObjectID=2 #NodeID=2 #MachineType=21454F2 #SerialNumber=1234567 #SoftwareVersion=5.1.0.0 (build 8.14.0805280000) #FRU=fan 24P1118, system board 24P1234 #AdditionalData(0->63)=0000000021000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 #AdditionalData(64-127)=0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
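The #Key=Value layout of these notifications lends itself to simple post-processing at the syslog receiver. A minimal, unofficial parser sketch for the message shapes shown in Examples 10-1 and 10-2 (the helper name is ours):

```python
def parse_svc_syslog(message):
    """Split an SVC syslog notification (concise or expanded format)
    into a dict of its #Key=Value fields."""
    fields = {}
    # The product prefix ("IBM2145 ") precedes the first '#'.
    for chunk in message.split("#")[1:]:
        if "=" in chunk:
            key, _, value = chunk.partition("=")
            fields[key.strip()] = value.strip()
    return fields

msg = ("IBM2145 #NotificationType=Error #ErrorID=077001 #ErrorCode=1070 "
       "#Description=Node CPU fan failed #ClusterName=SVCCluster1")
parsed = parse_svc_syslog(msg)
print(parsed["ErrorCode"])      # 1070
print(parsed["Description"])    # Node CPU fan failed
```

Because the expanded format only adds more #Key=Value pairs, the same parser handles both formats.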
3. From this panel, you can modify the following settings:

The time zone
Select a time zone for your cluster using the drop-down list.

The date and time
Two options are available: If you are not using a Network Time Protocol (NTP) server, select the Set Date and Time button, and then manually enter the date and the time for your cluster, as shown in Figure 10-390. You can also use the Use Browser Setting button to automatically adjust the date and time of your SVC cluster to your local workstation date and time.
If you are using a Network Time Protocol (NTP) server, select the Set NTP Server IP Address button and then enter the IP address of the NTP server as shown in Figure 10-391.
10.15.10 Licensing
Perform the following steps to configure licensing settings: 1. From the SVC Welcome panel, select Configuration and then Advanced. 2. In the left column, select Licensing (Figure 10-392 on page 799).
3. Set the licensing values for the IBM System Storage SAN Volume Controller for the following elements:

Virtualization Limit
Enter the capacity of the storage that will be virtualized by this cluster.

FlashCopy Limit
Enter the capacity that is available for FlashCopy mappings.
Important: The used capacity for FlashCopy mapping is the sum of all of the volumes that are the source volumes of a FlashCopy mapping.

Global and Metro Mirror Limit
Enter the capacity that is available for Metro Mirror and Global Mirror relationships.
Important: The used capacity for Global Mirror and Metro Mirror is the sum of the capacities of all of the volumes that are in a Metro Mirror or Global Mirror relationship; both master and auxiliary volumes are counted.
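The used-capacity rules in the Important notes above can be expressed directly. A small illustration, with made-up volume sizes and helper names of our own, of how the FlashCopy and Mirror used capacities are counted against the licensed limits:

```python
# Per the rules above:
#   FlashCopy used capacity      = sum of the FlashCopy *source* volumes
#   Metro/Global Mirror capacity = sum of ALL volumes in a relationship,
#                                  master AND auxiliary

def flashcopy_used_gb(mappings):
    # each mapping: {"source_gb": ..., "target_gb": ...};
    # only the source side counts against the FlashCopy license
    return sum(m["source_gb"] for m in mappings)

def mirror_used_gb(relationships):
    # each relationship: {"master_gb": ..., "aux_gb": ...};
    # both sides count against the Mirror license
    return sum(r["master_gb"] + r["aux_gb"] for r in relationships)

maps = [{"source_gb": 100, "target_gb": 100},
        {"source_gb": 50, "target_gb": 50}]
rels = [{"master_gb": 200, "aux_gb": 200}]
print(flashcopy_used_gb(maps))  # 150 (targets are not counted)
print(mirror_used_gb(rels))     # 400 (master + auxiliary)
```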
3. From here you can configure the following elements:

Refresh GUI Objects
This action causes the GUI to refresh all of its views. It clears the GUI cache, and the GUI looks up every object again.
Important: This is a support-only action button.

Restore Default Browser Preferences
This action deletes all GUI preferences that are stored in the browser and restores the default preferences.

Table Selection
If selected, this action shows Select/Deselect All in each table in the cluster (Figure 10-394).
Navigation If selected, this action shows navigation as tabs when not in low graphics mode (Figure 10-395 on page 801).
2. Log in with your superuser password; the SVC management home page displays. From there, go to the Configuration menu (Figure 10-397) and click Advanced.
3. In the Advanced menu, click the Upgrade Software item; the window shown in Figure 10-398 on page 804 will display.
From the window shown in Figure 10-398, you can click the following buttons: Check for updates: Use this to check, on the IBM website, whether there is an SVC software version available that is newer than the version you have installed in your SVC. You need an Internet connection to perform this check. Launch Upgrade Wizard: Use this to launch the software upgrade process. 4. Click Launch Upgrade Wizard to start the upgrade process; you will be redirected to the window shown in Figure 10-399.
From the window shown in Figure 10-399 you can download the Upgrade Test Utility from the IBM website, or you can browse and upload the Upgrade Test Utility from the location where you saved it, as shown in Figure 10-400 on page 805.
5. When the Upgrade Test Utility has been uploaded, the window shown in Figure 10-401 displays.
6. When you click Next (Figure 10-401), the Upgrade Test Utility will be applied. You will be redirected to the window shown in Figure 10-402.
7. Click Close (Figure 10-402 on page 805), and you will be redirected to the window shown in Figure 10-403. From here you can run your Upgrade Test Utility for the level you need.
8. Click Next (Figure 10-403), and you will be redirected to the window shown in Figure 10-404. At this point the Upgrade Test Utility will run. You will see the suggested actions (if any are needed) or simply the window shown in Figure 10-404.
9. Click Next (Figure 10-404) to start the SVC software upload procedure, and you will be redirected to the window shown in Figure 10-405.
From the window shown in Figure 10-405 you can download the SVC software upgrade package directly from the IBM website, or you can browse and upload the software upgrade package from the location where you saved it, as shown in Figure 10-406 on page 807.
Click Open (Figure 10-406), and you will be redirected to the windows shown in Figure 10-407 and Figure 10-408.
Figure 10-408 shows that the SVC package uploading has completed.
10. Click Next and you will be redirected to the window shown in Figure 10-409.
11. When you click Finish (Figure 10-409 on page 807), the SVC software upgrade will start and you will be redirected to the window shown in Figure 10-410.
When you click Close (Figure 10-410), the warning message shown in Figure 10-411 will be displayed.
12. When you click OK (Figure 10-411), you will have completed upgrading the SVC software. You are now redirected to the window shown in Figure 10-412.
After a few minutes the window shown in Figure 10-413 on page 809 will display, showing that the first node has been upgraded.
Now the process installs the new SVC software version on the remaining node in the cluster. You can check the upgrade status as shown in Figure 10-413.
13. After all nodes have been rebooted, you will have completed the SVC software upgrade task.
Log in with your superuser password and you will reach the Service Assistant Home page (Figure 10-415).
From the Service Assistant Home page (Figure 10-415) you can obtain an overview of your SVC cluster and the node status. You can view a detailed status and error summary and manage service actions for the current node. The current node is the node on which service-related actions are performed. The connected node displays the Service Assistant and provides the interface for working with
other nodes on the system. To manage a different node, select the radio button on the left of your node panel name, and the details for the selected node will be shown. Using the pull-down menu in the Service Assistant Home page, you can select which action you want to execute in the selected node (Figure 10-416).
As shown in Figure 10-416, for the selected node it is possible to:
- Enter Service State
- Power off
- Restart
- Reload
At this point the information window displays (Figure 10-418). Wait until the node is available, then click OK.
Now you will be returned to the Service Assistant Home Page. You will be able to see the status of the node just entered into Service State (Figure 10-419 on page 813). Also note an event code 690, which means several resources have entered a Service State.
You now have different choices in the Service Assistant Home Page pull-down menu, as shown in Figure 10-420:
- Hold in Service State
- Power off
- Restart
- Reload
At this point the information window for your action will display (Figure 10-418 on page 812). Wait until the node is available, then click OK. When the node is available, the window shown in Figure 10-423 displays.
You can see that the node is starting, and the event shown in the Error column is simply a regular message. Click Refresh until you see that your node is active and no event is displayed in the Error column. In our example, we used the Exit from Service State action from the Service Assistant Home Page, but it is also possible to exit from a Service State using the restart action.
On the next confirmation window, wait until the operation completes successfully and then click OK (Figure 10-426 on page 816).
From the Service Assistant Home Page, notice that the node that you just rebooted has disappeared (Figure 10-427). This node is still visible, in an Offline state, from the GUI or from the SVC command-line interface.
The node you just rebooted has to complete its restart before it becomes visible again. Normally, a node reboot takes about 14 minutes.
To create a support package with the last statesave, select the related option, click Create, and download. The page shown in Figure 10-429 on page 817 is displayed.
You will be asked where you want to save the support package (Figure 10-430).
To reinstall the software, the node must either be a candidate node or in service state. During the reinstallation, the node becomes unavailable. If the connected node and the current node are the same, the connection to the Service Assistant might be lost. Figure 10-433 shows the Re-install software page. On this page, clicking Check for software updates redirects you to the IBM website, where you will find any available update for the SVC software:
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1001707#_Latest_SAN_Volume_Controller%20Code
Attention: We do not detail this procedure because the reinstallation of software action must be run under the direction of IBM support. Do not try to perform this action unless guided by IBM support.
Appendix A.
Performance considerations
When designing an SVC storage infrastructure or maintaining an existing infrastructure, you need to consider many factors in terms of their potential impact on performance. These factors include dissimilar workloads competing for the same resources; overloaded resources; insufficient resources available; poor performing resources; and so on. Monitoring performance can both provide a validation that design expectations are met and identify opportunities for improvement.
SVC
The SVC cluster is scalable up to eight nodes. Performance scales nearly linearly as nodes are added to the cluster, until performance eventually becomes limited by the attached components. Although virtualization with the SVC provides significant flexibility in terms of the components used, it does not diminish the need to design the system around those components so that it can deliver the desired level of performance. Essentially, SVC performance improvements are gained by spreading the workload across a greater number of back-end resources. Eventually, however, the performance of individual resources becomes the limiting factor.
Performance monitoring
This section highlights several performance monitoring techniques.
Tip: The performance statistics files can be copied from the SVC nodes to a local drive on your workstation using pscp.exe (included with PuTTY) from an MS-DOS command line, as shown in this example:

C:\Program Files\PuTTY>pscp -unsafe -load ITSO-CLS1 admin@10.18.229.81:/dumps/iostats/* c:\statfiles

Use the -load parameter to specify the session that is defined in PuTTY. Specify the -unsafe parameter when you use wildcards. The performance statistics files are in .xml format. They can be manipulated using various tools and techniques.
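Once the .xml statistics files are on your workstation, they can be post-processed with any XML library. The element and attribute names in the sketch below are hypothetical placeholders, not the real SVC statistics schema; inspect one of your own files and adjust the tag and attribute names accordingly:

```python
import xml.etree.ElementTree as ET

# HYPOTHETICAL sample: the real SVC iostats schema differs -- open one
# of the copied .xml files to see the actual element/attribute names.
sample = """<stats>
  <vdisk id="vol0" read_ops="1200" write_ops="300"/>
  <vdisk id="vol1" read_ops="80" write_ops="20"/>
</stats>"""

root = ET.fromstring(sample)
# Total I/O operations per volume-like element in the sample.
totals = {v.get("id"): int(v.get("read_ops")) + int(v.get("write_ops"))
          for v in root.iter("vdisk")}
print(totals)  # {'vol0': 1500, 'vol1': 100}
```

The same pattern (parse, iterate over the per-object elements, aggregate) applies regardless of the exact schema, and the resulting dictionaries feed easily into a charting tool.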
An example of a tool that you can use to analyze these files is the SVC Performance Monitor (svcmon).

Note: The svcmon tool is not an officially supported tool. It is provided on an as-is basis. You can obtain this tool from the following website:
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS3177

Figure A-1 shows an example of the type of chart that you can produce using the SVC performance statistics.
Performance data collection and Tivoli Storage Productivity Center for Disk
Although you can obtain performance statistics in standard .xml files, that is not the most practical or user-friendly way to analyze SVC performance statistics. The Tivoli Storage Productivity Center (TPC) for Disk is the official and supported IBM tool used to collect and analyze SVC performance statistics. Tivoli Storage Productivity Center for Disk comes preinstalled on the System Storage Productivity Center Console and can be made available by activating the license.
For more information about using Tivoli Storage Productivity Center to monitor your storage subsystem, see Monitoring Your Storage Subsystems with TotalStorage Productivity Center, SG24-7364, which is available at the following website: http://www.redbooks.ibm.com/abstracts/sg247364.html?Open Note: Tivoli Storage Productivity Center for Disk for TPC Version 4.2.1 supports new SVC port quality statistics provided in SVC Versions 4.3 and above. Monitoring these metrics in addition to the performance metrics can help you to maintain a stable SAN environment.
Appendix B.
Terminology
In this appendix we define terms commonly used within this book that relate to the SVC and its concepts. To see the complete set of terms that are related to the SVC, refer to the Glossary section of the SVC Information Center. It is available at the following website: http://publib.boulder.ibm.com/infocenter/svc/ic/index.jsp
back-end
See front-end and back-end.
channel extender
A channel extender is a device used for long distance communication connecting other SAN fabric components. Generally, channel extenders can involve protocol conversion to asynchronous transfer mode (ATM), Internet Protocol (IP), or another long distance communication protocol.
cluster (SVC)
A cluster is a group of up to eight SVC nodes that presents a single configuration, management, and service interface to the user.
cold extent
A cold extent is a volume's extent that does not get any performance benefit if it is moved from HDD to SSD. A cold extent also refers to an extent that needs to be migrated onto HDD if it currently resides on SSD.
Consistency Group
A Consistency Group is a group of copy relationships between virtual volumes or data sets that are maintained with the same time reference so that all copies are consistent in time. A Consistency Group can be managed as a single entity.
copied
Copied is a FlashCopy state that indicates that a copy was triggered after the copy relationship was created. The copied state indicates that the copy process is complete and the target disk has no further dependence on the source disk. The time of the last trigger event is normally displayed with this status.
configuration node
While the cluster is operational, a single node in the cluster is appointed to provide configuration and service functions over the network interface. This node is termed the configuration node. This configuration node manages the information that describes the cluster configuration, and it provides a focal point for configuration commands. If the configuration node fails, another node in the cluster will assume the role.
counterpart SAN
A counterpart SAN is a non-redundant portion of a redundant SAN. A counterpart SAN provides all of the connectivity of the redundant SAN, but without the 100% redundancy. SVC nodes are typically connected to a redundant SAN made up of two counterpart SANs. A counterpart SAN is often called a SAN fabric.
disk tier
It is likely that the MDisks (LUNs) presented to the SVC cluster will have different performance attributes due to the type of disk or RAID array on which they reside. The MDisks might be on 15K RPM Fibre Channel or SAS disk, Nearline SAS or SATA, or even solid-state disks (SSDs). Thus, a storage tier attribute is assigned to each MDisk, the default being generic_hdd. With SVC 6.1, a new disk tier attribute is available for SSDs, known as generic_ssd.
Easy Tier
Easy Tier is a volume performance function within the SVC that provides automatic data placement of a volume's extents in a multitiered storage pool. The pool normally contains a mix of SSDs and HDDs. Easy Tier measures host I/O activity on the volume's extents and migrates hot extents onto the SSDs to ensure maximum performance.
evaluation mode
The evaluation mode is an Easy Tier operating mode in which the host activity on all the volume extents in a pool is measured only. No automatic extent migration is performed.
event (error)
An event is an occurrence of significance to a task or system. Events can include completion or failure of an operation, a user action, or the change in state of a process. Prior to SVC V6.1, this was known as an error.
event code
An event code is a value used to identify an event condition to a user. This value might map to one or more event IDs or to values that are presented on the service panel. This value is used to report error conditions to IBM and to provide an entry point into the service guide.
event ID
An event ID is a value that is used to identify a unique error condition detected by the 2145 cluster. An event ID is used internally in the cluster to identify the error.
excluded
The excluded condition is a status condition that describes an MDisk that the 2145 cluster has decided is no longer sufficiently reliable to be managed by the cluster. The user must issue a command to include the MDisk in the cluster-managed storage.
extent
An extent is a fixed-size unit of data that is used to manage the mapping of data between MDisks and volumes. The extent size can range from 16 MB to 8 GB.
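Since a volume is mapped as a whole number of extents, the pool's extent size fixes how many extents a volume consumes. A quick illustration of that arithmetic (the helper name is ours):

```python
import math

def extent_count(volume_gib, extent_mib):
    """Number of extents needed to map a volume of volume_gib GiB in a
    pool whose extent size is extent_mib MiB. Per the definition above,
    extent sizes range from 16 MiB to 8 GiB; a partial final extent
    still consumes a whole extent, hence the ceiling."""
    assert 16 <= extent_mib <= 8 * 1024
    return math.ceil(volume_gib * 1024 / extent_mib)

print(extent_count(100, 256))  # a 100 GiB volume in a 256 MiB-extent pool
```

Larger extent sizes reduce the mapping-table size (and raise the maximum manageable capacity) at the cost of coarser allocation granularity.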
FC port logins
FC port logins refers to the number of hosts that can see any one SVC node port. The SVC has a maximum limit on the number of Fibre Channel logins allowed per node port.
grain
A grain is the unit of data that is represented by a single bit in a FlashCopy bitmap (64 KB or 256 KB) in the SVC. It is also the unit used to extend the real size of a thin-provisioned volume (32, 64, 128, or 256 KB).
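Because one bitmap bit covers one grain, the grain size fixes the bitmap size needed to track a volume. Illustrative arithmetic only (the helper name is ours):

```python
def flashcopy_bitmap_bits(volume_gib, grain_kib=256):
    """Bits needed in a FlashCopy bitmap to track a volume, at one bit
    per grain; per the definition above, the grain is 64 or 256 KiB."""
    assert grain_kib in (64, 256)
    return volume_gib * 1024 * 1024 // grain_kib

print(flashcopy_bitmap_bits(1))      # 1 GiB volume, 256 KiB grain
print(flashcopy_bitmap_bits(1, 64))  # same volume, 64 KiB grain
```

The smaller 64 KiB grain tracks changes at finer granularity but needs four times as many bitmap bits for the same volume.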
host ID
A host ID is a numeric identifier assigned to a group of host FC ports or iSCSI host names for the purposes of LUN mapping. For each host ID, there is a separate mapping of SCSI IDs to volumes. The intent is to have a one-to-one relationship between hosts and host IDs, although this relationship cannot be policed.
host mapping
Host mapping refers to the process of controlling which hosts have access to specific volumes within a cluster (it is equivalent to LUN masking). Prior to SVC V6.1, this was known as VDisk-to-Host mapping.
hot extent
A hot extent is a volume's extent that gets a performance benefit if it is moved from HDD onto SSD.
internal storage
Internal storage refers to an array of managed disks (MDisks) and drives that are held in enclosures and in nodes that are part of the SVC cluster.
image mode
Image mode is an access mode that establishes a one-to-one mapping of extents in an existing LUN or (image mode) MDisk with the extents in a volume. The last MDisk extent can be partially used if the size of the volume is not an exact multiple of the extent size.
I/O group
Each pair of SVC nodes is known as an input/output (I/O) group. An I/O group has a set of volumes associated with it that are presented to host systems. Each SVC node is associated with exactly one I/O group. The nodes in an I/O group provide a failover, failback function for each other.
ISL hop
An inter-switch link (ISL) is a connection between two switches and is counted as one ISL hop. The number of hops is always counted on the shortest route between two N-ports (device connections). In an SVC environment, the number of ISL hops is counted on the shortest route between the pair of nodes that are farthest apart. SVC guidelines specify a maximum hop count for certain fabric paths.
local fabric
The local fabric is composed of SAN components (switches, cables, and so on) that connect the components (nodes, hosts, and switches) of the local cluster together.
LU and LUN
LUN is formally defined by the SCSI standards as a logical unit number. Informally, the term is used to refer to an entity that exhibits disk-like behavior, for example, a volume or an MDisk.
mirrored volume
A mirrored volume is a single virtual volume that has two physical volume copies. The primary physical copy is known within the SVC as copy 0, and the secondary copy is known within the SVC as copy 1.
node
A node is a single processing unit that provides virtualization, cache, and copy services for the cluster. SVC nodes are deployed in pairs called I/O groups. One node in the cluster is designated the configuration node.
oversubscription
The term oversubscription refers to the ratio of the sum of the traffic on the initiator N-port connections to the traffic on the most heavily loaded ISLs, where more than one connection is used between switches. Oversubscription assumes a symmetrical network and a specific workload that is applied equally from all initiators and sent equally to all targets. A symmetrical network means that all the initiators are connected at the same level, and all the controllers are connected at the same level.
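Under the symmetrical-workload assumption above, the ratio reduces to total initiator bandwidth divided by aggregate ISL bandwidth between the two switches. The sketch below is a simplified illustration of that approximation, not an SVC tool:

```python
def oversubscription_ratio(initiator_gbps, isl_count, isl_gbps):
    """Symmetric-workload approximation: total initiator bandwidth
    divided by aggregate ISL bandwidth between the two switches."""
    return sum(initiator_gbps) / (isl_count * isl_gbps)

# Hypothetical fabric: 16 hosts at 4 Gbps funnelled through 4 ISLs at 8 Gbps.
ratio = oversubscription_ratio([4.0] * 16, isl_count=4, isl_gbps=8.0)
print(f"oversubscription = {ratio:.1f}:1")  # 64 Gbps over 32 Gbps
```

A ratio above 1:1 means the ISLs can become a bottleneck if all initiators drive traffic at line rate simultaneously; real fabrics are rarely symmetrical, so this figure is a planning heuristic rather than a measured value.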
preparing phase
Before you start the FlashCopy process, you must prepare a FlashCopy mapping. The preparing phase flushes a volume's data from the cache in preparation for the FlashCopy operation.
RAS
RAS stands for reliability, availability, and serviceability.
RAID
RAID stands for a redundant array of independent disks, which is a collection of two or more physical disk drives that present to the host an image of one or more logical disk drives. The most common RAID levels are 0, 1, 5, 6, and 10.
RAID 0
RAID 0 is a data striping technique used across an array. It includes no data protection.
RAID 1
RAID 1 is a mirroring technique used on a storage array in which two or more identical copies of data are maintained on separate mirrored disks.
RAID 10
RAID 10 is a combination of RAID 0 striping and RAID 1 mirroring: the stripe is mirrored, so two identical copies of striped data exist, and there is no parity.
RAID 5
RAID 5 is an array that has a data stripe with the equivalent of a single logical parity drive. The parity check data is distributed across all of the array's disks.
RAID 6
RAID 6 is a form of RAID that has two logical parity drives per stripe and therefore can continue to process read and write requests to all of the array's virtual disks in the presence of two concurrent disk failures.
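The trade-off among these RAID levels shows up in usable capacity. The sketch below applies the standard textbook formulas for an array of n equal-size drives; real arrays lose slightly more to metadata, so treat the figures as approximations:

```python
def usable_capacity(level: str, n_drives: int, drive_tb: float) -> float:
    """Approximate usable capacity in TB for an n-drive array of equal-size drives."""
    data_drives = {
        "0": n_drives,        # striping only, no protection
        "1": n_drives // 2,   # mirrored pairs keep half
        "10": n_drives // 2,  # mirrored stripes also keep half
        "5": n_drives - 1,    # one drive's worth of parity per stripe
        "6": n_drives - 2,    # two drives' worth of parity per stripe
    }[level]
    return data_drives * drive_tb

for level in ("0", "1", "5", "6", "10"):
    print(f"RAID {level:>2}: {usable_capacity(level, 8, 1.0):.0f} TB usable of 8 TB raw")
```

For an eight-drive array, RAID 0 keeps all 8 drives' worth of data, RAID 5 keeps 7, RAID 6 keeps 6, and RAID 1 and RAID 10 keep 4; the protection gained at each step is paid for in capacity.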
redundant SAN
A redundant SAN is a SAN configuration in which there is no single point of failure (SPoF): no matter what component fails, data traffic continues. Connectivity between the devices within the SAN is maintained, although possibly with degraded performance, when an error occurs. A redundant SAN design is normally achieved by splitting the SAN into two independent counterpart SANs (two SAN fabrics), so that if one path of the counterpart SAN is destroyed, the other counterpart SAN path keeps functioning.
remote fabric
The remote fabric is composed of SAN components (switches, cables, and so on) that connect the components (nodes, hosts, and switches) of the remote cluster together. There can be significant distances between the components in the local cluster and those components in the remote cluster.
SAN
SAN stands for storage area network.
SCSI
SCSI stands for Small Computer Systems Interface.
volume
A volume is an SVC logical device that appears to host systems attached to the SAN as a SCSI disk. Each volume is associated with exactly one I/O group and has a preferred node within that I/O group. Prior to SVC 6.1, this was known as a VDisk (virtual disk).
volume copy
A volume copy is a physical copy of the data that is stored on a volume. Mirrored volumes have two such copies. Non-mirrored volumes have one copy.
Related publications
The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this book.
Other publications
These publications are also relevant as further information sources:
IBM System Storage SAN Volume Controller: Planning Guide, GA32-0551
IBM System Storage Open Software Family SAN Volume Controller: Planning Guide, GA22-1052
IBM System Storage SAN Volume Controller: Service Guide, GC26-7901
IBM System Storage SAN Volume Controller Model 2145-8A4 Hardware Installation Guide, GC27-2219
IBM System Storage SAN Volume Controller Model 2145-8G4 Hardware Installation Guide, GC27-2220
IBM System Storage SAN Volume Controller Models 2145-8F2 and 2145-8F4 Hardware Installation Guide, GC27-2221
IBM SAN Volume Controller Software Installation and Configuration Guide, GC27-2286
IBM System Storage SAN Volume Controller Command-Line Interface User's Guide, GC27-2287
IBM System Storage Master Console: Installation and User's Guide, GC30-4090
Multipath Subsystem Device Driver User's Guide, GC52-1309
IBM System Storage SAN Volume Controller Model 2145-CF8 Hardware Installation Guide, GC52-1356
IBM System Storage Productivity Center Software Installation and User's Guide, SC23-8823
IBM System Storage Productivity Center Introduction and Planning Guide, SC23-8824
Subsystem Device Driver User's Guide for the IBM TotalStorage Enterprise Storage Server and the IBM System Storage SAN Volume Controller, SC26-7540
IBM System Storage Open Software Family SAN Volume Controller: Installation Guide, SC26-7541
IBM System Storage Open Software Family SAN Volume Controller: Service Guide, SC26-7542
IBM System Storage Open Software Family SAN Volume Controller: Configuration Guide, SC26-7543
IBM System Storage Open Software Family SAN Volume Controller: Command-Line Interface User's Guide, SC26-7544
IBM System Storage Open Software Family SAN Volume Controller: CIM Agent Developer's Reference, SC26-7545
IBM System Storage Open Software Family SAN Volume Controller: Host Attachment Guide, SC26-7563
Command-Line Interface User's Guide, SC27-2287
IBM System Storage Productivity Center User's Guide Version 1 Release 4, SC27-2336
IBM TotalStorage Multipath Subsystem Device Driver User's Guide, SC30-4096
IBM System Storage SAN Volume Controller V5.1.0 - Host Attachment Guide, SG26-7905
IBM Tivoli Storage Productivity Center / IBM Tivoli Storage Productivity Center for Replication Installation and Configuration Guide, SC27-2337
Online resources
These websites are also relevant as further information sources:
IBM TotalStorage home page:
http://www.storage.ibm.com
SAN Volume Controller supported platforms:
http://www-1.ibm.com/servers/storage/support/software/sanvc/index.html
Download site for Windows Secure Shell (SSH) freeware:
http://www.chiark.greenend.org.uk/~sgtatham/putty
IBM site to download SSH for AIX:
http://oss.software.ibm.com/developerworks/projects/openssh
Open source site for SSH for Windows and Mac:
http://www.openssh.com/windows.html
Cygwin Linux-like environment for Windows:
http://www.cygwin.com
IBM Tivoli Storage Area Network Manager site:
http://www-306.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageAreaNetworkManager.html
Microsoft Knowledge Base Article 131658:
http://support.microsoft.com/support/kb/articles/Q131/6/58.asp
Microsoft Knowledge Base Article 149927:
http://support.microsoft.com/support/kb/articles/Q149/9/27.asp
Sysinternals home page:
http://www.sysinternals.com
Subsystem Device Driver download site:
http://www-1.ibm.com/servers/storage/support/software/sdd/index.html
IBM TotalStorage Virtualization home page:
http://www-1.ibm.com/servers/storage/software/virtualization/index.html
SVC support page:
http://www-947.ibm.com/systems/support/supportsite.wss/selectproduct?taskind=4&brandind=5000033&familyind=5329743&typeind=0&modelind=0&osind=0&psid=sr&continue.x=1
SVC online documentation:
http://publib.boulder.ibm.com/infocenter/svcic/v3r1m0/index.jsp
IBM Redbooks publications about SVC:
http://www.redbooks.ibm.com/cgi-bin/searchsite.cgi?query=SVC
Index
A
abends 571 abends dump 571 active quorum disk 36 add a node 495 add additional ports 621 add an HBA 455 Add SSH Public Key 125 administration tasks 608 Advanced Copy Services 84 Advanced Settings 613, 773, 775 AIX host system 158 AIX specific information 149 AIX toolbox 158 AIX-based hosts 149 alias 29 alias string 145 aliases 29 analysis 91, 824 application server guidelines 83 application testing 365 assign VDisks 473 assigned VDisk 154 asynchronous notifications 386387 asynchronous remote 413 asynchronous remote copy 34, 390, 413414 asynchronous replication 431 asynchronously 413 audit log 41 authentication 42, 128, 147 authentication service 45 automate tasks 482 automatic Linux system 201 automatic update process 201 automatically discover 442 automation 481 auxiliary 420, 532, 554 auxiliary VDisk 414, 421, 428 available managed disks 444 boss node 36 bottleneck 49 bottlenecks 9192 budget 28 budget allowance 28 business requirements 91, 824
C
cable connections 63 cable length 48 cache 38, 377, 414 caching 92 caching capability 91, 824 candidate node 495 capacity 81 capacity measurement 636 CDB 29 challenge message 32 Challenge-Handshake Authentication Protocol 32, 147, 453 change the IP addresses 490 Channel extender 830 channel extender 835 CHAP 32, 147, 453 CHAP authentication 32, 147, 616 CHAP secret 32, 147, 616 chpartnership 434 chrcconsistgrp 435 chrcrelationship 435 chunks 79, 233 CIM agent 39 CIM Client 39 CIMOM 30, 39, 147 CLI 122, 541 commands 158 scripting for SVC task automation 481 cluster 35 creation 495 IP address 104 shutting down 442, 492493, 502 time zone 490 viewing properties 487 cluster (SVC) 830 Cluster management 39 cluster nodes 35 cluster overview 35 cluster partnership 396 cluster properties 492 clustered ethernet port 148 clustered server resources 35 clusters 58 Colliding writes 415 colliding writes 416 Command Descriptor Block 29
B
back-end application 832 background copy 405, 413, 421, 428 background copy bandwidth 434 background copy progress 528, 550 background copy rate 383384 backup 364 of data with minimal impact on production 370 backup time 364 bandwidth 58, 86, 420 bandwidth impact 434 basic setup requirements 126 bind 225 boot 89
command syntax 485 COMPASS architecture 46 compression 89 concepts 7 concurrent instances 232 concurrent software upgrade 557 configurable warning capacity 27 configuration 137 configuration node 36, 148, 495, 830 configure AIX 149 configure SDD 225 configuring the GUI 108 connected 399400, 423 connected state 402, 423, 425426 connectivity 37 consistency 424 consistency freeze 402, 411, 426 Consistency Group 370, 373, 830 consistency group 371 limits 373 consistent 34, 400401, 423424 consistent data set 364 Consistent Stopped state 398, 422 Consistent Synchronized state 399, 422 ConsistentDisconnected 404, 427 ConsistentStopped 402, 426 ConsistentSynchronized 403, 426 container 79 contingency capacity 27 controller, renaming 442 conventional storage 227 cookie crumbs recovery 578 cooling 59 copied state 830 copy bandwidth 86, 434 copy operation 34 copy process 410, 436 copy rate 384 copy rate parameter 84 Copy Services managing 503 COPY_COMPLETED 386 copying state 509 counterpart SAN 93, 831, 835 CPU cycle 50 create a FlashCopy 505 create a new VDisk 634 create an SVC partnership 714 create mapping command 505, 684685, 695 create SVC partnership 522, 543 creating a VDisk 458 creating managed disk groups 595 current cluster state 36 Cygwin 190
data consistency 503 data corruption 424 data flow 67 data migration 59, 232 data migration and moving 364 data mining 365 data mover appliance 475 degraded mode 76 delete a FlashCopy 512 a host 455 a host port 457 a port 624, 627, 629, 649, 652 a VDisk 471, 646 ports 456 Delete consistency group command 513 dependent writes 372, 417 destaged 38 destructive 567 detect the new MDisks 442 detected 443 device-specific modules 165 differentiator 51 directory protocol 45 dirty bit 406, 428 disconnected 399400, 423 disconnected state 423 discovering assigned VDisk 154, 166 discovering newly assigned MDisks 594, 602 disk access profile 469 disk controller renaming 591 systems 441 viewing details 441, 590 disk internal controllers 51 disk timeout value 219 disk zone 66 Diskpart 172 display summary information 444 distance 389, 833 distance limitations 390 documentation 58, 589 DSMs 165 dump I/O statistics 570 I/O trace 569 listing 568 other nodes 570 durability 51 dynamic pathing 222223 dynamic shrinking 653 dynamic tracking 150
E
elapsed time 84 empty MDG 446 empty state 405, 428 Enterprise Storage Server (ESS) 388 entire VDisk 370 error 402, 422, 426, 445, 567
D
data backup with minimal impact on production 370 moving and migration 364 data change rates 89
842
Error Code 831 error handling 385 Error ID 831 error log 566 error notification 565 ESS (Enterprise Storage Server) 388 ESS to SVC 237 eth0 49 eth1 49 Ethernet 63 Ethernet connection 64 event 566 event log 569 events 398, 421 Excluded 831 excludes 606 Execute Metro Mirror 526, 548 expand a VDisk 171, 471 a volume 172 expand a space-efficient VDisk 471 extended distance solutions 389 Extent 832 extent 79, 228 extent level 228 extent sizes 79
Idling/Copied 381 Prepared 382 Preparing 382 Stopped 381 Suspended 382 FlashCopy mappings 373 FlashCopy properties 373 FlashCopy rate 84 flexibility 91, 824 foreground I/O latency 434 free extents 471 front-end application 832 FRU 832 Full Feature Phase 30
G
gateway IP address 105 GBICs 833 general housekeeping 589 generating output 486 generator 124 geographically dispersed 388 Global Mirror guidelines 87 Global Mirror protocol 34 Global Mirror relationship 416 Global Mirror remote copy technique 413 gminterdelaysimulation 431 gmintradelaysimulation 431 gmlinktolerance 430431 governing 28 governing rate 28 graceful manner 497 grain 374, 832 grain sizes 84 grains 84, 384 granularity 370 GUI 127
F
fabric remote 93 fabric interconnect 833 failover 222, 414 failover only 204 failover situation 390 fast fail 150 FAStT 388 FC optical distance 48 features, licensing 567 featurization log 569 Fibre Channel interfaces 48 Fibre Channel port fan in 93, 835 Fibre Channel Port Login 30 Fibre Channel port logins 832 Fibre Channel ports 63 file system 207 filtering 486, 584 filters 486 FlashCopy 34, 364 bitmap 374 how it works 365, 369 image mode disk 378 indirection layer 374 mapping 365 mapping events 379 serialization of I/O 385 synthesis 385 FlashCopy indirection layer 374 FlashCopy mapping 370 FlashCopy mapping states 381 Copying 381
H
Hardware Management Console 39 hardware nodes 46 hardware overview 46 HBA 451, 832 HBA ports 83 heartbeat signal 37 heartbeat traffic 86 help 589 high availability 35, 58 home directory 158 host and application server guidelines 83 configuration 137 creating 451 deleting 620 information 610 showing 479 systems 65 host adapter configuration settings 160 host bus adapter 451
Host ID 832 Host Type 613, 616 HP-UX support information 222223
I
I/O budget 28 I/O governing 28, 469 I/O governing rate 469 I/O Group 833 I/O group 833 renaming 498 viewing details 498 I/O pair 60 I/O per secs 58 I/O statistics dump 570 I/O trace dump 569 ICAT 3940 identical data 420 idling 403, 426 idling state 410, 436 IdlingDisconnected 404, 427 Image Mode 833 image mode 235, 671 image mode disk 378 image mode MDisk 235 image mode to image mode 264 image mode VDisk 230 image mode virtual disks 81 inappropriate zoning 74 inconsistent 400, 423 Inconsistent Copying state 399, 422 Inconsistent Stopped state 398, 422 InconsistentCopying 402, 425 InconsistentDisconnected 404, 427 InconsistentStopped 402, 425 indirection layer 374 indirection layer algorithm 375 informational error logs 386 initiator 144 initiator name 29 initiator port 613, 616 input power 493 install 57 insufficient bandwidth 384 integrity 371372 interaction with the cache 377 intercluster link 396 intercluster link bandwidth 434 intercluster link maintenance 396397, 419 intercluster Metro Mirror 389, 413 intercluster zoning 396397, 419 Internet Storage Name Service 32, 147, 833 interswitch link (ISL) 834 interval 492 intracluster Metro Mirror 389, 413 IP address modifying 489, 786 IP addresses 59, 786, 788, 792 IP subnet 64 ipconfig 133
IPv4 132 IPv6 132 IPv6 addresses 133 IQN 29, 145, 833 IQNs 29 iSCSI 28, 49, 58, 146 iSCSI Address 29 iSCSI client 144 iSCSI IP address failover 148 iSCSI Multipathing 32 iSCSI Name 29 iSCSI node 29 iSCSI Qualified Name 29, 833 iSCSI target node failover 148 ISL (interswitch link) 834 ISL hop count 389, 413 iSNS 32, 147, 833 issue CLI commands 190
J
jumbo frames 32
K
kernel level 201 key 147 key files on AIX 158
L
LAN Interfaces 49 last extent 237 latency 34, 86 LBA 405, 428 LDAP 44 license 104 licensing feature 567 licensing feature settings 567 Lightweight Directory Access Protocol 44 limiting factor 91 link errors 48 Linux 158 Linux kernel 35 Linux on Intel 200 list dump 568 listing dumps 568 Load balancing 204 Local authentication 41 local cluster 407, 430 Local fabric 833 local fabric interconnect 833 Local users 43 logged 566 Logical Block Address 405, 428 logical configuration data 573 Login Phase 30 logins 613, 616 lsrcrelationshipcandidate 435 LU 833 LUNs 833
M
magnetic disks 50 maintenance levels 160 maintenance tasks 557 Managed 833 Managed disk 833 managed disk 833 working with 590 managed disk group 447 creating 595 viewing 596 Managed Disks 833 managed mode MDisk 235 managed mode to image mode 260 managed mode virtual disk 81 management 91, 824 map a VDisk to a host 472 mapping 369 mapping events 379 Master 834 master 420 master console 59 master VDisk 421, 428 MC 834 MDG 833 MDG level 447 MDGs 59 MDisk 59, 833 adding 446, 599, 603 discovering 442, 602 including 445, 606 information 599 modes 235 name parameter 444 removing 450, 599, 604 renaming 445, 602 showing 477 showing in group 447 MDisk group creating 449, 595 deleting 450, 598 renaming 449, 597 showing 447, 478 viewing information 449 MDiskgrp 833 Metro Mirror 388 Metro Mirror consistency group 408, 410412, 434435, 437438 Metro Mirror features 390, 414 Metro Mirror process 420 Metro Mirror relationship 409410, 412, 416, 434, 436, 438 microcode 37 Microsoft Active Directory 44 Microsoft Cluster 171 Microsoft Multi Path Input Output 165 migrate 227 migrate a VDisk 230 migrate between MDGs 230 migrate data 235
migrate VDisks 474 migrating multiple extents 228 migration algorithm 233 functional overview 232 operations 228 overview 228 tips 237 migration activities 228 migration process 475 migration progress 232 migration threads 228 mirrored 414 mirrored copy 413 mkpartnership 433 mkrcconsistgrp 434 mkrcrelationship 435 MLC 49 modify a host 454 modifying a VDisk 469 mount 207 mount point 207 moving and migrating data 364 MPIO 83, 165 MSCS 171 MTU sizes 32 multi layer cell 49 multipath I/O 83 multipath storage solution 165 multipathing device driver 83 Multipathing drivers 32 multiple disk arrays 91, 824 multiple extents 228 multiple paths 32 multiple virtual machines 214
N
network bandwidth 89 Network Entity 145 Network Portals 145 Network Time Protocol 798 new mapping 472 Node 834 node 36, 494 adding 495 deleting 496 failure 385 port 832 renaming 496 shutting down 497 viewing details 494 node details 494 node dumps 570 node level 494 Node Unique ID 36 nodes 58 non-preferred path 222 non-redundant 831 non-zero contingency 27 N-port 834
NTP 798
O
offline rules 230 offload features 31 older disk systems 92 on screen content 486, 584 online help 589 on-screen content 486 OpenSSH 158 OpenSSH client 190 operating system versions 160 ordering 34, 372 organizing on-screen content 486 other node dumps 570 overall performance needs 58 Oversubscription 834 oversubscription 834 overwritten 369, 563
P
package numbering and version 557, 801 parallelism 232 partial last extent 236 partnership 430 passphrase 124 path failover 222 path failure 386 path offline 386 path offline for source VDisk 386 path offline for target VDisk 386 path offline state 386 path-selection policy algorithms 204 peak 434 peak workload 86 pended 28 per cluster 232 per managed disk 233 performance 81 performance advantage 91, 824 performance considerations 824 performance improvement 91, 824 performance monitoring tool 87 performance requirements 58 performance scalability 35 performance statistics 87 physical location 59 physical planning 59 physical rules 60 physical site 59 Physical Volume Links 223 PiT 34 PiT consistent data 364 PiT copy 374 planning rules 58 plink 482 PLOGI 30 Point in Time (PiT) technology 34 point-in-time copy 401, 425
policing 28 policy decision 406, 429 port adding 455, 621 deleting 456, 624 port binding 225 Port Mask 613, 616 port mask 8384 Power Systems 158 PPRC background copy 405, 413, 428 commands 407, 430 configuration limits 429 detailed states 402, 425 preferred access node 81 preferred path 222 pre-installation planning 58 Prepare 834 prepare (pre-trigger) FlashCopy mapping command 507 PREPARE_COMPLETED 386 preparing volumes 157 pre-trigger 507 primary 414, 532, 554 priority 475 priority setting 475 private key 122, 124, 158 production VDisk 428 provisioning 434 public key 122, 124, 158, 482 PuTTY 40, 122, 126, 494 CLI session 130 default location 124 security alert 131 PuTTY application 130, 496 PuTTY Installation 190 PuTTY Key Generator 124125 PuTTY Key Generator GUI 123 PuTTY Secure Copy 560 PuTTY session 131 PuTTY SSH client software 190 PVLinks 223
Q
QLogic HBAs 201 Quality Of Service 28 Queue Full Condition 28 quiesce 493 quorum candidates 36 Quorum Disk 36 quorum disk 36 quorum disk candidate 37
R
RAID 834 RAID controller 6566 RAMAC 50 RAS 834 real capacity 27 real-time synchronized 388
reassign the VDisk 474 recall commands 440, 485 Redbooks Web site 839 Contact us xxii redundancy 49, 86 redundant 831 Redundant SAN 835 redundant SAN 835 relationship 370, 420 relationship state diagram 398, 421 reliability 81 Reliability, Availability, and Serviceability (RAS) 834 remote 835 Remote authentication 41 remote fabric 93, 833 interconnect 833 Remote users 44 remove a disk 187 remove an MDG 450 remove WWPN definitions 456 rename a disk controller 591 rename an MDG 597, 700, 723 rename an MDisk 602, 617, 641, 699, 724 repartitioning 81 rescan disks 169 restart the cluster 494 restart the node 497 restarting 531, 553 restore points 366 Reverse FlashCopy 35, 366 RFC3720 29 rmrcconsistgrp 438 rmrcrelationship 437 round robin 81, 205, 222
S
SAN Boot Support 222, 224 SAN definitions 93 SAN fabric 65 SAN planning 63 SAN Volume Controller 835 documentation 589 general housekeeping 589 help 589 virtualization 39 SAN zoning 122 SATA 88 scalable 92, 824 SCM 51 scripting 406, 429, 481 scripts 172, 482, 829 SCSI 835 SCSI Disk 833 SCSI primitives 442 SDD 81, 83, 150, 152, 156, 224 SDD (Subsystem Device Driver) 156, 202, 224, 239 SDD Dynamic Pathing 222 SDD installation 153 SDD package version 162 SDDDSM 165
secondary 414 secondary site 58 secure session 497 Secure Shell (SSH) 122 Secure Shell connection 39 separate physical IP networks 49 sequential 81, 459 serialization 385 serialization of I/O by FlashCopy 385 Service Location Protocol 32, 147, 835 set up Metro Mirror 520, 541 SEV 470 shells 481 shrink a VDisk 653 shrinking 653 shrinkvdisksize 476 shut down 171 shut down a single node 497 shut down the cluster 493, 741 Simple Network Management Protocol 406, 429, 445 single layer cell 49 single point in time 35 single point of failure 835 single sign-on 40, 45 site 59 SLC 49 SLP 32, 147, 835 SLP daemon 32 SNIA 2 SNMP 406, 429, 445 SNMP alerts 606 SNMP manager 565 SNMP trap 386 software upgrade 557 software upgrade packages 801 Solid State Drive 35 Solid State Drives 47 solution guidelines 91 sort 588 sorting 588 source 384 space-efficient 462 Space-efficient background copy 420 space-efficient VDisk 476 space-efficient volume 476 special migration 237 Split 77 split brain 16, 36 split I/O Group 77 split per second 84 splitting the SAN 835 SPoF 835 spreading the load 81 SSD market 51 SSD solution 50 SSH 39, 482 SSH (Secure Shell) 122 SSH Client 40 SSH client 158, 190 SSH client software 122
SSH key 42 SSH keys 122, 126 SSH server 122 SSH-2 122 SSO 45 stack 234 stand-alone Metro Mirror relationship 525, 548 start (trigger) FlashCopy mapping command 508, 510, 704, 729 start a PPRC relationship command 410, 436 startrcrelationship 436 state 402, 425 connected 399, 423 consistent 400401, 423424 ConsistentDisconnected 404, 427 ConsistentStopped 402, 426 ConsistentSynchronized 403, 426 disconnected 399, 423 empty 405, 428 idling 403, 426 IdlingDisconnected 404, 427 inconsistent 400, 423 InconsistentCopying 402, 425 InconsistentDisconnected 404, 427 InconsistentStopped 402, 425 overview 398, 422 synchronized 401, 424 state fragments 400, 423 state overview 399, 429 state transitions 386, 422 states 384, 398, 421 statistics 492 statistics collection stopping 492 statistics dump 570 stop 422 stop FlashCopy consistency group 512, 708 stop FlashCopy mapping command 511 STOP_COMPLETED 386 stoprcconsistgrp 437 stoprcrelationship 436 storage cache 38 storage capacity 58 Storage Class Memory 51 stripe VDisks 91, 824 striped VDisk 459 subnet mask IP address 104 Subsystem Device Driver (SDD) 156, 202, 224, 239 Subsystem Device Driver DSM 165 SUN Solaris support information 222 superuser 488 surviving node 496 suspended mapping 511 SVC basic installation 101 task automation 481 SVC cluster partnership 407, 430 SVC cluster software 802 SVC configuration 58 SVC Console 39
SVC device 836 SVC GUI 40 SVC installations 76 SVC master console 122 SVC node 76 SVC PPRC functions 390 SVC setup 138 SVC superuser 42 svcinfo 440, 445, 485, 576 svcinfo lsfreeextents 232 svcinfo lshbaportcandidate 456 svcinfo lsmdiskextent 232 svcinfo lsmigrate 232 svcinfo lsVDisk 477 svcinfo lsVDiskextent 232 svcinfo lsVDiskmember 477 svctask 440, 445, 485, 487, 576 svctask chlicense 567 svctask finderr 562 svctask mkfcmap 407410, 430, 433436, 505, 689 switching copy direction 532, 554 switchrcconsistgrp 438 switchrcrelationship 438 symmetrical 2 symmetrical network 834 symmetrical virtualization 2 synchronized 401, 420, 424 synchronizing 420 synchronous reads 234 synchronous writes 234 synthesis 385 System Storage Productivity Center 835
T
target 144 target name 29 thin-provisioned 25 threshold level 28 tie breaker 36 tie-break situations 36 tie-breaker 36 time 490 time zone 490 timeout 219 Time-Zero (T0) copy 34 Time-Zero copy 34 Tivoli Directory Server 44 Tivoli Embedded Security Services 41, 45 Tivoli Integrated Portal 40 Tivoli Storage Productivity Center 40 Tivoli Storage Productivity Center for Data 40 Tivoli Storage Productivity Center for Disk 40 Tivoli Storage Productivity Center for Replication 40 Tivoli Storage Productivity Center Standard Edition 40 token facility 45 trace dump 569 traffic 86 traffic profile activity 58 transitions 235 trigger 508, 510
U
unallocated capacity 174 unallocated region 420 unconfigured nodes 495 undetected data corruption 424 uninterruptible power supply 63, 77, 493, 558 unmanaged MDisk 235 unmap a VDisk 474 up2date 201 updates 201 upgrade 801 upgrade precautions 557 use of Metro Mirror 405, 428 used capacity 27 used free capacity 27 using SDD 156, 202, 224
web interface 225 Windows 2000 host configuration 159, 213 Windows 2000-based hosts 159 Windows 2003 165 Windows host system CLI 190 Windows NT and 2000 specific information 159 working with managed disks 590 workload cycle 87 worldwide port name 151 Write data 38 Write ordering 424 write ordering 394, 417, 424 write through mode 77 write workload 87 write-through mode 38 WWPNs 151, 451, 456, 613, 622623
V
VDisk 605 assigning to host 472 creating 458, 462, 634 creating in image mode 462, 671 deleting 471, 646 discovering assigned 154, 166 expanding 471 I/O governing 469 image mode migration concept 235 information 460 mapped to this host 474 migrating 83, 474, 660 modifying 469 path offline for source 386 path offline for target 386 showing 599 showing for MDisk 477 showing using group 477 shrinking 475, 671 working with 458 VDisk discovery 147 VDisk-to-host mapping 474 deleting 649 Veritas Volume Manager 222 View I/O Group details 498 viewing managed disk groups 596 virtual disk 370, 458, 577, 630 Virtual Machine File System 214 virtualization 39 VLUN 833 VMFS 214215 VMFS datastore 217 Volume I/O governing 28 Volume Mirroring 77 Voting Set 36 voting set 36 vpath configured 155
X
xt 806
Y
YaST Online Update 201
Z
zero buffer 420 zero contingency 27 zero-detection algorithm 27 zone 65 zoning capabilities 66 zoning recommendation 170, 184
W
warning capacity 27 warning threshold 476