
Front cover

IBM System Storage DS8000: Copy Services in Open Environments


Configuration of Copy Services in heterogeneous environments

New IBM FlashCopy SE

TPC for Replication support

Copy Services with System i

Bert Dufrasne
Wilhelm Gardt
Jana Jamsek
Peter Kimmel
Jukka Myyrylainen
Markus Oscheka
Gerhard Pieper
Stephen West
Axel Westphal
Roland Wolf

ibm.com/redbooks

International Technical Support Organization

IBM System Storage DS8000: Copy Services in Open Environments

May 2008

SG24-6788-03

Note: Before using this information and the product it supports, read the information in Notices on page xvii.

Fourth Edition (May 2008)

This edition applies to the IBM System Storage DS8000 with Licensed Machine Code 5.30xx.xx, as announced on October 23, 2007.

© Copyright International Business Machines Corporation 2004-2008. All rights reserved.

Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents
Notices
Trademarks

Preface
The team that wrote this book
Special thanks
Become a published author
Comments welcome

Summary of changes
May 2008, Fourth Edition

Part 1. Overview

Chapter 1. Introduction
1.1 Point-in-time copy functions
1.2 Remote Mirror and Copy functions

Chapter 2. Copy Services architecture
2.1 Introduction to the Copy Services structure
2.1.1 Management console defined
2.1.2 Storage Unit defined
2.1.3 Storage Facility Image (SFI) defined
2.1.4 Storage Complex defined
2.2 The structure of Copy Services management
2.2.1 Communication path for Copy Services
2.2.2 Remote Mirror and Copy between Storage Complexes
2.2.3 Differences between the DS CLI and the DS GUI

Chapter 3. Licensing
3.1 Licenses
3.2 Authorized level
3.2.1 Licensing
3.2.2 Charging example

Part 2. Interfaces

Chapter 4. DS Storage Manager
4.1 Accessing the DS GUI
4.2 Access capabilities

Chapter 5. DS Command-Line Interface
5.1 Introduction and functionality
5.2 Supported operating systems for the DS CLI
5.3 User accounts
5.4 DS CLI profile
5.5 Command structure
5.6 Copy Services commands
5.7 Using the DS CLI application
5.7.1 Single-shot mode
5.7.2 Script command mode
5.7.3 Interactive mode
5.8 Return codes
5.9 User assistance
5.10 Usage examples

Chapter 6. IBM TotalStorage Productivity Center for Replication
6.1 IBM TotalStorage Productivity Center
6.2 Where we are coming from
6.3 What TPC for Replication provides
6.4 Copy Services terminology
6.4.1 FlashCopy
6.4.2 IBM FlashCopy SE
6.4.3 Metro Mirror
6.4.4 Global Copy
6.4.5 Global Mirror
6.4.6 Metro/Global Mirror
6.4.7 Failover/failback terminology
6.5 TPC for Replication terminology
6.6 Volumes in a copy set
6.6.1 Host volume
6.6.2 Target volume
6.6.3 Journal volume
6.6.4 Intermediate volume
6.6.5 TPC for Replication copy set
6.6.6 TPC for Replication session
6.7 TPC for Replication session types
6.7.1 TPC for Replication Basic License
6.7.2 TPC for Replication Two Site Business Continuity
6.7.3 TPC for Replication Three Site Business Continuity
6.8 TPC for Replication session states
6.9 TPC for Replication and scalability
6.10 TPC for Replication system and connectivity overview
6.11 TPC for Replication monitoring and freeze capability
6.12 TPC for Replication heartbeat
6.13 Supported platforms
6.14 Hardware requirements for TPC for Replication servers
6.15 TPC for Replication GUI
6.15.1 Connecting to the TPC for Replication GUI
6.15.2 Health Overview panel
6.15.3 Sessions panel
6.15.4 Storage Subsystems panel
6.15.5 Path Management panel
6.15.6 TPC for Replication server Configuration panel
6.15.7 Advanced Tools panel
6.15.8 Console log
6.16 Command line interface to TPC for Replication

Part 3. FlashCopy

Chapter 7. FlashCopy overview
7.1 FlashCopy operational environments
7.2 Terminology
7.3 Basic concepts
7.3.1 Full volume copy
7.3.2 Nocopy option
7.4 FlashCopy in combination with other Copy Services
7.4.1 FlashCopy and Metro Mirror
7.4.2 FlashCopy and Global Copy
7.4.3 FlashCopy and Global Mirror
7.5 FlashCopy in a DS8300 storage LPAR environment

Chapter 8. FlashCopy options
8.1 Multiple Relationship FlashCopy
8.2 Consistency Group FlashCopy
8.3 FlashCopy target as a Metro Mirror or Global Copy primary
8.4 Incremental FlashCopy refresh target volume
8.5 Remote FlashCopy
8.6 Persistent FlashCopy
8.7 Reverse restore
8.8 Fast reverse restore
8.9 FlashCopy SE
8.9.1 Multiple Relationship FlashCopy SE
8.9.2 Consistency Group FlashCopy SE
8.9.3 FlashCopy SE target as a Metro Mirror or Global Copy primary
8.9.4 Remote FlashCopy SE
8.9.5 Persistent FlashCopy SE
8.9.6 Reverse restore and fast reverse restore of FlashCopy SE relations
8.10 Options and interfaces

Chapter 9. FlashCopy interfaces
9.1 FlashCopy management interfaces: Overview
9.2 DS CLI and DS GUI: Commands and options
9.2.1 Local FlashCopy management
9.2.2 Remote FlashCopy management
9.3 Local FlashCopy using the DS CLI
9.3.1 Parameters used with local FlashCopy commands
9.3.2 Local FlashCopy commands: Examples
9.3.3 FlashCopy Consistency Groups
9.4 Remote FlashCopy using the DS CLI
9.4.1 Remote FlashCopy commands
9.4.2 Parameters used in remote FlashCopy commands
9.5 FlashCopy management using the DS GUI
9.5.1 Initiating FlashCopy using Create
9.5.2 Displaying properties of existing FlashCopy
9.5.3 Reversing existing FlashCopy
9.5.4 Initiating background copy for a persistent FlashCopy relationship
9.5.5 Resynchronizing target
9.5.6 Deleting existing FlashCopy relationship

Chapter 10. IBM FlashCopy SE
10.1 IBM FlashCopy SE overview
10.2 Space Efficient volumes
10.3 Repository for Space Efficient volumes
10.3.1 Capacity planning for FlashCopy SE
10.3.2 Creating a repository for Space Efficient volumes
10.3.3 Creation of Space Efficient volumes
10.4 Performing FlashCopy SE operations
10.4.1 Creation and resynchronization of FlashCopy SE relationships
10.4.2 Removing FlashCopy relationships and releasing space
10.4.3 Other FlashCopy SE operations
10.4.4 Working with Space Efficient volumes
10.4.5 Monitoring repository space and out-of-space conditions

Chapter 11. FlashCopy performance
11.1 FlashCopy performance overview
11.1.1 Distribution of the workload: Location of source and target volumes
11.1.2 LSS/LCU versus rank: Considerations
11.1.3 Rank characteristics
11.2 FlashCopy establish performance
11.3 Background copy performance
11.4 FlashCopy impact on applications
11.5 Performance planning for IBM FlashCopy SE
11.6 FlashCopy scenarios
11.6.1 Scenario #1: Backup to disk
11.6.2 Scenario #2: Backup to tape
11.6.3 Scenario #3: IBM FlashCopy SE
11.6.4 Scenario #4: FlashCopy during peak application activity
11.6.5 Scenario #5: Ranks reserved for FlashCopy

Chapter 12. FlashCopy examples
12.1 Creating a test system or integration system
12.1.1 One-time test system
12.1.2 Multiple setup of a test system with the same contents
12.2 Creating a backup
12.2.1 Creating a FlashCopy for backup purposes without volume copy
12.2.2 Using IBM FlashCopy SE for backup purposes
12.2.3 Incremental FlashCopy for backup purposes
12.2.4 Using a target volume to restore its contents back to the source

Part 4. Metro Mirror

Chapter 13. Metro Mirror overview
13.1 Metro Mirror overview
13.2 Metro Mirror volume state
13.3 Data consistency
13.4 Rolling disaster
13.5 Automation and management

Chapter 14. Metro Mirror options and configuration
14.1 Basic Metro Mirror operation
14.2 Open systems: Clustering
14.3 Failover and failback
14.4 Consistency Group function
14.4.1 Data consistency and dependent writes
14.4.2 Consistency Group function: How it works
14.5 Metro Mirror paths and links
14.5.1 Fibre Channel links
14.5.2 Logical paths
14.6 Bandwidth
14.7 LSS design
14.8 Distance
14.9 Symmetrical configuration
14.10 Volumes
14.11 Hardware requirements

Chapter 15. Metro Mirror performance and scalability
15.1 Performance
15.1.1 Managing the load
15.1.2 Initial synchronization
15.2 Scalability

Chapter 16. Metro Mirror interfaces and examples
16.1 Metro Mirror interfaces
16.2 Copy Services network components
16.3 DS Command-Line Interface (DS CLI)
16.4 DS Storage Manager GUI
16.5 Setting up a Metro Mirror environment using the DS CLI
16.5.1 Preparing to work with the DS CLI
16.5.2 Setup of the Metro Mirror configuration
16.5.3 Determining the available Fibre Channel links
16.5.4 Creating Metro Mirror paths
16.5.5 Creating Metro Mirror pairs
16.6 Removing Metro Mirror environment using DS CLI
16.6.1 Step 1: Remove Metro Mirror pairs
16.6.2 Step 2: Remove paths
16.7 Managing the Metro Mirror environment with the DS CLI
16.7.1 Suspending and resuming Metro Mirror data transfer
16.7.2 Adding and removing paths
16.8 Failover and Failback functions for sites switching
16.8.1 Metro Mirror Failover function
16.8.2 Metro Mirror Failback function
16.9 Freezepprc and unfreezepprc
16.10 DS Storage Manager GUI examples
16.10.1 Creating paths
16.10.2 Adding paths
16.10.3 Changing options
16.10.4 Deleting paths
16.10.5 Creating volume pairs
16.10.6 Suspending volume pairs
16.10.7 Resuming volume pairs
16.10.8 Metro Mirror Failover
16.10.9 Metro Mirror Failback

Part 5. Global Copy

Chapter 17. Global Copy overview
17.1 Global Copy overview
17.2 Volume states and change logic
17.3 Global Copy positioning

Chapter 18. Global Copy options and configuration
18.1 Global Copy basic options
18.1.1 Establishing a Global Copy pair
18.1.2 Suspending a Global Copy pair
18.1.3 Resuming a Global Copy pair
18.1.4 Terminating a Global Copy pair
18.1.5 Converting a Global Copy pair to Metro Mirror
18.2 Creating a consistent point-in-time copy
18.2.1 Procedure to take a consistent point-in-time copy
18.2.2 Making a FlashCopy at the remote site
18.3 Hardware requirements
18.4 Global Copy connectivity: Paths and links
18.4.1 Global Copy Fibre Channel links
18.4.2 Logical paths
18.5 Bandwidth
18.6 LSS design
18.7 Distance
18.8 DS8000 configuration at the remote site

Chapter 19. Global Copy interfaces and examples
19.1 Global Copy interfaces: Overview
19.2 Copy Services network components
19.3 Using DS CLI examples
19.3.1 Setting up a Global Copy environment using the DS CLI
19.3.2 Remove Global Copy environment using DS CLI
19.3.3 Maintaining the Global Copy environment using the DS CLI
19.3.4 Periodic off-site backup procedure
19.4 DS Storage Manager GUI examples
19.4.1 Establish paths with the DS Storage Manager GUI
19.4.2 Establishing Global Copy pairs
19.4.3 Monitoring the copy status
19.4.4 Converting to Metro Mirror (synchronous)
19.4.5 Suspending a pair

Chapter 20. Global Copy performance and scalability
20.1 Performance
20.2 Scalability
20.2.1 Adding capacity
20.2.2 Capacity for existing versus new systems

Part 6. Global Mirror

Chapter 21. Global Mirror overview
21.1 Synchronous and non-synchronous data replication
21.1.1 Synchronous data replication and dependent writes
21.1.2 Asynchronous data replication and dependent writes
21.2 Basic concepts of Global Mirror
21.3 Setting up a Global Mirror session
21.3.1 Simple configuration to start
21.3.2 Establishing connectivity to remote site
21.3.3 Creating Global Copy relationships
21.3.4 Introducing FlashCopy
21.3.5 Defining the Global Mirror session
21.3.6 Populating the Global Mirror session with volumes
21.3.7 Starting the Global Mirror session
21.4 Consistency Groups
21.4.1 Consistency Group formation
21.4.2 Consistency Group parameters

Part 7. Solutions

Chapter 22. Global Mirror options and configuration
22.1 Terminology used in Global Mirror environments
22.2 Creating a Global Mirror environment
22.3 Modifying a Global Mirror session
22.3.1 Adding or removing volumes to the Global Mirror session
22.3.2 Adding or removing storage disk subsystems or LSSs
22.3.3 Modifying the Global Mirror session parameters
22.3.4 Global Mirror environment topology changes
22.3.5 Removing FlashCopy relationships
22.3.6 Removing the Global Mirror environment
22.4 Global Mirror with multiple storage disk subsystems
22.5 Recovery scenario after production site failure
22.5.1 Normal Global Mirror operation
22.5.2 Production site failure
22.5.3 Global Copy Failover from B to A
22.5.4 Verifying for valid Consistency Group state
22.5.5 Setting consistent data on B volumes
22.5.6 Re-establishing FlashCopy relationships between B and C
22.5.7 Restarting the application at the remote site
22.5.8 Preparing to switch back to the local site
22.5.9 Returning to the local site
22.5.10 Conclusions of failover/failback example

Chapter 23. Global Mirror interfaces
23.1 Global Mirror interfaces: Overview
23.2 DS Command-Line Interface
23.3 DS Storage Manager GUI
23.4 TotalStorage Productivity Center for Replication (TPC-R)

Chapter 24. Global Mirror performance and scalability
24.1 Performance aspects for Global Mirror
24.2 Performance considerations at coordination time
24.3 Consistency Group drain time
24.4 Remote storage disk subsystem configuration
24.5 Balancing the disk subsystem configuration
24.6 Growth within Global Mirror configurations

Chapter 25. Global Mirror examples
25.1 Setting up a Global Mirror environment using the DS CLI
25.1.1 Preparing to work with the DS CLI
25.1.2 Configuration used for the environment
25.1.3 Setup procedure
25.1.4 Creating Global Copy relationships: A to B volumes
25.1.5 Creating FlashCopy relationships: B to C volumes
25.1.6 Starting Global Mirror
25.2 Removing a Global Mirror environment with the DS CLI
25.2.1 Ending Global Mirror processing
25.2.2 Removing the A volumes from the Global Mirror session
25.2.3 Removing the Global Mirror session
25.2.4 Terminating FlashCopy pairs
25.2.5 Terminating Global Copy pairs and removing the paths
25.3 Managing the Global Mirror environment with the DS CLI
25.3.1 Pausing and resuming Global Mirror Consistency Group formation
25.3.2 Changing the Global Mirror tuning parameters
25.3.3 Stopping and starting Global Mirror
25.3.4 Adding and removing A volumes to the Global Mirror environment
25.3.5 Adding and removing an LSS to an existing Global Mirror environment
25.3.6 Adding and removing a subordinate disk subsystem
25.4 Recovery scenario after local site failure with the DS CLI
25.4.1 Stopping Global Mirror processing
25.4.2 Performing Global Copy Failover from B to A
25.4.3 Verifying for valid Consistency Group state
25.4.4 Reversing FlashCopy from B to C
25.4.5 Re-establishing the FlashCopy relationship from B to C
25.4.6 Restarting the application at the remote site
25.5 Returning to the local site
25.5.1 Creating paths from B to A
25.5.2 Performing Global Copy Failback from B to A
25.5.3 Querying for the Global Copy first pass completion
25.5.4 Quiescing the application at the remote site
25.5.5 Querying the Out Of Sync Tracks until the result shows zero
25.5.6 Creating paths from A to B if they do not exist
25.5.7 Performing Global Copy Failover from A to B
25.5.8 Performing Global Copy Failback from A to B
25.5.9 Starting Global Mirror
25.5.10 Starting the application at the local site
25.6 Practicing disaster recovery readiness
25.6.1 Querying the Global Mirror environment to look at the situation
25.6.2 Pausing Global Mirror and checking its completion
25.6.3 Pausing Global Copy pairs
25.6.4 Performing Global Copy Failover from B to A
25.6.5 Creating consistent data on B volumes
25.6.6 Waiting for the FlashCopy background copy to complete
25.6.7 Re-establishing the FlashCopy relationships
25.6.8 Taking FlashCopy from B to D
25.6.9 Performing the disaster recovery testing using the D volume
25.6.10 Performing Global Copy Failback from A to B
25.6.11 Waiting for the Global Copy first pass to complete
25.6.12 Resuming Global Mirror
25.7 DS Storage Manager GUI: Examples
25.8 Setting up a Global Mirror environment using the DS GUI
25.8.1 Defining paths
25.8.2 Creating Global Copy pairs
25.8.3 Creating FlashCopy relationships
25.8.4 Creating the Global Mirror session
25.9 Managing the Global Mirror environment with the DS GUI
25.9.1 Viewing settings and error information of the Global Mirror session
25.9.2 Viewing information of volumes in the Global Mirror session
25.9.3 Pausing a Global Mirror session
25.9.4 Resuming a Global Mirror session
25.9.5 Modifying a Global Mirror session

Part 8. Metro/Global Mirror. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441 Chapter 26. Metro/Global Mirror overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443

IBM System Storage DS8000: Copy Services in Open Environments

26.1 Metro/Global Mirror overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26.1.1 Metro Mirror and Global Mirror: Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . 26.1.2 Metro/Global Mirror design objectives. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26.2 Metro/Global Mirror processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chapter 27. Configuration and setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27.1 Metro/Global Mirror configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27.1.1 Metro/Global Mirror with additional Global Mirror . . . . . . . . . . . . . . . . . . . . . . . 27.1.2 Metro/Global Mirror with multiple storage subsystems . . . . . . . . . . . . . . . . . . . 27.2 Configuration examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27.3 Initial setup of Metro/Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27.3.1 Identifying the PPRC ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27.3.2 Step 1: Set up all Metro Mirror and Global Mirror paths . . . . . . . . . . . . . . . . . . 27.3.3 Step 2: Set up Global Copy NOCOPY from intermediate to remote sites . . . . 27.3.4 Step 3: Set up Metro Mirror between local and intermediate sites . . . . . . . . . . 27.3.5 Step 4: Set up FlashCopy at remote site . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27.3.6 Step 5: Create Global Mirror session and add volumes to session . . . . . . . . . 27.3.7 Step 6: Start Global Mirror at intermediate site . . . . . . . . . . . . . . . . . . . . . . . . . 27.4 Going from Metro Mirror to Metro/Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27.5 Recommendations for setting up Metro/Global Mirror. . . . . . . . . . . . . . . . . . . . . . . . Chapter 28. General Metro/Global Mirror operations. . . . . . . . . . . . . . . . . . . . . . . . . . 28.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28.2 General considerations for storage failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28.3 Checking pair status before failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28.4 Freezing and unfreezing Metro Mirror volumes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28.5 Suspending volumes before failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28.6 Removing volumes from the session . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28.7 Checking consistency at the remote site. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28.8 Setting up an additional Global Mirror from remote site . . . . . . . . . . . . . . . . . . . . . . 28.8.1 Step 1: Create Global Copy from remote to intermediate site . . . . . . . . . . . . . 28.8.2 Step 2: Create FlashCopy at the intermediate site . . . . . . . . . . . . . . . . . . . . . . 28.8.3 Step 3: Create session and Global Mirror at remote site . . . . . . . . . . . . . . . . . Chapter 29. Planned recovery scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
29.2 Recovery at intermediate site . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29.2.1 Step 1: Stop the production application at the local site . . . . . . . . . . . . . . . . . . 29.2.2 Step 2: Suspend the Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29.2.3 Step 3: Failover the intermediate site . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29.2.4 Step 4: Start the production application at the intermediate site. . . . . . . . . . . . 29.3 Return to local from intermediate site . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29.3.1 Step 1: Stop I/O at the intermediate site . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29.3.2 Step 2: Terminate Global Mirror or remove volumes from the session. . . . . . . 29.3.3 Step 3: Suspend Global Copy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29.3.4 Step 4: Fail back Metro Mirror to local site and wait for Full Duplex . . . . . . . . . 29.3.5 Steps 5 and 6: Suspend Metro Mirror and fail over to the local site . . . . . . . . . 29.3.6 Step 7: Fail back Metro Mirror from the local site to the intermediate site . . . . 29.3.7 Step 8: Resume Global Copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29.3.8 Step 9: Start I/O at the local site . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29.3.9 Step 10: Start Global Mirror or add volumes to the session . . . . . . . . . . . . . . . 29.4 Recovery at remote site . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29.4.1 Step 1: Stop I/O at the local site . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29.4.2 Step 2: Terminate Global Mirror or remove volumes from session. . . . . . . . . .
Contents

444 444 445 445 449 450 450 451 451 453 454 454 456 456 457 458 458 461 462 463 464 464 464 466 467 468 468 472 473 473 474 475 476 476 477 477 478 479 480 481 481 481 482 482 483 483 484 484 484 485 486 xi

29.4.3 Step 3: Terminate Global Copy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29.4.4 Step 4: Suspend Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29.4.5 Step 5: Fail over Metro Mirror to intermediate site . . . . . . . . . . . . . . . . . . . . . . 29.4.6 Step 6: Establish Global Copy from remote to intermediate site. . . . . . . . . . . . 29.4.7 Step 7: Start I/O at the remote site . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29.5 Return from remote site . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29.5.1 Step 1: Stop I/O at remote site . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29.5.2 Step 2: Fail back Metro Mirror from the intermediate site to the local site . . . . 29.5.3 Step 3: Terminate Global Copy from remote to intermediate site . . . . . . . . . . . 29.5.4 Step 4: Suspend Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29.5.5 Step 5: Fail over to local site . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29.5.6 Step 6: Fail back Metro Mirror from the local site to the intermediate site . . . . 29.5.7 Step 7: Create Global Copy from intermediate to remote site . . . . . . . . . . . . . 29.5.8 Step 8: Start I/O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29.5.9 Step 9: Start Global Mirror or add volumes to the session . . . . . . . . . . . . . . . . Chapter 30. Disaster recovery test scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30.2 Disaster recovery test at the intermediate site . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30.2.1 Step 1: Prepare the failover for disaster recovery test . . . . . . . . . . . . . . . . . . . 30.2.2 Step 2: Set up FlashCopy to the additional volumes . . . . . . . . . . . . . . . . . . . . 30.2.3 Step 3: Set up PPRC paths from the local site to the intermediate site . . . . . . 30.2.4 Step 4: Resume Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30.2.5 Step 5: Start I/O on the disaster recovery host . . . . . . . . . . . . . . . . . . . . . . . . . 30.3 Disaster recovery test at remote site. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chapter 31. Unplanned scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31.2 Server outages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31.3 Link failures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31.3.1 Metro Mirror link failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31.3.2 Global Copy link failures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31.4 Partial disasters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31.5 Data center outages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chapter 32. 
MGM Incremental Resync . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32.1.1 Functional description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32.1.2 Options for DSCLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32.2 Setting up Metro/Global Mirror with Incremental Resync . . . . . . . . . . . . . . . . . . . . . 32.2.1 Setup of Incremental Resync Metro/Global Mirror . . . . . . . . . . . . . . . . . . . . . . 32.2.2 Going from Global Mirror to Incremental Resync Metro/Global Mirror . . . . . . . 32.3 Failure at the local site scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32.3.1 Local site fails . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32.3.2 Local site is back. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32.3.3 Returning to the original configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32.4 Failure at the intermediate site scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32.4.1 Intermediate site failure. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32.4.2 Intermediate site is back . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32.4.3 Re-synchronization at intermediate. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32.4.4 Restoring the original configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .


Chapter 33. Metro/Global Mirror with IBM TotalStorage Productivity Center for Replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 557

33.1 Metro/Global Mirror: Additional references . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33.2 Metro/Global Mirror scenario. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33.2.1 Creating a session for Metro/Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33.2.2 Creating paths for the Metro/Global Mirror session . . . . . . . . . . . . . . . . . . . . . 33.2.3 Adding Copy Sets to the Metro/Global Mirror session . . . . . . . . . . . . . . . . . . . 33.2.4 Managing Metro/Global Mirror through the GUI . . . . . . . . . . . . . . . . . . . . . . . . 33.2.5 Disaster Recovery with TPC-R . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chapter 34. MGC Incremental Resync . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34.2 Creating a Global Copy relationship between B and C . . . . . . . . . . . . . . . . . . . . 34.3 Modifying an existing Metro Mirror relationship. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34.4 Suspending Metro Mirror between A and B . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34.5 Failover from C to B . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34.6 Incremental resync from A to C . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .


Part 9. Copy Services with System i . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 585 Chapter 35. Copy Services with System i5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.2 System i5 functions and external storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.2.1 System i5 structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.2.2 Single-level storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.2.3 Input Output Processors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.2.4 Clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.2.5 Independent Auxiliary Storage Pools (IASPs). . . . . . . . . . . . . . . . . . . . . . . . . . 35.3 Metro Mirror for an IASP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.3.1 Solution description. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.3.2 Solution benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.3.3 Planning and requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.3.4 Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.3.5 Implementation and usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.4 Global Mirror for an IASP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.4.1 Solution description. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.4.2 Solution benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.4.3 Planning and requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.4.4 Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.4.5 Implementation and usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.5 Metro Mirror for the entire disk space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.5.1 Solution description. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.5.2 Solution benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.5.3 Planning and requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.5.4 Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.5.5 Implementation and usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.6 Global Mirror for the entire disk space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.6.1 Solution description. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.6.2 Solution benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
35.6.3 Planning and requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.6.4 Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.6.5 Implementation and usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.7 FlashCopy of IASP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.7.1 Solution description. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.7.2 Solution benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.7.3 Planning and requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

35.7.4 Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.7.5 Using Metro Mirror and FlashCopy of IASP in the same scenario . . . . . . . . . . 35.7.6 Implementation and usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.8 FlashCopy of the entire disk space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.8.1 Solution description. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.8.2 Solution benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.8.3 Planning and requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.8.4 Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.8.5 Implementation and usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.9 FlashCopy SE with System i partition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.9.1 Overview of FlashCopy SE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.9.2 Scenario and usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.9.3 Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.9.4 Sizing for FlashCopy SE repository . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.9.5 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.9.6 Monitor use of repository space with workload CPW . . . . . . . . . . . . . . . . . . . . 35.9.7 System behavior with a repository full condition . . . . . . . . . . . . . . . . . . . . . . . . 35.10 TPC for Replication with Global Mirror for i5/OS. . . . . . . . . . . . . . . . . . . . . . . . . . . 35.10.1 Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.10.2 Accessing the TPC-R GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.10.3 Create a TPC-R session . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.10.4 Add Copy Sets to the TPC-R session . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.10.5 Start Global Mirror. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.10.6 Switch to Remote site at planned outages . . . . . . . . . . . . . . . . . . . . . . . . . . . 35.10.7 Switch to Remote site at unplanned outages . . . . . . . . . . . . . . . . . . . . . . . . . 35.10.8 Fail back to local site. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .


Part 10. Interoperability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 689 Chapter 36. Data migration through double cascading. . . . . . . . . . . . . . . . . . . . . . . . 691 36.1 Data migration with double cascading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 692 36.2 Double cascading example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 692 Chapter 37. Interoperability between DS8000 and ESS 800 . . . . . . . . . . . . . . . . . . . . 37.1 DS8000 and ESS 800 Copy Services interoperability . . . . . . . . . . . . . . . . . . . . . . . 37.2 Preparing the environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37.2.1 Minimum microcode levels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37.2.2 Hardware and licensing requirements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37.2.3 Network connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37.2.4 Create matching user IDs and passwords . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37.2.5 Updating the DS CLI profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37.2.6 Adding the Copy Services Domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37.2.7 Volume size considerations for RMC (PPRC). . . . . . . . . . . . . . . . . . . . . . . . . . 37.2.8 Volume address considerations on the ESS 800 . . . . . . . . . . . . . . . . . . . . . . . 37.2.9 Establishment errors on newly created volumes. . . . . . . . . . . . . . . . . . . . . . . . 37.3 RMC: Establishing paths between DS8000 and ESS 800 . . . . . . . . . . . . . . . . . . . . 37.3.1 Decoding port IDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37.3.2 Path creation using the DS GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37.3.3 Establish logical paths between DS8000 and ESS 800 using DS CLI . . . . . . . 37.4 Managing Metro Mirror or Global Copy pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37.4.1 Managing Metro Mirror or Global Copy pairs using the DS GUI . . . . . . . . . . . . 37.4.2 Managing Metro Mirror pairs using the DS CLI. . . . . . . . . . . . . . . . . . . . . . . . . 37.4.3 Managing Global Copy pairs using the DS CLI. . . . . . . . . . . . . . . . . . . . . . . . . 37.5 Managing ESS 800 Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiv

37.5.1 Managing Global Mirror pairs using the DS CLI . . . . . . . . . . . . . . . . . . . . . . . . 37.6 Managing ESS 800 FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37.6.1 Creating an ESS 800 FlashCopy using the DS GUI . . . . . . . . . . . . . . . . . . . . . 37.6.2 Creating an ESS 800 FlashCopy using DS CLI . . . . . . . . . . . . . . . . . . . . . . . . 37.6.3 Creating a remote FlashCopy on an ESS 800 using DS CLI . . . . . . . . . . . . . . Chapter 38. Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38.1 IBM Tivoli Storage Manager for Advanced Copy Services . . . . . . . . . . . . . . . . . . . . 38.1.1 TSM for Advanced Copy Services Overview . . . . . . . . . . . . . . . . . . . . . . . . . . 38.1.2 TSM for Advanced Copy Services Backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38.1.3 TSM for Advanced Copy Services Restore. . . . . . . . . . . . . . . . . . . . . . . . . . . . 38.1.4 Cloning of an SAP environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38.2 HACMP/XD for Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38.3 Geographically Dispersed Open Clusters (GDOC) . . . . . . . . . . . . . . . . . . . . . . . . . . Appendix A. Open systems specifics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Database and file system specifics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . File system consistency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Database consistency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . AIX specifics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . AIX and FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . AIX and Remote Mirror and Copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Windows and Remote Mirror and Copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Copy services with Windows volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Microsoft Volume Shadow Copy Services (VSS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Microsoft Virtual Disk Service (VDS). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . SUN Solaris and Copy Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . FlashCopy without a volume manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Remote copy without a Volume Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Copy Services using VERITAS Volume Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . HP-UX and Copy Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . HP-UX and FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . HP-UX with Remote Mirror and Copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . VMware Virtual Infrastructure and Copy Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
Virtual machine considerations regarding Copy Services. . . . . . . . . . . . . . . . . . . . . . . Appendix B. SNMP notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . SNMP overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Physical connection events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Remote Mirror and Copy events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Global Mirror related SNMP traps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Appendix C. CLI migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Migrating ESS CLI to DS CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Reviewing the ESS tasks to migrate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Converting the individual tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Other publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Online resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . How to get IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .



Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 785


Notices
This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service. IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A. The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you. This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice. Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk. IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you. Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. 
You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.


Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:
AIX AS/400 DB2 DS4000 DS6000 DS8000 Enterprise Storage Server ESCON eServer FICON FlashCopy GDPS HACMP HyperSwap IBM iSeries i5/OS NetView OS/400 POWER4 POWER5 Redbooks Redbooks (logo) S/390 System i System i5 System p System x System z System Storage System Storage DS Tivoli TotalStorage Virtualization Engine WebSphere z/OS zSeries

The following terms are trademarks of other companies: SAP, and SAP logos are trademarks or registered trademarks of SAP AG in Germany and in several other countries. SANsurfer, QLogic, and the QLogic logo are registered trademarks of QLogic Corporation. SANblade is a registered trademark in the United States. Oracle, JD Edwards, PeopleSoft, Siebel, and TopLink are registered trademarks of Oracle Corporation and/or its affiliates. Network Appliance, SnapMirror, and the NetApp logo are trademarks or registered trademarks of NetApp, Inc. in the U.S. and other countries. Disk Magic, and the IntelliMagic logo are trademarks of IntelliMagic BV in the United States, other countries, or both. Java, JDK, Solaris, Sun, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Excel, Internet Explorer, Microsoft, Windows NT, Windows Server, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Intel, Pentium, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States, other countries, or both. UNIX is a registered trademark of The Open Group in the United States and other countries. Linux is a trademark of Linus Torvalds in the United States, other countries, or both. Other company, product, or service names may be trademarks or service marks of others.


Preface
This IBM Redbooks publication will help you install, tailor, and configure Copy Services for open systems environments on the IBM System Storage DS8000. It should be read in conjunction with The IBM System Storage DS8000 Series: Architecture and Implementation, SG24-6786. This book will help you design and implement a new Copy Services installation or migrate from an existing installation. It includes hints and tips to maximize the effectiveness of your installation. There is a companion book that supports the configuration of the Copy Services functions in a z/OS environment, The IBM System Storage DS8000 Series: Copy Services with IBM System z, SG24-6787.

The team that wrote this book


This book was produced by a team of specialists from around the world working for the International Technical Support Organization, San Jose Center in IBM Mainz, Germany. Bertrand Dufrasne is an IBM Certified I/T Specialist and Project Leader for System Storage disk products at the International Technical Support Organization, San Jose Center. He has worked at IBM in various I/T areas. Bert has written many IBM Redbooks and has also developed and taught technical workshops. Before joining the ITSO, he worked for IBM Global Services as an Application Architect. He holds a degree in Electrical Engineering. Wilhelm Gardt holds a degree in Computer Sciences from the University of Kaiserslautern, Germany. He has worked as a software developer and IT specialist, designing and implementing heterogeneous IT environments (SAP, Oracle, AIX, HP-UX, SAN). In 2001, he joined the IBM TotalStorage Interoperability Centre (now Systems Lab Europe) in Mainz, Germany. He has performed many proof of concepts on IBM storage products. Since September 2004 he is a member of the Technical Pre-Sales Support team for IBM Storage (ATS). Jana Jamsek is IT specialist in IBM Slovenia. She works for Storage Advanced Technical Support in Europe and specializes in IBM Storage Systems and i5/OS systems. Jana has eight years of experience in the System i and AS/400 area, and six years experience in Storage. She holds a Masters degree in computer science and a degree in mathematics from University of Ljubljana, Slovenia. She has co-authored numerous IBM Redbooks publications for System i and IBM System Storage products, including the IBM System Storage DS8000, the IBM Virtualization Engine TS7510 and other tape offerings. Peter Kimmel is an IT Specialist with the Enterprise Disk ATS Performance team at the European Storage Competence Center in Mainz, Germany. He joined IBM Storage in 1999 and since then worked with SSA, VSS, the various ESS generations, and DS8000/DS6000. He has been involved in all Early Shipment Programs (ESPs) and early installs for the Copy Services rollouts. Peter holds a Diploma (MSc) degree in Physics from the University of Kaiserslautern.


Jukka Myyrylainen is an Advisory IT Specialist with Integrated Technology Services, IBM Finland. He has several years of experience with storage product implementations on both System z and open systems platforms. He provides consultancy, technical support, and implementation services to customers for IBM's strategic storage hardware and software products. He has contributed to several storage-related IBM Redbooks publications in the past.

Markus Oscheka is an IT Specialist for Proof of Concepts and Benchmarks at the ATS Customer Solutions team in Mainz, Germany. His areas of expertise include setup and demonstration of IBM System Storage products and solutions in various environments including AIX, Linux, Windows, HP-UX, and Solaris. He has worked at IBM for five years. He has performed many proof of concepts with Copy Services on DS6000/DS8000, as well as performance benchmarks with DS4000/DS6000/DS8000.

Gerhard Pieper is an IT Specialist for Open Systems Solutions at the Enterprise Disk team in Mainz, Germany. For 5 years, he has worked as a last-level support engineer for IBM High End Storage Products. His areas of expertise include test, implementation, support, and documentation of IBM disk storage servers. His current focus is the implementation of management software for IBM Storage.

Stephen West is a member of the IBM Americas Advanced Technical Support team for IBM disk storage products and related copy services. His primary focus has been the enterprise disk storage products, copy services, and customer performance issues with these storage and copy products. A significant part of this job is to educate the technical team in the field on the storage and related copy services solutions. Prior to new product or new release announcements, the ATS team is trained on the details of the new products and releases, and then develops education for field training sessions.

Axel Westphal is working as an IT Specialist for Workshops and Proof of Concepts at the European Storage Competence Center (ESCC) in Mainz, Germany. Axel joined IBM in 1996, working for Global Services as a System Engineer. His areas of expertise include setup and demonstration of IBM System Storage products and solutions in various environments. Since 2004 he has been responsible for Workshops and Proof of Concepts conducted at the ESCC with DS8000, SAN Volume Controller, and IBM TotalStorage Productivity Center.

Roland Wolf is a Certified Consulting IT Specialist in Germany. He has 21 years of experience with S/390 and disk storage hardware and, in recent years, also in SAN and storage for open systems. He is working in Field Technical Sales Support for storage systems. His areas of expertise include performance analysis and disaster recovery solutions in enterprises utilizing the unique capabilities and features of the IBM disk storage servers, ESS and DS6000/DS8000. He has contributed to various IBM Redbooks publications, including ESS, DS6000, and DS8000 Concepts and Architecture, and DS6000/DS8000 Copy Services. He holds a Ph.D. in Theoretical Physics.


The team: Wilhelm, Markus, Jukka, Peter, Jana, Steve, Bertrand, Gerhard, Roland, and Axel

Special thanks
We especially want to thank:
John Bynum, DS8000 World Wide Technical Support Marketing Lead
Rainer Zielonka and Rainer Erkens, for hosting us at the European Storage Competency Center in Mainz, Germany. They were able to supply us with the needed hardware, conference room, and all of the many things needed to run a successful residency.
Günter Schmitt and Uwe Schweikhard, for their help in reserving and preparing the equipment we used
Many thanks to the authors of the previous edition of this book: Peter Kimmel, Jukka Myyrylainen, Lu Nguyen, Gero Schmidt, Shin Takata, Anthony Vandewerdt, and Bjoern Wesselbaum
We also would like to thank:
Selwyn Dickey, Timothy Klubertanz, Vess Natchev, James McCord and Chuck Stupca
IBM Rochester, System i Client Technology Center
Guido Ihlein, Marcus Dupuis, and Wilfried Kleemann, Werner Bauer, Hilmar Rrig, Ingrid Stey, Peter Klee, Hans-Paul Drumm, Torsten Rothenwaldt, Kai Jehnen, and Jens Wissenbach
IBM Germany
Bob Bartfai, James Davison, Craig Gordon, Lisa Gundy, Bob Kern, Lee La Frese, Jennifer Mason, Alan McClure, Rosemary McCutchen, Rachel Mikolajewski, Richard Ripberger, Henry Sautter, Jim Sedgwick, David Shackelford, Gail Spear, John Thompson, Paulus Usong, Steve Van Gundy, Sonny Williams, and Steve Wilkins
IBM US
Brian Sherman
IBM Canada


Nick Clayton
IBM UK
Yvonne Lyon, Deanna Polm, and Sangam Racherla
IBM ITSO, San Jose, CA

Become a published author


Join us for a two- to six-week residency program! Help write an IBM Redbooks publication dealing with specific products or solutions, while getting hands-on experience with leading-edge technologies. You'll team with IBM technical professionals, Business Partners and customers. Your efforts will help increase product acceptance and customer satisfaction. As a bonus, you'll develop a network of contacts in IBM development labs, and increase your productivity and marketability. Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html

Comments welcome
Your comments are important to us! We want our Redbooks to be as helpful as possible. Send us your comments about this or other Redbooks in one of the following ways: Use the online Contact us review Redbooks form found at:
ibm.com/redbooks

Send your comments in an email to:


redbook@us.ibm.com

Mail your comments to: IBM Corporation, International Technical Support Organization Dept. HYTD Mail Station P099 2455 South Road Poughkeepsie, NY 12601-5400


Summary of changes
This section describes the technical changes made in this edition of the book and in previous editions. This edition may also include minor corrections and editorial changes that are not identified.
Summary of Changes for SG24-6788-03, IBM System Storage DS8000: Copy Services in Open Environments, as created or updated on December 4, 2009.

May 2008, Fourth Edition


This revision reflects the addition, deletion, or modification of new and changed information described below.

New information
- IBM FlashCopy SE
- Incremental Resync in a Metro/Global Copy environment
- IBM Tivoli Storage Manager for Advanced Copy Services brief overview
- Geographically Dispersed Open Clusters (GDOC) brief overview

Changed information
- Changed Copy Services architecture to reflect the SSPC/TPC with DS8000 Element Manager
- Updated Copy Services examples
- Updated Three Site Metro/Global Mirror with Incremental Resync
- New IBM TotalStorage Productivity Center for Replication examples
- Updated licensing information


Part 1. Overview
In this part of the book, we describe the various Advanced Copy Services offerings for the DS8000 series and how they relate to previous Copy Services offerings available on the Enterprise Storage Server (ESS). We also show how the existing Copy Services functions from the ESS can coexist with the DS8000 series Copy Services. Similarly, we discuss their use with the DS6000 series Copy Services. With the announcement of the DS8000 series and Advanced Copy Services for z/OS on the DS8000 series, we introduced some new terminology. We also introduced these terms for Advanced Copy Services Version 2 for the ESS. These are detailed in the following Redbooks publication, The IBM TotalStorage DS8000 Series: Implementation, SG24-6786, and are summarized in Table 1.
Table 1 Reference chart for DS Copy Services on DS8000

DS8000 function                   ESS800 Version 2 function   Formerly known as
FlashCopy                         FlashCopy                   FlashCopy
FlashCopy SE                      n/a                         n/a
Global Mirror                     Global Mirror               Asynchronous PPRC
Metro Mirror                      Metro Mirror                Synchronous PPRC
Global Copy                       Global Copy                 PPRC Extended Distance
z/OS Global Mirror                z/OS Global Mirror          Extended Remote Copy (XRC)
Metro/Global Mirror for zSeries   z/OS Metro/Global Mirror    3 site solution using Sync PPRC and XRC
Metro/Global Copy                 Metro/Global Copy           2 or 3 site Asynchronous Cascading PPRC


All DS8000 installations require at least an Operating Equipment License (OEL) key to operate. In addition, Copy Services functions require that an appropriate License Key is installed. The Copy Services license types are:
- PTC (Point-in-Time Copy), covering FlashCopy and FlashCopy SE
- MM (Metro Mirror), covering Metro Mirror and Global Copy
- GM (Global Mirror)
- MGM (Metro/Global Mirror)
- RMZ (Remote mirror for z/OS)
The Copy Services configuration is done using either the IBM System Storage DS8000 Command-Line Interface, DS CLI, or the IBM System Storage DS8000 Storage Manager Graphical User Interface, DS GUI. Copy Services can also be managed using the IBM TotalStorage Productivity Center for Replication (TPC-R) application. Note the following requirements and considerations:
- The DS CLI replaces both the ESS CLI and the ESS Copy Services CLI. The DS CLI can also be used for ESS 800 Copy Services, but not for ESS configuration. For ESS configuration, you have to continue to use the ESS Specialist or the ESS CLI.
- The DS CLI can invoke Remote Mirror and Copy relationships with ESS 800, if the ESS 800 is at code level LIC 2.4.3.65 or above.
- The DS CLI provides a consistent interface for current and planned IBM System Storage products.
- The DS CLI invokes Copy Services functions directly rather than invoking a saved task as the ESS CLI does. However, DS CLI commands can be saved in reusable scripts.
- The DS GUI can only be used for one-time execution of a Copy Services operation; it cannot save tasks.
- TPC-R requires additional Ethernet adapters to be installed on the DS8000.
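As a quick way to verify which of these license keys are actually activated on a given system, the DS CLI lskey command can be used. The following is a minimal sketch; the storage image ID is an example and the lines starting with # are annotations only.

# list the activated license keys (OEL, PTC, MM, GM, and so on) for one Storage Facility Image
dscli> lskey IBM.2107-7520781

Copy Services commands for a function whose key has not been activated are rejected by the Storage Unit.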


Chapter 1. Introduction
This chapter provides a brief summary of the various Copy Services functions available on the DS8000 series. These services are very similar to the existing Copy Services for the IBM Enterprise Storage Server, and some models of the ESS are interoperable with the DS8000.
Copy Services are a collection of functions that provide disaster recovery, data migration, and data duplication capabilities. There are two primary types of Copy Services functions: Point-in-Time Copy and Remote Mirror and Copy. Generally, the Point-in-Time Copy functions are used for data duplication, and the Remote Mirror and Copy functions are used for data migration and disaster recovery. With the Copy Services functions, for example, you can create backup data with little or no disruption to your application, and you can back up your application data to the remote site for disaster recovery.
Copy Services run on the DS8000 Storage Unit and support open systems and System z environments. A subset of these functions is also supported on the previous generation of disk storage systems, the IBM TotalStorage Enterprise Storage Server (ESS). Many design characteristics of the DS8000 and its data copying and mirroring capabilities contribute to the protection of your data, 24 hours a day and 7 days a week (24x7).
The optional licensed functions of Copy Services are:
- FlashCopy, which is a point-in-time copy function
- IBM FlashCopy SE, a new Space Efficient point-in-time copy function introduced with Licensed Machine Code 5.30xx.xx
- Remote Mirror and Copy functions, previously known as Peer-to-Peer Remote Copy (PPRC), which include:
  - Metro Mirror, previously known as Synchronous PPRC
  - Global Copy, previously known as PPRC Extended Distance
  - Global Mirror, previously known as Asynchronous PPRC
  - 3-site Metro/Global Mirror with Incremental Resync
- z/OS Global Mirror, previously known as Extended Remote Copy (XRC)
- z/OS Metro/Global Mirror across three sites


The Copy Services functions are optional licensed functions of the DS8000. Additional licensing information for Copy Services functions can be found in Chapter 3, Licensing on page 17. You can manage the Copy Services functions through a command-line interface (DS CLI) and a Web-based graphical user interface (DS Storage Manager GUI). You can also manage the Copy Services functions through the open application programming interface (DS Open API). The IBM TotalStorage Productivity Center for Replication (TPC-R) program provides yet another interface for managing Copy Services functions. Whichever of these interfaces you use, it invokes the Copy Services functions over the Ethernet network. TPC-R requires additional Ethernet adapters to be installed on the DS8000. These interfaces can be used to manage Copy Services on both FB and CKD volumes. We explain these interfaces in Part 2, Interfaces on page 25.


1.1 Point-in-time copy functions


The DS8000 Point-in-Time Copy (PTC) functions, FlashCopy and the new IBM FlashCopy SE, enable you to create full volume copies of data in a Storage Unit. To use the PTC functions, you must purchase the DS8000 Series Function Authorization (machine type 239x or 2244) with appropriate PTC feature codes (72xx for FlashCopy, 73xx for FlashCopy SE).

FlashCopy
When you set up a FlashCopy operation, a relationship is established between the source and target volumes, and a bitmap of the source volume is created. Once this relationship and bitmap are created, the target volume can be accessed as though all the data had been physically copied. While a relationship between the source and target volume exists, optionally, a background process copies the tracks from the source to the target volume. When a FlashCopy operation is invoked, it takes only a few seconds to complete the process of establishing the FlashCopy pair and creating the necessary control bitmaps. Thereafter, you have access to a Point-in-Time Copy of the source volume. As soon as the pair has been established, you can read and write to both the source and target volumes. After creating the bitmap, a background process begins to copy the real data from the source to the target volumes. If you access the source or the target volumes during the background copy, FlashCopy manages these I/O requests, and facilitates both reading from and writing to both the source and target copies. When all the data has been copied to the target, the FlashCopy relationship ends, unless it is set up as a persistent relationship (used for example for incremental copies). The user may withdraw a FlashCopy relationship any time before all data has been copied to the target.
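To illustrate how such a relationship is typically handled with the DS CLI, the following minimal sketch establishes, queries, and withdraws a FlashCopy pair. The volume IDs are placeholders, the # lines are annotations, and the full set of options (background copy, persistence, incremental copies, and so on) is described in the FlashCopy chapters of this book.

# establish a FlashCopy relationship from source volume 1000 to target volume 1100
dscli> mkflash 1000:1100
# query the relationship and the progress of the background copy
dscli> lsflash 1000:1100
# optionally withdraw the relationship before the background copy has completed
dscli> rmflash 1000:1100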

FlashCopy SE
IBM FlashCopy SE is a new optional licensed function introduced with Licensed Machine Code 5.30xx.xx. FlashCopy SE is a FlashCopy relationship for which the target volume is a Space Efficient volume. By using FlashCopy SE, you can reduce the amount of physical space consumed on your DS8000.
A Space Efficient volume is a volume for which physical space is allocated dynamically on a track basis. Initially, a Space Efficient volume does not consume any physical space. When data is written to a Space Efficient volume, a track of physical space is taken from a common preallocated repository and is used to hold the data for the Space Efficient volume. Contrast this with a traditional, fully provisioned volume for which all of the physical space is allocated when the volume is created.
A repository is a special volume that is used to contain the physical space for Space Efficient volumes. As tracks are written to a Space Efficient volume, storage for the tracks is obtained from the space assigned to the repository. The data for a Space Efficient volume is stored on the repository, but the data is only accessible from the Space Efficient volume. The host does not have access to the repository, only the associated Space Efficient volumes. The repository will provide the physical space for multiple Space Efficient volumes. There may be multiple repositories in a DS8000, one in each Extent Pool.
When a track on the source volume of any FlashCopy relationship is updated, the current version of the track must be copied to the target device before the update can be destaged on the source device. For FlashCopy SE, the current version of the track is written to space taken from the repository and assigned to the Space Efficient volume. In this manner, the amount of physical space consumed by the target volume of a FlashCopy SE relationship is limited to the minimum amount of space required to maintain the copy.


FlashCopy SE should be used for copies that are short term in nature. Examples include copies that will be backed up to tape and the FlashCopy relationships in a Global Mirror session. FlashCopy SE could also be used for copies that will be kept long term, if the installation knows that there will be few updates to the source and target volumes. IBM FlashCopy SE requires DS8000 Licensed Machine Code (LMC) level 5.3.x.x, or later. IBM FlashCopy SE is licensed separately from FlashCopy.
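A FlashCopy SE relationship is established in much the same way. The sketch below assumes that volume 1200 has already been created as a Space Efficient volume in an extent pool with a repository, and that the -tgtse option of mkflash and the lssestg command are available at this code level; the volume IDs and the # annotations are illustrative only.

# FlashCopy onto a Space Efficient target; only updated tracks consume repository space
dscli> mkflash -tgtse 1000:1200
# display the relationship
dscli> lsflash 1000:1200
# check the capacity and usage of the Space Efficient repository per extent pool
dscli> lssestg -l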

1.2 Remote Mirror and Copy functions


The DS8000 Remote Mirror and Copy (RMC) functions provide a set of flexible data mirroring techniques that allow replication between volumes on two or more disk storage systems. You can use the functions for such purposes as data backup and disaster recovery. Remote Mirror and Copy functions are optional features of the DS8000. To use them, you must purchase the DS8000 Series Function Authorization (machine type 239x or 2244) with appropriate feature codes. Attention: For the DS8000 Turbo models 931, 932, and 9B2, the Metro Mirror and Global Mirror features are licensed separately. DS8000 Storage Units can participate in Remote Mirror and Copy implementations with the ESS Model 750, ESS Model 800, and DS6000 Storage Units. To establish an RMC relationship between the DS8000 and the ESS, the ESS needs to have Licensed Internal Code (LIC) version 2.4.3.15 or later. The DS8000 supports the following Remote Mirror and Copy functions.

Metro Mirror
Metro Mirror provides real-time mirroring of logical volumes between two DS8000s that can be located up to 300 km from each other. It is a synchronous copy solution where write operations are completed on both copies (local and remote site) before they are considered to be complete.
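In DS CLI terms, Metro Mirror first needs remote mirror paths between the source and target logical subsystems (LSS), after which the volume pairs are created. The following sketch uses placeholder storage image IDs, WWNN, port IDs, LSS numbers, and volume IDs; see the Metro Mirror chapters of this book for the complete procedure.

# create a PPRC path from local LSS 10 to remote LSS 10 over one port pair
dscli> mkpprcpath -dev IBM.2107-7500001 -remotedev IBM.2107-7500002 -remotewwnn 5005076303FFC123 -srclss 10 -tgtlss 10 I0010:I0110
# establish the synchronous (Metro Mirror) volume pair
dscli> mkpprc -dev IBM.2107-7500001 -remotedev IBM.2107-7500002 -type mmir 1000:1000
# verify that the pair reaches the Full Duplex state
dscli> lspprc -dev IBM.2107-7500001 1000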

Global Copy
Global Copy copies data non-synchronously and over longer distances than is possible with Metro Mirror. When operating in Global Copy mode, the source volume sends a periodic, incremental copy of updated tracks to the target volume, instead of sending a constant stream of updates. This causes less impact to application writes for source volumes and less demand for bandwidth resources, while allowing a more flexible use of the available bandwidth. Global Copy does not keep the sequence of write operations. Therefore, the copy is normally fuzzy, but you can make a consistent copy through synchronization (called a go-to-sync operation). After the synchronization, you can issue FlashCopy at the secondary site to make the backup copy with data consistency. After the establishment of the FlashCopy, you can change the copy relationship back to non-synchronous mode. Note: In order to make a consistent copy at the secondary site with FlashCopy, you must purchase a Point-in-Time Copy function authorization for the secondary Storage Unit.
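Global Copy uses the same mkpprc command with a different copy type. The sketch below assumes the paths created in the Metro Mirror example and again uses placeholder IDs; the go-to-sync step shown (re-issuing mkpprc with -type mmir and later with -type gcp) is one common way to obtain a consistent point for a FlashCopy at the secondary, and should be treated as an outline rather than the complete procedure.

# establish a non-synchronous Global Copy pair
dscli> mkpprc -dev IBM.2107-7500001 -remotedev IBM.2107-7500002 -type gcp 1000:1000
# monitor the remaining out-of-sync tracks while the periodic copy catches up
dscli> lspprc -l -dev IBM.2107-7500001 1000
# go-to-sync: temporarily switch the pair to synchronous mode, take a FlashCopy
# at the secondary site, then return the pair to Global Copy mode
dscli> mkpprc -dev IBM.2107-7500001 -remotedev IBM.2107-7500002 -type mmir 1000:1000
dscli> mkpprc -dev IBM.2107-7500001 -remotedev IBM.2107-7500002 -type gcp 1000:1000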


Global Mirror
Global Mirror provides a long-distance remote copy feature across two sites using asynchronous technology. This solution is based on the existing Global Copy and FlashCopy functions. FlashCopy SE may be used instead of FlashCopy. With Global Mirror, the data that the host writes to the Storage Unit at the local site is asynchronously shadowed to the Storage Unit at the remote site. A consistent copy of the data is automatically maintained on the Storage Unit at the remote site. Global Mirror operations provide the benefit of supporting operations over virtually unlimited distances between the local and remote sites, restricted only by the capabilities of the network and the channel extension technology. It can also provide a consistent and restartable copy of the data at the remote site, created with minimal impact to applications at the local site. The ability to maintain an efficient synchronization of the local and remote sites with support for failover and failback modes helps to reduce the time that is required to switch back to the local site after a planned or unplanned outage.
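Setting up Global Mirror with the DS CLI involves the Global Copy pairs, the FlashCopy (or FlashCopy SE) relationships at the remote site, and a session that groups the volumes for consistency group formation. The outline below only sketches the session handling with placeholder LSS, session, and volume numbers, and assumes that the Global Copy and FlashCopy relationships already exist; the Global Mirror chapters of this book describe the full setup and the available tuning options.

# define Global Mirror session 02 on local LSS 10
dscli> mksession -lss 10 02
# add the Global Copy primary volumes of that LSS to the session
dscli> chsession -lss 10 -action add -volume 1000-1003 02
# start consistency group formation, with LSS 10 as the Global Mirror master
dscli> mkgmir -lss 10 -session 02
# check that consistency groups are being formed successfully
dscli> showgmir 10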

3-site Metro/Global Mirror with Incremental Resync


Metro/Global Mirror combines Metro Mirror and Global Mirror to provide a 3-site disaster recovery solution. The production system uses the storage at the local site, which is replicated synchronously with Metro Mirror to an intermediate site. The secondary volumes of the Metro Mirror relationships are in turn used as the primary volumes of cascaded Global Mirror relationships, which replicate the data to the remote disaster recovery site. This provides a very resilient and flexible solution for recovering from various disaster situations. The customer also benefits from synchronous replication of the data to a nearby location acting as the intermediate site, while the data can be copied across almost unlimited distances with data consistency available at any time in each location.
With Incremental Resync, it is possible to change the copy target destination of a copy relation without requiring a full copy of the data. This functionality can be used, for example, when the intermediate site fails due to a disaster. In this case a Global Mirror is established from the local to the remote site, which bypasses the intermediate site. When the intermediate site becomes available again, Incremental Resync is used to bring it back into the Metro/Global Mirror setup.
The 3-site Metro/Global Mirror is an optional chargeable feature available on all models (92x/9Ax and 93x/9Bx). It requires DS8000 Licensed Machine Code level 5.2.200.x (bundle version 6.2.200.x) or later.


Chapter 2. Copy Services architecture


This chapter is an overview of the structure of the Copy Services communication architecture in either an open environment or a zSeries environment. The chapter covers the following topics:
- Introduction to the Copy Services structure
- The structure of Copy Services management


2.1 Introduction to the Copy Services structure


The Copy Services architecture for the IBM ESS 800 used a construct called a Copy Services Domain to manage Copy Services. The communications architecture for the DS6000 and DS8000 is noticeably different. Instead of a Copy Services Domain, we now use the concept of Storage Complexes. You can perform Copy Services operations both within and between DS8000, DS6000, and ESS Storage Complexes. An ESS 800 Copy Services Domain (at the correct firmware level) can, for management purposes, appear to a DS8000 as an independent Storage Complex. Within a Storage Complex, you will find management consoles, Storage Units, and in the case of the DS8000, Storage Facility Images. First we define each of these constructs.

2.1.1 Management console defined


A management console is a PC that is used to manage the devices within a Storage Complex. In the case of the DS8000, it is a dedicated PC that comes with a pre-installed Linux-based operating system and pre-installed management software. The DS8000 console is called a Hardware Management Console (HMC). The end-user interacts with the management console using either the Web browser based DS GUI or the Command-Line Interface (DS CLI). It is possible using the DS GUI to manage Copy Services operations on multiple Storage Complexes.
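For example, a DS CLI session against the HMC of a Storage Complex is opened roughly as shown below; the IP address, user ID, and password are placeholders (in practice they are usually kept in the dscli profile file rather than typed on the command line). Once connected, lssi shows the Storage Facility Images that can be managed from this console.

# start an interactive DS CLI session against the primary HMC of the Storage Complex
dscli -hmc1 10.0.0.1 -user admin -passwd mypassword
# within the session, list the Storage Facility Images known to this console
dscli> lssi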

2.1.2 Storage Unit defined


A Storage Unit is the physical storage device (including expansion enclosures) that you see when you walk into the computer room. If you open a frame door and look at a DS8000 sitting on the floor (with any attached expansion frames), then you are looking at a DS8000 Storage Unit. An example would be a 2107-922 or 932 with an attached 2107-92E. Another example would be a 2107-9A2 with an attached 2107-9AE. In each example you would have a single DS8000 Storage Unit.

2.1.3 Storage Facility Image (SFI) defined


A Storage Facility Image is the logical storage system within a DS8000 Storage Unit: it owns the arrays and volumes and runs the licensed functions, including Copy Services. The non-LPAR models contain a single SFI, while the LPAR models (9A2 and 9B2) are partitioned into two independent SFIs. When using the DS GUI or the DS CLI to manage a DS8000, you have to distinguish whether you are working with the SFI or with the Storage Unit. Some commands operate on the Storage Unit, but for most operations you work with an SFI. You can tell the difference very easily: the serial number of the Storage Unit always ends with a 0, and the serial number of the first SFI always ends with a 1. If you have ordered a model 2107-9A2 or 2107-9B2, then you also have a second SFI, whose serial number always ends with a 2.

DS8000 non-LPAR Model SFI example


In Example 2-1 we connect to a DS8000 Management Console using the DS CLI. We issue the lssu command to display the DS8000 Storage Unit of a 2107-922. We then issue the lssi command to display the DS8000 Storage Image on that Storage Unit. The Storage Unit serial number is 7520780. The SFI serial number is 7520781. Note how the Storage Unit and the SFI have different WWNNs.


Example 2-1 Difference between a DS8000 Storage Unit and DS8000 SFI - Model 922
dscli> lssu
Date/Time: 13 November 2005 19:21:01 IBM DSCLI Version: 5.1.0.204
Name         ID               Model WWNN             pw state
=============================================================
2107-7520780 IBM.2107-7520780 922   5005076303FFF9A5 On
dscli> lssi
Date/Time: 13 November 2005 19:21:21 IBM DSCLI Version: 5.1.0.204
Name      ID               Storage Unit     Model WWNN             State  ESSNet
=================================================================================
ATS_3_EXP IBM.2107-7520781 IBM.2107-7520780 922   5005076303FFC1A5 Online Enabled

DS8000 LPAR Model SFI example


In Example 2-2 we connect to a DS8000 Management Console using the DS CLI. We issue the lssu command to display the DS8000 Storage Unit of a 2107-9A2. We then issue the lssi command to display the DS8000 Storage Images on that Storage Unit. The Storage Unit serial number is 75ABTV0. The SFI serial numbers are 75ABTV1 and 75ABTV2. The Storage Unit and the SFIs all have different WWNNs.
Example 2-2 Difference between a DS8000 Storage Unit and DS8000 SFI - Model 9A2
dscli> lssu
Date/Time: 13 November 2005 19:25:25 IBM DSCLI Version: 5.1.0.204
Name            ID               Model WWNN             pw state
================================================================
2107_75ABTV1/V2 IBM.2107-75ABTV0 9A2   5005076303FFFE63 On
dscli> lssi
Date/Time: 13 November 2005 19:25:34 IBM DSCLI Version: 5.1.0.204
Name ID               Storage Unit     Model WWNN             State  ESSNet
============================================================================
     IBM.2107-75ABTV1 IBM.2107-75ABTV0 9A2   5005076303FFC663 Online Enabled
     IBM.2107-75ABTV2 IBM.2107-75ABTV0 9A2   5005076303FFCE63 Online Enabled

2.1.4 Storage Complex defined


A DS8000 Storage Complex consists of one or two DS8000 Storage Units managed by one or two DS Hardware Management Consoles (HMCs). A DS6000 Storage Complex consists of one or two DS6000 Storage Units, managed by one or two DS Storage Management Consoles (SMCs). For the ESS 800, we can manage up to eight ESS 800s in a single ESS Copy Services domain. Note: Only machines of the same family, such as DS8000, can be in the same Storage Complex. However, Storage Complexes with different machine types can be joined together for remote mirror and copy management. For example, a DS8000 Storage Complex and a DS6000 Storage Complex can be inter-connected. In Figure 2-1, you see a logical view of two Storage Complexes, each with one DS8000 Storage Unit. They are running Remote Mirror and Copy. The two DS HMCs are connected via Ethernet LAN, allowing you to use the DS GUI on either DS HMC to manage Copy Services on both DS8000s.


Figure 2-1 Logical view of a Storage Complex
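When a Copy Services command spans two Storage Complexes as in Figure 2-1, the DS CLI identifies the two sides through the -dev and -remotedev parameters, using the local and remote Storage Facility Image IDs. As a sketch, the command below (re-using the example image IDs and WWNN from Example 2-1 and Example 2-2, and assuming the lsavailpprcport command) asks which Fibre Channel port pairs could carry remote mirror paths from local LSS 10 to remote LSS 10.

# list the I/O port pairs available for paths between LSS 10 on the local SFI
# and LSS 10 on the remote SFI
dscli> lsavailpprcport -dev IBM.2107-7520781 -remotedev IBM.2107-75ABTV1 -remotewwnn 5005076303FFC663 10:10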

2.2 The structure of Copy Services management


The DS CLI and DS GUI can be used to manage Copy Services operations for both System z and open environments. In this chapter, we discuss the communication and management structures for these two interfaces.

2.2.1 Communication path for Copy Services


The communication structure for the Web browser based GUI interface has changed with the DS8000 Release 3 code level (Licensed Machine Code 5.3.x.x, bundle version 63.x.x.x). On systems with an earlier code level, or systems whose code has been upgraded from pre-R3 to R3 level, you can access the GUI by pointing your Web browser to the DS HMC. On new DS8000 systems with R3 or later code level, this is no longer possible. Instead, you access the DS GUI by launching the DS8000 Element Manager on a TotalStorage Productivity Center (TPC) server. This can be either a customer workstation running TPC, or a System Storage Productivity Center (SSPC) workstation, which is delivered with the DS8000 and has TPC preinstalled. The Element Manager in the TPC server provides the means to access the DS GUI.

The communication paths for the DS CLI and DS GUI are illustrated in Figure 2-2. Note that the DS CLI communication path has not changed in R3.

Note: You can also access the DS GUI by using the preinstalled Web browser on the HMC console. In this case, a TPC server is not used.


Figure 2-2 Network Interface communication structure (three panels: DS8000 pre-R3, DS8000 R3 with the DS8000 Element Manager on an SSPC/TPC server, and ESS 800; in each case clients connect through the Network Interface Server on the DS HMC to the Network Interface Nodes and microcode on the processor complexes)

DS8000 pre R3 management structure


The management structure of pre R3 systems is shown on the left of Figure 2-2: The client uses either the DS CLI or a Web browser GUI to communicate with the Network Interface Server running on the DS HMC. The Network Interface Server software will communicate with the Network Interface Node, which resides on each server of a Storage Facility Image. From this point, the Network Interface will then talk to the microcode, which operates the DS8000.

DS8000 R3 management structure


The management structure of R3 systems is shown in the middle of Figure 2-2: The client uses a Web browser to access a TPC server and launches the DS8000 Element Manager inside the TPC. The Element Manager then communicates with the Network Interface Server running on the DS HMC. Alternatively, the client uses the DS CLI to communicate with the Network Interface Server running on the DS HMC. The Network Interface Server software will communicate with the Network Interface Node, which resides on each server of a Storage Facility Image. From this point, the Network Interface will then talk to the microcode, which operates the DS8000.


2105 ESS 800 management structure


For reference, the management structure of ESS 800 systems is also shown in Figure 2-2: The client uses either the DS CLI or the ESS Copy Services Web browser GUI to communicate with the ESS 800 Copy Services Server running on an ESS 800 cluster. The client could also use the DS GUI to issue commands to the ESS 800 Copy Services Server if a DS8000 HMC is available to route them through (not shown in the diagram). The ESS 800 Copy Services Server then interacts with the microcode that operates the ESS 800.

2.2.2 Remote Mirror and Copy between Storage Complexes


It is possible to use Remote Mirror and Copy between Storage Complexes, as depicted in Figure 2-3. In this scenario we have three Storage Complexes. The complexes are interconnected at the top of the chart by Storage Area Network (SAN) connections (solid lines), and at the bottom of the chart by an Ethernet LAN (dashed lines).

Figure 2-3 Remote Mirror and Copy between Storage Complexes (a DS8000 Storage Complex with HMC1, a DS6000 Storage Complex with SMC1, and an ESS Copy Services Domain with two ESS 800s, interconnected by SAN links at the top and by a LAN with two clients at the bottom)

2.2.3 Differences between the DS CLI and the DS GUI


The DS CLI is not capable of managing several domains in a single session. However, if a single DS CLI client machine has network access to all Storage Complexes, then that client could issue concurrent DS CLI commands to each complex. Each complex would be managed by a separate DS CLI session. The DS CLI can be script driven, and of course scripts can be saved. So by using the DS CLI, you can achieve automation.

The DS GUI is accessed via a Web browser. The DS GUI is not able to save tasks as we do on the ESS. Thus, you cannot use the DS GUI to initiate a series of saved tasks. To do this we need to use the DS CLI.

If you wish to use a single GUI to manage multiple Storage Complexes, then you need to define in the GUI all of the Storage Complexes. This allows you to manage FlashCopy or Remote Mirror and Copy on, or between, every Storage Unit in every defined and accessible Storage Complex.

When you look at the structure in Figure 2-3 on page 14, you can see that you need a working HMC in every DS8000 Storage Complex to communicate with the systems in that complex. For an inter-Storage Complex Remote Mirror and Copy, the DS GUI establishes sessions to both the source and target Storage Complexes to show all paths and LUNs. If no management console is available at the remote Storage Complex, you cannot select this Storage Complex as a target in the GUI.

The DS CLI, on the other hand, requires a connection only to the source system's HMC to establish Remote Mirror and Copy relationships, because all path and pair establishment is done by connecting to the source machine with the DS CLI. It is possible to have two HMCs in a DS8000 Storage Complex for the purpose of redundancy.
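To illustrate the difference, the following sketch establishes a Metro Mirror pair by issuing a single-shot DS CLI command against the source system's HMC only; the HMC address, Storage Image IDs, and volume pair are hypothetical:

dscli -hmc1 10.0.0.1 -user csoper -passwd mypasswd mkpprc -dev IBM.2107-7500001 -remotedev IBM.2107-7500002 -type mmir 1000:2000

No connection to the remote Storage Complex's HMC is needed for this command to succeed, whereas the DS GUI would require both HMCs to be defined and reachable.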


Chapter 3.

Licensing
In this chapter we describe how the licensing functions for Copy Services for the DS8000 Series are arranged.


3.1 Licenses
All DS8000 Series machines must have an Operating Environment License (OEL) for the total storage installed, as defined in gross decimal TB. Licenses are also required for the use of Copy Services functions. Licensed functions require the selection of a DS8000 series feature number (IBM 2107) and the acquisition of DS8000 Series Function Authorization (IBM 2244) feature numbers:

- The 2107 licensed function indicator feature number enables technical activation of the function, subject to the client applying an activation code made available by IBM.
- The 2244 function authorization feature numbers establish the extent of IBM authorization for that function on the 2107 machine for which it was acquired.

Table 3-1 lists the DS8000 series feature numbers and corresponding DS8000 Series Function Authorization feature numbers.
Table 3-1 DS8000 licensed functions

Licensed Function             IBM 2107 Indicator   IBM 2244 Function Authorization
                              Feature Number       Models and Feature Numbers
Operating environment         0700                 Model OEL 70xx
FICON/ESCON Attachment        0702                 Model OEL 7090
Database Protection           0708                 Model OEL 7080
Point in time copy            0720                 Model PTC 72xx
Point in time copy Add-on     0723                 Model PTC 72xx
FlashCopy SE                  0730                 Model PTC 73xx
FlashCopy SE Add-on           0733                 Model PTC 73xx
Metro/Global Mirror (3-site)  0742                 Model RMC 74xx
Metro Mirror                  0744                 Model RMC 74xx
Global Mirror                 0746                 Model RMC 74xx
Metro Mirror Add-on           0754                 Model RMC 75xx
Global Mirror Add-on          0756                 Model RMC 75xx
Remote Mirror for z/OS        0760                 Model RMZ 76xx
Parallel access volumes       0780                 Model PAV 78xx
HyperPAV                      0782                 Model PAV 7899

Note: For the DS8000 Turbo models 931, 932 and 9B2, the Metro Mirror and Global Mirror features are now licensed separately as indicated in Table 3-1. For the former 92X and 9A2 models, support for Metro Mirror and Global Mirror continues to be provided with 2244 Model RMC features 740x and 7410, and indicator feature 0740.


For the Copy Services, in addition to the basic licenses such as for Metro Mirror, Global Mirror, or Point-in-Time Copy/FlashCopy, there are also so-called Add-on license features. Add-on license features are cheaper than the complementary basic license feature. An Add-on can only be specified when the complementary basic feature exists. The condition for this is that the capacity licensed by Add-on features must not exceed the capacity licensed by the corresponding basic feature.

The license for Space Efficient FlashCopy, FlashCopy SE (SE), does not require the ordinary FlashCopy (PTC) license. As with ordinary FlashCopy, FlashCopy SE is licensed in tiers by the gross amount of TB installed. FlashCopy (PTC) and FlashCopy SE can be complementary licenses: a client who wants to add a 20 TB FlashCopy SE license to a DS8000 that already has a 20 TB FlashCopy license can use the 20 TB FlashCopy SE Add-on license (2x #7333) for this.

The Remote Mirror and Copy (RMC) license on the older models 92x/9A2 was replaced by the Metro Mirror (MM) and Global Mirror (GM) licenses for the newer models. Models with the older type of license can replicate to models with the newer type and vice versa. Metro Mirror (MM) and Global Mirror (GM) can also be complementary features.

The following is the breakdown of the DS8000 Series Function Authorization feature numbers (basic licenses listed only, no Add-ons):

OEL: Operating Environment License:
   OEL - inactive (7000), 1 TB (7001), 5 TB (7002), 10 TB (7003), 25 TB (7004), 50 TB (7005), 100 TB (7010), 200 TB (7015)

FICON: z/OS attachment:
   FICON Attachment (7090)

PAV: Parallel Access Volumes:
   PAV - inactive (7800), 1 TB (7801), 5 TB (7802), 10 TB (7803), 25 TB (7804), 50 TB (7805), 100 TB (7810), 200 TB (7815)

The following licenses apply to Copy Services:

PTC: Point-in-Time Copy, also known as FlashCopy:
   PTC - inactive (7200), 1 TB (7201), 5 TB (7202), 10 TB (7203), 25 TB (7204), 50 TB (7205), 100 TB (7210), 200 TB (7215)

SE: FlashCopy SE, also known as Space Efficient FlashCopy:
   SE - inactive (7300), 1 TB (7301), 5 TB (7302), 10 TB (7303), 25 TB (7304), 50 TB (7305), 100 TB (7310), 200 TB (7315)

MGM: 3-Site Metro/Global Mirror:
   MGM - inactive (7420), 1 TB (7421), 5 TB (7422), 10 TB (7423), 25 TB (7424), 50 TB (7425), 100 TB (7430), 200 TB (7435)

MM: Metro Mirror:
   MM - inactive (7440), 1 TB (7441), 5 TB (7442), 10 TB (7443), 25 TB (7444), 50 TB (7445), 100 TB (7450), 200 TB (7455)

GM: Global Mirror:
   GM - inactive (7460), 1 TB (7461), 5 TB (7462), 10 TB (7463), 25 TB (7464), 50 TB (7465), 100 TB (7470), 200 TB (7475)

RMZ: Remote Mirror and Copy for z/OS, also known as z/OS Global Mirror (or XRC):
   RMZ - inactive (7600), 1 TB (7601), 5 TB (7602), 10 TB (7603), 25 TB (7604), 50 TB (7605), 100 TB (7610), 200 TB (7615)


Additional information for Metro/Global Mirror licensing


For the 3-site Metro/Global Mirror solution, the following licensed functions are required:

1. For the DS8000 Turbo Models 931, 932, and 9B2:
   - Site A: A Metro/Global Mirror (MGM) license and a Metro Mirror (MM) license (additionally, a Global Mirror Add-on (GM Add) license is required if Site B goes away and you want to resync between Site A and Site C)
   - Site B: A Metro/Global Mirror (MGM) license, a Metro Mirror (MM) license, and a Global Mirror Add-on (GM Add) license
   - Site C: A Metro/Global Mirror (MGM) license, a Global Mirror (GM) license, and a Point-in-Time Copy (PTC) license

2. For the DS8000 Models 921, 922, and 9A2:
   - Site A: A Metro/Global Mirror (MGM) license and a Remote Mirror and Copy (RMC) license
   - Site B: A Metro/Global Mirror (MGM) license and a Remote Mirror and Copy (RMC) license
   - Site C: A Metro/Global Mirror (MGM) license, a Remote Mirror and Copy (RMC) license, and a Point-in-Time Copy (PTC) license

DS Storage Manager GUI support


The DS Storage Manager GUI panels used to define the licensed functions and apply the activation codes reflect the various licenses as illustrated in Figure 3-1 for the Apply Activation codes panel.

Figure 3-1 Apply Activation Codes panel


3.2 Authorized level


In this section we consider the authorized level in terms of licensing and charges incurred.

3.2.1 Licensing
All Copy Services functions require licensing to be activated. This means that the customer must purchase a license for the appropriate level of storage for each Copy Services function that is required. They then have to install the license key generated using the Disk Storage Feature Activation (DSFA) application, which is at the following Web site:

http://www.ibm.com/storage/dsfa

Another consideration relates to the authorized level required. In most cases the total capacity installed must be licensed. This is the total capacity in decimal TB equal to or greater than the actual capacity installed, including all RAID parity disks and hot spares.

An exception might be where a mix of both System z and open systems hosts are using the same storage server. In this case it is possible to acquire Copy Services licenses for just the capacity formatted for CKD, or just the capacity formatted for FB storage. This implies that the licensed Copy Services function is required only for open systems hosts, or only for System z hosts. If, however, a Copy Services function is required for both CKD and FB, then that Copy Services license must match the total configured capacity of the machine. The authorization level is maintained by the licensed code in the controller and the DSFA application.

For example, if the actual capacity is 15 TB, used for both CKD and FB, the scope for the OEL is ALL and the installed OEL must be at least 15 TB. If the client has split storage allocation, with 8 TB for CKD, and only CKD storage is using FlashCopy, then the scope type for the PTC license can be set to CKD. Now the PTC license can be purchased at the CKD level of 8 TB. However, this means that no open systems hosts can use the FlashCopy function.

The actual ordered level of any Copy Services license can be any level above that required or installed. Licenses can be added and have their capacities increased non-disruptively on an installed system.

Important: A decrease in the scope or the capacity of a license requires a disruptive IML of the DS8000 Storage Facility Image.
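As a sketch of the activation sequence described above, the DSFA-generated key can be applied and verified with the DS CLI; the key string and Storage Image ID below are placeholders, not real values:

dscli> applykey -key 1234-5678-9ABC-DEF0-1234-5678-9ABC-DEF0 IBM.2107-7512341
dscli> lskey IBM.2107-7512341

The lskey output lists the activated licensed functions together with their authorized capacities and scopes.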

3.2.2 Charging example


A client can choose to purchase any or all DS8000 licenses at an authorization level at or above the installed raw disk capacity. It might be more cost effective to pre-install authorization to use more than the currently installed storage capacity. A simple rule for charges is to remember that the required authorization level depends upon the license scope:

- If the license scope is FB, the authorization level must be equal to or greater than the total amount of physical capacity within the system that will be logically configured as FB.
- If the license scope is CKD, the authorization level must be equal to or greater than the total amount of physical capacity within the system that will be logically configured as CKD.
- If the license scope is ALL, the authorization level must be equal to or greater than the total system physical capacity.

The dual Storage Facility Image (SFI) version of the DS8300 (the model 9Bx) uses the same logic, in this case added between the two SFIs. So all of the above apply to each SFI separately. It is possible to license Copy Services for a single scope in one SFI. Here is an explanation of this scenario:

- If you activate a licensed function on only SFI 1 with a license scope of FB: The authorization level must be equal to or greater than the total amount of physical capacity within SFI 1 that will be logically configured as FB.
- If you activate a licensed function on SFI 1 and SFI 2, both with a license scope of CKD: The authorization level must be equal to or greater than the total amount of physical capacity within both SFI 1 and SFI 2 that will be logically configured as CKD.
- If you activate a licensed function on SFI 1 with a license scope of FB and on SFI 2 with a license scope of ALL: The authorization level must be equal to or greater than the total amount of physical capacity within SFI 1 that will be logically configured as FB and the total physical capacity within SFI 2.


Part 2


Interfaces
In this part of the book, we discuss the interfaces available to manage the Copy Services features of the DS8000. We give you an overview of the interfaces, describe the options available, discuss configuration considerations, and provide some usage examples of the interfaces.


Chapter 4.

DS Storage Manager
The DS Storage Manager provides a graphical user interface (GUI) to configure the DS8000 and manage DS8000 Copy Services. With DS8000 Release 3, the DS Storage Manager GUI (DS GUI) is invoked from SSPC.


4.1 Accessing the DS GUI


How you can access the DS GUI depends on what version of the microcode you have on the DS8000. When you install a new DS8000 shipped with the R3 microcode (Licensed Machine Code 5.30xx.xx), the back-end part of DS GUI resides in the DS Storage Manager at the HMC, while the front-end part of the DS GUI is invoked at the SSPC from TotalStorage Productivity Center (TPC) and runs in TPC GUI. The SSPC is an external System x machine with preinstalled software, including IBM TotalStorage Productivity Center (TPC) Basic Edition. Utilizing TPC Basic Edition software, the SSPC extends the capabilities available through the DS GUI.

4.2 Access capabilities


With DS8000 Release 3 installed, you can access the DS GUI in the following ways:

- Via the System Storage Productivity Center (SSPC)
- From a TPC on a workstation connected to the HMC
- From a browser connected to SSPC or TPC on any server
- Using Microsoft Windows Remote Desktop to the SSPC
- Directly, from a Web browser at the HMC

These different access capabilities are depicted in Figure 4-1. In our illustration, the SSPC connects to two HMCs managing two DS8000 Storage Complexes.

Figure 4-1 Accessing the DS GUI for new DS8000 R3 installation (browsers and the TPC GUI connect via TCP/IP or Remote Desktop to the SSPC, whose DS GUI front end communicates with the DS GUI back end on each DS8000 HMC; the preinstalled browser on the HMC can also access the DS GUI directly)


When you upgrade an existing DS8000 to the R3 microcode (Licensed Machine Code 5.30xx.xx), or for a DS8000 with a previous version of the microcode, the back-end part of DS GUI also resides in DS Storage Manager at the HMC, while the front-end part of the DS GUI can run in a web browser on any workstation having a TCP/IP connection to the HMC. It also can run in a browser started directly at the HMC. Refer to Figure 4-2 for an illustration.

Figure 4-2 Accessing the DS GUI on a DS8000 upgraded to Release 3 (a browser running the DS GUI front end connects via TCP/IP to the DS GUI back end on each DS8000 HMC, and the preinstalled browser at the HMC can also be used directly; this option is possible only with a DS8000 upgraded to R3)

Refer to the IBM System Storage DS8000 Series: Architecture and Implementation, SG24-6786 for additional details on how to invoke the DS GUI. This publication also explains how to initially configure the storage system. Refer to the different specific Copy Services parts in this book for examples and illustrations on how to use the DS GUI to define and manage those Copy Services.


Chapter 5.

DS Command-Line Interface
This chapter provides an introduction to the DS Command-Line Interface (DS CLI), which can be used to configure and to administer the DS storage system. We explain how you can use the DS CLI to manage Copy Services relationships. In this chapter we describe:

- System requirements
- Command modes
- The commands
- User assistance
- Return codes

In this chapter, we discuss the use of the DS CLI for Copy Services configuration in the DS8000. For storage configuration of the DS8000 using the DS CLI, refer to the following books:

- IBM System Storage DS8000: Command-Line Interface User's Guide, SC26-7916
- IBM System Storage DS8000 Series: Architecture and Implementation, SG24-6786


5.1 Introduction and functionality


The IBM System Storage DS Command-Line Interface enables open systems hosts to invoke and manage FlashCopy and Remote Mirror and Copy functions through batch processes and scripts. The command-line interface provides a full-function command set that allows you to check your Storage Unit configuration and perform specific application functions when necessary.

Before you can use the DS CLI commands, you must ensure that the following requirements are met:

- The DS Storage Manager must have been installed as a Full Management Console installation management type.
- Your Storage Unit must be configured (part of the DS Storage Manager post-installation instructions).
- You must activate your license activation codes before you can use the DS CLI commands associated with the Copy Services functions.

The following list highlights a few of the specific types of functions that you can perform with the DS Command-Line Interface:

- Create user IDs that can be used with the GUI and the DS CLI.
- Manage user ID passwords.
- Install activation keys for licensed features.
- Manage Storage Complexes and Storage Units.
- Configure and manage Storage Facility Images.
- Create and delete RAID arrays, Ranks, and Extent Pools.
- Create and delete logical volumes.
- Manage host access to volumes.
- Check the current Copy Services configuration that is used by the Storage Unit.
- Create, modify, or delete Copy Services configuration settings.

5.2 Supported operating systems for the DS CLI


The DS Command-Line Interface can be installed on these operating systems:

- AIX 5.1, 5.2, 5.3
- HP-UX 11.0, 11i
- HP Tru64 5.1, 5.1A
- Linux (Red Hat 3.0 Advanced Server (AS) and Enterprise Server (ES))
- SUSE Linux SLES 8, SLES 9, SLES 10, SUSE 8, SUSE 9
- Novell NetWare 6.5
- System i (i5/OS 5.3)
- Sun Solaris 7, 8, 9
- Windows 2000, Windows Datacenter, Windows 2003, Windows XP

Note: The DS CLI cannot be installed on a Windows 64-bit operating system.

Important: For the most recent information about currently supported operating systems, refer to the IBM System Storage DS8000 Information Center Web site: http://publib.boulder.ibm.com/infocenter/ds8000ic/index.jsp


The DS CLI is supplied and installed via a CD that ships with the machine. The installation does not require a reboot of the open systems host. The DS CLI requires Java 1.4.1 or later. Java 1.4.2 for Windows, AIX, and Linux is supplied on the CD. Many hosts may already have a suitable level of Java installed. The installation program checks for this requirement during the installation process and does not install the DS CLI if you do not have the correct version of Java.

The installation process can be performed via a shell, such as the bash or Korn shell, or the Windows command prompt, or via a GUI interface. If performed via a shell, it can be performed silently using a profile file. The installation process also installs software that allows the DS CLI to be completely uninstalled should it no longer be required.

If you need any assistance to install the DS CLI, refer to the publication IBM System Storage DS8000 Command-Line Interface User's Guide, SC26-7916.

5.3 User accounts


The admin account is set up automatically at the time of installation. It is accessed using the user name admin and the default password admin. This password is temporary and you must change the password before you can use any of the other functions.

There are seven groups the administrator can assign to a user. The groups and the associated functions allowed by the assignment are as follows:

- admin: Allow access to all storage management console server service methods and all Storage Image resources.
- op_volume: Allow access to service methods and resources that relate to logical volumes, hosts, host ports, logical subsystems, and Volume Groups, excluding security methods.
- op_storage: Allow access to physical configuration service methods and resources, including Storage Complex, Storage Image, Rank, Array, and Extent Pool objects.
- op_copy_services: Allow access to all Copy Services service methods and resources, excluding security methods.
- service: Monitor authority, plus access to all management console server service methods and resources, such as performing code loads and retrieving problem logs.
- monitor: Allow access to list and show commands. It provides access to all read-only, nonsecurity management console server service methods and resources.
- no access: Does not allow access to any service method or Storage Image resources. By default, this user group is assigned to any user account in the security repository that is not associated with any other user group.

Note: A user can be assigned to more than one user group.

Important: When a new user is created, the password that was initially set will be automatically expired and must be changed when the user first logs on.
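For example, an administrator could create a dedicated Copy Services operator and then verify the user list; the user name, group, and temporary password in this sketch are placeholders:

dscli> mkuser -group op_copy_services -pw tempw0rd csoper
dscli> lsuser

At first logon, csoper is prompted to replace the temporary password, as described above.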

5.4 DS CLI profile


You can create default settings for the command-line interface by defining one or more profiles on the system. For example, you can specify the management console (MC) for the session, specify the output format for list commands, specify the number of rows per page in
the command-line output, and specify that a banner is included with the command-line output. If a user enters a value with a command that is different from a value in the profile, the command overrides the profile.

You have several options for using profile files:

- You can modify the default profile. The default profile, dscli.profile, is installed in the profile directory with the software, for example, c:\Program Files\IBM\DSCLI\profile\dscli.profile for the Windows platform and /opt/ibm/dscli/profile/dscli.profile for UNIX and Linux platforms.
- You can make a personal default profile by making a copy of the system default profile as <user_home>/dscli/profile/dscli.profile. The home directory <user_home> is designated as follows:
  Windows system: C:\Documents and Settings\<user_name>
  UNIX/Linux system: /home/<user_name>
- You can create a profile for the Storage Unit operations. Save the profile in the user profile directory. For example:
  c:\Program Files\IBM\DSCLI\profile\operation_name1
  c:\Program Files\IBM\DSCLI\profile\operation_name2

Attention: The default profile file created when you install the DS CLI will potentially be replaced every time you install a new version of the DS CLI. It is a better practice to open the default profile and then save it as a new file. You can then create multiple profiles and reference the relevant profile file using the -cfg parameter.

These profile files can be specified using the DS CLI command parameter -cfg <profile_name>. If the -cfg file is not specified, the user's default profile is used. If a user's profile does not exist, the system default profile is used.

Note: A password file generated using the managepwfile command is located in the following directory: user_home_directory/dscli/profile/security/security.dat.

When you install the command-line interface software, the default profile is installed in the profile directory with the software. The file name is dscli.profile, for example, c:\Program Files\IBM\DSCLI\profile\dscli.profile. The available variables, detailed descriptions, and information about how to handle them can be found in the publication IBM System Storage DS8000 Command-Line Interface User's Guide, SC26-7916.
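The following is a minimal sketch of a personal profile; the HMC address and Storage Image IDs are placeholders, and only a few of the available variables are shown:

# <user_home>/dscli/profile/dscli.profile (excerpt)
hmc1:        10.0.0.1
username:    csoper
devid:       IBM.2107-7512341
remotedevid: IBM.2107-7512342
banner:      off
format:      default

With devid and remotedevid set in the profile, Copy Services commands such as mkpprc can typically be entered without repeating the -dev and -remotedev parameters.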

5.5 Command structure


This is a description of the components and structure of a command-line interface command. A command-line interface command consists of one to four types of components, arranged in the following order:

1. The command name: Specifies the task that the command-line interface is to perform.
2. Flags: Modify the command. They provide additional information that directs the command-line interface to perform the command task in a specific way.
3. Flag parameters: Provide information that is required to implement the command modification that is specified by a flag.

4. Command parameters: Provide basic information that is necessary to perform the command task. When a command parameter is required, it is always the last component of the command, and it is not preceded by a flag.
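To illustrate these components, consider the following command; the Storage Image ID and volume range are hypothetical:

dscli> lspprc -dev IBM.2107-7512341 -l 0100-010F

Here lspprc is the command name, -dev and -l are flags, IBM.2107-7512341 is the parameter of the -dev flag (-l takes no parameter), and the volume range 0100-010F is the command parameter, given last and not preceded by a flag.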

5.6 Copy Services commands


We can classify the commands available with the DS CLI as follows:

1. ls commands: Give you brief information about the Copy Services state. See Table 5-1.
2. show commands: Give you detailed information about the Copy Services state. See Table 5-2.
3. mk commands: Used to create relationships. See Table 5-3.
4. rm commands: Used to remove relationships. See Table 5-4 on page 36.
5. options commands: Used to modify some options in relationships that were previously created. See Table 5-5 on page 36.
Table 5-1 List commands

Command            Description
lsflash            Lists of FlashCopy relationships.
lsremoteflash      Lists of Remote FlashCopy relationships (inband).
lssestg            Lists Space Efficient storage repositories.
lspprc             Lists of Remote Mirror and Copy volumes relationships.
lspprcpath         Lists of existing Remote Mirror and Copy path definitions.
lsavailpprcport    Lists available ports that can be defined as Remote Mirror and Copy paths.
lssession          Displays a list of Global Mirror sessions for an LSS.

Table 5-2 List detailed commands

Command        Description
showgmir       Displays detailed properties and performance metrics for Global Mirror.
showgmircg     Displays Consistency Group status for a Global Mirror session.
showsestg      Displays the Space Efficient storage properties of an extent pool.
showgmiroos    Displays the number of unsynchronized (out of sync) tracks for a Global Mirror session.

Table 5-3 Creation commands

Command            Description
mkflash            Initiates a Point-in-Time Copy from a source to a target volume.
mkremoteflash      Initiates a remote copy through a Remote Mirror and Copy relationship.
mksestg            Creates a Space Efficient storage repository in an extent pool.
mkgmir             Starts Global Mirror for a session.
mkpprc             Establishes a Remote Mirror and Copy relationship.
mkpprcpath         Establishes or replaces a Remote Mirror and Copy path over a Fibre Channel.
mkesconpprcpath    Creates a Remote Mirror and Copy path over an ESCON connection.
mksession          Opens a Global Mirror session.

Table 5-4 Deletion commands

Command          Description
rmflash          Removes a relationship between FlashCopy pairs.
rmremoteflash    Removes a relationship between remote FlashCopy pairs.
rmsestg          Removes the Space Efficient storage repository in an extent pool.
rmgmir           Removes Global Mirror for the specified session.
rmpprc           Removes a Remote Mirror and Copy relationship.
rmpprcpath       Removes a Remote Mirror and Copy path.
rmsession        Closes an existing Global Mirror session.

Table 5-5 Options commands

Command                     Description
commitflash                 Commits data to a target volume to form a consistency between the source and target.
resyncflash                 Incremental FlashCopy process.
reverseflash                Reverses the direction of a FlashCopy pair.
revertflash                 Overwrites new data with data saved at the last consistency formation.
setflashrevertible          Modifies a remote FlashCopy pair that is part of a Global Mirror to revertible.
commitremoteflash           Commits data to a target volume to form a consistency (inband).
resyncremoteflash           Incremental FlashCopy process (inband).
revertremoteflash           Overwrites new data with data saved at the last consistency formation (inband).
setremoteflashrevertible    Modifies a remote FlashCopy pair that is part of a Global Mirror to revertible (inband).
unfreezeflash               Resets a FlashCopy Consistency Group.
failoverpprc                Changes a secondary device into a primary suspended and keeps the primary in its current state.
failbackpprc                Usually used after a failoverpprc to reverse the direction of the synchronization.
freezepprc                  Creates a new Remote Mirror and Copy Consistency Group.
pausepprc                   Pauses an existing Remote Mirror and Copy volume pair relationship.
resumepprc                  Resumes a Remote Mirror and Copy relationship for a volume pair.
unfreezepprc                Thaws an existing Remote Mirror and Copy Consistency Group.
pausegmir                   Pauses Global Mirror processing for a session.
resumegmir                  Resumes Global Mirror processing for a session.
chsession                   Allows you to modify a Global Mirror session.
chsestg                     Changes the Space Efficient storage repository attributes.


Important: The Remote Mirror and Copy commands are asynchronous. This means that a command is issued to the DS CLI server and if it is accepted successfully, you receive a successful completion code; however, background activity might still be occurring. For example, a Metro Mirror pair will take some time to establish, because the tracks need to be copied across from the primary to secondary device. In this example you should check the state of the volumes using the lspprc command until the pairs have reached the Duplex state.

Note: The mkflash and mkpprc commands offer the -wait flag, which delays the command response until copy complete status is achieved. You can choose to use this flag if you want to be sure of successful completion.
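For example, a Metro Mirror pair could be established and then polled until synchronization completes; the Storage Image ID and volume pair in this sketch are placeholders:

dscli> mkpprc -remotedev IBM.2107-7520781 -type mmir 4600:4730
dscli> lspprc 4600
# repeat lspprc until the reported state changes from Copy Pending to Full Duplex

Alternatively, mkpprc could be issued with the -wait flag so that the command does not return until the copy is complete.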

5.7 Using the DS CLI application


You have to log into the DS CLI application to use the command modes. There are three command modes for the DS CLI:

- Single-shot mode
- Interactive mode
- Script mode

5.7.1 Single-shot mode


Use the DS CLI single-shot command mode if you want to issue an occasional command but do not want to keep a history of the commands that you have issued. You must supply the login information and the command that you want to process at the same time. Follow these steps to use the single-shot mode:

1. Enter:
dscli -hmc1 <hostname or ip address> -user <adm user> -passwd <pwd> <command>

2. Wait for the command to process and display the end results. Example 5-1 shows the use of the single-shot command mode.
Example 5-1 Single-shot command mode

C:\Program Files\ibm\dscli>dscli -hmc1 10.10.10.1 -user admin -passwd adminpwd lsuser
Date/Time: 7. November 2007 14:38:27 CET IBM DSCLI Version: X.X.X.X
Name   Group  State
=====================
admin  admin  locked
admin  admin  active
exit status of dscli = 0

Note: When typing the command, you can use the host name or the IP address of the DS HMC.


5.7.2 Script command mode


Use the DS CLI script command mode if you want to use a sequence of DS CLI commands. Administrators can use this mode to create automated processes, for example, establishing Remote Mirror and Copy relationships for volume pairs. You can issue the DS CLI script from the command prompt at the same time that you provide your login information: 1. Enter:
dscli -hmc1 <hostname ip address> -user <user name> -passwd <pwd> -script <full path of script file>

2. Wait for the script to process and provide a report regarding the success or failure of the process. Example 5-2 shows the use of the script command mode.
Example 5-2 Script command mode

C:\Program Files\ibm\dscli>dscli -hmc1 10.10.10.1 -user admin -passwd adminpwd -script c:\test.cli
Date/Time: 7. November 2007 14:42:17 CET IBM DSCLI Version: X.X.X.X DS: IBM.1750-1367890
ID    WWPN              State  Type              topo     portgrp
===============================================================
I0000 500507630E01FC00  Online Fibre Channel-LW  SCSI-FCP 0
I0001 500507630E03FC00  Online Fibre Channel-LW           0
I0002 500507630E05FC00  Online Fibre Channel-LW  SCSI-FCP 0
I0003 500507630E07FC00  Online Fibre Channel-LW           0
I0100 500507630E81FC00  Online Fibre Channel-LW  SCSI-FCP 0
I0101 500507630E83FC00  Online Fibre Channel-LW           0
I0102 500507630E85FC00  Online Fibre Channel-LW  SCSI-FCP 0
I0103 500507630E87FC00  Online Fibre Channel-LW           0
Date/Time: 24 de Maio de 2005 14h40min36s BRT IBM DSCLI Version: X.X.X.X
Name              ID                Storage Unit Model WWNN             State  ESSNet
============================================================================
IBM.1750-1312345  IBM.1750-1312345               511   500507630EFFFC6F Online Enabled
exit status of dscli = 0

Important: The DS CLI script can contain only DS CLI commands. Use of shell commands results in process failure. You can add comments to the scripts prefixed by the number sign (#). When typing the command, you can use the host name or the IP address of the DS HMC.
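As an illustration, a script file such as c:\test.cli might contain a sequence like the following sketch, which establishes a Remote Mirror and Copy path and pairs and then lists their status; the WWNN, LSS numbers, volume ranges, and Storage Image IDs are placeholders:

# Establish a Remote Mirror and Copy path, create the pairs, and check the result
mkpprcpath -remotedev IBM.2107-7520781 -remotewwnn 5005076303FFC1A5 -srclss 46 -tgtlss 47 I0003:I0043
mkpprc -remotedev IBM.2107-7520781 -type mmir 4600-4603:4730-4733
lspprc 4600-4603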


5.7.3 Interactive mode


Use the DS CLI interactive command mode when you have multiple transactions to process that cannot be incorporated into a script. The interactive command mode provides a history function that makes repeating or checking prior command usage easy to do.

1. Log on to the DS CLI application at the directory where it is installed.
2. Provide the information that is requested by the information prompts. The information prompts might not appear if you have provided this information in your profile file. The command prompt switches to a dscli command prompt.
3. Begin using the DS CLI commands and parameters. You are not required to begin each command with dscli because this prefix is provided by the dscli command prompt.

Example 5-3 shows the use of interactive command mode.
Example 5-3 Interactive command mode

C:\Program Files\ibm\dscli>dscli
Enter your username: admin
Enter your password:
Date/Time: 7. November 2007 14:42:17 CET IBM DSCLI Version: X.X.X.X DS: IBM.1750-1312345
dscli> lsarraysite
Date/Time: 7. November 2007 15:05:57 CET IBM DSCLI Version: X.X.X.X DS: IBM.1750-1312345
arsite DA Pair dkcap (Decimal GB) State    Array
================================================
S1     0       146.0             Assigned A0
S2     0       146.0             Assigned A0
S3     0       146.0             Assigned A1
S4     0       146.0             Assigned A1
dscli> lssi
Date/Time: 7. November 2007 15:25:09 CET IBM DSCLI Version: X.X.X.X
Name              ID                Storage Unit Model WWNN             State  ESSNet
============================================================================
IBM.1750-1312345  IBM.1750-1312345               511   500507630EFFFC6F Online Enabled
dscli> quit
exit status of dscli = 0

Note: When typing the command, you can use the host name or the IP address of the DS HMC.

5.8 Return codes


When the DS CLI is exited, the exit status code is provided. This is effectively a return code. If DS CLI commands are issued as separate commands (rather than using script mode), then a return code will be presented for every command. If a DS CLI command fails (for instance, due to a syntax error or the use of an incorrect password), then a failure reason and a return code will be presented. Standard techniques to collect and analyze return codes can be used.


The return codes used by the DS CLI are shown in Table 5-6.
Table 5-6 Return code table

Return code   Category               Description
0             Success                The command was successful.
2             Syntax error           There is a syntax error in the command.
3             Connection error       There was a connection problem to the server.
4             Server error           The DS CLI server had an error.
5             Authentication error   Password or user ID details are incorrect.
6             Application error      The DS CLI application had an error.
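For example, on a UNIX or Linux host a simple wrapper script can test the exit status of a single-shot invocation; the installation path and profile name in this sketch are placeholders:

#!/bin/sh
# Run a DS CLI command using a prepared profile and act on its return code
/opt/ibm/dscli/dscli -cfg /opt/ibm/dscli/profile/ds8k.profile lspprc 4600-4603
rc=$?
if [ $rc -ne 0 ]; then
    echo "DS CLI command failed with return code $rc" >&2
    exit $rc
fi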

5.9 User assistance


The DS CLI is designed to include several forms of user assistance. The main form of user assistance is via the help command. Examples of usage include:

- help lists all available DS CLI commands.
- help -s lists all DS CLI commands with brief descriptions of each.
- help -l lists all DS CLI commands with syntax information.

If the user is interested in more details about a specific DS CLI command, they can use -l (long) or -s (short) against a specific command. In Example 5-4, the -s parameter is used to get a short description of the mkflash command's purpose.
Example 5-4 Use of the help -s command

dscli> help -s mkflash
mkflash    The mkflash command initiates a point-in-time copy from source volumes to target volumes.

In Example 5-5, the -l parameter is used to get a list of all parameters that can be used with the mkflash command.
Example 5-5 Use of the help -l command

dscli> help -l mkflash
mkflash [ { -help|-h|-? } ] [-dev storage_image_ID] [-tgtpprc] [-tgtoffline]
[-tgtinhibit] [-freeze] [-record] [-persist] [-nocp] [-wait]
[-seqnum Flash_Sequence_Num] SourceVolumeID:TargetVolumeID ... | -

Man pages
A man page is available for every DS CLI command. Man pages are most commonly seen in UNIX-based operating systems to give information about command capabilities. This information can be displayed by issuing the relevant command followed by -h, -help, or -?, for example:

dscli> mkflash -help
or
dscli> help mkflash


5.10 Usage examples


It is not the intent of this section to list every DS CLI command and its syntax. If you need to see a list of all the available commands, or require assistance using DS CLI commands, you are better served by reading the publication IBM System Storage DS8000 Command-Line Interface Users Guide, SC26-7916, or you can use the online help. Example 5-6 shows some of the common commands used on a DS8000.
Example 5-6 Examples of DS CLI commands

# The following command establishes flashcopy pairs
dscli> mkflash 4600-4602:4721-4723
Date/Time: 7. November 2007 15:02:24 CET IBM DSCLI Version: 0.0.0.0 DS: IBM.2107-7503461
CMUC00137I mkflash: FlashCopy pair 4600:4721 successfully created.
CMUC00137I mkflash: FlashCopy pair 4601:4722 successfully created.
CMUC00137I mkflash: FlashCopy pair 4602:4723 successfully created.

# The following command establishes Remote Mirror and copy paths
dscli> mkpprcpath -remotedev IBM.2107-7520781 -remotewwnn 5005076303FFC1A5 -srclss 46 -tgtlss 47 I0003:I0043 I0101:I0501
Date/Time: 7. November 2007 15:25:09 CET IBM DSCLI Version: 0.0.0.0 DS: IBM.2107-7503461
CMUC00149I mkpprcpath: Remote Mirror and Copy path 46:47 successfully established.

# The following command establishes Remote Mirror and copy pairs
dscli> mkpprc -remotedev IBM.2107-7520781 -type gcp 4600-4601:4730-4731
Date/Time: 7. November 2007 15:30:09 CET IBM DSCLI Version: 0.0.0.0 DS: IBM.2107-7503461
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 4600:4730 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 4601:4731 successfully created.
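After establishing relationships such as these, the corresponding list commands can be used for verification; this short sketch reuses the same (hypothetical) volume IDs:

dscli> lsflash 4600-4602
dscli> lspprc -l 4600-4601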


Chapter 6.

IBM TotalStorage Productivity Center for Replication


The IBM TotalStorage Productivity Center for Replication (TPC-R), or TPC for Replication, is an automated solution providing a management front-end to Copy Services. TPC for Replication can help manage Copy Services for the SAN Volume Controller (SVC), as well as for the ESS 800 when using FCP links for PPRC paths between ESS 800s, or between an ESS 800 and a DS6000 or DS8000. This also applies to FlashCopy.

This chapter describes TPC-R in the context of the DS8000 Copy Services. You must refer to the following manuals when implementing and managing TPC for Replication configurations:

- IBM TotalStorage Productivity Center for Replication User's Guide, SC32-0103
- IBM TotalStorage Productivity Center for Replication Installation and Configuration Guide, SC32-0102
- IBM TotalStorage Productivity Center for Replication Command-Line Interface User's Guide, SC32-0104
- IBM TotalStorage Productivity Center for Replication Problem Determination Guide, GI11-8060

Note: The IBM TotalStorage Productivity Center for Replication is completely independent from TPC for Disk, Data, and Fabric.


6.1 IBM TotalStorage Productivity Center


The IBM TotalStorage Productivity Center (TPC) is a suite of software products. It is designed to support customers in monitoring and managing their storage environments. Design and development emphasis for TPC is on scalability and standards. The approach based on open standards allows TPC to manage any equipment or solution implementation that follows the same open standards. Figure 6-1 shows a split set of products, with three components based on former Tivoli products and a package with four components designed and developed to facilitate storage replication solutions.

Figure 6-1 TPC software products suite (the IBM TotalStorage Productivity Center family: the Standard Edition with the Disk, Data, and Fabric components, and the Replication package with Replication (Base), Replication Two Site BC, Replication for System z, and Replication Three Site BC)

All three Tivoli products provide a single source and single management tool to cover the tasks of the storage manager or network manager in their daily business:

- TPC for Disk is software based on open standard interfaces to query, gather, and collect all available data necessary for performance management.
- TPC for Data focuses on data management and addresses aspects related to information life-cycle management.
- TPC for Fabric is a management tool to monitor and manage a SAN fabric.

TPC for Replication, as a member of the TPC product family, is a standalone package and does not build on anything related to the TPC Standard Edition. It comes in three levels:

- TPC for Replication (5608-TRA), which includes support for:
  - FlashCopy and FlashCopy SE
  - Planned failover and restart (one direction) for MM and GM
- TPC for Replication Two Site Business Continuity (BC) (5608-TRB), which includes support for:
  - FlashCopy and FlashCopy SE
  - Planned and unplanned failover and failback for MM and GM
  - High availability (with two TPC for Replication servers)
- TPC for Replication Three Site Business Continuity (BC) (5608-TRC), which includes support for:
  - FlashCopy and FlashCopy SE
  - Planned and unplanned failover and failback for MM and GM
  - High availability (with two TPC for Replication servers)
  - Metro/Global Mirror for DS8000 failover and failback with incremental resync


TPC for Replication for System z (5698-TPC), which includes support for:

- The same features and supported products as 5608-TRA, but the TPC for Replication server runs on z/OS
- An SMP/E installer
- English and Japanese languages initially
- FICON commands in addition to TCP/IP, but TCP/IP attachment is required (FICON is used only for assigned LSSs; TCP/IP is required to manage the LSS heartbeat, remote LSSs, and SVC)

In the following paragraphs we cover the basic structures and features of TPC for Replication, TPC for Replication Two Site BC and TPC for Replication Three Site BC. This is to explain and discuss Copy Services topics with the DS8000.

6.2 Where we are coming from


Since the advent of Copy Services functions with IBM storage servers, a framework has been required to handle and manage disk storage environments and the various combinations of software, firmware, and hardware used for replication. To ensure and guarantee data consistency at the remote or backup site, it is even more important to provide a framework around Copy Services functions that helps achieve data consistency and ease of management. Other aspects are automation and scalability.

Early solution attempts turned out to have weaknesses that would often surface during actual implementation in real production environments. Tools offered on an as is basis are not suitable for managing production environments. Other solutions are perfectly suited for certain host server platforms but cannot manage other server platforms. This led to the design and implementation of a framework that is platform independent, provides the required production attributes such as scalability, stability, and security, and, as a standard product, also offers the guarantee to be serviced and maintained. Furthermore, the concept has the potential for enhancements and can evolve over time according to user requirements.

TPC for Replication builds on all previous experiences with storage management tools and framework proposals to organize all aspects of Copy Services for disaster recovery solutions. TPC for Replication also addresses the need for other solutions that involve the Copy Services functions available with the DS6000 and DS8000, like data or volume migration, data center movements, or other projects that require copying or moving data between similar devices such as the DS6000, DS8000, and ESS 800.

6.3 What TPC for Replication provides


TPC for Replication is designed to help administrators manage Copy Services. This applies not only to the Copy Services provided by DS6000 and DS8000 but also to Copy Services provided by the ESS 800 and SAN Volume Controller (SVC).


In more detail, here are some highlights of the functions provided by TPC for Replication:

- Manage Copy Services for the IBM System Storage DS6000 and DS8000.
- Extend support for the ESS 800 to manage Copy Services.
- Provide disaster recovery support with failover/failback capability for the ESS 800, DS6000, and DS8000.
- The optional high availability feature allows us to continue replication management even when one TPC for Replication server goes down.

The basic functions of TPC for Replication provide management of:

- FlashCopy
- FlashCopy SE
- Metro Mirror
- Global Mirror
- Metro/Global Mirror

Note that TPC for Replication does not support Global Copy.

Figure 6-2 is a screen capture from the TPC for Replication GUI showing the different session types, or Copy Services functions, supported.

Figure 6-2 Management of Copy Services Functions supported by TPC for Replication

TPC for Replication is designed to simplify the management of Copy Services by:

- Automating administration and configuration of Copy Services functions with wizard-based session and copy set definitions.
- Providing simple operational control of Copy Services tasks, which includes starting, suspending, and resuming Copy Services tasks.
- Offering tools to monitor and manage Copy Services sessions.

Note that TPC for Replication also manages FlashCopy and Metro Mirror for the IBM SAN Volume Controller (SVC).


6.4 Copy Services terminology


Although Copy Services terminology is discussed in a previous part of this book, we have included here, for convenience and to keep it in context with TPC for Replication, a brief review. For details about configuring Metro Mirror and Global Mirror with TPC-R, refer to the following Redbooks publications:

- IBM TotalStorage Productivity Center for Replication on Windows 2003, SG24-7250
- IBM TotalStorage Productivity Center for Replication on AIX, SG24-7407
- IBM TotalStorage Productivity Center for Replication on Linux, SG24-7411

The configuration of Metro/Global Mirror with TPC-R is explained in Chapter 33, Metro/Global Mirror with IBM TotalStorage Productivity Center for Replication on page 557.

6.4.1 FlashCopy
FlashCopy is a point-in-time copy of a source volume mapped to a target volume. It offers various options including the choice between a COPY option, for a background copy through physical I/O operations in the storage server back-end, or a NOCOPY option. FlashCopy is available on the DS6000 and DS8000 as well as on the ESS 800. FlashCopy is also available on the DS4000 family and with the SAN Volume Controller. Note, however, that the DS4000 and SVC use different implementations of the FlashCopy function that are not compatible with the DS8000, DS6000 and ESS 800 FlashCopy.

6.4.2 IBM FlashCopy SE


TotalStorage Productivity Center for Replication can establish a FlashCopy relationship where the target volume is a DS8000 Space Efficient volume. A Space Efficient volume does not occupy physical capacity when it is created. Space gets allocated when data is actually written to the volume. The amount of space that gets physically allocated is a function of the amount of data changes performed on a volume. The sum of all defined Space Efficient volumes can be larger than the physical capacity available. This function is also called over provisioning or thin provisioning.

6.4.3 Metro Mirror


Metro Mirror (MM) is a synchronous data replication mechanism and was previously called Peer-to-Peer Remote Copy (PPRC). I/O completion is signaled when the data is secured on both the local storage server and the remote storage server. Metro Mirror is popular for 2-site disaster recovery solutions. Metro Mirror is available in any combination between the DS6000, DS8000, and ESS 800. Synchronous data replication is also available on the DS4000 family and with the SVC, but again it is not directly compatible with Metro Mirror on the DS6000, DS8000, and ESS 800.


6.4.4 Global Copy


Global Copy (GC) is an asynchronous data replication mechanism and used to be called PPRC for eXtended Distance (PPRC-XD). Asynchronous in this context means that the data transmission between the local and remote storage servers is separated from signaling the I/O completion. Note that GC does not guarantee that the writes arriving at the local site are applied to the remote site in the same sequence. Therefore, it does not provide data consistency. GC is suited for data migration over any distance. Global Copy is possible in any combination between the DS6000, DS8000, and ESS 800.

Restriction: TPC for Replication does not provide an interface to manage a Global Copy environment.

6.4.5 Global Mirror


Global Mirror (GM) is asynchronous data replication over any distance. It is a combination of GC and FlashCopy, complemented by unique firmware features. The firmware drives the copy operations in a fully autonomic fashion to guarantee consistent data at any time at the remote site. This also holds when the consistent data is spread over more than a single storage server. GM is also the base for a 2-site disaster recovery solution involving any DS6000, DS8000, or ESS 800. The function as such is also possible between products of the DS4000 family and the SVC.

Note: With TPC for Replication Version 3.3.3, the Metro Mirror license requirement for TPC for Replication managed Global Mirror target machines is gone. As part of the Global Mirror recovery, TPC for Replication will do a go-to-sync operation when doing the Global Mirror failback from the remote site to the local site.

6.4.6 Metro/Global Mirror


Metro/Global Mirror (MGM) is a 3-site disaster recovery solution. MGM is a combination of MM and GM: a synchronous data replication between two sites in a metropolitan area, and an asynchronous copy of the same data at a different remote site, which can be located at any distance. In an MGM configuration, a software-based solution is also needed to provide automation and to guarantee that any pre-determined action happens as quickly as possible and always with the same results when a trigger causes this action to happen. Besides TPC for Replication, the MGM Utility Toolkit for ICKDSF and TSO can be used to manage MGM from a z/OS-based environment.

Note: Currently TPC for Replication supports MGM management only for the DS8000 with the incremental resync concept. This concept also requires (temporary) Fibre Channel connectivity between Site 1 and Site 3 if:

- Site 2 goes down and an incremental resync from Site 1 to Site 3 is started.
- A recovery was performed to either Site 2 or Site 3 and the failed site needs to be integrated again into the MGM configuration.


6.4.7 Failover/failback terminology


There is often a misconception about what failover/failback commands do. Figure 6-3 illustrates the failover/failback two-step process. The process is as follows: 1. When the replication process between primary (P) and secondary (S) is suspended as a result of a planned or unplanned outage, you might want to access the S volumes immediately without any checking on the P volumes. The failover command always points to the S volumes and turns the S volumes from SECONDARY state into a PRIMARY SUSPEND state, making the volumes immediately available to the application. This state change for the S volumes at the remote site happens without any communication or any checking of the P volumes at the local site, which keep the status they had before the failover command was issued.
Figure 6-3 Failover/failback terminology

2. When the outage is over and you decide to resume replicating data between the sites, you can choose whether to resynchronize from the remote site to the local site or vice versa. This depends on which of the two sites (or both) was updated after the suspension.
a. When you decide to resynchronize from the remote site to the local site, the failback command is directed to the remote volumes. You need a PPRC path from the remote site to the local site before you can issue the corresponding commands. This resynchronizes both sites with the level of data as it is at the remote site: only the data that changed at the remote site since the replication was suspended is replicated from the remote to the local volumes. Figure 6-3 on page 49 assumes a Metro Mirror (MM) environment, which puts the corresponding MM pair into PENDING state during resynchronization; once everything is replicated, the MM pair goes into DUPLEX state.

b. If you want to resynchronize both sites with the data level at the local site, issue the failback command towards the volumes at the local site. This replicates the data changed since the suspension from the local site to the remote site, and also resets the tracks that were potentially changed at the remote site after the failover. Figure 6-3 on page 49 assumes an MM environment that places the MM pair in PENDING state during the resynchronization phase; once all changed data is replicated, the MM pair goes into DUPLEX state.
The option to resynchronize from either site is possible because of the change bitmaps maintained by the DS6000 or DS8000 at both sites.
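With the DS CLI, the same two-step process maps to the failoverpprc and failbackpprc commands. In the following sketch the storage image and volume IDs are illustrative assumptions. The first command is issued against the remote storage image and turns its volume into PRIMARY SUSPENDED; the second command, also issued against the remote storage image, resynchronizes towards the local site with the data level of the remote site:

dscli> failoverpprc -dev IBM.2107-75XYZ01 -remotedev IBM.2107-75ABC01 -type mmir 1000:1000
dscli> failbackpprc -dev IBM.2107-75XYZ01 -remotedev IBM.2107-75ABC01 -type mmir 1000:1000

To resynchronize in the other direction instead, with the data level of the local site, issue failbackpprc against the local storage image after a PPRC path in that direction is available.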

6.5 TPC for Replication terminology


TPC for Replication manages and integrates not only the DS6000 and DS8000 but also the SAN Volume Controller. To describe the functions of the different disk storage servers in a common way, new terms are introduced here, different from those normally used in the context of Copy Services with the ESS 800, DS6000, and DS8000.

6.6 Volumes in a copy set


With TPC for Replication, the role of a volume within Copy Services has been renamed to be more generic and to also include other storage subsystems such as the SVC. A copy set contains the volumes that are part of a Copy Services relationship: Metro Mirror and FlashCopy each use two volumes per copy set, while Global Mirror requires three volumes per copy set.
The terminology is slightly different from what you might be used to. For example, Metro Mirror uses a primary or source volume at the sending site and a secondary or target volume at the receiving end; such a pair is now called a copy set. FlashCopy traditionally speaks of source and target volumes to identify a FlashCopy pair, in contrast to RMC, which designates the volumes in a pair as primary and secondary; again, such a FlashCopy volume pair is now a copy set.
Global Mirror involves three volumes per copy set because it relies on Global Copy and FlashCopy. There is a Global Copy primary volume and a Global Copy secondary volume, and the Global Copy secondary volume has an additional role as the FlashCopy source volume. The third volume is the FlashCopy target volume. All three volumes are part of a Global Mirror copy set.

6.6.1 Host volume


A host volume is identical to what is called a primary or source volume in Copy Services. The host designation represents the volume's functional role from an application point of view: it is usually connected to a host or server and receives read, write, and update application I/Os. When a host volume becomes the target volume of a Copy Services function, it is usually no longer accessible from the application host; FlashCopy target volumes can be considered an exception.


6.6.2 Target volume


A target volume is what was also usually designated as a secondary volume. It receives data from a host volume or another intermediate volume. It might also be an intermediate volume, as in a Global Mirror copy set.

6.6.3 Journal volume


A journal volume is currently the FlashCopy target volume in a Global Mirror copy set. It is called a journal volume because it functions like a journal and holds the required data to reconstruct consistent data at the Global Mirror remote site. When a session needs to be recovered at the remote site, the journal volume is used to restore data to the last consistency point. Journal volumes are labeled with the identifier J and the site to which they belong; for example J2 is the Journal Volume on site 2.

6.6.4 Intermediate volume


The intermediate volume is used in the so-called Practice Sessions for Metro Mirror and Global Mirror. These two new session types provide all the functions available in Metro Mirror and Global Mirror Two-Site BC sessions, with the added support for an intermediate volume to allow users to practice recovery procedures. Intermediate volumes are labeled with the identifier I and the site to which they belong, for example I2.

6.6.5 TPC for Replication copy set


A copy set in TPC for Replication is a set of volumes that have a copy of the same data. In PPRC terms this is, for example, a PPRC primary volume and a PPRC secondary volume. Figure 6-4 shows three Metro Mirror Copy pairs. In TPC for Replication each pair is considered as a copy set. A copy set here contains two volumes.
Figure 6-4 TPC for Replication - Metro Mirror copy sets


Figure 6-5 represents two copy sets. In this illustration, each copy set contains three volumes, which indicates a Global Mirror configuration. Note that the third volume acts as a journal volume: it is the FlashCopy target volume that ensures data consistency in a Global Mirror setup.
Figure 6-5 TPC for Replication - Global Mirror copy sets

6.6.6 TPC for Replication session


TPC for Replication uses a session concept, which is similar to what Global Mirror for System z (XRC) uses. A session is a logical concept that gathers multiple copy sets, representing a group of volumes with the requirement to provide consistent data within the scope of all involved volumes. Commands and processes performing against a session apply these actions to all copy sets within the session. Again this is similar to a Global Mirror for System z (XRC) session.


Figure 6-6 shows an example of two storage servers at the local site and two corresponding storage servers at the remote site. The example further assumes that a Metro Mirror relation is established to replicate data between both sites.
Figure 6-6 TPC for Replication - session concept

Metro Mirror primary volume H1 from copy set 3 in the second storage server, and the H1 volumes of copy sets 1 and 2 in the first storage server, together with their corresponding Metro Mirror secondary volumes, are grouped into one session, Session 1. A session can therefore contain copy sets that span storage server boundaries. The Metro Mirror primary volumes H1 of copy sets 4 and 5, with their counterparts H2 at the remote site, belong to a different session, Session 2.
Note that all application-dependent copy sets must belong to the same session to guarantee successful management and to provide consistent data across all involved volumes within a session. In this context we recommend keeping application-dependent volumes within the scope of an LSS. In other words, volumes or copy sets that require consistent data and can be subject to a freeze trigger ought to be grouped in one or more LSSs, and those LSSs must not contain other copy sets. This is because the scope of the freeze function is at the LSS level and affects all volumes within that LSS.


6.7 TPC for Replication session types


There are three TPC for Replication licenses, which include different levels of Copy Services functions, as described in the following sections.

6.7.1 TPC for Replication Basic License


The Basic License includes Metro Mirror and Global Mirror, in addition to FlashCopy. However, Metro Mirror and Global Mirror can only be configured to replicate data in a unidirectional fashion.

FlashCopy
FlashCopy includes all functional flavors of FlashCopy. This includes FlashCopy with these capabilities:
- Background copy or no background copy.
- Convert no background copy to background copy.
- Add the persistent attribute to a FlashCopy relationship.
- Establish incremental FlashCopy to apply only the data changed since a previous FlashCopy operation.
- In a FlashCopy relationship the target volumes can be Space Efficient volumes.
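These capabilities correspond to options of the DS CLI FlashCopy commands. The following lines are only a sketch; the device and volume IDs are illustrative assumptions:

dscli> mkflash -dev IBM.2107-75ABC01 -nocp -persist -record 1000:1100
dscli> resyncflash -dev IBM.2107-75ABC01 -record -persist 1000:1100
dscli> mkflash -dev IBM.2107-75ABC01 -nocp -tgtse 1000:1200

The first command establishes a persistent, change-recording FlashCopy without background copy; the second refreshes it incrementally with only the changed data; the third uses a Space Efficient target volume (FlashCopy SE).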

Metro Mirror
Metro Mirror is between a local site (site 1) and a remote site (site 2) in a unidirectional fashion. It allows a failover to the remote site for testing, but mirroring can only be restarted in the direction defined in the session (from site 1 to site 2). This can happen through failover/failback operations.

Global Mirror
Global Mirror implies three volumes for each Global Mirror copy set. Again, this happens between a local site (site 1) and a distant site (site 2) in a unidirectional way. It is not possible to reverse the replication direction; a restart of mirroring can only be done from site 1 to site 2.

6.7.2 TPC for Replication Two Site Business Continuity


The Two Site Business Continuity license adds to the Basic license the capability to reverse the replication direction of the mirroring sessions (bi-directional). Additionally, it supports a second TPC for Replication server as a standby server to provide redundant replication management.

Metro Mirror
Metro Mirror can be established from site 1 to site 2 or vice versa. It also allows failover and failback operations for any copy set at any time.

Global Mirror
The Two Site BC license allows Global Mirror to reverse its direction after a recovery; that is, the replication direction can be changed with failover/failback operations. Global Mirror builds on three volumes per copy set. TPC for Replication allows managing a configuration that replicates data through Global Mirror from site 1 to site 2 and only in Global Copy mode in the reversed direction from site 2 to site 1.

6.7.3 TPC for Replication Three Site Business Continuity


TPC for Replication Three Site Business Continuity (BC) includes all the capabilities of TPC for Replication Two Site BC, plus support that enables you to perform Metro/Global Mirror.

Metro/Global Mirror for the DS8000


Metro/Global Mirror is a 3-site continuous copy solution for the DS8000 that combines Metro Mirror synchronous copy and Global Mirror asynchronous copy into one session. In this combined session, the Metro Mirror target is the Global Mirror source. This combination allows HyperSwap scenarios between the Metro Mirror pairs and disaster recovery scenarios at the Global Mirror target.

6.8 TPC for Replication session states


Again, a session contains a group of copy sets (that is, RMC (PPRC) volume pairs or FlashCopy pairs) that belong to a certain application. You can also consider it a collection of volumes that belong to a certain application or system with a requirement for consistency. Such a session can be in one of the following states:
- Defined: The session is defined and might already contain copy sets or might not have any copy sets assigned yet. A defined session is not yet started.
- Flashing: Data copying is temporarily suspended while a consistent practice copy of the data is being prepared on site 2.
- Preparing: The session has started and is initializing, for example performing a first full initial copy for a Metro Mirror. It could also be re-initializing, for example during resynchronization of a previously suspended Metro Mirror. Once the initialization is complete, the session state changes to Prepared.
- Prepared: All volumes within the session have completed the initialization process.
- Suspending: A transition state caused by either a suspend command or any other suspend trigger, which might be an error in the storage subsystem or a loss of connectivity between sites. Eventually the process of suspending copy sets ends and copying has stopped, which is indicated by the Suspended session state.
- Suspended: Replicating data from site 1 to site 2 has stopped. Application writes to the concerned volumes in site 1 can continue (FREEZE and GO) or be stopped (FREEZE and STOP). An additional recoverable flag indicates whether the data is consistent and recoverable.
- Recovering: The session is about to recover.


- TargetAvailable: The recover command has completed and the target volumes are write-enabled and available for application I/Os. An additional recoverable flag indicates whether the data is consistent and recoverable.
Important: Do not manage, through other software, Copy Services volume pairs that are already managed by TPC for Replication.
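The session state is also visible through the TPC for Replication command line interface (CSMCLI; see 6.16, Command line interface to TPC for Replication). As a minimal sketch, assuming the CSMCLI is connected to the TPC for Replication server, the defined sessions and their states can be listed as follows:

csmcli> lssess

The output lists each session together with its status and state (for example Defined, Prepared, Suspended, or Target Available). Session actions such as start, suspend, or recover are issued with the cmdsess command; the valid action keywords depend on the session type, so check the built-in command help for your release.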

6.9 TPC for Replication and scalability


From an architecture point of view there is no limit to the number of Copy Sets that can belong to a single session. The implementation approach of managing copy sets and sessions provides the scalability that very large installations require.

Figure 6-7 TPC for Replication - four sessions

There is also no limit to the number of sessions that TPC for Replication can manage. Figure 6-7 shows four sessions managing different copy sets. The first session, session 1, implies a Metro/Global Mirror configuration that is supported by TPC for Replication Three Site BC. The second session, session 2, is a Global Mirror session between an intermediate site and a remote site with two copy sets. The third session, session 3, is a Metro Mirror configuration between the local and intermediate site.


The session at the bottom in Figure 6-7, session 4, is a Global Mirror session between the local and remote sites.

6.10 TPC for Replication system and connectivity overview


TPC for Replication is an outboard approach: its software runs on a dedicated server, or on two servers for a high availability configuration. Figure 6-8 shows how the TPC for Replication server connects to the storage servers that it manages. It does not show the connectivity of the TPC for Replication server to the network through which the user connects with a browser to the server itself.

Figure 6-8 TPC for Replication system overview

The TPC for Replication server contains, besides the actual functional code, DB2 UDB and WebSphere software. The IP communication services component also contains an event-listening capability to react to trap messages from the storage servers. This provides a handle to plan for particular events and to provide pre-established scripts that are triggered by a corresponding trap message. It also includes the capability to distinguish between the different storage servers that can be managed by the TPC for Replication server, such as DS8000, DS6000, ESS 800, and SVC. This approach has the potential to be enhanced as storage servers change over time, without touching the actual functional code and the involved database.


Figure 6-9 shows how the TPC for Replication server connects to the DS6000 and DS8000.

Figure 6-9 TPC for Replication server connectivity to DS6000 and DS8000

The actual connectivity between the TPC for Replication server and the storage servers is based on Ethernet networks and connects to particular Ethernet ports of the System p servers in the DS8000. This particular Ethernet card is a new card and slides into the first of the four slots in the p570. Note that Figure 6-9 shows the DS8000 rear view, because these slots are only accessible from the rear side of the DS8000.

DS8000 Ethernet card feature codes


This Ethernet card is required for TPC for Replication and is available for the following DS8000 models:
- 921, 922, 931, and 932 with feature code 1801 for the Ethernet adapter pair. Note that you always need a pair of cards because one Ethernet card installs in server0 and a second card installs in server1.
- 9A2 and 9B2 with feature code 1802 for the Ethernet adapter pair for the first LPAR.
- 9A2 and 9B2 with feature code 1803 for the Ethernet adapter pair for the second LPAR.
These features are chargeable and carry a minimum monthly maintenance charge.

DS8000 Ethernet card installation and configuration considerations


The Ethernet card might come already installed from manufacturing or be installed on-site. To configure the Ethernet ports, at least Release 2 microcode is required for the concerned DS8000; these are code bundles starting with 6.2.xxx.xx or higher. Port numbers on the first card are I9801 and I9802; this is the card that installs in server0. Port numbers on the second card are I9B01 and I9B02; this is the card that installs in server1. Note that only the first port on each card is currently used.


Communication through these ports uses static IP addresses. DHCP is not supported. You can configure these new ports either through the GUI or the DSCLI. The GUI provides a new panel to configure the required IP addresses. This panel is in the GUI path of the storage image. Select a storage image, then select Configure network ports from the Select Action pull-down (note that this option is only presented when the network card for TPC for Replication is actually installed). This is shown in Figure 6-10. The Configure Network ports panel then displays.

Figure 6-10 Select Configure network ports

The DSCLI provides the following commands to manage these new network ports:
- lsnetworkport -l: Shows the server associations, physical port locations, and all IP address settings for all ports on the queried Storage Facility Image. See Example 6-2.
- shownetworkport: Shows the server association, physical location, and IP addresses for a particular port on the Ethernet card. See Example 6-3.
- setnetworkport: Configures the network ports. Example 6-1 displays a command example that configures the first port on the Ethernet card in server0.
Example 6-1 setnetworkport command example

dscli> setnetworkport -dev IBM.2107-7520781 -ipaddr 9.155.86.128 -gateway 9.155.86.1 -subnet 255.255.255.0 -primary 9.64.163.21 -secondary 9.64.162.21 N9801

Example 6-2 shows an output example of lsnetworkport -l, which provides an overview of all available Ethernet ports on the concerned Storage Facility Image.


Example 6-2 Output of lsnetworkport command

dscli> lsnetworkport -l
Date/Time: 28 August 2006 13:19:55 CEST IBM DSCLI Version: 5.2.200.308 DS: IBM.2107-7503461
ID    IP Address   Subnet Mask    Gateway  Primary DNS  Secondary DNS  State   Server Speed    Type            Location
========================================================================================================================
I9801 9.155.50.53  255.255.255.0  0.0.0.0  9.64.163.21  9.64.162.21    Online  00     1 Gb/sec Ethernet-Copper U7879.001.DQD04X5-P1-C1-T1
I9802 0.0.0.0      0.0.0.0        0.0.0.0  9.64.163.21  9.64.162.21    Offline 00     1 Gb/sec Ethernet-Copper U7879.001.DQD04X5-P1-C1-T2
I9B01 9.155.50.54  255.255.255.0  0.0.0.0  9.64.163.21  9.64.162.21    Online  01     1 Gb/sec Ethernet-Copper U7879.001.DQD04WH-P1-C1-T1
I9B02 0.0.0.0      0.0.0.0        0.0.0.0  9.64.163.21  9.64.162.21    Offline 01     1 Gb/sec Ethernet-Copper U7879.001.DQD04WH-P1-C1-T2

Example 6-3 shows an output example of shownetworkport, which provides an overview of all settings for a particular Ethernet port.
Example 6-3 Output of shownetworkport for a particular Ethernet port

dscli> shownetworkport i9801
Date/Time: 28 August 2006 13:20:00 CEST IBM DSCLI Version: 5.2.200.308 DS: IBM.2107-7503461
ID            I9801
IP Address    9.155.50.53
Subnet Mask   255.255.255.0
Gateway       0.0.0.0
Primary DNS   9.64.163.21
Secondary DNS 9.64.162.21
State         Online
Server        00
Speed         1 Gb/sec
Type          Ethernet-Copper
Location      U7879.001.DQD04X5-P1-C1-T1

TPC for Replication configuration options


Note that Figure 6-9 on page 58 outlines the basic connectivity idea. For high availability, you might consider a second TPC for Replication server that connects to the same IP network as the first server. Currently, the TPC for Replication server connects only to one of the two Ethernet ports on the Ethernet card, which must reside in the first slot of the System p server in the DS8000. Note that the internal and potential external HMCs connect to the DS8000 through different ports than the TPC for Replication servers.
The communication between the TPC for Replication server and the DS infrastructure is direct, as shown in Figure 6-9 on page 58. This is different from how the TPC for Replication server communicates with the SAN Volume Controller (SVC): between the TPC for Replication server and the SVC nodes there is an SVC CIMOM-based console, which is part of the standard SVC master console. For more details, refer to Powering SOA with IBM Data Servers, SG24-7259.


6.11 TPC for Replication monitoring and freeze capability


TPC for Replication always uses the consistency group attribute when you define PPRC paths for Metro Mirror between a primary and a secondary storage server. This provides TPC for Replication with the capability to freeze a Metro Mirror configuration when an incident happens, to guarantee consistent data at the secondary or backup site.

Figure 6-11 TPC for Replication server freeze

The TPC for Replication server listens for incidents from the storage servers and takes action when it is notified of a replication error from the concerned storage server. Figure 6-11 implies a replication error in an LSS that belongs to the session. The TPC server receives a corresponding SNMP trap message from the concerned storage server and then issues a freeze command to all LSSs that are part of the concerned session. This implies a suspend of all PPRC pairs or copy sets that belong to this session.
During this freeze process, write I/Os are held until the freeze process ends and the TPC server communicates to the storage server to continue processing write I/O requests to the concerned primary volumes in the session. After that, write I/O can continue to the suspended primary volumes. However, both sites are no longer in sync, but the data on the secondary site is consistent (power-drop consistent). This is a freeze-and-go policy.


6.12 TPC for Replication heartbeat


Because the connectivity between the TPC for Replication server and the storage servers that the TPC server is managing can fail, the firmware in the storage server waits for a heartbeat signal from the TPC server. TPC for Replication can enable this heartbeat in the corresponding LSS for Metro Mirror sessions.

Figure 6-12 LSS heartbeat triggers freeze when connectivity to server fails

Figure 6-12 illustrates a failing connectivity between the TPC for Replication server and a primary storage server. When the heartbeat is set and the storage subsystem cannot communicate its heartbeat information to the TPC for Replication server, the storage server internally triggers a freeze of the involved LSSs and their primary volumes. Because this heartbeat is a timer-scheduled function, it is based on the Consistency Group time-out value of each LSS that contains volumes belonging to the concerned session. This session-based heartbeat timer expires after the lowest time-out value of all concerned LSSs.
With more than one storage server involved, the TPC for Replication server issues freeze commands to all other LSS pairs in the affected session when the heartbeat expires and the TPC for Replication server could not receive heartbeat information from any one of the involved storage servers. The disconnected storage server or LSS resumes I/O after its Consistency Group time-out has expired. LSSs or storage servers that are still connected to the TPC for Replication server receive a corresponding freeze command from the TPC for Replication server.
If a freeze occurred due to a lost heartbeat, TPC for Replication will not release I/O automatically, to ensure consistency. This is independent of the session I/O policy after a freeze, because on lost LSS heartbeats TPC for Replication cannot verify whether all involved LSSs are already frozen. Note that Extended Long Busy (ELB), automation window, and freeze period are synonyms for Consistency Group time-out in the above context.


Keep in mind that, at the time of writing, only the active TPC for Replication server can control the Metro Mirror heartbeat and the automated freeze function. For that reason, we recommend that you place the active server on the same site as the Metro Mirror primaries. Especially with increased ELB time-out values, a communication loss to the active TPC for Replication server can cause an LSS freeze with a prolonged write I/O impact for the application. The heartbeat function can also be disabled on the active TPC for Replication server.

6.13 Supported platforms


Currently, the TPC for Replication server can run under the following operating systems:
- Windows 2003 Server Edition with SP1 and above
- Windows 2003 Enterprise Edition with SP1 and above (shown in Figure 6-13)

Figure 6-13 Windows 2003 Software level

- SUSE Linux Enterprise Server 9 SP2 (note that SUSE Linux does not support the Two Site BC configuration)
- Red Hat Enterprise Linux RHEL4 AS 2.1 (for Replication Two Site BC: Red Hat Enterprise Linux RHEL4 Update 1, SLES9 SP2)
- AIX 5.3 ML3
- z/OS V1.6 and above
Note that for a TPC for Replication Two Site BC configuration that involves two servers, it is possible to run TPC for Replication under two different operating system platforms. For the latest software requirements, refer to:
http://www.ibm.com/servers/storage/support/software/tpcrep/installing.html


6.14 Hardware requirements for TPC for Replication servers


For Windows and Linux, we suggest the following minimum hardware configuration:
- 1.5 GHz Intel (TM) Pentium (R) III processor
- 2 GB RAM
- 10 GB free disk space
When TPC for Replication runs on AIX, we suggest the following minimum hardware configuration:
- System p server, IBM POWER4 or IBM POWER5 processor, 1 GHz
- 2 GB RAM
- 10 GB free disk space
Disk space is required to hold the data in the DB2 databases and the WebSphere Express Application Server code, besides the actual TPC for Replication server code. For z/OS, the following hardware configuration is required:
- System z architecture CPU
- Operating system: z/OS V1.6 and above
- TCP/IP connection to the managed storage systems
Refer to the following Web site for the latest information about hardware requirements:
http://www.ibm.com/support/docview.wss?rs=1115&uid=ssg1S7001676

6.15 TPC for Replication GUI


This section describes the graphical user interface (GUI) that is used to communicate with the replication server. Additional details can be found in the documents mentioned at the very beginning of this chapter (see Chapter 6, IBM TotalStorage Productivity Center for Replication on page 43).
TPC for Replication provides a graphical user interface to manage and monitor any Copy Services configuration and Copy Services operations. The GUI is Web browser based and does not rely on any other product such as TotalStorage Productivity Center. It is simply invoked through an Internet browser such as Microsoft Internet Explorer or Mozilla Firefox, is intuitive and easy to use, and performs very well. The panel structure is not too deep and allows you to quickly transition to any application through a hyperlink-based menu.


Figure 6-14 displays the TPC for Replication GUI and its components used between the client on the user machine and the server component on the server machine.

Figure 6-14 TPC for Replication GUI

The Web-based client that runs on the user's machine contains the GUI client, a presentation component. This software piece on the user's machine is highly optimized to minimize data traffic over the LAN to the TPC for Replication server. The GUI server component runs on the TPC for Replication server machine, contains the interface to the functional code in the connected storage servers, and is usually installed close to the storage servers that interface with the TPC for Replication server.


6.15.1 Connecting to the TPC for Replication GUI


You connect to the GUI by specifying the IP address of the TPC for Replication server in your Web browser. This will present you with the sign-on panel, as shown in Figure 6-15. When you sign out from the TPC for Replication server, the same panel is also displayed.


Figure 6-15 Launch the TPC for Replication Server GUI

Specify a user ID as a text string in the UserID field and a password in the hidden text field. User IDs are defined and set up in the TPC for Replication server system.


6.15.2 Health Overview panel


After a successful login to the TPC for Replication server, the Health Overview panel is displayed, as shown in Figure 6-16.

Figure 6-16 Health Overview panel

This health panel displays an overall summary of the TPC for Replication system status. It shows information very similar to the small box in the lower left corner of the panel; this small health overview box is always present, but the Health Overview panel provides more details. It provides the overall status of:
- Sessions
- Connected storage subsystems
- Management servers
Figure 6-16 reports that all sessions are in normal status and working fine, that there is no high availability server environment, and that one or more storage servers, as previously defined, cannot be reached by the TPC for Replication server. The upper left box in this panel, labeled My Work, provides a list of applications that you can use to manage various aspects of a Copy Services environment:
- Health Overview: This is the currently displayed panel, as Figure 6-16 shows.
- Sessions: This hyperlink brings you to the application that manages all sessions. This is the application that you will use the most.
- Storage Subsystems: Here you start when you define the storage servers that are going to be used for Copy Services to the TPC for Replication server.


- ESS/DS Paths: This link allows you to manage everything related to PPRC path management.
- Management Servers: This link leads you to the application that manages the TPC for Replication server configuration.
- Advanced Tools: Here you can collect diagnostic information or set the refresh cycle for the displayed data.
- Console: This link opens a log that contains all activities that the user performed and their results.

6.15.3 Sessions panel


This panel lists all sessions within the TPC for Replication server (Figure 6-17).

Figure 6-17 Sessions overview

Each session comprises a number of copy sets that can be distributed across LSSs and physical storage servers. The session name functions as a token that is used to apply any action against the session, that is, against all the volumes that belong to that session.


Figure 6-18 illustrates that you first select a session and then choose the action you want to perform against that session.



Figure 6-18 About to create copy sets to session

Figure 6-19 displays the next possible step to perform an action. After selecting the session FlashCopy, as shown in Figure 6-17 on page 68, you can select any action from the action list shown in Figure 6-19.

Figure 6-19 Actions against a session

This can be an action against the entire session, such as suspending all volumes within the session. It is also used to modify an existing session and add or remove copy sets.


6.15.4 Storage Subsystems panel


The following panel (Figure 6-20) displays all storage subsystems currently connected to the TPC for Replication server.

Figure 6-20 GUI basic layout

Using the Add Subsystem button, you can define another storage subsystem to the TPC for Replication server. The panel used to add a new server is shown in Figure 6-22.


Figure 6-21 displays the available action list. From this list you select, for instance, the View/Modify Details action and apply it to the previously selected storage server.

Figure 6-21 Select storage subsystem and select View/Modify Details action

The selected storage subsystem connectivity details are now displayed, as shown in Figure 6-22.

Figure 6-22 Storage subsystem details

Typically, you would use the Storage Subsystems application only to connect a storage server to the TPC for Replication server. Figure 6-22 also shows the standard port used by the TPC for Replication server to communicate with the storage server. All the other fields are self-explanatory.


6.15.5 Path Management panel


Figure 6-23 displays the entry panel to manage PPRC paths.

Figure 6-23 Path overview panel

Clicking the Manage Path button will trigger the path wizard to help you define a new PPRC path or remove existing PPRC paths. Clicking a storage subsystem gives you a list of all the existing paths currently defined for the selected storage subsystems. Figure 6-24 displays all the defined PPRC paths for the selected LSS and the ports used for these PPRC paths.

Figure 6-24 Path overview of a DS8000


You can select any path here, and the only available action in this case is to then remove the selected paths.

6.15.6 TPC for Replication server Configuration panel


The panel in Figure 6-25 displays the status of the Replication management servers.

Figure 6-25 Replication Management Servers

Figure 6-25 shows only one server, named weissnet08.mainz.de.ibm.com, which is the active server. When a second server exists, an additional row shows it in standby status, and the panel has a slightly different appearance than shown in Figure 6-25. Use this panel for basic operations such as defining a server as standby or taking over in the event of a disaster. In the case of two servers, each server manages its own DB2 database. The communication between the two servers is performed through the LAN to which both servers are connected.


6.15.7 Advanced Tools panel


Figure 6-26 displays a panel through which you handle some specific tasks.

Figure 6-26 Advanced tools option

Specific tasks in this context are tasks to create a diagnostic package or to change the automatic refresh rate of the GUI. A third task is to enable or to disable the heartbeat, which happens between the TPC for Replication server and the connected storage servers. See 6.12, TPC for Replication heartbeat on page 62. The diagnostic package contains all logs, and its location on the TPC for Replication server is shown on the screen. The browser refresh rate is currently a value between 5 and 3600 seconds. The default value is 30 seconds.


6.15.8 Console log


Figure 6-27 displays an example of a console log. This panel shows a list of the most recent commands, which this user entered through the GUI.

Figure 6-27 Console log

Besides the commands, this log also shows whether each command succeeded, together with a message number that functions at the same time as a hyperlink to more detailed text about the result of the concerned command execution.

6.16 Command line interface to TPC for Replication


Besides the GUI, you can also manage TPC for Replication through a command line interface (CSMCLI). As with the DSCLI for the DS8000 and the DS6000, the CSMCLI follows a similar command naming structure: mk... for make, ch... for change, and rm... for remove. The CSMCLI is also invoked in the same fashion as the DSCLI for the DS products and provides three different modes:
- Single-shot mode
- Interactive mode
- Script mode


Example 6-4 shows a single shot command.


Example 6-4 Single-shot CLI command

> csmcli lsdevice -devtype ds

To execute more than one command, you can start the CLI program and enter the commands at the shell prompt, as Example 6-5 shows.
Example 6-5 Interactive CLI commands

... start csmcli ...
csmcli> lsdevice -devtype ds
csmcli> lsdevice -devtype ess

The third mode is the script mode, used to run commands out of a file.
Example 6-6 Script mode to execute CLI commands

... start csmcli ...
csmcli -script ~/rm/scripts/devreport

In contrast to the DSCLI for the DS storage servers, the CSMCLI currently does not use a -profile option.


Part 3. FlashCopy
This part of the book describes the IBM System Storage FlashCopy and IBM FlashCopy SE when used in open systems environments with the DS8000. We discuss the FlashCopy and FlashCopy SE features and describe the options for setup. We also show which management interfaces can be used, as well as the important aspects to be considered when establishing FlashCopy relationships.


Chapter 7. FlashCopy overview
FlashCopy creates a copy of a volume at a specific point-in-time, which we also refer to as a point-in-time copy, instantaneous copy, or time-zero copy (t0 copy). This chapter explains the basic characteristics of FlashCopy when used in an open systems environment with the DS8000. The following topics are discussed:
- FlashCopy operational areas
- FlashCopy basic concepts
- FlashCopy in combination with other Copy Services
- FlashCopy in a storage LPAR environment
- IBM FlashCopy SE


7.1 FlashCopy operational environments


It takes only a few seconds to establish the FlashCopy relationships for tens to hundreds or more volume pairs. The copy is then immediately available for both read and write access. In a 24x7 environment, the quickness of the FlashCopy operation allows us to use FlashCopy in very large environments and to take multiple FlashCopies of the same volume for use with different applications. Some of the different uses of FlashCopy are shown in Figure 7-1.

Figure 7-1 FlashCopy operational environments

FlashCopy is suitable for the following operational environments:
- Production backup system: A FlashCopy of the production data allows data recovery from an older level of data. This might be necessary due to a user error or a logical application error. Assume, for example, that a user accidentally deleted a customer record. The production backup system can work with a FlashCopy of the data; the necessary part of the customer data can be exported and then imported into the production environment. Thus production continues and, while the specific problem is being fixed, the majority of the users can work with the application without noticing any problems. The FlashCopy of the data can also be used by system operations to re-establish production in case of server errors.
- Data backup system: A FlashCopy of the production data allows the client to create backups with the shortest possible application outage. The main reason for data backup is to provide protection in case of source data loss due to disaster, hardware failure, software failure, or user errors.
- Data mining system: A FlashCopy of the data can be used for data analysis, thus avoiding performance impacts on the production system due to long-running data mining tasks.
- Test system: Test environments created by FlashCopy can be used by the development team to test new application functions with real production data, hence a faster test setup process.


- Integration system: New application releases (for example, SAP releases) are likely to be tested prior to putting them onto a production server. By using FlashCopy, a copy of the production data can be established and used for integration tests.
With the capability to reverse a FlashCopy, a previously created FlashCopy can be used within seconds to bring production back to the level of data it had at the time when the FlashCopy was taken.

7.2 Terminology
When discussing Metro Mirror, Global Copy, and Global Mirror, you will see that the following terms are frequently used interchangeably:
- The terms local, production, application, primary, or source denote the site where the production applications run during normal operation; these applications create, modify, and read the application data. The meaning is extended to the disk subsystem that holds the data, as well as to its components, volumes and LSSs.
- The terms remote, recovery, backup, secondary, or target denote the site to which the data is replicated and which holds the copy of the application data. The meaning is extended to the disk subsystem that holds the data, as well as to its components, volumes and LSSs.
When discussing FlashCopy, we use the term source to refer to the original data that is created by the application, and the term target to refer to the point-in-time backup copy. Also, the terms LUN and volume are used interchangeably in our discussions.

7.3 Basic concepts


With FlashCopy, a relationship is established between a source and a target volume; both are considered to form a FlashCopy pair. As a result of the FlashCopy, either all physical blocks of the source volume are copied (full copy) or, when using the nocopy option, only those parts of the source data that change after the FlashCopy has been established. The target volume needs to be the same size as or bigger than the source volume whenever FlashCopy is used to flash a whole volume.
Two variations of FlashCopy are available. Standard FlashCopy uses a normal volume as the target volume. This target volume has to be the same size as (or larger than) the source volume, and its space is fully allocated in the storage subsystem. FlashCopy SE uses Space Efficient volumes as FlashCopy target volumes. A Space Efficient volume has a virtual size that is equal to the source volume size; however, space is not allocated for this volume when the volume is created and the FlashCopy initiated. Space is allocated in a repository when a first update is made to the original tracks on the source volume and those tracks are copied to the FlashCopy SE target volume. Writes to the SE target also consume repository space. For more information on Space Efficient volumes and the repository concept, refer to IBM System Storage DS8000 Series: Architecture and Implementation, SG24-6786.


FlashCopy and FlashCopy SE are optional and distinct licensed features of the IBM DS8000. Both features can coexist on a DS8000.
Typically, large applications such as databases have their data spread across several volumes, and all of their volumes should be FlashCopied at exactly the same point-in-time. FlashCopy offers consistency groups, which allow multiple volumes to be FlashCopied at exactly the same instant.
The following characteristics are basic to the FlashCopy operation:
Establishing the FlashCopy relationship: When the FlashCopy is started, the relationship between source and target is established within seconds by creating a pointer table, including a bitmap for the target. While the FlashCopy relationship is being created, the DS8000 holds off I/O activity to the volume for a short period by putting the source volume in a SCSI queue full state. During this state, any new I/Os issued receive a queue full error and are automatically reissued by the host bus adapter; no user disruption or intervention is required. I/O activity resumes when the FlashCopy is established.
If all bits of the target bitmap are set to their initial values, no data block has been copied so far. The data on the target is not modified during setup of the bitmaps. At this first step, the bitmap and the data look as illustrated in Figure 7-2. The target volume, as depicted in the various figures in this section, can be a normal volume or a Space Efficient volume; in both cases the logic is the same. The difference between standard FlashCopy and FlashCopy SE is where the physical storage resides: for standard FlashCopy it is a normal volume, for IBM FlashCopy SE it is a repository (see Figure 7-4 on page 84).

Figure 7-2 FlashCopy at time t0


When the relationship has been established, it is possible to perform read and write I/Os on both the source and the target. Assuming that the target is used for reads only while production is ongoing, things will look as illustrated in Figure 7-3.

Figure 7-3 Reads from source and target volumes and writes to source volume

Figure 7-4 shows reads and writes for IBM FlashCopy SE.
Reading from the source: The data is read immediately (see Figure 7-3 or Figure 7-4).
Writing to the source: Whenever data is written to the source volume while the FlashCopy relationship exists, the storage subsystem makes sure that the time-zero data is copied to the target volume prior to being overwritten on the source volume. When the target volume is a Space Efficient volume, the data is actually written to a repository (Figure 7-4). To identify whether the data of a physical track on the source volume needs to be copied to the target volume, the bitmap is analyzed. If it indicates that the time-zero data is not yet available on the target volume, the data is copied from source to target; if it indicates that the time-zero data has already been copied to the target volume, no further action is taken (see Figure 7-3 on page 83 or Figure 7-4). It is possible to use the target volume immediately for reading and also for writing data.


Reading from the target: Whenever a read request goes to the target while the FlashCopy relationship exists, the bitmap is used to identify whether the data has to be retrieved from the source or from the target. If the bitmap states that the time-zero data has not yet been copied to the target, the physical read is directed to the source; if the time-zero data has already been copied to the target, the read is performed immediately against the target (see Figure 7-3 on page 83 or Figure 7-4).

Figure 7-4 Reads from source and target volumes and writes to source volume for IBM FlashCopy SE relations


Writing to the target: Whenever data is written to the target volume while the FlashCopy relationship exists, the storage subsystem makes sure that the bitmap is updated. This way the time-zero data from the source volume never overwrites updates done directly to the target volume. See Figure 7-5.

Figure 7-5 Writes to target volume

Terminating the FlashCopy relationship: The FlashCopy relationship is automatically ended when all tracks have been copied from the source volume to the target volume. The relationship can also be explicitly withdrawn by issuing the corresponding commands. If the persistent FlashCopy option was specified then the FlashCopy relationship must be withdrawn explicitly. An IBM FlashCopy SE relationship ends when it is withdrawn. When the relationship is withdrawn, there is an option to release the allocated space of the Space Efficient volume.

7.3.1 Full volume copy


When the copy option is invoked and the establish process completes, a background process is started that copies all data from the source to the target. Once this process is finished and if there were no updates on the target, the picture we get is similar to the one in Figure 7-6. If not explicitly defined as persistent, the FlashCopy relationship ends as soon as all data is copied. Only the classical FlashCopy allows a full copy; IBM FlashCopy SE has no such function. But remember that both features can co-exist.


Figure 7-6 Target volume after full volume FlashCopy relationship finished

If there are writes to the target, then the picture we get is similar to the one in Figure 7-7.

Figure 7-7 FlashCopy after updates to the target volume

7.3.2 Nocopy option


If FlashCopy is established using the nocopy option, then the result will be as shown in Figure 7-3 on page 83 and Figure 7-5 on page 85. The relationship will last until it is explicitly withdrawn or until all data in the source volume has been modified. Blocks for which no write occurred on the source or on the target will stay as they were at the time when the FlashCopy was established. If the persistent FlashCopy option was specified, the FlashCopy relationship must be withdrawn explicitly. The nocopy option is the default for IBM FlashCopy SE.
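With the DS CLI, the copy and nocopy behaviors are selected when the relationship is established. The following sketch uses illustrative device and volume IDs; the commands themselves are covered in detail later in this part of the book:

dscli> mkflash -dev IBM.2107-75ABC01 1000:1100
dscli> mkflash -dev IBM.2107-75ABC01 -nocp 1001:1101
dscli> lsflash -dev IBM.2107-75ABC01 1000-1001
dscli> rmflash -dev IBM.2107-75ABC01 1001:1101

The first command starts a FlashCopy with background copy, the second suppresses the background copy with -nocp, lsflash shows the state of the relationships, and rmflash withdraws a relationship explicitly.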


7.4 FlashCopy in combination with other Copy Services


Volume-based FlashCopy can be used in various combinations with other Copy Services; the most suitable combination depends on the characteristics of the environment and the requirements.

7.4.1 FlashCopy and Metro Mirror


Refer to Figure 7-8 for the following discussion of FlashCopy and Metro Mirror.

Figure 7-8 FlashCopy and Metro Mirror

As illustrated in Figure 7-8, at the primary Metro Mirror site, the following combinations are supported:
- A FlashCopy source volume can become a Metro Mirror primary volume and vice versa. The order of creation is optional.
- A FlashCopy target volume can become a Metro Mirror primary volume and vice versa.
If you wish to use a FlashCopy target volume as a Metro Mirror primary, be aware of the following considerations (a DS CLI sketch follows at the end of this section):
- The recommended order is to first establish the Metro Mirror, and then create a FlashCopy to that Metro Mirror primary using the -tgtpprc parameter. The Metro Mirror secondary will not be in a fully consistent state until the Metro Mirror enters the full duplex state.
- If you create the FlashCopy first and then do a Metro Mirror of the FlashCopy target, you must monitor the progress of the FlashCopy background copy. The Metro Mirror secondary will not be in a fully consistent state until the FlashCopy background copy process is complete. Use the copy option to ensure that the entire FlashCopy source volume data is copied to the Metro Mirror secondary.


On the secondary site of the Metro Mirror, a FlashCopy source volume can be the Metro Mirror secondary volume, and vice versa. There are no restrictions on which relationship should be defined first.

Tip: During the resynchronization phase of Metro Mirror pairs (after an outage of the Metro Mirror pairs), the secondary volumes are in an undefined state. Should something happen during this time, the secondary volumes would no longer be usable. Some customers plan for such scenarios and take a FlashCopy of all secondary volumes before resynchronizing. With standard FlashCopy, this requires twice the capacity at the secondary site. With IBM FlashCopy SE, only a fraction of the secondary capacity needs to be available, in addition to the secondary capacity, to hold the data modified during resynchronization.

7.4.2 FlashCopy and Global Copy


Refer to Figure 7-9 for the following discussion of FlashCopy and Global Copy.

Figure 7-9 FlashCopy and Global Copy

As illustrated in Figure 7-9, at the primary Global Copy site, the following combinations are possible:
- A FlashCopy source volume can become a Global Copy primary volume and vice versa. The order of creation is optional.
- A FlashCopy target volume can become a Global Copy primary volume and vice versa. If you want to use a FlashCopy target volume as a Global Copy primary, be aware of the following considerations:
  - The recommended order is to first establish the Global Copy, and then create a FlashCopy to that Global Copy primary using the -tgtpprc parameter. The Global Copy secondary will not be in a fully consistent state until the Global Copy enters the full duplex state. Execute the mkpprc -type mmir command to force the Global Copy to enter the full duplex state (a short sketch follows at the end of this section).


  - If you create the FlashCopy first and then do a Global Copy of the FlashCopy target, you must monitor the progress of the FlashCopy background copy. The Global Copy secondary will not be in a fully consistent state until the FlashCopy background copy process is complete. Use the copy option to ensure that the entire FlashCopy source volume data is copied to the Global Copy secondary.

On the secondary site of the Global Copy, a FlashCopy source volume can be based on the secondary volume for the Global Copy.
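The same considerations can be sketched for Global Copy. This hedged example uses placeholder storage image and volume IDs and assumes existing remote copy paths; the mkpprc -type mmir step is the conversion mentioned above, issued when a consistent state is required at the secondary:

# establish the Global Copy (extended distance) pair
mkpprc -dev IBM.2107-7506571 -remotedev IBM.2107-75ABCDE -type gcp 0100:0100
# FlashCopy onto the Global Copy primary
mkflash -dev IBM.2107-7506571 -tgtpprc 0000:0100
# later, force the pair to full duplex so that the secondary becomes consistent
mkpprc -dev IBM.2107-7506571 -remotedev IBM.2107-75ABCDE -type mmir 0100:0100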

7.4.3 FlashCopy and Global Mirror


FlashCopy in combination with Global Mirror supports only one type of relationship at the primary site (see Figure 7-10).

Figure 7-10 FlashCopy and Global Mirror

- A FlashCopy source volume can become a Global Mirror primary volume and vice versa. The relationships can be established in any sequence.
- A FlashCopy target volume cannot become a Global Mirror primary volume.
- On the Global Mirror secondary site, the Global Mirror target volume cannot be used as a FlashCopy source or FlashCopy target unless the Global Mirror pair is first suspended.


7.5 FlashCopy in a DS8300 storage LPAR environment


The IBM DS8300 supports the LPAR technology for storage disk subsystems. This allows a DS8300 model to be partitioned into two virtual storage systems. Each partition is called a DS8000 storage LPAR, a Storage Facility Image (SFI), or simply a storage image. See Figure 7-11.

Figure 7-11 DS8300 Storage Facility Image (SFI)

Note in this figure that two processor complex LPARs, one from each processor complex, make up one storage LPAR. That is, processor complex LPAR01 and LPAR11 together form DS8000 storage LPAR 1 (storage image 1).

FlashCopy within the same SFI


A FlashCopy can always be established between volumes belonging to the same Storage Facility Image (SFI). FlashCopy is not supported from a source volume in one DS8300 storage LPAR to a target volume on another storage LPAR.


Chapter 8.

FlashCopy options
In this chapter we describe the options of FlashCopy when working with the IBM System Storage DS8000 series in an open systems environment. We explain the following options:
- Multiple Relationship FlashCopy
- Consistency Group FlashCopy
- FlashCopy on existing Metro Mirror or Global Copy source
- Incremental FlashCopy
- Remote FlashCopy
- Persistent FlashCopy
- Reverse Restore and Fast Reverse Restore
- FlashCopy SE (Space Efficient FlashCopy)

Most of the considerations in the following sections apply to both standard FlashCopy and FlashCopy SE. However, in FlashCopy SE on page 99, we look more closely at the valid options for FlashCopy SE.


8.1 Multiple Relationship FlashCopy


It is possible to establish up to 12 FlashCopy relationships using the same source. In other words, a source volume can have up to 12 target volumes. However, a target volume can still have only one source. Furthermore, cascading FlashCopy is not allowed (that is, a volume cannot be both a source and a target volume). Following is a summary of the considerations that apply:
- A FlashCopy source volume can have up to 12 FlashCopy target volumes. Note: Only one of those targets can be defined as incremental FlashCopy. For each source volume, only one FlashCopy relationship can be reversed (the one having the -record attribute).
- A FlashCopy target volume can have only one FlashCopy source volume.
- A FlashCopy target volume cannot be a FlashCopy source volume at the same time.
Figure 8-1 illustrates what is possible and what is not with multiple relationship FlashCopy.

Figure 8-1 Multiple Relationship FlashCopy possibilities (a source can have up to 12 targets; a target can have only one source; a volume or dataset can be only a source or a target at any given time)

Note: At any point-in-time, a volume or LUN can be only a source or a target.
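A minimal sketch of multiple relationships from one source (the storage image and volume IDs are placeholders): several mkflash invocations can share the same source volume, but only one of the resulting relationships may use change recording:

mkflash -dev IBM.2107-7506571 0000:0100
mkflash -dev IBM.2107-7506571 0000:0101
# only one target per source can be the incremental (change recording) relationship
mkflash -dev IBM.2107-7506571 -record -persist 0000:0102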


8.2 Consistency Group FlashCopy


Applications might have their data spread over multiple volumes. In this case, if FlashCopy needs to be used for multiple volumes, they all have to be at a consistent level. Consistency Groups can be used to help create a consistent point-in-time copy across multiple volumes, and even across multiple DS8000 storage systems, thus managing the consistency of dependent writes.

With the DS CLI, you can establish a Consistency Group by using the freeze option and identifying all FlashCopy pairs and target volumes belonging to a group. With the Freeze FlashCopy Consistency Group option, the DS8000 holds off I/O activity to a volume for a time period by putting the source volume in a queue full state. Therefore, a time slot can be created during which dependent write updates do not occur, and FlashCopy uses that time slot to obtain a consistent point-in-time copy of the related volumes. I/O activity resumes when all FlashCopies are established.

Dependent writes: If the start of one write operation is dependent upon the completion of a previous write, the writes are dependent. Application examples for dependent writes are databases with their associated logging files. For instance, the database logging file is updated after a new entry has been successfully written to a tablespace.

The chronological order of dependent writes to the FlashCopy source volumes is the basis for providing consistent data at the FlashCopy target volumes. For a more detailed understanding of dependent writes and how the DS8000 enables the creation of consistency groups, thus ensuring data integrity on the target volumes, refer to the discussion in 14.4.1, Data consistency and dependent writes on page 181.
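As a hedged sketch of a FlashCopy Consistency Group with the DS CLI (volume and LSS IDs are placeholders; the exact form of the unfreezeflash argument, which identifies the source LSSs to thaw, should be checked against the DS CLI reference):

# establish all pairs of the group with the freeze option so that source I/O is held off
mkflash -dev IBM.2107-7506571 -freeze 0000:0100 0001:0101 0002:0102
# once all pairs are established, reset the Consistency Group (thaw source LSS 00)
unfreezeflash -dev IBM.2107-7506571 00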

8.3 FlashCopy target as a Metro Mirror or Global Copy primary


With this option, the FlashCopy target volume can also be a primary volume for a Metro Mirror or Global Copy relationship. You might want to use this capability to create both a remote copy and a local copy of a production volume. Figure 8-2 illustrates this capability. In this figure, the FlashCopy target and the Metro Mirror (or Global Copy) primary are the same volume. They are displayed as two separate volumes for ease of understanding.

Figure 8-2 FlashCopy target is Metro Mirror (or Global Copy) primary


It is possible to create either the FlashCopy relationship or the Metro Mirror (or Global Copy) relationship first. However, in general, it is better to create the Metro Mirror or Global Copy relationship first, using the nocopy option, to avoid initially sending unnecessary data across to the Metro Mirror (or Global Copy) secondary, and then do a full copy FlashCopy.

Important: With DS8000 R3 (Licensed Machine Code 5.3.x.x, bundle version 63.x.x.x) the time to initialize the bitmaps for the aforementioned scenario has been greatly improved.

8.4 Incremental FlashCopy refresh target volume


Incremental FlashCopy (resyncflash) provides the capability to refresh a FlashCopy relationship, thus refreshing the target volume.

Important: A refresh of the target volume always overwrites any data previously written to the target volume.

Restrictions:
- If a FlashCopy source has multiple targets, an incremental FlashCopy relationship can be established with one and only one target.
- Incremental FlashCopy is not available with FlashCopy SE.
To perform an incremental FlashCopy, you must first establish the FlashCopy relationship with the Start Change Recording and Persistent FlashCopy options enabled.
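A minimal sketch of this sequence with placeholder volume IDs (see the resyncflash examples in Chapter 9 for complete listings):

# establish the relationship with change recording and persistence enabled
mkflash -dev IBM.2107-7506571 -record -persist 0000:0100
# later, refresh the target with only the tracks that changed since the last FlashCopy
resyncflash -dev IBM.2107-7506571 -record -persist 0000:0100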

Figure 8-3 Updates to the target volume caused by a refresh target FlashCopy


With Incremental FlashCopy, the initial FlashCopy copy or nocopy relationship between a source and target volume is subject to the following (see Figure 8-3):
- FlashCopy with nocopy option: If the original FlashCopy was established with the nocopy option, then the bitmap for the target volume will be reset, and of course, the updates on the target volume are overwritten.
- FlashCopy with copy option: If the original FlashCopy was established with the copy option (full volume copy), then the updates that took place on the source volume since the last FlashCopy will be copied to the target volume. Also, the updates done on the target volume will be overwritten with the contents of the source volume.
When initializing a FlashCopy with Start Change Recording activated, a second and third bitmap will be used to identify writes done to the source or the target volume (see Figure 8-4). All three bitmaps are necessary for incremental FlashCopy:
- Target bitmap: This bitmap keeps track of tracks not yet copied from source to target.
- Source Change Recording bitmap: This bitmap keeps track of changes to the source.
- Target Change Recording bitmap: This bitmap keeps track of changes to the target.
These bitmaps allow subsequent FlashCopies to transmit only those blocks of data for which updates occurred. Every write operation to the source or target volume will be reflected in these bitmaps by setting the corresponding bit to 0.

Figure 8-4 FlashCopy with Start Change Recording set (before a physical write to the source, the time-zero data is copied from the source to the target; time-zero data not yet available in the target is read from the source)


When the refresh takes place, the bitmap used for change recording is used to analyze which blocks need to be copied from the source volume to the target volume (see Figure 8-5).

Figure 8-5 Refresh of the FlashCopy target volume

After the refresh (which takes place only on the bitmap level), the new FlashCopy based on time-0 is active. The copy of the time-0 data to the target is done in the background.

Tip: You can do the incremental copy (resyncflash) at any time. You do not have to wait for the previous background copy to complete.


8.5 Remote FlashCopy


With the DS CLI you can use commands to manage a FlashCopy relationship at a remote site. The commands can be issued from the local site, and then they are transmitted over the Metro Mirror or Global Copy links. This eliminates the need for a network connection to the remote site solely for the management of FlashCopy. The FlashCopy source volume at the remote site must be the secondary volume of the Metro Mirror or Global Copy pair.

Figure 8-6 Remote FlashCopy

Figure 8-6 illustrates this capability. In this figure, the Metro Mirror (or Global Copy) secondary and the FlashCopy source are the same volume. They are displayed as two separate volumes for ease of understanding.
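As a hedged sketch, a remote FlashCopy could be requested from the local site as follows. The storage image and volume IDs are placeholders, and the -conduit parameter, which identifies the local LSS whose Metro Mirror or Global Copy paths carry the command to the remote DS8000, is an assumption to be verified against the DS CLI reference:

# create a FlashCopy at the remote DS8000, sent over the remote copy links of local LSS 00
mkremoteflash -dev IBM.2107-75ABCDE -conduit IBM.2107-7506571/00 1000:1100
# query its progress from the local site
lsremoteflash -dev IBM.2107-75ABCDE -conduit IBM.2107-7506571/00 1000:1100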

8.6 Persistent FlashCopy


With this option the FlashCopy relationship continues until explicitly removed (until the user terminates the relationship using one of the interface methods). If this option is not selected, the FlashCopy relationship will exist until all data has been copied from the source volume to the target.


8.7 Reverse restore


With this option, the FlashCopy relationship can be reversed by copying over modified tracks from the target volume to the source volume (see Figure 8-7). The background copy process must complete before you can reverse the order of the FlashCopy relationship to its original source and target relationship. Change recording is a prerequisite for reverse restore.

Figure 8-7 Reverse restore

The source and target bitmaps (illustrated in Figure 8-4 on page 95) are exchanged and then handled as described with the Incremental FlashCopy option. Because a reverse restore operation requires the background copy to be completed and FlashCopy SE does not allow a background copy process, a reverse restore is not possible with FlashCopy SE.

8.8 Fast reverse restore


This option is used with Global Mirror, but you can also use it for normal FlashCopy or FlashCopy SE relationships. A prerequisite, however, is that you set up your FlashCopy relationship with the Persistent and Inhibit Target Write attributes. If you specify the -fast option on the reverseflash command, you can reverse the FlashCopy relationship without waiting for the completion of the background copy of the previous FlashCopy. Note, however, that the original target volume is in an undefined state after the reverse. Only the original FlashCopy source volume is usable and contains the data from the last FlashCopy.
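A minimal sketch with placeholder volume IDs, assuming the relationship was established with the prerequisites named above:

# establish the relationship as persistent, with writes to the target inhibited
mkflash -dev IBM.2107-7506571 -persist -tgtinhibit 0000:0100
# reverse immediately, without waiting for the background copy to complete
reverseflash -dev IBM.2107-7506571 -fast 0000:0100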


Tip: The Fast Reverse Restore capability has been enhanced with DS8000 R3 (Licensed Machine Code 5.3.x.x, bundle version 63.x.x.x). In R3, you will no longer need to have Change Recording specified on the relationship in order to do Fast Reverse Restore. It is possible to have multiple targets and use Fast Reverse Restore to restore any ONE of them. You still have to specify target write inhibit. The main consideration for using Fast Reverse Restore on one of the relationships is that prior to the Fast Reverse Restore, all other targets must be removed. Therefore, you must be careful about picking the correct relationship for the Fast Reverse Restore.

8.9 FlashCopy SE
Most options available for standard FlashCopy are also available for FlashCopy SE.

Important: For up-to-date information and recommendations, refer to the following link:
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10617

8.9.1 Multiple Relationship FlashCopy SE


Standard FlashCopy supports up to 12 relationships from one source, and one of these relationships can be incremental. There is always some overhead when doing a FlashCopy or any kind of copy within a storage subsystem. A FlashCopy onto a Space Efficient volume has some more overhead, because additional tables have to be maintained. All FlashCopy SE relations are nocopy relations; incremental FlashCopy is not possible. Therefore, the practical number of FlashCopy SE relationships from one source volume (that is, the number acceptable from a performance standpoint) will be lower than 12. You have to test in your own environment how many concurrent FlashCopy SE relationships are acceptable.
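As an illustration (the IDs are placeholders, and the targets are assumed to be Track Space Efficient volumes created beforehand), two FlashCopy SE relationships from the same source could be established like this:

mkflash -dev IBM.2107-7506571 -tgtse 0000:0100
mkflash -dev IBM.2107-7506571 -tgtse 0000:0101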

8.9.2 Consistency Group FlashCopy SE


With FlashCopy SE, consistency groups can be formed in the same way as standard FlashCopy (see Consistency Group FlashCopy on page 93). Within a consistency group, there can be a mix of standard FlashCopy and FlashCopy SE relationships.

8.9.3 FlashCopy SE target as a Metro Mirror or Global Copy primary


A FlashCopy SE target volume can be a source volume for a Metro Mirror or Global Copy relationship.

8.9.4 Remote FlashCopy SE


Just as with standard FlashCopy, you can initiate a FlashCopy SE relation on a remote DS8000 (see Remote FlashCopy on page 97).

8.9.5 Persistent FlashCopy SE


A FlashCopy SE relation can be persistent. The relation does not end even if all of the data has been rewritten.


8.9.6 Reverse restore and fast reverse restore of FlashCopy SE relations


The reverse restore function can be used only after a full copy relationship has completed. It is therefore not possible with FlashCopy SE. Fast reverse restore, however, works with nocopy relationships. Fast reverse restore is supported in the same way as with classic FlashCopy (see Fast reverse restore on page 98).

8.10 Options and interfaces


Now that we have discussed the options available with FlashCopy, in this section we see how the DS Command-Line Interface (DS CLI) and the DS Storage Manager (DS SM) interfaces support them. See Figure 8-8. Notice that some options cannot be invoked from the DS SM; they are indicated with a cross in Figure 8-8.
Figure 8-8 shows, for each of these functions (Multiple relationship FlashCopy, Consistency Group FlashCopy, Target on existing Metro Mirror or Global Copy primary, Incremental FlashCopy, Remote FlashCopy, Persistent FlashCopy, and Reverse restore/fast reverse restore), whether the DS SM and DS CLI front ends support it.

Figure 8-8 FlashCopy options and interfaces


Chapter 9.

FlashCopy interfaces
The setup of FlashCopy in an open systems environment can be done using different interfaces. In this chapter we explain these interfaces and give some examples of their use for FlashCopy management on the IBM System Storage DS8000. This chapter discusses standard FlashCopy; for information about IBM FlashCopy SE, see IBM FlashCopy SE on page 129.


9.1 FlashCopy management interfaces: Overview


Various interfaces can be used for the configuration and management of FlashCopy or FlashCopy SE when used in an open systems environment with the DS8000.

Note: For additional examples specifically related to the new FlashCopy SE function, refer to Chapter 10, IBM FlashCopy SE on page 129.

These are the DS8000 front end provided interfaces:
- DS Storage Manager: This graphical user interface (DS GUI) runs in a Web browser. The DS GUI can be accessed using the preinstalled browser on the HMC console, through the DS8000 Element Manager on a TPC server such as the SSPC (for new DS8000s with Licensed Machine Code 5.30xx.xx), or, for former DS8000 installations, through a supported Web browser on any workstation connected to the HMC console.
- DS Command Line Interface (DS CLI): This interface provides a set of commands that are executed on a workstation that communicates with the DS HMC.
- TotalStorage Productivity Center for Replication (TPC for Replication): The TPC Replication Manager server, where TPC for Replication runs, connects to the DS8000. TPC for Replication provides management of DS8000 series business continuance solutions, including FlashCopy, Metro Mirror, and Global Mirror. TPC for Replication is covered in Chapter 6, IBM TotalStorage Productivity Center for Replication on page 43.
- DS Open Application Programming Interface (DS Open API): The DS Open API is a set of application programming interfaces that are available to be integrated in programs. The DS Open API is not covered in this book. For information about the DS Open API, refer to the publication IBM System Storage DS Open Application Programming Interface Reference, GC35-0516.

This chapter gives you an overview of the DS CLI and the DS GUI for FlashCopy management.

FlashCopy control with the interfaces


Independently of the interface that is used, when managing FlashCopy the following basic sequence takes place:
1. FlashCopy is initiated by means of an interface. The initialization process of a FlashCopy takes only a few seconds. At the end of this process, FlashCopy is established based on the given parameters. This means that all the necessary metadata structures have been established. No data has yet been copied.
2. FlashCopy runs in the background. This is when FlashCopy copies (in the background) the necessary data to create the point-in-time copy. The parameters given at initialization time define how FlashCopy will work in the background. They also define which subsequent activities can be performed with this FlashCopy.
3. FlashCopy terminates. FlashCopy either terminates automatically when all tracks have been copied, or needs to be terminated explicitly (by means of an interface) if it has been defined as persistent.


9.2 DS CLI and DS GUI: Commands and options


This section summarizes the commands and select options you can use when managing FlashCopy at the local and remote sites.

9.2.1 Local FlashCopy management


Table 9-1 lists the commands you can use, as well as the displayed panel actions and options you can select, when working with the DS8000 provided interfaces DS CLI and DS GUI for local FlashCopy management. Local FlashCopy refers to the situation where the FlashCopy is local, as opposed to a remote FlashCopy. For a discussion of the characteristics of a remote FlashCopy, refer to 8.5, Remote FlashCopy on page 97.
Table 9-1 Local FlashCopy using DS CLI and DS SM

Create a FlashCopy
- Create a local FlashCopy | DS CLI: mkflash
- Create a Space Efficient FlashCopy | DS CLI: mkflash -tgtse | DS GUI: Create Track Space Efficient

Work with an existing FlashCopy
- Display a list of FlashCopy relationships | DS CLI: lsflash | DS GUI: Main panel
- Modify a FlashCopy pair that is part of a Global Mirror relationship to revertible | DS CLI: setflashrevertible | DS GUI: restorable, Record Change Wizard
- Commit data to the target volume | DS CLI: commitflash | DS GUI: Commit changes
- Increment an existing FlashCopy pair | DS CLI: resyncflash (prerequisites: -record and -persist) | DS GUI: resync target
- Change the source-target relationship A→B to B→A | DS CLI: reverseflash | DS GUI: reverse relationship
- Reestablish contents of target B from the contents of source A as it was during the last consistency formation | DS CLI: revertflash | DS GUI: Consistency Groups currently not supported
- Reset a FlashCopy Consistency Group | DS CLI: unfreezeflash | DS GUI: Consistency Groups currently not supported
- Run a new background copy for a persistent FlashCopy | DS CLI: rmflash -cp | DS GUI: background copy

Terminate FlashCopy
- Remove local FlashCopy | DS CLI: rmflash | DS GUI: Delete. The relationship is automatically removed as soon as all data is copied and the FlashCopy pair was not established using the -persist parameter.


9.2.2 Remote FlashCopy management


Table 9-2 lists the commands that you can use when working with the DS8000 provided interface DS CLI for remote FlashCopy management.
Table 9-2 Remote FlashCopy using DS CLI commands

Create a FlashCopy
- Create a remote FlashCopy | DS CLI: mkremoteflash

Work with an existing FlashCopy
- Display a list of FlashCopy relationships | DS CLI: lsremoteflash
- Modify a FlashCopy pair that is part of a Global Mirror relationship to revertible | DS CLI: setremoteflashrevertible
- Commit data to the target volume | DS CLI: commitremoteflash
- Increment an existing FlashCopy pair | DS CLI: resyncremoteflash (prerequisites: -record and -persist)
- Change the source-target relationship A→B to B→A | DS CLI: reverseremoteflash
- Reestablish contents of target B from the contents of source A as it was during the last consistency formation | DS CLI: revertremoteflash
- Reset a FlashCopy Consistency Group | Not available as a remote command
- Run a new background copy for a persistent FlashCopy | DS CLI: rmremoteflash -cp

Terminate FlashCopy
- Remove remote FlashCopy | DS CLI: rmremoteflash. The relationship is automatically removed as soon as all data is copied and the FlashCopy pair was not established using the -persist parameter.

Note: Remote FlashCopy is not supported with the DS GUI or DS Open API interfaces.

9.3 Local FlashCopy using the DS CLI


The DS CLI can be downloaded from the IBM Web site and then installed on a workstation. It communicates with the DS8000 HMC. For detailed information about the DS CLI, refer to the IBM System Storage DS8000 Command-Line Interface User's Guide, SC26-7916.


9.3.1 Parameters used with local FlashCopy commands


In this section we discuss the parameters that can be passed to FlashCopy when using the DS CLI, and what the results are.
Figure 9-1 shows, in matrix form, which parameters (freeze; tgtpprc, tgtoffline, tgtinhibit, tgtonly; dev, record, persist, nocp, seqnum, source:target, fast, cp, source LSS; l, s, activecp, revertible; wait, quiet) apply to the mkflash, lsflash, setflashrevertible, commitflash, resyncflash, reverseflash, revertflash, unfreezeflash, and rmflash commands.

Figure 9-1 Overview of parameters used in DS CLI FlashCopy commands

Figure 9-1 summarizes the parameters and the corresponding DS CLI commands. When FlashCopy receives these parameters, the following actions result:
- freeze: Consistency Group FlashCopy. With the DS CLI, it is possible to establish a Consistency Group by using the -freeze parameter and identifying all FlashCopy pairs and target volumes belonging to it.
- tgtpprc: Establish target on existing Metro Mirror source. When this option is selected, the target volume can be or become a source volume for a Metro Mirror or Global Copy relationship.
- tgtinhibit: Inhibit writes to target volume. While the FlashCopy is active, writes to the target volume are not allowed (inhibited).
- record: Change recording. Activating the change recording option during setup of a FlashCopy enables subsequent refreshes of the target volume. To do so, a second bitmap is created for the source volume, which keeps track of all writes to the source. This bitmap can later be used to refresh the target by copying only the updates from the source to the target.
- persist: Persistent FlashCopy. The FlashCopy relationship will continue to exist until explicitly removed by an interface method. If this option is not selected, the FlashCopy relationship will exist until all data has been copied from the source volume to the target.
- nocp: Full volume background copy. With the nocp parameter, it is possible to indicate whether the data of the source volume will be copied to the target volume in the background. If -nocp is not used, a copy of all data from source to target takes place in the background. With -nocp selected, only updates to the source volume will cause writes to the target volume. This way the time-zero data can be preserved.
- seqnum: Sequence number for FlashCopy pairs. A number that identifies the FlashCopy relationship. Once used with the initial mkflash command, it can be used within subsequent commands to refer to multiple FlashCopy relationships.
- source:target: Identification of source volume and target volume.
- fast: Reverse FlashCopy before the background copy is finished. Allows you to issue the reverseflash command before the background copy is finished.
- cp: Restrict the command to FlashCopy relationships with background copy.
- sourceLSS: Reset the Consistency Group for source logical subsystems.
- s: Shortened display of FlashCopy pairs with the lsflash command. Only the FlashCopy pair IDs display.
- l: Display additional FlashCopy information. The standard output of the lsflash command is enhanced; the values for the copy indicator, out-of-sync tracks, date created, and date synchronized additionally display.
- activecp: Selection of FlashCopy pairs with an active background copy.
- revertible: Selection of FlashCopy pairs with the revertible attribute.

9.3.2 Local FlashCopy commands: Examples


This section explains the DS CLI commands that you can use to manage FlashCopy and shows examples of their use.

Initiate FlashCopy using mkflash


With mkflash, a local FlashCopy can be established. Four coding examples for mkflash are shown in Example 9-1.
Example 9-1 mkflash command examples

Script
mkflash -dev IBM.2107-7506571 -seqnum 01 0001:0101
Date/Time: July 11, 2005 6:29:51 PM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571
CMUC00137I mkflash: FlashCopy pair 0001:0101 successfully created.
mkflash -dev IBM.2107-7506571 -record -seqnum 02 0002:0102
Date/Time: July 11, 2005 6:29:58 PM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571
CMUC00137I mkflash: FlashCopy pair 0002:0102 successfully created.
mkflash -dev IBM.2107-7506571 -persist -seqnum 03 0003:0103
Date/Time: July 11, 2005 6:30:02 PM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571
CMUC00137I mkflash: FlashCopy pair 0003:0103 successfully created.
mkflash -dev IBM.2107-7506571 -nocp -seqnum 04 0004:0104
Date/Time: July 11, 2005 6:30:05 PM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571
CMUC00137I mkflash: FlashCopy pair 0005:0105 successfully created.

Listing of the properties of the FlashCopies
lsflash -dev IBM.2107-7506571 0001-0004
Date/Time: July 11, 2005 6:30:09 PM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571
ID        SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy
====================================================================================================================================
0001:0101 00     1           300     Disabled   Disabled  Disabled   Disabled   Enabled            Enabled            Enabled
0002:0102 00     2           300     Disabled   Enabled   Enabled    Disabled   Enabled            Enabled            Enabled
0003:0103 00     3           300     Disabled   Disabled  Enabled    Disabled   Enabled            Enabled            Enabled
0004:0104 00     4           300     Disabled   Disabled  Disabled   Disabled   Enabled            Enabled            Disabled

The following explanations apply to the cases presented in Example 9-1:
- Example 1: 0001:0101. The FlashCopy between volume 0001 and volume 0101 is established using the default parameters. By default, the following properties are enabled: SourceWriteEnabled, TargetWriteEnabled, and BackgroundCopy (the default if not specified differently using the -nocp parameter). All other properties are disabled. The background copy takes place immediately and, once everything has been copied, the FlashCopy relationship is automatically removed.
- Example 2: 0002:0102. The FlashCopy between volume 0002 and volume 0102 is established with the following FlashCopy properties enabled: Recording, Persistent, and BackgroundCopy. Note: The parameter -persist is automatically added whenever -record is used. The background copy takes place immediately and the relationship remains as a persistent relationship. Using other DS CLI commands, it could be reversed and resynchronized.
- Example 3: 0003:0103. The FlashCopy between volume 0003 and volume 0103 is established with the following FlashCopy properties enabled: Persistent and BackgroundCopy. The background copy takes place immediately. Once the background copy has finished, the FlashCopy relationship will remain because of the persistent flag.
- Example 4: 0004:0104. The FlashCopy between volume 0004 and volume 0104 is established with the -nocp parameter. This means that no full volume background copy will be done; only the data changed in the source is copied to the target prior to changing it. Over time, this could result in the situation where all data is copied to the target; then the FlashCopy relationship would end. It would also end after a background copy is initiated using the DS SM. This way the relationship is temporarily persistent even though the property Persistent is not activated.

Display existing FlashCopy relationships using lsflash


The command lsflash can be used to display FlashCopy relationships and their properties. Parameters can be used with this command to identify the subset of FlashCopy relationships to be displayed. Example 9-2 shows a script with several lsflash commands and the output of the script (this script is logically based on the example for mkflash).


Example 9-2 lsflash command examples #--- Example 1 lsflash -dev IBM.2107-7506571 0004 Date/Time: July 11, 2005 6:40:12 PM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571
ID SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy ==================================================================================================================================== 0004:0104 00 4 300 Disabled Disabled Enabled Disabled Disabled Disabled Enabled

#--- Example 2 lsflash -dev IBM.2107-7506571 0001-0005 Date/Time: July 11, 2005 6:40:29 PM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571
ID SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy ==================================================================================================================================== 0003:0103 00 3 300 Disabled Enabled Enabled Disabled Disabled Disabled Enabled 0004:0104 00 4 300 Disabled Disabled Enabled Disabled Disabled Disabled Enabled 0005:0105 00 5 300 Disabled Disabled Disabled Disabled Disabled Disabled Disabled

#--- Example 3 lsflash -dev IBM.2107-7506571 -l 0001-0005 Date/Time: July 11, 2005 6:40:52 PM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571
ID SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy OutOfSyncTracks DateCreated DateSynced ======================================================================================================================= 0003:0103 00 3 300 Disabled Enabled Enabled Disabled Disabled Disabled Enabled 0 Mon Jul 11 19:30:06 CEST 2005 Mon Jul 11 19:30:06 CEST 2005 0004:0104 00 4 300 Disabled Disabled Enabled Disabled Disabled Disabled Enabled 0 Mon Jul 11 19:30:10 CEST 2005 Mon Jul 11 19:30:10 CEST 2005 0005:0105 00 5 300 Disabled Disabled Disabled Disabled Disabled Disabled Disabled 50085 Mon Jul 11 19:30:13 CEST 2005 Mon Jul 11 19:30:13 CEST 2005

#--- Example 4 lsflash -dev IBM.2107-7506571 -s 0001-0005 Date/Time: July 11, 2005 6:40:59 PM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571
ID ========= 0003:0103 0004:0104 0005:0105

#--- Example 5 lsflash -dev IBM.2107-7506571 -activecp 0001-0004 Date/Time: July 11, 2005 6:41:02 PM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571 #--- Example 6 lsflash -dev IBM.2107-7506571 -record 0001-0004 Date/Time: July 11, 2005 6:41:08 PM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571
ID SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy ==================================================================================================================================== 0003:0103 00 3 300 Disabled Enabled Enabled Disabled Disabled Disabled Enabled

#--- Example 7 lsflash -dev IBM.2107-7506571 -persist 0001-0004 Date/Time: July 11, 2005 6:41:15 PM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571
ID SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy ==================================================================================================================================== 0003:0103 00 3 300 Disabled Enabled Enabled Disabled Disabled Disabled Enabled 0004:0104 00 4 300 Disabled Disabled Enabled Disabled Disabled Disabled Enabled

#--- Example 8 lsflash -dev IBM.2107-7506571 -revertible 0001-0004 Date/Time: July 11, 2005 6:41:22 PM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571 #--- Example 9 lsflash -dev IBM.2107-7506571 -cp 0001-0004 Date/Time: July 11, 2005 6:41:32 PM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571
ID SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy


==================================================================================================================================== 0003:0103 00 3 300 Disabled Enabled Enabled Disabled Disabled Disabled Enabled 0004:0104 00 4 300 Disabled Disabled Enabled Disabled Disabled Disabled Enabled

The following explanations apply to the cases presented in Example 9-2 on page 108:
- Example 1: List FlashCopy information for a specific volume. In this example, the lsflash command shows the FlashCopy relationship information for volume 0004, showing the status (enabled/disabled) of the FlashCopy properties.
- Example 2: List existing FlashCopy relationship information within a range of volumes. In this example, the lsflash command shows the FlashCopy relationship information for the range of volumes 0001 to 0005, showing the properties status (enabled/disabled).
- Example 3: List existing FlashCopy relationships with full information. Using the -l parameter with the lsflash command displays the default output plus information about the following properties: OutOfSyncTracks, DateCreated, and DateSynced.
- Example 4: List volume numbers of existing FlashCopy pairs within a volume range. Using the -s parameter displays only the FlashCopy source and target volume IDs for the specified range of volumes.
- Example 5: List FlashCopy relationships with an active background copy running. Using the -activecp parameter displays only those FlashCopy relationships within the selected range of volumes for which a background copy is actively running. The output format is the default output. In our example there were no active background copies.
- Example 6: List existing FlashCopy relationships with -record enabled. Using the -record parameter displays only those FlashCopy relationships within the selected range of volumes that were established with the -record parameter.
- Example 7: List existing FlashCopy relationships with the Persistent attribute enabled. When using the -persist parameter, only those FlashCopy relationships within the range of selected volumes for which the Persistent option is enabled are displayed.
- Example 8: List existing FlashCopy relationships that are revertible. When using the -revertible parameter, only those FlashCopy relationships within the range of selected volumes for which the Revertible option is enabled are displayed. There were no revertible relationships in our example.
- Example 9: List existing FlashCopy relationships for which BackgroundCopy is enabled. When using the -cp parameter, only those FlashCopy relationships within the range of selected volumes for which the BackgroundCopy option is enabled are displayed.

Set an existing FlashCopy to revertible using setflashrevertible


The command setflashrevertible can be used to modify the revertible attribute of a FlashCopy relationship that is part of a Global Mirror relationship. The FlashCopy properties Recording and Persistent must be enabled to set a FlashCopy relationship to revertible using this command. This command needs to be executed prior to running a commitflash or revertflash.


Example 9-3 illustrates two situations when using the setflashrevertible command.
Example 9-3 setflashrevertible command examples #-----------------------------------------------------------#--- script to set FlashCopy property Revertible to value enabled and display values afterwards #-----------------------------------------------------------#--- Example 1 setflashrevertible -dev IBM.2107-7506571 0002:0102 Date/Time: July 11, 2005 9:34:21 PM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571 CMUC00167I setflashrevertible: FlashCopy volume pair 0002:0102 successfully made revertible. lsflash -dev IBM.2107-7506571 0000-0004
Date/Time: July 11, 2005 9:34:25 PM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571 ID SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy ==================================================================================================================================== 0002:0102 00 0 300 Disabled Enabled Enabled Enabled Enabled Enabled Enabled

#--- Example 2 setflashrevertible -dev IBM.2107-7506571 0003:0103 Date/Time: July 11, 2005 9:53:07 PM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571
CMUN03027E setflashrevertible: 0003:0103: FlashCopy operation failure: action prohibited by current FlashCopy state

The following explanations apply to the cases presented in Example 9-3:
- Example 1: Set the FlashCopy relationship to revertible. This command sets the existing FlashCopy for source volume 0002 and target volume 0102 to revertible. Once the property Revertible is enabled, any subsequent commands will result in an error message similar to the one displayed in Example 2.
- Example 2: Error occurs when trying to set a FlashCopy relationship to revertible. When trying to set to revertible a FlashCopy relationship for which the property Recording is disabled, an error results. The script ends after this command with return code 2, and any commands following the one that caused the error are not executed.

Commit data to target using commitflash


The command commitflash can be used to commit data to a target volume to set consistency between source and target. It is intended to be used in asynchronous remote copy environments such as Global Mirror. Therefore, its usage is discussed in greater detail in Part 6, Global Mirror on page 301, while this section discusses the basic usage of the command. Before the FlashCopy relationship can be committed, it needs to be made revertible. Typically, this is done automatically by an application such as Global Mirror. However, it can also be set manually, as shown in Example 9-4.


Example 9-4 Commit command examples #--- Example 1 lsflash -dev IBM.2107-7506571 0000-0005 Date/Time: July 11, 2005 10:29:29 PM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571
ID SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy ==================================================================================================================================== 0001:0101 00 1 300 Disabled Enabled Enabled Disabled Disabled Disabled Enabled 0005:0105 00 1 300 Disabled Enabled Enabled Disabled Disabled Disabled Enabled

setflashrevertible -dev IBM.2107-7506571 -seqnum 01 0001:0101 0005:0105 Date/Time: July 11, 2005 10:29:35 PM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571 CMUC00167I setflashrevertible: FlashCopy volume pair 0001:0101 successfully made revertible. CMUC00167I setflashrevertible: FlashCopy volume pair 0005:0105 successfully made revertible. lsflash -dev IBM.2107-7506571 0000-0005 Date/Time: July 11, 2005 10:29:39 PM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571
ID SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy ==================================================================================================================================== 0001:0101 00 1 300 Disabled Enabled Enabled Enabled Enabled Disabled Enabled 0005:0105 00 1 300 Disabled Enabled Enabled Enabled Enabled Disabled Enabled

commitflash -dev IBM.2107-7506571 0001-0005
Date/Time: July 11, 2005 10:29:45 PM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571
CMUC00170I commitflash: FlashCopy volume pair 0001:0001 successfully committed.
CMUC00170I commitflash: FlashCopy volume pair 0005:0005 successfully committed.
lsflash -dev IBM.2107-7506571 -l 0000-0005
Date/Time: July 11, 2005 10:36:19 PM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571

ID SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy ==================================================================================================================================== 0001:0101 00 1 300 Disabled Enabled Enabled Disabled Disabled Enabled Disabled 0005:0105 00 1 300 Disabled Enabled Enabled Disabled Disabled Enabled Disabled

#--- Example 2 commitflash -dev IBM.2107-7506571 0003:0103


CMUN03027E commitflash: 0003:0003: FlashCopy operation failure: action prohibited by current FlashCopy state

The following explanations apply to the cases presented in Example 9-4:
- Example 1: Commit the FlashCopy relationship. This example shows the properties of the two FlashCopy relationships 0001:0101 and 0005:0105, using the lsflash command, before and after issuing the setflashrevertible command. After the commitflash command is executed, the properties of the two FlashCopy relationships are listed again.
- Example 2: Error when trying to commit a FlashCopy relationship. When trying to commit a FlashCopy relationship that is not revertible (the property Revertible is disabled), an error results. The script ends after this command with return code 2, and any commands following the one that caused the error are not executed.

Increment FlashCopy using resyncflash


With the resyncflash command, an existing FlashCopy relationship can be incremented. To run this command, the FlashCopy relationship must have the options Recording and Persistent enabled.

Tip: You do not have to wait for the background copy to complete before you do the FlashCopy resynchronization. The resyncflash command can be used at any time.

To make sure an existing FlashCopy relationship can be incremented multiple times, it is necessary to repeat the -record and -persist parameters with the resyncflash command.


Example 9-5 shows examples where the resyncflash command is used.


Example 9-5 resyncflash command examples #--- Example 1 mkflash -dev IBM.2107-7506571 -record -persist -seqnum 01 0001:0101 0005:0105 Date/Time: July 11, 2005 10:47:34 PM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571 CMUC00137I mkflash: FlashCopy pair 0001:0101 successfully created. CMUC00137I mkflash: FlashCopy pair 0005:0105 successfully created. mkflash -dev IBM.2107-7506571 -record -seqnum 03 0003:0103 Date/Time: July 11, 2005 10:47:37 PM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571 CMUC00137I mkflash: FlashCopy pair 0003:0103 successfully created. lsflash -dev IBM.2107-7506571 0000-0005 Date/Time: July 11, 2005 10:47:41 PM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571
ID SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy ==================================================================================================================================== 0001:0101 00 1 300 Disabled Enabled Enabled Disabled Disabled Disabled Enabled 0003:0103 00 3 300 Disabled Enabled Enabled Disabled Disabled Disabled Enabled 0005:0105 00 1 300 Disabled Enabled Enabled Disabled Disabled Disabled Enabled

resyncflash -dev IBM.2107-7506571 -record -persist -seqnum 11 0001:0101 0005:0105 Date/Time: July 11, 2005 10:47:55 PM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571 CMUC00168I resyncflash: FlashCopy volume pair 0001:0101 successfully resynchronized. CMUC00168I resyncflash: FlashCopy volume pair 0005:0105 successfully resynchronized. resyncflash -dev IBM.2107-7506571 seqnum 13 0003:0103 Date/Time: July 11, 2005 10:48:00 PM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571 CMUC00168I resyncflash: FlashCopy volume pair 0003:0103 successfully resynchronized. lsflash -dev IBM.2107-7506571 0000-0005 Date/Time: July 11, 2005 10:48:04 PM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571
ID SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy ==================================================================================================================================== 0001:0101 00 11 300 Disabled Enabled Enabled Disabled Disabled Disabled Enabled 0003:0103 00 13 300 Disabled Disabled Disabled Disabled Disabled Disabled Enabled 0005:0105 00 11 300 Disabled Enabled Enabled Disabled Disabled Disabled Enabled

#--- Example 2 mkflash -dev IBM.2107-7506571 -nocp -seqnum 03 0004:0104 Date/Time: July 11, 2005 11:01:41 PM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571 CMUC00137I mkflash: FlashCopy pair 0004:0104 successfully created. lsflash -dev IBM.2107-7506571 0004 Date/Time: July 11, 2005 11:01:45 PM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571
ID SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy ==================================================================================================================================== 0004:0104 00 3 300 Disabled Disabled Disabled Disabled Disabled Disabled Disabled

resyncflash -dev IBM.2107-7506571 -record -persist -seqnum 14 0004:0104 Date/Time: July 11, 2005 11:01:58 PM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571
CMUN03027E resyncflash: 0004:0104: FlashCopy operation failure: action prohibited by current FlashCopy state

The following explanations apply to the examples shown in Example 9-5:
- Example 1: Increment a FlashCopy relationship. In this example, three FlashCopy relationships are created with the -record and -persist parameters. The resyncflash commands are executed using a different sequence number, which overwrites the one of the current FlashCopy relationship. The sequence number only changes if the resyncflash finishes successfully. The resyncflash for the 0001:0101 and 0005:0105 relationships takes place using the -record and -persist parameters. Because the two parameters are omitted for the 0003:0103 FlashCopy relationship, the two properties Recording and Persistent change to disabled for this FlashCopy relationship. As soon as the background copy for the 0003:0103 FlashCopy relationship finishes, the FlashCopy relationship will terminate.
- Example 2: Error occurs when trying to increment a FlashCopy relationship. When trying to increment a FlashCopy relationship for which the properties Recording and Persistent are disabled, an error results. The script ends after this command with return code 2, and any commands following the one that caused the error are not executed.

Reverse source-target relationship using reverseflash


The command reverseflash can be used to change the direction of a FlashCopy relationship. The former source becomes the target and the former target becomes the source; the data is copied from the target to the source. Example 9-6 shows examples of the use of the reverseflash command.
Example 9-6 reverseflash command examples #--- Example 1 lsflash -dev IBM.2107-7506571 0000-0005 Date/Time: July 11, 2005 11:28:33 PM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571
ID SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy ==================================================================================================================================== 0001:0101 00 1 300 Disabled Enabled Enabled Disabled Disabled Disabled Enabled 0002:0102 00 2 300 Disabled Enabled Enabled Disabled Disabled Disabled Enabled 0003:0103 00 3 300 Disabled Enabled Enabled Disabled Disabled Disabled Enabled 0005:0105 00 1 300 Disabled Enabled Enabled Disabled Disabled Disabled Enabled

reverseflash -dev IBM.2107-7506571 -record -persist 0001:0101 0005:0105 Date/Time: July 11, 2005 11:33:21 PM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571 CMUC00169I reverseflash: FlashCopy volume pair 0001:0101 successfully reversed. CMUC00169I reverseflash: FlashCopy volume pair 0005:0105 successfully reversed. reverseflash -dev IBM.2107-7506571 -record 0002:0102 Date/Time: July 11, 2005 11:33:27 PM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571 CMUC00169I reverseflash: FlashCopy volume pair 0002:0102 successfully reversed. reverseflash -dev IBM.2107-7506571 0003:0103 Date/Time: July 11, 2005 11:33:33 PM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571 CMUC00169I reverseflash: FlashCopy volume pair 0003:0103 successfully reversed. lsflash -dev IBM.2107-7506571 0000-0005 Date/Time: July 11, 2005 11:33:37 PM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571
ID SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy ==================================================================================================================================== 0101:0001 01 1 300 Disabled Enabled Enabled Disabled Disabled Disabled Enabled 0102:0002 01 2 300 Disabled Enabled Enabled Disabled Disabled Disabled Enabled 0103:0003 01 3 300 Disabled Disabled Disabled Disabled Disabled Disabled Enabled 0105:0005 01 1 300 Disabled Enabled Enabled Disabled Disabled Disabled Enabled

#--- Example 2 reverseflash -dev IBM.2107-7506571 -record 0002:0102 Date/Time: July 11, 2005 11:42:17 PM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571 CMUC00169I reverseflash: FlashCopy volume pair 0002:0102 successfully reversed. lsflash -dev IBM.2107-7506571 0002 Date/Time: July 11, 2005 11:42:30 PM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571
ID SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy ==================================================================================================================================== 0102:0002 01 2 300 Disabled Enabled Enabled Disabled Disabled Disabled Enabled

#--- Example 3 reverseflash -dev IBM.2107-7506571 -seqnum 12 -record 0102:0002 Date/Time: July 11, 2005 11:46:34 PM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571 CMUC00169I reverseflash: FlashCopy volume pair 0102:0002 successfully reversed. lsflash -dev IBM.2107-7506571 0002 Date/Time: July 11, 2005 11:46:39 PM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571
ID SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy ==================================================================================================================================== 0002:0102 00 12 300 Disabled Enabled Enabled Disabled Disabled Disabled Enabled

The following explanations apply to the examples shown in Example 9-6 on page 113:

Example 1: Reverse a FlashCopy relationship. In this example, the existing FlashCopy relationships were created with the -record and -persist parameters. The reverseflash command can optionally be given a different sequence number, which replaces the sequence number of the current FlashCopy relationship. The reverseflash for the 0001:0101 and 0005:0105 relationships is done with the -record and -persist parameters. Because these two parameters are omitted for the 0003:0103 FlashCopy relationship, its Recording and Persistent properties change to Disabled. This terminates the 0003:0103 FlashCopy relationship as soon as it is successfully reversed.

Example 2: Reverse a FlashCopy relationship multiple times. It is possible to reverse a FlashCopy relationship multiple times, thus recopying the contents of the original FlashCopy target volume back to the original source volume more than once. In this example, 0002:0102 was already reversed once as part of Example 1. Then changes are made to data residing on volume 0002. A subsequent reverseflash for 0002:0102 eliminates the changes made to 0002 and brings the data on volume 0102 back to volume 0002 as it was at the time of the initial FlashCopy.

Example 3: Reestablish the original FlashCopy direction by reversing again. It is possible to reverse a FlashCopy relationship back again. In Example 3 this is shown for the reversed FlashCopy relationship 0102:0002. Reversing it a second time and referring to it as FlashCopy pair 0102:0002 is similar to establishing a new FlashCopy for the volume pair 0002:0102. In this case, a sequence number provided with the reverseflash command is used to identify the new FlashCopy relationship.

Reset target to contents of last consistency point using revertflash


The command revertflash can be used to reset the target volume to the contents of the last consistency point. Like the commitflash command, it is intended for use in asynchronous environments such as Global Mirror. Before this command can be issued, the relationship must be made revertible, either automatically (as with Global Mirror) or manually using the setflashrevertible command. See Example 9-7.
Example 9-7 revertflash command example #--- Example 1 mkflash -dev IBM.2107-7506571 -record -persist -seqnum 01 0001:0101 Date/Time: July 12, 2005 12:12:20 AM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571 CMUC00137I mkflash: FlashCopy pair 0001:0101 successfully created. mkflash -dev IBM.2107-7506571 -nocp -seqnum 04 0001:0104 0001:0105 Date/Time: July 12, 2005 12:12:23 AM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571 CMUC00137I mkflash: FlashCopy pair 0001:0104 successfully created. CMUC00137I mkflash: FlashCopy pair 0001:0105 successfully created. lsflash -dev IBM.2107-7506571 0000-0005 Date/Time: July 12, 2005 12:12:27 AM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571
ID SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy ==================================================================================================================================== 0001:0101 00 1 300 Disabled Enabled Enabled Disabled Disabled Disabled Enabled 0001:0104 00 4 300 Disabled Disabled Disabled Disabled Disabled Disabled Disabled 0001:0105 00 4 300 Disabled Disabled Disabled Disabled Disabled Disabled Disabled

setflashrevertible -dev IBM.2107-7506571 0001:0101 Date/Time: July 12, 2005 12:12:34 AM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571 CMUC00167I setflashrevertible: FlashCopy volume pair 0001:0101 successfully made revertible. revertflash -dev IBM.2107-7506571 0001 Date/Time: July 12, 2005 12:12:44 AM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571 CMUC00171I revertflash: FlashCopy volume pair 0001:0001 successfully reverted.

In Example 9-7 on page 114, three FlashCopy relationships are created for one source volume: 0001:0101, 0001:0104, and 0001:0105. The revertflash command is executed for source 0001, and because the FlashCopy relationship 0001:0101 has the Recording and Persistent properties enabled, the command refers to this FlashCopy relationship: 0001:0101. Any updates done to volume 0101 will be overwritten.

Run background copy for persistent FlashCopy using rmflash


Additional background copies for persistent FlashCopy relationships can be created using the rmflash command in combination with its -cp parameter. See Example 9-8.
Example 9-8 rmflash command to create a new background copy #--- Example 1 rmflash -dev IBM.2107-7506571 -quiet -cp 0001:0101 Date/Time: July 12, 2005 12:19:29 AM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571 CMUC00143I rmflash: Background copy process for FlashCopy pair 0001:0101 successfully started. The persistent relationship will not be removed.

In Example 9-8, to create a new background copy of a persistent FlashCopy relationship, the existing FlashCopy relationship 0001:0101 is used to create a new background copy.

Remove local FlashCopy using rmflash


The command rmflash can be used to remove a FlashCopy relationship. Unlike other commands, it does not return an error if the request runs for a FlashCopy relationship that does not exist. In scripts it should always be used with the -quiet parameter to avoid the confirmation prompt. See Example 9-9.
Example 9-9 rmflash command example #--- Example 1 rmflash -dev IBM.2107-7506571 -quiet 0001:0101 Date/Time: July 12, 2005 12:22:06 AM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571 CMUC00140I rmflash: FlashCopy pair 0001:0101 successfully removed.

In Example 9-9, the existing FlashCopy relationship 0001:0101 is removed.

9.3.3 FlashCopy Consistency Groups


Typically, large applications such as databases have their data spread across several volumes. In order to create a consistent copy or backup, a FlashCopy of all the volumes should be done at exactly the same point-in-time. FlashCopy Consistency Groups are designed to achieve this exact purpose.

Create FlashCopy Consistency Groups using mkflash


Use the mkflash command with the -freeze parameter to create a FlashCopy Consistency Group. This command causes the DS8000 to briefly prevent I/O to the volumes in the Consistency Group. During this time, any I/O that comes from the host will be returned with a SCSI queue full error, which the host bus adapter will automatically retry. However, if the I/O is frozen for an extended period of time, there will be application errors on the host. For this reason, the Consistency Group should be unfrozen as quickly as possible.

Example 9-10 illustrates the creation of a FlashCopy Consistency Group that contains two FlashCopy pairs.
Example 9-10 Creating FlashCopy Consistency Groups dscli> mkflash -dev IBM.2107-7506571 -freeze 1500-1501:1502-1503 Date/Time: October 24, 2005 3:37:49 AM PDT IBM DSCLI Version: 5.0.5.6 DS: IBM.2107-7506571 CMUC00137I mkflash: FlashCopy pair 1500:1502 successfully created. CMUC00137I mkflash: FlashCopy pair 1501:1503 successfully created.

Reset FlashCopy Consistency Group using unfreezeflash


The unfreezeflash command resets the Consistency Group condition for volumes whose FlashCopy relationships were established with the -freeze parameter. It removes the queue full condition and allows I/O to continue on the source volumes. See Example 9-11. The unfreezeflash command is issued against an entire logical subsystem (LSS).
Example 9-11 unfreezeflash command example #--- Example 1 unfreezeflash -dev IBM.2107-7506571/00 Date/Time: July 12, 2005 12:27:06 AM CEST IBM DSCLI Version: 5.0.3.134 DS: IBM.2107-7506571 CMUC00172I unfreezeflash: FlashCopy consistency group for logical subsystem 00: successfully reset.
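In practice the two commands are used as a pair: mkflash -freeze for every volume pair of the Consistency Group, immediately followed by unfreezeflash for each source LSS involved. The following is a minimal shell sketch of that sequence; it assumes the DS CLI single-shot invocation with a profile file, and the storage image ID, volume pairs, and LSS numbers are hypothetical.

#!/bin/sh
# Sketch: create a FlashCopy Consistency Group and thaw it immediately afterwards.
# Assumptions: a DS CLI profile (dscli.profile) holding the HMC address and
# credentials, and the single-shot invocation mode described in the DS CLI
# Users Guide. Storage image ID and volume pairs are hypothetical.
DEV=IBM.2107-7506571                   # storage image ID (hypothetical)
PAIRS="1500-1501:1502-1503 1600:1700"  # source:target pairs in LSSs 15 and 16

# 1. Freeze I/O on the source LSSs and establish all pairs of the Consistency Group
dscli -cfg dscli.profile mkflash -dev $DEV -freeze $PAIRS || exit 1

# 2. Thaw the source LSSs as quickly as possible so that host I/O can continue
for lss in 15 16; do
    dscli -cfg dscli.profile unfreezeflash -dev $DEV $lss
done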

9.4 Remote FlashCopy using the DS CLI


Remote FlashCopy commands are similar to local FlashCopy commands. The remote commands can be issued whenever DS8000 mirroring takes place from one DS8000 to another. In this situation, the Fibre Channel links between the two DS8000s (which are used for mirroring purposes) are also used to transmit the FlashCopy commands to the remote DS8000. For detailed information about the DS CLI, refer to IBM System Storage DS8000: Command-Line Interface Users Guide, SC26-7916.

9.4.1 Remote FlashCopy commands


The syntax of the remote FlashCopy commands is similar to the syntax of the local FlashCopy commands. The commands themselves have almost identical names, except for the remote character string in the remote FlashCopy command names. Table 9-3 shows the correspondence between the two sets of commands; their actions are also similar.
Table 9-3   Command names for local and remote FlashCopy

Local command         Remote command              Comments
mkflash               mkremoteflash               Establish FlashCopy
lsflash               lsremoteflash               List FlashCopy
setflashrevertible    setremoteflashrevertible    Set FlashCopy to revertible
commitflash           commitremoteflash           Commit FlashCopy on target
resyncflash           resyncremoteflash           Increment FlashCopy
reverseflash          reverseremoteflash          Switch source-target
revertflash           revertremoteflash           Reset to last consistency point
unfreezeflash         (none)                      Reset Consistency Group
rmflash               rmremoteflash               Remove FlashCopy

9.4.2 Parameters used in remote FlashCopy commands


Most of the parameters for remote FlashCopy commands are similar to those for local FlashCopy commands. However, there are two major differences between local FlashCopy and remote FlashCopy commands:
- Each remote command has the parameter -conduit to identify the link over which the command is passed to the secondary DS8000.
- The local FlashCopy command unfreezeflash has no remote equivalent.

Figure 9-2 summarizes the parameters and the corresponding DS CLI commands that can be used when doing remote FlashCopy.
Figure 9-2 DS CLI remote FlashCopy commands and parameters (matrix showing which parameters apply to each of the remote FlashCopy commands)

The description of the parameters is similar to the description presented in 9.3.1 Parameters used with local FlashCopy commands on page 105. In regard to the parameter -conduit, which only applies to remote FlashCopy, the following explanation applies: Within a remote mirror environment, the FlashCopy commands are sent across the mirror paths, thus avoiding the necessity of having separate network connections to the remote site solely for the management of the remote FlashCopy. The -conduit parameter identifies the path to be used for transmitting the commands to the remote site.
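To illustrate how -conduit is used, the following sketch establishes and later increments a FlashCopy at the remote site entirely from the local site. The storage image IDs, conduit LSS, and volume pair are hypothetical, and the exact value expected by -conduit (assumed here to be the local storage image ID plus the source LSS of an established remote mirror path) and the argument forms should be verified in the Command-Line Interface Users Guide for your code level.

#!/bin/sh
# Sketch: drive FlashCopy at the remote site across the mirroring links.
# All IDs are hypothetical; -conduit is assumed to name the local storage
# image and the source LSS of an established remote mirror path.
LOCAL=IBM.2107-7506571         # local (primary) storage image
REMOTE=IBM.2107-7520781        # remote (secondary) storage image
CONDUIT=$LOCAL/10              # source LSS of the remote mirror paths

# Establish an incremental FlashCopy of remote volume 1000 onto remote volume 1100
dscli -cfg dscli.profile mkremoteflash -conduit $CONDUIT -dev $REMOTE \
      -record -persist 1000:1100

# Query the relationship from the local site
dscli -cfg dscli.profile lsremoteflash -conduit $CONDUIT -dev $REMOTE 1000:1100

# Later: refresh the remote target with the changes made since the last copy
dscli -cfg dscli.profile resyncremoteflash -conduit $CONDUIT -dev $REMOTE 1000:1100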

9.5 FlashCopy management using the DS GUI


To use the DS Storage Manager front-end GUI, a supported Web browser must be installed on the workstation. The DS Storage Manager communicates either directly with the DS8000 HMC or with the SSPC. When the SSPC is used (as is necessarily the case for new DS8000s shipped with Licensed Machine Code 5.3.x.x, bundle version 63.x.x.x), you have to launch the Element Manager in TPC to get to the DS8000 GUI. On DS8000 systems that have an SSPC console, you have to specify the network address of that console (see Example 9-12).
Example 9-12 Launching TPC on the SSPC console http://<ip-address-of-SSPC>:9550/ITSRM/app/tpcgui.jnlp

You must then log on to TPC with a TPC user ID and password. To get to the DS8000 GUI, click the Element Management button in the upper left corner (see Figure 9-3).

Figure 9-3 Accessing the DS8000 GUI through the SSPC console.

9.5.1 Initiating FlashCopy using Create


Figure 9-4 shows the FlashCopy Create menu with the DS GUI.

Figure 9-4 FlashCopy Create with DS GUI

After you log in to the DS GUI, follow this sequence (refer to Figure 9-4):
1. Select Real-time manager.
2. Select Copy services.
3. Select FlashCopy.
4. Use the Storage complex, Storage unit, and Storage image pull-down windows to identify the DS8000 for which you would like to initiate a FlashCopy. Then use the Resource type and Specify resource type pull-downs to display the volumes you want to work with on that DS8000.
5. From the Select action pull-down, select Create.

This selection presents several windows, one after the other, to define the attributes that will be used for this FlashCopy:
1. Define relationship type: Select either A single source with a single target or, if you want to allow a Multiple Relationship FlashCopy, A single source with multiple targets.
2. Select source volumes: Select one or multiple source volumes by checking the box at the left of the volume identification. For each of the source volumes, you are afterwards presented with a window to select one or more targets.
3. Select target volumes: Select the target volumes (a source cannot have more than 12 target volumes).

4. Select common options: Figure 9-5 shows the options that can be chosen. The data provided in this window will be used for all defined FlashCopy pairs.

Figure 9-5 Common option window for FlashCopy

5. Verification: In the verification window (not shown), all information regarding the FlashCopy definition is displayed. If it is not possible to establish the FlashCopy relationship (if, for example, a volume was requested to be offline, but is not offline), then an informational message is displayed on this window.

Comparison of parameters for initial FC using DS GUI and the DS CLI


A FlashCopy can be defined either by using the DS GUI with a browser (selecting the Create action for FlashCopy) or by using the DS CLI (with the mkflash command). Using the DS GUI, it is only possible to define FlashCopies that are executed immediately; it is not possible to store FlashCopy tasks for later use with the DS GUI. Table 9-4 shows the corresponding parameters for the various FlashCopy options when using the DS CLI command mkflash and the DS GUI FlashCopy Create selection.
Table 9-4   Comparison of options and parameters used for FlashCopy in DS CLI and DS GUI

Option | Parameter with DS CLI command mkflash | Parameter with DS GUI FlashCopy Create | Comments

Options for the source volume:
Multiple Relationship FlashCopy | list of source:target volumes | Multiple target volumes can be selected during create |
Consistency Groups for FlashCopy | freeze | | Currently not supported with DS GUI front end

Options for the target volume:
FlashCopy target can be Metro Mirror or Global Copy primary | tgtpprc | Establish target on existing Metro Mirror source |
Inhibit writes to target volume | tgtinhibit | Resync target |

Options for the FlashCopy pair:
Identification of FlashCopy pair | dev or source:target | Selected in windows |
Change Recording | record | Enable change recording |
Persistent FlashCopy | persist | Make relationship persistent |
Full volume background FlashCopy | nocp | Initiate background copy |
Sequence number | seqnum | Sequence number for this relationship |

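To relate the two interfaces, the following single mkflash invocation requests roughly what the GUI Create sequence produces when Enable change recording, Make relationship persistent, Inhibit writes to target volume, and a sequence number are selected. The storage image ID, sequence number, and volume pairs are hypothetical; drop any option you do not need.

#!/bin/sh
# Sketch: one mkflash call combining several of the options from Table 9-4.
# Storage image ID, sequence number, and volume pairs are hypothetical.
dscli -cfg dscli.profile mkflash -dev IBM.2107-7506571 \
      -record -persist -tgtinhibit -seqnum 20 \
      1500:1502 1501:1503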
9.5.2 Displaying properties of existing FlashCopy


To display the properties of existing FlashCopy relationships, log in to the DS GUI and follow this sequence (refer to Figure 9-6):
1. Select Real-time manager.
2. Select Copy services.
3. Select FlashCopy.
4. Use the Storage complex, Storage unit, and Storage image pull-down windows to identify the DS8000 you want to work with. Then use the Resource type and Specify resource type pull-downs to display the volumes of interest on that DS8000.

This gives you a list of all active FlashCopy relationships. When you check the box to the left of the FlashCopy of interest, the available Select Actions for this FlashCopy relationship are adjusted based on the attributes of the relationship and the actions that can be performed on it. Selecting the Properties action displays all attributes of the selected FlashCopy.

Note: If you select multiple FlashCopy relationships, the Properties action is not presented. Select only one FlashCopy relationship to view its properties.

Figure 9-6 FlashCopy display properties

This selection gives you a window with two folders: General: In this folder, all properties of the selected FlashCopy are presented (see Figure 9-7).

Figure 9-7 General folder with FlashCopy properties information

Out-of-sync tracks: The window displaying the out-of-sync tracks can be used to monitor how the FlashCopy performs in the background. A refresh interval can be set to refresh the display after a preselected period of time. See Figure 9-8.

Figure 9-8 Display out-of-sync tracks

Properties display: DS CLI versus DS GUI


Table 9-5 compares the way that FlashCopy information is displayed when it is requested with the DS CLI command lsflash and when an existing FlashCopy relationship is selected for display using the DS GUI front end.

Table 9-5   FlashCopy properties as displayed by the DS CLI versus the DS GUI

Description | DS CLI (lsflash) property | DS CLI contents | DS GUI property | DS GUI contents
Source LSS | SrcLSS | # | Source LSS in selection window | From selection list
Sequence number to which the FlashCopy belongs. Can also be used for Consistency Groups. | SequenceNum | ### | Sequence number from overview window | ###
Shows if a background copy is currently running. If the DS CLI returns ActiveCopy=Disabled, this can correspond to out-of-sync tracks > 0 or copy complete on the DS GUI. | ActiveCopy | Enabled or Disabled | Status in overview window | Copy complete, out-of-sync tracks > 0, or background copy running
Shows if recording was selected for both source and target volume to allow for later usage of resync. | Recording | Enabled or Disabled | Change recording | Yes or No
Shows if the FlashCopy is a persistent one. | Persistent | Enabled or Disabled | Relationship will remain | Yes or No
Identifies if the FlashCopy relationship can be changed by copying the contents of the target volume to the source volume. | Revertible | Enabled or Disabled | Restorable | Yes or No
Identifies if the source can be used to write on it. | SourceWriteEnabled | Enabled or Disabled | Source is write-inhibited | Yes or No
Identifies if the target can be used by another server to write on it in parallel to the FlashCopy taking place. | TargetWriteEnabled | Enabled or Disabled | Target is write-inhibited | Yes or No
Identifies if a background copy was already done. | BackgroundCopy | Enabled or Disabled | Background copy initiated | Yes or No
Displayed only when using the -l parameter with the DS CLI | OutOfSyncTracks, DateCreated, DateSynced | A number, Date, Date | Out-of-sync tracks, Created, Last refresh | A number, Date, Date

9.5.3 Reversing existing FlashCopy


To reverse an existing FlashCopy relationship, first display the list of active FlashCopy relationships as described in 9.5.2, Displaying properties of existing FlashCopy on page 121. Then check the box at the left of the FlashCopy of interest; the available Select Actions for this FlashCopy relationship are shown. If the existing FlashCopy relationship can be reversed, you can open the Select Action pull-down and select Reverse relationship. The next window displayed is shown in Figure 9-9. It displays those properties of the existing FlashCopy that can be changed during the reverse process. Changing the values of the parameters and then clicking OK starts the reverse process for the FlashCopy.

Figure 9-9 Parameters or copy options to use with reverse action

9.5.4 Initiating background copy for a persistent FlashCopy relationship


To initiate a background copy for a persistent FlashCopy relationship, start displaying the list of active FlashCopy relationships as described in 9.5.2 Displaying properties of existing FlashCopy on page 121. Then check the box at the left of the FlashCopy of your interest: the available Select Actions for this FlashCopy relationship display. Then choose Select Action Initiate background copy (see Figure 9-10).

Figure 9-10 Initiate background copy

The next window that is presented is shown in Figure 9-11. It prompts for the FlashCopy pairs for which the background copy should run.

Figure 9-11 Prompt window for background copy

9.5.5 Resynchronizing target


To re-synchronize a target volume, start by displaying the list of active FlashCopy relationships, as shown in Figure 9-12. Then check the box at the left of the FlashCopy you want to re-synchronize. Doing so, the available Select Actions for this FlashCopy relationship will be shown. Then select Resync target.

Figure 9-12 Resync target select option to re-synchronize FlashCopy relationship

The prompt window asks for more details for the resync request (see Figure 9-13).

Figure 9-13 Prompt window to detail resync request for FlashCopy relationship

9.5.6 Deleting existing FlashCopy relationship


To delete an existing FlashCopy relationship, start by displaying the list of active FlashCopy relationships, described in 9.5.2 Displaying properties of existing FlashCopy on page 121. Then check the box at the left of the FlashCopy relationship you want to terminate. By doing so, the available Select Actions for this FlashCopy relationship will be shown. Then select Delete (see Figure 9-14).

Figure 9-14 Delete select option to delete FlashCopy relationship

The next window is a prompt asking you to confirm the delete request (see Figure 9-15).

Figure 9-15 Prompt window to confirm delete request for FlashCopy relationship

Chapter 10. IBM FlashCopy SE
IBM FlashCopy SE is functionally not very different from the standard FlashCopy. The concept of Space Efficient volumes with IBM FlashCopy SE relates to the attributes or properties of a DS8000 volume. IBM FlashCopy SE can co-exist with standard FlashCopy. In this chapter we discuss the setup and use of IBM FlashCopy SE. We cover the following topics:
- IBM FlashCopy SE overview
- Setting up Space Efficient volumes
- Doing FlashCopies onto Space Efficient volumes

10.1 IBM FlashCopy SE overview


IBM FlashCopy SE is an optional licensed feature. It can be ordered for any IBM System Storage DS8000 series, and requires DS8000 Licensed Machine Code (LMC) level 5.3.0xx.xx (bundle version 63.0.xx.xx), or later. IBM FlashCopy SE can be managed and configured via the DS GUI, DS CLI, and DS Open API. The major difference between FlashCopy SE and standard FlashCopy, as the name implies, is space efficiency. Space is only needed to store the original data that was changed on the source volumes (see Figure 10-1), and of course any new writes to the target. FlashCopy SE accomplishes space efficiency by pooling the physical storage requirements of many FlashCopy SE volumes into a common repository. A mapping structure is created to keep track of where the FlashCopy SE volume's data is physically located within the repository. A repository is an object within an extent pool. The repository is not seen by the host.

Figure 10-1 Concept of IBM FlashCopy SE (space needed for classic FlashCopy target volumes versus IBM FlashCopy SE virtual target volumes backed by a repository)

When data is read from the target volume, it can be retrieved from the source if it still resides there, just as it would be in standard FlashCopy. If the data resides in the repository, the mapping structure is used to locate it. IBM FlashCopy SE is designed for temporary copies. Because the target storage capacity is smaller than the source, a background copy would not make much sense and is not permitted with IBM FlashCopy SE. Copy duration should generally not last longer than 24 hours unless the source data has little write activity. Durations for typical use cases are expected to generally be less than 8 hours. FlashCopy SE is optimized for use cases where less than 5% of the source volume is updated during the life of the relationship. If more than 20% of the source is expected to change, then standard FlashCopy would likely be a better choice. Standard FlashCopy will generally have superior performance to FlashCopy SE. If performance on the source or target volumes is important, we strongly recommend standard FlashCopy.

Here are some scenarios for the use of FlashCopy SE:
- Creating a temporary copy with FlashCopy SE to dump it to tape.
- Temporary snapshot for application development or DR testing.
- Online backup for different points in time, for example, to protect your data against virus infection.
- Checkpoints (only if the source volumes will undergo moderate updates).
- FlashCopy target volumes in a Global Mirror (GM) environment. However, beware that if the Global Mirror session gets suspended, the repository will start to fill and it could get completely full.

10.2 Space Efficient volumes


FlashCopy SE requires a new type of volume as FlashCopy target: Space Efficient volumes. When a normal volume is created, it occupies the defined capacity on the physical drives. A Space Efficient volume does not occupy physical capacity when it is created. Space gets allocated when data is actually written to the volume. The amount of space that gets physically allocated is a function of the amount of data changes performed on the volume. The sum of all defined Space Efficient volumes can be larger than the physical capacity available. This function is also called thin provisioning.

Space Efficient volumes can be created when the DS8000 has the FlashCopy SE feature. Space Efficient volumes are seen by a server just like normal volumes; a server cannot see any difference.

Note: In the current implementation, Space Efficient volumes are supported as FlashCopy target volumes only.

The idea behind Space Efficient volumes is to save storage when it is only temporarily needed. This is the case with FlashCopy when you use the nocopy option. This type of FlashCopy is typically used with the goal of taking a backup from the FlashCopy target volumes. Without the use of Space Efficient volumes, the target volumes consume the same physical capacity as the source volumes. In practice, however, these target volumes are often nearly empty, because with nocopy, data is only copied to the target on demand, when a write to the source volume occurs. A Space Efficient volume only uses the space needed for updates to the source volume. Because the FlashCopy target volumes are normally kept only until the backup jobs have finished, the changes to the source volumes should be low, and consequently the storage needed for the Space Efficient volumes should be low.

FlashCopy SE target volumes are also very cost efficient when several copies of your volumes are required. For example, to protect your data against logical errors or viruses, you might want to take Space Efficient FlashCopies several times a day. However, there is some overhead involved with FlashCopy SE. See the discussion about performance in 11.5, Performance planning for IBM FlashCopy SE on page 159.

10.3 Repository for Space Efficient volumes


The definition of Space Efficient (SE) volumes begins at the extent pool level. SE volumes are defined from virtual space, in that the size of an SE volume does not initially use physical storage. However, any data written to an SE volume must have enough physical storage to contain this write activity. This physical storage is provided by the repository.

The repository is an object within an extent pool. In some sense it is similar to a volume within the extent pool. The repository has a physical size and a logical size. The physical size of the repository is the amount of space that is allocated in the extent pool. It is the physical space that is available for all Space Efficient volumes in total in this extent pool. The repository is striped across all ranks within the extent pool. There can only be one repository per extent pool.

Important: The size of the repository and virtual space is part of the extent pool definition. Each extent pool can have an SE volume repository, but this physical space cannot be shared between extent pools.

The logical size of the repository is the sum of virtual storage that is available for Space Efficient volumes. As an example, there could be a repository of 100 GB reserved physical storage for which you defined a logical capacity of 200 GB. In this case you could define 10 LUNs with 20 GB each. So the logical capacity can be larger than the physical capacity. Of course, you cannot fill all the volumes with data, because the physical capacity is limited to 100 GB in this example.

Note: In the current implementation of Space Efficient volumes, it is not possible to expand the size of the repository, neither the physical capacity of the repository nor the virtual capacity. Therefore, careful planning for the size of the repository is required before it is used. If a repository needs to be expanded, all Space Efficient volumes within this extent pool must be deleted. Then the repository must be deleted and re-created with a different size.

Space for a Space Efficient volume is allocated when a write occurs, more precisely, when a destage from the cache occurs. The allocation unit is a track, that is, 64 KB for open systems LUNs or 57 KB for CKD volumes. This has to be considered when planning for the size of the repository. The amount of space that gets physically allocated might be larger than the amount of data that was written. If there are 100 random writes of, for example, 8 KB (800 KB in total), we will probably allocate 6.4 MB (100 x 64 KB). If there are other writes changing data within these 6.4 MB, there will be no new allocations at all.

Because space is allocated in tracks, and the system needs to maintain tables recording where it places the physical track and how to map it to the logical volume, there is some overhead involved with Space Efficient volumes. The smaller the allocation unit, the larger the tables and the overhead. The DS8000 has a fixed allocation unit of a track, which is a good compromise between processing overhead and allocation overhead. Figure 10-2 illustrates the concept of Space Efficient volumes.

Figure 10-2 Concept of Space Efficient volumes (a repository for Space Efficient volumes is striped across the ranks of an extent pool, alongside normal volumes; allocated and used tracks come out of the repository, while the virtual repository capacity is only a definition)
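The 100 GB physical / 200 GB virtual example above translates directly into the DS CLI commands that are described in 10.3.2 and 10.3.3. The extent pool ID and volume IDs in this sketch are hypothetical, and if your DS CLI level does not accept a volume ID range on mkfbvol, create the volumes one by one.

#!/bin/sh
# Sketch: a repository with 100 GB physical and 200 GB virtual capacity,
# carved into ten 20 GB Track Space Efficient volumes.
# Extent pool ID (p53) and volume IDs (1730-1739) are hypothetical.
dscli -cfg dscli.profile mksestg -repcap 100 -vircap 200 -extpool p53

# Ten 20 GB TSE volumes consume the 200 GB of virtual capacity; physical space
# is taken from the 100 GB repository only when data is actually written.
dscli -cfg dscli.profile mkfbvol -extpool p53 -cap 20 -sam tse 1730-1739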

10.3.1 Capacity planning for FlashCopy SE


Proper sizing of the amount of physical space required in the repository is essential to the operation of Space Efficient volumes. Currently, the physical size of the repository cannot be increased after it is defined. Using all the available physical space makes the SE volumes unavailable to the host. For Global Mirror applications, the last Consistency Group must be maintained, so no updates can be allowed to the FlashCopy source. This results in a write inhibit condition on the source, and the Global Mirror pair will suspend. It is essential that the initial allocation of physical space made available to SE volumes not be underestimated.

Important: It is essential to properly size the repository. Its capacity cannot be expanded after it is created. We expect that in many cases, 20% of the source volume size will be a good value (beyond 20% utilization, performance might degrade significantly).

How much space is needed for a Space Efficient volume depends on two factors:
- The data change rate on the source volumes
- The lifetime of the FlashCopy SE relationship

You can get information about the write activity with the help of IBM TotalStorage Productivity Center (TPC) for Disk, which collects I/O statistics. From the write data rate (MB/s), we can estimate the amount of changed data by multiplying this number by the planned lifetime of the FlashCopy SE relationship. Assume a set of volumes for a 1 TB database and an average of 3 MB/s write activity. Within 10 hours (36 000 seconds) we update about 100 GB, which is about 10% of the capacity. In many cases the change rate is much lower. However, we cannot be sure that this amount of changed data is identical to the capacity needed for the repository.

There are two factors that are important here:
- The required capacity in the repository could be higher, because a full track (64 KB) is always copied to the repository when there is any change to the source track, even if the change is only 4 KB, for example.
- The required capacity in the repository could be lower, because several changes to the same source data track do not add anything to the repository.

Important: If your source volume has several FlashCopy SE relationships and a source volume track is updated that has not been updated since the last FlashCopies were taken, this track is copied for each FlashCopy SE relationship of the source volume. So this track is copied several times.

Because in most cases we do not know the workload, let us assume that both effects even out each other.

Calculating the repository size with the help of standard FlashCopy


If you already have the standard FlashCopy feature and you plan to use FlashCopy SE, but you do not yet have the feature and want to plan for the needed capacity (or the capacity you can save), there is an easy way to calculate it. This applies whether FlashCopy SE is going to be used in addition to or instead of standard FlashCopy:
1. For a source volume that you want to copy later with FlashCopy SE, establish a standard FlashCopy with the nocopy option onto a temporary regular target volume.
2. Immediately after the FlashCopy, check the OutOfSyncTracks with the lsflash -l command.
3. Wait as long as you plan to keep your FlashCopy target volumes, and then repeat the lsflash -l command and check the OutOfSyncTracks again.
4. Calculate the difference between the OutOfSyncTracks from the first and last query and multiply it by 64 KB. This gives you the space that would have been allocated in a repository for the source volume in question if you had used FlashCopy SE.

You can repeat the procedure for all volumes you want to copy with FlashCopy SE to get a size for the repository. If you currently use Incremental FlashCopy and you plan to use FlashCopy SE, you can check the OutOfSyncTracks with the lsflash -l command right after you issue the resyncflash command to get the changed tracks since the last FlashCopy. When you have determined the average change rate on the source volumes, double that capacity to be on the safe side, and take this as the size of your repository. A small script that automates steps 1 through 4 for one volume pair is sketched below.
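The following shell sketch automates that measurement, using only commands shown in this book (mkflash, lsflash -l, rmflash). The storage image ID, volume IDs, and planned lifetime are hypothetical, and the awk field position for OutOfSyncTracks follows the lsflash -l sample output in Example 10-9; adjust it if your DS CLI level formats the output differently.

#!/bin/sh
# Sketch: estimate FlashCopy SE repository usage for source volume 1720 by
# measuring how many tracks a standard nocopy FlashCopy onto a temporary
# regular target (1750) copies during the planned lifetime.
DEV=IBM.2107-7506571       # hypothetical storage image ID
PAIR=1720:1750             # source and temporary standard target
LIFETIME=36000             # planned lifetime of the SE relationship, in seconds

oos() {   # print OutOfSyncTracks of the pair (field 12 per Example 10-9; adjust if needed)
    dscli -cfg dscli.profile lsflash -l -dev $DEV ${PAIR%%:*} |
        awk -v pair=$PAIR '$1 == pair {print $12}'
}

dscli -cfg dscli.profile mkflash -dev $DEV -nocp $PAIR || exit 1
start=$(oos)
sleep $LIFETIME
end=$(oos)
dscli -cfg dscli.profile rmflash -dev $DEV -quiet $PAIR

# Every copied track is 64 KB; the difference approximates the repository space
echo "Estimated repository usage: $(( (start - end) * 64 / 1024 )) MB"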

Repository overhead
There is some capacity needed in the repository for internal tables. The size depends on the physical and logical size of the repository. The space for these internal tables is allocated in addition to the specified repository size when the repository is created. Usually this additional storage is in the range of about 2% of the repository capacity. However, if you define your virtual capacity much larger than the physical capacity, you will get a different ratio. An estimate for the additional capacity (repoverh) that gets allocated when a repository with a certain repository capacity (repcap) and a certain virtual capacity (vircap) is created can be obtained from the following equation:

repoverh (GB) = 0.01 * repcap (GB) + 0.005 * vircap (GB)

For example, if a repository of 5,000 GB is created and a virtual capacity of 50,000 GB is specified, about 300 GB get allocated in addition to the specified 5,000 GB.

10.3.2 Creating a repository for Space Efficient volumes


Before you can create a Space Efficient volume, you have to create a repository for Space Efficient volumes. There are new DS CLI commands and DS GUI options to deal with repositories. Here are the new DS CLI commands:
- mksestg: Create a repository
- rmsestg: Delete a repository
- chsestg: Change the properties of a repository (currently you cannot change the size of a repository)
- lssestg: List all repositories
- showsestg: Show details for a repository in a specified extent pool

There can be one repository per extent pool. A repository has a physical capacity that is available for storage allocations by Space Efficient volumes and a virtual capacity that is the sum of all LUN/volume sizes of the Space Efficient volumes. The physical repository capacity is allocated when the repository is created.

Working with the DSCLI


Example 10-1 shows the creation of a repository with the DS CLI. If there are several ranks in the extent pool, the repository's extents are striped across the ranks. The minimum repository size is 16 GB; the maximum repository size is determined by the capacity of the extent pool. However, take into account the capacity overhead for the repository (see Repository overhead on page 134). The virtual capacity can be larger than the extent pool capacity.

Tip: If you want to define all storage in an extent pool for a repository, use the DS GUI, as it tells you what capacity is allocated for the repository, including the overhead (see Figure 10-4 on page 137).
Example 10-1 Creating a repository for Space Efficient volumes dscli> mksestg -repcap 100 -vircap 200 -extpool p53 Date/Time: October 17, 2007 11:59:12 IBM DSCLI Version: 5.3.0.977 DS: IBM.2107-7520781 CMUC00342I mksestg:: The space-efficient storage for the extent pool P53 has been created successfully. dscli>

The parameters of the mksestg command are:
- -vircap specifies the virtual capacity. The minimum size is 16 GB for both virtual and repository capacities.
- -repcap or -reppercent specifies the repository size. The -repcap parameter specifies the actual repository size, and -reppercent specifies a percentage of the virtual capacity.
- -captype optionally specifies the capacity unit type: gb (default), cyl, or blocks.
- -recapthreshold optionally sets the user warning threshold. This is the repository threshold, not the virtual threshold. It defaults to 0% available (100% used).

The chsestg command is used to change SE storage, but currently it can only change the user warning threshold. It cannot change either the virtual or the repository capacity size. Repeat this step for all extent pools in which you want to define Space Efficient storage.
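As a small sketch of these parameters in use, the following creates a repository sized as a percentage of its virtual capacity and then displays what was actually allocated. The extent pool ID and capacities are hypothetical, and the percentage form assumes that -reppercent behaves as described above.

#!/bin/sh
# Sketch: 400 GB of virtual capacity with a repository sized at 25% of it (100 GB).
# Extent pool ID and capacities are hypothetical.
dscli -cfg dscli.profile mksestg -extpool p54 -vircap 400 -reppercent 25

# Verify the physical and virtual sizes (and the overhead) that were allocated
dscli -cfg dscli.profile showsestg p54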

Working with the DS GUI


Figure 10-3 shows the creation of a repository by the DS GUI.

Figure 10-3 Creation of a repository by the DS GUI

1. Select Real-time manager.
2. Select Configure storage.
3. Select Extent pools.
4. Check mark an extent pool where you want to create the repository.
5. Select the action Add Space Efficient Storage.

In the next frame displayed (see Figure 10-4), you can specify the physical size of the repository that is allocated on the ranks and the virtual capacity for all Space Efficient volumes within this extent pool. There are also options to set thresholds for warnings when the repository fills up. Of course there are similar options for the DS CLI. For the complete syntax, see IBM System Storage DS8000: Command-Line Interface Users Guide, SC26-7916. When you create a repository with a certain repository capacity, the actual capacity that is allocated in the extent pool is somewhat larger than the specified capacity to hold some internal tables. You also have the option to reserve some percentage of virtual capacity. This can be helpful if many users are allowed to create volumes and you want to prevent the possibility that you would suddenly run out of space. If some capacity is reserved, an administrator can release the rest of the capacity upon becoming aware that the free capacity is short.

Figure 10-4 Specifying the size of a repository

A repository can be deleted with the rmsestg command, and you can get information about the repository with the showsestg command. The lssestg command provides information about all repositories in the DS8000 (see Example 10-4). Example 10-2 shows the output of the showsestg command. You can determine how much capacity within the repository is used by checking the repcapalloc value.
Example 10-2 Getting information about a Space Efficient repository dscli> showsestg p53 Date/Time: October 17, 2007 1:30:53 IBM DSCLI Version: 5.3.0.977 DS: IBM.2107-7520781 extentpoolID P53 stgtype fb datastate Normal configstate Normal repcapstatus below %repcapthreshold 0 repcap (2^30B) 100.0 repcapblocks 209715200 repcapcyls repcapalloc 0.0 %repcapalloc 0 vircap 200.0 vircapblocks 419430400 vircapcyls vircapalloc 0.0 %vircapalloc 0 overhead 3.0 dscli>

In Figure 10-3 on page 136 we saw that the DS GUI also has an action Delete Space Efficient Storage to delete a repository. Here, Figure 10-5 shows the DS GUI when you have selected the action Properties.

Figure 10-5 Properties of a repository

10.3.3 Creation of Space Efficient volumes


Now that we have a repository, we can create Space Efficient volumes within this repository.

Working with the DS CLI


A Space Efficient volume is created by specifying the -sam tse (Track Space Efficient) parameter on the mkfbvol command (see Example 10-3).
Example 10-3 Creating a Space Efficient volume dscli> mkfbvol -extpool p53 -cap 40 -name ITSO-1721-SE -sam tse 1721 Date/Time: October 17, 2007 3:10:13 CEST IBM DSCLI Version: 5.3.0.977 DS: IBM.2107-7520781 CMUC00025I mkfbvol: FB volume 1721 successfully created. dscli>

When we list Space Efficient repositories with the lssestg command (see Example 10-4), we can see that in extent pool P53 we have a virtual allocation of 40 extents (GB), but that the allocated (used) capacity repcapalloc is still zero.
Example 10-4 Getting information about Space Efficient repositories
dscli> lssestg -l Date/Time: October 17, 2007 3:12:11 PM CEST IBM DSCLI Version: 5.3.0.977 DS: IBM.2107-7520781 extentpoolID stgtype datastate configstate repcapstatus %repcapthreshold repcap (2^30B) vircap repcapalloc vircapalloc ====================================================================================================================== P4 ckd Normal Normal below 0 64.0 1.0 0.0 0.0 P47 fb Normal Normal below 0 70.0 282.0 0.0 264.0 P53 fb Normal Normal below 0 100.0 200.0 0.0 40.0 dscli>

This allocation comes from the volumes we just created. To see the allocated space in the repository for just this volume, we can use the showfbvol command (see Example 10-5).
Example 10-5 Checking the repository usage for a volume dscli> showfbvol 1721 Date/Time: October 17, 2007 3:29:30 PM CEST IBM DSCLI Version: 5.3.0.977 DS: IBM.2107-7520781 Name ITSO-1721-SE ID 1721 accstate Online datastate Normal configstate Normal deviceMTM 2107-900 datatype FB 512 addrgrp 1 extpool P53 exts 40 captype DS cap (2^30B) 40.0 cap (10^9B) cap (blocks) 83886080 volgrp ranks 0 dbexts 0 sam TSE repcapalloc 0 eam reqcap (blocks) 83886080 dscli>

Working with the DS GUI


Figure 10-6 shows the creation of a Space Efficient volume by use of the DS GUI. You have to select an extent pool with a repository. You can identify such an extent pool having a non-zero value in the column Available Virtual GB.

Figure 10-6 Selecting an extent pool with a repository for Space Efficient volumes

On the next panel, Define volume characteristics, we define that we want to create a Space Efficient volume (see Figure 10-7).

Figure 10-7 Creation of a Space Efficient volume

To create a Space Efficient volume, the Storage allocation method, Track space efficient (TSE), must be selected. The remaining steps are the same as for a standard volume.

10.4 Performing FlashCopy SE operations


The operations that can be performed with FlashCopy SE are nearly the same as with standard FlashCopy. However, there are two general exceptions:
- Whenever you want to make a FlashCopy onto a Space Efficient volume (for example, with mkflash, resyncflash, or reverseflash), you have to specify the option -tgtse to allow the target volume to be Track Space Efficient.
- On any FlashCopy command, the -cp (full copy) option is not allowed if the target is a Space Efficient volume.

You can still perform a standard FlashCopy (onto a fully provisioned volume) even if you had specified that a FlashCopy onto a Space Efficient volume is allowed. If you work with the DS GUI, there are equivalent options.

Restriction: It is not possible to create a full background copy of the source volume on the target volume when the target volume is a Space Efficient volume.

10.4.1 Creation and resynchronization of FlashCopy SE relationships


Now we do a Space Efficient FlashCopy. Assume that you have two volumes: one standard volume and a Space Efficient volume. Both volumes can be in different extent pools, in different LSSs, anywhere within the same DS8000 (however, in the same LPAR in an LPAR machine). For performance reasons, both volumes should be managed by the same DS8000 server and preferably be managed by different device adapters, but these considerations are of secondary interest here (see Chapter 11, FlashCopy performance on page 153).

Working with the DS CLI


In Example 10-6 we have a standard volume (1720) and a Space Efficient volume (1740). The Space Efficient volume can be identified by the -sam TSE attribute (you have to specify the -l option to see this attribute).
Example 10-6 List of standard and Space Efficient volumes
dscli> lsfbvol -l 1720-1740 Date/Time: October 24, 2007 3:28:12 PM CEST IBM DSCLI Version: 5.3.0.991 DS: IBM.2107-7503461 Name ID accstate datastate configstate deviceMTM datatype extpool sam captype cap (2^30B) ..... ==========================================================================================================..... ITSO-STD-1720 1720 Online Normal Normal 2107-900 FB 512 P3 Standard DS 25.0 ..... ITSO-SE-1740 1740 Online Normal Normal 2107-900 FB 512 P3 TSE DS 25.0 .....

volgrp reqcap (blocks) eam =============================================== V13 52428800 rotatevols V13 52428800 -

When we want to establish a FlashCopy SE relationship between the two volumes, we can use any option that is available for standard FlashCopy like -record or -persist, for example, but we have to specify -tgtse and we cannot specify -cp which means we establish a nocopy relationship (see Example 10-7).
Example 10-7 Establishing a FlashCopy SE relationship dscli> mkflash -tgtse -record -persist 1720:1740 Date/Time: October 24, 2007 1:53:08 PM CEST IBM DSCLI Version: 5.3.0.991 DS: IBM.2107-7503461 CMUC00137I mkflash: FlashCopy pair 1720:1740 successfully created.

If we had tried to do a full copy by specifying the -cp option, we would get a message as shown in Example 10-8.
Example 10-8 Trying to do a full copy dscli> mkflash -tgtse -record -persist -cp 1720:1740 Date/Time: October 24, 2007 3:49:52 IBM DSCLI Version: 5.3.0.991 DS: IBM.2107-7503461 CMUN02649E mkflash: 1720:1740: The task cannot be initiated. Either you did not specify the Permit space-efficient Target or Secondary option, or at least one of the options that you have specified is not supported for a space-efficient target or secondary volume.

Example 10-9 shows the result of an lsflash -l command. We can see that BackgroundCopy is disabled when we do the Space Efficient FlashCopy. The AllowTgtSE Enabled attribute indicates that it actually is a FlashCopy SE relationship.
Example 10-9 Listing a Space Efficient relationship dscli> lsflash -l 1720
Date/Time: October 24, 2007 3:57:12 PM CEST IBM DSCLI Version: 5.3.0.991 DS: IBM.2107-7503461 ID SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled ======================================================================================================================== 1720:1740 17 0 60 Disabled Enabled Enabled Disabled Enabled Enabled

BackgroundCopy OutOfSyncTracks DateCreated DateSynced State AllowTgtSE ========================================================================================================= Disabled 409600 Wed Oct 24 15:55:55 CEST 2007 Wed Oct 24 15:55:55 CEST 2007 Valid Enabled

The lsflash command has an option to show only FlashCopy SE relationships: -tgtse. When using this command, you should specify a range of volume addresses where you want to look for FlashCopy SE relationships (see Example 10-10 on page 142).

Example 10-10 Listing FlashCopy SE relationships dscli> lsflash -tgtse 1700-1750


Date/Time: October 25, 2007 2:13:49 PM CEST IBM DSCLI Version: 5.3.0.991 DS: IBM.2107-7503461 ID SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy ==================================================================================================================================== 1720:1740 17 0 60 Disabled Disabled Disabled Disabled Enabled Enabled Disabled

In Example 10-11 we include the resyncflash command with some additional options just to show that they can be used as in standard FlashCopy operations. Only the -tgtse parameter is important for FlashCopy SE.
Example 10-11 Resynchronizing a FlashCopy SE pair dscli> resyncflash -record -persist -tgtpprc -tgtinhibit -tgtse 1720:1740 Date/Time: October 24, 2007 4:16:41 IBM DSCLI Version: 5.3.0.991 DS: IBM.2107-7503461 CMUC00168I resyncflash: FlashCopy volume pair 1720:1740 successfully resynchronized.

A normal reverseflash operation is not possible for a FlashCopy pair that is in a nocopy relationship (a FlashCopy SE relationship always is a nocopy relationship), but you can do a fast reverse restore operation by using the command reverseflash -fast.

Working with the DS GUI


When you are setting up a FlashCopy SE relationship with the DS GUI, you have to select the option Allow target volumes to be Space Efficient volumes on the Define relationship type screen (see Figure 10-8).

Figure 10-8 Creating a FlashCopy SE with the DS GUI

On the next screen, you define your source volume as usual. When you come to the Select target volume screen, you want to select a Space Efficient volume. You can recognize such volumes by examining the Storage Allocation attribute, which is TSE for Space Efficient volumes (see Figure 10-9). The other parameters that you can specify are the same as in standard FlashCopy. You should not select the option Initiate background copy because FlashCopy SE does not allow a full copy. If you select that option, you will get an error message and the FlashCopy will fail. For the same reason, you should not select the action Initiate background copy in the panel shown in Figure 10-10. However, let us now select the Properties action in that panel.

Figure 10-9 Selecting a Space Efficient target

Figure 10-10 Getting information on a FlashCopy SE relationship

Figure 10-11 shows the options that are in effect for this FlashCopy SE relationship. The last two options, Relationship failed if Space Efficient target volume full and Source write inhibited if Space Efficient target volume full cannot be changed in the current implementation of FlashCopy SE.

Figure 10-11 Properties of a FlashCopy SE relationship

If we want to check how much space is actually allocated for a Space Efficient volume, we go to Real-time Manager -> Configure Storage -> Volumes - Open systems, select a Space Efficient volume, and select the Properties action. In Figure 10-12 we can see that the volume we have selected currently occupies 0.3 GB of physical storage.

Figure 10-12 Checking the allocated space for a Space Efficient volume

In a similar way, by selecting Real-time Manager -> Configure Storage -> Extent pools, we can select an extent pool with a virtual capacity (which means that the extent pool has a repository) and select the Properties action for that extent pool. Figure 10-13 shows an example of an extent pool that has 4.7 GB of capacity currently allocated.

Figure 10-13 Properties of an extent pool with a repository for Space Efficient storage

10.4.2 Removing FlashCopy relationships and releasing space


When changes are made to the FlashCopy source volume, before a destage to the source volume happens, the original data track of the source volume that is to be modified is copied to the repository that is associated with the Space Efficient target volume. In this way, the repository gets filled with data as shown in Example 10-12.
Example 10-12 A repository is filled with data
dscli> lssestg -l Date/Time: October 24, 2007 4:50:02 PM CEST IBM DSCLI Version: 5.3.0.991 DS: IBM.2107-7503461 extentpoolID stgtype datastate configstate repcapstatus %repcapthreshold repcap (2^30B) vircap repcapalloc vircapalloc ====================================================================================================================== P3 fb Normal Normal below 0 50.0 100.0 0.0 37.0 dscli> lssestg -l Date/Time: October 24, 2007 4:56:14 PM CEST IBM DSCLI Version: 5.3.0.991 DS: IBM.2107-7503461 extentpoolID stgtype datastate configstate repcapstatus %repcapthreshold repcap (2^30B) vircap repcapalloc vircapalloc ====================================================================================================================== P3 fb Normal Normal below 0 50.0 100.0 2.6 37.0 dscli> lssestg -l Date/Time: October 24, 2007 5:05:02 PM CEST IBM DSCLI Version: 5.3.0.991 DS: IBM.2107-7503461 extentpoolID stgtype datastate configstate repcapstatus %repcapthreshold repcap (2^30B) vircap repcapalloc vircapalloc ====================================================================================================================== P3 fb Normal Normal below 0 50.0 100.0 8.9 37.0
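Because a full repository makes the SE target volumes unavailable (and, with Global Mirror, can suspend the session), it is worth watching repcapalloc while SE relationships are active. The following sketch polls lssestg -l for one extent pool and warns when a chosen percentage of the repository is allocated. The extent pool ID and threshold are hypothetical, and the awk field positions follow the sample output above; adjust them if your DS CLI level formats the columns differently.

#!/bin/sh
# Sketch: warn when the repository of extent pool P3 is more than 80% allocated.
# Field positions ($7 = repcap, $9 = repcapalloc) follow the lssestg -l output
# shown in Example 10-12; adjust them for your DS CLI level.
POOL=P3
LIMIT=80
while true; do
    dscli -cfg dscli.profile lssestg -l |
        awk -v pool=$POOL -v limit=$LIMIT '
            $1 == pool {
                pct = ($9 / $7) * 100
                printf "%s: %.1f%% of the repository allocated\n", pool, pct
                if (pct > limit) print "WARNING: repository usage above threshold"
            }'
    sleep 300    # check every five minutes
done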

If you were to use a normal rmflash command to withdraw a FlashCopy relationship, the target volume would be in an undefined state, as is always the case with nocopy relationships when they are withdrawn, but the allocated space for that Space Efficient volume is still allocated and uses up space in the repository.

There are four ways to get rid of that space:
- By specifying -tgtreleasespace on the rmflash command
- By using the initfbvol -action releasespace command
- Whenever you do a new FlashCopy SE onto the Space Efficient target volume
- When you delete the Space Efficient volume with the rmfbvol command

Similar functions are available when using the DS GUI.

Using the DS CLI


You can release space when you withdraw a FlashCopy SE relationship. The rmflash command has an option to release space: -tgtreleasespace. When you withdraw the FlashCopy SE relationship with this option, the space for the Space Efficient target volume is released, not immediately, but after a short while, when all the data of your Space Efficient volume is gone.
Example 10-13 Releasing space when withdrawing a relationship
dscli> rmflash -tgtreleasespace -quiet 1720:1740 Date/Time: October 24, 2007 5:21:18 PM CEST IBM DSCLI Version: 5.3.0.991 DS: IBM.2107-7503461 CMUC00140I rmflash: FlashCopy pair 1720:1740 successfully removed. dscli> lssestg -l Date/Time: October 24, 2007 5:21:25 PM CEST IBM DSCLI Version: 5.3.0.991 DS: IBM.2107-7503461 extentpoolID stgtype datastate configstate repcapstatus %repcapthreshold repcap (2^30B) vircap repcapalloc vircapalloc ====================================================================================================================== P3 fb Normal Normal below 0 50.0 100.0 8.0 37.0 dscli> lssestg -l Date/Time: October 24, 2007 5:21:35 PM CEST IBM DSCLI Version: 5.3.0.991 DS: IBM.2107-7503461 extentpoolID stgtype datastate configstate repcapstatus %repcapthreshold repcap (2^30B) vircap repcapalloc vircapalloc ====================================================================================================================== P3 fb Normal Normal below 0 50.0 100.0 2.6 37.0 dscli> lssestg -l Date/Time: October 24, 2007 5:21:49 PM CEST IBM DSCLI Version: 5.3.0.991 DS: IBM.2107-7503461 extentpoolID stgtype datastate configstate repcapstatus %repcapthreshold repcap (2^30B) vircap repcapalloc vircapalloc ====================================================================================================================== P3 fb Normal Normal below 0 50.0 100.0 0.0 37.0

However, the Space Efficient volume still exists with the same virtual size, and it can be reused for another FlashCopy SE relationship. Incidentally, if the virtual size of the Space Efficient volume does not match the size of your new source volume, you can dynamically expand the virtual size with the chfbvol -cap newsize volume command. However, you cannot make the volume smaller; if you want to make it smaller, you must delete it and re-create it with a different size.

If you did not specify the -tgtreleasespace parameter on the rmflash command, you can use the initfbvol -action releasespace volume command to release space for the specified volume (see Example 10-14).
Example 10-14 Releasing space with the initfbvol command

dscli> initfbvol -action releasespace 1740
Date/Time: October 25, 2007 9:35:32 IBM DSCLI Version: 5.3.0.991 DS: IBM.2107-7503461
CMUC00337W initfbvol: Are you sure that you want to submit the command releasespace for the FB volume 1740? [Y/N]: y
CMUC00340I initfbvol:: 1740: The command releasespace has completed successfully.


Important: Your DS8000 user ID needs Administrator rights to use the initfbvol command. After you have issued this command for a volume, the volume is empty; all space is released, but the virtual volume still exists.
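As mentioned above, if the virtual size of a Space Efficient target volume no longer matches a new source volume, you can expand it with the chfbvol command. The following is only a sketch; the volume ID and capacity value are hypothetical, and the capacity units depend on how the volume was created, so verify the syntax against the DS CLI reference first.

#--- hypothetical: expand the virtual capacity of Space Efficient volume 1740
#--- (capacity value and units are illustrative; shrinking the volume is not possible)
chfbvol -cap 20 1740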

Working with the DS GUI


When you want to withdraw a FlashCopy SE relationship by selecting the Delete action in the panel shown in Figure 10-10 on page 143, a subsequent panel is displayed where you have to confirm your action. This panel has an option that is check-marked by default: Eliminate data and release allocated target space on Space Efficient target volumes. The option is self-explanatory.

Figure 10-14 Withdrawing a FlashCopy SE relationship with the DS GUI

Suppose that you had not selected the aforementioned option, you do not intend to use the Space Efficient volume for another FlashCopy right now, and you do not want to delete it either. In this case, you can initialize the Space Efficient volume to release space by selecting the action Initialize TSE Volume in the Real-time Manager -> Configure Storage -> Volumes - Open systems panel, after you have selected the volume that you want to initialize (see Figure 10-15).


Figure 10-15 Initializing a Space Efficient volume

10.4.3 Other FlashCopy SE operations


There are other operations for standard FlashCopy that we have not yet discussed:
- commitflash
- revertflash
- unfreezeflash
- setflashrevertible

All these commands can be used with FlashCopy SE in the same way as with standard FlashCopy, without any change. The setflashrevertible command has a -tgtse option, which must be specified when dealing with Space Efficient volumes. The same actions are available in the DS GUI. We do not discuss them here any further.

There is also a set of remote FlashCopy commands:
- commitremoteflash
- resyncremoteflash
- lsremoteflash
- mkremoteflash
- revertremoteflash
- rmremoteflash
- setremoteflashrevertible

These commands can be used with FlashCopy SE in the same way as with standard FlashCopy. For some commands, you have to specify the -tgtse option if you work with Space Efficient volumes; we discussed this parameter for the local FlashCopy commands. For more information about these commands, see IBM System Storage DS8000: Command-Line Interface User's Guide, SC26-7916.


10.4.4 Working with Space Efficient volumes


You can work with a Space Efficient volume in the same way as with a normal volume. You can mount it to a server or host, read from it and write to it, use it in remote copy relationships, and so on. Space Efficient volumes, however, are not supported for production use.

Important: Space Efficient volumes are supported as FlashCopy target volumes exclusively. In other words, trying to use a Space Efficient volume for anything other than a FlashCopy target will not cause a failure, but it is not supported.

10.4.5 Monitoring repository space and out-of-space conditions


Since Space Efficient volumes can be over-provisioned, which means that the sum of all virtual volume sizes can be larger than the physical repository size, you have to monitor the free space available in the repository.

Setting a threshold for a repository


By default, warnings are issued when 15% and when 0% of free space is left in the repository. You can set your own warning threshold to be informed when the repository has reached a certain fill level. In Example 10-15, we set the threshold to 50% (a more practical threshold would be 20% of space left).
Example 10-15 Setting a notification threshold for the repository

dscli> chsestg -repcapthreshold 50 p3
Date/Time: October 25, 2007 3:09:32 PM CEST IBM DSCLI Version: 5.3.0.991 DS: IBM.2107-7503461
CMUC00343I chsestg:: The space-efficient storage for the extent pool P3 has been modified successfully.
dscli> lssestg -l
Date/Time: October 25, 2007 3:09:38 PM CEST IBM DSCLI Version: 5.3.0.991 DS: IBM.2107-7503461
extentpoolID stgtype datastate configstate repcapstatus %repcapthreshold repcap (2^30B) vircap repcapalloc vircapalloc
======================================================================================================================
P3           fb      Normal    Normal      below        50               16.0           60.0   0.0         50.0

When the repository fills up and the threshold is reached, a warning is sent out, depending on the notification options set on the DS8000. If SNMP notification is configured, we receive a trap as shown in Example 10-16. You can also configure e-mail notification.
Example 10-16 SNMP alert when repository threshold reached

2007/10/25 15:22:26 CEST
Space Efficient Repository or Over-provisioned Volume has reached a warning watermark
UNIT: Mnf Type-Mod SerialNm
      IBM 2107-922 75-03461
Volume Type: 0
Reason code: 0
Extent Pool ID: 3
Percentage Full: 50

You can also specify a threshold for the repository when you use the DS GUI. For example, you can specify it in the panel shown in Figure 10-13 on page 145.


You can also set a limit and a threshold for the virtual capacity in a repository. This is done with the mkextpool or chextpool command. There are new options -virextentlimit, -virlimit, and -virthreshold to enable this limit and set a threshold.
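As a sketch only, such a virtual capacity limit and threshold might be set as shown below. The option names are those listed above, but the exact value syntax is an assumption on our part, so check the DS CLI reference before using it.

#--- hypothetical sketch: limit the virtual capacity of extent pool P3 and set a warning threshold
#--- (value syntax for -virextentlimit, -virlimit, and -virthreshold is assumed, not verified)
chextpool -virextentlimit enable -virlimit 120 -virthreshold 15 P3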

Repository full condition


When the repository becomes full, FlashCopy SE relationships with Space Efficient volumes in the full repository fail at the next write operation (see Example 10-17 for a DS CLI example or Figure 10-16 for an example with the DS GUI).
Example 10-17 State of a FlashCopy SE relationship when the repository is full

dscli> lsflash -l 1720
Date/Time: October 25, 2007 3:56:49 PM CEST IBM DSCLI Version: 5.3.0.991 DS: IBM.2107-7503461
ID        SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled
==================================================================================================
1720:1740 -
TargetWriteEnabled BackgroundCopy OutOfSyncTracks DateCreated DateSynced State      AllowTgtSE
==============================================================================================
Tgt Failed -

When space is exhausted while a FlashCopy SE relationship exists, the relationship is placed in a failed state, which means that the target copy becomes invalid. Writes continue to the source volume. While the space is depleted, the relationship continues to exist in a failed state: reads and writes to the source continue to be allowed, but updated tracks are not copied to the target. Also, all reads and writes to the target are failed, causing any jobs running against the target volume to fail, rather than to succeed without data integrity.

To clear this condition for the target volume, you must withdraw the relationship and release the space on the Space Efficient volume. You can also establish a new FlashCopy SE relationship, which releases all space in the repository associated with the target volume. Another possibility is to use the initfbvol command to release the space.

As space begins to approach depletion on a given repository, the control unit begins delaying writes to the Space Efficient volumes backed by that repository volume. This allows the data in cache to be destaged before the space is completely exhausted, which minimizes the amount of data that gets trapped in NVS when the space is finally exhausted. The delay is based on how many updates are occurring and how much space is left.

Figure 10-16 Status of a failed FlashCopy SE relation
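The following sketch shows one possible DS CLI recovery sequence for a failed relationship like the one above. It uses only commands and options that appear in this book's examples; the volume IDs are the ones from our test environment:

#--- withdraw the failed relationship and release the space of the Space Efficient target
rmflash -quiet -tgtreleasespace 1720:1740
#--- verify that the repository allocation (repcapalloc) has dropped
lssestg -l
#--- optionally, start a new FlashCopy SE relationship once repository space is available again
mkflash -tgtse -nocp 1720:1740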


In a Global Mirror environment where the FlashCopy target volumes are Space Efficient volumes, the behavior of a FlashCopy SE relationship is different when the repository becomes full. In this case, we want to keep the FlashCopy relationship because it represents a consistent state of the Global Mirror target volumes. The FlashCopy source volumes are put into a write-source inhibit state. Because these FlashCopy source volumes are target volumes of Global Mirror, the Global Mirror pairs are suspended at the next mirror write to the remote volume.


Chapter 11. FlashCopy performance

In this chapter we describe best practices when configuring FlashCopy for specific environments or scenarios. We cover the following topics:
- FlashCopy performance overview
- FlashCopy establish performance
- Background copy performance
- FlashCopy impact to applications
- FlashCopy options
- FlashCopy scenarios
- IBM FlashCopy SE performance considerations


11.1 FlashCopy performance overview


Many parameters can affect the performance of FlashCopy operations. It is important to review the data processing requirements of your environment and then select the appropriate FlashCopy options. This chapter examines when to use copy versus no copy and where to place the FlashCopy source and target volumes (LUNs). We also discuss when and how to use incremental FlashCopy, which should be evaluated for most applications. IBM FlashCopy SE needs some special considerations; we discuss them in 11.5, Performance planning for IBM FlashCopy SE on page 159.

Note: This chapter is equally valid for System z volumes and open systems LUNs. In the remainder of this chapter we use only the term volume, but the text applies equally to LUNs unless otherwise noted.

Terminology
Before proceeding with the discussion of FlashCopy best practices, let us review some of the basic terminology used in this chapter:
- Server: In a DS8000, a server is effectively the software that uses a logical partition (LPAR) and that has access to a percentage of the memory and processor resources available on a processor complex. The DS8000 models 931 and 932 have one pair of servers, Server 0 and Server 1, one on each processor complex, both integrated in a single Storage Facility Image (SFI). The DS8000 model 9B2 can have two pairs of servers, two on each processor complex, each pair integrating one of the two possible SFIs. You can issue the lsserver command to see the available servers.
- Device Adapter (DA): A physical component of the DS8000 that provides communication between the servers and the storage devices. The lsda command lists the available device adapters.
- Rank: An array site is made into an array, which in turn is made into a rank. For the DS8000, a rank is a collection of eight disk drive modules (DDMs). The lsrank command displays detailed information about the ranks.

11.1.1 Distribution of the workload: Location of source and target volumes


In general, you can achieve the best performance by distributing the load across all of the resources of the DS8000. In other words, you should carefully plan your usage so that the load is:
- Spread evenly across disk subsystems
- Within each disk subsystem, spread evenly across servers
- Within each server, spread evenly across device adapters
- Within each device adapter, spread evenly across ranks

Tip: When you have an Extent Pool with more than one rank, you can use Storage Pool Striping.

Storage Pool Striping is an allocation method for volumes in which the extents for a volume are allocated on the ranks of the Extent Pool using a round-robin algorithm (see IBM System Storage DS8000 Series: Architecture and Implementation, SG24-6786). This allocation method can greatly improve the throughput for a single volume.


On the other hand, if you stripe all your data across all ranks and you lose just one rank, for example, because two drives in a RAID 5 array fail at the same time, all your data is gone. If you do not protect your data with a mirroring implementation such as Metro Mirror, it is better to limit your Extent Pools to four to eight ranks.

It is always best to locate the FlashCopy target volume on the same DS8000 server as the FlashCopy source volume. It is also good practice to locate the FlashCopy target volume on a different device adapter (DA) than the source volume, although in some cases this is not an important consideration. Another choice is whether to place the FlashCopy target volumes on the same ranks as the FlashCopy source volumes; in general, it is best not to place the two volumes in the same rank. Refer to Table 11-1 for a summary of the volume placement considerations.

Tip: If the FlashCopy target volume is on the same rank as the FlashCopy source volume, you run the risk of a rank failure causing the loss of both the source and the target volumes. However, this is only of importance when you do full background copies.

Table 11-1   FlashCopy source and target volume location

                                     Server        Device Adapter             Rank
FlashCopy establish performance      Same server   Unimportant                Different ranks
Background copy performance          Same server   Different device adapter   Different ranks
FlashCopy impact to applications     Same server   Unimportant                Different ranks

Tip: To find the relative location of your volumes, you can use the following procedure:
1. Use the lsfbvol command to learn which Extent Pool contains the relevant volumes.
2. Use the lsrank command to display both the device adapter and the rank for each Extent Pool.
3. To determine which server contains your volumes, look at the Extent Pool name. Even-numbered Extent Pools always belong to Server 0, while odd-numbered Extent Pools always belong to Server 1.
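A minimal DS CLI sketch of this procedure follows. The volume ID is hypothetical, and the exact output columns can vary with the code level:

#--- step 1: find the extent pool of volume 6100 (check the extent pool column of the output)
lsfbvol 6100
#--- step 2: list the ranks with their device adapter and extent pool assignments
lsrank -l
#--- step 3: even-numbered extent pools (P0, P2, ...) belong to Server 0, odd-numbered pools to Server 1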

11.1.2 LSS/LCU versus rank: Considerations


In the DS8000 it is much more meaningful to discuss volume location in terms of ranks and not in terms of logical subsystem (LSS) or logical control unit (LCU). On the ESS 800 and earlier IBM disk subsystems, the physical location of the volumes was described in terms of the LSS/LCU. If there was more than a single rank in the LSS/LCU, each of the ranks held a range of volumes from that specific LSS/LCU. The LSSs/LCUs in a DS8000 disk subsystem are logical constructs that are no longer tied to predetermined ranks. Within the DS8000, an LSS/LCU can be configured to span one or more ranks and is not limited to specific ranks. Individual ranks can contain volumes from more than a single LSS/LCU, which was not possible before the introduction of the DS8000.


However, LSSs/LCUs are associated with one server of the DS8000: even-numbered LSSs/LCUs are managed by server 0, and odd-numbered LSSs/LCUs are managed by server 1. Remember that the first two digits xx of a volume's address xxnn denote the LSS/LCU. To select an ideal (from a performance standpoint) target volume yymm for a source volume xxnn, xx and yy should both be even or both be odd. For example, for a source volume 1720 (LSS 17, which is odd and therefore managed by server 1), a target volume in another odd-numbered LSS, such as 19, keeps the FlashCopy pair on the same server.

11.1.3 Rank characteristics


Normal performance planning also includes the task of selecting disk drives (capacity and RPM) and RAID configurations that match the performance needs of the applications. Be aware that with FlashCopy nocopy relations, the DS8000 does an on-demand copy for each first change to a source volume track. If the disks of the target volume are slower than the disks of the source volume, this might slow down production I/O. A full copy FlashCopy produces quite high write activity on the disk drives of the target volume. Therefore, it is always good practice to use target volumes on ranks with the same characteristics as the source volumes.

Finally, you can achieve a small performance improvement by using identical rank geometries for both the source and target volumes. In other words, if the source volumes are located on a rank with a 7+p/RAID 5 configuration, then the target volumes should also be located on a rank configured as 7+p/RAID 5.

11.2 FlashCopy establish performance


The FlashCopy of a volume has two distinct periods:
- The initial logical FlashCopy (also called establish)
- The physical FlashCopy (also called background copy)

The FlashCopy establish phase, or logical FlashCopy, is the period of time when the microcode prepares the metadata, such as the bitmaps, necessary to create the FlashCopy relationship so that the microcode can properly process subsequent reads and writes to the related volumes. During this logical FlashCopy period, no writes are allowed to the source and target volumes. However, this period is very short.

After the logical relationship has been established, normal I/O activity is allowed to both source and target volumes according to the options selected. There is a modest impact to logical FlashCopy establish performance when using incremental FlashCopy, because the DS8000 must create additional metadata (bitmaps). However, the impact is negligible in most cases.

Finally, the placement of the FlashCopy source and target volumes has an effect on establish performance. Refer to the previous section for a discussion of this topic, as well as to Table 11-1 on page 155 for a summary of the recommendations.

11.3 Background copy performance


The background copy phase, or physical FlashCopy, is the actual movement of the data from the source volume to the target volume. If the FlashCopy relationship was established requesting the -nocp (no copy) option, then only write updates to the source volume will force a copy from the source to the target. This forced copy is also called a copy-on-demand.


Note: The term copy-on-demand describes a forced copy from the source to the target because a write to the source has occurred. This occurs on the first write to a track. Because the DS8000 writes to non-volatile cache, there is typically no direct response time delay on host writes; the forced copy only occurs when the write is de-staged onto disk.

If the copy option was selected, then upon completion of the logical FlashCopy establish phase, the source is copied to the target in an expedient manner. If a large number of volumes have been established, do not expect to see all pairs actively copying data as soon as their logical FlashCopy relationship is completed. The DS8000 microcode has algorithms that limit the number of active pairs copying data. This algorithm tries to balance active copy pairs across the DS8000 device adapter resources. Additionally, the algorithm limits the number of active pairs such that there remains bandwidth for host or server I/Os.

Tip: The DS8000 gives higher priority to application performance than to background copy performance. This means that the DS8000 throttles the background copy if necessary, so that applications are not unduly impacted.

The recommended placement of the FlashCopy source and target volumes, regarding the physical FlashCopy phase, was discussed in the previous section. Refer to Table 11-1 on page 155 for a summary of the conclusions. For the best background copy performance, the implementation should always place the source and target volumes in different ranks. There are additional criteria to consider if the FlashCopy is a full box copy that involves all ranks.

Note: The term full box copy implies that all rank resources are involved in the copy process. Either all or nearly all ranks have both source and target volumes, or half the ranks have source volumes and half the ranks have target volumes.

For full box copies, you should still place the source and target volumes in different ranks. When all ranks are participating in the FlashCopy, it is still possible to accomplish this by doing a FlashCopy of volumes on rank R0 onto rank R1, and of volumes on rank R1 onto rank R0 (for example). Additionally, if there is heavy application activity in the source rank, performance is less affected if the background copy target is in some other rank that can be expected to have lighter application activity.

Tip: If you used Storage Pool Striping when you allocated your volumes, all ranks will be more or less equally busy, so you do not have to be concerned about the placement of your data.

If background copy performance is of high importance in your environment, you should use incremental FlashCopy as much as possible. Incremental FlashCopy greatly reduces the amount of data that needs to be copied, and therefore greatly reduces the background copy time.

11.4 FlashCopy impact on applications


One of the most important considerations when implementing FlashCopy is to achieve an implementation that has minimal impact on the users' application performance.


Note: As already mentioned, the recommendations discussed in this chapter only consider the performance aspects of a FlashCopy implementation. FlashCopy performance is only one aspect of an intelligent system design; you must consider all business requirements when designing a total solution. These additional requirements, together with the performance considerations, will guide you when choosing FlashCopy options such as copy or no copy and incremental, as well as when making choices about source and target volume location.

The relative placement of the source and target volumes has a significant impact on application performance, as we discussed in 11.1.1, Distribution of the workload: Location of source and target volumes on page 154. In addition to the relative placement of volumes, the selection of copy or no copy is also an important consideration with regard to the impact on application performance. Typically, the choice of copy or no copy depends primarily on how the FlashCopy will be used and for what interval of time the FlashCopy relationship exists.

From a purely performance point of view, the choice of whether to use copy or no copy depends a great deal on the type of workload. The general answer is to use no copy, but this is not always the best choice. For most workloads, including online transaction processing (OLTP) workloads, no copy typically is the preferred option. However, some workloads that contain a large number of random writes and are not cache friendly might benefit from using the copy option.

FlashCopy nocopy
In a FlashCopy nocopy relationship, a copy-on-demand is done whenever a write to a source track occurs for the first time after the FlashCopy was established. This type of FlashCopy is ideal when the target volumes are needed for a short time only, for example, to run the backup jobs. FlashCopy nocopy puts only a minimum additional workload on the back-end adapters and disk drives. However, it affects most of the writes to the source volumes as long as the relationship exists. When you plan to keep your target volumes for a long time, this might not be the best solution.

FlashCopy full copy


When you plan to use the target volumes for a longer time, or you plan to use them for production and you do not plan to repeat the FlashCopy very often, then the full copy FlashCopy will be the right choice. A full copy FlashCopy puts a high additional workload on the back-end device adapters and disk drives. But this lasts only for a few minutes or hours, depending on the capacity. After that, there is no additional overhead any more.

Incremental FlashCopy
Another important performance consideration is whether to use incremental FlashCopy. You should use incremental FlashCopy when you repeatedly perform FlashCopies to the same target volumes at regular time intervals. The first FlashCopy will be a full copy, but subsequent FlashCopy operations copy only the tracks of the source volume that were modified since the last FlashCopy. Incremental FlashCopy has the least impact on applications. During normal operation, no copy-on-demand is done (as in a nocopy relation), and during a resync, the load on the back-end is much lower compared to a full copy. There is only a very small overhead for the maintenance of the out-of-sync bitmaps for the source and target volumes.


Note: The incremental FlashCopy resyncflash command does not have a -nocp (no copy) option. Using resyncflash will automatically use the copy option, regardless of whether the original FlashCopy was copy or no copy.

11.5 Performance planning for IBM FlashCopy SE


FlashCopy SE has additional overhead compared to standard FlashCopy. Data from source volumes is copied to virtual target volumes. In reality, the data is written to a repository, and there is a mapping mechanism to map the physical tracks to the logical tracks (see Figure 11-1). Each time a track in the repository is accessed, it has to go through this mapping process. Consequently, the attributes of the volume hosting the repository are important considerations when planning a FlashCopy SE environment.

Figure 11-1 Updates to source volumes in an IBM FlashCopy SE relationship

Because of space efficiency, data is not physically ordered in the same sequence on the repository disks as it is on the source. Processes that access the source data in a sequential manner might not benefit from sequential processing when accessing the target.

Another important consideration for FlashCopy SE is that the relationships are always nocopy relationships; a full copy or incremental copy is not possible. If many source volumes have targets in the same Extent Pool, all updates to these source volumes cause write activity to this one Extent Pool's repository. We can consider a repository as something similar to a volume. So we have writes to many source volumes being copied to just one volume, the repository.


There is less space in the repository than the total capacity (sum) of the source volumes, so you might be tempted to use fewer disk spindles (DDMs). By definition, fewer spindles mean less performance. You can see that careful planning is needed to achieve the required throughput and response times from the Space Efficient volumes. A good strategy is to keep the number of spindles roughly equivalent but use smaller, faster drives (do not use FATA drives). For example, if your source volumes are on 300 GB 15K RPM disks, then using 73 GB 15K RPM disks for the repository can provide both space efficiency and excellent repository performance.

Another possibility is to consider RAID 10 for the repository, although that goes somewhat against space efficiency (you might be better off using standard FlashCopy with RAID 5 than FlashCopy SE with RAID 10). However, there might be cases where trading off some of the space efficiency gains for a performance boost justifies RAID 10. Certainly, if RAID 10 is used at the source, you should consider it for the repository (note that the repository always uses striping when in a multi-rank Extent Pool).

Storage Pool Striping has good synergy with the repository function. With Storage Pool Striping, the repository space is striped across multiple RAID arrays in an Extent Pool, and this helps balance the volume skew that might appear on the sources. It is generally best to use four RAID arrays in the multi-rank Extent Pool intended to hold the repository, and no more than eight. Finally, as mentioned before, try to use at least the same number of disk spindles for the repository as for the source volumes. Avoid severe fan-in configurations, such as 32 ranks of source disk being mapped to an 8-rank repository. This type of configuration is likely to have performance problems unless the update rate to the source is very modest. Also, although it is possible to share the repository with production volumes in the same Extent Pool, use caution when doing this, because contention between the two could impact performance.

To summarize: We can expect a very high random write workload for the repository. To prevent the repository from becoming overloaded, you can take the following precautions:
- Have the repository in an Extent Pool with several ranks (a repository is always striped).
- Use at least four ranks but not more than eight.
- Use fast (15K RPM), small capacity disk drives for the repository ranks.
- Use RAID 10 instead of RAID 5, as it can sustain a higher random write workload.
- Avoid placing repository and standard volumes in the same Extent Pool.

Of course, the aforementioned recommendations are not required, but you should consider them in your planning for FlashCopy SE. Because FlashCopy SE does not need a lot of capacity (if your update rate is not too high), you might want to make several FlashCopies from the same source volume. For example, you might want to make a FlashCopy several times a day to set checkpoints, to protect your data against viruses, or for other reasons. Of course, creating more than one FlashCopy SE relationship for a source volume increases the overhead, because each first change to a source volume track has to be copied once for each FlashCopy SE relationship. Therefore, you should keep the number of concurrent FlashCopy SE relationships to a minimum, or test how many relationships you can have without affecting your application performance too much.


11.6 FlashCopy scenarios


This section describes five scenarios. These scenario discussions assume that the primary concern is to minimize the FlashCopy impact on application performance.

11.6.1 Scenario #1: Backup to disk


In environments where the Recovery Time Objective (that is, how quickly you can return to production after a failure) is of utmost importance, a FlashCopy backup to disk can help to achieve an extremely fast restore time. As soon as the logical FlashCopy is complete, it is possible to perform a reverse FlashCopy and restore your production data in seconds, instead of the several hours it would normally take to retrieve the data from tape.

When backing up to disk, it is important to take the necessary steps to protect your data. Remember that, until the background copy is complete, you still only have one physical copy of the data, and that copy is vulnerable. Therefore, it is important to always establish the FlashCopy with the copy option and to place the target volumes on different ranks than the source volumes. Otherwise, in the unlikely event that you have a failure of your production volumes, you will also lose your backup.

One method for protecting against failure is to use multiple FlashCopy targets. FlashCopy supports up to 12 targets per source volume. With this feature, it is possible to keep up to 12 versions of your production data (for example, a FlashCopy backup every two hours for one day). Another method is to use incremental FlashCopy. Incremental FlashCopy copies only the data that has changed since the last FlashCopy, so the background copy completes much faster.
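As an illustration of the restore path mentioned above, the following sketch reverses a FlashCopy backup back onto the production volume. It assumes that the original relationship was established with the -record and -persist options and that the background copy has completed; the volume IDs are the hypothetical ones used in the examples of Chapter 12:

#--- copy the contents of the backup target (6300) back to the production volume (6100)
reverseflash -persist -record -seqnum 01 6100:6300
lsflash -l 6100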

11.6.2 Scenario #2: Backup to tape


If you want to create a copy of the data only to subsequently back it up to tape, then FlashCopy with the no copy option is the preferred approach. Still, there are some implementations where the copy option is employed. The backup to tape is normally done shortly after the logical FlashCopy relationships have been established for all of the volumes that are going to be backed up.

If you choose the no copy option, that is probably because the data being backed up to tape is mostly coming from the FlashCopy source volumes. If this is the case, then the location of the target volumes is less critical and might be decided by considerations other than performance.

If you choose the copy option, that is probably because the data being backed up is coming from the target volumes, assuming that the backup to tape does not start until the background copy completes. If the backup starts sooner, the data could be coming from a mixture of source and target volumes. As the backup continues, more and more of the data will come from the target volumes as the background copy moves more and more of the data to the target volumes.

To have the least impact on the application and to have a fast backup to tape, we recommend that the source volumes be evenly spread across the available disk subsystems and the disk subsystems' resources. Once the backup to tape is complete, make sure to withdraw the FlashCopy relationship.

Tip: Withdraw the pairs as soon as the backup to tape is finished. This eliminates any additional copying from the source volume, either due to copy or copy-on-demand.


These recommendations would be equally valid for a copy or no copy environment.

11.6.3 Scenario #3: IBM FlashCopy SE


Whenever you plan to use FlashCopy with the nocopy option because you need the FlashCopy target volumes only for a short time, for example, to back up your data to tape, you should consider IBM FlashCopy SE as your solution. From a performance standpoint, IBM FlashCopy SE has some more overhead compared to standard FlashCopy with nocopy in effect, but on the other hand, much less capacity is needed for the FlashCopy targets. You should have one or more Extent Pools for the virtual target volumes with more than one rank, because the repository for Space Efficient volumes is striped across the ranks in that Extent Pool.

11.6.4 Scenario #4: FlashCopy during peak application activity


Tip: The recommended solution is to fully explore alternatives that avoid any overlap of FlashCopy activity with other peak application activity. If such alternatives are not viable for operational reasons, then consider the topics discussed in this section.

As discussed previously, the choice of whether to use copy or no copy depends mostly on business requirements, but with regard to performance, it also depends a great deal on the type of workload. This topic is discussed in 11.4, FlashCopy impact on applications on page 157. In general, no copy is the preferred method, but you should also think about the following considerations when choosing either copy or no copy:

- Using no copy. The argument here is that the impact caused by the I/O resulting from the copy option is more significant than that of the no copy option, where less I/O activity results from copy-on-demand. Because the background copy only occurs when the writes are de-staged from non-volatile cache, there is typically negligible impact. If the workload is cache friendly, then potentially all of the operations will be served from cache, and there will be no impact from copy-on-demand at all.

- Using copy. The goal of using copy is to quickly complete the background copy, so that the overlap between FlashCopy and application processing ends sooner. If copy is used, then all I/Os experience some degradation as they compete for resources with the background copy activity. However, this impact might be somewhat less than the impact to the individual writes that a copy-on-demand causes. If FlashCopy no copy is active during a period of high application activity, there could be a high rate of copy-on-demand (that is, de-stages being delayed so that the track image can be read and then written to the FlashCopy target track to preserve the point-in-time copy). The de-stage delay could degrade the performance of all writes that occur during the delayed de-stage periods. Note that it is only the first write to a track that causes a collision, and only when that write gets de-staged. Reads do not suffer the copy-on-demand degradation.

If using the copy option, also consider these tips:

- Examine the application environment for the highest activity volumes and the most performance sensitive volumes.


- Consider arranging the FlashCopy order such that the highest activity and most performance sensitive volumes are copied early and the least active and least performance sensitive volumes are copied last.

Tip: One approach to achieving a specified FlashCopy order is to partition the volumes into priority groups. Issue the appropriate FlashCopy commands for all volumes, but use copy on only the highest priority group and no copy on all other groups. After a specified period of time or after some observable event, issue FlashCopy commands to convert the next highest priority group from no copy to copy. Continue in this manner until all volumes are fully copied.

If a background copy is the desired end result and FlashCopy is to be started just before or during a high activity period, consider the possibility of starting with no copy and converting to copy after the high activity period has completed.

You might also want to examine the use of incremental FlashCopy in a performance sensitive high activity period. Incremental FlashCopy automatically uses the copy option, so if the no copy option was previously selected, using incremental FlashCopy might impact performance by causing a full background copy. If the incremental FlashCopy approach is chosen, it might be best to create the FlashCopy relationship (with copy) during a quiet time. To minimize the amount of data to be copied when taking the desired point-in-time copy, schedule an incremental refresh sufficiently in advance of the point-in-time refresh to complete the copy of the changed data. Finally, take the required point-in-time copy with the incremental refresh at the required point in time.

11.6.5 Scenario #5: Ranks reserved for FlashCopy


Another configuration worth considering is one where 50% of the ranks (capacity) hold only FlashCopy source volumes (and where the application write I/Os take place) and the remaining 50% of the ranks (capacity) hold only FlashCopy target volumes. Such an approach has pros and cons. The disadvantage is the loss of 50% of the ranks for normal application processing. The advantage is that FlashCopy writes to the target volumes do not compete with application I/O, because no application volumes reside on the target ranks. This allows the background copy to complete faster, and thus reduces the interference with application I/Os.

This is a trade-off that must be decided upon:
- Use all ranks for your application: This maximizes normal application performance. FlashCopy performance is reduced.
- Use only half of the ranks for your applications: This maximizes FlashCopy performance. Normal performance is reduced.

If planning a FlashCopy implementation at the disaster recovery (DR) site, you must consider two distinct environments:
- DR mirroring performance with and without FlashCopy active
- Application performance if DR failover occurs

The solution should provide acceptable performance for both environments.


Chapter 12. FlashCopy examples

In this chapter we present examples of the use of FlashCopy in the following scenarios:
- Fast setup of test systems or integration systems
- Fast creation of volume copies for backup purposes


12.1 Creating a test system or integration system


Test systems or integration systems are needed to perform application tests or system integration tests. Because many write operations will probably occur over the time period involved in the tests, we recommend performing a full background copy (the copy option).

12.1.1 One-time test system


Assume that there is an application using one volume, and you have to create a test system to allow application tests or integration tests, based on the contents of the production data. You would set up a FlashCopy to copy the data once. See Example 12-1.
Example 12-1 Create a one time test system

#--- remove existing FlashCopy relationships for volume 6100
rmflash -quiet 6100:6300
#--- establish FlashCopy relationships for source volume 6100
mkflash -seqnum 01 6100:6300
#--- list FlashCopy relationships for volume 6100
lsflash -l 6100

The application typically should be quiesced or briefly suspended before executing the FlashCopy. Also, some applications cache their data, so you might have to flush this data to disk, using application methods, prior to running the FlashCopy (this is not covered in our example).

12.1.2 Multiple setup of a test system with the same contents


Assume that an application test is required multiple times with the same set of data. The production volume is 6100 and the test volume is 6300. Volume 6101 is chosen as an intermediate volume that gets its data once, copying it from the production volume using FlashCopy. It is then used as the base for refreshing the test volume 6300.

Running Part 1 and Part 2 (see Example 12-2) as one job would not work. The job would fail, because volume 6101 cannot be the target of one FlashCopy relationship and the source of another FlashCopy relationship at the same time. You must wait until the background copy from 6100 to 6101 finishes successfully before starting Part 2. Alternatively, you can also issue the rmflash command with the -cp and -wait parameters; these parameters cause the command to wait until the background copy is complete before continuing with the next step (a short sketch of this alternative follows Example 12-2).
Example 12-2 Create a one time test system

#=== Part 1: establish FlashCopy relationship
#--- remove, establish, list FlashCopy relationships
rmflash -quiet 6100:6101
rmflash -quiet 6101:6300
mkflash -seqnum 01 6100:6101
lsflash -l 6100-6400

#=== Part 2: establish FlashCopy 2 relationship


#=== 03:00 pm 6100 6300
#--- after the full volume copy of 6100 6101 finished
#--- establish relationship from 6101:6300
mkflash -seqnum 02 6101:6300
lsflash -l 6100-6300

Whenever the test environment needs to be reset to the original data, just run Part 2 of the scripts or use the DS GUI to perform a FlashCopy.
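As noted above, instead of waiting manually for the background copy from 6100 to 6101 to finish, you can let the rmflash command drive and wait for the copy itself. The following is only a sketch with the same hypothetical volume IDs:

#--- force completion of the background copy and remove the 6100:6101 relationship in one step
rmflash -cp -wait -quiet 6100:6101
#--- 6101 now holds a complete copy and can be used as the source for the test volume
mkflash -seqnum 02 6101:6300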

12.2 Creating a backup


Using FlashCopy for backup purposes can be implemented in several ways, as we now explain.

12.2.1 Creating a FlashCopy for backup purposes without volume copy


Volumes that are the result of a FlashCopy can be used by a backup server to back up the data to tape. Because the backup process merely reads the data, one option is to perform a FlashCopy without physically copying all data to the target. As soon as the backup of the data has finished, the FlashCopy relationship can be removed explicitly. The following steps illustrate how to perform this procedure (see Example 12-3):
1. Part 1: Establish a FlashCopy of volume A to volume B with the no copy option.
2. Run the backup.
3. Part 2: Remove the FlashCopy relationship once the volume backup completes.
Example 12-3 Create a backup copy

#=== Part 1: establish FlashCopy relationship
#--- remove existing FlashCopy relationships for volume 6100
rmflash -quiet 6100:6300
#--- establish FlashCopy relationships for source volume 6100
mkflash -nocp -seqnum 01 6100:6300
#--- list FlashCopy relationships for volume 6100
lsflash -l 6100

After taking the backup, remove the FlashCopy relationship if you do not intend to use it for other purposes. Thus, you can avoid unnecessary writes. See Example 12-4.
Example 12-4 Withdraw the relationship

#=== Part 2: remove FlashCopy relationships
rmflash -quiet 6100:6300

Doing backups based on the target volume allows you to use a target multiple times. In complex application environments (for example, SAP), FlashCopy is often used as part of the backup solution. A good example of such a solution is IBM Tivoli Storage Manager for Advanced Copy Services, which integrates into the IBM Tivoli Storage Manager backup infrastructure.


12.2.2 Using IBM FlashCopy SE for backup purposes


To take advantage of space efficiency, you can do the same procedure as shown in 12.2.1, Creating a FlashCopy for backup purposes without volume copy on page 167 for IBM FlashCopy SE. Note that in Example 12-5, we have specified the -tgtse option to allow the target volume to be Space Efficient.
Example 12-5 Create a backup copy with IBM FlashCopy SE

#=== Part 1: establish FlashCopy relationship
#--- remove existing FlashCopy relationships for volume 6100
rmflash -quiet -tgtreleasespace 6100:6300
#--- establish FlashCopy relationships for source volume 6100 and a Space Efficient target
mkflash -tgtse -nocp -seqnum 01 6100:6300
#--- list FlashCopy relationships for volume 6100
lsflash -l 6100

After you have taken the backup, remove the FlashCopy relationship (see Example 12-6). The option -tgtreleasespace was specified to release storage for the target volume in the repository. Thus, you can avoid unnecessary writes and an increase of the used capacity in the repository.
Example 12-6 Withdrawing the IBM FlashCopy SE relationship

#=== Part 2: remove FlashCopy relationships and release space for the target
rmflash -quiet -tgtreleasespace 6100:6300

The target volume still exists, but it will be empty.

12.2.3 Incremental FlashCopy for backup purposes


The safety of a real physical copy, without always copying the full volume, can be achieved using incremental FlashCopy. An initial full volume FlashCopy is followed by subsequent incremental FlashCopies, which copy only the updates that took place on the source volume. See Example 12-7.
Example 12-7 Create an initial FlashCopy ready for subsequent incremental FlashCopies

#=== Part 1: establish FlashCopy relationship
#--- remove existing FlashCopy relationships for volume 6100
rmflash -quiet 6100:6300
#--- establish FlashCopy relationships
mkflash -record -persist -seqnum 01 6100:6300
#--- list FlashCopy relationships for volume 6100
lsflash -l 6100

After the initial full volume copy, the following script supports the incremental copy of the FlashCopy relationship; see Example 12-8.


Example 12-8 Create an Incremental FlashCopy

#=== Part 2: resynch FlashCopy relationship
resyncflash -record -persist -seqnum 01 6100:6300
lsflash -l 6100

12.2.4 Using a target volume to restore its contents back to the source
You might have to apply logs to the target, and then reverse the target volume back to the source volume. For each source volume, one FlashCopy relationship can exist with the -record and -persist attributes set; use this target volume to refresh the source volume. To reverse the relationship, the data must have been copied completely to the target before it can be reversed back to the source. To avoid a situation where the full volume needs to be copied with each FlashCopy, incremental FlashCopy should be used. Because logs might need to be applied to the target volume prior to reversing it, the target volume should be write-enabled.

This example consists of the following steps:
- Part 1: Establish the initial FlashCopy (see Example 12-9).
- Part 2: Establish an incremental FlashCopy (see Example 12-10 on page 169).
- Part 3: Reverse the relationship (see Example 12-11 on page 169).

Applying application or database logs needs to be carefully considered as well.
Example 12-9 Run initial FlashCopy to support refresh of source volume

#=== Part 1: Establish incremental FlashCopy
#--- remove, establish, list FlashCopy relationships
rmflash -quiet 6100:6101
mkflash -persist -record -tgtinhibit -seqnum 01 6100:6101
lsflash -l 6100

After the initial FlashCopy, the incremental copies can be done; see Example 12-10.
Example 12-10 Create an Incremental FlashCopy

#=== Part 2: Resynch FlashCopy relationship
resyncflash -record -persist -tgtinhibit -seqnum 01 6100:6101
lsflash -l 6100

The reverse of the FlashCopy is done using the reverseflash command; see Example 12-11.
Example 12-11 Reverse the volumes

#=== Part 3: Reverse FlashCopy relationship
reverseflash -persist -record -tgtinhibit -seqnum 01 6100:6101
lsflash -l 6100


Part 4. Metro Mirror

In this part of the book we describe IBM System Storage Metro Mirror when used in open systems environments with the DS8000. We discuss the characteristics of Metro Mirror and explain the options for its setup. We also show which management interfaces can be used, as well as the important aspects to be considered when establishing a Metro Mirror environment. We conclude with examples of Metro Mirror management and setup.

Note: Throughout this part of the book, in our discussions of Metro Mirror in open systems environments, you will find that the term volume is used interchangeably with the term LUN. In fact, you will see that the term volume is almost always used.


Chapter 13. Metro Mirror overview

In this chapter we explain the basic characteristics of Metro Mirror when used in open systems environments with the DS8000. Metro Mirror was previously known as synchronous Peer-to-Peer Remote Copy, or PPRC.


13.1 Metro Mirror overview


Metro Mirror (previously known as synchronous Peer-to-Peer Remote Copy, or PPRC) provides real-time mirroring of logical volumes between two DS8000s that can be located up to 300 km from each other. It is a synchronous copy solution where write operations are completed on both copies (local and remote site) before they are considered to be complete. It is typically used for applications that cannot suffer any data loss in the event of a failure. As data is synchronously transferred, the distance between the local and the remote disk subsystems will determine the effect on application response time. Figure 13-1 illustrates the sequence of a write update with Metro Mirror.

Figure 13-1 Metro Mirror

When the application performs a write update operation to a source volume, this is what happens:
1. Write to source volume (DS8000 cache and NVS).
2. Write to target volume (DS8000 cache and NVS).
3. Signal write complete from the remote target DS8000.
4. Post I/O complete to host server.

The Fibre Channel connection between the local and the remote disk subsystems can be direct, through a switch, or through other supported distance solutions (for example, Dense Wavelength Division Multiplexing, DWDM).


13.2 Metro Mirror volume state


Volumes participating in a Metro Mirror session can be found in any of the following states:
- Copy pending: Volumes are in the copy pending state after the Metro Mirror relationship is established but while the source and target volumes are still out-of-sync; data still needs to be copied from the source to the target volume of the Metro Mirror pair. This can be the case immediately after a relationship is initially established, or re-established after being suspended. The Metro Mirror target volume is not accessible while the pair is in the copy pending state.
- Full duplex: The full duplex state is the state of a volume pair whose members are in sync, that is, both source and target volumes contain exactly the same data. The target volume is not accessible while the pair is in full duplex.
- Suspended: Volumes are in the suspended state when the source and target storage subsystems can no longer communicate, or when the Metro Mirror pair is suspended manually. In this state, writes to the source volume are not mirrored onto the target volume, and the target volume becomes out-of-sync. During this time, Metro Mirror keeps a bitmap record of the changed tracks on the source volume. Later, when the volumes are re-synchronized, only the tracks that were updated are copied.
- Target copy pending: The source volume is unknown or cannot be queried, and the target state is copy pending.
- Target full duplex: The source volume is unknown or cannot be queried, and the target state is full duplex.
- Target suspended: The source volume is unknown or cannot be queried, and the target state is suspended.
- Not remote copy pair: The relationship is not a Metro Mirror pair.
- Invalid state: The relationship state is invalid.
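These states are what you see, for example, in the State column of the DS CLI lspprc command output, which is covered in the Metro Mirror interface chapters later in this book. A minimal sketch (the volume range is hypothetical):

dscli> lspprc -l 6100-6103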

13.3 Data consistency


In order to restart applications at the remote site successfully, the remote site volumes must have consistent data. In normal operation, Metro Mirror keeps data consistency at the remote site. However, in a rolling disaster type of situation, an additional mechanism is necessary to keep data consistency at the remote site.

For Metro Mirror, consistency requirements are managed through the use of the Consistency Group option. You can specify this option when you are defining Metro Mirror paths between pairs of LSSs or when you change the default LSS settings. Volumes or LUNs that are paired between two LSSs whose paths are defined with the Consistency Group option can be considered part of a Consistency Group.

Consistency is provided by means of the extended long busy (for z/OS) or queue full (for open systems) conditions. These are triggered when the DS8000 detects a condition where it cannot update the Metro Mirror target volume. The volume pair that first detects the error goes into the queue full condition, such that it does not perform any writes, and an SNMP trap message is issued. These messages can be used as a trigger for automation purposes that provide data consistency. Data consistency and dependent writes are discussed in detail in 14.4, Consistency Group function on page 180.


Note: During normal Metro Mirror processing, the data on disk at the remote site is an exact mirror of that at the local site. During or after an error situation, this depends on the options specified for the pair and the path. Remember that any data still in buffers or processor memory is not yet on disk and so is not mirrored to the remote site. A disaster then appears to be a situation similar to a power failure at the local site.

13.4 Rolling disaster


In disaster situations, it is unlikely that the entire complex will fail at the same moment. Failures tend to be intermittent and gradual, and a disaster can occur over many seconds, even minutes. Because some data may have been processed and other data lost in this transition, data integrity on the target volumes is exposed. This situation is called a rolling disaster. The mirrored data at the recovery site must be managed so that cross-volume or cross-LSS data consistency is preserved during the intermittent or gradual failure.

Metro Mirror itself does not offer a means of controlling this scenario; it offers the Consistency Group and Critical attributes, which, along with appropriate automation solutions, can manage data consistency and integrity at the remote site. The Metro Mirror volume pairs are always consistent, due to the synchronous nature of Metro Mirror. However, cross-volume or cross-LSS data consistency must be managed by an external method. IBM offers TPC for Replication to deliver solutions in this area. Visit the IBM Web site and see the Services and Solutions page for more information.

13.5 Automation and management


Metro Mirror is a hardware mirroring solution. A volume (or LUN) is paired with a volume (or LUN) in the remote disk subsystem. As the size of the environment grows, so does the complexity of managing it. You need a means for managing the pairs, ensuring that they are in duplex status, adding volume pairs as required, monitoring for error conditions and, more importantly, for managing data consistency across LSSs and across disk subsystems.

When planning a Metro Mirror environment, the following topics should be considered:
- Design of the automation solution
- Maintenance
- Testing
- Support

You do not want to be in a situation where you are relying on your mirroring implementation for data to be consistent in a disaster situation, only to find that it has not worked or, perhaps worse, not to be aware that your data is not consistent.

IBM offers services and solutions for the automation and management of the Metro Mirror environment. These include GDPS and TPC for Replication (see Chapter 6, IBM TotalStorage Productivity Center for Replication on page 43). There are also continuous availability solutions available for integrating Metro Mirror into a cluster, such as HACMP/XD for AIX, Geographically Dispersed Open Clusters (GDOC) for open systems, and GDPS for System z:
http://www-03.ibm.com/systems/storage/solutions/business_continuity/continuous_availability/technical_details.html#gdoc


Chapter 14. Metro Mirror options and configuration


In this chapter we describe the options available when using Metro Mirror in an open systems environment with the DS8000. We also discuss the configuration guidelines that you should consider when planning the Metro Mirror environment.


14.1 Basic Metro Mirror operation


Before discussing the options and configuration guidelines for Metro Mirror, let us review some basic operations you will be performing with Metro Mirror.

Establish a Metro Mirror pair


This operation establishes the remote copy relationship between a pair of volumes: the source (or local) and the target (or remote), which normally reside on different disk subsystems. Initially the volumes are in the simplex state; immediately after the pair is established, they transition to the copy pending state. After the data on the pair has been synchronized (both volumes have the same data), the state of the pair becomes full duplex.

When establishing a Metro Mirror pair, you can select the following additional options (a hedged DS CLI sketch follows this list):
- No copy: This option does not copy data from the source to the target. It presumes that the volumes are already synchronized. Ensuring that the volumes really are synchronized is your responsibility; the DS8000 does not verify it.
- Target read: This option allows host servers to read from the target. For a host server to read from the target, the pair (source-target) must be in the full duplex state. This parameter applies to open systems volumes (it does not apply to System z volumes). In an open systems file system environment, even if an application only reads data from a file system, a SCSI write command can be issued to the LUN where the file system resides, because some information, such as the last access time stamp, might be updated by the file system. In this case, even if you specify this option, the operation might fail.
- Suspend after data synchronization: This option suspends the volume pairs after the data has been synchronized. This parameter cannot be used with the no copy option.
- Wait option: This option delays the command response until the volume pairs are in one of the final states: simplex, full duplex, suspended, target full duplex, or target suspended (that is, until the pair is no longer in the copy pending state). This parameter cannot be used with -type gcp or -mode nocp.
- Reset reserve on target: This option allows the establishment of a Metro Mirror relationship when the target volume is reserved by another host. This parameter can only be used with open systems volumes. If this option is not specified and the target volume is reserved, the command fails.
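As a minimal sketch of how some of these options map to DS CLI parameters, the commands below use the storage image IDs and volume IDs from the configuration example in Chapter 16. The -type mmir and -mode nocp parameters appear elsewhere in this book; the -tgtread and -wait flag names are given from memory as assumptions and should be verified against the DS CLI reference for your code level.

dscli> mkpprc -remotedev IBM.2107-75ABTV1 -type mmir 1000:2000
dscli> mkpprc -remotedev IBM.2107-75ABTV1 -type mmir -mode nocp 1001:2001
dscli> mkpprc -remotedev IBM.2107-75ABTV1 -type mmir -tgtread -wait 1100:2100

The first form performs a normal establish with a full initial copy, the second assumes the volumes are already identical (no copy), and the third additionally allows target reads and delays the command response until the pair leaves the copy pending state.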

Suspend Metro Mirror pair


This operation stops copying data to the target and the pair transitions to the suspended state. Because the source DS8000 keeps track of all changed tracks on the source volume, you can resume the copy operations at a later time.

Resume Metro Mirror pair


This operation resumes a Metro Mirror relationship for a volume pair that was suspended, and restarts the transfer of data. Only modified tracks are sent to the target volume, because the DS8000 keeps track of all changed tracks on the source volume after the volume becomes suspended. When resuming a Metro Mirror pair, you can use the same options as when initially establishing a Metro Mirror pair, except for the no copy option.
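A minimal sketch of the suspend and resume operations, using the pausepprc and resumepprc commands and the same pair notation as the mkpprc examples in Chapter 16 (the -type mmir parameter on resumepprc is our assumption; check the DS CLI reference):

dscli> pausepprc -remotedev IBM.2107-75ABTV1 1000-1001:2000-2001
dscli> resumepprc -remotedev IBM.2107-75ABTV1 -type mmir 1000-1001:2000-2001

While the pairs are suspended, the DS8000 records the changed tracks on the source volumes, so the resume only transfers the out-of-sync tracks.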


Terminate Metro Mirror pair


This operation ends the Metro Mirror relationship between the source and target volumes.
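A terminate maps to the rmpprc command; a minimal sketch with the same example volumes follows (the -quiet flag, which we understand suppresses the confirmation prompt, is an assumption to be checked against the DS CLI reference):

dscli> rmpprc -remotedev IBM.2107-75ABTV1 -quiet 1000-1001:2000-2001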

14.2 Open systems: Clustering


Because the disk attached to the server is mirrored using Metro Mirror, there are improved opportunities for high availability solutions. For open systems environments, IBM offers several solutions in this area, including GDOC for open systems environments and HACMP/XD for AIX. You can find more information about GDOC at the following Web site:
http://www-03.ibm.com/systems/storage/solutions/business_continuity/continuous_availability/technical_details.html#gdoc

14.3 Failover and failback


The Metro Mirror Failover and Failback modes are designed to help reduce the time required to synchronize Metro Mirror volumes after switching between the production and the recovery sites. In a typical Metro Mirror environment, processing temporarily switches over to the Metro Mirror remote site upon an outage at the local site. When the local site is capable of resuming production, processing switches back from the remote site to the local site.

At the recovery site, the Metro Mirror Failover function combines into a single task the steps involved in the switch over (planned or unplanned) to the remote site: terminate the original Metro Mirror relationship, then establish and suspend a new relationship at the remote site. The state of the original source volume at the normal production site is preserved. The state of the original target volume at the recovery site becomes source suspended. This design takes into account the possibility that the original source LSS might no longer be reachable.

To initiate the switchback to the production site, the Metro Mirror Failback function, issued at the recovery site, checks the preserved state of the original source volume at the production site to determine how much data to copy back. Then either all tracks or only out-of-sync tracks are copied, with the original source volume becoming a target in the full duplex state. In more detail, this is how Metro Mirror Failback operates:
- If a volume at the production site is in the simplex state, all of the data for that volume is copied back from the recovery site to the production site.
- If a volume at the production site is in the full duplex or suspended state and has no changed tracks, only the data modified on the volume at the recovery site is copied back to the volume at the production site.
- If a volume at the production site is in a suspended state and has tracks that have been updated, then both the tracks changed at the production site and the tracks marked at the recovery site are copied back.

Finally, the volume at the production site becomes a write-inhibited target volume. This action is performed on an individual volume basis.


The switchback is completed with one more sequence of a Metro Mirror Failover followed by a Metro Mirror Failback operation, both given at the now recovered production site. Figure 14-1 summarizes the whole process.

Figure 14-1 Metro Mirror Failover and Failback sequence (normal operation with site A as source and site B as target in full duplex; a Metro Mirror Failover at B after an outage at A, leaving B as a suspended source while the application is restarted at B; a Metro Mirror Failback at B once A is recovered, copying changes back to A; and a final Failover and Failback sequence issued at A to return to normal operation with A as source and B as target in full duplex)

In this manner, Metro Mirror Failover and Failback are dual operations. It is possible to implement all site switch operations with two pairs of failover and failback tasks (one pair for each direction). Note: For a planned switch over from site (A) to site (B), and in order to keep data consistency at (B), the application at site (A) has to be quiesced before the Metro Mirror Failover operation at (B). Alternatively, you can use the freezepprc and unfreezepprc commands. The same consideration applies when switching back from site (B) to (A). The Metro Mirror Failover and Failback modes can be invoked from the DS GUI or the DS Command-Line Interface (DS CLI). We give you an example of a Metro Mirror Failover and Failback sequence and how to control it in 16.8, Failover and Failback functions for sites switching on page 212.
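As a minimal, hedged illustration of a planned site switch with the volumes from the Chapter 16 example (DS8000#1 volumes 1000-1001 mirrored to DS8000#2 volumes 2000-2001), the commands below are issued at the recovery site against the B:A direction; the exact parameter set of failoverpprc and failbackpprc should be verified in the DS CLI reference:

dscli> failoverpprc -dev IBM.2107-75ABTV1 -remotedev IBM.2107-7520781 -type mmir 2000-2001:1000-1001
dscli> failbackpprc -dev IBM.2107-75ABTV1 -remotedev IBM.2107-7520781 -type mmir 2000-2001:1000-1001

The failoverpprc command leaves the B volumes as suspended sources so that production can run at site B, and the failbackpprc command later re-establishes the pairs in the B-to-A direction and copies the changed data back to site A. The switchback itself is then completed with the same failover and failback sequence issued at site A, as described in 16.8.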

14.4 Consistency Group function


In order to restart applications at the remote site successfully, the remote site must have consistent data. In normal operation, Metro Mirror keeps data consistency at the remote site. However, as mentioned in 13.3, Data consistency on page 175, in case of a rolling disaster, a certain procedure is necessary to keep data consistency even in a synchronous remote copy environment.

In this section we discuss data consistency and explain how the Metro Mirror Consistency Group function keeps data consistency at the remote site in case of a rolling disaster.

14.4.1 Data consistency and dependent writes


Many applications, such as databases, process a repository of data that has been generated over a period of time. Many of these applications require that the repository is in a consistent state in order to begin or continue processing. In general, consistency implies that the order of dependent writes is preserved in the data copy. In the Metro Mirror environment, keeping data consistency means that the order of dependent writes is preserved in all the Metro Mirror target volumes.

For example, the following sequence might occur for a database operation involving a log volume and a data volume:
1. Write to log volume: Data Record #2 is being updated.
2. Update Data Record #2 on data volume.
3. Write to log volume: Data Record #2 update complete.

If the copy of the data contains any of these combinations, then the data is consistent:
- Operation 1, 2, and 3
- Operation 1 and 2
- Operation 1

If the copy of data contains any of these combinations, then the data is inconsistent (this means that the order of dependent writes was not preserved):
- Operation 2 and 3
- Operation 1 and 3
- Operation 2
- Operation 3

When discussing the Consistency Group function, data consistency means that this sequence is always kept in the copied data. The order of non-dependent writes does not necessarily have to be preserved. For example, consider the following two sequences:
1. Deposit paycheck in checking account A
2. Withdraw cash from checking account A
3. Deposit paycheck in checking account B
4. Withdraw cash from checking account B

In order for the data to be consistent, the deposit of the paycheck must be applied before the withdrawal of cash for each of the checking accounts. However, it does not matter whether the deposit to checking account A or checking account B occurred first, as long as the associated withdrawals are in the correct order. So, for example, the data copy would be consistent if the following sequence occurred at the copy:
1. Deposit paycheck in checking account B
2. Deposit paycheck in checking account A
3. Withdraw cash from checking account B
4. Withdraw cash from checking account A

In other words, the order of updates is not the same as it was for the source data, but the order of dependent writes is still preserved.


14.4.2 Consistency Group function: How it works


In the operation of the Consistency Group function of Metro Mirror, we distinguish two parts: one is the invocation of the Consistency Group option, and the other is the freeze and unfreeze operation. Together they make it possible for the disk subsystem to hold I/O activity and subsequently to release (thaw) the held I/O activity.

Consistency Group option


This option causes the disk subsystem to hold I/O activity to a volume for a time period by putting the source volume into a queue full condition when the DS8000 detects a situation where it cannot update the Metro Mirror target volume. This operation can be done across multiple LUNs or volumes, multiple LSSs, and even across multiple disk subsystems. You can specify this option when you are defining Metro Mirror paths between pairs of LSSs or when you change the default Consistency Group setting on each LSS (the Consistency Group option is disabled by default).

In the disk subsystem itself, each command is managed at the logical subsystem (LSS) level. This means that there are slight time lags until each volume in the different LSSs is subject to the queue full condition. Some people are concerned that this time lag causes you to lose data consistency, but that is not true. We explain how data consistency is kept in Consistency Group environments in the following section.

In the example in Figure 14-2, three write operations (first, second, and third) are dependent writes. This means that these operations must be completed sequentially.

Figure 14-2 Consistency Group: Example 1 (dependent writes from the application servers to volumes in LSS11, LSS12, and LSS13 on the source DS8000, mirrored to LSS21, LSS22, and LSS23 on the target DS8000; the first write waits on the queue full condition, so the dependent second and third writes also wait)


In Figure 14-2 on page 182, there are two Metro Mirror paths between LSS11 and LSS21. There are another two Metro Mirror paths for each of the other LSS pairs (LSS12:LSS22 and LSS13:LSS23). In a disaster, the paths might fail at different times. At the beginning of the disaster, such as a rolling disaster, one set of paths (such as the paths between LSS11 and LSS21) might be inoperable while other paths are working. At this time, the volumes in LSS11 are in a queue full condition, and the volumes in LSS12 and 13 are not. The first operation is not completed because of the queue full condition, and the second and third operations are not completed because the first operation has not been completed. In this case, the first, second, and third updates are not included in the Metro Mirror target volumes in LSS21, LSS22, and LSS23. Therefore, the Metro Mirror target volumes at the remote site keep consistent data. In the example illustrated in Figure 14-3, the volumes in LSS12 are in a queue full condition and the other volumes in LSS11 and 13 are not. The first write operation is completed because the volumes in LSS11 are not in an queue full condition. The second write operation is not completed because of the queue full condition. The third write operation is also not completed because the second operation is not completed. In this case, the first update is included in the Metro Mirror target volumes, and the second and third updates are not included. Therefore, this case is also consistent.

Figure 14-3 Consistency Group: Example 2 (the first write to LSS11 completes, the second write to LSS12 waits on the queue full condition, and the dependent third write to LSS13 therefore also waits)

In all cases, if each write operation is dependent, the Consistency Group option can keep the data consistent in the Metro Mirror target volumes until the Consistency Group time-out occurs. After the time-out value has been exceeded, all held I/O will be released. The Consistency Group time-out value can be specified at the LSS level.


If each write operation is not dependent, the I/O sequence is not kept in the Metro Mirror target volumes that are in LSSs with the Consistency Group option specified. In the example illustrated in Figure 14-4, the three write operations are independent. If the volumes in LSS12 are in a queue full condition and the other volumes in LSS11 and 13 are not, the first and third operations are completed and the second operation is not completed.

Figure 14-4 Consistency Group: Example 3 (three independent write operations: the first and third complete, while the second waits on the queue full condition in LSS12)

In this case, the Metro Mirror target volumes reflect only the first and third write operations, not the second operation. Typical database management software can recover its databases by using its log files as long as the order of dependent writes is preserved. At the same time, you do not have to be concerned about whether independent write operations are kept in sequence.

Freeze and unfreeze operations


The Consistency Group option itself can keep consistent data at the remote site, in case of a rolling disaster, if all volumes go into the queue full condition within the time interval specified by the Consistency Group time-out value. However, this is not always true. Therefore, we require a command that allows us to hold the I/O activity to volumes other than the volumes that the DS8000 itself detects as having an error condition. We also require a command that allows us to release the held I/O without having to wait for the Consistency Group time-out, in order to minimize the impact on the applications. These commands are freezepprc and unfreezepprc, which are issued at the LSS level (not the volume level). They are available using the DS CLI, and we discuss them in this section.

When the DS8000 detects a condition where it cannot update the Metro Mirror target volume, the Metro Mirror source volume with the Consistency Group option becomes suspended and enters the queue full condition. At the same time the DS8000 issues an SNMP notification (Trap 202: source PPRC Devices on LSS Suspended Due to Error).

An automation program, triggered by the SNMP notification, can issue the freezepprc command to all LSS pairs that have volumes related to the application. This command causes all Metro Mirror source volumes in the LSSs with the Consistency Group option to become suspended and enter the queue full condition. In addition, this operation removes the Metro Mirror paths. Because all Metro Mirror source volumes become suspended and all related paths are removed, further updates to the source volumes will not be sent to their targets. Now you have consistent data at the remote site. It is necessary to have an automation procedure like the one just described in order to have consistent data at the remote site in case of a rolling disaster.

In case of partial link failures (such as in the case of Figure 14-2 on page 182), and in order to rapidly resume application I/O processing at the local site, you can use the unfreezepprc command to resume held I/Os to the affected LSSs. In general, you will issue the unfreezepprc command after successful completion of the freezepprc command to all related LSSs, which means that you have consistent data at the remote site.

Note: By partial link failure we mean (in this section) the situation where, for a particular LSS, all of its links fail, while at the same time there are other LSSs with their links working. When any particular LSS has only part of its links not working, Metro Mirror can keep sending data to the target volumes using the other available links of the LSS.

The default two-minute timer of the queue full condition gives the automation enough time to issue a freezepprc command to the necessary LSSs. I/O resumes after the default two-minute interval if an unfreezepprc command is not received first. It is not possible to issue Consistency Group (freeze/unfreeze) type commands from the DS Storage Manager (DS GUI), but you can change the time-out values by using the DS GUI.

Important: The queue full condition is presented only for the source volume affected by the error (in the case of path failures, multiple volumes will often be affected). Still, the freeze operation is performed at the LSS level, causing all Metro Mirror volumes in that LSS to go into the suspended state with the queue full condition and terminating all associated paths. Therefore, when planning your implementation, you should consider not intermixing volumes from different applications in an LSS pair that is part of a Consistency Group. Otherwise, the not-in-error volumes belonging to other applications will be frozen, too.
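A minimal sketch of such an automation step, using the LSS pairs from the setup example in Chapter 16 (source LSSs 10 and 11 on IBM.2107-7520781 mirrored to LSSs 20 and 21 on IBM.2107-75ABTV1); the commands take source_LSS:target_LSS pairs, and the exact options should be verified in the DS CLI reference:

dscli> freezepprc -dev IBM.2107-7520781 -remotedev IBM.2107-75ABTV1 10:20 11:21
dscli> unfreezepprc -dev IBM.2107-7520781 -remotedev IBM.2107-75ABTV1 10:20 11:21

The freezepprc command suspends the pairs and removes the paths for the listed LSS pairs, so the remote site holds a consistent image; the unfreezepprc command then releases the held I/O at the local site without waiting for the two-minute time-out.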

14.5 Metro Mirror paths and links


Metro Mirror pairs are set up between volumes in LSSs, usually in different disk subsystems, and these are normally in separate locations. A path (or group of paths) needs to be defined between the source LSS and the target LSS. These logical paths are defined over physical links between the disk subsystems. The physical link includes the host adapter in the source DS8000, the cabling, switches, or directors, any wide band or long distance transport devices (DWDM, channel extenders, and WAN), and the host adapters in the target disk subsystem. Physical links can carry multiple Metro Mirror logical paths, as shown in Figure 14-5.


Note: For Metro Mirror, the DS8000 supports Fibre Channel links only. To facilitate ease of testing, the DS8000 does support Metro Mirror source and target on the same DS8000.

Figure 14-5 Logical paths (one physical Fibre Channel link between the disk subsystems can carry the logical paths of many LSSs; up to 256 logical paths per FCP link)

Paths are unidirectional, that is, they are defined to operate in one direction or the other. Still, Metro Mirror is bidirectional: any particular pair of LSSs can have paths defined between them in opposite directions, so that each LSS can hold both source and target volumes of pairs with the other LSS. Opposite direction paths are also allowed to be defined on the same Fibre Channel physical link. For bandwidth and redundancy, more than one path can be created between the same LSSs. Metro Mirror balances the workload across the available paths between the source and target LSSs.

Note: Remember that the LSS is not a physical construct in the DS8000; it is a logical construct. Volumes in an LSS can come from multiple disk arrays.

Physical links are bidirectional and can be shared by other Metro Mirror pairs, as well as by other remote mirror and copy functions, such as Global Copy and Global Mirror.

14.5.1 Fibre Channel links


A DS8000 Fibre Channel port can simultaneously be:
- Sender for a Metro Mirror source
- Receiver for a Metro Mirror target
- Target for Fibre Channel Protocol (FCP) host I/O from open systems and Linux on System z

Although one FCP link would have sufficient bandwidth for most Metro Mirror environments, for redundancy reasons we recommend configuring two Fibre Channel links between each source and remote disk subsystem.


Dedicating Fibre Channel ports for Metro Mirror use guarantees no interference from host I/O activity. This is recommended with Metro Mirror, which is time critical and should not be impacted by host I/O activity. Each Metro Mirror port provides connectivity for all LSSs within the DS8000 and can carry multiple logical Metro Mirror paths.

Note: If you want the same set of Fibre Channel links to be shared between Metro Mirror and Global Copy or Global Mirror, you should consider the impact of the aggregate data transfer. In general, we do not recommend sharing the FCP links used for Metro Mirror, including the physical network links, with other asynchronous remote copy functions.

Metro Mirror FCP links can be directly connected, or connected by up to two switches.

Note: If you use channel extension technology devices for Metro Mirror links, you should verify with the product's vendor what environment (directly connected or connected with a SAN switch) is supported by the vendor and what SAN switches are supported.

14.5.2 Logical paths


A Metro Mirror logical path is a logical connection between the sending LSS and the receiving LSS. An FCP link can accommodate multiple logical paths. Figure 14-6 shows an example where we have a 1:1 mapping of source to target LSSs, and where the three logical paths are accommodated over one physical link:
- LSS1 in DS8000-1 to LSS1 in DS8000-2
- LSS2 in DS8000-1 to LSS2 in DS8000-2
- LSS3 in DS8000-1 to LSS3 in DS8000-2

Alternatively, if the volumes in each of the LSSs of DS8000-1 map to volumes in all three target LSSs in DS8000-2, there will be nine logical paths over the physical link (not fully illustrated in Figure 14-6). Note that we recommend a 1:1 LSS mapping.

Figure 14-6 Logical paths over a physical link for Metro Mirror (LSS 1, LSS 2, and LSS 3 in DS8000 1 each have one logical path to the corresponding LSS in DS8000 2; the three logical paths, or up to nine with cross mapping, travel over one physical link, port to port, through a switch)

Metro Mirror paths have certain architectural limits, which include:
- A source LSS can maintain paths up to a maximum of four target LSSs. Each target LSS can reside in a separate DS8000.
- You can define up to eight logical paths per LSS-LSS relationship. Each path requires a separate physical link.


- An FCP port can host up to 2048 logical paths. These are the logical and directional paths that are made from LSS to LSS.
- An FCP physical link (the physical connection from one port to another port) can host up to 256 logical paths.
- An FCP port can accommodate up to 126 different physical links (DS8000 port to DS8000 port through the SAN).

14.6 Bandwidth

Prior to establishing your Metro Mirror solution, you should establish what your peak bandwidth requirement will be. This will help to ensure that you have enough Metro Mirror links in place to support that requirement. To avoid response time issues, you should establish the peak write rate for your systems and ensure that you have adequate bandwidth to cope with this load and to allow for growth. Remember that only writes are mirrored across to the target volumes.

Some tools to assist you with this are TotalStorage Productivity Center (TPC) or operating system dependent tools, such as iostat. Refer to the Redbooks publication The IBM TotalStorage DS8000 Series: Implementation, SG24-6786. Another method, although not quite so exact, is to monitor the traffic over the FC switches using FC switch tools and other management tools, remembering again that only writes will be mirrored by Metro Mirror. You can also get a feel for the proportion of reads to writes by issuing datapath query devstats on SDD-attached servers.
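As a hedged illustration of collecting the write rate, the following commands could be run on an attached host during a representative peak period; the iostat invocation assumes a Linux host with the sysstat package installed, and the SDD command is the one mentioned above:

iostat -xk 60 10          (per-device throughput in KB/s at 60-second intervals; only the write columns matter for Metro Mirror sizing)
datapath query devstats   (SDD statistics per vpath device; the read and write counts give a feel for the read/write ratio)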

14.7 LSS design


Since the DS8000 has made the LSS a topological construct, which is not tied to a physical array as it was in the ESS, the design of your LSS layout can be simplified. It is now possible, for example, to assign LSSs to applications without concern about under-allocation or over-allocation of physical disk subsystem resources. This can also simplify the Metro Mirror environment, because it is possible to reduce the number of commands that are required for data consistency, as well as make their effects more granular.

For example, a freeze operation is performed at the LSS level, causing all Metro Mirror volumes in that LSS to go into the suspended state with a queue full condition and terminating all associated paths. If you assign LSSs to each of your applications, you can control the impact of the queue full condition caused by the freeze operation at an application level. Conversely, if you put volumes used by several different applications into the same LSS, all the applications sharing the LSS and their Metro Mirror volumes will be affected by the queue full condition. You can issue one freezepprc command to multiple LSSs, so you do not have to consolidate the number of LSSs that your applications use. See 14.4.2, Consistency Group function: How it works on page 182 for more information.

14.8 Distance
The distance between your local and remote DS8000 subsystems has an effect on the response time overhead of the Metro Mirror implementation. The maximum supported distance for Metro Mirror is 300 km.


14.9 Symmetrical configuration


When planning your Metro Mirror configuration, consider the possibility of a symmetrical configuration, in terms of both physical and logical elements. This has the following benefits:
- Simplified management: it is easier to see where volumes are mirrored, and processes can be easily automated.
- Reduced administrator overhead: due to automation and the simpler nature of the solution, overhead can be reduced.
- Simplified addition of new capacity into the environment: new arrays can be added in a modular fashion.
- Easier problem diagnosis: the simple structure of the solution helps to identify where any problems might exist.

Figure 14-7 shows this idea in graphical form. DS8000 #1 has Metro Mirror paths defined to DS8000 #2, which is in a remote location. On DS8000 #1, volumes defined in LSS 00 are mirrored to volumes in LSS 00 on DS8000 #2 (volume P1 is paired with volume S1, P2 with S2, P3 with S3, and so on). Volumes in LSS 01 on DS8000 #1 are mirrored to volumes in LSS 01 on DS8000 #2, and so on.

Requirements for additional capacity can also be met in a symmetrical way, by adding volumes into existing LSSs and by adding new LSSs when needed. (For example, adding two volumes to LSS 03 and LSS 05, and one volume to LSS 04, will bring them to the same number of volumes as the other LSSs. Additional volumes could then be distributed evenly across all LSSs, or additional LSSs added.) As well as making the maintenance of the Metro Mirror configuration easier, the symmetrical implementation has the added benefit of helping to balance the workload across the DS8000.

Figure 14-7 shows a logical configuration, but this idea applies equally to the physical aspects of the DS8000. You should attempt to balance workload and apply symmetrical concepts to other aspects of your DS8000 (for example, the Extent Pools).

Figure 14-7 Symmetrical Metro Mirror configuration


14.10 Volumes

You have to consider which volumes should be mirrored to the target site. One option is to mirror all volumes. This is advantageous for the following reasons:
- You do not have to consider whether any required data has been missed.
- Users do not have to remember which logical pool of volumes is mirrored and which is not.
- Adding volumes to the environment is simplified: you do not require two processes for the addition of disk (one for mirrored volumes and another for non-mirrored volumes).
- You can move data around your disk environment easily without being concerned about whether the target volume is a mirrored volume.

You might choose not to mirror all volumes. In this case, you need careful control over what data is placed on the mirrored volumes (to avoid any capacity issues) and what data you place on the non-mirrored volumes (to avoid missing any required data). One method of doing this is to place all mirrored volumes in a particular set of LSSs, in which all volumes are Metro Mirror-enabled, and direct all data requiring mirroring to these volumes.

Though mirroring all volumes might be the simpler solution to manage, it could also require significantly more network bandwidth. Since network bandwidth is a cost to the solution, minimizing the bandwidth might well be worth the added management complexity.

14.11 Hardware requirements


Metro Mirror is an optional licensed function available on the DS8000. Licensed functions require the selection of a DS8000 series feature number (IBM 2107) and the acquisition of DS8000 series Function Authorization (IBM 2244) feature numbers:
- The 2107 licensed function indicator feature number enables the technical activation of the function, subject to the client applying a feature activation code made available by IBM.
- The 2244 function authorization feature numbers establish the extent of IBM authorization for that function on the 2107 machine for which it was acquired.

To use Metro Mirror, you must have the 2107 function indicator feature (#0744) and the corresponding DS8000 Series Function Authorization (IBM 2244-RMC) with the adequate feature number (#7440-7450) to establish the extent of IBM authorization: the license size in terms of physical capacity. The DS8000 Series Function Authorizations are for billing purposes only and establish the extent of IBM authorization for use of a particular licensed function on the IBM System Storage DS8000 series (IBM 2107). Authorization must be acquired for both the source and target 2107 machines.

Note: For a detailed explanation of the features involved and the considerations that apply when ordering Metro Mirror, we recommend that you refer to the announcement letters:
- IBM System Storage DS8000 Function Authorization for Machine type 2244
- IBM FlashCopy SE features
- IBM System Storage DS8000 series (machine type 2107) delivers new functional capabilities
IBM announcement letters can be found at:
http://www.ibm.com/products


Interoperability

Metro Mirror pairs can only be established between disk subsystems of the same (or similar) type and features. For example, a DS8000 can have a Metro Mirror pair with another DS8000, a DS6000, an ESS 800, or an ESS 750. It cannot have a Metro Mirror pair with an RVA or an ESS F20. Note that all disk subsystems must have the appropriate Metro Mirror feature installed. If your DS8000 is being mirrored to an ESS disk subsystem, the ESS must have PPRC Version 2 (which supports Fibre Channel links) with the appropriate Licensed Internal Code (LIC) level.

Refer to the DS8000 Interoperability Matrix or to the System Storage Interoperation Center (SSIC) for more information:
http://www-1.ibm.com/servers/storage/disk/ds8000/interop.html
http://www-03.ibm.com/systems/support/storage/config/ssic/displayesssearchwithoutjs.wss?start_over=yes


Chapter 15. Metro Mirror performance and scalability


In this chapter we discuss performance and scalability considerations that you must consider when implementing Metro Mirror with the DS8000.


15.1 Performance

Because Metro Mirror is a synchronous mirroring technology, it introduces a performance overhead (for write operations) compared with a similar environment that has no remote mirroring. On the other hand, unlike operating system mirroring, it does not consume any host CPU cycles. You should understand this as part of the planning process for Metro Mirror. Bandwidth analysis and capacity planning for your Metro Mirror links should help you define how many links you need, and when you need to add more, to ensure the best possible performance.

15.1.1 Managing the load


As part of your implementation project, you might be able to identify and then distribute hot spots across your configuration, or take other actions to manage and balance the load. Here are some basic things you should consider:
- Is your bandwidth too small, so that you might see an increase in the response time of your applications at moments of high workload?
- We recommend that you do not share the Metro Mirror link I/O ports with host attachment ports. Sharing them can result in unpredictable Metro Mirror performance and a much more complicated search in the case of performance analysis.
- Distance is an important topic. Remember that the speed of light is less than 300,000 km/s, that is, less than 300 km/ms on fibre. The data must go to the other site, and then an acknowledgement must come back. Add possible latency times of some active components on the way, and you get approximately a 1-ms overhead per 100 km for write I/Os (a short worked example follows this list).
- Sometimes the problem is not Metro Mirror, but rather hot spots on the disks. Be sure that these problems are resolved before you start with Metro Mirror.
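As a rough worked example of the distance overhead, based on the figures above and counting only signal propagation (switch, channel extender, and DWDM latencies come on top):

one-way distance:   100 km
round trip:         2 x 100 km = 200 km
propagation time:   200 km x roughly 5 microseconds per km in fibre (light travels at about two thirds of its vacuum speed in glass) = approximately 1 ms added to each mirrored write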

15.1.2 Initial synchronization


When doing the initial synchronization of your Metro Mirror pairs, the DS8000 uses a throttling algorithm to prevent the Metro Mirror process from using too many source site resources. This might prolong the synchronization process if your DS8000 is busy at the time. You can choose to stagger your synchronization tasks or to run them at a time of low utilization to make this process more efficient. As an alternative, you might also choose to do the initial synchronization in the Global Copy mode, and then switch to the Metro Mirror mode. This will allow you to bring all volumes to copy pending state, which has little or no application impact, and then switch to full duplex state. You can do so using the DS CLI.
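A minimal, hedged sketch of this approach with the volume pairs from the Chapter 16 example: the pairs are first established as Global Copy (-type gcp) and, once the out-of-sync track count reported by lspprc -l is low, converted to Metro Mirror. We show the conversion as a second mkpprc with -type mmir against the existing pairs, which is our understanding of the procedure; verify the exact conversion step in the DS CLI reference before relying on it.

dscli> mkpprc -remotedev IBM.2107-75ABTV1 -type gcp 1000-1001:2000-2001
dscli> lspprc -l 1000-1001
dscli> mkpprc -remotedev IBM.2107-75ABTV1 -type mmir 1000-1001:2000-2001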


15.2 Scalability

The DS8000 Metro Mirror environment can be scaled up or down as required. If new volumes that require mirroring are added to the DS8000, they can be dynamically added to the configuration. If additional Metro Mirror paths are required, they can also be dynamically added.

As we have previously mentioned, the logical nature of the LSS has made a Metro Mirror implementation on the DS8000 easier to plan, implement, and manage. However, if you need to add more LSSs to your Metro Mirror environment, your management and automation solutions should be set up to handle this. TPC for Replication and the GDPS service offering are designed to provide this functionality. Visit the IBM Web site and see the Services and Solutions pages for more information. Also see Chapter 6, IBM TotalStorage Productivity Center for Replication on page 43.

Adding capacity to the same DS8000


If you are adding capacity to an existing DS8000, providing that your Metro Mirror link bandwidth is not close to or over capacity, it is possible that you would only have to add volume pairs to your configuration. If you are adding more LSSs, then you must define Metro Mirror paths before adding volume pairs.

Adding capacity in new DS8000s


If you are adding new DS8000s to your configuration, you must add physical Metro Mirror links before defining your Metro Mirror paths and volume pairs. A minimum of two Metro Mirror paths per DS8000 pair is recommended for redundancy reasons. Your bandwidth analysis will indicate if you require more than two paths.


Chapter 16. Metro Mirror interfaces and examples


In this chapter we describe the interfaces that you can use for Metro Mirror management for the IBM System Storage DS8000 in an open systems environment. We also present step-by-step examples that illustrate how to execute the setup and management tasks of the Metro Mirror environment. We cover the following topics:
- DS Command-Line Interface and DS Storage Manager overview
- Setting up a Metro Mirror environment
- Removing a Metro Mirror environment
- Managing the Metro Mirror environment
- Failover and Failback operations (site switch)
- The freezepprc and unfreezepprc commands
- Using the DS Storage Manager GUI to manage Metro Mirror


16.1 Metro Mirror interfaces


There are various interfaces available for the configuration and management of Metro Mirror for the DS8000 when used in an open systems environment:
- DS Command-Line Interface (DS CLI): This interface provides a set of commands that are executed on a workstation that communicates with the DS HMC.
- DS Storage Manager Graphical User Interface (DS GUI): This is a graphical user interface running in a Web browser. The DS GUI can be accessed using the preinstalled browser on the HMC console, through the DS8000 Element Manager on a TPC server, such as the SSPC (for new DS8000s with Licensed Machine Code 5.30xx.xx), or, for earlier DS8000 installations, through a supported Web browser on any workstation connected to the HMC console.
- TotalStorage Productivity Center for Replication (TPC for Replication): The TPC Replication Manager server, where TPC for Replication runs, connects to the DS8000. TPC for Replication provides management of DS8000 series business continuance solutions, including FlashCopy, Metro Mirror, and Global Mirror. TPC for Replication is covered in Chapter 6, IBM TotalStorage Productivity Center for Replication on page 43.

You can also use the following interfaces (these are not covered in this book):
- DS Open Application Programming Interface (DS Open API)
- z/OS interfaces: TSO, ICKDSF, and the ANTRQST API

In order to use a z/OS interface to manage open systems LUNs, the DS8000 must have at least one CKD volume. If you are interested in this possibility, refer to the Redbooks publication IBM System Storage DS8000 Series: Copy Services with IBM System z, SG24-6787. For information about the DS Open API, refer to the publication IBM System Storage DS Open Application Programming Interface Reference, GC35-0516.

DS CLI and DS GUI similar functions for Metro Mirror


Table 16-1 compares similar Metro Mirror commands from the DS CLI and the DS GUI, and the corresponding action.
Table 16-1 DS CLI and DS GUI commands and actions

Metro Mirror and Global Copy paths commands
- List available I/O ports that can be used to establish Metro Mirror paths.
  DS CLI: lsavailpprcport
  DS GUI: This information is shown during the process when a path is established.
- List established Metro Mirror paths.
  DS CLI: lspprcpath
  DS GUI: Copy Services > Paths
- Establish path.
  DS CLI: mkpprcpath
  DS GUI: Copy Services > Paths > Create
- Delete path.
  DS CLI: rmpprcpath
  DS GUI: Copy Services > Paths > Delete

Metro Mirror pairs commands
- Failback.
  DS CLI: failbackpprc
  DS GUI: Copy Services > Metro Mirror / Global Copy > Recovery Failback
- Failover.
  DS CLI: failoverpprc
  DS GUI: Copy Services > Metro Mirror / Global Copy > Recovery Failover
- List Metro Mirror volume pairs.
  DS CLI: lspprc
  DS GUI: Copy Services > Metro Mirror / Global Copy
- Establish pair.
  DS CLI: mkpprc
  DS GUI: Copy Services > Metro Mirror / Global Copy > Create
- Suspend pair.
  DS CLI: pausepprc
  DS GUI: Copy Services > Metro Mirror / Global Copy > Suspend
- Resume pair.
  DS CLI: resumepprc
  DS GUI: Copy Services > Metro Mirror / Global Copy > Resume
- Delete pair.
  DS CLI: rmpprc
  DS GUI: Copy Services > Metro Mirror / Global Copy > Delete
- Freeze Consistency Group.
  DS CLI: freezepprc
  DS GUI: Not available from the DS GUI.
- Thaw Consistency Group.
  DS CLI: unfreezepprc
  DS GUI: Not available from the DS GUI.

The DS CLI has the advantage that you can make scripts and use them for automation. The DS Storage Manager GUI is a Web-based graphical user interface and is more intuitive than the DS CLI.

16.2 Copy Services network components


The network components and connectivity of the DS HMC are illustrated in Figure 16-1.


Figure 16-1 DS8000 Copy Services network components (the DS Storage Manager, DS CLI, and DS API on the customer network connect to DS HMC 1, internal, and DS HMC 2, external, each running the real-time DS Storage Manager; the HMCs reach processor complexes 0 and 1 of the DS8000 through two Ethernet switches, and an SSPC running TPC can also connect to the HMCs)

DS GUI and DS CLI (as well as DS Open API) commands are issued via the Ethernet network to the DS Hardware Management Console (DS HMC). When the DS HMC receives the command requests, it communicates with each server in the disk subsystem via the Ethernet network. Therefore, the DS HMC is a key component for configuring and managing the DS8000 and its functions. Each DS8000 has an internal DS HMC in the base frame, and you can have an external DS HMC for redundancy.

You need at least one available DS HMC to issue Copy Services commands. If you have only one DS HMC and it fails, you will not be able to issue Copy Services commands. Therefore, we recommend a dual DS HMC configuration, as shown in Figure 16-1, especially when using automation scripts to run Copy Services functions, so that the scripts keep working if one DS HMC fails.

To establish a Metro Mirror pair, a LAN connection between the source and target DS8000s is not mandatory, unlike with the ESS. A similar situation applies to Global Copy. However, Copy Services DS CLI commands have to be issued to the DS HMC connected to the DS8000 that has the source volume. When you need to establish a Metro Mirror pair from a volume at the remote site to a volume at the local site, the DS CLI command has to be issued to the DS8000 HMC at the remote site. In this case, you need a LAN connection from the local DS CLI client machine to the DS8000 at the remote site.

16.3 DS Command-Line Interface (DS CLI)


You can use the DS CLI interface to manage all DS8000 Copy Services functions, such as defining paths, establishing Metro Mirror pairs, and so on. For a detailed explanation of the DS CLI, refer to Chapter 5, DS Command-Line Interface on page 31. When you establish or remove Metro Mirror paths and volume pairs, you must issue the DS CLI commands to the DS HMC that is connected to the source DS8000. Also, when checking status information at the local and remote sites, you must issue DS CLI list type commands, such as lspprc, to each DS8000, source and target.

The DS CLI commands are documented in the publication IBM System Storage DS8000 Command-Line Interface User's Guide, SC26-7916.

16.4 DS Storage Manager GUI


You can use the DS Storage Manager Graphical User Interface (DS GUI) to set up and control the Metro Mirror functions of the DS8000. It is user friendly; however, you cannot use it for automation activities, and certain Metro Mirror functions are not supported from this interface. For a detailed explanation of the DS GUI, refer to Chapter 4, DS Storage Manager on page 27.

Note: The DS GUI (unlike the DS CLI) supports a multi-session environment, but not all functions and options are supported by the DS GUI.

16.5 Setting up a Metro Mirror environment using the DS CLI


In the following sections we present an example of how to set up a Metro Mirror environment using the DS CLI. Figure 16-2 shows the configuration that we implement.

Figure 16-2 DS8000 configuration in the Metro Mirror setup example (DS8000#1, -dev IBM.2107-7520781, with volumes 1000 and 1001 in LSS10 and 1100 and 1101 in LSS11, is connected over physical Fibre Channel paths to DS8000#2, -dev IBM.2107-75ABTV1, with volumes 2000 and 2001 in LSS20 and 2100 and 2101 in LSS21)

In our example we use different LSS and LUN numbers for the Metro Mirror source and target elements, so that you can more clearly understand which one is being specified when reading through the example.

Note: In a real environment (and different from our example), to simplify the management of your Metro Mirror environment, we recommend that you maintain a symmetrical configuration in terms of both physical and logical elements.


16.5.1 Preparing to work with the DS CLI


As we prepare to work with the DS CLI, we do some initial tasks that will simplify our activities during the configuration process.

Simplifying the DS CLI command syntax


Before setting up the Metro Mirror environment, we recommend that you set up the following DS CLI environment to simplify the command syntax. To spare you from typing -dev storage_image_ID and -remotedev storage_image_ID in each command, you can put these values in your CLI profile. The devid and remotedevid profile entries are equivalent to the -dev storage_image_ID and -remotedev storage_image_ID command options. See Example 16-1.
Example 16-1 Part of CLI profile
# Internal SHMC ipaddress for DS8000#1
hmc1: 10.10.10.1
# External SHMC ipaddress for DS8000#1
hmc2: 10.10.10.2
# Default and target Storage Image ID
devid: IBM.2107-7520781
remotedevid:IBM.2107-75ABTV1

For further information, see Chapter 5, DS Command-Line Interface on page 31.

Creating a password file


With the managepwfile command you can create the password file on your DS CLI client. After creating the password file, you do not need to specify a password at login time. When you create an operational script to manage your remote mirror and copy environment, you can use this function so that you need not write a password in the script file. We recommend that you use this function from a security point of view. Example 16-2 shows how to create the password file.
Example 16-2 Creating password file
dscli> managepwfile -action add -name script1 -pw xyz1234
Date/Time: October 25, 2005 8:02:37 PM JST IBM DSCLI Version: 5.1.0.204
CMUC00206I managepwfile: Record 9.155.62.97/copy successfully added to password file C:\Documents and Settings\Administrator\dscli\security.dat.
dscli> exit

C:\Program Files\ibm\dscli>dscli -cfg name_of_profile -user script1
Date/Time: October 25, 2005 8:03:17 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
dscli>

16.5.2 Setup of the Metro Mirror configuration


Figure 16-3 shows the configuration we set up for this example. The configuration has four Metro Mirror pairs that reside in two LSSs. Two paths are defined between each source and target LSS.


Figure 16-3 Metro Mirror environment to be set up (source volumes 1000 and 1001 in LSS10 and 1100 and 1101 in LSS11 on DS8000#1, -dev IBM.2107-7520781, are paired with target volumes 2000 and 2001 in LSS20 and 2100 and 2101 in LSS21 on DS8000#2, -dev IBM.2107-75ABTV1; the Metro Mirror paths between the LSSs run over two Fibre Channel links)

To configure the Metro Mirror environment, we follow this procedure:
1. Determine the available Fibre Channel links for the paths definition.
2. Define the paths that Metro Mirror will use.
3. Create the Metro Mirror pairs.

16.5.3 Determining the available Fibre Channel links


First you must look at the available Fibre Channel links. With the lsavailpprcport command, you can do this (see Example 16-3). You see all available port combinations between the source and the target LSSs. You have to issue this command to the DS HMC connected to DS8000#1, which is the Metro Mirror source.
Example 16-3 List available fibre links
dscli> lsavailpprcport -l -remotedev IBM.2107-75ABTV1 -remotewwnn 5005076303FFC663 10:20
Date/Time: October 25, 2005 9:59:28 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
Local Port Attached Port Type Switch ID Switch Port
===================================================
I0143      I0010         FCP  NA        NA
I0213      I0140         FCP  NA        NA

The FCP port ID used by the lsavailpprcport command has four hexadecimal characters in the format 0xEEAP, where EE is the port enclosure number (00-3F), A is the adapter number (0-F), and P is the port number (0-F). The FCP port ID number is prefixed with the letter I.
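For example, decoding port I0143 from Example 16-3 according to this format gives enclosure EE = 01, adapter A = 4, and port P = 3; likewise, I0010 is port 0 of adapter 1 in enclosure 00.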
You can use the -fullid parameter to display the DS8000 Storage Image ID in the command output (see Example 16-4).
Example 16-4 List available fibre links with DS8000 Storage Image ID
dscli> lsavailpprcport -l -fullid -remotedev IBM.2107-75ABTV1 -remotewwnn 5005076303FFC663 10:20
Date/Time: October 25, 2005 10:00:26 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
Local Port             Attached Port          Type Switch ID Switch Port
========================================================================
IBM.2107-7520781/I0143 IBM.2107-75ABTV1/I0010 FCP  NA        NA
IBM.2107-7520781/I0213 IBM.2107-75ABTV1/I0140 FCP  NA        NA


You need the worldwide node name (WWNN) of your target DS8000 to issue the lsavailpprcport command. You can get this by using the lssi command; see Example 16-5. You have to issue this command to the DS HMC connected to DS8000#2, which is the Metro Mirror target.
Example 16-5 Get WWNN of target DS8000
dscli> lssi
Date/Time: October 25, 2005 8:38:21 PM JST IBM DSCLI Version: 5.1.0.204
Name ID Storage Unit Model WWNN State ESSNet
============================================================================
IBM.2107-75ABTV1 IBM.2107-75ABTV0 9A2 5005076303FFC663 Online Enabled
IBM.2107-75ABTV2 IBM.2107-75ABTV0 9A2 5005076303FFCE63 Online Enabled

16.5.4 Creating Metro Mirror paths


Now you can use the mkpprcpath command to create paths between the source and target LSSs and then verify the result with an lspprcpath command. You have to issue the mkpprcpath command for each LSS pair; see Example 16-6.
Example 16-6 Create Metro Mirror paths and list them
dscli> mkpprcpath -remotedev IBM.2107-75ABTV1 -remotewwnn 5005076303FFC663 -srclss 10 -tgtlss 20 i0143:i0010 i0213:i0140
Date/Time: October 25, 2005 10:26:56 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
CMUC00149I mkpprcpath: Remote Mirror and Copy path 10:20 successfully established.

dscli> mkpprcpath -remotedev IBM.2107-75ABTV1 -remotewwnn 5005076303FFC663 -srclss 11 -tgtlss 21 i0143:i0010 i0213:i0140
Date/Time: October 25, 2005 10:27:14 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
CMUC00149I mkpprcpath: Remote Mirror and Copy path 11:21 successfully established.

dscli> lspprcpath 10-11
Date/Time: October 25, 2005 10:29:23 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
Src Tgt State   SS   Port  Attached Port Tgt WWNN
=========================================================
10  20  Success FF20 I0143 I0010         5005076303FFC663
10  20  Success FF20 I0213 I0140         5005076303FFC663
11  21  Success FF21 I0143 I0010         5005076303FFC663
11  21  Success FF21 I0213 I0140         5005076303FFC663

With the lspprcpath command, you can use the -fullid command flag to display the fully qualified DS8000 Storage Image ID in the command output; see Example 16-7.
Example 16-7 List paths with DS8000 Storage Image ID
dscli> lspprcpath -fullid 10-11
Date/Time: October 25, 2005 10:43:14 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
Src                 Tgt                 State   SS   Port                   Attached Port          Tgt WWNN
===================================================================================================================
IBM.2107-7520781/10 IBM.2107-75ABTV1/20 Success FF20 IBM.2107-7520781/I0143 IBM.2107-75ABTV1/I0010 5005076303FFC663
IBM.2107-7520781/10 IBM.2107-75ABTV1/20 Success FF20 IBM.2107-7520781/I0213 IBM.2107-75ABTV1/I0140 5005076303FFC663
IBM.2107-7520781/11 IBM.2107-75ABTV1/21 Success FF21 IBM.2107-7520781/I0143 IBM.2107-75ABTV1/I0010 5005076303FFC663
IBM.2107-7520781/11 IBM.2107-75ABTV1/21 Success FF21 IBM.2107-7520781/I0213 IBM.2107-75ABTV1/I0140 5005076303FFC663
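If the LSS pairs are to be part of a Consistency Group (see 14.4, Consistency Group function on page 180), the Consistency Group option can be enabled when the paths are created. A minimal sketch for the paths of this example follows; we recall the DS CLI flag as -consistgrp, which should be verified against the DS CLI reference for your code level:

dscli> mkpprcpath -remotedev IBM.2107-75ABTV1 -remotewwnn 5005076303FFC663 -srclss 10 -tgtlss 20 -consistgrp i0143:i0010 i0213:i0140
dscli> mkpprcpath -remotedev IBM.2107-75ABTV1 -remotewwnn 5005076303FFC663 -srclss 11 -tgtlss 21 -consistgrp i0143:i0010 i0213:i0140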

16.5.5 Creating Metro Mirror pairs


After creating the paths, you can establish Metro Mirror volume pairs. You do this by using the mkpprc command and verifying the result with the lspprc command; see Example 16-8. When creating a Metro Mirror pair, you must specify the -type mmir parameter with the mkpprc command.

Example 16-8 Create Metro Mirror pairs and verify the result
dscli> mkpprc -remotedev IBM.2107-75ABTV1 -type mmir 1000-1001:2000-2001 1100-1101:2100-2101
Date/Time: October 25, 2005 11:19:06 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1000:2000 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1001:2001 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1100:2100 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1101:2101 successfully created.

dscli> lspprc 1000-1001 1100-1101
Date/Time: October 25, 2005 11:26:59 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status
===================================================================================================
1000:2000 Copy Pending Metro Mirror 10 unknown Disabled Invalid
1001:2001 Copy Pending Metro Mirror 10 unknown Disabled Invalid
1100:2100 Copy Pending Metro Mirror 11 unknown Disabled Invalid
1101:2101 Copy Pending Metro Mirror 11 unknown Disabled Invalid

Once the Metro Mirror source and target volumes have been synchronized, the volume state changes from Copy Pending to Full Duplex; see Example 16-9.
Example 16-9 List Metro Mirror status after Metro Mirror initial copy completes dscli> lspprc 1000-1001 1100-1101
Date/Time: October 25, 2005 11:28:55 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ================================================================================================== 1000:2000 Full Duplex Metro Mirror 10 unknown Disabled Invalid 1001:2001 Full Duplex Metro Mirror 10 unknown Disabled Invalid 1100:2100 Full Duplex Metro Mirror 11 unknown Disabled Invalid 1101:2101 Full Duplex Metro Mirror 11 unknown Disabled Invalid

The Full Duplex and Copy Pending states refer to the Metro Mirror source volumes. For the target volumes, the corresponding states are Target Full Duplex and Target Copy Pending; see Example 16-10. You have to give this command to the DS HMC connected to DS8000#2, which holds the Metro Mirror target volumes.
Example 16-10 lspprc for Metro Mirror target volumes
dscli> lspprc 2000-2001 2100-2101
Date/Time: October 26, 2005 5:28:37 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ========================================================================================================== 1000:2000 Target Copy Pending Metro Mirror 10 unknown Disabled Invalid 1001:2001 Target Copy Pending Metro Mirror 10 unknown Disabled Invalid 1100:2100 Target Copy Pending Metro Mirror 11 unknown Disabled Invalid 1101:2101 Target Copy Pending Metro Mirror 11 unknown Disabled Invalid

dscli> lspprc

2000-2001 2100-2101

Date/Time: October 26, 2005 5:29:14 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ========================================================================================================= 1000:2000 Target Full Duplex Metro Mirror 10 unknown Disabled Invalid 1001:2001 Target Full Duplex Metro Mirror 10 unknown Disabled Invalid 1100:2100 Target Full Duplex Metro Mirror 11 unknown Disabled Invalid 1101:2101 Target Full Duplex Metro Mirror 11 unknown Disabled Invalid

While the pairs are in the Copy Pending state, you can check the data transfer status of the Metro Mirror initial copy by using the lspprc -l command. The Out Of Sync Tracks column shows the number of tracks that remain to be sent to the target volume; for a DS8000 fixed block (FB) volume, the size of a logical track is 64 KB. You can also use the lspprc -fullid command to show the DS8000 Storage Image ID in the command output; see Example 16-11.


Example 16-11 lspprc -l and lspprc -fullid for Metro Mirror pairs
dscli> lspprc -l 1000-1001 1100-1101
Date/Time: October 25, 2005 11:26:22 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type Out Of Sync Tracks Tgt Read Src Cascade Tgt Cascade Date Suspended SourceLSS Timeout (secs) Critical Mode First Pass Status ========================================================================================================================================== 1000:2000 Copy Pending Metro Mirror 46725 Disabled Disabled invalid 10 unknown Disabled Invalid 1001:2001 Copy Pending Metro Mirror 46579 Disabled Disabled invalid 10 unknown Disabled Invalid 1100:2100 Copy Pending Metro Mirror 44080 Disabled Disabled invalid 11 unknown Disabled Invalid 1101:2101 Copy Pending Metro Mirror 44040 Disabled Disabled invalid 11 unknown Disabled Invalid

dscli> lspprc -fullid 1000-1001 1100-1101


Date/Time: October 25, 2005 11:28:18 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ========================================================================================================================================== IBM.2107-7520781/1000:IBM.2107-75ABTV1/2000 Full Duplex Metro Mirror IBM.2107-7520781/10 unknown Disabled Invalid IBM.2107-7520781/1001:IBM.2107-75ABTV1/2001 Full Duplex Metro Mirror IBM.2107-7520781/10 unknown Disabled Invalid IBM.2107-7520781/1100:IBM.2107-75ABTV1/2100 Full Duplex Metro Mirror IBM.2107-7520781/11 unknown Disabled Invalid IBM.2107-7520781/1101:IBM.2107-75ABTV1/2101 Full Duplex Metro Mirror IBM.2107-7520781/11 unknown Disabled Invalid dscli>
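As a rough, illustrative calculation (not part of the DS CLI output): with a logical track size of 64 KB, the 46725 Out Of Sync Tracks reported for pair 1000:2000 in Example 16-11 correspond to 46725 x 64 KB, or roughly 3 GB of data that still has to be copied to the target volume.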

16.6 Removing Metro Mirror environment using DS CLI


In this section we show how to terminate the Metro Mirror environment that was set up in the previous sections. We follow these main steps:
1. Remove Metro Mirror pairs.
2. Remove logical paths.

16.6.1 Step 1: Remove Metro Mirror pairs


The rmpprc command removes the volume pair relationships; see Example 16-12. You can use the -quiet parameter to turn off the confirmation prompt for this command.
Example 16-12 Removing Metro Mirror pairs dscli> lspprc 1000-1001 1100-1101
Date/Time: October 26, 2005 5:36:43 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ================================================================================================== 1000:2000 Full Duplex Metro Mirror 10 unknown Disabled Invalid 1001:2001 Full Duplex Metro Mirror 10 unknown Disabled Invalid 1100:2100 Full Duplex Metro Mirror 11 unknown Disabled Invalid 1101:2101 Full Duplex Metro Mirror 11 unknown Disabled Invalid

dscli> rmpprc -remotedev IBM.2107-75ABTV1 1000-1001:2000-2001 1100-1101:2100-2101


Date/Time: October 26, 2005 5:36:49 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00160W rmpprc: Are you sure you want to delete the Remote Mirror and Copy volume pair relationship 1000-1001:2000-2001:? [y/n]:y CMUC00155I rmpprc: Remote Mirror and Copy volume pair 1000:2000 relationship successfully withdrawn. CMUC00155I rmpprc: Remote Mirror and Copy volume pair 1001:2001 relationship successfully withdrawn. CMUC00160W rmpprc: Are you sure you want to delete the Remote Mirror and Copy volume pair relationship 1100-1101:2100-2101:? [y/n]:y CMUC00155I rmpprc: Remote Mirror and Copy volume pair 1100:2100 relationship successfully withdrawn. CMUC00155I rmpprc: Remote Mirror and Copy volume pair 1101:2101 relationship successfully withdrawn.

You can add the -at tgt parameter to the rmpprc command to remove the relationship at the Metro Mirror target volume only; see Example 16-13. These commands are given to the DS HMC connected to DS8000#2, which holds the Metro Mirror target volumes.


Example 16-13 Results of rmpprc with -at tgt dscli> lspprc 2002
Date/Time: October 26, 2005 6:16:13 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ========================================================================================================= 1002:2002 Target Full Duplex Metro Mirror 10 unknown Disabled Invalid dscli>

dscli> rmpprc -quiet -remotedev IBM.2107-75ABTV1 -at tgt 1002:2002


Date/Time: October 26, 2005 6:16:42 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 CMUC00155I rmpprc: Remote Mirror and Copy volume pair 1002:2002 relationship successfully withdrawn. dscli>

dscli> lspprc 2002


Date/Time: October 26, 2005 6:16:49 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 CMUC00234I lspprc: No Remote Mirror and Copy found.

Example 16-14 shows the Metro Mirror source volume status after the rmpprc -at tgt command completed and the result of a rmpprc -at src command. In this case, there are still available paths. Therefore, the source status changed after the rmpprc -at tgt command completed. If there are no available paths, the state of the Metro Mirror source volumes is preserved. You have to give the command to the DS HMC connected to DS8000#1, which is the Metro Mirror source.
Example 16-14 Metro Mirror source volume status after rmpprc with -at tgt and rmpprc with -at src dscli> lspprc 1002
Date/Time: October 26, 2005 6:16:22 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ================================================================================================== 1002:2002 Full Duplex Metro Mirror 10 unknown Disabled Invalid << After rmpprc -at tgt command completes >>

dscli> lspprc 1002


Date/Time: October 26, 2005 6:16:53 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ======================================================================================================== 1002:2002 Suspended Simplex Target Metro Mirror 10 unknown Disabled Invalid

dscli> rmpprc -quiet -remotedev IBM.2107-75ABTV1 -at src 1002:2002


Date/Time: October 26, 2005 6:17:17 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00155I rmpprc: Remote Mirror and Copy volume pair 1002:2002 relationship successfully withdrawn.

dscli> lspprc 1002


Date/Time: October 26, 2005 6:17:21 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00234I lspprc: No Remote Mirror and Copy found.

16.6.2 Step 2: Remove paths


The rmpprcpath command removes paths. Before removing the paths, you must remove all remote mirror pairs that are using the paths or you must use the -force parameter with the rmpprcpath command; see Example 16-15.
Example 16-15 Remove paths
dscli> lspprc 1000-1001 1100-1101
Date/Time: October 26, 2005 6:36:22 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
CMUC00234I lspprc: No Remote Mirror and Copy found.
dscli>
dscli> rmpprcpath -remotedev IBM.2107-75ABTV1 10:20 11:21
Date/Time: October 26, 2005 6:37:08 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
CMUC00152W rmpprcpath: Are you sure you want to remove the Remote Mirror and Copy path 10:20:? [y/n]:y
CMUC00150I rmpprcpath: Remote Mirror and Copy path 10:20 successfully removed.
CMUC00152W rmpprcpath: Are you sure you want to remove the Remote Mirror and Copy path 11:21:? [y/n]:y
CMUC00150I rmpprcpath: Remote Mirror and Copy path 11:21 successfully removed.

If you do not remove the Metro Mirror pairs that are using the paths, the rmpprcpath command fails; see Example 16-16.
Example 16-16 Removing paths still having Metro Mirror pairs dscli> lspprc 1002
Date/Time: October 26, 2005 7:57:34 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ================================================================================================== 1002:2002 Full Duplex Metro Mirror 10 unknown Disabled Invalid dscli>

dscli> rmpprcpath -remotedev IBM.2107-75ABTV1 -quiet 10:20


Date/Time: October 26, 2005 7:58:22 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUN03070E rmpprcpath: 10:20: Copy Services operation failure: pairs remain

If you want to remove logical paths while Metro Mirror pairs still exist, you can use the -force parameter; see Example 16-17. After the path has been removed, the Metro Mirror pair remains in the full duplex state until the Metro Mirror source receives write I/O from the servers. When write I/O arrives at the Metro Mirror source, the source volume becomes suspended. If you set the Consistency Group option for the LSS in which the Metro Mirror volumes reside, I/Os from the servers are held with queue full status for the specified timeout value.
Example 16-17 Removing paths while still having Metro Mirror pairs with force parameter dscli> lspprc 1002
Date/Time: October 26, 2005 7:57:34 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ================================================================================================== 1002:2002 Full Duplex Metro Mirror 10 unknown Disabled Invalid dscli>

dscli> rmpprcpath -remotedev IBM.2107-75ABTV1 -quiet 10:20


Date/Time: October 26, 2005 7:58:22 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUN03070E rmpprcpath: 10:20: Copy Services operation failure: pairs remain dscli>

dscli> rmpprcpath -remotedev IBM.2107-75ABTV1 -quiet -force 10:20


Date/Time: October 26, 2005 8:06:22 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00150I rmpprcpath: Remote Mirror and Copy path 10:20 successfully removed.

dscli> lspprc 1002


Date/Time: October 26, 2005 8:06:28 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ================================================================================================== 1002:2002 Full Duplex Metro Mirror 10 unknown Disabled Invalid

<< After I/O goes to the source volume >> dscli> lspprc 1002
Date/Time: October 26, 2005 10:00:43 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ==================================================================================================================== 1002:2002 Suspended Internal Conditions Target Metro Mirror 10 unknown Disabled Invalid


16.7 Managing the Metro Mirror environment with the DS CLI


In this section we show how to manage the Metro Mirror environment, including suspending and resuming Metro Mirror pairs and changing logical paths.

16.7.1 Suspending and resuming Metro Mirror data transfer


The pausepprc command stops Metro Mirror from transferring data to the target volumes. After this command completes, the Metro Mirror pair becomes suspended. I/Os from the servers complete at the Metro Mirror source volumes without sending those updates to their target volumes; see Example 16-18.
Example 16-18 Suspending Metro Mirror data transfer dscli> lspprc 1000-1001 1100-1101
Date/Time: October 26, 2005 11:00:17 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ================================================================================================== 1000:2000 Full Duplex Metro Mirror 10 unknown Disabled Invalid 1001:2001 Full Duplex Metro Mirror 10 unknown Disabled Invalid 1100:2100 Full Duplex Metro Mirror 11 unknown Disabled Invalid 1101:2101 Full Duplex Metro Mirror 11 unknown Disabled Invalid dscli>

dscli> pausepprc -remotedev IBM.2107-75ABTV1 1000-1001:2000-2001


Date/Time: October 26, 2005 11:00:21 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00157I pausepprc: Remote Mirror and Copy volume pair 1000:2000 relationship successfully paused. CMUC00157I pausepprc: Remote Mirror and Copy volume pair 1001:2001 relationship successfully paused. dscli>

dscli> lspprc 1000-1001 1100-1101


Date/Time: October 26, 2005 11:00:33 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ======================================================================================================= 1000:2000 Suspended Host Source Metro Mirror 10 unknown Disabled Invalid 1001:2001 Suspended Host Source Metro Mirror 10 unknown Disabled Invalid 1100:2100 Full Duplex Metro Mirror 11 unknown Disabled Invalid 1101:2101 Full Duplex Metro Mirror 11 unknown Disabled Invalid

Because the source DS8000 keeps track of all changed data on the source volume, you can resume Metro Mirror operations at a later time. The resumepprc command resumes a Metro Mirror relationship for a volume pair and restarts the data transfer. You must specify the Remote Mirror and Copy type, such as Metro Mirror or Global Copy, with the -type parameter; see Example 16-19. While the pairs are resynchronizing, their state is copy pending and, because of the way the out-of-sync tracks are copied, data consistency at the target volumes is not maintained. Therefore, you must take specific action to keep a consistent copy of the data at the recovery site while resuming Metro Mirror pairs. Taking a FlashCopy of the target volumes at the recovery site before the resynchronization is one way to do this (see the sketch after Example 16-19).
Example 16-19 Resuming Metro Mirror pairs dscli> lspprc 1000-1001 1100-1101
Date/Time: October 26, 2005 11:05:07 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ======================================================================================================= 1000:2000 Suspended Host Source Metro Mirror 10 unknown Disabled Invalid 1001:2001 Suspended Host Source Metro Mirror 10 unknown Disabled Invalid 1100:2100 Full Duplex Metro Mirror 11 unknown Disabled Invalid 1101:2101 Full Duplex Metro Mirror 11 unknown Disabled Invalid dscli>

dscli> resumepprc -remotedev IBM.2107-75ABTV1 -type mmir 1000-1001:2000-2001


Date/Time: October 26, 2005 11:05:28 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781


CMUC00158I resumepprc: Remote Mirror and Copy volume pair 1000:2000 relationship successfully resumed. This message is being returned before the copy completes. CMUC00158I resumepprc: Remote Mirror and Copy volume pair 1001:2001 relationship successfully resumed. This message is being returned before the copy completes. dscli>

dscli> lspprc 1000-1001 1100-1101


Date/Time: October 26, 2005 11:05:33 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ================================================================================================== 1000:2000 Full Duplex Metro Mirror 10 unknown Disabled Invalid 1001:2001 Full Duplex Metro Mirror 10 unknown Disabled Invalid 1100:2100 Full Duplex Metro Mirror 11 unknown Disabled Invalid 1101:2101 Full Duplex Metro Mirror 11 unknown Disabled Invalid dscli>
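The following is a minimal sketch of how such a protective FlashCopy could look in our configuration before the resumepprc command is issued. It assumes that four additional, otherwise unused volumes 2800, 2801, 2900, and 2901 exist at the recovery site to act as FlashCopy targets (these volume IDs are hypothetical), and it uses the DS CLI mkflash command, given to the DS HMC connected to DS8000#2:

dscli> mkflash -nocp 2000:2800 2001:2801 2100:2900 2101:2901

With the -nocp option, no background copy is started; the FlashCopy simply preserves a consistent image of the B volumes that can be discarded once the Metro Mirror pairs are back in the full duplex state.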

16.7.2 Adding and removing paths


You can use the mkpprcpath command to add paths to, or remove paths from, an existing LSS pair relationship. In Example 16-20, for each LSS pair (10:20 and 11:21), we add one additional path (I0102:I0031) to the existing two paths (I0143:I0010 and I0213:I0140).
Example 16-20 Adding paths
dscli> lspprcpath 10-11
Date/Time: October 26, 2005 11:38:09 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
Src Tgt State   SS   Port  Attached Port Tgt WWNN
=========================================================
10  20  Success FF20 I0143 I0010         5005076303FFC663
10  20  Success FF20 I0213 I0140         5005076303FFC663
11  21  Success FF21 I0143 I0010         5005076303FFC663
11  21  Success FF21 I0213 I0140         5005076303FFC663
dscli>
dscli> lsavailpprcport -l -remotedev IBM.2107-75ABTV1 -remotewwnn 5005076303FFC663 10:20
Date/Time: October 26, 2005 11:40:32 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
Local Port Attached Port Type Switch ID Switch Port
===================================================
I0102      I0031         FCP  NA        NA
I0143      I0010         FCP  NA        NA
I0213      I0140         FCP  NA        NA
dscli> lsavailpprcport -l -remotedev IBM.2107-75ABTV1 -remotewwnn 5005076303FFC663 11:21
Date/Time: October 26, 2005 11:43:28 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
Local Port Attached Port Type Switch ID Switch Port
===================================================
I0102      I0031         FCP  NA        NA
I0143      I0010         FCP  NA        NA
I0213      I0140         FCP  NA        NA
dscli>
dscli> mkpprcpath -remotedev IBM.2107-75ABTV1 -remotewwnn 5005076303FFC663 -srclss 10 -tgtlss 20 i0143:i0010 i0213:i0140 i0102:i0031
Date/Time: October 26, 2005 11:43:52 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
CMUC00149I mkpprcpath: Remote Mirror and Copy path 10:20 successfully established.
dscli> mkpprcpath -remotedev IBM.2107-75ABTV1 -remotewwnn 5005076303FFC663 -srclss 11 -tgtlss 21 i0143:i0010 i0213:i0140 i0102:i0031
Date/Time: October 26, 2005 11:44:01 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
CMUC00149I mkpprcpath: Remote Mirror and Copy path 11:21 successfully established.
dscli>
dscli> lspprcpath 10-11
Date/Time: October 26, 2005 11:44:10 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
Src Tgt State   SS   Port  Attached Port Tgt WWNN
=========================================================
10  20  Success FF20 I0143 I0010         5005076303FFC663
10  20  Success FF20 I0213 I0140         5005076303FFC663
10  20  Success FF20 I0102 I0031         5005076303FFC663
11  21  Success FF21 I0143 I0010         5005076303FFC663
11  21  Success FF21 I0213 I0140         5005076303FFC663
11  21  Success FF21 I0102 I0031         5005076303FFC663

Important: When adding paths with the mkpprcpath command, you must specify all of the paths that you want to use. This includes the existing paths. Otherwise, you lose those definitions that were already there.

To reduce the number of paths, you can also use the mkpprcpath command. In Example 16-21, for each LSS pair (10:20 and 11:21), we remove one path (I0102:I0031) from the existing three paths (I0143:I0010, I0213:I0140, I0102:I0031).
Example 16-21 Removing paths dscli> lspprcpath 10-11 Date/Time: October 26, 2005 11:44:10 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 Src Tgt State SS Port Attached Port Tgt WWNN ========================================================= 10 20 Success FF20 I0143 I0010 5005076303FFC663 10 20 Success FF20 I0213 I0140 5005076303FFC663 10 20 Success FF20 I0102 I0031 5005076303FFC663 11 21 Success FF21 I0143 I0010 5005076303FFC663 11 21 Success FF21 I0213 I0140 5005076303FFC663 11 21 Success FF21 I0102 I0031 5005076303FFC663 dscli> dscli> mkpprcpath -remotedev IBM.2107-75ABTV1 -remotewwnn 5005076303FFC663 -srclss 10 -tgtlss 20 i0143:i0010 i0213:i0140 Date/Time: October 26, 2005 11:52:48 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00149I mkpprcpath: Remote Mirror and Copy path 10:20 successfully established. dscli> mkpprcpath -remotedev IBM.2107-75ABTV1 -remotewwnn 5005076303FFC663 -srclss 11 -tgtlss 21 i0143:i0010 i0213:i0140 Date/Time: October 26, 2005 11:53:00 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00149I mkpprcpath: Remote Mirror and Copy path 11:21 successfully established. dscli> dscli> lspprcpath 10-11 Date/Time: October 26, 2005 11:53:06 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 Src Tgt State SS Port Attached Port Tgt WWNN ========================================================= 10 20 Success FF20 I0143 I0010 5005076303FFC663 10 20 Success FF20 I0213 I0140 5005076303FFC663 11 21 Success FF21 I0143 I0010 5005076303FFC663 11 21 Success FF21 I0213 I0140 5005076303FFC663


16.8 Failover and Failback functions for sites switching


In this section we describe the use of the Metro Mirror Failover and Failback functions in planned and unplanned outage scenarios.

Planned outages
The planned outage procedures rely on two facts:
- Metro Mirror source and target volumes are in a consistent and current state.
- Both DS8000s are functional and reachable.
You can swap sites without any full copy operation by combining Metro Mirror initialization modes.

Unplanned outages
In contrast to the assumptions for planned outages, the situation in a disaster is more difficult:
- In an unplanned outage situation, only the DS8000 at the recovery site is functioning. The production site DS8000 might be lost or unreachable.
- Volumes at the production and recovery sites might be in different states. In a planned situation you can stop all I/O at the production site so that all volumes at the recovery site reach a consistent status; this cannot be done in an unplanned situation.
- If you are not using Consistency Groups, then in the case of, for example, a power failure, you can only assume consistency at the level of a single volume pair, not at the application level.

16.8.1 Metro Mirror Failover function


In our example, we used the Metro Mirror environment shown in Figure 16-4. We have four Metro Mirror source volumes in DS8000#1 (serial number 7520781) at the production site and four Metro Mirror target volumes in DS8000#2 (serial number 75ABTV1) at the recovery site. We call the volumes at the production site the A volumes; initially they are the Metro Mirror source volumes. We call the volumes at the recovery site the B volumes; initially they are the Metro Mirror target volumes. During site switch operations, the A and B volumes alternately become source and target, so we use the A and B terminology for easier understanding.


(Metro Mirror paths over Fibre Channel links connect DS8000#1, -dev IBM.2107-7520781, at the production site, holding the source A volumes 1000 and 1001 in LSS10 and 1100 and 1101 in LSS11, with DS8000#2, -dev IBM.2107-75ABTV1, at the recovery site, holding the target B volumes 2000 and 2001 in LSS20 and 2100 and 2101 in LSS21.)
Figure 16-4 Metro Mirror environment for sites switch example

A planned site switch using the Metro Mirror Failover function involves the following steps. If the site switch is because of an unplanned outage, then the procedure starts from step 4 on page 214:
1. When the planned outage window is reached, the applications at the production site (A) must be quiesced to cease all write I/O activity. This way, there are no more updates to the source volumes. Depending on the host operating system, it might be necessary to dismount the source volumes.
2. Next make sure that all Metro Mirror pairs are in full duplex state. It is better to check on both sites, on DS8000#1 and on DS8000#2. In order to do this, you must issue the lspprc command to each DS HMC; see Example 16-22.
Example 16-22 Check Metro Mirror state at the production site and the recovery site << DS8000#1 >> dscli> lspprc 1000-1001 1100-1101
Date/Time: October 27, 2005 7:09:43 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ================================================================================================== 1000:2000 Full Duplex Metro Mirror 10 unknown Disabled Invalid 1001:2001 Full Duplex Metro Mirror 10 unknown Disabled Invalid 1100:2100 Full Duplex Metro Mirror 11 unknown Disabled Invalid 1101:2101 Full Duplex Metro Mirror 11 unknown Disabled Invalid

<< DS8000#2 >> dscli> lspprc 2000-2001 2100-2101


Date/Time: October 27, 2005 7:10:13 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ========================================================================================================= 1000:2000 Target Full Duplex Metro Mirror 10 unknown Disabled Invalid 1001:2001 Target Full Duplex Metro Mirror 10 unknown Disabled Invalid 1100:2100 Target Full Duplex Metro Mirror 11 unknown Disabled Invalid 1101:2101 Target Full Duplex Metro Mirror 11 unknown Disabled Invalid


3. You can give the commands freezepprc and unfreezepprc to ensure that no data can possibly be transferred to the target volumes (B volumes); see Example 16-23. This operation is optional because the application at the production site has been quiesced. Therefore, no data is sent to the target volumes.
Example 16-23 freezepprc and unfreezepprc dscli> freezepprc -remotedev IBM.2107-75ABTV1 10:20 11:21 Date/Time: October 27, 2005 7:44:33 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00161W freezepprc: Remote Mirror and Copy consistency group 10:20 successfully created. CMUC00161W freezepprc: Remote Mirror and Copy consistency group 11:21 successfully created. dscli> dscli> lspprc 1000-1001 1100-1101
Date/Time: October 27, 2005 7:44:41 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ================================================================================================ 1000:2000 Suspended Freeze Metro Mirror 10 unknown Disabled Invalid 1001:2001 Suspended Freeze Metro Mirror 10 unknown Disabled Invalid 1100:2100 Suspended Freeze Metro Mirror 11 unknown Disabled Invalid 1101:2101 Suspended Freeze Metro Mirror 11 unknown Disabled Invalid

dscli> dscli> unfreezepprc -remotedev IBM.2107-75ABTV1 10:20 11:21


Date/Time: October 27, 2005 7:44:55 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00198I unfreezepprc: Remote Mirror and Copy pair 10:20 successfully thawed. CMUC00198I unfreezepprc: Remote Mirror and Copy pair 11:21 successfully thawed.

dscli> dscli> lspprc 1000-1001 1100-1101


Date/Time: October 27, 2005 7:45:00 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ================================================================================================ 1000:2000 Suspended Freeze Metro Mirror 10 unknown Disabled Invalid 1001:2001 Suspended Freeze Metro Mirror 10 unknown Disabled Invalid 1100:2100 Suspended Freeze Metro Mirror 11 unknown Disabled Invalid 1101:2101 Suspended Freeze Metro Mirror 11 unknown Disabled Invalid

Note: The freezepprc command is an LSS level command. This means that all Remote Mirror and Copy pairs, Metro Mirror and Global Copy, in the particular LSS will be affected by this command. This command also removes the logical paths between the LSS pair.

4. Give a failoverpprc command to the DS HMC connected to DS8000#2. You must issue the failoverpprc command according to the roles the volumes will have after the command is completed. In our example, you must specify the B volumes as the source volumes. After the failoverpprc command is successfully executed, the B volumes become the new source volumes in suspended state; see Example 16-24. The state of the A volumes is preserved.

Note: In the case of an unplanned outage, before you issue the failoverpprc command, you might consider disconnecting the physical links between the production and the recovery sites. This ensures that no unexpected data transfer to the recovery site will occur at all.
Example 16-24 failoverpprc command
dscli> lspprc 2000-2001 2100-2101
Date/Time: October 27, 2005 8:04:59 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status
=========================================================================================================
1000:2000 Target Full Duplex Metro Mirror 10 unknown Disabled Invalid
1001:2001 Target Full Duplex Metro Mirror 10 unknown Disabled Invalid
1100:2100 Target Full Duplex Metro Mirror 11 unknown Disabled Invalid
1101:2101 Target Full Duplex Metro Mirror 11 unknown Disabled Invalid
dscli>
dscli> failoverpprc -remotedev IBM.2107-7520781 -type mmir 2000-2001:1000-1001 2100-2101:1100-1101
Date/Time: October 27, 2005 8:05:42 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2000:1000 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2001:1001 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2100:1100 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2101:1101 successfully reversed.
dscli>
dscli> lspprc 2000-2001 2100-2101
Date/Time: October 27, 2005 8:05:52 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status
=====================================================================================================
2000:1000 Suspended Host Source Metro Mirror 20 unknown Disabled Invalid
2001:1001 Suspended Host Source Metro Mirror 20 unknown Disabled Invalid
2100:1100 Suspended Host Source Metro Mirror 21 unknown Disabled Invalid
2101:1101 Suspended Host Source Metro Mirror 21 unknown Disabled Invalid

Note: The lspprc command shows Target in the State column only for a target volume. In the case of a source volume, there is no such indication.

5. Create paths in the direction recovery site to production site (B to A); see Example 16-25. You have to give the mkpprcpath command to the DS HMC connected to DS8000#2. Although it is not strictly necessary to reverse the paths, we recommend that you do it so that you have a well-defined situation at the end of the procedure. Additionally, you will need the paths to transfer the updates back to the production site.
Example 16-25 Create paths from B to A dscli> lsavailpprcport -l -remotedev IBM.2107-7520781 -remotewwnn 5005076303FFC1A5 20:10
Date/Time: October 27, 2005 8:37:57 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1

Local Port Attached Port Type Switch ID Switch Port =================================================== I0010 I0143 FCP NA NA I0140 I0213 FCP NA NA dscli> dscli> lsavailpprcport -l -remotedev IBM.2107-7520781 -remotewwnn 5005076303FFC1A5 21:11
Date/Time: October 27, 2005 8:38:03 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1

Local Port Attached Port Type Switch ID Switch Port =================================================== I0010 I0143 FCP NA NA I0140 I0213 FCP NA NA dscli> dscli> mkpprcpath -remotedev IBM.2107-7520781 -remotewwnn 5005076303FFC1A5 -srclss 20 -tgtlss 10 i0010:i0143 i0140:i0213
Date/Time: October 27, 2005 8:39:26 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1

CMUC00149I mkpprcpath: Remote Mirror and Copy path 20:10 successfully established. dscli> dscli> mkpprcpath -remotedev IBM.2107-7520781 -remotewwnn 5005076303FFC1A5 -srclss 21 -tgtlss 11 i0010:i0143 i0140:i0213
Date/Time: October 27, 2005 8:39:38 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1

CMUC00149I mkpprcpath: Remote Mirror and Copy path 21:11 successfully established. dscli> dscli> lspprcpath 20-21
Date/Time: October 27, 2005 8:39:49 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1

Src Tgt State   SS   Port  Attached Port Tgt WWNN
=========================================================
20  10  Success FF10 I0010 I0143         5005076303FFC1A5
20  10  Success FF10 I0140 I0213         5005076303FFC1A5
21  11  Success FF11 I0010 I0143         5005076303FFC1A5
21  11  Success FF11 I0140 I0213         5005076303FFC1A5
dscli>

6. Depending on your operating system, it might be necessary to rescan the Fibre Channel devices (to remove the device objects for the recovery site volumes and recognize the new sources) and mount the new source volumes (B volumes); a host-side sketch follows this step. Start all applications at the recovery site (B). Now that the applications have started, Metro Mirror starts keeping track of the updated data on the new source volumes at B.
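As an illustration only, this is one possible host-side sequence for an AIX server at the recovery site. It assumes that the B volumes are grouped in a volume group named mmvg with a file system mounted at /mmdata (both names are hypothetical); other operating systems and volume managers require their own equivalents:

# cfgmgr                 (rescan the Fibre Channel devices)
# varyonvg mmvg          (activate the volume group on the B volumes)
# mount /mmdata          (mount the file system, then start the applications)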

Summary of the failover procedure


Briefly, this is the procedure we followed to switch to the recovery site:
1. Stop applications at the production site.
2. Verify the Metro Mirror volume pairs.
3. freezepprc and unfreezepprc (optional).
4. failoverpprc B to A (to DS8000#2).
5. mkpprcpath B to A (to DS8000#2).
6. Start applications at the recovery site.
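For reference, the DS CLI part of this procedure condenses to the following commands, taken from Examples 16-23, 16-24, and 16-25 for our configuration. The optional freeze and unfreeze are issued to the DS HMC of DS8000#1; the failover and path creation are issued to the DS HMC of DS8000#2; quiescing and restarting the applications are host-side actions and are not shown:

dscli> freezepprc -remotedev IBM.2107-75ABTV1 10:20 11:21
dscli> unfreezepprc -remotedev IBM.2107-75ABTV1 10:20 11:21
dscli> failoverpprc -remotedev IBM.2107-7520781 -type mmir 2000-2001:1000-1001 2100-2101:1100-1101
dscli> mkpprcpath -remotedev IBM.2107-7520781 -remotewwnn 5005076303FFC1A5 -srclss 20 -tgtlss 10 i0010:i0143 i0140:i0213
dscli> mkpprcpath -remotedev IBM.2107-7520781 -remotewwnn 5005076303FFC1A5 -srclss 21 -tgtlss 11 i0010:i0143 i0140:i0213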

16.8.2 Metro Mirror Failback function


Once the production site has been restored, you must move your applications back. At this moment we assume that:
- Applications are updating the source volumes (B volumes) at the recovery site.
- During operation at the recovery site, data has not been replicated from B to A.
- Both DS8000s (source and target) are functional and reachable.
- Paths are already established from the recovery to the production site.
- Volumes at the recovery site are in the suspended state (source).
- Volumes at the production site are also in the suspended state (source), if you executed the optional step 3 on page 216 in the previous procedure.

A switchback using the Metro Mirror Failback function involves the following steps:
1. Using the lspprcpath command, verify that the paths from the recovery site to the production site are available. Give this command to the DS HMC connected to DS8000#2; see Example 16-26. You can check whether you have paths between the correct DS8000s with the -fullid parameter.
Example 16-26 Verify paths between recovery and production sites dscli> lspprcpath -fullid 20-21
Date/Time: October 28, 2005 11:38:37 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
Src Tgt State SS Port Attached Port Tgt WWNN =================================================================================================================== IBM.2107-75ABTV1/20 IBM.2107-7520781/10 Success FF10 IBM.2107-75ABTV1/I0010 IBM.2107-7520781/I0143 5005076303FFC1A5 IBM.2107-75ABTV1/20 IBM.2107-7520781/10 Success FF10 IBM.2107-75ABTV1/I0140 IBM.2107-7520781/I0213 5005076303FFC1A5 IBM.2107-75ABTV1/21 IBM.2107-7520781/11 Success FF11 IBM.2107-75ABTV1/I0010 IBM.2107-7520781/I0143 5005076303FFC1A5 IBM.2107-75ABTV1/21 IBM.2107-7520781/11 Success FF11 IBM.2107-75ABTV1/I0140 IBM.2107-7520781/I0213 5005076303FFC1A5

2. If you did not reverse the paths before (in step 5 on page 216 of the previous procedure), you must now establish paths from the recovery site to the production site before running the failbackpprc command.

3. Run the failbackpprc command from the recovery site to the production site. You have to give this command to the DS HMC connected to DS8000#2. The failbackpprc command will copy all the modified tracks from the B volumes to the A volumes; see Example 16-27. You must issue the failbackpprc command according to the roles the volumes will have after the command is completed. In our example, you must specify the B volumes as the source volumes and the A volumes as the target volumes. After the failbackpprc command is successfully executed, the B volumes become source volumes in copy pending state, and the A volumes become target volumes in target copy pending state.

Note: When issuing the failbackpprc command, if you specify the A volume as the source and the B volume as the target, and you give the command to the DS HMC connected to DS8000#1, then data will be copied from A to B.
Example 16-27 failbackpprc command

<< DS8000#2 >>


dscli> lspprcpath 20-21
Date/Time: October 28, 2005 12:22:21 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 Src Tgt State SS Port Attached Port Tgt WWNN ========================================================= 20 10 Success FF10 I0010 I0143 5005076303FFC1A5 20 10 Success FF10 I0140 I0213 5005076303FFC1A5 21 11 Success FF11 I0010 I0143 5005076303FFC1A5 21 11 Success FF11 I0140 I0213 5005076303FFC1A5 dscli>

dscli> lspprc 2000-2001 2100-2101


Date/Time: October 28, 2005 12:22:25 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ===================================================================================================== 2000:1000 Suspended Host Source Metro Mirror 20 unknown Disabled Invalid 2001:1001 Suspended Host Source Metro Mirror 20 unknown Disabled Invalid 2100:1100 Suspended Host Source Metro Mirror 21 unknown Disabled Invalid 2101:1101 Suspended Host Source Metro Mirror 21 unknown Disabled Invalid dscli> dscli> failbackpprc -remotedev IBM.2107-7520781 -type mmir 2000-2001:1000-1001 2100-2101:1100-1101 Date/Time: October 28, 2005 12:23:01 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 CMUC00197I failbackpprc: Remote Mirror and Copy pair 2000:1000 successfully failed back. CMUC00197I failbackpprc: Remote Mirror and Copy pair 2001:1001 successfully failed back. CMUC00197I failbackpprc: Remote Mirror and Copy pair 2100:1100 successfully failed back. CMUC00197I failbackpprc: Remote Mirror and Copy pair 2101:1101 successfully failed back. dscli>

dscli> lspprc 2000-2001 2100-2101


Date/Time: October 28, 2005 12:23:05 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status =================================================================================================== 2000:1000 Copy Pending Metro Mirror 20 unknown Disabled Invalid 2001:1001 Copy Pending Metro Mirror 20 unknown Disabled Invalid 2100:1100 Copy Pending Metro Mirror 21 unknown Disabled Invalid 2101:1101 Copy Pending Metro Mirror 21 unknown Disabled Invalid

<< DS8000#1 >>


dscli> lspprc 1000-1001 1100-1101
Date/Time: October 28, 2005 12:23:09 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781

ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ========================================================================================================== 2000:1000 Target Copy Pending Metro Mirror 20 unknown Disabled Invalid 2001:1001 Target Copy Pending Metro Mirror 20 unknown Disabled Invalid 2100:1100 Target Copy Pending Metro Mirror 21 unknown Disabled Invalid 2101:1101 Target Copy Pending Metro Mirror 21 unknown Disabled Invalid


Failback initialization characteristics


The failbackpprc initialization mode resynchronizes the volumes as follows:
- If a volume at the production site is in the simplex state, all of the data for that volume is copied back from the recovery site to the production site.
- If a volume at the production site is in the full duplex or suspended state and has no changed tracks, only the data modified on the volume at the recovery site is copied back to the volume at the production site.
- If a volume at the production site is in a suspended state and has tracks on which data has been written, then both the tracks changed at the production site and the tracks marked at the recovery site are copied back.
Finally, the volume at the production site becomes a write-inhibited target volume. This action is performed on an individual volume basis.

Notes on the failbackpprc command


If the server at the production site is still online and accessing the disk, or a crash happened such that a SCSI persistent reserve is still set on the source disk (A), the failbackpprc command fails; see Example 16-28 on page 218. In this case, the server at the production site locks the target with a SCSI persistent reserve. This situation can be reset with the varyoffvg command (in this case on AIX), after which the failbackpprc command completes successfully.

There is also a -resetreserve parameter for the failbackpprc command. This parameter resets the reserved status so that the operation can complete. After a real disaster, you can use this parameter because the server might have gone down while the SCSI persistent reserve was set on the A volume. After a planned site switch, you must not use this parameter, because the server at the production site still owns the A volume, and might be using it, while the failback operation suddenly changes the contents of the volume. This could cause file system corruption on that server.
Example 16-28 failbackpprc command fails when A volumes are online
dscli> failbackpprc -remotedev IBM.2107-7520781 -type mmir 2000-2001:1000-1001 2100-2101:1100-1101 Date/Time: October 28, 2005 8:52:07 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 CMUN03103E failbackpprc: 2000:1000: Copy Services operation failure: The volume is in a long busy state, not yet configured, not yet formatted, or the source and target volumes are of different types. CMUN03103E failbackpprc: 2001:1001: Copy Services operation failure: The volume is in a long busy state, not yet configured, not yet formatted, or the source and target volumes are of different types. CMUN03103E failbackpprc: 2100:1100: Copy Services operation failure: The volume is in a long busy state, not yet configured, not yet formatted, or the source and target volumes are of different types. CMUN03103E failbackpprc: 2101:1101: Copy Services operation failure: The volume is in a long busy state, not yet configured, not yet formatted, or the source and target volumes are of different types. dscli>

<< After performing varyoffvg A volumes from the AIX servers at the production site >>
dscli> failbackpprc -remotedev IBM.2107-7520781 -type mmir 2000-2001:1000-1001 2100-2101:1100-1101 Date/Time: October 28, 2005 8:52:36 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 CMUC00197I failbackpprc: Remote Mirror and Copy pair 2000:1000 successfully failed back. CMUC00197I failbackpprc: Remote Mirror and Copy pair 2001:1001 successfully failed back. CMUC00197I failbackpprc: Remote Mirror and Copy pair 2100:1100 successfully failed back. CMUC00197I failbackpprc: Remote Mirror and Copy pair 2101:1101 successfully failed back.

Now, we continue with the switchback procedure:
1. Wait until the Metro Mirror pairs become synchronized. When the pairs are synchronized, the state of the source is full duplex and the state of the targets is target full duplex; see Example 16-29.

Example 16-29 Confirm synchronization has been completed << DS8000#2 >> dscli> lspprc 2000-2001 2100-2101
Date/Time: October 28, 2005 12:25:42 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ================================================================================================== 2000:1000 Full Duplex Metro Mirror 20 unknown Disabled Invalid 2001:1001 Full Duplex Metro Mirror 20 unknown Disabled Invalid 2100:1100 Full Duplex Metro Mirror 21 unknown Disabled Invalid 2101:1101 Full Duplex Metro Mirror 21 unknown Disabled Invalid

<< DS8000#1 >> dscli> lspprc 1000-1001 1100-1101


Date/Time: October 28, 2005 12:25:47 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ========================================================================================================= 2000:1000 Target Full Duplex Metro Mirror 20 unknown Disabled Invalid 2001:1001 Target Full Duplex Metro Mirror 20 unknown Disabled Invalid 2100:1100 Target Full Duplex Metro Mirror 21 unknown Disabled Invalid 2101:1101 Target Full Duplex Metro Mirror 21 unknown Disabled Invalid

2. Before returning to normal operation, the applications, still updating B volumes at the recovery site, must be quiesced to cease all write I/O from updating the B volumes. Depending on the host operating system, it might be necessary to dismount the B volumes.
3. You should now execute one more failoverpprc command. At this time, you must specify the A volumes as the source volumes and the B volumes as the target volumes. You must give this command to the DS HMC connected to DS8000#1. This operation changes the state of the A volumes from target full duplex to (source) suspended. The state of the B volumes is preserved; see Example 16-30.
Example 16-30 failoverpprc to convert A volumes to source suspended << DS8000#1 >> dscli> lspprc 1000-1001 1100-1101
Date/Time: October 28, 2005 9:41:20 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ========================================================================================================= 2000:1000 Target Full Duplex Metro Mirror 20 unknown Disabled Invalid 2001:1001 Target Full Duplex Metro Mirror 20 unknown Disabled Invalid 2100:1100 Target Full Duplex Metro Mirror 21 unknown Disabled Invalid 2101:1101 Target Full Duplex Metro Mirror 21 unknown Disabled Invalid dscli> dscli> failoverpprc -remotedev IBM.2107-75ABTV1 -type mmir 1000-1001:2000-2001 1100-1101:2100-2101 Date/Time: October 28, 2005 9:41:31 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00196I failoverpprc: Remote Mirror and Copy pair 1000:2000 successfully reversed. CMUC00196I failoverpprc: Remote Mirror and Copy pair 1001:2001 successfully reversed. CMUC00196I failoverpprc: Remote Mirror and Copy pair 1100:2100 successfully reversed. CMUC00196I failoverpprc: Remote Mirror and Copy pair 1101:2101 successfully reversed. dscli>

dscli> lspprc 1000-1001 1100-1101


Date/Time: October 28, 2005 9:41:34 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ===================================================================================================== 1000:2000 Suspended Host Source Metro Mirror 10 unknown Disabled Invalid 1001:2001 Suspended Host Source Metro Mirror 10 unknown Disabled Invalid 1100:2100 Suspended Host Source Metro Mirror 11 unknown Disabled Invalid 1101:2101 Suspended Host Source Metro Mirror 11 unknown Disabled Invalid dscli>

<< DS8000#2 >> dscli> lspprc 2000-2001 2100-2101


Date/Time: October 28, 2005 9:41:43 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ================================================================================================== 2000:1000 Full Duplex Metro Mirror 20 unknown Disabled Invalid 2001:1001 Full Duplex Metro Mirror 20 unknown Disabled Invalid 2100:1100 Full Duplex Metro Mirror 21 unknown Disabled Invalid 2101:1101 Full Duplex Metro Mirror 21 unknown Disabled Invalid

4. Define paths in the direction production site to recovery site (A to B); see Example 16-31. You must create the paths if you executed the freezepprc command in the optional step 3 on page 216 of the production to recovery site switchover procedure (freezepprc removed the paths).
Example 16-31 Create Metro Mirror paths from A to B dscli> mkpprcpath -remotedev IBM.2107-75ABTV1 -remotewwnn 5005076303FFC663 -srclss 10 -tgtlss 20 i0143:i0010 i0213:i0140
Date/Time: October 28, 2005 8:30:51 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00149I mkpprcpath: Remote Mirror and Copy path 10:20 successfully established.

dscli> mkpprcpath -remotedev IBM.2107-75ABTV1 -remotewwnn 5005076303FFC663 -srclss 11 -tgtlss 21 i0143:i0010 i0213:i0140
Date/Time: October 28, 2005 8:31:02 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00149I mkpprcpath: Remote Mirror and Copy path 11:21 successfully established.

5. Then issue another failbackpprc command. At this time, you must specify the A volumes as the source volumes and the B volumes as the target volumes. You must give this command to the DS HMC connected to the DS8000#1. After you have successfully executed the failbackpprc command, the A volumes become source volumes in copy pending state and the B volumes become target volumes in target copy pending state; see Example 16-32.
Example 16-32 failbackpprc command to restart A to B Metro Mirror operation dscli> failbackpprc -remotedev IBM.2107-75ABTV1 -type mmir 1000-1001:2000-2001 1100-1101:2100-2101
Date/Time: October 28, 2005 10:12:48 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
CMUC00197I failbackpprc: Remote Mirror and Copy pair 1000:2000 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 1001:2001 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 1100:2100 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 1101:2101 successfully failed back.

6. Wait until the Metro Mirror pairs are synchronized. Normally, this operation does not take much time because no data transfer is necessary. After the Metro Mirror pairs are synchronized, the state of the sources volumes (A) becomes full duplex and the state of the target volumes (B) becomes target full duplex; see Example 16-33.
Example 16-33 After the Metro Mirror pairs have been synchronized << DS8000#1 >> dscli> lspprc 1000-1001 1100-1101
Date/Time: October 28, 2005 10:35:01 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ================================================================================================== 1000:2000 Full Duplex Metro Mirror 10 unknown Disabled Invalid 1001:2001 Full Duplex Metro Mirror 10 unknown Disabled Invalid 1100:2100 Full Duplex Metro Mirror 11 unknown Disabled Invalid 1101:2101 Full Duplex Metro Mirror 11 unknown Disabled Invalid

<< DS8000#2 >> dscli> lspprc 2000-2001 2100-2101


Date/Time: October 28, 2005 10:35:06 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status


========================================================================================================= 1000:2000 Target Full Duplex Metro Mirror 10 unknown Disabled Invalid 1001:2001 Target Full Duplex Metro Mirror 10 unknown Disabled Invalid 1100:2100 Target Full Duplex Metro Mirror 11 unknown Disabled Invalid 1101:2101 Target Full Duplex Metro Mirror 11 unknown Disabled Invalid

7. Depending on your operating system, it might be necessary to rescan the Fibre Channel devices and mount the new source volumes (A) at the production site. Start all applications at the production site and check for consistency. Now that the applications have started, all write I/Os to the source volumes (A) are tracked by Metro Mirror. You should verify the integrity of the applications.
8. Finally, if no longer needed, terminate the paths from the recovery site LSSs to the production site LSSs; see the sketch that follows.
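A minimal sketch of that last, optional step, reusing the LSS pairs from our example: the command follows the rmpprcpath syntax shown in Examples 16-15 and 16-16 and is given to the DS HMC connected to DS8000#2; the -quiet parameter simply suppresses the confirmation prompt:

dscli> rmpprcpath -remotedev IBM.2107-7520781 -quiet 20:10 21:11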

Summary of the failback procedure


Briefly, this is the procedure we followed to switch to the production site:
1. Verify paths status B to A.
2. Eventually mkpprcpath B to A.
3. failbackpprc B to A (to DS8000#2).
4. Wait for volume pairs synchronization.
5. Quiesce applications at the recovery site (B).
6. failoverpprc A to B (to DS8000#1).
7. mkpprcpath A to B.
8. failbackpprc A to B (to DS8000#1).
9. Wait for volume pairs synchronization.
10. Start applications at the production site (A).
11. Eventually rmpprcpath B to A.
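For reference, the DS CLI part of the failback condenses to the following commands, taken from Examples 16-27 and 16-30 through 16-32 for our configuration; the final rmpprcpath is optional and follows the syntax shown in Example 16-15. The first and last commands are issued to the DS HMC of DS8000#2, the others to the DS HMC of DS8000#1. Waiting for synchronization and the host-side quiesce, mount, and application restart steps are not shown:

dscli> failbackpprc -remotedev IBM.2107-7520781 -type mmir 2000-2001:1000-1001 2100-2101:1100-1101
dscli> failoverpprc -remotedev IBM.2107-75ABTV1 -type mmir 1000-1001:2000-2001 1100-1101:2100-2101
dscli> mkpprcpath -remotedev IBM.2107-75ABTV1 -remotewwnn 5005076303FFC663 -srclss 10 -tgtlss 20 i0143:i0010 i0213:i0140
dscli> mkpprcpath -remotedev IBM.2107-75ABTV1 -remotewwnn 5005076303FFC663 -srclss 11 -tgtlss 21 i0143:i0010 i0213:i0140
dscli> failbackpprc -remotedev IBM.2107-75ABTV1 -type mmir 1000-1001:2000-2001 1100-1101:2100-2101
dscli> rmpprcpath -remotedev IBM.2107-7520781 20:10 21:11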

16.9 Freezepprc and unfreezepprc


As mentioned in 14.4, Consistency Group function on page 180, you have to implement a certain procedure to keep data consistency at the recovery site. One way is to use the freezepprc and unfreezepprc commands together with the Consistency Group option. In this section, we illustrate how to set the Consistency Group option and we discuss how the freezepprc and unfreezepprc commands work. You can specify the Consistency Group option when you define the Metro Mirror paths between source and target LSSs, or you can change the default Consistency Group setting on each LSS by means of the chlss command; by default, this option is disabled. With the chlss command, you can also change the default Consistency Group timeout value by means of the -extlongbusy parameter. Example 16-34 shows how to enable the Consistency Group option with the chlss command. The setting of the option can be verified with the showlss command.
Example 16-34 Change the default Consistency Group setting with the chlss command
dscli> showlss 10
Date/Time: October 29, 2005 12:16:20 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
ID             10
Group          0
addrgrp        1
stgtype        fb
confgvols      4
subsys         0xFF10
pprcconsistgrp Disabled
xtndlbztimout  120 secs
dscli> showlss 11
Date/Time: October 29, 2005 12:16:24 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
ID             11
Group          1
addrgrp        1
stgtype        fb
confgvols      4
subsys         0xFF11
pprcconsistgrp Disabled
xtndlbztimout  120 secs
dscli>
dscli> chlss -pprcconsistgrp enable 10-11
Date/Time: October 29, 2005 12:16:59 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
CMUC00029I chlss: LSS 11 successfully modified.
CMUC00029I chlss: LSS 10 successfully modified.
dscli>
dscli> showlss 10
Date/Time: October 29, 2005 12:17:06 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
ID             10
Group          0
addrgrp        1
stgtype        fb
confgvols      4
subsys         0xFF10
pprcconsistgrp Enabled
xtndlbztimout  120 secs
dscli>
dscli> showlss 11
Date/Time: October 29, 2005 12:17:10 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
ID             11
Group          1
addrgrp        1
stgtype        fb
confgvols      4
subsys         0xFF11
pprcconsistgrp Enabled
xtndlbztimout  120 secs

When the DS8000 detects a condition where it cannot update a Metro Mirror target volume, it reports this condition with an SNMP alert. At that moment, if an automation procedure is in place, the SNMP alert will trigger the automation procedure, which then executes a freezepprc command such as the one shown in Example 16-35.
Example 16-35 The results of the freezepprc command
dscli> freezepprc -remotedev IBM.2107-75ABTV1 10:20 11:21
Date/Time: October 29, 2005 12:30:08 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
CMUC00161W freezepprc: Remote Mirror and Copy consistency group 10:20 successfully created.
CMUC00161W freezepprc: Remote Mirror and Copy consistency group 11:21 successfully created.
dscli> lspprc 1000-1001 1100-1101
Date/Time: October 29, 2005 12:30:14 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
ID        State     Reason Type         SourceLSS Timeout (secs) Critical Mode First Pass Status
================================================================================================
1000:2000 Suspended Freeze Metro Mirror 10        unknown        Disabled      Invalid
1001:2001 Suspended Freeze Metro Mirror 10        unknown        Disabled      Invalid
1100:2100 Suspended Freeze Metro Mirror 11        unknown        Disabled      Invalid
1101:2101 Suspended Freeze Metro Mirror 11        unknown        Disabled      Invalid
dscli> lspprcpath 10-11
Date/Time: October 29, 2005 12:30:45 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
CMUC00234I lspprcpath: No Remote Mirror and Copy Path found.

With the freezepprc command, the DS8000 holds the I/O activity to the addressed LSSs by putting the source volumes in a queue full state for a time period. Example 16-36 shows, for an AIX environment, what the iostat command reports during this time interval.
Example 16-36 AIX iostat command output report during the queue full condition
# lsvpcfg
vpath8 (Avail pv ) 75207811000 = hdisk18 (Avail ) hdisk22 (Avail )
vpath9 (Avail pv ) 75207811001 = hdisk19 (Avail ) hdisk23 (Avail )
vpath10 (Avail pv ) 75207811100 = hdisk20 (Avail ) hdisk24 (Avail )
vpath11 (Avail pv ) 75207811101 = hdisk21 (Avail ) hdisk25 (Avail )
# iostat -d vpath8 vpath9 vpath10 vpath11 1
Disks:   % tm_act  Kbps  tps  Kb_read  Kb_wrtn
hdisk18      0.0    0.0  0.0        0        0
hdisk19    100.0    0.0  0.0        0        0
hdisk21      0.0    0.0  0.0        0        0
hdisk20      0.0    0.0  0.0        0        0
hdisk25      0.0    0.0  0.0        0        0
hdisk22    100.0    0.0  0.0        0        0
hdisk23      0.0    0.0  0.0        0        0
hdisk24      0.0    0.0  0.0        0        0

Note: In addition to holding (freezing) the I/O activity, the freezepprc command also removes the paths between the affected LSSs.

After the freezepprc for all related LSSs completes, you have consistent data at the recovery site. Therefore, at this moment the automation procedure can execute the unfreezepprc command to release (thaw) the I/O that was on hold (frozen) on the affected LSSs. Example 16-37 shows the unfreezepprc command that thaws the frozen I/O queue on the LSS pairs 10:20 and 11:21.
Example 16-37 unfreezepprc command
dscli> unfreezepprc -remotedev IBM.2107-75ABTV1 10:20 11:21
Date/Time: October 29, 2005 12:30:53 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
CMUC00198I unfreezepprc: Remote Mirror and Copy pair 10:20 successfully thawed.
CMUC00198I unfreezepprc: Remote Mirror and Copy pair 11:21 successfully thawed.

In a situation where the data could not be replicated because of a link failure, that is, the production site kept running, Metro Mirror processing can resume after the links are recovered. However, if the automation was triggered and freezepprc was executed, the Metro Mirror paths have to be defined again, because the freezepprc command removes the paths between the affected LSSs. After the paths are re-established, you can execute a resumepprc command to re-synchronize the Metro Mirror pairs. Example 16-38 shows this scenario.


Example 16-38 Resume the Metro Mirror environment
dscli> mkpprcpath -remotedev IBM.2107-75ABTV1 -remotewwnn 5005076303FFC663 -srclss 10 -tgtlss 20 i0143:i0010 i0213:i0140
Date/Time: October 29, 2005 12:39:34 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
CMUC00149I mkpprcpath: Remote Mirror and Copy path 10:20 successfully established.
dscli> mkpprcpath -remotedev IBM.2107-75ABTV1 -remotewwnn 5005076303FFC663 -srclss 11 -tgtlss 21 i0143:i0010 i0213:i0140
Date/Time: October 29, 2005 12:39:40 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
CMUC00149I mkpprcpath: Remote Mirror and Copy path 11:21 successfully established.
dscli> resumepprc -remotedev IBM.2107-75ABTV1 -type mmir 1000-1001:2000-2001 1100-1101:2100-2101
Date/Time: October 29, 2005 12:40:03 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
CMUC00158I resumepprc: Remote Mirror and Copy volume pair 1000:2000 relationship successfully resumed. This message is being returned before the copy completes.
CMUC00158I resumepprc: Remote Mirror and Copy volume pair 1001:2001 relationship successfully resumed. This message is being returned before the copy completes.
CMUC00158I resumepprc: Remote Mirror and Copy volume pair 1100:2100 relationship successfully resumed. This message is being returned before the copy completes.
CMUC00158I resumepprc: Remote Mirror and Copy volume pair 1101:2101 relationship successfully resumed. This message is being returned before the copy completes.
dscli> lspprc 1000-1001 1100-1101
Date/Time: October 29, 2005 12:40:11 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
ID        State       Reason Type         SourceLSS Timeout (secs) Critical Mode First Pass Status
==================================================================================================
1000:2000 Full Duplex -      Metro Mirror 10        unknown        Disabled      Invalid
1001:2001 Full Duplex -      Metro Mirror 10        unknown        Disabled      Invalid
1100:2100 Full Duplex -      Metro Mirror 11        unknown        Disabled      Invalid
1101:2101 Full Duplex -      Metro Mirror 11        unknown        Disabled      Invalid

Note: While doing the re-synchronization, the volume pairs will be in a copy pending state. During this time period there is no data consistency on the target volumes. Therefore, you might want to take specific action to keep consistent data at the recovery site while resuming the Metro Mirror pairs. Taking a FlashCopy at the recovery site first is one way to accomplish this.

16.10 DS Storage Manager GUI examples


In this section we use the DS Storage Manager graphical user interface (DS GUI) to manage Metro Mirror paths and pairs.

16.10.1 Creating paths


This example shows you how to create Metro Mirror paths. From the DS Storage Manager GUI, you follow these steps:
1. Select Real-time manager.
2. Select Copy services.
3. Click Paths.
4. Select the source Storage Complex, Storage Unit, Storage Image, and LSS.

You will see the Paths panel; see Figure 16-5. This panel shows that for the selected LSS 10, there are no paths defined at the moment.


Figure 16-5 Create path - Paths panel

In the Paths panel, from the Select Action pull-down, select Create to proceed with the first step of the creation wizard. The creation wizard then displays the Select source LSS panel. Here you select the source LSS; see Figure 16-6.

Figure 16-6 Create path - Step 1: select source LSS


Click Next to proceed with the second step of the wizard. When the creation wizard displays the Select target LSS panel, you select from the pull-down lists the storage complex, then the storage unit, then the storage image, and finally the target LSS; see Figure 16-7.

Figure 16-7 Create path - Step 2: select target LSS

Click Next to proceed with the third step of the wizard. When the creation wizard displays the Select source I/O ports panel, use the check boxes to select at least one I/O port (two is better) to use for Metro Mirror replication; see Figure 16-8 on page 227. In the Location column, four digits indicate the location of a port:
- The first digit (R) identifies the frame.
- The second digit (E) identifies the I/O enclosure.
- The third digit (C) identifies the adapter.
- The fourth digit (P) identifies the adapter's port.


Figure 16-8 Create path - Step 3: select source I/O ports

Click Next to proceed with the fourth step of the wizard. When the creation wizard displays the Select target I/O ports panel, for each I/O port selected during the third step, select the target I/O port from its related pull-down list; see Figure 16-9.

Figure 16-9 Create path - Step 4: select target I/O ports

Click Next to proceed with the fifth step of this wizard.


When the creation wizard displays the Select path options panel, you can use the check box to select the option Define as Consistency Group; see Figure 16-10. See 14.4, Consistency Group function on page 180, for a detailed discussion of this option.

Figure 16-10 Create path - Step 5: select path options

Click Next to proceed with the sixth and last step of this wizard. Figure 16-11 shows the Verification panel. Here you check all the components of your path configuration and, if necessary, click Back to correct any of them or click Finish to validate the configuration and end the wizard.

Figure 16-11 Create path - Step 6: verification


After you finish, you are presented with the Paths panel again, and now you see the paths that you have just created; see Figure 16-12.

Figure 16-12 Create path - result

16.10.2 Adding paths


There are situations where you might want to add paths to the existing ones for an LSS pair. For example:
- To add redundant paths.
- To increase the available bandwidth.
- To change the ports you are using for Metro Mirror.
With the GUI you must first add the new paths before you delete the old paths. We now consider an example of how to add a path to an LSS pair that already has a path defined. From the DS Storage Manager GUI, you follow these steps:
1. Select Real-time manager.
2. Select Copy services.
3. Click Paths.
4. Select the source Storage Complex, Storage Unit, Storage Image, and LSS.


You will see the Paths panel; see Figure 16-13. This panel shows one existing path for the LSS pair 11:10.

Figure 16-13 Add paths - Paths panel

In the Paths panel, from the Select Action pull-down, select Create to proceed with the first step of the creation wizard. The creation wizard then displays the Select source LSS panel. Here we select the source LSS 11; see Figure 16-14.

Figure 16-14 Add paths - Step 1: select source LSS

Click Next to proceed with the second step of the wizard.


When the creation wizard displays the Select target LSS panel, you select from the pull-down lists the storage complex, then the storage unit, then the storage image, and finally the target LSS 10; see Figure 16-15.

Figure 16-15 Add paths - Step 2: select target LSS

Click Next to proceed with the third step of the wizard. When the creation wizard displays the Select source I/O ports panel, select using the check boxes the port on the source for the additional path that you are defining; see Figure 16-16.

Figure 16-16 Add paths - Step 3: select source I/O ports

Click Next to proceed with the fourth step of the wizard.


When the creation wizard displays the Select target I/O ports panel, you select the target I/O port from the pull-down list; see Figure 16-17.

Figure 16-17 Add paths - Step 4: select target I/O port

Click Next to proceed with the fifth step of this wizard. When the creation wizard displays the Select path options panel, you can use the check box to select the option Define as Consistency Group; see Figure 16-18. See 14.4, Consistency Group function on page 180, for a detailed discussion of this option.

Figure 16-18 Add paths - Step 5: select path options

Click Next to proceed with the sixth and last step of this wizard.


Figure 16-19 shows the Verification panel. Here you check all the components of your paths configuration and, if necessary, click Back to correct any of them or click Finish to validate the configuration and end the wizard.

Figure 16-19 Add paths - Step 6: verification

After you finish, you are presented with the Paths panel again, and now you see the newly added path; see Figure 16-20.

Figure 16-20 Add paths - Result

Note: In our example, the defined paths are in one direction. If you want to mirror from the target back to the source you will have to establish paths in the other direction.

16.10.3 Changing options


You can change options of the existing paths. For this, from the DS Storage Manager GUI, you follow these steps:
1. Select Real-time manager.
2. Select Copy services.
3. Click Paths.
4. Select the source Storage Complex, Storage Unit, Storage Image, and LSS.
You will see the Paths panel; see Figure 16-21. This panel shows a list of the existing paths for the selected LSS 11. In this panel we check the box for the path we want to work with, and from the Select Action pull-down list we select LSS copy options.

Figure 16-21 Path panel - LSS copy options

The LSS Copy Options panel is then displayed; see Figure 16-22. Here you can change the options of the defined path: various timeout values, as well as the critical mode attribute. For a discussion of these options, see Chapter 14, Metro Mirror options and configuration on page 177.

Figure 16-22 LSS copy options panel

When finished click OK.

16.10.4 Deleting paths


In this example we show you how to remove a path. From the DS Storage Manager GUI, you follow these steps:
1. Select Real-time manager.
2. Select Copy services.
3. Click Paths.
4. Select the source Storage Complex, Storage Unit, Storage Image, and LSS.

You will see the Paths panel. This panel shows the list of established paths for the selected LSS 11 of our example. Here you check the box to select the paths you want to remove; see Figure 16-23. Then from the Select Action pull-down list you select Delete.

Figure 16-23 Delete paths - Paths panel

A warning pops up; see Figure 16-24.

Figure 16-24 Delete paths warning

Note: Deleting paths reduces bandwidth. For a discussion on this topic, see Chapter 15, Metro Mirror performance and scalability on page 193.


After you click Continue, the deletion is completed. You are then presented with the Paths panel again, where you can see that the path is no longer there; see Figure 16-25. If the path still appears in the list, click Refresh to renew the screen.

Figure 16-25 Delete paths - Result

16.10.5 Creating volume pairs


In this example we are going to see how the Metro Mirror volume pairs are created. Here are some things to remember when you create the volume pairs:
- All volumes, on the source and also on the target, must exist before you start to create the Metro Mirror pairs.
- Source and target volumes should be of the same size. The target volume can be bigger than the source volume, but if this is the case, you cannot reverse the direction of the pair. So different sizes of source and target volumes make sense only for migration purposes.
- Usually, before you start creating the Metro Mirror pairs, you have to define the paths between the source and target LSS, as explained in Creating paths on page 224.
To start creating volume pairs, from the DS Storage Manager GUI you follow these steps:
1. Select Real-time manager.
2. Select Copy services.
3. Click Metro Mirror / Global Copy.


4. The Metro Mirror panel is displayed; see Figure 16-26. Here you select the source Storage Complex, Storage Unit, Storage Image, and Resource Type. Possible Resource Type options are:
   - Lss, from which you select the LSS number
   - Host attachment, from which you select the host attachment name
   - Volume Group, from which you select the Volume Group name
   - Show All Volumes, from which you select All FB volumes or All CKD volumes

Figure 16-26 Create MM pair - Metro Mirror panel

In the Metro Mirror panel, from the Select Action pull-down menu you select Create to proceed with the first step of the creation wizard. The Create Metro Mirror panel is displayed; see Figure 16-27. Here you select the volume pairing method. If you select Automated volume pair assignment, the system later automatically selects target volumes of the same size as the source volumes. If you select Manual volume pair assignment, you have to manually assign a target volume to each source volume, one after the other. In our example we selected Automated volume pair assignment. In addition, you can specify whether the source or target volumes are allowed to be Space Efficient volumes. Space Efficient volumes are optimized for use cases where less than 20% of the virtual capacity is updated during their lifetime, so Space Efficient volumes are not very useful as a Metro Mirror source or target.

Note: In the current implementation, Space Efficient volumes are only supported as FlashCopy target volumes.


Figure 16-27 Create MM pair - Step 1: volume pairing method

Click Next to proceed with the second step of the creation wizard. The Select source volumes panel is displayed; see Figure 16-28. Here you can check the boxes to select the source volumes of the Metro Mirror pairs. If you still have not established paths between the source and target LSSs, you can do this now by clicking the Create Paths button in this panel and proceeding as explained in 16.10.1, Creating paths on page 224.

Figure 16-28 Create MM pair - Step 2: select source volumes

Click Next to proceed with the third step of the creation wizard. The Select target volumes (Auto pairing) panel is displayed; see Figure 16-29. Here you can select all target volumes from the available ones on the target LSS.


Figure 16-29 Create MM pair - Step 3: select target volumes (Auto pairing)

Click Next to proceed with the fourth step of the creation wizard. The Select copy options panel is displayed; see Figure 16-30. Here you select the copy options; in our example we select Metro Mirror. For a discussion of the options presented in this panel, see Chapter 14, Metro Mirror options and configuration on page 177.

Figure 16-30 Create MM pair - Step 4: select copy options

Click Next to proceed with the fifth step of the creation wizard. The Verification panel is displayed; see Figure 16-31 on page 240. Here, you can verify the Metro Mirror volume pairs just created. Click Finish to execute this configuration.


Figure 16-31 Create MM pair - Step 5: verification

The Metro Mirror panel will be presented again; see Figure 16-32. This time it will show the list of volume pairs just created. You can see that the state of the volumes is Copy pending. This indicates that the initial copy from the source to the target volumes is still in progress. Click Refresh to see if the status has changed. After a period of time, depending on the size and number of volumes, all volumes will be synchronized (in-sync). At that moment the State column of the Metro Mirror panel will show Full duplex for the volumes, as illustrated in Figure 16-32.

Figure 16-32 Create MM pair - Volume pairs created and in full duplex state

Note: With very high workloads, a high number of established Metro Mirror volumes, or sharing of ports between Metro Mirror and host attachments, you might see higher response times for your applications during this synchronization time.


16.10.6 Suspending volume pairs


To get access to the target volumes or to do maintenance on the target disk subsystem, you might need to suspend the Metro Mirror pairs. While the volumes are suspended, the updates to the source volumes are logged, so when you later resume the mirroring process, only the differences are replicated between the volumes; this is the re-synchronization process. To suspend volume pairs, from the DS Storage Manager GUI, you follow these steps:
1. Select Real-time manager.
2. Select Copy services.
3. Click Metro Mirror / Global Copy.
4. The Metro Mirror panel is displayed; see Figure 16-33. Here you select the source Storage Complex, Storage Unit, Storage Image, and Resource Type. Possible Resource Type options are:
   - Lss, from which you select the LSS number
   - Host attachment, from which you select the host attachment name
   - Volume Group, from which you select the Volume Group name
   - Show All Volumes, from which you select All FB volumes or All CKD volumes

5. Check the boxes for the selected volume pairs you want to suspend.
6. Then, from the Select Action pull-down menu select Suspend.

Figure 16-33 Suspend - Metro Mirror panel

7. The Suspend Metro Mirror panel is displayed; see Figure 16-34. Here you are prompted to suspend the pair at either the source or the target. It is more usual to suspend at the source, as shown in our example. Then click OK.

Figure 16-34 Suspend - choose: Suspend at source, or Suspend target


8. Next the Metro Mirror panel is displayed, now showing the selected pairs in suspended mode; see Figure 16-35.

Figure 16-35 Suspend - Result

16.10.7 Resuming volume pairs


For the suspended volumes to reach the full duplex state again, you need to resume the Metro Mirror volume pairs. Before starting, verify that there are paths available between the corresponding LSSs. To resume volume pairs, from the DS Storage Manager GUI, you follow these steps:
1. Select Real-time manager.
2. Select Copy services.
3. Click Metro Mirror / Global Copy.
4. The Metro Mirror panel is displayed; see Figure 16-36 on page 243. Here you select the source Storage Complex, Storage Unit, Storage Image, and Resource Type. Possible Resource Type options are:
   - Lss, from which you select the LSS number
   - Host attachment, from which you select the host attachment name
   - Volume Group, from which you select the Volume Group name
   - Show All Volumes, from which you select All FB volumes or All CKD volumes


5. Check the boxes for the selected volume pairs you want to resume. In Figure 16-36 you can see their current state is suspended.
6. Then, from the Select Action pull-down menu select Resume.

Figure 16-36 Resume - Metro Mirror panel

7. The Resume Metro Mirror panel is displayed; see Figure 16-37. Here you can select the following options: permit read access from target, reset reservation, and suspend Metro Mirror relationship after initial copy. Click OK to proceed.

Figure 16-37 Resume - Verify

8. Next the Metro Mirror panel is displayed, now showing the selected volumes in full duplex mode after the re-synchronization process is completed; see Figure 16-38. You might have to refresh the screen to see when the volumes reach the full duplex state.


Figure 16-38 Resume - Result

16.10.8 Metro Mirror Failover


In the following example, we have two disk subsystems, a DS8000 (machine type 2107) and a DS6000 (machine type 1750). Each disk subsystem has two volumes. The DS8000 is the production unit. The DS6000 is the backup unit. See Table 16-2.
Table 16-2 Failover Failback example devices
Machine serial   Location             Normal state   Volume 1   Volume 2
2105-7503461     Production site (A)  Source         1405       1406
1750-1300247     Backup site (B)      Target         1805       1806

Now we will use these two machines to describe a site switch scenario.

Site switch: Production site to recovery site


The Metro Mirror Failover function changes the target volume (B) to a suspended source volume, while leaving the original source volume (A) in its current state. This command succeeds even if the paths are down and the volume (A) at the production site is unavailable or nonexistent. You can then access the former target volume (B). The assumption here is that you are now running production on what was the original target disk subsystem (B). This means that changes to the volumes on the recovery site disk subsystem will later need to be copied back to the production site disk subsystem (A).

Note: If you try to access this volume on the same server where the previous source was or still is, some operating systems have problems with disks that have the same serial number or signature. In most cases, there is a procedure to handle this; see Appendix A, Open systems specifics on page 729.

In this example, we assume that the production site has failed. We decide to start production processing using the backup servers at the recovery site. We need to start using the Metro Mirror target volumes (B). These volumes must become source volumes, since we will be writing to them, and these changes must eventually be mirrored back to the production site. The procedure we follow is:
1. Select Real-time manager.
2. Select Copy services.


3. Click Metro Mirror / Global Copy.
4. Select the target Storage complex, Storage unit, Storage image, and Resource Type. Possible Resource Type options are:
   - LSS, from which you select the LSS number
   - Host attachment, from which you select the host attachment name
   - Volume Group, from which you select the Volume Group name
   - Show All Volumes, from which you select All FB volumes or All CKD volumes

5. Select the volume pairs that you want to fail over.
6. Now select Recovery Failover from the Select Action pull-down menu, as shown in Figure 16-39. Take note that in this example we display the target machine (serial 13-00247) and select the target volumes. We do not work with the source machine.

Figure 16-39 Failover - Metro Mirror panel

7. We now verify the volumes with which we are working. When ready, click OK to proceed, as shown in Figure 16-40.

Figure 16-40 Failover verify

8. The failover operation now takes place. We can see the result in Figure 16-41. Note that Figure 16-41 shows the status of the B volumes, which are suspended. The A volumes will still be in full duplex. This is because the Metro Mirror Failover function does not care about the state of the former source volumes (A). We are now able to start the servers at the backup site, and they can access the B volumes on the backup site DS6000.


Figure 16-41 Result of a failover

16.10.9 Metro Mirror Failback


We run our production using servers at the recovery site, which access our backup disk subsystem (B). We now get ready to switch processing back to the production site (A). The first thing to do is mirror the changes back from the backup site (B) to the production site (A). We do this using the Metro Mirror Failback function.

Site switch: Recovery site to production site


Metro Mirror Failback synchronizes the volumes at the production site (A) with the volumes at the recovery site (B). The (B) volumes were updated while production processing was temporarily run at the recovery site. The steps are:
1. Select Real-time manager.
2. Select Copy services.
3. Click Metro Mirror / Global Copy.
4. Select the source Storage complex, Storage unit, Storage image, and Resource Type. Possible Resource Type options are:
   - LSS, from which you select the LSS number
   - Host attachment, from which you select the host attachment name
   - Volume Group, from which you select the Volume Group name
   - Show All Volumes, from which you select All FB volumes or All CKD volumes

5. Select the volume pairs you want to fail back.
6. Now select Recovery Failback from the Select Action pull-down menu, as shown in Figure 16-42. Note that in this example, we are still working with the Storage Unit at the backup site (serial 13-00247).


Figure 16-42 Failback - Metro Mirror panel

7. We now verify the volumes we have selected; see Figure 16-43. We can suspend the pair after the synchronization and reset the LUN reservation. How the failback command operates depends on whether or not data changes are taking place on the previous source volume; see 14.3, Failover and failback on page 179. Click OK to proceed.

Figure 16-43 Failback - Verify

During the failback, changes are copied. You can click Refresh to see if the copy is finished. When the copy is complete, the volumes at the backup site (B) will show as full duplex source volumes, as depicted in Figure 16-44.

Figure 16-44 Failback - Result


Note: If the failback fails, check whether your prior active server can still access the new target volume or has not reset the reserve on the volume, and whether your Metro Mirror path is established between these LSSs.

If we display the status of the volumes at the production site (on serial 7503461), they appear as full duplex targets, as shown in Figure 16-45. The source machine is serial 13-00247 at the backup site.

Figure 16-45 Failback - View from the production site Storage Unit

8. At this point, we shut down the servers at the backup site. Once they are shut down, we know that no more changes are written to the volumes at the backup site. Now we repeat the failover and failback process, but we do so from the production site storage unit (A). In Figure 16-46, we select the volumes on the production storage unit and perform a Failover.

Figure 16-46 Starting failover - View on production site DS8000


9. When the Failover is complete, the volumes on the production site show as suspended Metro Mirror source volumes. Now, perform a Failback using the production site storage unit, as shown in Figure 16-47.

Figure 16-47 Starting failback on the production site DS8000

When the final Failback is complete, the production site storage unit is now the source again and the backup site storage unit is now the target again. This means the production site storage unit (in our example, serial 75-03461) now looks like Figure 16-48. This shows serial 75-03461 as the source machine while serial 13-00247 is the target machine. If we instead take the view from the backup site storage unit (in our example, serial 13-00247), it now looks like it did in Figure 16-39 on page 245, prior to starting this whole exercise.

Figure 16-48 Production site storage unit at conclusion of disaster recovery exercise

Note: If the Failback fails, check whether your prior active server can still access the new target volume, or has not reset the reserve on the volume, and if your Metro Mirror path is established between these LSSs. You might need to use the Reset Reservation option depicted in Figure 16-43.


Part 5. Global Copy
In this part of the book, we describe IBM System Storage Global Copy. After presenting an overview of Global Copy, we discuss the options available, the interfaces you can use, and the configuration considerations. We also provide examples of the use of Global Copy.


Chapter 17. Global Copy overview


In this chapter we describe the characteristics and operation of Global Copy. We also discuss the considerations for its implementation with the IBM System Storage DS8000.


17.1 Global Copy overview


Global Copy (formerly known as PPRC Extended Distance, or PPRC-XD) is a non-synchronous remote copy function for open systems and System z environments, for longer distances than are possible with Metro Mirror. It is appropriate for remote data migration, off-site backups, and transmission of inactive database logs at virtually unlimited distances.

Figure 17-1 Global Copy (diagram: a server write to the primary LUN or volume is acknowledged immediately, and the update is written to the secondary LUN or volume non-synchronously)

With Global Copy, write operations complete on the source disk subsystem before they are received by the target disk subsystem. This capability is designed to prevent the local system's performance from being affected by wait time from writes on the remote system. Therefore, the source and target copies can be separated by any distance. Figure 17-1 illustrates how Global Copy operates:
1. The host server makes a write I/O to the source (local) DS8000. The write is staged through cache and non-volatile storage (NVS).
2. The write returns as completed to the host server's application.
3. At a later time, that is, in an asynchronous manner, the source DS8000 sends the necessary data so that the updates are reflected on the target (remote) volumes. The updates are grouped in batches for efficient transmission.
4. The target DS8000 returns write complete to the source DS8000 when the updates are secured in the target DS8000 cache and NVS. The source DS8000 then resets its Global Copy change recording information.

Note: The efficient extended distance mirroring technique of Global Copy is achieved with sophisticated algorithms. For example, if changed data is in the cache, then Global Copy sends only the changed sectors. There are also sophisticated queuing algorithms to schedule the processing of the updated tracks for each volume and to set up the batches of updates to be transmitted.


17.2 Volume states and change logic


Figure 17-2 illustrates the basic states and the change logic of a volume that is in either a Metro Mirror or a Global Copy relationship. The following considerations apply to the volume states when the pair is a Global Copy pair:
- Simplex: The volume is not in a Global Copy relationship.
- Suspended: In this state, the writes to the source volume are not mirrored onto the target volume. The target volume becomes out-of-sync. During this time, Global Copy keeps a bitmap record of the changed tracks in the source volume. Later, the volume pair can be restarted, and then only the tracks that were updated are copied.
- Full duplex: A Global Copy volume never reaches this volume state. In the full duplex condition, updates on the source volume are synchronously mirrored to the target volume. With Metro Mirror this is possible. With Global Copy, even when no tracks are out-of-sync, the pair still remains in copy pending status.
- Copy pending: Updates on the source volume are non-synchronously mirrored to the target volume.

Figure 17-2 Global Copy and Mirror state change logic (diagram showing the transitions between the Simplex, Copy Pending, Full Duplex, and Suspended states)

In regard to the state change logic, the following considerations apply. For the following discussion, refer to the various arrows shown in Figure 17-2:
- When you initially establish a mirror relationship from a volume in simplex state, you have the option to request that it becomes a Global Copy pair (establish Global Copy arrow) or a Metro Mirror pair (establish Metro Mirror arrow).
- Pairs can change from the copy pending state to the full duplex state when a go-to-SYNC operation is executed (go-to-SYNC arrow). You can also request that a pair be suspended as soon as it reaches the full duplex state (go-to-SYNC and suspend arrows).


- Pairs cannot change directly from full duplex state to copy pending state. They must go through an intermediate suspend state.
- You can go from suspended state to copy pending state during an incremental copy (copying out-of-sync tracks only). This process is similar to the traditional transition from suspended state to full duplex state (Resync arrow).
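To see which of these states a pair is currently in, the lspprc command can be used. As a sketch only, reusing the storage image and volume IDs from the Metro Mirror examples in Chapter 16 (assumptions here), the State column of the output shows Copy Pending for a Global Copy pair:
dscli> lspprc -remotedev IBM.2107-75ABTV1 1000:2000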

17.3 Global Copy positioning


Figure 17-3 lists the main points for considering Global Copy.

Global Copy is a recommended solution for remote data copy, data migration, and off-site backup over continental distances without impacting application performance. It can be used for application recovery implementations if application I/O activity can be quiesced and a non-zero data loss RPO is admissible.
- It can be used over continental distances with excellent application performance:
  - The distances are only limited by the network and channel extender capabilities.
  - Application write operations do not have synchronous-like overheads.
- Fuzzy copy of data at the recovery site (the sequence of dependent writes may not be respected at the recovery site).
- Recovery data can become a consistent point-in-time copy of the primary data, if appropriate application checkpoints are set to do global catch-ups:
  - Pairs are synchronized with application group consistency.
  - Synchronizations can be done more frequently, because of short catch-ups.
  - RPO is still not zero, but improves substantially.
Figure 17-3 Global Copy positioning

As summarized in Figure 17-3, Global Copy is a recommended solution when you want to perform remote data copy, data migration, off-site backup, and transmission of inactive database logs without impacting application performance, which is particularly relevant when implemented over continental distances. Global Copy can also be used for application recovery solutions based on periodic point-in-time copies of the data. This requires short quiescings of the application's I/O activity.


Chapter 18. Global Copy options and configuration


In this chapter we discuss the options available when using Global Copy. We also discuss the configuration guidelines that you should consider when planning for Global Copy with the IBM System Storage DS8000.


18.1 Global Copy basic options


First we review the basic options available when working with Global Copy. You will see that many of these options are common to Metro Mirror. You still must consider that the results might differ, as Metro Mirror and Global Copy have differences in the way they work.

18.1.1 Establishing a Global Copy pair


This is the operation where you establish a Global Copy relationship between a source volume and a target volume; that is, you establish a Global Copy pair. During this operation, the volumes transition from the simplex state to the copy pending state. When you establish a Global Copy pair, the following options are available:
- No copy: This option does not copy data from the source to the target volume. This option assumes that the volumes are already synchronized. The data synchronization is your responsibility and the DS8000 does not check the synchronization.
- Target read: This option allows host servers to read from the target volume. For a host server to read the volume, the volume pair must be in full duplex state. This parameter applies to open systems volumes. It does not apply to System z volumes.

  Note: In open systems file system environments, even if an application only reads data from a file system, a SCSI write command can be issued to the LUN in which the file system resides, because some information, such as the last access time stamp, could be updated by the file system. In this case, even if you specify this option, the operation might fail.

- Suspend after data synchronization: This option suspends the remote copy pairs after the data has been synchronized. You can use this option with the Go-to-sync operation (the Go-to-sync operation is discussed later in 18.1.5, Converting a Global Copy pair to Metro Mirror on page 259).
- Wait option: This option delays the command response until the volumes are in one of the final states: simplex, full duplex, suspended, target full duplex, or target suspended. This parameter cannot be used with -type gcp or -mode nocp, but it can be used with the Go-to-sync operation.
- Reset reserve on target: This option allows the establishment of a Global Copy relationship when the target volume is reserved by another host. If this option is not specified and the target volume is reserved, the command fails. This parameter can only be used with open systems volumes.
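As an illustration only, establishing a Global Copy pair with the DS CLI might look like the following sketch. The storage image and volume IDs are reused from the Metro Mirror examples in Chapter 16 and are assumptions here, not a prescription:
dscli> mkpprc -remotedev IBM.2107-75ABTV1 -type gcp 1000:2000 1001:2001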

18.1.2 Suspending a Global Copy Pair


This operation stops copying data to the target volume and leaves the pair in suspended state. Because the source DS8000 keeps a record of all changed tracks on the source volume, you can resume the remote copy operation at a later time.


18.1.3 Resuming a Global Copy Pair


This operation resumes a Global Copy relationship for a volume pair and starts to copy data back again to the target volume. Only modified tracks are sent to the target volume because the DS8000 kept a record of all changed tracks on the source volume while the volumes were suspended. When resuming a Global Copy relationship, you can use the same options you use to initially establish a Global Copy pair, except for the no copy option.
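A minimal DS CLI sketch of suspending and later resuming a Global Copy pair, again reusing the assumed IDs from the earlier examples (the pausepprc command is the DS CLI command used to suspend a pair):
dscli> pausepprc -remotedev IBM.2107-75ABTV1 1000:2000
dscli> resumepprc -remotedev IBM.2107-75ABTV1 -type gcp 1000:2000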

18.1.4 Terminating a Global Copy Pair


This operation ends the remote copy relationship between the volume pair. The volumes return to the simplex state.

18.1.5 Converting a Global Copy pair to Metro Mirror


This operation is known as the Go-to-sync operation. There are two common situations in which you would convert a pair from Global Copy mode to Metro Mirror mode:
- Situation 1: You have used Global Copy to complete the bulk transfer of data in the creation of many copy pairs, and you now want to convert some or all of those pairs to Metro Mirror mode.
- Situation 2: You have Global Copy pairs for which you want to make FlashCopy backups on the recovery site. You convert the pairs temporarily to Metro Mirror mode in order to obtain point-in-time consistent copies.
You can convert a Global Copy pair to a Metro Mirror pair using the DS CLI or DS GUI.
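With the DS CLI, a Go-to-sync can be done by reissuing mkpprc with -type mmir against the existing pair, as mentioned in 18.2.1. A sketch only, with the same assumed IDs as in the earlier examples:
dscli> mkpprc -remotedev IBM.2107-75ABTV1 -type mmir 1000:2000 1001:2001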

18.2 Creating a consistent point-in-time copy


While the volume pairs are in the copy pending state, the target volumes maintain a fuzzy copy of the data:
- Because of the non-synchronous data transfer characteristics, at any time there will be a certain amount of updated data that is not reflected at the target volume. This data corresponds to the sectors that were updated since the last volume bitmap scan was done. These are out-of-sync sectors.
- Because of the bitmap scan method, writes are not ensured to be applied onto the target volume in the same sequence that they are written to the source volume.
When terminating the Global Copy relationship to establish host access to the target volumes, the first issue might cause loss of transactions. Since a file system's or database's consistency depends on the correct ordering of write sequences, the second issue can cause inconsistent volumes. Therefore, before the target volumes can be used by the host systems, you must make them point-in-time consistent:
- The application must be quiesced and the volume pairs temporarily suspended. This is necessary for ensuring consistency not only at the volume level, but also at the application level.
- The target volumes have to catch up to their source counterparts. Global Copy catch-up is the name of the transition that occurs to a Global Copy pair when it goes from its normal out-of-sync condition until it reaches a full sync condition. At the end of this transition, the source and target volumes become fully synchronized.


You should now perform a FlashCopy of the target volumes onto tertiary volumes, and then resume the Global Copy pairs. These tertiary volumes are then a consistent point-in-time copy of the primary volumes.

18.2.1 Procedure to take a consistent point-in-time copy


Figure 18-1 summarizes a typical procedure that provides a consistent point-in-time copy of the data. In the diagram, we call the Global Copy source volume at the production site the A volume, the Global Copy target volume at the recovery site the B volume, and the FlashCopy target volume at the recovery site the C volume.

Figure 18-1 Create consistent data in Global Copy (diagram of the five steps: normal Global Copy operation from the source A volume to the target B volume, quiesce the application, go-to-sync and suspend the volume pairs, restart the application, and take a FlashCopy from B to the C volume at the recovery site)

Here is a more detailed description of the steps in the procedure shown in Figure 18-1:
1. Normal Global Copy operation. In this phase, Global Copy runs normally. Data written to the source volume (A) is copied to the target volume (B) independently of the write sequence to the source volume. Therefore, at this phase, there is no guarantee that the target volume (B) holds a consistent copy of the source volume (A) data.
2. Quiesce the application at the production site. When you want to have a consistent copy at the recovery site, the application must be quiesced to cease all write I/O from updating the source volume (A). Depending on the host operating system, it might be necessary to dismount the source volume.

   Note: In a DB2 environment, you can use a function provided by DB2 to stop the application write I/O, which is the set write suspend/resume command.


3. Go-to-sync and suspend the pairs. Perform the catch-up by issuing a Go-to-sync operation. The volume pair will leave the copy pending state and reach the full duplex state. Wait for the synchronization of the pair and suspend the pair after the synchronization has completed. You can use the single mkpprc -type mmir -suspend command to perform a Go-to-sync and suspend operation. After this step, the Global Copy target (B) holds a consistent copy of the data.
4. Restart the application at the production site. Now that you have a consistent copy of the data at the recovery site, and while no updates to the source (A) volumes are sent due to the suspend state, you can restart the application at the production site.
5. Take a FlashCopy B to C. In order to resume the Global Copy operation, we have to first perform a FlashCopy from the Global Copy target volumes (B) to the FlashCopy target volumes (C) to keep consistent data at the recovery site. Otherwise, further Global Copy data transmissions will update the Global Copy targets (B), modifying the consistent copy that we have at the recovery site at that moment. You can use the Remote FlashCopy function provided by the DS8000 to issue the FlashCopy command to the DS8000 at the recovery site via the Global Copy paths. (If you use Remote FlashCopy, you do not need to set up a LAN connection to the remote DS8000.) After taking the FlashCopy, you have a consistent copy in the FlashCopy target volume (C). You can then resume the Global Copy normal operation.
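As a rough DS CLI sketch of steps 3 to 5, assuming the pair 1000:2000 from the earlier examples and a hypothetical FlashCopy target volume 2200 at the recovery site; the exact commands and parameters for your environment may differ, and the remote FlashCopy variant (mkremoteflash) could be used instead of addressing the remote unit directly:
dscli> mkpprc -remotedev IBM.2107-75ABTV1 -type mmir -suspend -wait 1000:2000
dscli> mkflash -dev IBM.2107-75ABTV1 2000:2200
dscli> resumepprc -remotedev IBM.2107-75ABTV1 -type gcp 1000:2000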

18.2.2 Making a FlashCopy at the remote site


In step 5 of the previous procedure, when doing a remote FlashCopy from the production to the recovery site, you must take into account that a disaster might occur while the FlashCopy establishment for multiple volumes is in progress; see Figure 18-2.

Figure 18-2 FlashCopy establishment from the production site (diagram: a DS CLI client at the production site issues four sequential FlashCopy establish commands for the Global Copy target volumes, with FlashCopy targets Vol 1 to Vol 4, on the recovery site DS8000)


In Figure 18-2, four FlashCopy pairs are established at the recovery site. The FlashCopy targets are Vol 1, Vol 2, Vol 3, and Vol 4. In this case, the DS CLI client issuing the FlashCopy establish commands is at the production site. While the DS CLI client is issuing the FlashCopy establish commands, a disaster might occur. It might happen, although it is a very short window, that the DS CLI client can issue only the first and second FlashCopy establish commands before going down. If this scenario occurs, the four volumes at the recovery site are not at the same step of the point-in-time copy. In other words, Vol 1 and Vol 2 have the latest copies, but Vol 3 and Vol 4 have previous copies. If you detect this situation, depending on your Global Copy operational scenario, you might be able to use the Global Copy target volumes, which might still have consistent point-in-time copies.

As long as you issue the FlashCopy commands from the production site, the above situation can occur with any FlashCopy establish command, such as the mkflash, resyncflash, mkremoteflash, and resyncremoteflash commands. In order to avoid or detect this situation, you can implement several procedures, which include:
- Have the DS CLI client at the recovery site issue the FlashCopy command to the DS8000 at the recovery site. We also recommend that you keep the DS CLI execution log at the recovery site to determine what processes have been performed during a disaster.
- If you prefer having the DS CLI client at the production site, keep the DS CLI execution log at the recovery site to determine what processes have been performed during a disaster.
- If you use the Incremental FlashCopy function, you can use the FlashCopy sequence number. When you establish FlashCopy, you can add a certain FlashCopy sequence number to the set of FlashCopy pairs. This sequence number can be used as an identifier for a FlashCopy relation or a group of FlashCopy relations. When a disaster occurs, you can determine from this sequence number whether you have the same set of FlashCopies or not. However, for safer implementation, we also recommend that you keep the DS CLI execution log at the recovery site to determine what processes have been performed during a disaster.
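As an illustration of the last approach only, and assuming the -seqnum parameter of the DS CLI FlashCopy commands with a hypothetical sequence number and hypothetical volume IDs, the same sequence number would be given to every pair in the set:
dscli> mkflash -dev IBM.2107-75ABTV1 -seqnum 0100 2000:2200 2001:2201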

18.3 Hardware requirements


Global Copy is an optional licensed function of the IBM System Storage DS8000. Licensed functions require the selection of a DS8000 series feature number (IBM 2107) and the acquisition of DS8000 series Function Authorization (IBM 2244) feature numbers. Authorization must be acquired for both the source and target 2107 machines. You get Global Copy by acquiring either the Metro Mirror licensed function or the Global Mirror licensed function. See 14.11, Hardware requirements on page 190, for more information.

Interoperability
Global Copy pairs can only be established between disk subsystems of the same (or similar) type and features. For example, a DS8000 can have a Global Copy pair with another DS8000, a DS6000, an ESS 800, or an ESS 750. It cannot have a Global Copy pair with an RVA or an ESS F20. Note that all disk subsystems must have the appropriate Global Copy feature installed. If your DS8000 is being mirrored to an ESS disk subsystem, the ESS must have PPRC Version 2 (which supports Fibre Channel links) with the appropriate licensed internal code level (LIC).


Refer to the DS8000 Interoperability Matrix or the System Storage Interoperation Center (SSIC) for more information:
http://www.ibm.com/servers/storage/disk/ds8000/interop.html
http://www.ibm.com/systems/support/storage/config/ssic

18.4 Global Copy connectivity: Paths and links


Global Copy pairs are set up between volumes in LSSs, usually in different disk subsystems, and these are normally in separate locations. A path (or group of paths) needs to be defined between the source LSS and the target LSS. These logical paths are defined over physical links between the disk subsystems. When we define paths, we do not specify any particular remote copy function; paths can be used for Global Copy, Metro Mirror, or Global Mirror replication. Therefore, the discussion presented in 14.5, Metro Mirror paths and links on page 185, is also valid for Global Copy. We recommend that you read it.

18.4.1 Global Copy Fibre Channel links


The discussion presented in 14.5.1, Fibre Channel links on page 186, is also valid for Global Copy. We recommend that you read it.

18.4.2 Logical paths


The discussion presented in 14.5.1, Fibre Channel links on page 186, is also valid for Global Copy. We recommend that you read it.

18.5 Bandwidth
To determine the bandwidth requirement for your Global Copy environment, consider both the amount of data you have to transfer to the remote site and the available window.

If using Global Copy for daily off-site backup with the technique mentioned in 18.2, Creating a consistent point-in-time copy on page 259, you can estimate the required window to be the sum of the application quiesce time, plus the FlashCopy establishment time, plus the application restart time. However, the FlashCopy background copy operation can have an influence over the Global Copy operation. In addition, if you take a backup to tape of the FlashCopy target volumes, this activity also influences the Global Copy activity and the FlashCopy background copy performance. Therefore, for a safer estimation of your daily off-site backup window, as a very conservative approach, we recommend that you estimate each time sequentially, such as Global Copy data transmission, application quiesce, FlashCopy establishment, application restart, FlashCopy background copy operation, and backing up the FlashCopy target.

You can estimate the approximate amount of data to be sent to the target by calculating the amount of write data to the source. Some tools to assist you with this are TotalStorage Productivity Center (TPC), or operating system dependent tools, such as iostat. See the Redbooks publication, The IBM TotalStorage DS8000 Series: Implementation, SG24-6786. Another method, but not quite so exact, is to monitor the traffic over the FC switches using FC switch tools and other management tools, but remember that only writes are mirrored by Global Copy. You can also get some feeling about the proportion of reads/writes by issuing datapath query devstats on SDD-attached servers.


Finally, you can estimate how much network bandwidth is necessary for your Global Copy solution by dividing the amount of data you need to transfer by the duration of the backup window.

The FlashCopy relationship at the remote site can influence the performance of the Global Copy operation. If you use FlashCopy with the nocopy option at the recovery site, when the Global Copy target receives an update, the track on the FlashCopy source, which is also the Global Copy target, has to be copied to the FlashCopy target before the data transfer operation completes. This copy operation to the FlashCopy target can complete by using the DS8000 cache and NVS without waiting for a physical write to the FlashCopy target. However, this data movement can influence the Global Copy activity. So, when considering the network bandwidth, consider that the FlashCopy effect over the Global Copy activity might in fact decrease the bandwidth utilization during some intervals. If not using the nocopy option, but resuming Global Copy before the FlashCopy background copy finishes at the remote site, this background data movement might also influence the Global Copy performance.
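As a purely illustrative calculation with assumed numbers: if roughly 500 GB of data changes between two off-site backup cycles and the available transmission window is 4 hours, the sustained network bandwidth needed is about 500 x 1024 MB divided by 14 400 seconds, or approximately 36 MBps, before allowing for protocol overhead, peaks in the write workload, and the FlashCopy effects mentioned above.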

18.6 LSS design


Since the DS8000 has made the LSS a topological construct, which is not tied to a physical array as in the ESS, the design of your LSS layout can be simplified. It is now possible to assign LSSs to applications, for example, without concern regarding under-allocation or over-allocation of physical disk subsystem resources. This can also help you when you use the freezepprc command. A freeze operation is performed at the LSS level, causing all Global Copy volumes in that LSS to go into suspended state with a queue full condition and terminating all associated paths. If you assign LSSs to each of your applications, you can control the impact of the queue full condition caused by the freeze operation at an application level. If you put volumes used by several applications into the same LSS, the Global Copy volumes of all the applications sharing that LSS are affected by the queue full condition. You can issue one freezepprc command to multiple LSSs. Therefore, you do not need to consolidate the number of LSSs that your applications use.

In general, the freezepprc and unfreezepprc commands are used in a Metro Mirror environment or after converting from Global Copy mode to Metro Mirror mode. However, if you set the Consistency Group option on an LSS, even when the volumes in the LSS are Global Copy sources, the freeze operation triggered by the freezepprc command, or a failure of a target update, causes the queue full condition. Therefore, you should pay attention to the LSS design if you use the Consistency Group option.

18.7 Distance
The non-synchronous characteristics of Global Copy, combined with the throughput of its efficient track mirroring technique, make Global Copy well suited for remote copy solutions at distances beyond the 300 km available with Metro Mirror. Global Copy achieves greater distances without incurring the distance latency penalties that synchronous write I/O methods exhibit. The maximum distance for a direct Fibre Channel connection is 10 km. If you want to use Global Copy over longer distances, the following connectivity technologies can be used to extend this distance:


- Fibre Channel switches
- Channel Extenders over Wide Area Network (WAN) lines
- Dense Wave Division Multiplexors (DWDM) on dark fibre

Channel extender
Channel extender vendors connect DS8000 systems over a variety of Wide Area Network (WAN) connections, including Fibre Channel, Ethernet/IP, ATM-OC3, and T1/T3. When you use channel extender products with Global Copy, the channel extender vendor determines the maximum supported distance between the source and target DS8000. Contact the channel extender vendor for its distance capability, line quality requirements, and WAN attachment capabilities.

A complete and current list of Global Copy supported environments, configurations, networks, and products is available in the DS8000 Interoperability Matrix or the System Storage Interoperation Center (SSIC) at:
http://www.ibm.com/servers/storage/disk/ds8000/interop.html
http://www.ibm.com/systems/support/storage/config/ssic

You should also consult the channel extender vendor about hardware and software prerequisites if you are using their products in a DS8000 Global Copy configuration. Evaluation, qualification, approval, and support of Global Copy configurations that use channel extender products are the sole responsibility of the channel extender vendor.

Dense Wave Division Multiplexor (DWDM)


Wave Division Multiplexing (WDM) and Dense Wave Division Multiplexing (DWDM) are the basic technologies of fibre optical networking. The technique is used to carry many separate and independent optical channels on a single dark fibre. A simple way to envision DWDM is to consider that, at the sending end, multiple fibre optic input channels (such as Fibre Channel or Gbit Ethernet) are combined by the DWDM into a single fibre optic cable, with each channel encoded as light of a different wavelength. Think of each individual channel as an individual color: the DWDM system is transmitting a rainbow. At the receiving end, the DWDM fans the different optical channels back out. By the very nature of its operation, DWDM provides the full bandwidth capability of each individual channel, and because the wavelength of light is, from a practical perspective, infinitely divisible, the total aggregate bandwidth of DWDM technology is limited only by the sensitivity of its receptors.

A complete and current list of Global Copy supported environments, configurations, networks, and products is available in the DS8000 Interoperability Matrix. Contact the DWDM vendor regarding hardware and software prerequisites when using their products in a DS8000 Global Copy configuration.

18.8 DS8000 configuration at the remote site


Depending on your operational environment, when using Global Copy you might also need FlashCopy target volumes at the remote site; if so, you might be able to choose higher capacity DDMs for the DS8000 at the remote site than those at the local site. However, pay attention to the FlashCopy background copy performance discussed in 18.5, Bandwidth on page 263. If you take backups of the FlashCopy target volumes onto tape, you must ensure that the tape resources are capable of handling these dump operations between the point-in-time checkpoints.


Chapter 19. Global Copy interfaces and examples


In this chapter we describe the interfaces that can be used for Global Copy management in an open systems environment. We also present examples that illustrate step-by-step how to execute the setup and management tasks of the Global Copy environment. We cover the following topics:
- DS Command Line Interface and DS Storage Manager overview
- Setting up a Global Copy environment
- Removing a Global Copy environment
- Managing the Global Copy environment
- Failover and Failback operations (site switch)
- Using the DS Storage Manager GUI to manage Global Copy


19.1 Global Copy interfaces: Overview


There are various interfaces available for the configuration and management of Global Copy on the DS8000 when used in an open systems environment:

DS Storage Manager: This is a graphical user interface (DS GUI) running in a Web browser. The DS GUI can be accessed using the preinstalled browser on the HMC console, through the DS8000 Element Manager on a TPC server such as the SSPC (for new DS8000 systems with Licensed Machine Code 5.30xx.xx), or, for earlier DS8000 installations, through a supported Web browser on any workstation connected to the HMC console.

DS Command Line Interface (DS CLI): This interface provides a set of commands that are executed on a workstation that communicates with the DS HMC.

TotalStorage Productivity Center for Replication (TPC for Replication): The TPC Replication Manager server, where TPC for Replication runs, connects to the DS8000. TPC for Replication provides management of DS8000 series business continuance solutions, including FlashCopy, Metro Mirror, and Global Mirror. TPC for Replication is covered in Chapter 6, IBM TotalStorage Productivity Center for Replication on page 43.

DS Open Application Programming Interface (DS Open API): The DS Open API is a set of application programming interfaces that can be integrated into programs. The DS Open API is not covered in this book. For information about the DS Open API, refer to the publication IBM System Storage DS Open Application Programming Interface Reference, GC35-0516.

You can also use the following interfaces (not covered in this book): the z/OS interfaces TSO, ICKDSF, and the ANTRQST API. In order to use a z/OS interface to manage open systems LUNs, the DS8000 must have at least one CKD volume. If you are interested in this possibility, refer to the Redbooks publication, IBM System Storage DS8000 Series: Copy Services with IBM System z, SG24-6787.

This chapter gives you an overview of the DS CLI and the DS GUI for Global Copy management.

DS CLI and DS GUI similar functions for Global Copy management


Table 19-1 compares similar Global Copy commands from the DS CLI and the DS Storage Manager, and the corresponding action.


Table 19-1 DS CLI and DS GUI commands and actions (Task | Command with DS CLI | Select option with DS GUI)

Metro Mirror and Global Copy paths commands:
- List available I/O ports that can be used to establish Metro Mirror paths. | lsavailpprcport | This information is shown during the process when a path is established.
- List established Metro Mirror paths. | lspprcpath | Copy Services → Paths
- Establish path. | mkpprcpath | Copy Services → Paths → Create
- Delete path. | rmpprcpath | Copy Services → Paths → Delete

Metro Mirror and Global Copy pairs commands:
- Failback. | failbackpprc | Copy Services → Metro Mirror / Global Copy → Recovery Failback
- Failover. | failoverpprc | Copy Services → Metro Mirror / Global Copy → Recovery Failover
- List Metro Mirror volume pairs. | lspprc | Copy Services → Metro Mirror / Global Copy
- Establish pair. | mkpprc | Copy Services → Metro Mirror / Global Copy → Create
- Suspend pair. | pausepprc | Copy Services → Metro Mirror / Global Copy → Suspend
- Resume pair. | resumepprc | Copy Services → Metro Mirror / Global Copy → Resume
- Delete pair. | rmpprc | Copy Services → Metro Mirror / Global Copy → Delete
- Freeze Consistency Group. | freezepprc
- Thaw Consistency Group. | unfreezepprc

The DS CLI has the advantage that you can write scripts with it and use them for automation. The DS Storage Manager GUI is a Web-based graphical user interface and is more intuitive than the DS CLI.
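As an illustration of the scripting approach, the commands used later in this chapter can be placed in a plain text file and run in DS CLI script mode. The following is only a sketch, under the assumption that a DS CLI profile with the HMC address, credentials, and device IDs is already in place; the file name gc_setup.script is arbitrary:

# gc_setup.script - establish the Global Copy pairs and list their status
mkpprc -remotedev IBM.2107-75ABTV1 -type gcp 1000-1001:2000-2001 1100-1101:2100-2101
lspprc -l 1000-1001 1100-1101

You would then run the script with a command of the form dscli -script gc_setup.script, which makes it straightforward to schedule repetitive Copy Services tasks, such as the periodic off-site backup procedure shown in 19.3.4.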

19.2 Copy Services network components


To implement the Global Copy environment, you need the same network environment as for Metro Mirror. You can refer to 16.2, Copy Services network components on page 199, for a description.

19.3 Using DS CLI examples


You can use the DS CLI interface to manage all DS8000 Copy Services functions, such as defining paths, establishing Global Copy pairs, and so on. For a detailed explanation of the DS CLI, refer to Chapter 5, DS Command-Line Interface on page 31.

When you establish or remove Global Copy paths and volume pairs, you must issue the DS CLI commands to the DS HMC that is connected to the source DS8000. When checking status information at the local and remote sites, you must issue the DS CLI list type commands, such as lspprc, to each DS8000 (source and target). The DS CLI commands are documented in the publication, IBM System Storage DS8000 Command-Line Interface Users Guide, SC26-7916.

In this section we describe and give examples of the available DS CLI commands that you can use for Global Copy setup and control.

19.3.1 Setting up a Global Copy environment using the DS CLI


In the following sections we present an example of how to set up a Global Copy environment using the DS CLI. Figure 19-1 shows the configuration that we implement.

Figure 19-1 DS8000 configuration in the Global Copy setup example (the figure shows DS8000#1, -dev IBM.2107-7520781, with volumes 1000-1001 in LSS10 and 1100-1101 in LSS11, connected by physical Fibre Channel paths to DS8000#2, -dev IBM.2107-75ABTV1, with volumes 2000-2001 in LSS20 and 2100-2101 in LSS21)

In our example we use different LSS and LUN numbers for the Global Copy source and target elements, so that you can see more clearly which one is being specified as you read through the example.

Note: In a real environment, unlike in our example, we recommend that you maintain a symmetrical configuration in terms of both physical and logical elements to simplify the management of your Global Copy environment.

The procedure to set up the Global Copy environment is similar to that of a Metro Mirror environment. What is different is the type of relationship that is established between the volumes: in this example we define Global Copy pairs.


Preparing to work with the DS CLI


As we prepare to work with the DS CLI we do some initial tasks that will simplify our activities during the configuration process. Refer to 16.5.1, Preparing to work with the DS CLI on page 202.
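A typical preparation step is to maintain a DS CLI profile so that the HMC address, the user credentials, and the default device IDs do not have to be entered on every command. The following is a minimal sketch of such a profile; the IP address and file names are hypothetical, and the exact set of keywords and the password handling depend on your DS CLI level, so verify them against the DS CLI documentation:

hmc1: 10.10.10.1
username: admin
pwfile: security/pwfile
devid: IBM.2107-7520781
remotedevid: IBM.2107-75ABTV1

With devid and remotedevid set in the profile, the -dev and -remotedev parameters can usually be omitted from the commands shown in this chapter.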

Setup of Global Copy configuration


Figure 19-2 shows the configuration we set up for this example. The configuration has four Global Copy pairs that reside in two LSSs. Two paths are defined between each source and target LSS.

Figure 19-2 Global Copy environment to set up (the figure shows four Global Copy pairs over FCP links: source volumes 1000-1001 in LSS10 and 1100-1101 in LSS11 on DS8000#1, -dev IBM.2107-7520781, paired with target volumes 2000-2001 in LSS20 and 2100-2101 in LSS21 on DS8000#2, -dev IBM.2107-75ABTV1, with paths defined between the source and target LSSs)

To configure the Global Copy environment, we follow this procedure:
1. Determine the available Fibre Channel links for the path definitions.
2. Define the paths that Global Copy will use.
3. Create the Global Copy pairs.

Determine the available Fibre Channel links


This step is similar to the Metro Mirror setup example. Refer to 16.5.3, Determining the available Fibre Channel links on page 203.
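For reference, a query of the available ports in our configuration would look similar to the following sketch (the remote WWNN is the one used in Example 19-17 later in this chapter; the output columns depend on the DS CLI level):

dscli> lsavailpprcport -remotedev IBM.2107-75ABTV1 -remotewwnn 5005076303FFC663 10:20

The command lists the local and remote I/O port pairs that can be used as Fibre Channel links for the path definitions in the next step.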

Define paths for Global Copy


This step is similar to the Metro Mirror example. Refer to 16.5.4, Creating Metro Mirror paths on page 204.

Create Global Copy pairs


After creating the paths, you can establish the Global Copy volume pairs. This is done with the mkpprc command; the result is verified with the lspprc command. See Example 19-1. When you create a Global Copy pair, you specify the -type gcp parameter with mkpprc.


Example 19-1 Create Global Copy pairs and verify the result
dscli> mkpprc -remotedev IBM.2107-75ABTV1 -type gcp 1000-1001:2000-2001 1100-1101:2100-2101 Date/Time: November 2, 2005 10:53:22 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1000:2000 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1001:2001 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1100:2100 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1101:2101 successfully created. dscli> dscli> lspprc 1000-1001 1100-1101 Date/Time: November 2, 2005 10:57:30 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ================================================================================================== 1000:2000 Copy Pending Global Copy 10 unknown Disabled False 1001:2001 Copy Pending Global Copy 10 unknown Disabled False 1100:2100 Copy Pending Global Copy 11 unknown Disabled False 1101:2101 Copy Pending Global Copy 11 unknown Disabled False

You can check the status of Global Copy using the lspprc -l command. Out Of Sync Tracks shows the remaining tracks to be sent to the target volume (the logical track size for an FB volume on the DS8000 is 64 KB). You can use the -fullid flag of the lspprc command to display the fully qualified DS8000 storage image IDs in the command output; see Example 19-2.
Example 19-2 lspprc -l and lspprc -fullid for Global Copy pairs
dscli> lspprc -l 1000-1001 1100-1101
Date/Time: November 2, 2005 10:57:37 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type Out Of Sync Tracks Tgt Read Src Cascade Tgt Cascade Date Suspended SourceLSS Timeout (secs) Critical Mode First Pass Status ========================================================================================================================================== 1000:2000 Copy Pending Global Copy 38379 Disabled Disabled invalid 10 unknown Disabled False 1001:2001 Copy Pending Global Copy 38083 Disabled Disabled invalid 10 unknown Disabled False 1100:2100 Copy Pending Global Copy 60840 Disabled Disabled invalid 11 unknown Disabled False 1101:2101 Copy Pending Global Copy 60838 Disabled Disabled invalid 11 unknown Disabled False dscli>

dscli> lspprc

-fullid 1000-1001 1100-1101

Date/Time: November 2, 2005 11:06:40 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ========================================================================================================================================== ==== IBM.2107-7520781/1000:IBM.2107-75ABTV1/2000 Copy Pending Global Copy IBM.2107-7520781/10 unknown Disabled True IBM.2107-7520781/1001:IBM.2107-75ABTV1/2001 Copy Pending Global Copy IBM.2107-7520781/10 unknown Disabled True IBM.2107-7520781/1100:IBM.2107-75ABTV1/2100 Copy Pending Global Copy IBM.2107-7520781/11 unknown Disabled True IBM.2107-7520781/1101:IBM.2107-75ABTV1/2101 Copy Pending Global Copy IBM.2107-7520781/11 unknown Disabled True
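To translate the Out Of Sync Tracks value into an approximate amount of data that still has to be transmitted, multiply it by the 64 KB track size. For the first pair in Example 19-2, for instance, 38379 tracks x 64 KB is roughly 2 456 256 KB, or approximately 2.3 GB. This rough conversion is useful when you estimate how long the first pass or a catch-up will take at a given link bandwidth.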

Unlike the Metro Mirror initial copy (first pass), the state of the Global Copy volumes still shows Copy Pending even after the Out Of Sync Tracks become 0; see Example 19-3.
Example 19-3 List the Global Copy pairs status after Global Copy first pass completes
dscli> lspprc -l 1000-1001 1100-1101
Date/Time: November 2, 2005 10:58:43 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
ID State Reason Type Out Of Sync Tracks Tgt Read Src Cascade Tgt Cascade Date Suspended SourceLSS Timeout (secs) Critical Mode First Pass Status
==========================================================================================================================================
1000:2000 Copy Pending Global Copy 0 Disabled Disabled invalid 10 unknown Disabled True
1001:2001 Copy Pending Global Copy 0 Disabled Disabled invalid 10 unknown Disabled True
1100:2100 Copy Pending Global Copy 0 Disabled Disabled invalid 11 unknown Disabled True
1101:2101 Copy Pending Global Copy 0 Disabled Disabled invalid 11 unknown Disabled True
dscli>


Copy Pending is the state of the Global Copy source. The corresponding target state is Target Copy Pending. To see the target state in this example, you have to issue the lspprc command to the DS HMC connected to DS8000#2, which is the Global Copy target. See Example 19-4.
Example 19-4 lspprc for Global Copy target volumes dscli> lspprc 2000-2001 2100-2101
Date/Time: November 2, 2005 10:53:47 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ========================================================================================================= 1000:2000 Target Copy Pending Global Copy 10 unknown Disabled Invalid 1001:2001 Target Copy Pending Global Copy 10 unknown Disabled Invalid 1100:2100 Target Copy Pending Global Copy 11 unknown Disabled Invalid 1101:2101 Target Copy Pending Global Copy 11 unknown Disabled Invalid

19.3.2 Remove Global Copy environment using DS CLI


This step is similar to the Metro Mirror example (see 16.6, Removing Metro Mirror environment using DS CLI on page 206). However, here we show examples for Global Copy, because the outputs of the lspprc command are different from those in the Metro Mirror environment. In general, these are the major steps to clear the Global Copy environment:
1. Remove the Global Copy pairs.
2. Remove the logical paths.

Step 1: Remove Global Copy pairs


The rmpprc command removes a volume pair relationship; see Example 19-5. You can use the -quiet parameter to turn off the confirmation prompt for this command.
Example 19-5 Removing Global Copy pairs
dscli> lspprc 1000-1001 1100-1101
Date/Time: November 2, 2005 11:26:34 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ================================================================================================== 1000:2000 Copy Pending Global Copy 10 unknown Disabled True 1001:2001 Copy Pending Global Copy 10 unknown Disabled True 1100:2100 Copy Pending Global Copy 11 unknown Disabled True 1101:2101 Copy Pending Global Copy 11 unknown Disabled True dscli>

dscli> rmpprc -remotedev IBM.2107-75ABTV1

1000-1001:2000-2001 1100-1101:2100-2101

Date/Time: November 2, 2005 11:26:45 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00160W rmpprc: Are you sure you want to delete the Remote Mirror and Copy volume pair relationship 1000-1001:2000-2001:? [y/n]:y CMUC00155I rmpprc: Remote Mirror and Copy volume pair 1000:2000 relationship successfully withdrawn. CMUC00155I rmpprc: Remote Mirror and Copy volume pair 1001:2001 relationship successfully withdrawn. CMUC00160W rmpprc: Are you sure you want to delete the Remote Mirror and Copy volume pair relationship 1100-1101:2100-2101:? [y/n]:y CMUC00155I rmpprc: Remote Mirror and Copy volume pair 1100:2100 relationship successfully withdrawn. CMUC00155I rmpprc: Remote Mirror and Copy volume pair 1101:2101 relationship successfully withdrawn.

You can add the -at tgt parameter to the rmpprc command to remove only the available Global Copy target volumes, as shown in Example 19-6. You have to issue this command to the HMC connected to DS8000#2, which is the Global Copy target.


Example 19-6 Results of rmpprc with -at tgt dscli> lspprc 2002
Date/Time: November 2, 2005 11:39:23 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ========================================================================================================= 1002:2002 Target Copy Pending Global Copy 10 unknown Disabled Invalid dscli>

dscli> rmpprc -remotedev IBM.2107-75ABTV1 -quiet -at tgt 1002:2002


Date/Time: November 2, 2005 11:40:33 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 CMUC00155I rmpprc: Remote Mirror and Copy volume pair 1002:2002 relationship successfully withdrawn. dscli> dscli> lspprc 2002 Date/Time: November 2, 2005 11:40:39 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 CMUC00234I lspprc: No Remote Mirror and Copy found.

Example 19-7 shows the Global Copy source volume status after the rmpprc -at tgt command has completed and it also shows the result of a rmpprc -at src command. In this case, there were still available paths. Therefore, the source volume state changed after the rmpprc -at tgt command completed. If there were no available paths, the state of the Global Copy source volumes would have been preserved. In Example 19-7, you must issue this command to the HMC connected to DS8000#1, which is the Global Copy source.
Example 19-7 Global Copy source volume status after rmpprc with -at tgt and rmpprc with -at src
dscli> lspprc 1002 Date/Time: November 2, 2005 11:39:11 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ================================================================================================== 1002:2002 Copy Pending Global Copy 10 unknown Disabled False << After rmpprc -at tgt command completes >> dscli> lspprc 1002 Date/Time: November 2, 2005 11:40:52 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Stat ===================================================================================================== 1002:2002 Suspended Simplex Target Global Copy 10 unknown Disabled True dscli> dscli> rmpprc -remotedev IBM.2107-75ABTV1 -quiet -at src 1002:2002 Date/Time: November 2, 2005 11:41:35 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00155I rmpprc: Remote Mirror and Copy volume pair 1002:2002 relationship successfully withdrawn. dscli> dscli> lspprc 1002 Date/Time: November 2, 2005 11:41:39 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00234I lspprc: No Remote Mirror and Copy found.

Step 2: Remove logical paths


The rmpprcpath command removes the paths. Before removing the paths, you must remove all volume pairs that are using the paths, or you have to use the -force parameter with the rmpprcpath command. See Example 19-8.
Example 19-8 Remove paths
dscli> lspprc 1000-1001 1100-1101 Date/Time: November 3, 2005 1:27:34 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ================================================================================================== 1000:2000 Copy Pending Global Copy 10 unknown Disabled True 1001:2001 Copy Pending Global Copy 10 unknown Disabled True 1100:2100 Copy Pending Global Copy 11 unknown Disabled True 1101:2101 Copy Pending Global Copy 11 unknown Disabled True


dscli> dscli> rmpprc -remotedev IBM.2107-75ABTV1 -quiet 1000-1001:2000-2001 1100-1101:2100-2101


Date/Time: CMUC00155I CMUC00155I CMUC00155I CMUC00155I dscli> November 3, 2005 1:29:51 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 rmpprc: Remote Mirror and Copy volume pair 1000:2000 relationship successfully withdrawn. rmpprc: Remote Mirror and Copy volume pair 1001:2001 relationship successfully withdrawn. rmpprc: Remote Mirror and Copy volume pair 1100:2100 relationship successfully withdrawn. rmpprc: Remote Mirror and Copy volume pair 1101:2101 relationship successfully withdrawn.

dscli> lspprc 1000-1001 1100-1101


Date/Time: November 3, 2005 1:29:54 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00234I lspprc: No Remote Mirror and Copy found. dscli>

dscli> rmpprcpath -quiet -remotedev IBM.2107-75ABTV1 10:20 11:21


Date/Time: November 3, 2005 1:31:09 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00150I rmpprcpath: Remote Mirror and Copy path 10:20 successfully removed. CMUC00150I rmpprcpath: Remote Mirror and Copy path 11:21 successfully removed.

If you do not remove the Global Copy pairs that are using the paths, the rmpprcpath command fails; see Example 19-9.
Example 19-9 Remove paths without removing the Global Copy pairs
dscli> lspprc 1000-1001 1100-1101 Date/Time: November 3, 2005 1:38:19 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ================================================================================================== 1000:2000 Copy Pending Global Copy 10 unknown Disabled True 1001:2001 Copy Pending Global Copy 10 unknown Disabled True 1100:2100 Copy Pending Global Copy 11 unknown Disabled True 1101:2101 Copy Pending Global Copy 11 unknown Disabled True dscli> dscli> rmpprcpath -quiet -remotedev IBM.2107-75ABTV1 10:20 11:21 Date/Time: November 3, 2005 1:38:29 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUN03070E rmpprcpath: 10:20: Copy Services operation failure: pairs remain CMUN03070E rmpprcpath: 11:21: Copy Services operation failure: pairs remain

If you want to remove paths while Global Copy pairs are still using them, you can use the -force parameter; see Example 19-10. After a path has been removed, the Global Copy pair stays in the copy pending state until the source receives I/O from the servers. When I/O goes to the Global Copy source, the source volume becomes suspended. If you set the Consistency Group option for the LSS in which the volumes reside, I/Os from the servers are held with a queue full status for the specified timeout value.
Example 19-10 Remove paths still having Global Copy pairs - use -force parameter
dscli> lspprc 1000-1001 1100-1101 Date/Time: November 3, 2005 1:14:27 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ================================================================================================== 1000:2000 Copy Pending Global Copy 10 unknown Disabled True 1001:2001 Copy Pending Global Copy 10 unknown Disabled True 1100:2100 Copy Pending Global Copy 11 unknown Disabled True 1101:2101 Copy Pending Global Copy 11 unknown Disabled True dscli> rmpprcpath -quiet -remotedev IBM.2107-75ABTV1 -force 10:20 11:21 Date/Time: November 3, 2005 1:17:46 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00150I rmpprcpath: Remote Mirror and Copy path 10:20 successfully removed. CMUC00150I rmpprcpath: Remote Mirror and Copy path 11:21 successfully removed. dscli> dscli> lspprc 1000-1001 1100-1101 Date/Time: November 3, 2005 1:17:52 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ==================================================================================================


1000:2000 Copy Pending Global Copy 10 unknown Disabled True
1001:2001 Copy Pending Global Copy 10 unknown Disabled True
1100:2100 Copy Pending Global Copy 11 unknown Disabled True
1101:2101 Copy Pending Global Copy 11 unknown Disabled True
dscli>

<< After I/O goes to the source volume(1000 and 1001) >> dscli> lspprc 1000-1001 1100-1101 Date/Time: November 3, 2005 1:26:16 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ===================================================================================================================== = 1000:2000 Suspended Internal Conditions Target Global Copy 10 unknown Disabled True 1001:2001 Suspended Internal Conditions Target Global Copy 10 unknown Disabled True 1100:2100 Copy Pending Global Copy 11 unknown Disabled True 1101:2101 Copy Pending Global Copy 11 unknown Disabled True

19.3.3 Maintaining the Global Copy environment using the DS CLI


This section shows how to manage the Global Copy environment: for example, suspending and resuming Global Copy pairs, changing the mode from Global Copy to Metro Mirror, and changing paths.

Suspending and resuming Global Copy data transfer


The pausepprc command stops transferring data to the Global Copy target volume. After this command completes, the Global Copy pair becomes suspended. See Example 19-11.
Example 19-11 Suspend Global Copy data transfer
dscli> lspprc 1000-1001 1100-1101 Date/Time: November 3, 2005 12:11:17 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ================================================================================================== 1000:2000 Copy Pending Global Copy 10 unknown Disabled True 1001:2001 Copy Pending Global Copy 10 unknown Disabled True 1100:2100 Copy Pending Global Copy 11 unknown Disabled True 1101:2101 Copy Pending Global Copy 11 unknown Disabled True dscli> dscli> pausepprc -remotedev IBM.2107-75ABTV1 1000-1001:2000-2001 Date/Time: November 3, 2005 12:11:58 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00157I pausepprc: Remote Mirror and Copy volume pair 1000:2000 relationship successfully paused. CMUC00157I pausepprc: Remote Mirror and Copy volume pair 1001:2001 relationship successfully paused. dscli> dscli> lspprc 1000-1001 1100-1101 Date/Time: November 3, 2005 12:12:01 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ======================================================================================================= 1000:2000 Suspended Host Source Global Copy 10 unknown Disabled True 1001:2001 Suspended Host Source Global Copy 10 unknown Disabled True 1100:2100 Copy Pending Global Copy 11 unknown Disabled True 1101:2101 Copy Pending Global Copy 11 unknown Disabled True

Because the source DS8000 keeps a record of all changed data on the source volume, you can resume the Global Copy data transfer at a later time. The resumepprc command resumes a Global Copy relationship for a volume pair and restarts the data transfer. You must specify the copy mode, Metro Mirror or Global Copy, with the -type parameter; see Example 19-12.


Example 19-12 Resume Global Copy pairs dscli> lspprc 1000-1001 1100-1101
Date/Time: November 3, 2005 12:12:01 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ======================================================================================================= 1000:2000 Suspended Host Source Global Copy 10 unknown Disabled True 1001:2001 Suspended Host Source Global Copy 10 unknown Disabled True 1100:2100 Copy Pending Global Copy 11 unknown Disabled True 1101:2101 Copy Pending Global Copy 11 unknown Disabled True

dscli> dscli> resumepprc -remotedev IBM.2107-75ABTV1 -type gcp 1000-1001:2000-2001


Date/Time: November 3, 2005 12:16:35 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00158I resumepprc: Remote Mirror and Copy volume pair 1000:2000 relationship successfully resumed. This message is being returned before the copy completes. CMUC00158I resumepprc: Remote Mirror and Copy volume pair 1001:2001 relationship successfully resumed. This message is being returned before the copy completes.

dscli> dscli> lspprc 1000-1001 1100-1101


Date/Time: November 3, 2005 12:16:40 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ================================================================================================== 1000:2000 Copy Pending Global Copy 10 unknown Disabled True 1001:2001 Copy Pending Global Copy 10 unknown Disabled True 1100:2100 Copy Pending Global Copy 11 unknown Disabled True 1101:2101 Copy Pending Global Copy 11 unknown Disabled True

Changing copy mode from Global Copy to Metro Mirror


You can change the copy mode from Global Copy to Metro Mirror with the mkpprc command; see Example 19-13. This operation is called Go-to-sync. Depending on the amount of data to be sent to the target, it takes time until the pairs become full duplex.
Example 19-13 Change copy mode from Global Copy to Metro Mirror
dscli> lspprc 1000-1001 1100-1101 Date/Time: November 3, 2005 12:16:40 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ================================================================================================== 1000:2000 Copy Pending Global Copy 10 unknown Disabled True 1001:2001 Copy Pending Global Copy 10 unknown Disabled True 1100:2100 Copy Pending Global Copy 11 unknown Disabled True 1101:2101 Copy Pending Global Copy 11 unknown Disabled True dscli> dscli> mkpprc -remotedev IBM.2107-75ABTV1 -type mmir 1000-1001:2000-2001 1100-1101:2100-2101 Date/Time: November 3, 2005 12:30:10 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1000:2000 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1001:2001 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1100:2100 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1101:2101 successfully created. dscli> dscli> lspprc 1000-1001 1100-1101 Date/Time: November 3, 2005 12:30:14 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ================================================================================================== 1000:2000 Full Duplex Metro Mirror 10 unknown Disabled Invalid 1001:2001 Full Duplex Metro Mirror 10 unknown Disabled Invalid 1100:2100 Full Duplex Metro Mirror 11 unknown Disabled Invalid 1101:2101 Full Duplex Metro Mirror 11 unknown Disabled Invalid


You can use the -suspend parameter with the mkpprc -type mmir command. If you use this parameter, the state of the pairs becomes suspended when the data synchronization is completed; see Example 19-14. You can use this option for your off-site backup scenario with the Global Copy function.
Example 19-14 mkpprc -type mmir with -suspend
dscli> lspprc 1000-1001 1100-1101 Date/Time: November 3, 2005 12:35:33 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ================================================================================================== 1000:2000 Copy Pending Global Copy 10 unknown Disabled True 1001:2001 Copy Pending Global Copy 10 unknown Disabled True 1100:2100 Copy Pending Global Copy 11 unknown Disabled True 1101:2101 Copy Pending Global Copy 11 unknown Disabled True dscli> dscli> mkpprc -remotedev IBM.2107-75ABTV1 -type mmir -suspend 1000-1001:2000-2001 1100-1101:2100-2101 Date/Time: November 3, 2005 12:43:09 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1000:2000 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1001:2001 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1100:2100 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1101:2101 successfully created. dscli> dscli> lspprc 1000-1001 1100-1101 Date/Time: November 3, 2005 12:43:13 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ===================================================================================================== 1000:2000 Suspended Host Source Metro Mirror 10 unknown Disabled Invalid 1001:2001 Suspended Host Source Metro Mirror 10 unknown Disabled Invalid 1100:2100 Suspended Host Source Metro Mirror 11 unknown Disabled Invalid 1101:2101 Suspended Host Source Metro Mirror 11 unknown Disabled Invalid

You can add the -wait parameter to the mkpprc command. With the -wait parameter, the mkpprc -type mmir -suspend command does not return to the command prompt until the pairs complete data synchronization and reach the suspended state; see Example 19-15. Note: If you do not specify the -wait parameter with the mkpprc -type mmir -suspend command, the mkpprc command does not wait for the data synchronization. In that case, you must check the completion of the synchronization with the lspprc command.

Example 19-15 mkpprc -type mmir -suspend with -wait


dscli> lspprc 1000-1001 1100-1101 Date/Time: November 3, 2005 12:47:50 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ================================================================================================== 1000:2000 Copy Pending Global Copy 10 unknown Disabled True 1001:2001 Copy Pending Global Copy 10 unknown Disabled True 1100:2100 Copy Pending Global Copy 11 unknown Disabled True 1101:2101 Copy Pending Global Copy 11 unknown Disabled True dscli> dscli> mkpprc -remotedev IBM.2107-75ABTV1 -type mmir -suspend -wait 1000-1001:2000-2001 1100-1101:2100-2101 Date/Time: November 3, 2005 12:48:23 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1000:2000 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1001:2001 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1100:2100 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1101:2101 successfully created. 1/4 pair 1001:2001 state: Suspended


2/4 pair 1000:2000 state: Suspended 3/4 pair 1101:2101 state: Suspended 4/4 pair 1100:2100 state: Suspended dscli> dscli> lspprc 1000-1001 1100-1101 Date/Time: November 3, 2005 12:48:56 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ===================================================================================================== 1000:2000 Suspended Host Source Metro Mirror 10 unknown Disabled Invalid 1001:2001 Suspended Host Source Metro Mirror 10 unknown Disabled Invalid 1100:2100 Suspended Host Source Metro Mirror 11 unknown Disabled Invalid 1101:2101 Suspended Host Source Metro Mirror 11 unknown Disabled Invalid

Changing the copy mode from Metro Mirror to Global Copy


You cannot change the copy mode from Metro Mirror to Global Copy directly. Instead, you must use the pausepprc command to suspend the Metro Mirror pair first, and then resume the pair in Global Copy mode with the resumepprc -type gcp command; see Example 19-16.
Example 19-16 Change copy mode from Metro Mirror to Global Copy
dscli> lspprc 1000-1001 1100-1101 Date/Time: November 3, 2005 12:30:14 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ================================================================================================== 1000:2000 Full Duplex Metro Mirror 10 unknown Disabled Invalid 1001:2001 Full Duplex Metro Mirror 10 unknown Disabled Invalid 1100:2100 Full Duplex Metro Mirror 11 unknown Disabled Invalid 1101:2101 Full Duplex Metro Mirror 11 unknown Disabled Invalid dscli> dscli> mkpprc -remotedev IBM.2107-75ABTV1 -type gcp 1000-1001:2000-2001 1100-1101:2100-2101 Date/Time: November 3, 2005 12:33:55 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUN03053E mkpprc: 1000:2000: Copy Services operation failure: invalid transition CMUN03053E mkpprc: 1001:2001: Copy Services operation failure: invalid transition CMUN03053E mkpprc: 1100:2100: Copy Services operation failure: invalid transition CMUN03053E mkpprc: 1101:2101: Copy Services operation failure: invalid transition dscli> dscli> pausepprc -remotedev IBM.2107-75ABTV1 1000-1001:2000-2001 1100-1101:2100-2101 Date/Time: November 3, 2005 12:34:35 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00157I pausepprc: Remote Mirror and Copy volume pair 1000:2000 relationship successfully paused. CMUC00157I pausepprc: Remote Mirror and Copy volume pair 1001:2001 relationship successfully paused. CMUC00157I pausepprc: Remote Mirror and Copy volume pair 1100:2100 relationship successfully paused. CMUC00157I pausepprc: Remote Mirror and Copy volume pair 1101:2101 relationship successfully paused. dscli> dscli> lspprc 1000-1001 1100-1101 Date/Time: November 3, 2005 12:35:02 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ===================================================================================================== 1000:2000 Suspended Host Source Metro Mirror 10 unknown Disabled Invalid 1001:2001 Suspended Host Source Metro Mirror 10 unknown Disabled Invalid 1100:2100 Suspended Host Source Metro Mirror 11 unknown Disabled Invalid 1101:2101 Suspended Host Source Metro Mirror 11 unknown Disabled Invalid dscli> dscli> resumepprc -remotedev IBM.2107-75ABTV1 -type gcp 1000-1001:2000-2001 1100-1101:2100-2101 Date/Time: November 3, 2005 12:35:26 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00158I resumepprc: Remote Mirror and Copy volume pair 1000:2000 relationship successfully resumed. This message is being returned before the copy completes. CMUC00158I resumepprc: Remote Mirror and Copy volume pair 1001:2001 relationship successfully resumed. This message is being returned before the copy completes. CMUC00158I resumepprc: Remote Mirror and Copy volume pair 1100:2100 relationship successfully resumed. This message is being returned before the copy completes. CMUC00158I resumepprc: Remote Mirror and Copy volume pair 1101:2101 relationship successfully resumed. This message is being returned before the copy completes.


dscli> dscli> lspprc 1000-1001 1100-1101 Date/Time: November 3, 2005 12:35:33 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ================================================================================================== 1000:2000 Copy Pending Global Copy 10 unknown Disabled True 1001:2001 Copy Pending Global Copy 10 unknown Disabled True 1100:2100 Copy Pending Global Copy 11 unknown Disabled True 1101:2101 Copy Pending Global Copy 11 unknown Disabled True

Adding and removing paths


This step is similar to the Metro Mirror example. Refer to 16.7.2, Adding and removing paths on page 210.

19.3.4 Periodic off-site backup procedure


This section shows how to control the procedure discussed in 18.2, Creating a consistent point-in-time copy on page 259, to use Global Copy for periodical off-site backup. Figure 19-3 shows a diagram of the DS8000 Global Copy environment for this example. We use the Remote Incremental FlashCopy function for the FlashCopy at the recovery site (DS8000#2). We use 1000, 1001, 1100, and 1101 in DS8000#1 for Global Copy (GC) sources; 2000, 2001, 2100, and 2101 in DS8000#2 for Global Copy targets and FlashCopy (FC) sources; and 2002, 2003, 2102, and 2103 in the DS8000#2 for FlashCopy targets.

Figure 19-3 The DS8000 environment for Global Copy off-site backup (the figure shows the Global Copy source volumes 1000-1001 in LSS10 and 1100-1101 in LSS11 on DS8000#1, -dev IBM.2107-7520781, paired over FCP links with the Global Copy target and FlashCopy source volumes 2000-2001 in LSS20 and 2100-2101 in LSS21 on DS8000#2, -dev IBM.2107-75ABTV1, which in turn are FlashCopy sources for the FlashCopy target volumes 2002-2003 and 2102-2103)

Initial setup for this environment


In order to set up the Global Copy and FlashCopy environment, we follow these steps:
1. Create paths, create pairs, and wait for the completion of the Global Copy initial copy.
2. Create the Incremental FlashCopy relationship at the recovery site and wait for the completion of the FlashCopy background copy.


Step 1: Create paths and pairs


An example of this step is shown in Example 19-17.
Example 19-17 Step 1 of the initial setup
dscli> mkpprcpath -remotedev IBM.2107-75ABTV1 -remotewwnn 5005076303FFC663 -srclss 10 -tgtlss 20 i0143:i0010 i0213:i0140 Date/Time: November 3, 2005 1:32:08 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00149I mkpprcpath: Remote Mirror and Copy path 10:20 successfully established. dscli> mkpprcpath -remotedev IBM.2107-75ABTV1 -remotewwnn 5005076303FFC663 -srclss 11 -tgtlss 21 i0143:i0010 i0213:i0140 Date/Time: November 3, 2005 1:32:12 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00149I mkpprcpath: Remote Mirror and Copy path 11:21 successfully established. dscli> dscli> mkpprc -remotedev IBM.2107-75ABTV1 -type gcp 1000-1001:2000-2001 1100-1101:2100-2101 Date/Time: November 3, 2005 1:32:19 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1000:2000 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1001:2001 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1100:2100 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1101:2101 successfully created. dscli> dscli> lspprc -l 1000-1001 1100-1101 Date/Time: November 3, 2005 2:34:36 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
ID State Reason Type Out Of Sync Tracks Tgt Read Src Cascade Tgt Cascade Date Suspended SourceLSS Timeout (secs) Critical Mode First Pass Status
==========================================================================================================================================
1000:2000 Copy Pending Global Copy 0 Disabled Disabled invalid 10 unknown Disabled True
1001:2001 Copy Pending Global Copy 0 Disabled Disabled invalid 10 unknown Disabled True
1100:2100 Copy Pending Global Copy 0 Disabled Disabled invalid 11 unknown Disabled True
1101:2101 Copy Pending Global Copy 0 Disabled Disabled invalid 11 unknown Disabled True

Step 2: Create FlashCopy relationship


An example of this step is shown in Example 19-18.
Example 19-18 Step 2 of the initial setup
dscli> mkremoteflash -record -persist -conduit IBM.2107-7520781/10 -dev IBM.2107-75ABTV1 2000-2001:2002-2003 Date/Time: November 3, 2005 2:40:01 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 CMUC00173I mkremoteflash: Remote FlashCopy volume pair 2000:2002 successfully created. Use the lsremoteflash command to determine copy completion. CMUC00173I mkremoteflash: Remote FlashCopy volume pair 2001:2003 successfully created. Use the lsremoteflash command to determine copy completion. dscli> dscli> mkremoteflash -record -persist -conduit IBM.2107-7520781/11 -dev IBM.2107-75ABTV1 2100-2101:2102-2103 Date/Time: November 3, 2005 2:40:22 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 CMUC00173I mkremoteflash: Remote FlashCopy volume pair 2100:2102 successfully created. Use the lsremoteflash command to determine copy completion. CMUC00173I mkremoteflash: Remote FlashCopy volume pair 2101:2103 successfully created. Use the lsremoteflash command to determine copy completion.

We then verify the FlashCopy background copy completion; see Example 19-19.


Example 19-19 lsremoteflash to check the FlashCopy background copy completion


dscli> lsremoteflash -l -conduit IBM.2107-7520781/10 -dev IBM.2107-75ABTV1 2000-2001 Date/Time: November 3, 2005 2:42:59 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
ID SrcLSS SequenceNum ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy OutOfSyncTracks ========================================================================================================================================== 2000:2002 20 0 Enabled Enabled Enabled Disabled Enabled Enabled Enabled 44626 2001:2003 20 0 Enabled Enabled Enabled Disabled Enabled Enabled Enabled 11153 dscli>

dscli> lsremoteflash -l -conduit IBM.2107-7520781/11 -dev IBM.2107-75ABTV1 2100-2101 Date/Time: November 3, 2005 2:43:04 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
ID SrcLSS SequenceNum ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy OutOfSyncTracks ========================================================================================================================================== 2100:2102 21 0 Enabled Enabled Enabled Disabled Enabled Enabled Enabled 46289 2101:2103 21 0 Enabled Enabled Enabled Disabled Enabled Enabled Enabled 14695

dscli> dscli> lsremoteflash -l -conduit IBM.2107-7520781/10 -dev IBM.2107-75ABTV1 2000-2001 Date/Time: November 3, 2005 2:57:50 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
ID SrcLSS SequenceNum ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy OutOfSyncTracks ========================================================================================================================================= 2000:2002 20 0 Disabled Enabled Enabled Disabled Enabled Enabled Enabled 0 2001:2003 20 0 Disabled Enabled Enabled Disabled Enabled Enabled Enabled 0

dscli> dscli> lsremoteflash -l -conduit IBM.2107-7520781/11 -dev IBM.2107-75ABTV1 2100-2101 Date/Time: November 3, 2005 2:57:55 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
ID SrcLSS SequenceNum ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy OutOfSyncTracks ========================================================================================================================================== 2100:2102 21 0 Disabled Enabled Enabled Disabled Enabled Enabled Enabled 0 2101:2103 21 0 Disabled Enabled Enabled Disabled Enabled Enabled Enabled 0

Periodic backup operation


Depending on your backup requirements and on what your applications can accept, you schedule the following scenario periodically, for example daily. We show examples of the DS CLI commands to control this procedure, based on the scenario illustrated in Figure 19-4.

Figure 19-4 Global Copy offsite backup scenario (the figure shows the production site A volumes mirrored by Global Copy to the B volumes at the recovery site, with C volumes as FlashCopy targets of B; the steps shown are: 1. updated data is sent in normal Global Copy mode, 2. quiesce the application, 3. go-to-sync and suspend the volume pairs, 4. restart the application, 5. take a FlashCopy from B to C, which provides a consistent copy, and return to step 1)


Here is a more detailed description of the steps in the procedure shown in Figure 19-4 on page 282:
1. Normal Global Copy mode operation.
2. Quiesce the application at the production site.
3. Go-to-sync and suspend the Global Copy pairs.
4. Restart the application at the production site.
5. Take a FlashCopy B to C.

For a detailed description of the preceding steps refer to 18.2.1, Procedure to take a consistent point-in-time copy on page 260. Next we see how you can execute the procedure.

Step 1
In this step we operate in normal Global Copy mode. The lspprc and lsremoteflash commands in Example 19-20 show the status.
Example 19-20 Step 1 of the periodical operation dscli> lspprc 1000-1001 1100-1101 Date/Time: November 3, 2005 5:25:38 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ================================================================================================== 1000:2000 Copy Pending Global Copy 10 unknown Disabled True 1001:2001 Copy Pending Global Copy 10 unknown Disabled True 1100:2100 Copy Pending Global Copy 11 unknown Disabled True 1101:2101 Copy Pending Global Copy 11 unknown Disabled True dscli> lsremoteflash -conduit IBM.2107-7520781/10 -dev IBM.2107-75ABTV1 2000-2001 Date/Time: November 3, 2005 5:26:44 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
ID
SrcLSS SequenceNum ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy ============================================================================================================================ 2000:2002 20 0 Disabled Enabled Enabled Disabled Enabled Enabled Enabled 2001:2003 20 0 Disabled Enabled Enabled Disabled Enabled Enabled Enabled

dscli> lsremoteflash -conduit IBM.2107-7520781/11 -dev IBM.2107-75ABTV1 2100-2101 Date/Time: November 3, 2005 5:26:53 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
ID SrcLSS SequenceNum ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy ============================================================================================================================ 2100:2102 21 0 Disabled Enabled Enabled Disabled Enabled Enabled Enabled 2101:2103 21 0 Disabled Enabled Enabled Disabled Enabled Enabled Enabled

Step 2
Depending on your application and platform, you should take the necessary actions to briefly stop updates to the source volumes (quiesce the application).

Step 3
The DS CLI command examples for this step are shown in Example 19-21.


Example 19-21 The mkpprc -type mmir -wait -suspend
dscli> lspprc 1000-1001 1100-1101
Date/Time: November 3, 2005 6:17:04 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status
==================================================================================================
1000:2000 Copy Pending Global Copy 10 unknown Disabled True
1001:2001 Copy Pending Global Copy 10 unknown Disabled True
1100:2100 Copy Pending Global Copy 11 unknown Disabled True
1101:2101 Copy Pending Global Copy 11 unknown Disabled True
dscli>
dscli> mkpprc -remotedev IBM.2107-75ABTV1 -type mmir -suspend -wait 1000-1001:2000-2001 1100-1101:2100-2101
Date/Time: November 3, 2005 6:18:42 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1000:2000 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1001:2001 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1100:2100 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1101:2101 successfully created.
<< It can take some time for the messages below to appear, depending on the number of Out Of Sync Tracks remaining. >>
1/4 pair 1000:2000 state: Suspended
2/4 pair 1001:2001 state: Suspended
3/4 pair 1100:2100 state: Suspended
4/4 pair 1101:2101 state: Suspended
dscli>
dscli> lspprc 1000-1001 1100-1101
Date/Time: November 3, 2005 6:19:13 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status
=====================================================================================================
1000:2000 Suspended Host Source Metro Mirror 10 unknown Disabled Invalid
1001:2001 Suspended Host Source Metro Mirror 10 unknown Disabled Invalid
1100:2100 Suspended Host Source Metro Mirror 11 unknown Disabled Invalid
1101:2101 Suspended Host Source Metro Mirror 11 unknown Disabled Invalid
dscli> lspprc -l 1000-1001 1100-1101 Date/Time: November 3, 2005 6:19:29 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type Out Of Sync Tracks Tgt Read Src Cascade Tgt Cascade Date Suspended SourceLSS ======================================================================================================================== 1000:2000 Suspended Host Source Metro Mirror 0 Disabled Disabled invalid 10 1001:2001 Suspended Host Source Metro Mirror 0 Disabled Disabled invalid 10 1100:2100 Suspended Host Source Metro Mirror 0 Disabled Disabled invalid 11 1101:2101 Suspended Host Source Metro Mirror 0 Disabled Disabled invalid 11

dscli>

Step 4
Depending on your application and platform, take the necessary actions to restart your application (resume I/O activity to the source volumes).

Step 5
The DS CLI command examples for this step are shown in Example 19-22 and Example 19-23. We use the resyncremoteflash command to perform the Incremental FlashCopy from the production site. We add the -seqnum parameter to assign a FlashCopy sequence number that identifies this FlashCopy set. We specify the -type gcp parameter with the resumepprc command to return to normal Global Copy mode.


Example 19-22 Take FlashCopy B to C
dscli> resyncremoteflash -record -persist -seqnum 20051103 -conduit IBM.2107-7520781/10 -dev IBM.2107-75ABTV1 2000-2001:2002-2003
Date/Time: November 3, 2005 6:41:49 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
CMUC00175I resyncremoteflash: Remote FlashCopy volume pair 2000:2002 successfully resynchronized. Use the lsremoteflash command to determine copy completion.
CMUC00175I resyncremoteflash: Remote FlashCopy volume pair 2001:2003 successfully resynchronized. Use the lsremoteflash command to determine copy completion.
dscli> resyncremoteflash -record -persist -seqnum 20051103 -conduit IBM.2107-7520781/11 -dev IBM.2107-75ABTV1 2100-2101:2102-2103
Date/Time: November 3, 2005 6:42:08 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
CMUC00175I resyncremoteflash: Remote FlashCopy volume pair 2100:2102 successfully resynchronized. Use the lsremoteflash command to determine copy completion.
CMUC00175I resyncremoteflash: Remote FlashCopy volume pair 2101:2103 successfully resynchronized. Use the lsremoteflash command to determine copy completion.
dscli>
dscli> lsremoteflash -conduit IBM.2107-7520781/10 -dev IBM.2107-75ABTV1 2000-2001
Date/Time: November 3, 2005 6:42:19 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
ID SrcLSS SequenceNum ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy
========================================================================================================================
2000:2002 20 20051103 Disabled Enabled Enabled Disabled Enabled Enabled Enabled
2001:2003 20 20051103 Disabled Enabled Enabled Disabled Enabled Enabled Enabled
dscli>
dscli> lsremoteflash -conduit IBM.2107-7520781/11 -dev IBM.2107-75ABTV1 2100-2101
Date/Time: November 3, 2005 6:42:25 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
ID SrcLSS SequenceNum ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy
========================================================================================================================
2100:2102 21 20051103 Disabled Enabled Enabled Disabled Enabled Enabled Enabled
2101:2103 21 20051103 Disabled Enabled Enabled Disabled Enabled Enabled Enabled

Example 19-23 Resume normal Global Copy mode
dscli> lspprc 1000-1001 1100-1101
Date/Time: November 3, 2005 7:24:06 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status
=====================================================================================================
1000:2000 Suspended Host Source Metro Mirror 10 unknown Disabled Invalid
1001:2001 Suspended Host Source Metro Mirror 10 unknown Disabled Invalid
1100:2100 Suspended Host Source Metro Mirror 11 unknown Disabled Invalid
1101:2101 Suspended Host Source Metro Mirror 11 unknown Disabled Invalid
dscli>
dscli> resumepprc -remotedev IBM.2107-75ABTV1 -type gcp 1000-1001:2000-2001 1100-1101:2100-2101
Date/Time: November 3, 2005 7:26:59 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
CMUC00158I resumepprc: Remote Mirror and Copy volume pair 1000:2000 relationship successfully resumed. This message is being returned before the copy completes.
CMUC00158I resumepprc: Remote Mirror and Copy volume pair 1001:2001 relationship successfully resumed. This message is being returned before the copy completes.
CMUC00158I resumepprc: Remote Mirror and Copy volume pair 1100:2100 relationship successfully resumed. This message is being returned before the copy completes.
CMUC00158I resumepprc: Remote Mirror and Copy volume pair 1101:2101 relationship successfully resumed. This message is being returned before the copy completes.
dscli>
dscli> lspprc 1000-1001 1100-1101
Date/Time: November 3, 2005 7:27:06 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status
==================================================================================================
1000:2000 Copy Pending Global Copy 10 unknown Disabled True
1001:2001 Copy Pending Global Copy 10 unknown Disabled True
1100:2100 Copy Pending Global Copy 11 unknown Disabled True
1101:2101 Copy Pending Global Copy 11 unknown Disabled True


19.4 DS Storage Manager GUI examples


In these examples we use the DS Storage Manager Graphical User Interface to establish paths and then establish and manage the Global Copy pairs. Note: To create and monitor Global Copy pairs, use the Metro Mirror panels.

19.4.1 Establish paths with the DS Storage Manager GUI


To establish a new path from one LSS to another LSS, follow this procedure:
1. Select Real-time manager.
2. Select Copy services.
3. Click Paths.
4. Select your source Storage Complex, Storage Unit, Storage Image, and LSS.

The Paths panel is displayed; see Figure 19-5. Because there are no paths listed to the DS8000 serial 7520781, we have to configure them. In our example we want to configure two paths from the DS8000 serial 7503461 LSS 47 to the DS8000 serial 7520781 LSS 47. From the Select Actions pull-down menu, select Create.

Figure 19-5 Paths panel

We can now select the source LSS from the production DS8000; see Figure 19-6.


Figure 19-6 Select source LSS

When finished, click Next. The Select target LSS panel is displayed; see Figure 19-7. Here we select the target Storage Complex, Storage Unit, Storage Image, and target LSS.

Figure 19-7 Select target LSS

If your intended storage unit is not presented as an alternative, then you must add the storage complex that contains that storage unit. See Chapter 4, DS Storage Manager on page 27. First add the storage complex and then restart the process to create paths. Click Next when finished. The Select source I/O ports panel is displayed. Here you can select the source I/O ports; see Figure 19-8. Because we want to establish only two paths, we select the I/O ports on the bottom.


Figure 19-8 Selecting source ports

We then click Next. The Select target I/O ports panel is displayed. Here you select the target ports. Because we want to have only a one-to-one I/O port relationship, we select one target I/O port for the first source I/O port and the other target I/O port for the second source I/O port, as shown in Figure 19-9.

Figure 19-9 Select target I/O ports

Click Next when finished. The Select path options panel is displayed; see Figure 19-10. Here you can decide if you want to create a Consistency Group. Because this is the setup of a Global Copy environment, we do not need a Consistency Group; therefore, we do not select this option.


Figure 19-10 Select path options

Click Next when finished. The Verification panel is displayed; see Figure 19-11. Here we can verify the information we entered and, if necessary, click Back to make corrections or click Finish to create the logical paths.

Figure 19-11 Verification panel

After you finish, the Paths panel is displayed again, and it now shows the paths to DS8000 serial 7520781 that you have just created; see Figure 19-12.


Figure 19-12 Paths defined

19.4.2 Establishing Global Copy pairs


In this section we establish the Global Copy pairs. Once the volumes are in the copy pending state, we synchronize the volumes to change the status to full duplex. We follow these steps:
1. Select Real-time manager.
2. Select Copy services.
3. Select Metro Mirror / Global Copy.
The Metro Mirror / Global Copy panel is displayed; see Figure 19-13. Here you select the source Storage Complex, Storage Unit, Storage Image, and LSS. Then from the Select Action pull-down menu, select Create.

Figure 19-13 Metro Mirror / Global Copy panel


The Volume Pairing Method panel is displayed; see Figure 19-14. Here we must select either Automated volume pair assignment or Manual volume pair assignment. In our example we select Automated volume pair assignment. In addition, you can specify whether to allow Space Efficient volumes as source or target volumes. Space Efficient volumes are optimized for use cases where less than 20% of the virtual capacity is updated during their lifetime, so Space Efficient volumes are not very useful as a Global Copy source or target. Note: In the current implementation, Space Efficient volumes are only supported as FlashCopy target volumes.

Figure 19-14 Volume Pairing Method

Click Next when finished.


The Select source volumes panel is displayed; see Figure 19-15. Here we select the source volumes 4705 and 4706.

Figure 19-15 Select source volumes

Click Next when finished. The Select target volumes (Auto pairing) panel is displayed; see Figure 19-16. Here we select the target Storage Complex, Storage Unit, Storage Image, and Resource Type. In our example we chose the resource type LSS, so now we have to select the LSS number. After this, we select the target volumes, which, because we chose automated pairing in the previous step, will be automatically paired to source volumes.


Figure 19-16 Select target volumes

Click Next when finished. The Select copy options panel is displayed; see Figure 19-17. Here we select Global Copy and we also select Perform initial copy.

Figure 19-17 Copy options

Click Next when finished.


The Verification panel is displayed; see Figure 19-18. Here we can verify the volume pairs we configured and, if necessary, click Back to make corrections or click Finish to create the Global Copy pairs.

Figure 19-18 Verification

After you finish, the Metro Mirror panel is displayed again and shows the Global Copy pairs that were just created. The State column for these volumes shows copy pending; see Figure 19-19.

Figure 19-19 Global Copy volumes in copy pending status

19.4.3 Monitoring the copy status


You might want to see how many tracks of data must be copied from the source to the target volume. To do this you must select the volume pair in the Metro Mirror panel, and then from the Select Actions pull-down menu choose Properties. A Properties panel is displayed; see Figure 19-20. Here if you select the Out-of-sync tracks tab, you will see the number of tracks that must still be copied. In our example you can see that there are zero tracks to copy.


Figure 19-20 Out of sync tracks

19.4.4 Converting to Metro Mirror (synchronous)


To have the volume pairs in a full duplex state, we need to synchronize them. In the Metro Mirror panel we select the volumes we want to synchronize and then from the Select Actions pull-down menu we choose Convert to synchronous. See Figure 19-21.

Figure 19-21 Selecting the action to convert to synchronous

The Convert to synchronous confirmation panel is displayed; see Figure 19-22. This panel shows all the selected volumes. Note that the Type column shows Metro Mirror. This is because we are converting from non-synchronous Global Copy to synchronous Metro Mirror.


Figure 19-22 Convert to synchronous confirmation panel

If everything is correct and you want to proceed with the synchronization, click OK to continue. Depending on the amount of write activity on the source volumes and the available Fibre Channel link bandwidth, the time for the volumes to reach the full duplex state will vary. In the Metro Mirror panel, keep clicking Refresh until the State column changes to full duplex; see Figure 19-23.

Figure 19-23 Metro Mirror panel

19.4.5 Suspending a pair


After the volumes are in full duplex state, we can stop the I/O for a short time on the production site to ensure that we have data consistency at the remote site. After the I/O has stopped, we can suspend the volumes. In the Metro Mirror panel, we select the volumes we want to suspend and then choose Suspend from the Select Action pull-down menu. We then get the choice to suspend the volumes at the source or at the target. Normally we suspend at the source. We then click OK and the volumes are suspended. Figure 19-24 shows the volumes already in suspended state.


Figure 19-24 Metro Mirror panel

When the volumes are suspended, we can restart the I/O at the application site. We can also make a FlashCopy of the target volumes at the recovery site and, when finished, resume the pairs to re-establish Global Copy replication.


Chapter 20. Global Copy performance and scalability


In this chapter we discuss performance and scalability considerations when using Global Copy with the DS8000.


20.1 Performance
As the distance between DS8000s increases, Metro Mirror response time is proportionally affected, which negatively impacts application performance. When implementations over extended distances are required, Global Copy becomes an excellent trade-off solution.

You can estimate the application impact of Global Copy as roughly that of an application working with suspended Metro Mirror volumes. The DS8000 has somewhat more work to do for Global Copy volumes than for suspended volumes, because with Global Copy the changes have to be sent to the remote DS8000, but this overhead is negligible for the application compared with the typical synchronous overhead. No host processor resources (CPU and memory) are consumed by the Global Copy volume pairs, because the replication is managed entirely by the DS8000 subsystem.

If you take a FlashCopy at the recovery site in your Global Copy implementation, you should take into account the interaction between Global Copy and the FlashCopy background copy; refer to 18.5, Bandwidth on page 263.

20.2 Scalability
The DS8000 Global Copy environment can be scaled up or down as required. If new volumes are added to the DS8000 that require mirroring, they can be dynamically added. If additional Global Copy paths are required, they also can be dynamically added.

20.2.1 Adding capacity


As we have previously mentioned, the logical nature of the LSS has made a Global Copy implementation on the DS8000 easier to plan, implement, and manage. However, if you need to add more LSSs to your Global Copy environment, your management and automation solutions should be set up to handle this.

20.2.2 Capacity for existing versus new systems


Next we consider some requirements for adding capacity to the same or new DS8000s.

Adding capacity to the same DS8000


If you are adding capacity to an existing DS8000, provided that your Global Copy link bandwidth is not close to or over its limit, you might only need to add volume pairs into your configuration. If you are adding more LSSs, then you will need to define Global Copy paths before adding volume pairs. Keep in mind that when you add capacity for Global Copy use, you might have to acquire a new license feature code for Global Copy that corresponds to the new capacity.
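As an illustration only, the DS CLI sequence for adding a new LSS pair to an existing Global Copy configuration could look like the following minimal sketch. The storage image IDs, WWNN, LSS numbers, I/O port IDs, and volume ranges shown here are placeholders for your own environment; verify the exact syntax in the IBM System Storage DS8000 Command-Line Interface User's Guide, SC26-7916.

dscli> mkpprcpath -dev IBM.2107-7503461 -remotedev IBM.2107-7520781 -remotewwnn 5005076303FFC663 -srclss 48 -tgtlss 48 I0010:I0110 I0011:I0111
dscli> mkpprc -dev IBM.2107-7503461 -remotedev IBM.2107-7520781 -type gcp 4800-480F:4800-480F

The first command defines the logical Global Copy paths for the new LSS pair over two existing physical links, and the second creates the Global Copy (-type gcp) volume pairs for the added capacity.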

Adding capacity in new DS8000s


If you are adding new DS8000s into your configuration, you will need to add physical links prior to defining your Global Copy paths and volume pairs. We recommend a minimum of two paths per DS8000 pair for redundancy reasons. Your bandwidth analysis will indicate if you require more than two paths.


Part 6. Global Mirror

In this part of the book we describe the IBM System Storage Global Mirror when used in open systems environments with the DS8000. We discuss the characteristics of Global Mirror and describe the options for its setup. We also show which management interfaces can be used, as well as the important aspects to be considered when establishing a Global Mirror environment. We conclude with examples of Global Mirror management and setup.

We cover the following topics:
- Global Mirror overview
- Global Mirror options and configuration
- Global Mirror interfaces
- Performance and scalability
- Examples


Chapter 21. Global Mirror overview


In this chapter we provide an overview of what Global Mirror is. We also discuss the necessity for data consistency at a distant site when synchronous data replication such as Metro Mirror is not possible. We then explain how Global Mirror works in a similar manner to a distributed application, in a server and client relationship. Finally, we give you a step-by-step process to establish a Global Mirror environment.

The information discussed in this chapter is complemented by the following IBM publications:
- The IBM TotalStorage DS8000 Series: Implementation, SG24-6786
- IBM System Storage DS8000 Command-Line Interface User's Guide, SC26-7916


21.1 Synchronous and non-synchronous data replication


When replicating data over long distances, beyond 300 km, asynchronous data replication is the appropriate approach. This is because with asynchronous techniques, the application I/O processing at the local storage disk subsystem remains independent of the process of transmitting the data to the remote storage disk subsystem.

Even so, with asynchronous data replication techniques we must provide additional means to ensure data consistency at the remote location. In fact, a solution is required that guarantees data consistency not only within a single local-remote pair of storage disk subsystems but also across multiple local and remote storage disk subsystems. For a given pair of local and remote storage disk subsystems, a time stamp approach leads to consistent data at the remote storage disk subsystem: by sorting the I/Os by their time stamps, the write I/Os can be applied at the remote disk subsystem in the same sequence as they arrived at the local disk subsystem. But when the application volumes are spread across multiple storage disk subsystems, this time stamp concept alone is not sufficient to replicate data and provide data consistency at the remote site. This additionally requires a Consistency Group concept.

In the rest of this section we discuss how consistent data, that is, dependent writes, is managed with a synchronous technique such as Metro Mirror compared with asynchronous techniques such as Global Copy or Global Mirror.

21.1.1 Synchronous data replication and dependent writes


In normal operations, the nature of synchronous data replication (Figure 21-1) preserves data consistency for dependent writes. Dependent writes and data consistency are explained in detail in 14.4, Consistency Group function on page 180.

Figure 21-1 Synchronous data replication

In synchronous data replication methods such as Metro Mirror, an application write always goes through the following four steps; refer to Figure 21-1:
1. Write the data to the source storage disk subsystem cache. Note that this does not end the I/O and does not present an I/O complete to the application.


2. Replicate the data from the source storage disk subsystem cache to the target storage disk subsystem cache.
3. Acknowledge to the source storage disk subsystem that data successfully arrived at the target storage disk subsystem.
4. Present the successful I/O completion to the server, which is presented to the application and concludes this I/O. Now, the next I/O that depends on the successful completion of the previous I/O can be issued.

When you have dependent writes across multiple storage disk subsystems, synchronous data replication alone does not guarantee that you can restart applications at the remote site without doing some previous data recovery. Consider a database environment that spreads across multiple storage disk subsystems at the local or remote site; see Figure 21-2.

Figure 21-2 Synchronous data replication and unnecessary recovery after local site failure

A database update usually involves three dependent write I/Os to ensure data consistency, even in the event of a failure during this process; see Figure 21-2:
1. Update intent to the logging files. Logging may happen to two logging files.
2. Update the data.
3. Indicate update complete to the logging files.
This sequence of I/Os is also called a two-phase commit process.


When you are in a remote copy environment, in an outage situation, the following sequence may occur; see Figure 21-2:
1. Write update intent to logging volume A1.
2. The update to the database on volume A2 eventually fails due to a replication problem from the local site to the remote site. This may also be the beginning of a rolling disaster.
3. The database subsystem recognizes the failed write I/O and indicates, in a configuration file on volume A3, that one or more databases on A2 have to be recovered due to an I/O error. This gets replicated to the remote site because the paths for this storage disk subsystem pair are still working and this particular source storage disk subsystem has not failed yet.
4. The failure eventually progresses and fails the local site completely. The database subsystem is restarted at the remote site after switching over to the remote site.
5. At startup, the database subsystem discovers in its configuration file that the database on A2-B2 has to be recovered, as indicated before. This is actually not necessary because the data was synchronously replicated and both related volumes, A2 and B2, are identical and at the very same level of data currency as at the moment before the error. Nevertheless, the database subsystem will still recover all databases that are marked for recovery, as per the information found in the configuration files on A3-B3.
6. This recovery is actually not necessary because the data in A2-B2 is perfectly in sync due to the synchronous replication approach. Nonetheless, the recovery takes place because there was no automation in place to freeze the configuration, which would have removed the replication paths between the local and the remote sites, thus ensuring that no further I/Os continue to be replicated.

This does not happen with automated solutions such as GDPS (System z environments) and TotalStorage Productivity Center for Replication (open systems environments) that make use of the freeze capabilities of the IBM System Storage DS8000. The freeze function is used to remove all involved replication paths even if only a subset of these paths failed. Had one of those automation software solutions been in place, after failing over to the remote site the database subsystem would have just restarted, without the necessity for a lengthy database recovery.


Figure 21-3 illustrates a rolling disaster situation where automation software makes use of the freeze capability of the DS8000.

Figure 21-3 Synchronous data replication, freeze, and restart without recovery required

The sequence of events in Figure 21-3 proceeds as follows:
1. Update intent is written to the logging file.
2. The link between A2 and B2 becomes unavailable, or the storage disk subsystem with A2 fails, and a rolling disaster develops.
3. The first attempt to write to the database volume A2 triggers a queue full condition. This is also called an automation window.
4. Within this automation window, a freeze operation removes all related paths between both sites for the selected LSSs. From now on, no data is replicated any longer between the selected LSSs.
5. After the queue full condition expires, or an un-freeze operation is issued to end the freeze period, the I/O is re-driven. It may successfully complete when the pair is suspended, or it may fail when storage disk subsystem 3 is no longer available.
6. When the I/O fails, a recovery required indication is written to the database configuration file on A3. Note that volume A3 is no longer replicated due to the previous freeze. Note also that volumes A2 and B2 are still identical and at the very same level of data currency as at the moment before the error.
7. The local site eventually becomes unavailable and a switchover to the recovery site is carried out.
8. At restart, the database subsystem finds a no recovery required indication, because all related data is consistent; the recovery required indication written in step 6 was not transmitted to B3. The database subsystem therefore continues immediately after the restart phase is finished.


Summary
In summary, the following characteristics are typical of a synchronous data replication technique:
- Application write I/O response time is affected. This can be modeled and predicted.
- Local and remote copies of data are committed to both storage disk subsystems before the host write I/O is complete.
- Data consistency is always maintained at the remote site as long as no failures occur. If a rolling disaster occurs, freeze/run is needed to maintain consistency.
- Bandwidth between both sites has to scale with the peak write I/O rate.
- Data at the remote site is always current. No extra means, such as additional journal volumes or a tertiary copy, are required.
- A tier 7 solution is achieved with automation software.

This is different with an asynchronous data replication approach. We still require data consistency at a distant site in order to restart at the distant site when the local site becomes unavailable.

21.1.2 Asynchronous data replication and dependent writes


In normal operations, for asynchronous data replication (Figure 21-4), data consistency for dependent writes will be preserved depending on the technique used to replicate the data. Dependent writes and data consistency are explained in detail in 14.4, Consistency Group function on page 180. For example, Global Copy, which is explained in Part 5, Global Copy on page 251, and Global Mirror, that we are explaining now, use different techniques. An asynchronous remote copy approach is usually required when the distance between the local site and the remote site is beyond an efficient distance for a synchronous remote copy solution. Metro Mirror provides an efficient synchronous approach for up to 300 km, when utilizing Fibre Channel links.

Figure 21-4 Asynchronous data replication

In an asynchronous data replication environment, an application write I/O goes through the following steps; see Figure 21-4:
1. Write application data to the source storage disk subsystem cache.


2. Present successful I/O completion to the host server. The application can then immediately schedule the next I/O.
3. Replicate the data from the source storage disk subsystem cache to the target storage disk subsystem cache.
4. Acknowledge to the source storage disk subsystem that data successfully arrived at the target storage disk subsystem.

From this procedure, note how in an asynchronous technique the data transmission and the I/O completion acknowledgement are independent processes. This results in virtually no application I/O impact, or at most a minimal one, which also makes this approach convenient when replicating over long distances.

Global Copy non-synchronous technique


On its own, a non-synchronous technique like that of Global Copy provides no guarantee that the sequence of arrival of application write I/Os to the source volumes is preserved at the remote site. In other words, the order of dependent writes is not preserved at the remote site. This is illustrated in Figure 21-5.

Figure 21-5 Global Copy sequence of data arrival not preserved at remote site

Global Copy by itself, as a non-synchronous data replication method, does not provide data consistency at the remote site. In Figure 21-5 the sequence of data arrival at the local site is record b after record a. Due to the way Global Copy cycles through its out-of-sync bitmap, it may replicate record a to the remote site after record b. This assumes that, for example, record a is written to a source disk location that the Global Copy replication cycle has just passed. Global Copy may then reach another disk location that was just updated with record b. Record b then gets replicated to the remote site before the replication cycle starts again from the beginning and finds record a.

Chapter 21. Global Mirror overview

309

This situation may get even worse when multiple storage disk subsystems are involved at the local site. As Figure 21-6 shows, when using Global Copy, dependent writes are not preserved at the recovery site when a failure occurs. This applies both to a particular storage disk subsystem pair and across multiple storage disk subsystems.

Figure 21-6 Global Copy non-synchronous data replication involving multiple disk subsystems

Notice that the numbers in Figure 21-6 indicate the transfer of data as well as the sequence of the corresponding write I/Os. Record 3 may be replicated to storage disk subsystem 2 before record 1 arrives at storage disk subsystem 2. Record 2 may never make it to storage disk subsystem 4 due to the connectivity problem between storage disk subsystem 3 and storage disk subsystem 4, which in turn may be the beginning of a rolling disaster.

So, when using a non-synchronous technique like Global Copy, consistent data cannot be guaranteed at the remote site without additional measures, functions, and procedures. The challenge is to provide data consistency at the distant site at any time and independent of the combination of storage disk subsystems involved. One solution is to combine Global Copy with FlashCopy and create a three-copy solution. This requires a series of additional steps that have to be organized and carried out:
1. At the local site, temporarily pause the application write I/Os on the source A volumes.
2. Wait for and make sure that the source A volumes and the target B volumes become synchronized. Note the challenge to manage all volumes.
3. Using FlashCopy, create a point-in-time (PiT) copy of the B volumes at the remote site.
4. The FlashCopy targets, that is, the C volumes, will then hold a copy of the A volumes at the time the application was paused or stopped. In this manner data consistency is kept at the remote site.
5. Restart or resume the application write I/O activity to the A volumes.

Global Mirror asynchronous technique


If we could incorporate the previous five basic steps into the storage disk subsystem internal microcode, and we had the corresponding management interface, this would basically be an efficient asynchronous mirroring technique: it would allow the replication of data over long distances without impacting the application I/O response time, its operation would be transparent and autonomic from the user's point of view, and, most importantly, it would provide a consistent copy of the data at the remote site at all times. All this is what Global Mirror is about.

To accomplish the necessary activities with minimum impact on the application write I/O, Global Mirror introduces a smart bitmap approach in the source storage disk subsystem. With this, Global Mirror can resume the application I/O processing immediately after a very brief serialization period for all involved source storage disk subsystems. This brief serialization periodically occurs at the very beginning of a sequence of events that resembles the one outlined above. In the following chapters we explain in detail the characteristics of Global Mirror, how it operates, and how you can manage it.

Summary
In summary, an asynchronous data replication technique should provide the following characteristics:
- Data replication to the remote site is independent from application write I/O processing at the local site. This results in no impact, or at most minimal impact, to the local application write I/O response time.
- Data consistency and dependent writes are always maintained at the remote site.
- Data currency at the remote site lags a little behind the local site. The remote site is always less current than the local site, and in peak write workloads this difference increases. Global Mirror does not throttle host write I/Os to manage this discrepancy.
- The bandwidth between the local and remote sites does not have to be configured for the peak write workload. Link bandwidth utilization is improved over synchronous solutions.
- Tertiary copies are required at the remote site to preserve data consistency.
- Data loss in disaster recovery situations is limited to the data in transit plus the data that may still be queued at the local site waiting to be replicated to the recovery site.

All of the previous attributes are characteristics of Global Mirror.


21.2 Basic concepts of Global Mirror


It is important to understand that Global Mirror works like a distributed application. A distributed application is usually built on a server to client relationship. The server functions as a supervisor and instructs the client. The client is able to do some work in an autonomic fashion but relies on the coordination efforts from the server; see Figure 21-7.

Figure 21-7 Distributed application

The server distributes the work to its clients. The server also coordinates all individual feedback from the clients and, based on this feedback, decides on further actions. Looking at Figure 21-7, it is obvious that the communication paths between the server and all its clients are key. Without communication paths between these four components the functions eventually come to a complete stop. Matters get more complicated when the communication fails unexpectedly in the middle of an information exchange between the server and its clients, or some of its clients. Usually a two-phase commit process helps to provide a consistent state for certain functions and to determine whether they have successfully completed at the client site. Once a function is successfully completed and is acknowledged to the server, the server progresses to the next function or task. At the same time, the server tries to do as much as possible in parallel to minimize the hit on throughput due to serialization and checkpoints. When certain activities are dependent on each other, the server must coordinate these activities to ensure a proper sequence.


The server and client can be replaced by terms such as master and subordinate; see Figure 21-8. These terms are used later when discussing Global Mirror.

Figure 21-8 Global Mirror as a distributed application

Figure 21-8 shows the basic Global Mirror structure. A master coordinates all efforts within a Global Mirror environment. Once the master is started and manages a Global Mirror environment, it issues all related commands over inband communication to its attached subordinates at the local site. This may include a subordinate within the master itself; the communication between the master and an internal subordinate is transparent and does not need any extra attention from the user. The subordinates use inband communication to communicate with their related target storage disk subsystems at the remote site. The master also receives all acknowledgements from its subordinates and their targets, and coordinates and serializes all the activities in the session.

When the master and subordinate are in a single storage disk subsystem, the subordinate is internally managed by the master. With two or more storage disk subsystems at the local site participating in a Global Mirror session, the subordinate is external and needs separate attention when creating and managing a Global Mirror session or environment.

The following sections explain how Global Mirror works and how Global Mirror ensures consistent data at any time at the remote site. First we go through the process of creating a Global Mirror environment. At the same time, this explanation gives us a first insight into how Global Mirror works.

21.3 Setting up a Global Mirror session


Global Mirror, as a long-distance remote copy solution, is based on an efficient combination of Global Copy and FlashCopy functions. It is the microcode that provides, from the user perspective, a transparent and autonomic mechanism to intelligently utilize Global Copy in conjunction with certain FlashCopy operations to attain consistent data at the remote site. In order to understand how Global Mirror works, we first explain how a Global Mirror environment, that is, a Global Mirror session, is created and started. This step-by-step approach helps to further understand the Global Mirror operational aspects.


21.3.1 Simple configuration to start


In order to understand each step and to show the principles, we start with a simple application environment where a host makes write I/Os to a single application volume (A); see Figure 21-9.

Figure 21-9 Start with simple application environment

21.3.2 Establishing connectivity to remote site


Now we add a distant site, which has a storage disk subsystem (B), and we want to interconnect both sites; see Figure 21-10.

Figure 21-10 Establish Global Copy connectivity between both sites

Note in Figure 21-10 that we establish Global Copy paths over an existing network. This network may be based on an FCP transport technology or on an IP-based network. Global Copy paths are logical connections that are defined over the physical links that interconnect both sites. Note that all remote mirror and copy paths (that is Metro Mirror, Global Copy, and Global Mirror paths) are similar and are based on FCP. The term Global Copy path just denotes that the path is intended for Global Copy use.
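As a minimal, hedged sketch of this step with the DS CLI: the storage image IDs, WWNN, LSS numbers, and I/O port IDs below are placeholders only, and the exact syntax should be verified in the DS CLI User's Guide, SC26-7916.

dscli> lsavailpprcport -dev IBM.2107-7520781 -remotedev IBM.2107-75ABTV1 -remotewwnn 5005076303FFC663 10:10
dscli> mkpprcpath -dev IBM.2107-7520781 -remotedev IBM.2107-75ABTV1 -remotewwnn 5005076303FFC663 -srclss 10 -tgtlss 10 I0010:I0110 I0011:I0111

The lsavailpprcport command lists the I/O port pairs that can be used between the two LSSs, and mkpprcpath then defines the logical paths over the selected ports.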


21.3.3 Creating Global Copy relationships


Next, we create a Global Copy relationship between the source volume and the target volume; see Figure 21-11.

Figure 21-11 Establish Global Copy volume pair

In Figure 21-11, we first change the target volume state from simplex (no relationship) to target copy pending. This copy pending state applies to both volumes, source copy pending and target copy pending. At the same time, data starts to be copied from the source volume to the target volume. After a first complete pass through the entire A volume, Global Copy constantly scans through the out-of-sync bitmap. This bitmap indicates changed data as it arrives from the applications to the source disk subsystem. Global Copy replicates the data from the A volume to the B volume based on this out-of-sync bitmap. In the following paragraphs we refer to the source volume as the A volume and to the target volume as the B volume for simplicity.

Global Copy does not immediately copy the data as it arrives at the A volume. Instead, this is an asynchronous process. As soon as a track is changed by an application write I/O, it is reflected in the out-of-sync bitmap, as with all the other changed tracks. There can be several concurrent replication processes that work through this bitmap, thus maximizing the utilization of the high bandwidth Fibre Channel links. This replication process keeps running until the Global Copy volume pair A-B is explicitly or implicitly suspended or terminated. Note that a Global Mirror session command, for example, to pause or to terminate a Global Mirror session, does not affect the Global Copy operation between both volumes.

At this point data consistency does not yet exist at the remote site.
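For illustration only, a Global Copy pair such as the A-B relationship described above is typically established with the DS CLI mkpprc command and the -type gcp option. The storage image IDs and volume IDs below are placeholders:

dscli> mkpprc -dev IBM.2107-7520781 -remotedev IBM.2107-75ABTV1 -type gcp 1000:2000
dscli> lspprc -l 1000

The lspprc -l output shows the Copy Pending state, the remaining out-of-sync tracks, and whether the first complete pass through the A volume has finished (First Pass Status).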


21.3.4 Introducing FlashCopy


FlashCopy is an integral part of the Global Mirror solution, and it follows as the next step in the course of establishing a Global Mirror session; see Figure 21-12. Starting with DS8000 LIC Release 3, both classical FlashCopy and Space Efficient FlashCopy (FlashCopy SE) can be used for the C volumes in a Global Mirror environment. The creation and handling of the Global Mirror environment is almost identical for FlashCopy and FlashCopy SE; there are dedicated parameters only for the creation and removal of FlashCopy SE pairs.

Figure 21-12 Introducing FlashCopy in the Global Mirror solution

The focus is now on the remote site. Figure 21-12 shows a FlashCopy relationship with a Global Copy target volume as the FlashCopy source volume. Volume B is now, at the same time, a Global Copy target volume and a FlashCopy source volume. The corresponding FlashCopy target volume is in the same disk subsystem. Note that this FlashCopy relationship has certain attributes that are typical and required when creating a Global Mirror session. These attributes are as follows:
- Inhibit target write: Protect the FlashCopy target volume from being modified by anyone other than Global Mirror related actions.
- Enable change recording: Apply changes only from the source volume to the target volume that occurred to the source volume in between FlashCopy establish operations, except for the first time when FlashCopy is initially established.
- Make relationship persistent: Keep the FlashCopy relationship until explicitly or implicitly terminated. This parameter is automatic due to the nocopy property.


- Nocopy: Do not initiate a background copy from source to target, but keep the set of FlashCopy bitmaps required for tracking the source and target volumes. These bitmaps are established the first time a FlashCopy relationship is created with the nocopy attribute. Before a track in the source volume B is modified, between Consistency Group creations, the track is copied to the target volume C to preserve the previous point-in-time copy. This includes updates to the corresponding bitmaps to reflect the new location of the track that belongs to the point-in-time copy. Note that each Global Copy write to its target volume within the window of two adjacent Consistency Groups may cause FlashCopy I/O operations.
- Space Efficient target: Use Space Efficient volumes as FlashCopy targets, which means that FlashCopy SE will be used in the Global Mirror setup. Virtual capacity was allocated in a Space Efficient repository when these volumes were created. A repository volume per extent pool is used to provide physical storage for all Space Efficient volumes in that extent pool. Background copy is not allowed if Space Efficient targets are used. For a detailed description of FlashCopy SE refer to Chapter 10, IBM FlashCopy SE on page 129. Where required, check the IBM Storage support Web site for the availability of FlashCopy features.
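As a hedged illustration of how a B-C relationship with these attributes can be established with the DS CLI (the device ID and volume IDs are placeholders):

dscli> mkflash -dev IBM.2107-75ABTV1 -record -persist -nocp -tgtinhibit 2000:2002

The -record, -persist, -nocp, and -tgtinhibit parameters correspond to the change recording, persistent, nocopy, and inhibit target write attributes listed above. The command can also be issued inband from the local site with mkremoteflash and the -conduit parameter. When FlashCopy SE is used, the target is simply a Space Efficient volume; refer to Chapter 10, IBM FlashCopy SE, for the specific parameters.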

21.3.5 Defining the Global Mirror session


Creating a Global Mirror session does not involve any volumes at either the local or the remote site. Our focus is back on the local site; see Figure 21-13.

Figure 21-13 Define Global Mirror session

Defining a Global Mirror session creates a kind of token, which is a number between 1 and 255. This number represents the Global Mirror session. The session number is defined at the LSS level: each LSS that has volumes that will be part of the session needs a corresponding define session command. Currently only a single session is allowed per DS8000 Storage Facility Image (SFI), that is, per storage LPAR or storage image. The architecture allows for more than one session, and this may be exploited in the future.
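As a minimal sketch with the DS CLI, where the storage image ID, the LSS numbers, and the session number 01 are placeholders, the session is defined for each participating LSS:

dscli> mksession -dev IBM.2107-7520781 -lss 10 01
dscli> mksession -dev IBM.2107-7520781 -lss 11 01

The lssession command can then be used to display the sessions defined for an LSS and the state of their volumes.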


21.3.6 Populating the Global Mirror session with volumes


The next step is the definition of volumes in the Global Mirror session. The focus is still on the local site; see Figure 21-14. Note that only Global Copy source volumes are meaningful candidates to become a member of a Global Mirror session.

Figure 21-14 Add Global Copy source volumes to Global Mirror session

This process adds source volumes to a list of volumes in the Global Mirror session. But at this stage it still does not perform consistency group formation. Note that Global Copy is replicating, on the Global Copy target volumes, the application updates that arrive to the Global Copy source volumes. Initially the Global Copy source volumes are placed in a join pending state. Once a Consistency Group is formed, the Global Copy source volume will then be added to the session and will be placed in an in session state. Nothing happens to the C volume after its initial establishment in a Global Mirror session.
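A hedged DS CLI sketch of this step follows; the device, LSS, volume range, and session number are placeholders, and volumes can alternatively be added at session creation time with the -volume parameter of mksession:

dscli> chsession -dev IBM.2107-7520781 -lss 10 -action add -volume 1000-100F 01
dscli> lssession -dev IBM.2107-7520781 10

Only Global Copy source volumes in the specified LSS are meaningful candidates; the lssession output shows whether they are still in the join pending state or already in session.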


21.3.7 Starting the Global Mirror session


This is the last step and it starts the Global Mirror session. Upon this, Global Mirror starts to form Consistency Groups at the remote site. As Figure 21-15 indicates, the focus here is on the local site, with the start command issued to an LSS in the source storage disk subsystem. With this start command you set the master storage disk subsystem and the master LSS. From now on, session related commands have to go through this master LSS.

Figure 21-15 Start Global Mirror

This start command triggers events that involve all the volumes within the session. This includes a very fast bitmap management at the local storage disk subsystem, issuing inband FlashCopy commands from the local site to the remote site, and verifying that the corresponding FlashCopy operations successfully finished. This all happens at the microcode level of the related storage disk subsystems that are part of the session, fully transparently and in an autonomic fashion from the user's perspective. All C volumes that belong to the Global Mirror session comprise the Consistency Group. Now let us see more details about the Consistency Group creation at the remote site.
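For illustration, and with the storage image ID, master LSS, and session number as placeholders, starting and checking the session with the DS CLI could look like the following sketch (verify the exact parameters in the DS CLI User's Guide, SC26-7916):

dscli> mkgmir -dev IBM.2107-7520781 -lss 10 -session 01
dscli> showgmir -dev IBM.2107-7520781 10

The mkgmir command is issued to the LSS that becomes the master LSS, and showgmir displays the state of the Global Mirror session, including whether Consistency Groups are being formed successfully.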

21.4 Consistency Groups


To achieve the goal of creating a set of volumes at a remote site that contains consistent data, asynchronous data replication alone is not enough; it must be complemented with either a kind of journal or a tertiary copy of the target volume. With Global Mirror this third copy is naturally created through the use of FlashCopy. The microcode automatically triggers a sequence of autonomic events to create a set of consistent data volumes at the remote site. We call this set of consistent data volumes a Consistency Group. The following sections describe the sequence of events that create a Consistency Group.


21.4.1 Consistency Group formation


The creation of a Consistency Group requires three steps that are internally processed and controlled by the microcode. Outside of the Licensed Internal Code (LIC), these steps are fully transparent and do not require any other external code invocation or user action. The numbers in Figure 21-16 illustrate the sequence of the events involved in the creation of a Consistency Group. This illustration provides a high-level view that is sufficient to understand how this process works.

Figure 21-16 Formation of consistent set of volumes at the remote site

Note that before step 1 and after step 3, Global Copy constantly scans through the out-of-sync bitmaps and replicates data from the A volumes to the B volumes as described in 21.3.3, Creating Global Copy relationships on page 315. When the creation of a Consistency Group is triggered, the following steps occur in a serial fashion:
1. Serialize all Global Copy source volumes. This imposes a brief hold on all incoming write I/Os to all involved Global Copy source volumes. Once all source volumes are serialized, the pause on the incoming write I/O is released and all further write I/Os are now noted in the change recording bitmap. They are not replicated until step 3 is done, but application write I/Os can immediately continue.
2. Drain the data from the local to the remote site. This includes the process to replicate all remaining data that is indicated in the out-of-sync bitmap and still not replicated. Once all out-of-sync bitmaps are empty (note that empty here is not meant in a literal sense), step 3 is triggered by the microcode from the local site.
3. Perform FlashCopy. Now the B volumes contain all data as a quasi point-in-time copy and are consistent, due to the serialization process in step 1 and the completed replication or drain process in step 2. Step 3 is a FlashCopy that is triggered by the local system's microcode as an inband FlashCopy command to volume B, as FlashCopy source, and volume C, as FlashCopy target volume. Note that this FlashCopy is a two-phase process: first, the FlashCopy command is sent to all involved FlashCopy pairs in the Global Mirror session; then the master collects the feedback and all incoming FlashCopy completion messages. When all FlashCopy operations are successfully completed, the master concludes that a new Consistency Group has been successfully created.


FlashCopy applies here only to data changed since the last FlashCopy operation. This is because the enable change recording property was set at the time when the FlashCopy relationship was established. The FlashCopy relationship does not end due to the nocopy property, which is also assigned at FlashCopy establish time; see 21.3.4, Introducing FlashCopy on page 316. Note that the nocopy attribute means that the B volumes are not fully replicated to the C volumes by a background process; bitmaps are maintained and updated instead.

Once step 3 is complete, a consistent set of volumes has been created at the remote site. This set of volumes, the C volumes, represents the Consistency Group. At this very moment, and only for this brief moment, the B volumes and the C volumes are equal in their content.

Immediately after the FlashCopy process is logically complete, the local system's microcode is notified to continue with the Global Copy process. In order to replicate the changes to the A volumes that occurred during the step 1 to step 3 window, the change recording bitmap is merged into the now empty out-of-sync bitmap, and from now on all arriving write I/Os end up again in the out-of-sync bitmap. From this point, the conventional Global Copy process, as outlined in 21.3.3, Creating Global Copy relationships on page 315, continues until the next Consistency Group creation process is started.

21.4.2 Consistency Group parameters


In the previous section we described the steps followed during the creation of a Consistency Group. These steps can be adjusted by means of the tuning parameters of Global Mirror.

Figure 21-17 Consistency Group tuning parameters

Global Mirror provides a set of three externalized parameters that can be used for tuning the Consistency Group formation process, overriding the default values; see Figure 21-17:
- Maximum coordination time: dictates, for the Global Copy source volumes that belong to this Consistency Group, how long a host write I/O may be impacted due to coordination and serialization activities. This time is measured in milliseconds (ms). The default is 50 ms.
- Maximum drain time: the maximum time allowed for draining the out-of-sync bitmap once the process to form a Consistency Group is started and step 1 has successfully completed. The maximum drain time is specified in seconds. The default is 30 seconds and should normally not be changed.


If the maximum drain time is exceeded, Global Mirror fails to form the Consistency Group and evaluates the current throughput of the environment. If this indicates that another drain failure is likely, Global Mirror stays in Global Copy mode while regularly re-evaluating the situation to determine when to start forming the next Consistency Group. If this persists for a significant period of time, Global Mirror eventually forces the formation of a new Consistency Group. In this way Global Mirror ensures that, during periods when the bandwidth is insufficient, production performance is protected and data is transmitted to the remote site in the most efficient manner possible. When the peak activity has passed, Consistency Group formation resumes in a timely fashion.

Consistency Group interval time: once a Consistency Group has been created, the CG interval time determines how long to wait before starting the formation of the next Consistency Group. This is specified in seconds, and the default is zero (0) seconds. Zero seconds means that Consistency Group formation happens constantly: as soon as a Consistency Group is successfully created, the process to create a new Consistency Group starts again immediately.

There is no external parameter to limit the time for the FlashCopy operation, because it is a very fast bitmap manipulation process.

In the first step, the serialization step, when the Consistency Group spans more than one source disk subsystem, Global Mirror not only serializes all related Global Mirror source volumes but also coordinates with the other storage disk subsystems. Global Mirror utilizes a distributed approach as well as a two-phase commit technique for activities between the master and its subordinate LSSs. The communication between the local and the remote site is organized through the subordinate LSSs, which function partly as a transient component for the Global Mirror activities, all of which are triggered and coordinated by the master. This distributed concept provides a set of data consistent volumes at the remote site, independent of the number of involved storage disk subsystems at the local or the remote site.


Part 7. Solutions
In this part of the book, we provide an overview of solutions offered by IBM to assist you in the management, automation, and control of your Copy Services implementation on the DS8000.



Chapter 22. Global Mirror options and configuration


In this chapter we provide a detailed description of Global Mirror options, including how to create a Global Mirror environment and how to remove it. We discuss how to change Global Mirror tuning parameters and how to modify an active Global Mirror session. We also discuss a scenario where a site switch is performed due to the local site failure. Global Mirror is intended for long distance data replication and, for this, it relies on the network infrastructure and components used between the remote sites. We do not cover network-related topics in this chapter. For information about these topics, refer to the Redbooks publication, IBM TotalStorage Business Continuity Solutions Guide, SG24-6547.


22.1 Terminology used in Global Mirror environments


First, let us review and further define some of the terms and new elements we have presented so far, which are commonly used when working in a Global Mirror context.

Dependent writes
If the start of one write operation is dependent upon the completion of a previous write, the writes are dependent. Application examples for dependent writes are databases with their associated logging files. For instance, the database logging file is updated after a new entry has been successfully written to a tablespace. Maintaining, at the remote site, the chronological order of dependent writes to the source volumes is the basis for providing consistent data at the remote site through remote copy operations. Dependent writes are discussed in detail in 14.4.1, Data consistency and dependent writes on page 181, in the Metro Mirror part.

Consistency
The consistency of data is ensured if the order of dependent writes to disks or disk groups is maintained. With Copy Services solutions, the data consistency at the remote site is important for the usability of the data. Consistent data, for instance, gives you the ability to perform a database restart rather than a database recovery, which could take hours or even days. Data consistency across all target volumes spread across multiple storage disk subsystems is essential for logical data integrity.

Data currency
This term describes the time difference between when the data was last written at the production site and when the same data was written to the remote site. It determines the amount of data you have to recover at the remote site after a disaster. This is also called the recovery point objective, or RPO. Only synchronous copy solutions such as Metro Mirror have a currency of zero (RPO = zero). All asynchronous copy solutions have a data currency greater than zero. With Global Mirror a data currency of a few seconds can be achieved, while data consistency is always maintained by the Consistency Group process within Global Mirror. For asynchronous replication, this means that the data is not replicated at the same time as the local I/O happens, but with a certain time lag. Here are some examples of different non-synchronous or asynchronous replication methods:
Global Copy is a non-synchronous method that does not guarantee consistent data at the remote site.
z/OS Global Mirror (formerly XRC) is an asynchronous replication method that guarantees consistent data at the remote site.
Global Mirror is also an asynchronous replication method that provides consistent data at the remote site.

Session
A Global Mirror session is a collection of volumes that are managed together when creating consistent copies of data volumes. This set of volumes can reside in one or more LSSs and one or more storage disk subsystems at the local site. Open systems volumes and z/OS volumes can both be members of the same session.


When you start or resume a session, the creation of Consistency Groups is performed, and the master storage disk subsystem controls the session by communicating with the subordinate storage disk subsystems. There is also a session concept at the LSS level. But all LSS sessions are combined and grouped together within a Global Mirror session.

Master
The master is a function inside a source storage disk subsystem that communicates with subordinates in other storage disk subsystems, and controls the creation of Consistency Groups while managing the Global Mirror session. The master is defined when the start command for a session is issued to any LSS in a source storage disk subsystem; this determines which storage disk subsystem becomes the master. The master needs communication paths over Fibre Channel links to any one of the LSSs in each subordinate disk storage subsystem.

Subordinate
The subordinate is a function inside a source storage disk subsystem that communicates with the master and is controlled by the master. At least one of the LSSs of each subordinate source storage disk subsystem needs Fibre Channel communication paths to the master. This enables the communication between the master and the subordinate, and is required to create Consistency Groups of volumes that span more than one storage disk subsystem. If all the volumes of a Global Mirror session reside in one source storage disk subsystem, no subordinate is required, because the master can communicate with all LSSs inside the source storage disk subsystem.

Consistency Group
This is a group of volumes in one or more target storage disk subsystems whose data must be kept consistent at the remote site.

Local Site
This is the site that contains the production servers. This and other publications use this term interchangeably with the terms primary, production, or source site.

Remote Site
This is the site that contains the backup servers of a disaster recovery solution. This and other publications use this term interchangeably with the terms secondary, backup, standby, or target site.

22.2 Creating a Global Mirror environment


The following section recommends a certain sequence of steps to establish a Global Mirror environment, independent of the interface used. Note that, for illustration purposes, this section uses parameters and keywords that apply to the DS CLI commands.


We explain a basic Global Mirror setup, illustrated in Figure 22-1.

Figure 22-1 Global Mirror basic configuration with master and subordinate disk subsystems

The order of commands to create a Global Mirror environment is not completely fixed and allows for some variation. In order to be consistent with other sources, and to not confuse the user with a different sequence of commands, we recommend a meaningful order and suggest the following steps to create a Global Mirror environment:
1. Define the paths between the local site and the remote site. In Figure 22-1, these are the logical communication paths between corresponding LSSs at the local site and the remote site, defined over Fibre Channel physical links that are configured over the network. Global Copy source LSSs are represented by the A volumes and their corresponding Global Copy target LSSs by the B volumes. You can also define here the logical communication paths between the master and any subordinate storage disk subsystem that will be part of the Global Mirror session. Note that these paths are defined between source storage disk subsystems at the local site. With only a single source storage disk subsystem, you do not need to define paths to connect internal LSSs within the source storage disk subsystem; the communication between the master and the subordinates within a single source storage disk subsystem is transparent and internally performed.
2. When the communication paths are defined, start the Global Copy pairs that will be part of a Global Mirror session. The Global Copy pairs are created with the mkpprc -type gcp command. We recommend that you wait until the first initial copy is complete before you continue to the next step. This avoids unnecessary FlashCopy background I/Os in the following step.


3. The next step is to create the FlashCopy relationships between the B and C volumes. You can use the mkflash command (or the inband mkremoteflash command issued from the local site), with the following parameters: -tgtinhibit, -record, and -nocp. The -persist parameter is automatically set when the -record parameter is selected. If you use Space Efficient volumes as FlashCopy target volumes, also add the -tgtse parameter. For a discussion of the particular FlashCopy attributes that are required for a Global Mirror FlashCopy, see 21.3.4, Introducing FlashCopy on page 316.
4. With external subordinates, that is, with more than one involved disk subsystem at the local site, you need paths between the master LSS and any potential subordinate storage disk subsystem at the local site. If you did not define these paths in the very first step, then this is the time to create them before you continue with the next step.
5. Define a token that identifies the Global Mirror session. This is a session ID with a number between 1 and 255. Define this session number to the master storage disk subsystem and also to all potentially involved source LSSs that are going to be part of this Global Mirror session and will contain Global Copy source volumes belonging to the session. The source LSSs include all LSSs in potential subordinate storage disk subsystems that are going to be part of the Global Mirror session. For this step you can use the mksession command.
6. The next step populates the session with the Global Copy source volumes. You should put these Global Copy source volumes into the session after they complete their first pass for the initial copy. For this step you can use the chsession -action add -volume command.
7. Start the session. This command actually defines the master LSS, and all further session commands have to go through this LSS. With the start command you can specify the Global Mirror tuning parameters, such as maximum drain time, maximum coordination time, and Consistency Group interval time. For this step you can use the mkgmir command.
You go through this recommended sequence independent of the interface used. The DS CLI sketch that follows illustrates the sequence.
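The following DS CLI sketch maps these steps onto commands for a single local and a single remote disk subsystem. All IDs are hypothetical examples (local storage image IBM.2107-7500001, remote storage image IBM.2107-7500002, remote WWNN 5005076303FFD123, A volumes 1000-1003 in LSS 10, B volumes 2000-2003 in LSS 20, C volumes 2100-2103, I/O ports I0010 and I0143, session number 01), and the exact parameters and their ordering should be verified against the DS CLI User's Guide for your code level:
Step 1, define the Global Copy paths from the local LSS to the remote LSS:
   mkpprcpath -dev IBM.2107-7500001 -remotedev IBM.2107-7500002 -remotewwnn 5005076303FFD123 -srclss 10 -tgtlss 20 I0010:I0143
Step 2, establish the Global Copy pairs from the A to the B volumes:
   mkpprc -dev IBM.2107-7500001 -remotedev IBM.2107-7500002 -type gcp 1000-1003:2000-2003
Step 3, establish the FlashCopy relationships from the B to the C volumes at the remote site:
   mkflash -dev IBM.2107-7500002 -tgtinhibit -record -nocp 2000-2003:2100-2103
Step 5, define session 01 on the source LSS (step 4 is not needed with a single local disk subsystem):
   mksession -dev IBM.2107-7500001 -lss 10 01
Step 6, add the A volumes to the session:
   chsession -dev IBM.2107-7500001 -lss 10 -action add -volume 1000-1003 01
Step 7, start the session; LSS 10 becomes the master LSS:
   mkgmir -dev IBM.2107-7500001 -lss 10 -session 01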

22.3 Modifying a Global Mirror session


When a session is active and running, you can alter the Global Mirror environment to add or remove volumes. You can also add storage disk subsystems to a Global Mirror session, or change the interval between the formation of Consistency Groups.

22.3.1 Adding or removing volumes to the Global Mirror session


Volumes can be added to the session at any time after the session number is defined to the LSS where the volumes reside. After the session is started, volumes can be added to or removed from the session, also at any time. Note: You only add Global Copy source volumes to a Global Mirror session. Volumes can be added to a session in any state, for example, simplex or pending. Volumes that have not completed their initial copy phase stay in a join pending state until the first initial copy is complete. If a volume in a session is suspended, it causes Consistency Group formation to fail.


We recommend that you add only Global Copy source volumes that have completed their initial copy or first pass, although the microcode itself stops volumes from joining the Global Mirror session until the first pass is complete. Also, we recommend that you wait until the first initial copy is complete before you create the FlashCopy relationship between the B and the C volumes.
Note: You cannot add a Metro Mirror source volume to a Global Mirror session. Global Mirror supports only Global Copy pairs. When Global Mirror detects a volume that, for example, has been converted from Global Copy to Metro Mirror, the following formation of a Consistency Group will fail.
When you add a rather large number of volumes at once to an existing Global Mirror session, the available resources for Global Copy within the affected ranks can be utilized by the initial copy pass. To minimize the impact to the production servers when adding many new volumes, you might consider adding the new volumes to an existing Global Mirror session in stages. Suspending a Global Copy pair that belongs to an active Global Mirror session will impact the formation of Consistency Groups. When you intend to remove Global Copy volumes from an active Global Mirror session, follow these steps (a DS CLI sketch follows):
1. Remove the desired volumes from the Global Mirror session.
2. Withdraw the FlashCopy relationship between the B and C volumes.
3. Terminate the Global Copy pair to bring volume A and volume B into simplex mode.
Note: When you remove A volumes without pausing Global Mirror, you might see this reflected as an error condition with the showgmir -metrics command, indicating that the Consistency Group formation failed. However, this does not mean you have lost a consistent copy at the remote site, because Global Mirror does not take the FlashCopy (B to C) for the failed Consistency Group data. This message indicates that just one Consistency Group formation has failed, and Global Mirror will retry the sequence.
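As an illustrative sketch only, using the same hypothetical IDs as in 22.2, removing the Global Copy pair 1000:2000 (with FlashCopy target 2100) from session 01 could look like this; verify the exact chsession syntax against the DS CLI User's Guide:
Remove the source volume from the Global Mirror session:
   chsession -dev IBM.2107-7500001 -lss 10 -action remove -volume 1000 01
Withdraw the FlashCopy relationship between the B and C volumes:
   rmflash -dev IBM.2107-7500002 2000:2100
Terminate the Global Copy pair so that A and B return to simplex mode:
   rmpprc -dev IBM.2107-7500001 -remotedev IBM.2107-7500002 1000:2000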

22.3.2 Adding or removing storage disk subsystems or LSSs


When you plan to add a new subordinate storage disk subsystem to an active session, you have to stop the session first. Then add the new subordinate storage disk subsystem and start the session again. The session start command will then contain the new subordinate storage disk subsystem. The same procedure applies when you remove a storage disk subsystem from a Global Mirror session, which can be a subordinate only. In other words, you cannot remove the master storage disk subsystem. When you add a new LSS to an active session and this LSS belongs to a storage disk subsystem that already has another LSS that belongs to this Global Mirror session, you can add the new LSS to the session without stopping and starting the session again. This is true for either the master or for a subordinate storage disk subsystem.

22.3.3 Modifying the Global Mirror session parameters


The parameters that are used for tuning a Global Mirror session can be modified by pausing the session and then resuming the session with the new values. The parameters that you can modify are:
Consistency Group interval time
Maximum coordination time
Maximum Consistency Group drain time


When you give a start command after a pause command, as opposed to a resume command, any value for the Consistency Group interval time, maximum coordination time, or maximum Consistency Group drain time specified in the start command is ignored. If these parameters have to be altered, you have to give a resume command to the paused session with the parameters specified. If you resume a paused session without specifying these parameters, they are set to their default values; see 21.4.2, Consistency Group parameters on page 321.
Important: When setting new values for the tuning parameters, be sure to check for errors in Consistency Group formation and in draining the out-of-sync bitmaps. A few errors are not significant and do not jeopardize the consistency of your Global Mirror. However, if failures occur repeatedly, for example, no more Consistency Groups are formed, the percentage of successful Consistency Groups is unacceptable, or the frequency of Consistency Groups is not meeting your requirements (Recovery Point Objective, RPO), then the new values set for the tuning parameters need to be revised and changed.
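As a sketch with the hypothetical IDs from 22.2, and with option names that should be verified against the DS CLI User's Guide, raising the maximum drain time to 60 seconds while keeping the default coordination time and interval could look like this:
   pausegmir -dev IBM.2107-7500001 -lss 10 -session 01
   resumegmir -dev IBM.2107-7500001 -lss 10 -session 01 -coordinate 50 -drain 60 -cginterval 0
You can then watch the Consistency Group formation statistics, for example with showgmir -metrics against the master LSS, to confirm that Consistency Groups are being formed at the expected rate.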

22.3.4 Global Mirror environment topology changes


The topology of a Global Mirror session, for example, the master SSID, subordinate SSIDs, and subordinate serial numbers, cannot be altered through a pause and resume command sequence. When you resume the session after a pause and the Global Mirror topology is not the same as it was at the pause, the start of the session will fail. If you need to change the topology of your Global Mirror session, you have to stop the session and start the session again with the new topology structure information in the mkgmir command. Topology also refers to the list of storage disk subsystems that are subordinates. You define control paths between the master and subordinate LSSs. One LSS per subordinate disk subsystem is sufficient. When you define the control path you identify the source LSS on the master disk subsystem. The target LSS in the path definition command points to a corresponding subordinate disk subsystem. These LSSs go into the topology specification that defines the communication paths between master and subordinate storage disk subsystems. To change these values you must stop the Global Mirror process (rmgmir command) and start it again with the new topology specifications (mkgmir command).

22.3.5 Removing FlashCopy relationships


When you withdraw a FlashCopy relationship within a Global Mirror session, the Consistency Group process is affected. If a withdraw is required, then first remove the Global Copy source volume from the session. If FlashCopy SE has been used for the Global Mirror environment, you should also release the repository space that was used for the Space Efficient volumes with the -tgtreleasespace parameter. Refer to 10.4.2, Removing FlashCopy relationships and releasing space on page 145 for the options to release Space Efficient repository space. After the volumes are removed from the session, you can explicitly terminate the corresponding FlashCopy relationship that was tied to this source volume. The termination of FlashCopy relationships might be necessary when you want to change the FlashCopy targets within a Global Mirror configuration and choose, for example, another LSS for the FlashCopy targets. You might be doing this because you want to replace the FlashCopy targets due to a skew in the load pattern in the remote storage disk subsystem. In this situation you can pause the session before such activity, and then resume the session again once the replacement of the FlashCopy relationships is completed.
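For example, with the hypothetical IDs used earlier and a Space Efficient C volume, a withdraw that also releases the repository space could look like this sketch:
   rmflash -dev IBM.2107-7500002 -tgtreleasespace 2000:2100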

Note: A pause command (pausegmir) will complete a Consistency Group formation that is in progress. A stop (rmgmir) will immediately end the formation of Consistency Groups.

22.3.6 Removing the Global Mirror environment


To remove a Global Mirror environment and remove all traces of Global Mirror, we recommend the following sequence of steps (a DS CLI sketch follows):
1. Terminate the Global Mirror session.
2. Remove all Global Copy source volumes from the Global Mirror session.
3. Close the Global Mirror session.
4. Withdraw all FlashCopy relationships between the B and C volumes.
5. Terminate all Global Copy pairs.
6. Remove the paths between the local and remote sites.
7. Remove the paths between the master LSS and all related subordinate LSSs.
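Mapped to DS CLI commands, and again only as a sketch with the hypothetical IDs from 22.2 (verify the exact syntax against the DS CLI User's Guide), this sequence could look like the following; with a single local disk subsystem, step 7 does not apply:
   rmgmir -dev IBM.2107-7500001 -lss 10 -session 01
   chsession -dev IBM.2107-7500001 -lss 10 -action remove -volume 1000-1003 01
   rmsession -dev IBM.2107-7500001 -lss 10 01
   rmflash -dev IBM.2107-7500002 2000-2003:2100-2103
   rmpprc -dev IBM.2107-7500001 -remotedev IBM.2107-7500002 1000-1003:2000-2003
   rmpprcpath -dev IBM.2107-7500001 -remotedev IBM.2107-7500002 -remotewwnn 5005076303FFD123 10:20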

22.4 Global Mirror with multiple storage disk subsystems


When you create a Global Mirror environment that spans multiple storage disk subsystems at the local site, and probably also at the remote site, then you need to define communication paths between the involved local storage disk subsystems. See Figure 22-2.

Figure 22-2 Define Global Mirror session to all potentially involved storage disk subsystems


Figure 22-2 on page 332 shows a symmetrical configuration with a one-to-one mapping. You have to define the corresponding session, with its number, to all potentially involved LSSs at the local site. There is still no connection between both local storage disk subsystems. Therefore we have to define the corresponding Global Mirror paths; see Figure 22-3.
Figure 22-3 Decide for master disk subsystem and start Global Mirror session

Through the start command mkgmir, you decide which LSS becomes the master LSS and consequently which local storage disk subsystem becomes the master storage disk subsystem. This master acts like a server in a client/server environment. The required communication between the master storage disk subsystem and the subordinate storage disk subsystems is inband, over the defined Global Mirror paths. This communication is highly optimized, and minimizes any potential application write I/O impact during the coordination phase to about a few milliseconds. For details see 21.4, Consistency Groups on page 319.


Note that this communication is performed over FCP links. At least one FCP link is required between the master storage disk subsystem and the subordinate storage disk subsystem. Figure 22-4 uses dashed lines to show the Global Mirror paths that are defined over FCP links, between the master storage disk subsystem and its associated subordinate storage disk subsystems. These FCP ports are dedicated for Global Mirror communication between master and subordinates.

Figure 22-4 Global Mirror paths over FCP links between source storage disk subsystems

Also shown in Figure 22-4 is a shared port on the master storage disk subsystem, and dedicated ports at the subordinates. Not considering availability, from a communications and traffic viewpoint only, a link would be sufficient for the traffic between the master and its subordinates. For redundancy, we suggest configuring two links. Note that when you configure links over a SAN network, the same FCP ports of the storage disk subsystem can be used for the Global Mirror session communication, as well as for the Global Copy communication, and for host connectivity. However, for performance reasons, and to prevent host errors from disrupting your Global Mirror environment, it is often a good idea to use separate FCP ports.


The sample configuration in Figure 22-5 shows a mix of dedicated and shared FCP ports. In this example, an FCP port in the master storage disk subsystem is used as a Global Mirror link to the other two subordinate storage disk subsystems, and is also used as a Global Copy link to the target disk subsystem. Also, there are ports at the subordinate disk subsystems that are used as Global Mirror session links as well as Global Copy links.

Figure 22-5 Dedicated and shared links

If possible, a better configuration is the one shown in Figure 22-6. Again, from a performance and throughput viewpoint, you would not need two Global Mirror links between the master and its subordinate storage disk subsystems. Still, dedicated ports for the Global Mirror control communication between master and subordinates provide maximum responsiveness.


Figure 22-6 Dedicated Global Mirror links and dedicated Global Copy links

22.5 Recovery scenario after production site failure


This section covers the steps that you need to follow in a Global Mirror environment when a production site failure requires you to recover at the remote site.

22.5.1 Normal Global Mirror operation


Figure 22-7 shows a simple configuration with Global Mirror active and running.

Figure 22-7 Normal Global Mirror operation


Writes from the server are replicated through Global Copy, and Consistency Groups are created as tertiary copies. The B volumes are Global Copy target volumes, and they are also FlashCopy source volumes. The C volumes are the FlashCopy target volumes. The FlashCopy relationship is a particular type of relationship; see 21.3.4, Introducing FlashCopy on page 316.

22.5.2 Production site failure


A failure at the local site prevents all I/O to and from the local storage disk subsystems; see Figure 22-8 on page 337. This can have some impact on the formation of Consistency Groups, because the entire process is managed and controlled by the master storage disk subsystem, which is also a source disk subsystem, has just failed, and can no longer communicate with its partners at the remote site. The goal is to swap to the remote site and restart the applications. This requires, first, making the set of consistent volumes at the remote site available to the application before the application can be restarted at the remote site. Then, once the local site is back and operational again, we must return to the local site. Before returning to the local site, we must apply to the source volumes the changes that the application made to the target data while it was running at the remote site. After this, we can do a quick swap back to the local site and restart the application.

Figure 22-8 Production site fails

When the local storage disk subsystem fails, Global Mirror can no longer form Consistency Groups. Depending on the state of the local storage disk subsystem, you might be able to terminate the Global Mirror session. Usually this is not possible because the storage disk subsystem might not respond any longer. Host application I/O might have failed and the application ended. This usually goes along with messages or SNMP alerts that indicate the problem. With automation in place, for example, TotalStorage Productivity Center for Replication, these alert messages trigger the initial recovery actions. If the formation of a Consistency Group was in progress, then most probably not all FlashCopy relationships between the B and C volumes at the remote site will have reached the corresponding point-in-time. Some FlashCopy pairs might have completed the FlashCopy phase to form a new Consistency Group and committed the changes already. Others might not have completed yet, are in the middle of forming their consistent copy, and remain in a revertible state. And there is no longer a master to control and coordinate what might still be going on. All of this imposes a closer look at the volumes at the remote site before we can continue to work with them. There is more discussion of this in the following sections. First, however, we bring the B volumes into a usable state using the failover command.

22.5.3 Global Copy Failover from B to A


Because the source storage disk subsystem might no longer be usable, now the recovery actions and processing occur at the remote site, using a server connected to the remote storage disk subsystems for storage management console functions. See Figure 22-9.

Figure 22-9 Perform Global Copy Failover from B to A

You can use DS Storage Manager (DS SM), the DS Command-Line Interface (DS CLI), and TotalStorage Productivity Center for Replication to execute the needed commands to the remote storage disk subsystems. Doing a failover (Copy Services Failover function) on the Global Copy target volumes turns them into source volumes and suspends them immediately. This sets the stage for change recording when application updates start changing the B volumes. Change recording in turn allows you to re-synchronize the changes later from the B to the A volumes, before returning to the local site to resume the application there. But at this stage the B volumes do not contain consistent data; they cannot be used yet. We just changed their state from target copy pending to suspended. The state of the A volumes remains unchanged. The key element when you run a Copy Services Failover is that the B volumes become the new source volumes. This action just changes the state of the B volumes from target copy pending to suspended. It does not require communication with the other storage disk subsystem at all, even though that subsystem is specified in the failoverpprc command. When all the failover commands are successfully executed, we can move on to the next step, as sketched below.
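With the hypothetical IDs used in 22.2, this failover is issued against the remote storage disk subsystem, with the B volumes specified first as the new source volumes; treat the sketch as illustrative only:
   failoverpprc -dev IBM.2107-7500002 -remotedev IBM.2107-7500001 -type gcp 2000-2003:1000-1003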

22.5.4 Verifying for valid Consistency Group state


Next you have to investigate whether all FlashCopy relationships are in a consistent state; see Figure 22-10. This means that you must query all FlashCopy relationships between B and C that are part of the Consistency Group, to determine the state of each FlashCopy relationship. Global Mirror might have been in the middle of forming a Consistency Group, and FlashCopy might not have completed the creation of a complete set of consistent C volumes.


Figure 22-10 Check Consistency Group state

Each FlashCopy pair needs a FlashCopy query to identify its state. If the local site, and with it the source storage disk subsystem, is still accessible, you might consider carrying out these activities at the production site using remote FlashCopy commands. Most likely the source storage disk subsystem no longer responds. In this case you must target the query directly at the recovery site storage disk subsystem. When you query a FlashCopy pair, there are two pieces of information that are key to determining whether the C volume set is consistent or needs intervention: the revertible state and the sequence number.

The lsflash command reports the Revertible state as either Enable or Disable, which indicates whether the state of the FlashCopy is revertible or non-revertible. A non-revertible state means that a FlashCopy process has completed successfully and all changes are committed. Global Mirror uses the two-phase FlashCopy establishment operation. This operation allows the storage disk subsystem to prepare for a new FlashCopy relationship without altering the existing FlashCopy relationship. You can either commit or revert this new FlashCopy relationship while it is in the revertible state by using the revertflash and commitflash commands. During the Consistency Group formation process, Global Mirror puts all FlashCopy relationships in the revertible state and, after they are all in the revertible state, commits all FlashCopy relationships. With this operation, the situation in which some FlashCopy operations have not started while others have completed does not occur.

The sequence number is an identifier that can be set at FlashCopy establish operations and is then associated with the FlashCopy relationship. Subsequent FlashCopy withdraw operations can be directed to FlashCopy relationships with specific sequence numbers. Global Mirror uses the sequence number to identify a particular Consistency Group. The actual sequence number used by Global Mirror is the platform timer from the Global Mirror master storage disk subsystem (in seconds resolution) at the point when the Global Mirror source components have to be coordinated to form a Consistency Group. This is at a point before the Consistency Group is transferred to the remote site. If your master storage disk subsystem platform timer is set to the time of day, then the FlashCopy sequence number for Global Mirror approximates a time stamp for the Consistency Group.

The best situation is when all the FlashCopy pairs of a Global Mirror session are in the non-revertible state and all their sequence numbers are equal. No further action is necessary; the set of C volumes is consistent, and the copy is good.
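As an illustrative sketch with the hypothetical IDs used earlier, the state check and the possible corrective actions described next could look like this:
Query the FlashCopy relationships and check the revertible state and the sequence number of each pair:
   lsflash -dev IBM.2107-7500002 -l 2000-2003
Back out an incomplete Consistency Group on the pairs that are still revertible:
   revertflash -dev IBM.2107-7500002 2000-2003
Or complete an interrupted Consistency Group on the pairs that are still revertible:
   commitflash -dev IBM.2107-7500002 2000-2003
Which of the two corrective commands applies, if any, is determined by the decision table that follows.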


Figure 22-11 shows the Consistency Group creation process. The action required depends on the state of the Consistency Group creation process when the failure occurred.

Figure 22-11 FlashCopy Consistency Group creation process (coordinate and briefly hold application writes within the maximum coordination time, drain the Consistency Group to the remote DS in Global Copy mode within the maximum drain time, issue the FlashCopies with the revertible option, commit them once all revertible FlashCopies have completed successfully, then start the next Consistency Group after the Consistency Group interval)

Depending on when the failure occurs, there are some combinations of revertible states and FlashCopy sequence numbers that need different corrective actions. Use Table 22-1 as a guide. It is a decision table: for each case, when the conditions on the revertible state and the FlashCopy sequence numbers are both true, take the indicated action; additional comments are provided for each case. The cases are described in chronological order with respect to the Consistency Group formation process.
Table 22-1 Consistency Group and FlashCopy validation decision table

Case 1
Are all FC relationships revertible? No.
Are all FC sequence numbers equal? Yes.
Action to take: No action needed. All C volumes are consistent.
Comments: CG formation ended.

Case 2
Are all FC relationships revertible? Some. Some FlashCopy pairs are revertible and others are not revertible.
Are all FC sequence numbers equal? The revertible FlashCopy pairs' sequence numbers are equal, and the non-revertible FlashCopy pairs' sequence numbers are equal, but they do not match the revertible FlashCopies' sequence number.
Action to take: Revert FC relations.
Comments: Some FlashCopy pairs are running in a Consistency Group process and some have not yet started their incremental process.

Case 3
Are all FC relationships revertible? Yes.
Are all FC sequence numbers equal? Yes.
Action to take: Revert all FC relations.
Comments: All FlashCopy pairs are in a new Consistency Group process and none have finished their incremental process.

Case 4
Are all FC relationships revertible? Some. Some FlashCopy pairs are revertible and others are not revertible.
Are all FC sequence numbers equal? Yes.
Action to take: Commit FC relations.
Comments: Some FlashCopy pairs are running in a Consistency Group process and some have already finished their incremental process.

If you see a situation other than the above four situations, then the Global Mirror mechanism has been corrupted.

Case 1: FlashCopies still committed


This indicates the situation where all FlashCopy operations have completed their task (and the next FlashCopy operations have not been started). In this situation, we do not need any action to correct the FlashCopy status, because we have consistent data on all C volumes. This state is also reached after the FlashCopies are complete and Global Mirror is waiting to create the next Consistency Group.

Case 2: FlashCopies partially issued


In this case, there is a group of FlashCopy pairs that are all revertible and another group of FlashCopy pairs that are all non-revertible. Consistency can be restored if these two criteria are true:
The FlashCopy sequence number for all revertible pairs is equal.
The FlashCopy sequence number for all non-revertible pairs is equal too.
This indicates that the FlashCopy operations were interrupted. Some FlashCopy operations for the new Consistency Group were started, but not all of them. The FlashCopy relationships that have not started are in a non-revertible state and all of them have the same FlashCopy sequence number. The other FlashCopy relationships, which had already started, are in a revertible state and all of them have the same FlashCopy sequence number, but that number is different from the sequence number of the non-revertible FlashCopy relationships. All these indications suggest that you have to return the revertible FlashCopy relationships to the previous Consistency Group using the revertflash command, and terminate the FlashCopy relationships.

Case 3: FlashCopies all revertible


In the case where all of the pairs are revertible and all FlashCopy sequence numbers are equal, this indicates that all FlashCopy operations were running and none completed their task for the Consistency Group formation. The fact that all relationships are still in a revertible state denotes that nothing was finished and committed. Also, the identical FlashCopy sequence numbers denote that all the FlashCopy operations were involved in the very same Consistency Group set. All these indications suggest that you have to use the revertflash command to return to the previous Consistency Group. The revert action, which is invoked by the revertflash command, restores the Consistency Group level between the source and target volumes to the prior state before the current FlashCopy ran and resets the revertible state to Disable. The FlashCopy relationship is preserved.


When the FlashCopy relationship is in a non-revertible state, the revert operation is not possible. When you issue this command to FlashCopy pairs that are non-revertible, you are going to see only an error message, but no action is performed.

Case 4: FlashCopies partially committed


If at the failure point some of the FlashCopy operations had completed their task to create a consistent copy and committed this process, these operations will be non-revertible. Other FlashCopy relationships might have not completed their corresponding part of the new Consistency Group, so these will still be in a revertible state. If all of them, revertible and non-revertible, have the same FlashCopy sequence number, this means that they all were involved in the very same Consistency Group. This allows you to commit the revertible FlashCopy relationships using the commitflash command. To make the task easier, you can run the commitflash to all FlashCopy pairs. Non-revertible FlashCopy relationships will just return an error message. Usually all FlashCopy pairs are non-revertible and all sequence numbers are equal. In this case, you do not need to take any action. Nevertheless, and depending on the failure point-in-time, you might have to perform one of the above recovery actions. After this action, all FlashCopy pairs will be non-revertible and all sequence numbers will be equal. Now you can proceed to the next step.

22.5.5 Setting consistent data on B volumes


At this point only the C volumes comprise a set of consistent data volumes. The B volumes by definition do not provide consistent data, because Global Copy does not provide data consistency. We want to have two good copies of the data at the recovery site: the aim is to have a consistent set of volumes to work with, while still keeping a good copy to which we can resort if needed. The next step then is to create the same consistency on the B volumes as we have on the C volumes; see Figure 22-12. This can be achieved with the reverseflash command and the -fast parameter. This operation is called Fast Reverse Restore (FRR). You have to add the -tgtpprc parameter to the reverseflash -fast command because the B volume is also the Global Copy source at this step.

Figure 22-12 Set a consistent set of B volumes using the C volumes as source


Though the Fast Reverse Restore operation starts the background copy from the C to the B volumes, in the reverseflash command you must specify the B volumes as the FlashCopy sources and the C volumes as the FlashCopy targets. With the reverseflash command, you have to use the following parameters:
-fast: with this parameter, the reverseflash command can be issued before the background copy completes. This option is intended for use as part of Global Mirror.
-tgtpprc: after the Failover B to A operation described in 22.5.3, Global Copy Failover from B to A on page 338, the B volume became a Global Copy source volume in suspended state. The -tgtpprc parameter allows the FlashCopy target volume to be a Global Copy source volume. You have to specify this parameter here because the B volume becomes a FlashCopy target in the reverseflash process.
Because you do not specify the -persist parameter, the FlashCopy relationship ends after the background copy from C to B completes. The Fast Reverse Restore (FRR) operation does a background copy of all tracks that changed on B since the last Consistency Group (CG) formation. This results in the B volume becoming equal to the image that was present on the C volume. This is the logical view; from the physical data placement point of view, the C volume does not have meaningful data after the FlashCopy relationship ends. You have to wait until all Fast Reverse Restore operations and their background copies complete successfully before you proceed with the next step. Again, when the background copy completes, the FlashCopy relationship ends. Therefore, you can check whether the FlashCopy relationships still exist to determine when all Fast Reverse Restore operations are completed, as sketched below.
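With the hypothetical IDs used earlier, the Fast Reverse Restore and the subsequent completion check could look like the following sketch (B volumes 2000-2003 specified as FlashCopy sources, C volumes 2100-2103 as targets):
   reverseflash -dev IBM.2107-7500002 -fast -tgtpprc 2000-2003:2100-2103
   lsflash -dev IBM.2107-7500002 2000-2003
When lsflash no longer reports these relationships, the background copy has completed and the Fast Reverse Restore is finished.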

22.5.6 Re-establishing FlashCopy relationships between B and C


In this step you establish the former FlashCopy relationship between the B and C volumes, as it was at the beginning when you set up the Global Mirror environment. This step is in preparation for returning later to production at the local site. The command at this step is exactly the same FlashCopy command you might have used when you initially created the Global Mirror environment; see 21.3.4, Introducing FlashCopy on page 316. In a disaster situation, you possibly do not want to use the -nocp option for the FlashCopy from B to C. This removes the FlashCopy I/O overhead when the application starts. Now you can restart the applications at the remote site using the B volumes. Note that the B volumes are Global Copy source volumes in suspended state, which implies that change recording takes place. Later this allows you to re-synchronize from B to A before returning to the local site. You might also decide at this point to create another copy of this consistent set of volumes, creating D volumes, or to preserve this set of consistent volumes on tape. When you create the D volumes, this is just a plain FlashCopy command. Note that this is a full volume FlashCopy, because at a previous step we re-established the FlashCopy relationship between B and C, indicating it was part of a Global Mirror environment. This indication carries the start change recording attribute, and only a single incremental relationship is allowed per source volume, so any additional FlashCopy from the same source volume must be a full volume FlashCopy.
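For example, re-establishing the B to C relationships with the hypothetical IDs used earlier, and omitting -nocp so that a full background copy removes the FlashCopy I/O overhead when the application starts, could look like this sketch:
   mkflash -dev IBM.2107-7500002 -tgtinhibit -record 2000-2003:2100-2103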


22.5.7 Restarting the application at the remote site


At this stage you might restart the application at the remote site and work with the consistent set of B volumes; see Figure 22-13.

Figure 22-13 Restart applications at remote site

Note the suspended state of the B volumes, which implies change recording and indicates the changed tracks on the B volumes. When the local site is about to become ready to restart the applications again and resume the operations, you prepare the remote site for the next step.

22.5.8 Preparing to switch back to the local site


The local site is back and operational again. We assume that the local site did not lose the data at the time when the swap to the remote site was done. It is then possible to re-synchronize the changed data from B to A before restarting the application at the local site. This is accomplished by doing a Failback operation (Copy Services Failback function) from B to A; see Figure 22-14.


Figure 22-14 Failback operation from B to A in preparation for returning to local site

Note that the Failback operation is issued with the B volumes as the source and the A volumes as the target. This command changes the A volumes from their previous source copy pending state to target copy pending and starts the re-synchronization of the changes from B to A.
Note: Before doing the Failback operation, ensure that paths are defined from the remote site LSSs to their corresponding LSSs at the local site. Note that with Fibre Channel links, you can define paths in either direction on the very same FCP link.
During the Failback operation the application stays running at the remote site to minimize the application outage. If the A volume is still online to the server at the local site, or it was online when a crash happened, so that a SCSI persistent reserve is still set on the previous source disk (the A volume), the Global Copy Failback process with the failbackpprc command fails; the server at the production site locks the target with a SCSI persistent reserve. After this SCSI persistent reserve is reset, in this case with the varyoffvg command on AIX, the failbackpprc command completes successfully. There is also a -resetreserve parameter for the failbackpprc command. This option resets the reserved state so that the failback operation can complete. In a Failback operation after a real disaster, you can use this parameter because the server might have gone down while the SCSI persistent reserve was set on the A volume. In a planned failback operation, you must not use this parameter, because the server at the local site still owns the A volume and might be using it, and the Failback operation would suddenly change the contents of the volume. This can cause file system corruption on the server.
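Keeping the hypothetical IDs used earlier (the local WWNN 5005076303FFD456 below is again only an example), the path definition in the reverse direction and the failback could look like this sketch; -resetreserve would be added only in the disaster case just described:
   mkpprcpath -dev IBM.2107-7500002 -remotedev IBM.2107-7500001 -remotewwnn 5005076303FFD456 -srclss 20 -tgtlss 10 I0143:I0010
   failbackpprc -dev IBM.2107-7500002 -remotedev IBM.2107-7500001 -type gcp 2000-2003:1000-1003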

22.5.9 Returning to the local site


When the local site is ready, quiesce the application at the remote site. Then a sequence of Global Copy Failover - Failback operations from A to B will re-establish Global Copy back as it was originally before the local site outage.


Figure 22-15 Global Copy Failover from A to B

Figure 22-15 shows the action at the local site, the Global Copy Failover operation from A to B. This failoverpprc command will change the state of the A volumes from target copy pending to source suspended, and start to keep a bitmap record of the changes to the A volumes. You issue this command to the storage disk subsystem at the local site. The state of the B volumes does not change. When the failover is completed, a failback operation from A to B is run; see Figure 22-16.
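As a sketch with the hypothetical IDs used earlier, this failover is issued to the local storage disk subsystem with the A volumes specified first:
   failoverpprc -dev IBM.2107-7500001 -remotedev IBM.2107-7500002 -type gcp 1000-1003:2000-2003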

Figure 22-16 Global Copy failback from A to B and resync Global Copy volumes

Note: Before doing the Failback operation, ensure that paths are defined from the local site LSS to its corresponding LSS at the remote site.


Figure 22-16 on page 346 shows the Failback operation at the local site. The failbackpprc command changes the state of the A volumes from source suspended to source copy pending. The state of the B volumes changes from source copy pending to target copy pending. Also, the replication of updates from A to B begins. This replication ends quickly because the application has not yet started at the local site. Finally, if you did not already establish the FlashCopy relationships from B to C during the Failover-Failback sequence at the remote site, you have to do it now. This might be an inband FlashCopy, as shown in Figure 22-17. A sketch of these commands follows.
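Continuing the sketch with the same hypothetical IDs, and assuming the paths from the local LSS 10 to the remote LSS 20 are again in place, the failback and the FlashCopy re-establishment could look like this; the FlashCopy could equally be issued inband from the local site with the mkremoteflash command:
   failbackpprc -dev IBM.2107-7500001 -remotedev IBM.2107-7500002 -type gcp 1000-1003:2000-2003
   mkflash -dev IBM.2107-7500002 -tgtinhibit -record -nocp 2000-2003:2100-2103
The Global Mirror session is then restarted with mkgmir, as described next.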

Figure 22-17 Establish Global Mirror FlashCopy relationship between B and C

The last step is to start the Global Mirror session again, as shown in Figure 22-18. Then the application can resume at the local site.

Figure 22-18 Start Global Mirror session and resume application I/O at local site


22.5.10 Conclusions of failover/failback example


This concludes the sequence of steps for swapping to the remote site and coming back to the local site after service is restored there. In particular, the check for a valid Consistency Group after a production site failure is a challenge when you consider a large configuration with many volume pairs. Each command usually addresses a single pair of volumes, so with a large quantity of volume pairs this requires automation, for example, to check all the FlashCopy relationships after a production site failure. Note that for a planned site swap you might not have to check for a valid Consistency Group, because with a proper command sequence, and verification of its successful completion, the possibility of an inconsistent group of C volumes is minimal, if it exists at all. Global Mirror is a two-site solution that can bridge any distance between both sites. There are ready-to-use packages and services available to implement a disaster recovery solution for two-site remote copy configurations. TotalStorage Productivity Center for Replication (TPC for Replication) supports not only Global Copy and Metro Mirror based configurations, but also Global Mirror configurations.


Chapter 23. Global Mirror interfaces


In this chapter we provide an overview of the interfaces you can use to manage and control Global Mirror environments. The information discussed in this chapter can be complemented with the following sources:
Part 2, Interfaces on page 25
Chapter 9, FlashCopy interfaces on page 101
Chapter 19, Global Copy interfaces and examples on page 267
The examples presented in Chapter 25, Global Mirror examples on page 367
The publication IBM System Storage DS8000 Command-Line Interface User's Guide, SC26-7916


23.1 Global Mirror interfaces: Overview


Global Mirror combines Global Copy and FlashCopy, which work together in an autonomic fashion under the control of the DS8000 microcode. There are commands intended for Global Copy and FlashCopy, as well as commands that address Global Mirror sessions. All of them can be managed with the following interfaces:
- DS Command-Line Interface (DS CLI): This interface provides a set of commands that are executed on a workstation that communicates with the DS HMC.
- DS Storage Manager graphical user interface (DS GUI): This is a graphical user interface running in a Web browser. The DS GUI can be accessed using the preinstalled browser on the HMC console, through the DS8000 Element Manager on a TPC server such as the SSPC (for new DS8000 systems with Licensed Machine Code 5.30xx.xx), or, for earlier DS8000 installations, through a supported Web browser on any workstation connected to the HMC console.
- TotalStorage Productivity Center for Replication (TPC for Replication): The TPC Replication Manager server, where TPC for Replication runs, connects to the DS8000. TPC for Replication provides management of DS8000 series business continuance solutions, including FlashCopy, Metro Mirror, and Global Mirror. TPC for Replication is covered in Chapter 6, IBM TotalStorage Productivity Center for Replication on page 43.
This chapter gives an overview of the DS CLI and the DS SM for Global Mirror management.

DS CLI and DS SM similar functions for Global Mirror management


Table 23-1 lists DS CLI commands and equivalent DS SM panel options for Global Mirror management.
Table 23-1 DS CLI commands and equivalent DS SM panel options

Start Global Mirror session
  DS CLI: mkgmir
  DS SM: CS GM Create
Stop Global Mirror session
  DS CLI: rmgmir
  DS SM: CS GM Delete
Pause Global Mirror session
  DS CLI: pausegmir
  DS SM: CS GM Pause
Resume Global Mirror session
  DS CLI: resumegmir
  DS SM: CS GM Resume
Show Global Mirror status
  DS CLI: showgmir
  DS SM: CS GM Properties
Create a Global Mirror session on an LSS
  DS CLI: mksession
  DS SM: CS GM Create or CS GM Modify
Remove a Global Mirror session from an LSS
  DS CLI: rmsession
  DS SM: CS GM Delete or CS GM Modify
Change a Global Mirror session on an LSS
  DS CLI: chsession
  DS SM: CS GM Modify
Display a Global Mirror session on an LSS
  DS CLI: lssession
  DS SM: CS GM View session volumes

Create a complete Global Mirror environment
  DS CLI: mkpprcpath, mkpprc, mkflash, mksession, mkgmir
  DS SM: CS P Create, CS MM Create, CS FC Create, CS GM Create
Fail over Global Mirror
  DS CLI: rmgmir, failoverpprc, lsflash, revertflash or commitflash, reverseflash, mkflash
  DS SM: CS GM Delete, CS MM Failover, CS FC Properties, CS FC Reverse relationship, CS FC Create
Fail back Global Mirror
  DS CLI: mkpprcpath, failbackpprc, lspprc
  DS SM: CS P Create, CS MM Failback, CS MM Properties
Practice Failover Global Mirror
  DS CLI: mkpprcpath, failoverpprc, failbackpprc, resumegmir
  DS SM: CS MM Failover, CS MM Failback, CS GM Resume
Practice Failback Global Mirror
  DS CLI: pausegmir, showgmir, pausepprc, failoverpprc, lsflash, reverseflash, mkflash, failbackpprc, resumegmir
  DS SM: CS GM Pause, CS GM Properties, CS MM Failover, CS FC Properties, CS FC Reverse relationship, CS FC Create, CS MM Failback, CS GM Resume
Clean up a complete Global Mirror environment
  DS CLI: rmgmir, chsession, rmsession, rmflash, rmpprc, rmpprcpath
  DS SM: CS GM Delete, CS FC Delete, CS MM Delete, CS P Delete

CS: Stands for Copy Services menu. GM: Stands for Global Mirror panel. MM: Stands for Metro Mirror panel. P: Stands for Paths panel. FC: Stands for FlashCopy panel.

23.2 DS Command-Line Interface


The DS Command-Line Interface can be used to set up and manage Global Mirror functions. A major usability advantage of the DS CLI is that it uses a common syntax on all platforms, which makes it easier to learn and makes it a common tool for both skilled and less skilled people. Another advantage is that you can script the commands, which saves considerable time when you have to execute several predetermined tasks.


The DS CLI commands are issued over the Ethernet network to the DS Storage Management Console (DS HMC). The DS CLI server resides on the DS HMC, which works as a gateway between the clients and the storage disk subsystem. You can install a DS CLI client on various operating systems to access the DS CLI server. See 16.2, Copy Services network components on page 199, for a detailed description of the DS HMC network environment.

The DS CLI commands that are used for managing a Global Mirror environment fall into the following basic groups:
- Path definition and removal
- Global Copy pair creation, management, and removal
- FlashCopy pair creation, management, and removal
- Global Mirror session definition, management, and removal

The first three groups refer to the same commands you use to manage and control Global Copy and FlashCopy environments. Still, when working in a Global Mirror environment, specific command parameters must be used for some of the tasks. How these commands are used and the options that must be selected is covered in the discussions and examples we present in this part of the book.

For the Global Mirror session itself and its definition, management, modification, and removal, we have the following DS CLI commands:
- showgmir displays detailed properties and performance figures for Global Mirror.
- showgmiroos displays the number of out of synchronization (out of sync) tracks for the session.
- mkgmir starts Global Mirror.
- rmgmir stops Global Mirror.
- pausegmir pauses Global Mirror.
- resumegmir resumes Global Mirror.
- mksession opens a Global Mirror session on an LSS.
- chsession allows you to modify a Global Mirror session on an LSS. You can add and remove volumes for Global Mirror with this command.
- rmsession removes an existing Global Mirror session from an LSS.
- lssession displays a Global Mirror session on an LSS.

For most DS CLI commands, you need to know some (or all) of the following information:
- The serial number and device type of the source and target storage disk subsystems
- The World Wide Node Name (WWNN) of the remote storage disk subsystem
- The LSS numbers of the source and target volumes
- The port IDs for source and target (up to eight port pair IDs can be specified)

A full establishment of a Global Mirror environment using DS CLI commands can take a long time, especially if you have to set up an environment involving many volumes on many LSSs and in several storage disk subsystems. For the setup and management of large-scale environments, TPC for Replication is more appropriate, especially if you want to test disaster recovery scenarios or if you want to run a real disaster recovery plan.

Detailed examples of how to set up and manage a Global Mirror environment using DS CLI commands can be found in Chapter 25, Global Mirror examples on page 367. The DS CLI commands are documented in the publication IBM System Storage DS8000 Command-Line Interface Users Guide, SC26-7916.
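As a simple illustration of the scripting possibilities mentioned above, the following minimal sketch issues the showgmir command at a regular interval so that the Copy State, Current Time, and CG Time fields can be tracked over time. The storage image ID and master LSS are placeholders taken from the examples in Chapter 25, and the DS CLI is assumed to be invoked in its single-command mode with the connection details kept in its profile.

#!/bin/ksh
# Minimal monitoring sketch: report the Global Mirror status at a fixed
# interval. Storage image and master LSS are placeholders; DS CLI
# connection settings are assumed to come from the dscli profile.
DEV=IBM.2107-7520781       # local (master) storage image - placeholder
LSS=$DEV/10                # master LSS - placeholder
INTERVAL=60                # seconds between two status queries

while true
do
    date
    # Copy State shows whether Global Mirror is running; the difference
    # between Current Time and CG Time indicates the current recovery point
    dscli showgmir -dev $DEV $LSS
    sleep $INTERVAL
done

The output of each iteration can be redirected to a file and processed later, for example to review how the recovery point develops during peak load periods.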


23.3 DS Storage Manager GUI


You can use the DS Storage Manager (DS SM) graphical user interface (GUI) to set up and manage some Global Mirror functions. You work with the DS SM GUI through a series of predetermined panels and options that you select to accomplish the desired task. The DS SM operates from a Web session that communicates with the DS Hardware Management Console (DS HMC). See 16.2, Copy Services network components on page 199, for a detailed description of the DS HMC network environment.

The DS SM interface is user friendly and reasonably simple to use, which makes it a good choice for people who have fewer skills or less confidence with the other interfaces. However, it is usually slower than the other interfaces, it cannot be used for automation activities, you cannot save actions for later use, and certain Copy Services functions are not supported from this interface.

For Global Mirror setup activities, you start from the following DS Storage Manager panels:
- For path definitions, start from the Paths panel under the Copy Services menu. You can see this panel in Figure 25-17 on page 418.
- To create Global Copy pairs, start from the Metro Mirror panel under the Copy Services menu. You can see this panel in Figure 25-24 on page 422.
- To create FlashCopy pairs, start from the FlashCopy panel under the Copy Services menu. You can see this panel in Figure 25-32 on page 428.
- For Global Mirror session setup and control, start from the Global Mirror panel under the Copy Services menu of the DS SM; see Table 23-1 on page 350.

Figure 23-1 shows the Global Mirror main panel.

Figure 23-1 Global Mirror main panel


If you do not select any check box for a Global Mirror session, the pull-down list only allows session creation. The pull-down list will look like Figure 23-2. If you are working with a storage disk subsystem that has more than one storage image, then in this panel you can also use the storage complex, storage unit, and storage image pull-down lists to access other storage images with which the Global Mirror session might be operating.

Figure 23-2 Global Mirror pull-down list - no session selected

If you select any check box for a Global Mirror session, the pull-down list allows session creation and management. The pull-down list looks like Figure 23-3.

Figure 23-3 Global Mirror pull-down list - Session selected


Consider that a full establishment of a Global Mirror environment requires using each Copy Services panel in the DS SM; if you need to set up an environment involving many volumes on many LSSs and in several storage disk subsystems, this can take a very long time. The DS SM is therefore more convenient for specific, non-large-scale Copy Services activities, and it is also a good learning tool for users not yet skilled in the other interfaces. For the setup and management of large-scale environments, TPC for Replication is more appropriate, especially if you want to test disaster recovery scenarios or if you want to run a real disaster recovery plan. Detailed examples of how to set up and manage a Global Mirror environment using the DS Storage Manager GUI can be found in Chapter 25, Global Mirror examples on page 367.

23.4 TotalStorage Productivity Center for Replication (TPC-R)


The IBM TotalStorage Productivity Center for Replication is an automated solution to provide a management front-end to Copy Services. In addition to the DS8000, TPC for Replication can also help manage copy services for the DS6000 and the SAN Volume Controller (SVC), as well as the ESS 800 when using FCP links for PPRC paths between ESS 800s or between ESS 800 and DS6000 and DS8000. This also applies to FlashCopy. For a detailed description of TPC for Replication concepts and usage, see Chapter 6, IBM TotalStorage Productivity Center for Replication on page 43.


Chapter 24. Global Mirror performance and scalability


In this chapter we describe performance considerations for planning and configuring Global Mirror for the DS8000. We discuss the potential impact that the three phases of Consistency Group formation might have on application write I/Os. We also cover distributing the B volumes and the C volumes across different ranks, and giving extra attention to very busy volumes.


24.1 Performance aspects for Global Mirror


Global Mirror is basically composed of Global Copy and FlashCopy, and combines both functions to create a solution that provides consistent data at a distant site. This means that, to achieve a satisfactory Recovery Point Objective (RPO) without impacting production too much, performance should be analyzed at the production site, at the recovery site, and between both sites:
- At the production site, even if production has a higher priority than replication, the storage disk subsystem has to handle both the production load and the Global Copy replication load. If the source storage disk subsystem is overloaded by production, the Consistency Group formation process slows down, because it has to wait for the drain of the out-of-sync tracks.
- Between both sites, the bandwidth needs to be sized for production load peaks.
- At the recovery site, even if production is not running there, the subsystem is hosting the target Global Copy volumes and also handling the FlashCopy operations. Therefore, performance of the remote storage disk subsystem also needs some analysis to ensure the best possible RPO.

This section discusses the impact of Global Copy and FlashCopy on the overall performance of Global Mirror. Global Copy has at most only a minimal impact on the response time of an application write I/O to a Global Copy source volume.

In the Global Mirror environment, FlashCopy is used with the nocopy attribute. This implies that, for any write I/Os to the FlashCopy source volume, there are additional internally triggered I/Os in the remote storage disk subsystem. These I/Os preserve the FlashCopy source volume tracks by making a copy to the FlashCopy target volume before the source tracks are updated. This happens every time within the interval between two FlashCopy establish operations.

Note that FlashCopy with the nocopy option does not start a background copy operation; it only maintains a set of FlashCopy bitmaps for the B and C volumes. These bitmaps are established the first time a FlashCopy relationship is created with the nocopy option. Before a source track is modified between the creation of two Consistency Groups, the track is copied to the target C volume to preserve the previous point-in-time copy. This includes updates to the corresponding bitmaps to reflect the new location of the track that belongs to the point-in-time copy. Each Global Copy write to its target B volume within the window of two adjacent Consistency Groups causes such a FlashCopy I/O operation. See Figure 24-1.

Figure 24-1 Global Copy with write hit at the remote site

Note: What Figure 24-1 and Figure 24-2 show is not necessarily the exact sequence of internal I/O events, but a logical view approximation. There are internal microcode optimization and consolidation techniques that make the entire process more efficient.


An I/O in this context is also a Global Copy I/O when Global Copy replicates a track from a Global Copy source volume to a Global Copy target volume. This implies an additional read I/O in the source storage disk subsystem at the local site and a corresponding write I/O to the target storage disk subsystem at the remote site. In a Global Mirror environment, the Global Copy target volume is also the FlashCopy source volume. This can have some effect on a very busy disk subsystem.

Figure 24-1 on page 358 roughly summarizes what happens between two Consistency Group creation points when application writes come in. The application write I/O completes immediately at the local site (1). Eventually Global Copy replicates the application I/O and reads the data at the local site (2). Because of the FlashCopy nocopy attribute, before the track gets updated on the B volume it is first preserved on the C volume. What is shown in this diagram is the normal sequence of I/Os within a Global Mirror configuration. The write (3) to the B1 volume is usually a write hit in the persistent memory (NVS) of the target disk subsystem, so this is an instant operation. Eventually, at some later moment, the track is copied from B1 to C1 so it can be preserved and the update in NVS can be destaged to the B1 volume.

There is some potential impact on the Global Copy data replication operation when persistent memory or non-volatile cache is overcommitted in the target storage disk subsystem. This might cause the FlashCopy source tracks to have to be preserved first on the target volume C1 before the Global Copy write is completed. See Figure 24-2.

Figure 24-2 Global Copy with overcommitted NVS at the remote site

Figure 24-2 roughly summarizes what happens when persistent memory (NVS) in the remote storage disk subsystem is overcommitted. A read (3) and a write (4) to preserve the FlashCopy source track on the C1 volume are required before the write (5) can complete. Then, when the track is updated on the B1 volume, the write (5) operation completes. Nevertheless, what you should usually see are quick writes to cache and persistent memory, as Figure 24-1 on page 358 outlines.

All write I/Os to FlashCopy source volumes also trigger updates to the bitmap that was created when the FlashCopy volume pair was established with the change recording attribute. This allows only the change recording information to be applied to the corresponding bitmap for the target volume in the course of forming a Consistency Group. See 21.4, Consistency Groups on page 319.

24.2 Performance considerations at coordination time


When looking at the three phases that Global Mirror goes through to create a set of data consistent volumes at the remote site, the first question that comes to mind is whether or not the coordination window imposes an impact on the application write I/O; see Figure 24-3.


Figure 24-3 Coordination time: How it impacts application write I/Os

The coordination time, which you can limit by specifying a number of milliseconds, is the maximum impact to application write I/Os that is allowed when forming a Consistency Group. The intention is to keep this time window as small as possible. The default of 50 ms might be a bit high in a transaction processing environment; a valid number might be a very low value in the single-digit range. The required communication between the master storage disk subsystem and the subordinate disk subsystems is inband, over the paths between the master and the subordinates. This communication is highly optimized and allows you to limit the potential application write I/O impact to, for example, 3 ms. Note that this communication is performed over FCP links. At least one FCP link is required between a master storage disk subsystem and a subordinate; for redundancy, we suggest using two FCP links.

The following example illustrates the impact of the coordination time when Consistency Group formation starts, and whether this impact has the potential to be significant. Assume a total aggregate rate of 5000 write I/Os per second over two source disk subsystems, with 2500 write I/Os per second to each disk subsystem. Each write I/O takes 0.5 ms. You specified a maximum of 3 ms to coordinate between the master and the subordinate disk subsystems. Assume further that a Consistency Group is created every 3 seconds, which is the goal set with a Consistency Group interval time of zero. To summarize:
- 5000 write I/Os per second
- 0.5 ms response time for each write I/O
- Maximum coordination time of 3 ms
- A Consistency Group created every 3 seconds

This is 5 I/Os every millisecond, or 15 I/Os within 3 ms. So each of these 15 write I/Os experiences a 3 ms delay, and this happens every 3 seconds. We therefore observe an average response time delay of approximately:

(15 I/Os x 0.003 sec) / (3 sec x 5000 I/Os per sec) = 0.000003 sec = 0.003 ms

The response time increases on average from 0.5 ms to 0.503 ms.
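The arithmetic above can be generalized. If we assume a uniform write arrival rate and that every write arriving during the coordination window is delayed by the full coordination time, the average added response time is approximately (coordination time x coordination time) / (Consistency Group interval), independent of the write rate. With a 3 ms coordination time and a Consistency Group formed every 3 seconds, this gives (3 ms x 3 ms) / 3000 ms = 0.003 ms, which matches the figure above. This is only a rough approximation intended to show the order of magnitude of the effect.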


24.3 Consistency Group drain time


When the Consistency Group has been set at the source disk subsystem by setting the corresponding bitmaps within the coordination time window, all remaining data that is still reflected in the out-of-sync (OOS) bitmap is sent (drained) by Global Copy to the target disk subsystem. This drain period can be limited to a maximum drain time; the default is 30 seconds, which is an appropriate value for most environments. This replication process usually does not impact application write I/O.

There is a small possibility that a track belonging to the Consistency Group is updated by the application before it has been replicated to the remote site within this drain time period. When this unlikely event happens, the track is immediately replicated to the target disk subsystem before the application write I/O modifies the original track. The involved application write I/O experiences a response time delay similar to a write to a Metro Mirror source volume. Note that subsequent writes to this same track do not experience any delay, because the track has already been replicated to the remote site.
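As a rough illustration of what the maximum drain time implies for bandwidth planning: because each out-of-sync track on an FB volume represents 64 KB, draining, for example, 4 GB of out-of-sync data within the default 30-second maximum drain time requires an effective replication bandwidth of roughly 4 GB / 30 sec, which is approximately 135 MB/s, in addition to the ongoing application write traffic. This is a simplified calculation that ignores protocol overhead and link sharing; it is meant only to show how the remaining out-of-sync data, the drain time, and the available bandwidth relate to each other.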

24.4 Remote storage disk subsystem configuration


There will be I/O skews and hot spots in the storage disk subsystems. This is true for the local and remote disk subsystems. In local disk subsystems, you might consider a horizontal pooling approach and spread each volume type across all ranks. Volume types in this context are, for example, DB2 database volumes, logging volumes, batch volumes, and temporary volumes. Starting with DS8000 LIC Release 3, Space Efficient FlashCopy is being offered in addition to traditional IBM FlashCopy. The performance characteristics are different for FlashCopy SE and classical FlashCopy. In the following sections, we discuss both options starting with classical FlashCopy.

Logical configurations with Classical FlashCopy


Through a one-to-one mapping from a local to a remote disk subsystem, you achieve the same configuration at the remote site for the B volumes and the C volumes. Figure 24-4 proposes spreading the B and C volumes across different ranks. In such a symmetrical setup, your goal might be to have the same amount of each volume type within each rank; volume type here refers to the B volumes and C volumes within a Global Mirror configuration.

To avoid performance bottlenecks, you should spread busy volumes over multiple ranks. Hot spots can otherwise concentrate on single ranks when you put related B and C volumes on the very same rank. We recommend spreading the B and C volumes as Figure 24-4 suggests. Another approach is to focus on the very busy volumes and keep these volumes on ranks separate from all of the other volumes.


Figure 24-4 Remote disk subsystem with all ranks containing equal numbers of volumes

With mixed Disk Drive Module (DDM) capacities and different speeds at the remote storage disk subsystem, you might consider spreading B volumes, not only over the fast DDMs, but over all ranks. Basically, follow a similar approach, as we recommend in Figure 24-4. You can keep the especially busy B volumes and C volumes on the faster DDMs. Figure 24-5 shows a configuration that incorporates the D volumes, which you can create once in a while for testing or other purposes.

Figure 24-5 Remote disk subsystem with D volumes


For a situation like the one illustrated in Figure 24-5, we suggest, as an alternative, a rank with larger and perhaps slower DDMs. The D volumes can be read from another host, and any other I/O to the D volumes does not impact the Global Mirror volumes in the other ranks. Note that if you use the nocopy option for the relationship between the B and D volumes, read I/Os to the D volumes will read the data from B. For this circumstance you might consider using the copy option instead, thus preventing additional I/O to the ranks with the B volumes. However, in this case, until the background copy between B and D completes, there might be some impact on the Global Mirror data transfer.

An option here might be to spread all B volumes across all ranks again and also configure the same number of volumes in each rank, but still put the B and C volumes in different ranks. We further recommend that you configure corresponding B and C volumes in such a way that these volumes have an affinity to the same server. Ideally, the B volumes are also connected to a different DA pair than the C volumes.

Logical configurations with Space Efficient FlashCopy


FlashCopy SE is characterized as a FlashCopy relationship in which the target volume is a Space Efficient volume. These volumes are physically allocated in a data repository; one repository volume per extent pool provides the physical storage for all Space Efficient volumes in that extent pool. Figure 24-6 shows an example of a Global Mirror setup with FlashCopy SE. In this example, the FlashCopy targets use a common repository.

Figure 24-6 Remote disk subsystem with Space Efficient FlashCopy target volumes

FlashCopy SE is optimized for use cases where less than 20% of the source volume is updated during the life of the relationship. In most cases, Global Mirror is configured to schedule Consistency Group creation at an interval of a few seconds. This means that a small amount of data is copied to the FlashCopy targets. From this point of view, Global Mirror is a recommended area of application for FlashCopy SE.


In contrast, standard FlashCopy generally has superior performance to FlashCopy SE. The FlashCopy SE repository is critical to performance. When provisioning a repository, Storage Pool Striping is automatically used with a multi-rank extent pool to balance the load across the available disks. In general, we recommend that the extent pool contain a minimum of four RAID arrays. Depending on the logical configuration of the DS8000, you might also consider using multiple Space Efficient repositories for the FlashCopy target volumes in a Global Mirror environment.

Note that the repository extent pool can also contain additional non-repository volumes; contention can arise if the extent pool is shared. After the repository is defined, it cannot be expanded, so it is important to plan carefully to make sure it will be large enough. If the repository fills, the FlashCopy SE relationships fail, and Global Mirror is no longer able to successfully create Consistency Groups.

24.5 Balancing the disk subsystem configuration


In this section we show examples of how to gather information and analyze how well the storage disk subsystem is balanced in relation to the I/O load. Example 24-1 shows showgmiroos command output, which displays OutOfSyncTracks for the storage image scope as well as for two LSS scopes.

The first showgmiroos command, with the -scope si parameter, shows that some out-of-sync tracks remain within the disk subsystem. The second showgmiroos command, with the -scope lss parameter, shows OutOfSyncTracks with a value of 0 (zero); this means that for LSS 10 there are no tracks remaining at the local site that have not yet been transferred to the remote site. This is different for LSS 11, where there is still data that has not yet been transferred. The third showgmiroos command shows that there are 67,847 tracks, or roughly 4.14 GB, still waiting to be replicated from the local to the remote site. This situation denotes a significant write I/O skew between LSS 10 and LSS 11.
Example 24-1 Out Of Sync Tracks shown by the showgmiroos command

dscli> showgmiroos -dev IBM.2107-7520781 -lss 10 -scope si 02


Date/Time: November 9, 2005 1:01:54 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781

Scope IBM.2107-7520781 Session 02 OutOfSyncTracks 73125 dscli> showgmiroos -dev IBM.2107-7520781 -lss 10 -scope lss 02
Date/Time: November 9, 2005 1:02:00 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781

Scope IBM.2107-7520781/10 Session 02 OutOfSyncTracks 0 dscli> showgmiroos -dev IBM.2107-7520781 -lss 11 -scope lss 02
Date/Time: November 9, 2005 1:02:04 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781

Scope IBM.2107-7520781/11 Session 02 OutOfSyncTracks 67847 dscli>


Example 24-2 shows the lspprc command that reports the Out Of Sync Tracks by volume level. Here we see that two volumes in LSS 10 have no Out Of Sync Tracks and two volumes in LSS 11 have some number of Out Of Sync Tracks.
Example 24-2 Out Of Sync Tracks shown by the lspprc command

dscli> lspprc -l 1000-1001 1100-1101


Date/Time: November 9, 2005 1:02:10 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
ID State Reason Type Out Of Sync Tracks Tgt Read Src Cascade Tgt Cascade Date Suspended SourceLSS ==================================================================================================================== 1000:2000 Copy Pending Global Copy 0 Disabled Disabled invalid 10 1001:2001 Copy Pending Global Copy 0 Disabled Disabled invalid 10 1100:2100 Copy Pending Global Copy 32478 Disabled Disabled invalid 11 1101:2101 Copy Pending Global Copy 32404 Disabled Disabled invalid 11 dscli>

When the load distribution in your configuration is unknown, you could consider developing some rudimentary script-based code that regularly issues the showgmiroos command (as shown in Example 24-1) and the lspprc -l command (as shown in Example 24-2); a minimal sketch of such a script follows. You can then process the output of these commands to better understand the write load distribution over the Global Copy source volumes. Note that the numbers in Example 24-1 might represent just a brief peak period. It is still feasible to use the conventional approach with I/O performance reports, such as iostat in the UNIX environment, to investigate the write workload. TotalStorage Productivity Center for Disk can also be used to analyze the storage disk subsystem performance.
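The following is a minimal sketch of such a script. It periodically records the out-of-sync track counts per LSS, and per volume, so that the write load distribution can be reviewed afterwards. The storage image ID, session ID, LSS list, and volume ranges are placeholders taken from Example 24-1 and Example 24-2, and the DS CLI is assumed to be invoked in its single-command mode with the connection details kept in its profile.

#!/bin/ksh
# Rudimentary sketch: sample the out-of-sync track counts at a fixed
# interval and append them to a log file for later analysis of the
# write skew. All identifiers below are placeholders.
DEV=IBM.2107-7520781        # local storage image
SESSION=02                  # Global Mirror session ID
LSSLIST="10 11"             # Global Copy source LSSs
VOLUMES="1000-1001 1100-1101"
LOG=/tmp/gmir_oos.log
INTERVAL=300                # seconds between samples

while true
do
    date >> $LOG
    for LSS in $LSSLIST
    do
        # remaining out-of-sync tracks for this LSS (1 track = 64 KB)
        dscli showgmiroos -dev $DEV -lss $LSS -scope lss $SESSION >> $LOG
    done
    # per-volume view of the same information
    dscli lspprc -dev $DEV -l $VOLUMES >> $LOG
    sleep $INTERVAL
done

A log produced this way shows whether particular LSSs or volumes consistently carry most of the out-of-sync tracks, which is the write skew discussed above.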

24.6 Growth within Global Mirror configurations


When you add a rather large number of volumes at once to an existing Global Mirror session, the available resources for Global Copy within the affected ranks might be over-utilized, or even monopolized, by the initial copy pass. To limit this impact, consider adding a large number of new volumes to an existing Global Mirror session in stages. As a rough rule of thumb, add only one or two volumes per rank during application peak I/O periods, and when their first initial copy pass is complete, add the next few volumes. If possible, plan a massive add of new Global Copy volumes into an existing session during off-peak periods. A minimal sketch of such a staged approach follows.
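The following sketch illustrates a staged approach with the DS CLI. It creates the Global Copy pairs for a small group of new volumes, waits, and only then joins the new A volumes to the running session before continuing with the next group. All device IDs, LSS numbers, session ID, and volume ranges are placeholders; the wait is a simple fixed sleep rather than a real check that the first pass has completed (which you could do, for example, by verifying that lspprc -l reports 0 out-of-sync tracks), and the B to C FlashCopy relationships for the new volumes are assumed to be created separately with mkflash or mkremoteflash.

#!/bin/ksh
# Minimal sketch of adding new volumes to an existing Global Mirror
# session in stages. All identifiers are placeholders; DS CLI connection
# settings are assumed to come from the dscli profile.
LOCAL=IBM.2107-7520781      # local storage image
REMOTE=IBM.2107-75ABTV1     # remote storage image
SESSION=02                  # Global Mirror session ID
LSS=10                      # LSS of the new source volumes

# small groups of new source:target volume pairs, added one group at a time
for GROUP in 1002-1003:2002-2003 1004-1005:2004-2005
do
    # create the Global Copy pairs for this group only
    dscli mkpprc -dev $LOCAL -remotedev $REMOTE -type gcp $GROUP

    # crude wait for the first (initial) copy pass of this group;
    # a real script would poll lspprc -l until Out Of Sync Tracks is 0
    sleep 1800

    # join the new A volumes to the running Global Mirror session
    SRCVOLS=${GROUP%%:*}
    dscli chsession -dev $LOCAL -lss $LOCAL/$LSS -action add -volume $SRCVOLS $SESSION
done

Remember that the corresponding B to C FlashCopy relationships (with the -record, -nocp, and -tgtinhibit parameters) must also exist for the new volumes before they can participate in Consistency Group formation.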


Chapter 25. Global Mirror examples


In this chapter we provide examples that illustrate how to set up and manage a Global Mirror environment using the DS CLI and the DS Storage Manager GUI. The examples describe how to:
- Set up Global Mirror (paths, volume pairs, session)
- Manage Global Mirror
- Remove the environment
- Manage a site swap from the production to the recovery site and back

For the C volumes, Space Efficient volumes were used in the DS Storage Manager GUI example and normal volumes in the DS CLI example. The information discussed in this chapter is complemented with the publication IBM System Storage DS8000 Command-Line Interface Users Guide, SC26-7916.


25.1 Setting up a Global Mirror environment using the DS CLI


In this section we present an example of how to set up a Global Mirror environment using the DS CLI. Figure 25-1 shows the configuration we are going to implement. In this configuration, different LSS and LUN numbers were used across the A, B, and C components, so that you can unambiguously identify every element referenced in the discussion.

Note: In a real environment, and differently from our example, it is better to maintain a symmetrical configuration in terms of both physical and logical elements, to simplify the management of your Global Mirror environment.
Figure 25-1 DS8000 configuration in the Global Mirror example

25.1.1 Preparing to work with the DS CLI


Before starting the tasks to configure the Global Mirror environment, we recommend that you first do an initial DS CLI setup similar to the one explained in 16.5.1, Preparing to work with the DS CLI on page 202. This initial DS CLI setup will allow a simpler syntax of the commands you will be using to configure the Global Mirror environment.

25.1.2 Configuration used for the environment


Figure 25-1 shows the configuration used for this example. The configuration has four A volumes residing in two LSSs on DS8000#1, four B volumes residing in two LSSs on DS8000#2, and four C volumes residing in two other LSSs also on DS8000#2. Two paths are defined for each Global Copy source and target LSS pair (LSS10:LSS20 and LSS11:LSS21). We start the Global Mirror master in LSS10.


25.1.3 Setup procedure


The sequence of steps for the creation of a Global Mirror environment is not completely fixed and allows for some variation. Still, we recommend that you follow this procedure:
1. Create Global Copy relationships (A to B volumes).
2. Create FlashCopy relationships (B to C volumes).
3. Start the Global Mirror session.

25.1.4 Creating Global Copy relationships: A to B volumes


The first step of the procedure is to create the Global Copy relationships between the A and the B volumes. For this, you must do the following steps:
1. Determine the available FCP links between the local and the remote disk subsystems.
2. Create the Global Copy paths between the local and the remote LSSs.
3. Create the Global Copy volume pairs.
4. Wait until the first copy of the Global Copy pairs completes.

This procedure is followed in Example 25-1 where you can see the sequence of commands and the corresponding results. Note that the tasks you must perform to create the Global Copy relationships in a Global Mirror environment are similar to what has been presented in Part 5, Global Copy on page 251. You can refer to Setup of Global Copy configuration on page 271.
Example 25-1 Create Global Copy pairs relationships (A to B) << Determine the available fibre links >> dscli> lsavailpprcport -l -remotedev IBM.2107-75ABTV1 -remotewwnn 5005076303FFC663 10:20
Date/Time: November 9, 2005 11:13:47 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 Local Port Attached Port Type Switch ID Switch Port =================================================== I0143 I0010 FCP NA NA I0213 I0140 FCP NA NA dscli>

dscli> lsavailpprcport -l -remotedev IBM.2107-75ABTV1 -remotewwnn 5005076303FFC663 11:21


Date/Time: November 9, 2005 11:13:50 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 Local Port Attached Port Type Switch ID Switch Port =================================================== I0143 I0010 FCP NA NA I0213 I0140 FCP NA NA dscli>

<< Create paths >> dscli> mkpprcpath -remotedev IBM.2107-75ABTV1 -remotewwnn 5005076303FFC663 -srclss 10 -tgtlss 20 i0143:i0010 i0213:i0140
Date/Time: November 9, 2005 11:14:05 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00149I mkpprcpath: Remote Mirror and Copy path 10:20 successfully established. dscli>

dscli> mkpprcpath -remotedev IBM.2107-75ABTV1 -remotewwnn 5005076303FFC663 -srclss 11 -tgtlss 21 i0143:i0010 i0213:i0140
Date/Time: November 9, 2005 11:14:17 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00149I mkpprcpath: Remote Mirror and Copy path 11:21 successfully established. dscli>

dscli> lspprcpath 10-11


Date/Time: November 9, 2005 11:14:41 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 Src Tgt State SS Port Attached Port Tgt WWNN ========================================================= 10 20 Success FF20 I0143 I0010 5005076303FFC663 10 20 Success FF20 I0213 I0140 5005076303FFC663 11 21 Success FF21 I0143 I0010 5005076303FFC663


11 21 Success FF21 I0213 I0140 5005076303FFC663
dscli>

<< Create Global Copy pairs >> dscli> mkpprc -remotedev IBM.2107-75ABTV1 -type gcp 1000-1001:2000-2001 1100-1101:2100-2101
Date/Time: November 9, 2005 11:15:20 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1000:2000 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1001:2001 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1100:2100 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1101:2101 successfully created.
dscli>

dscli> lspprc -l 1000-1001 1100-1101


Date/Time: November 9, 2005 11:15:36 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
ID State Reason Type Out Of Sync Tracks Tgt Read Src Cascade Tgt Cascade Date Suspended SourceLSS
====================================================================================================================
1000:2000 Copy Pending Global Copy 44383 Disabled Disabled invalid 10
1001:2001 Copy Pending Global Copy 44374 Disabled Disabled invalid 10
1100:2100 Copy Pending Global Copy 52920 Disabled Disabled invalid 11
1101:2101 Copy Pending Global Copy 52886 Disabled Disabled invalid 11
dscli>

<< wait to see that the Out Of Sync Tracks shows 0 >> dscli> lspprc -l 1000-1001 1100-1101
Date/Time: November 9, 2005 11:18:22 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
ID State Reason Type Out Of Sync Tracks Tgt Read Src Cascade Tgt Cascade Date Suspended SourceLSS
=====================================================================================================================
1000:2000 Copy Pending Global Copy 0 Disabled Disabled invalid 10
1001:2001 Copy Pending Global Copy 0 Disabled Disabled invalid 10
1100:2100 Copy Pending Global Copy 0 Disabled Disabled invalid 11
1101:2101 Copy Pending Global Copy 0 Disabled Disabled invalid 11
dscli>

25.1.5 Creating FlashCopy relationships: B to C volumes


To create the FlashCopy relationships between the B and C volumes, use the mkflash or the mkremoteflash command with the following parameters: -tgtinhibit, -record, and -nocp. The -persist parameter is automatically selected when the -record parameter is selected, so you do not specify the -persist parameter explicitly. Following is a brief explanation of each parameter; see also 21.3.4, Introducing FlashCopy on page 316:
- -tgtinhibit: Prevents host system writes to the target while the FlashCopy relationship exists.
- -record: Keeps a record of the tracks that were modified on both volumes within a FlashCopy pair. Select this parameter when you create an initial FlashCopy volume pair that you intend to use with the resyncflash command.
- -nocp: Inhibits background copy. Data is copied from the source volume to the target volume only if a track on the source volume is modified.
- -persist: Keeps the FlashCopy relationship until it is explicitly or implicitly terminated.

Depending on your network environment, you can give the FlashCopy command to the local DS8000#1 for its inband transmission to the remote DS8000#2; in this case you use the mkremoteflash command. Alternatively, if you have connectivity to the remote DS8000#2, you can give the mkflash command directly to the DS8000#2.

In our example we use the inband functionality of FlashCopy, in which case we have to specify the LSS having the A volume for the -conduit parameter and the storage image ID at the remote site for the -dev parameter; see Example 25-2. You have to give this command to the DS HMC connected to the local DS8000#1. Because the -nocp parameter is specified and the Global Copy initial copy (first pass) completed in the previous step, no FlashCopy background copy occurs at this time. Note: You can create this FlashCopy relationship before the initial copy of Global Copy. However, to avoid unnecessary FlashCopy background I/Os, we do not recommend this.
Example 25-2 Create FlashCopy relationship - B to C volumes
dscli> mkremoteflash -tgtinhibit -nocp -record -conduit IBM.2107-7520781/10 -dev IBM.2107-75ABTV1 2000-2001:2200-2201 Date/Time: November 10, 2005 12:18:13 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 CMUC00173I mkremoteflash: Remote FlashCopy volume pair 2000:2200 successfully created. Use the lsremoteflash command to determine copy completion. CMUC00173I mkremoteflash: Remote FlashCopy volume pair 2001:2201 successfully created. Use the lsremoteflash command to determine copy completion. dscli> dscli> mkremoteflash -tgtinhibit -nocp -record -conduit IBM.2107-7520781/11 -dev IBM.2107-75ABTV1 2100-2101:2300-2301 Date/Time: November 10, 2005 12:18:27 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 CMUC00173I mkremoteflash: Remote FlashCopy volume pair 2100:2300 successfully created. Use the lsremoteflash command to determine copy completion. CMUC00173I mkremoteflash: Remote FlashCopy volume pair 2101:2301 successfully created. Use the lsremoteflash command to determine copy completion. dscli>

dscli> lsremoteflash -l -conduit IBM.2107-7520781/10 -dev IBM.2107-75ABTV1 2000-2001


Date/Time: November 10, 2005 12:18:40 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
ID SrcLSS SequenceNum ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy OutOfSyncTracks ========================================================================================================================================== 2000:2200 20 0 Disabled Enabled Enabled Disabled Enabled Disabled Disabled 61036 2001:2201 20 0 Disabled Enabled Enabled Disabled Enabled Disabled Disabled 61036 dscli>

dscli> lsremoteflash -l -conduit IBM.2107-7520781/11 -dev IBM.2107-75ABTV1 2100-2101


Date/Time: November 10, 2005 12:18:56 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
ID SrcLSS SequenceNum ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy OutOfSyncTracks ========================================================================================================================================== 2100:2300 21 0 Disabled Enabled Enabled Disabled Enabled Disabled Disabled 61036 2101:2301 21 0 Disabled Enabled Enabled Disabled Enabled Disabled Disabled 61036 dscli>

25.1.6 Starting Global Mirror


To start the Global Mirror operation, we must perform the following steps:
1. Define the Global Mirror session on the involved LSSs (master and subordinates).
2. Add the A volumes to the session.
3. Start the Global Mirror session.

Define the Global Mirror session on the involved LSSs


The mksession command defines the Global Mirror session to the specified LSSs. You do this to all the LSSs that are going to be involved in the Global Mirror session, master and subordinates. You can verify the results with the lssession command. In our example, we have two LSSs in the local DS8000 that are going to participate in the Global Mirror environment, LSS10 and LSS11. Therefore, we give the mksession command twice and in each occasion we use the -lss parameter to specify the selected LSS. You also specify the Global Mirror session ID with this command. This session ID is used when you start Global Mirror in a later step. In our example we specify 02 for the Global Mirror session ID. Example 25-3 shows the commands mksession and lssession we used in our example.

Example 25-3 Open the Global Mirror session on each LSS dscli> mksession -dev IBM.2107-7520781 -lss IBM.2107-7520781/10 02 Date/Time: November 10, 2005 1:38:32 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00145I mksession: Session 02 opened successfully. dscli> dscli> mksession -dev IBM.2107-7520781 -lss IBM.2107-7520781/11 02 Date/Time: November 10, 2005 1:38:41 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00145I mksession: Session 02 opened successfully. dscli> dscli> lssession -l IBM.2107-7520781/10 Date/Time: November 10, 2005 1:46:08 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 LSS ID Session Status Volume VolumeStatus PrimaryStatus SecondaryStatus FirstPassComplete AllowCascading ======================================================================================================== 10 02 dscli> dscli> lssession -l IBM.2107-7520781/11 Date/Time: November 10, 2005 1:46:12 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 LSS ID Session Status Volume VolumeStatus PrimaryStatus SecondaryStatus FirstPassComplete AllowCascading ======================================================================================================== 11 02 dscli>

Add the A volumes to the session


The next step is to add the A volumes to the session that was defined in the previous step. For this we use the chsession -action add -volume command, and we verify the results with the lssession command. Example 25-4 shows the commands we used in our example.
Example 25-4 Add the A volumes to the session on each LSS dscli> chsession -dev IBM.2107-7520781 -lss IBM.2107-7520781/10 -action add -volume 1000-1001 02 Date/Time: November 10, 2005 1:53:58 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00147I chsession: Session 02 successfully modified. dscli> dscli> chsession -dev IBM.2107-7520781 -lss IBM.2107-7520781/11 -action add -volume 1100-1101 02 Date/Time: November 10, 2005 1:54:20 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00147I chsession: Session 02 successfully modified. dscli> dscli> lssession -l IBM.2107-7520781/10 Date/Time: November 10, 2005 1:54:40 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
LSS ID Session Status Volume VolumeStatus PrimaryStatus SecondaryStatus FirstPassComplete AllowCascading ================================================================================================================= 10 02 Normal 1000 Join Pending Primary Copy Pending Secondary Simplex True Disable 10 02 Normal 1001 Join Pending Primary Copy Pending Secondary Simplex True Disable dscli>

dscli> lssession -l IBM.2107-7520781/11 Date/Time: November 10, 2005 1:54:44 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
LSS ID Session Status Volume VolumeStatus PrimaryStatus SecondaryStatus FirstPassComplete AllowCascading ================================================================================================================= 11 02 Normal 1100 Join Pending Primary Copy Pending Secondary Simplex True Disable 11 02 Normal 1101 Join Pending Primary Copy Pending Secondary Simplex True Disable

With the chsession command, you specify the Global Mirror session ID (02 in our example) and the volumes that are to be part of the Global Mirror environment. After the A volumes are added to the session, the lssession command shows the volume IDs with their state in the LSS. At this point in the procedure, the volume state is Join Pending. This means that the volumes are not yet active for the current session; we have not yet started the Global Mirror session.


Note: At this step we do not have to do anything to add the B and C volumes to the Global Mirror session. They are automatically recognized by the Global Mirror mechanism through the Global Copy relationships and the FlashCopy relationships. In addition to the chsession command, you can also add the A volumes with the mksession command when you define the Global Mirror session on a LSS; see Example 25-5.
Example 25-5 Add the A volumes when you create a Global Mirror session
dscli> mksession -dev IBM.2107-7520781 -lss IBM.2107-7520781/10 -volume 1000-1001 02 Date/Time: November 10, 2005 2:16:50 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00145I mksession: Session 02 opened successfully. dscli> dscli> mksession -dev IBM.2107-7520781 -lss IBM.2107-7520781/11 -volume 1100-1101 02 Date/Time: November 10, 2005 2:17:02 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00145I mksession: Session 02 opened successfully. dscli> dscli> lssession -l IBM.2107-7520781/10 Date/Time: November 10, 2005 2:17:09 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 LSS ID Session Status Volume VolumeStatus PrimaryStatus SecondaryStatus FirstPassComplete AllowCascading ================================================================================================================= 10 02 Normal 1000 Join Pending Primary Copy Pending Secondary Simplex True Disable 10 02 Normal 1001 Join Pending Primary Copy Pending Secondary Simplex True Disable dscli> dscli> lssession -l IBM.2107-7520781/11 Date/Time: November 10, 2005 2:17:13 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 LSS ID Session Status Volume VolumeStatus PrimaryStatus SecondaryStatus FirstPassComplete AllowCascading ================================================================================================================= 11 02 Normal 1100 Join Pending Primary Copy Pending Secondary Simplex True Disable 11 02 Normal 1101 Join Pending Primary Copy Pending Secondary Simplex True Disable

Starting the Global Mirror session: When there are no subordinates


Now we can start the Global Mirror session, so the process of Consistency Group formation begins. For this we use the mkgmir command. The results can be verified with the showgmir command; see Example 25-6.
Example 25-6 Start Global Mirror session 02 dscli> mkgmir -dev IBM.2107-7520781 -lss IBM.2107-7520781/10 -session 02 Date/Time: November 10, 2005 2:23:12 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00162I mkgmir: Global Mirror for session 02 successfully started. dscli> dscli> showgmir -dev IBM.2107-7520781 IBM.2107-7520781/10 Date/Time: November 10, 2005 2:23:29 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID IBM.2107-7520781/10 Master Count 1 Master Session ID 0x02 Copy State Running Fatal Reason Not Fatal CG Interval (seconds) 0 XDC Interval(milliseconds) 50 CG Drain Time (seconds) 30 Current Time 11/10/2005 02:27:51 JST CG Time 11/10/2005 02:27:50 JST Successful CG Percentage 100 FlashCopy Sequence Number 0x43723196 Master ID IBM.2107-7520781 Subordinate Count 0 Master/Subordinate Assoc -


In the mkgmir command, the LSS that we specify with the -lss parameter becomes the master; in our example it is LSS10. With this command we also specify the session ID of the Global Mirror session we are starting. When you start the Global Mirror session, you can also change the Global Mirror tuning parameters of the session with the mkgmir command:
- -cginterval: Specifies how long to wait between the formation of Consistency Groups. If this number is not specified or is set to zero, Consistency Groups are formed continuously.
- -coordinate: Indicates the maximum time that Global Mirror processing can hold host I/Os in the source disk subsystem to start forming a Consistency Group.
- -drain: Specifies the maximum amount of time in seconds allowed for the data to drain to the remote site before failing the current Consistency Group.
For additional discussion about the tuning parameters, refer to 21.4.2, Consistency Group parameters on page 321 and Chapter 24, Global Mirror performance and scalability on page 357.

The showgmir command shows the current Global Mirror status. The Copy State field indicates Running, which means that Global Mirror is operating satisfactorily. A Fatal state would have indicated that Global Mirror failed, and the Fatal Reason field would have shown the reason for the failure. The showgmir command also shows the current time in the Current Time field, which is the time when the DS8000 received this command. The time when the last successful Consistency Group was formed is shown in the CG Time field. You can obtain the current Recovery Point Objective (RPO) for this Global Mirror session from the difference between the Current Time and the CG Time.

From the lssession command in Example 25-7 you can see that, after starting the Global Mirror session, the VolumeStatus of the A volumes changes from Join Pending to Active.
Example 25-7 The A volumes status after starting the Global Mirror session dscli> lssession 10-11
Date/Time: November 11, 2005 5:21:48 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 LSS ID Session Status Volume VolumeStatus PrimaryStatus SecondaryStatus FirstPassComplete AllowCascadin ======================================================================================================================== 10 02 CG In Progress 1000 Active Primary Copy Pending Secondary Simplex True Disable 10 02 CG In Progress 1001 Active Primary Copy Pending Secondary Simplex True Disable 11 02 CG In Progress 1100 Active Primary Copy Pending Secondary Simplex True Disable 11 02 CG In Progress 1101 Active Primary Copy Pending Secondary Simplex True Disable

If we use the -metrics parameter with the showgmir command, we can obtain metrics for Global Mirror after we have started the session; see Example 25-8.
Example 25-8 The showgmir command with -metrics parameter dscli> showgmir -metrics -dev IBM.2107-7520781 IBM.2107-7520781/10
Date/Time: November 11, 2005 5:23:59 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781

ID                            IBM.2107-7520781/10
Total Failed CG Count         0
Total Successful CG Count     317
Successful CG Percentage      100
Failed CG after Last Success  0
Last Successful CG Form Time  11/11/2005 17:28:49 JST
Coord. Time (seconds)         50
Interval Time (seconds)       0
Max Drain Time (seconds)      30
First Failure Control Unit    -
First Failure LSS             -
First Failure Status          No Error
First Failure Reason          -
First Failure Master State    -
Last Failure Control Unit     -
Last Failure LSS              -
Last Failure Status           No Error
Last Failure Reason           -
Last Failure Master State     -
Previous Failure Control Unit -
Previous Failure LSS          -
Previous Failure Status       No Error
Previous Failure Reason       -
Previous Failure Master State -

In Example 25-8 on page 374, the Total Failed CG Count field indicates the total number of Consistency Groups that did not complete successfully after we started Global Mirror, and Total Successful CG Count indicates the total number of Consistency Groups that completed successfully. First Failure indicates the first failure after the session was started, Last Failure indicates the latest failure, and Previous Failure indicates the failure before the latest one. All of this failure information is cleared after we stop Global Mirror and start it again; pausing and resuming the Global Mirror operation does not reset this information.

Depending on the Global Mirror parameters you set and your system environment, the Consistency Group formation can occasionally fail, and showgmir -metrics then shows the error messages. A typical case is seeing Max Drain Time Exceeded in the showgmir output when the data of the out-of-sync bitmap cannot be drained within the specified time. However, such a failure does not mean that you lose consistent data at the remote site, because Global Mirror does not take the FlashCopy (B to C) for the failed Consistency Group data. Global Mirror continues to attempt to form additional Consistency Groups without external intervention. If failures repeatedly continue (no more Consistency Groups are formed), if the percentage of successful Consistency Groups is unacceptable (many failures occur), or if the frequency of Consistency Groups is not meeting your requirements (Recovery Point Objective, RPO), then the failures are a problem and need to be addressed.

There is another command related to Global Mirror, the showgmiroos command. This command reports the number of out-of-sync tracks that Global Mirror still has to transmit to the remote site at a given moment; the size of the logical track on a DS8000 FB volume is 64 KB. With the -scope parameter you select either the storage image scope or the LSS scope for the information to be reported. See Example 25-9.
Example 25-9 The showgmiroos command
dscli> showgmiroos -dev IBM.2107-7520781 -lss IBM.2107-7520781/10 -scope si 02
Date/Time: November 11, 2005 5:28:27 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
Scope           IBM.2107-7520781
Session         02
OutOfSyncTracks 1138
dscli>
dscli> showgmiroos -dev IBM.2107-7520781 -lss IBM.2107-7520781/10 -scope lss 02
Date/Time: November 11, 2005 5:28:44 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
Scope           IBM.2107-7520781/10
Session         02
OutOfSyncTracks 303
dscli>
dscli> showgmiroos -dev IBM.2107-7520781 -lss IBM.2107-7520781/11 -scope lss 02
Date/Time: November 11, 2005 5:28:48 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
Scope           IBM.2107-7520781/11
Session         02
OutOfSyncTracks 0
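Used together, showgmir -metrics and showgmiroos give you a simple way to monitor how well the session is keeping up with your Recovery Point Objective. The following command sequence is only an illustrative sketch that reuses the device, LSS, and session IDs of the previous examples; it introduces no new function, it merely suggests an order in which to issue the commands shown above when the RPO starts to grow.

<< Check Consistency Group statistics and the last failure information >>
dscli> showgmir -metrics -dev IBM.2107-7520781 IBM.2107-7520781/10
<< Check how much data still has to be drained for the whole storage image ... >>
dscli> showgmiroos -dev IBM.2107-7520781 -lss IBM.2107-7520781/10 -scope si 02
<< ... and for an individual LSS >>
dscli> showgmiroos -dev IBM.2107-7520781 -lss IBM.2107-7520781/10 -scope lss 02

A steadily growing OutOfSyncTracks value combined with a dropping Successful CG Percentage typically points to insufficient bandwidth on the Global Copy links or to a peak in the write workload at the local site.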

Starting the Global Mirror session: When there is a subordinate


If we are going to start a Global Mirror configuration where there is more than one storage disk subsystem at the local site, which means more than one DS8000 (or DS8000 storage image) at the local site, then the Global Mirror control path information between the master LSS and the LSSs in the subordinate storage images must be indicated when we start the Global Mirror session. Prior to this operation, we must have created the Global Mirror control paths between the master LSS and the LSSs in the subordinates.

Figure 25-2 shows the example DS8000 configuration, where we have a total of eight A volumes (four on DS8000#1 and four on DS8000#3) at the local site that participate in the Global Mirror session. We have one DS8000 (DS8000#2) at the remote site with the corresponding B and C volumes.

(The figure shows the Global Mirror master on LSS10 of DS8000#1, -dev IBM.2107-7520781, with A volumes 1000-1001 and 1100-1101, and the subordinate DS8000#3, -dev IBM.2107-7503461, with A volumes 9000-9001 and 9100-9101, connected to the master by a Global Mirror control path. Global Copy pairs run over FCP links to the B volumes 2000-2001, 2100-2101, 2400-2401, and 2500-2501 on DS8000#2, -dev IBM.2107-75ABTV1, which are in FlashCopy relationships with the C volumes 2200-2201, 2300-2301, 2600-2601, and 2700-2701.)

Figure 25-2 Start Global Mirror session with a subordinate DS8000#3

Example 25-10 shows how to start a Global Mirror configuration when there is a subordinate. The example does not show how to set up the Global Copy and FlashCopy relationships because these steps are exactly the same as in a no-subordinate situation.
Example 25-10 Start Global Mirror session when there is a subordinate
<< Create Global Mirror control path between DS8000#1 and DS8000#3 >>
dscli> lsavailpprcport -l -remotedev IBM.2107-7503461 -remotewwnn 5005076303FFC08F 10:90
Date/Time: November 10, 2005 8:54:46 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
Local Port Attached Port Type Switch ID Switch Port
===================================================
I0001      I0031         FCP  NA        NA
I0101      I0101         FCP  NA        NA
dscli>
dscli> mkpprcpath -remotedev IBM.2107-7503461 -remotewwnn 5005076303FFC08F -srclss 10 -tgtlss 90 i0001:i0031 i0101:i0101
Date/Time: November 10, 2005 8:56:59 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
CMUC00149I mkpprcpath: Remote Mirror and Copy path 10:90 successfully established.
dscli>
dscli> lspprcpath -fullid 10 Date/Time: November 10, 2005 8:57:15 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 Src Tgt State SS Port Attached Port Tgt WWNN =================================================================================================================== IBM.2107-7520781/10 IBM.2107-75ABTV1/20 Success FF20 IBM.2107-7520781/I0143 IBM.2107-75ABTV1/I0010 5005076303FFC663 IBM.2107-7520781/10 IBM.2107-75ABTV1/20 Success FF20 IBM.2107-7520781/I0213 IBM.2107-75ABTV1/I0140 5005076303FFC663 IBM.2107-7520781/10 IBM.2107-7503461/90 Success FF90 IBM.2107-7520781/I0001 IBM.2107-7503461/I0031 5005076303FFC08F IBM.2107-7520781/10 IBM.2107-7503461/90 Success FF90 IBM.2107-7520781/I0101 IBM.2107-7503461/I0101 5005076303FFC08F

<< Setup Global Mirror environment between DS8000#3 and DS8000#2 (These steps are NOT shown here) >>
<< Start Global Mirror with a Subordinate (DS8000#3) >>
dscli> mkgmir -dev IBM.2107-7520781 -lss IBM.2107-7520781/10 -session 02 IBM.2107-7520781/10:IBM.2107-7503461/90
Date/Time: November 10, 2005 9:03:37 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
CMUC00162I mkgmir: Global Mirror for session 02 successfully started.
dscli>
dscli> showgmir -dev IBM.2107-7520781 IBM.2107-7520781/10
Date/Time: November 10, 2005 9:03:53 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
ID                         IBM.2107-7520781/10
Master Count               1
Master Session ID          0x02
Copy State                 Running
Fatal Reason               Not Fatal
CG Interval (seconds)      0
XDC Interval(milliseconds) 50
CG Drain Time (seconds)    30
Current Time               11/10/2005 21:08:42 JST
CG Time                    11/10/2005 21:08:42 JST
Successful CG Percentage   100
FlashCopy Sequence Number  0x4373384A
Master ID                  IBM.2107-7520781
Subordinate Count          1
Master/Subordinate Assoc   IBM.2107-7520781/10:IBM.2107-7503461/90

25.2 Removing a Global Mirror environment with the DS CLI


In this section we provide an example of how to remove a Global Mirror environment using the DS Command-Line Interface. The Global Mirror environment delete process can be structured in the following consecutive steps (a compact command-level summary follows the list):
1. End Global Mirror processing.
2. Remove the volumes from the session.
3. Remove the Global Mirror session.
4. Terminate the FlashCopy pairs.
5. Terminate the Global Copy pairs.
6. Remove the paths, both between the local site and the remote site and between the master LSS and the subordinate LSSs.
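As an orientation before going through the detailed sections, the following hypothetical DS CLI sequence summarizes the six steps for the simple configuration used in this chapter (master LSS 10, session 02, A volumes 1000-1001 mirrored to 2000-2001 with FlashCopy targets 2200-2201, no subordinate). It is only a sketch of the command order; the sections that follow show the real commands with their output and the additional considerations, for example when subordinates are involved.

<< 1. End Global Mirror processing >>
dscli> rmgmir -quiet -dev IBM.2107-7520781 -lss IBM.2107-7520781/10 -session 02
<< 2. Remove the A volumes from the session >>
dscli> chsession -dev IBM.2107-7520781 -lss IBM.2107-7520781/10 -action remove -volume 1000-1001 02
<< 3. Remove the Global Mirror session from the LSS >>
dscli> rmsession -quiet -dev IBM.2107-7520781 -lss IBM.2107-7520781/10 02
<< 4. Terminate the FlashCopy pairs (inband, through the A volume LSS) >>
dscli> rmremoteflash -quiet -conduit IBM.2107-7520781/10 -dev IBM.2107-75ABTV1 2000-2001:2200-2201
<< 5. Terminate the Global Copy pairs >>
dscli> rmpprc -quiet -remotedev IBM.2107-75ABTV1 1000-1001:2000-2001
<< 6. Remove the paths >>
dscli> rmpprcpath -quiet -remotedev IBM.2107-75ABTV1 10:20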

25.2.1 Ending Global Mirror processing


We terminate Global Mirror processing with the rmgmir command. With this command, we have to specify the master LSS and the Global Mirror session ID of the session we are going to close; see Example 25-11. Before you end Global Mirror processing, first display the session information using the showgmir command. You can use the -quiet parameter to turn off the confirmation prompt for this command.
Example 25-11 Terminate Global Mirror dscli> showgmir -dev IBM.2107-7520781 IBM.2107-7520781/10 Date/Time: November 10, 2005 10:11:21 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID IBM.2107-7520781/10 Master Count 1 Master Session ID 0x02 Copy State Running Fatal Reason Not Fatal CG Interval (seconds) 0 XDC Interval(milliseconds) 50 CG Drain Time (seconds) 30 Current Time 11/10/2005 22:16:11 JST CG Time 11/10/2005 22:16:10 JST Successful CG Percentage 100 FlashCopy Sequence Number 0x4373481A Master ID IBM.2107-7520781 Subordinate Count 0 Master/Subordinate Assoc dscli> dscli> rmgmir -dev IBM.2107-7520781 -lss IBM.2107-7520781/10 -session 02 Date/Time: November 10, 2005 10:11:33 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00166W rmgmir: Are you sure you want to stop the Global Mirror session 02:? [y/n]:y CMUC00165I rmgmir: Global Mirror for session 02 successfully stopped. dscli> dscli> showgmir -dev IBM.2107-7520781 IBM.2107-7520781/10 Date/Time: November 10, 2005 10:11:42 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID IBM.2107-7520781/10 Master Count Master Session ID Copy State Fatal Reason CG Interval (seconds) XDC Interval(milliseconds) CG Drain Time (seconds) Current Time CG Time Successful CG Percentage FlashCopy Sequence Number Master ID Subordinate Count Master/Subordinate Assoc -


Example 25-11 on page 378 illustrates how to end Global Mirror session 02 processing. Although this command might interrupt the formation of a consistency group, every attempt is made to preserve the previous consistent copy of the data on the FlashCopy target volumes. If, due to failures, this command cannot complete without compromising the consistent copy, the command stops processing and an error code is issued. If this occurs, reissue the rmgmir command with the -force parameter to force the command to stop the Global Mirror process. If the Global Mirror configuration that you started also had subordinates, then to end Global Mirror you also have to specify the Global Mirror control path information with the rmgmir command. Otherwise, the command fails; see Example 25-12.
Example 25-12 Terminate Global Mirror when there is a subordinate dscli> rmgmir -dev IBM.2107-7520781 -lss IBM.2107-7520781/10 -session 02 Date/Time: November 10, 2005 9:17:56 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00166W rmgmir: Are you sure you want to stop the Global Mirror session 02:? [y/n]:y CMUN03067E rmgmir: Copy Services operation failure: configuration does not exist dscli> dscli> rmgmir -dev IBM.2107-7520781 -lss IBM.2107-7520781/10 -session 02 IBM.2107-7520781/10:IBM.2107-7503461/90 Date/Time: November 10, 2005 9:18:37 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00166W rmgmir: Are you sure you want to stop the Global Mirror session 02:? [y/n]:y CMUC00165I rmgmir: Global Mirror for session 02 successfully stopped.
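If the rmgmir command cannot complete without compromising the consistent copy and stops with an error, the -force parameter mentioned above can be used to stop the master anyway. The following line is only an illustrative sketch of that invocation for our session; use it only after you understand why the normal rmgmir failed, because the consistency of the data on the C volumes might then have to be verified manually.

<< Force Global Mirror to stop when a normal rmgmir cannot complete >>
dscli> rmgmir -quiet -force -dev IBM.2107-7520781 -lss IBM.2107-7520781/10 -session 02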

25.2.2 Removing the A volumes from the Global Mirror session


With the chsession command and its -action remove -volume parameters, you remove the A volumes from the Global Mirror session for a given LSS; see Example 25-13. First, issue an lssession command to obtain the Global Mirror session volume information for the required LSSs.
Example 25-13 Remove the A volumes from the Global Mirror session dscli> lssession 10-11 Date/Time: November 10, 2005 10:21:16 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 LSS ID Session Status Volume VolumeStatus PrimaryStatus SecondaryStatus FirstPassComplete AllowCas =========================================================================================================== 10 02 Normal 1000 Join Pending Primary Copy Pending Secondary Simplex True Disable 10 02 Normal 1001 Join Pending Primary Copy Pending Secondary Simplex True Disable 11 02 Normal 1100 Join Pending Primary Copy Pending Secondary Simplex True Disable 11 02 Normal 1101 Join Pending Primary Copy Pending Secondary Simplex True Disable dscli> dscli> chsession -dev IBM.2107-7520781 -lss IBM.2107-7520781/10 -action remove -volume 1000-1001 02 Date/Time: November 10, 2005 10:21:29 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00147I chsession: Session 02 successfully modified. dscli> dscli> chsession -dev IBM.2107-7520781 -lss IBM.2107-7520781/11 -action remove -volume 1100-1101 02 Date/Time: November 10, 2005 10:21:37 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00147I chsession: Session 02 successfully modified. dscli> dscli> lssession 10-11 Date/Time: November 10, 2005 10:21:43 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 LSS ID Session Status Volume VolumeStatus PrimaryStatus SecondaryStatus FirstPassComplete AllowCascading ======================================================================================================== 10 02 11 02 -


25.2.3 Removing the Global Mirror session


With the rmsession command you remove the Global Mirror session definition from an LSS; see Example 25-14. You do this for all the source LSSs where the Global Mirror session was defined. At the end, issue an lssession command to verify that the Global Mirror session has been removed.
Example 25-14 Remove the Global Mirror session from the LSSs dscli> lssession 10-11 Date/Time: November 10, 2005 10:25:45 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 LSS ID Session Status Volume VolumeStatus PrimaryStatus SecondaryStatus FirstPassComplete AllowCascading ======================================================================================================== 10 02 11 02 dscli> dscli> rmsession -quiet -dev IBM.2107-7520781 -lss IBM.2107-7520781/10 02 Date/Time: November 10, 2005 10:26:07 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00146I rmsession: Session 02 closed successfully. dscli> dscli> rmsession -quiet -dev IBM.2107-7520781 -lss IBM.2107-7520781/11 02 Date/Time: November 10, 2005 10:26:11 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00146I rmsession: Session 02 closed successfully. dscli> dscli> lssession 10-11 Date/Time: November 10, 2005 10:26:16 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00234I lssession: No Session found.

Before closing the Global Mirror session on the LSS, we must remove all the A volumes from the Global Mirror session on that LSS, otherwise, the rmsession command fails; see Example 25-15.
Example 25-15 rmsession command fails when Global Mirror volumes were not previously removed dscli> lssession 10-11 Date/Time: November 10, 2005 10:21:16 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 LSS ID Session Status Volume VolumeStatus PrimaryStatus SecondaryStatus FirstPassComplete AllowCas =========================================================================================================== 10 02 Normal 1000 Join Pending Primary Copy Pending Secondary Simplex True Disable 10 02 Normal 1001 Join Pending Primary Copy Pending Secondary Simplex True Disable 11 02 Normal 1100 Join Pending Primary Copy Pending Secondary Simplex True Disable 11 02 Normal 1101 Join Pending Primary Copy Pending Secondary Simplex True Disable dscli> dscli> rmsession -dev IBM.2107-7520781 -lss IBM.2107-7520781/10 02 Date/Time: November 10, 2005 10:25:14 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00148W rmsession: Are you sure you want to close session 02? [y/n]:y CMUN03107E rmsession: Copy Services operation failure: volumes in session

25.2.4 Terminating FlashCopy pairs


Depending on your network environment, you can give the FlashCopy commands to the local disk subsystem for inband transmission to the remote disk subsystem. In this case you use the rmremoteflash command. Alternatively, if you have connectivity to the remote disk subsystem, you can give the rmflash command directly to the remote disk subsystem.

In our example we use the inband functionality of FlashCopy, in which case we have to specify the LSS that contains the A volume for the -conduit parameter and the storage image ID at the remote site for the -dev parameter; see Example 25-16. You have to give this command to the DS HMC connected to the local DS8000#1.

Before terminating the pairs, we gather information with a lsremoteflash command.


Example 25-16 Remove all FlashCopy relationships between the B and C volumes dscli> lsremoteflash -conduit IBM.2107-7520781/10 -dev IBM.2107-75ABTV1 2000-2001
Date/Time: November 10, 2005 10:52:25 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 ID SrcLSS SequenceNum ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled Background ======================================================================================================================== 2000:2200 20 43734829 Disabled Enabled Enabled Disabled Enabled Disabled Disabled 2001:2201 20 43734829 Disabled Enabled Enabled Disabled Enabled Disabled Disabled dscli>

dscli> lsremoteflash

-conduit IBM.2107-7520781/11 -dev IBM.2107-75ABTV1 2100-2101

Date/Time: November 10, 2005 10:52:33 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 ID SrcLSS SequenceNum ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled Background ======================================================================================================================== 2100:2300 21 43734829 Disabled Enabled Enabled Disabled Enabled Disabled Disabled 2101:2301 21 43734829 Disabled Enabled Enabled Disabled Enabled Disabled Disabled dscli>

dscli> rmremoteflash -quiet -conduit IBM.2107-7520781/10 -dev IBM.2107-75ABTV1 2000-2001:2200-2201


Date/Time: November 10, 2005 10:52:49 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 CMUC00180I rmremoteflash: Removal of the remote FlashCopy volume pair 2000:2200 has been initiated successfully. Use the lsremoteflash command to determine when the relationship is deleted. CMUC00180I rmremoteflash: Removal of the remote FlashCopy volume pair 2001:2201 has been initiated successfully. Use the lsremoteflash command to determine when the relationship is deleted. dscli>

dscli> rmremoteflash -quiet -conduit IBM.2107-7520781/11 -dev IBM.2107-75ABTV1 2100-2101:2300-2301


Date/Time: November 10, 2005 10:53:49 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 CMUC00180I rmremoteflash: Removal of the remote FlashCopy volume pair 2100:2300 has been initiated successfully. Use the lsremoteflash command to determine when the relationship is deleted. CMUC00180I rmremoteflash: Removal of the remote FlashCopy volume pair 2101:2301 has been initiated successfully. Use the lsremoteflash command to determine when the relationship is deleted. dscli>

dscli> lsremoteflash

-conduit IBM.2107-7520781/10 -dev IBM.2107-75ABTV1 2000-2001

Date/Time: November 10, 2005 10:53:55 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 CMUC00234I lsremoteflash: No Remote Flash Copy found. dscli> dscli> lsremoteflash -conduit IBM.2107-7520781/11 -dev IBM.2107-75ABTV1 2100-2101 Date/Time: November 10, 2005 10:53:59 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 CMUC00234I lsremoteflash: No Remote Flash Copy found.
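As mentioned at the beginning of 25.2.4, if your DS CLI client also has direct connectivity to the remote DS8000#2, you do not have to use the inband rmremoteflash command. A hypothetical equivalent, issued directly against the remote storage image, could look like the following sketch (the B:C pairs are the same as in Example 25-16):

<< Query and remove the B to C FlashCopy relationships directly on DS8000#2 >>
dscli> lsflash -dev IBM.2107-75ABTV1 2000-2001 2100-2101
dscli> rmflash -quiet -dev IBM.2107-75ABTV1 2000-2001:2200-2201 2100-2101:2300-2301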

25.2.5 Terminating Global Copy pairs and removing the paths


For the termination of the Global Copy pairs you use the rmpprc command, and for the deletion of the paths you use the rmpprcpath command; see Example 25-17 on page 381. Before terminating the pairs and deleting the paths you request information by means of the lspprc and the lspprcpath commands, respectively. Note that the tasks you must perform to terminate the Global Copy relationships in a Global Mirror environment are similar to what has been presented in Part 5, Global Copy on page 251. You can refer to 19.3.2, Remove Global Copy environment using DS CLI on page 273.
Example 25-17 Remove all Global Copy pairs and remove the paths dscli> lspprc 1000-1001 1100-1101 Date/Time: November 10, 2005 11:14:19 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ================================================================================================== 1000:2000 Copy Pending Global Copy 10 unknown Disabled True 1001:2001 Copy Pending Global Copy 10 unknown Disabled True 1100:2100 Copy Pending Global Copy 11 unknown Disabled True 1101:2101 Copy Pending Global Copy 11 unknown Disabled True


dscli> dscli> rmpprc -remotedev IBM.2107-75ABTV1 -quiet 1000-1001:2000-2001 1100-1101:2100-2101 Date/Time: November 10, 2005 11:14:32 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00155I rmpprc: Remote Mirror and Copy volume pair 1000:2000 relationship successfully withdrawn. CMUC00155I rmpprc: Remote Mirror and Copy volume pair 1001:2001 relationship successfully withdrawn. CMUC00155I rmpprc: Remote Mirror and Copy volume pair 1100:2100 relationship successfully withdrawn. CMUC00155I rmpprc: Remote Mirror and Copy volume pair 1101:2101 relationship successfully withdrawn. dscli> dscli> lspprcpath 10-11 Date/Time: November 10, 2005 11:16:01 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 Src Tgt State SS Port Attached Port Tgt WWNN ========================================================= 10 20 Success FF20 I0143 I0010 5005076303FFC663 10 20 Success FF20 I0213 I0140 5005076303FFC663 11 21 Success FF21 I0143 I0010 5005076303FFC663 11 21 Success FF21 I0213 I0140 5005076303FFC663 dscli> dscli> rmpprcpath -quiet -remotedev IBM.2107-75ABTV1 10:20 11:21 Date/Time: November 10, 2005 11:16:34 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00150I rmpprcpath: Remote Mirror and Copy path 10:20 successfully removed. CMUC00150I rmpprcpath: Remote Mirror and Copy path 11:21 successfully removed.

25.3 Managing the Global Mirror environment with the DS CLI


In this section we discuss and give examples of how to perform common Global Mirror control tasks using the DS CLI. The following management activities are presented:
- Pause and resume the Global Mirror Consistency Group formation.
- Change the Global Mirror tuning parameters.
- Stop and start Global Mirror.
- Add and remove A volumes to the Global Mirror environment.
- Add and remove an LSS to an existing Global Mirror environment.
- Add and remove a subordinate disk subsystem to an existing Global Mirror environment.

25.3.1 Pausing and resuming Global Mirror Consistency Group formation


The pausegmir command pauses Global Mirror Consistency Group formation. You have to specify the Global Mirror master LSS ID and the session ID. You can verify the result with the showgmir command, which shows the Paused state in the Copy State field; see Example 25-18 on page 382.

Note: The pausegmir command does not influence the Global Copy data transfer. When you pause a Global Mirror session with the pausegmir command, it completes the Consistency Group formation that is in progress before it pauses. This is slightly different from the behavior of the rmgmir command; refer to 25.3.3, Stopping and starting Global Mirror on page 385.
Example 25-18 Pause Global Mirror CG formation dscli> showgmir -dev IBM.2107-7520781 IBM.2107-7520781/10 Date/Time: November 11, 2005 12:13:47 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID IBM.2107-7520781/10 Master Count 1 Master Session ID 0x02 Copy State Running


Fatal Reason               Not Fatal
CG Interval (seconds)      0
XDC Interval(milliseconds) 50
CG Drain Time (seconds)    30
Current Time               11/11/2005 00:18:37 JST
CG Time                    11/11/2005 00:18:37 JST
Successful CG Percentage   100
FlashCopy Sequence Number  0x437364CD
Master ID                  IBM.2107-7520781
Subordinate Count          0
Master/Subordinate Assoc   -
dscli>
dscli> lssession 10-11

Date/Time: November 11, 2005 12:15:48 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 LSS ID Session Status Volume VolumeStatus PrimaryStatus SecondaryStatus FirstPassComplete AllowCasca ===================================================================================================================== 10 02 CG In Progress 1000 Active Primary Copy Pending Secondary Simplex True Disable 10 02 CG In Progress 1001 Active Primary Copy Pending Secondary Simplex True Disable 11 02 CG In Progress 1100 Active Primary Copy Pending Secondary Simplex True Disable 11 02 CG In Progress 1101 Active Primary Copy Pending Secondary Simplex True Disable dscli>

dscli> pausegmir -dev IBM.2107-7520781 -lss IBM.2107-7520781/10 -session 02
Date/Time: November 11, 2005 12:20:18 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
CMUC00163I pausegmir: Global Mirror for session 02 successfully paused.
dscli>
dscli> showgmir -dev IBM.2107-7520781 IBM.2107-7520781/10
Date/Time: November 11, 2005 12:20:29 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
ID                         IBM.2107-7520781/10
Master Count               1
Master Session ID          0x02
Copy State                 Paused
Fatal Reason               Not Fatal
CG Interval (seconds)      0
XDC Interval(milliseconds) 50
CG Drain Time (seconds)    30
Current Time               11/11/2005 00:25:19 JST
CG Time                    11/11/2005 00:25:10 JST
Successful CG Percentage   100
FlashCopy Sequence Number  0x43736656
Master ID                  IBM.2107-7520781
Subordinate Count          0
Master/Subordinate Assoc   -
dscli>
dscli> lssession 10-11

Date/Time: November 11, 2005 12:21:00 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 LSS ID Session Status Volume VolumeStatus PrimaryStatus SecondaryStatus FirstPassComplete AllowCascading ================================================================================================================= 10 02 Normal 1000 Active Primary Copy Pending Secondary Simplex True Disable 10 02 Normal 1001 Active Primary Copy Pending Secondary Simplex True Disable 11 02 Normal 1100 Active Primary Copy Pending Secondary Simplex True Disable 11 02 Normal 1101 Active Primary Copy Pending Secondary Simplex True Disable dscli>

dscli> lsremoteflash

-conduit IBM.2107-7520781/10 -dev IBM.2107-75ABTV1 2000-2001

Date/Time: November 11, 2005 12:22:03 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 ID SrcLSS SequenceNum ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled Background ======================================================================================================================== 2000:2200 20 43736656 Disabled Enabled Enabled Disabled Enabled Disabled Disabled 2001:2201 20 43736656 Disabled Enabled Enabled Disabled Enabled Disabled Disabled dscli>

dscli> lsremoteflash

-conduit IBM.2107-7520781/11 -dev IBM.2107-75ABTV1 2100-2101

Date/Time: November 11, 2005 12:22:20 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 ID SrcLSS SequenceNum ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled Background


======================================================================================================================== 2100:2300 21 43736656 Disabled Enabled Enabled Disabled Enabled Disabled Disabled 2101:2301 21 43736656 Disabled Enabled Enabled Disabled Enabled Disabled Disabled dscli>

<< SequenceNum field does not change >>
dscli> lsremoteflash -conduit IBM.2107-7520781/10 -dev IBM.2107-75ABTV1 2000-2001
Date/Time: November 11, 2005 12:26:31 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 ID SrcLSS SequenceNum ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled Background ======================================================================================================================== 2000:2200 20 43736656 Disabled Enabled Enabled Disabled Enabled Disabled Disabled 2001:2201 20 43736656 Disabled Enabled Enabled Disabled Enabled Disabled Disabled dscli>

dscli> lsremoteflash

-conduit IBM.2107-7520781/11 -dev IBM.2107-75ABTV1 2100-2101

Date/Time: November 11, 2005 12:26:35 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 ID SrcLSS SequenceNum ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled Background ======================================================================================================================== 2100:2300 21 43736656 Disabled Enabled Enabled Disabled Enabled Disabled Disabled 2101:2301 21 43736656 Disabled Enabled Enabled Disabled Enabled Disabled Disabled

dscli>

The Status shown by the lssession command changes from CG In Progress, which means that Consistency Group formation for the session is in progress, to Normal, which means that the session is in a normal Global Copy state. In fact, you see this state (Normal) between the time when a FlashCopy has been taken and the next Global Copy Consistency Group formation time.

Note: The FlashCopy sequence number has not changed after pausing Global Mirror, because no new FlashCopy has been taken at the remote site; see Example 25-18.

The resumegmir command resumes Global Mirror processing for a specified session; see Example 25-19. Consistency Group formation is resumed.
Example 25-19 Resume Global Mirror processing dscli> resumegmir -dev IBM.2107-7520781 -lss IBM.2107-7520781/10 -session 02 Date/Time: November 11, 2005 1:15:31 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00164I resumegmir: Global Mirror for session 02 successfully resumed. dscli> dscli> showgmir -dev IBM.2107-7520781 IBM.2107-7520781/10 Date/Time: November 11, 2005 1:15:50 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID IBM.2107-7520781/10 Master Count 1 Master Session ID 0x02 Copy State Running Fatal Reason Not Fatal CG Interval (seconds) 0 XDC Interval(milliseconds) 50 CG Drain Time (seconds) 30 Current Time 11/11/2005 01:20:41 JST CG Time 11/11/2005 01:20:41 JST Successful CG Percentage 100 FlashCopy Sequence Number 0x43737359 Master ID IBM.2107-7520781 Subordinate Count 0 Master/Subordinate Assoc -


25.3.2 Changing the Global Mirror tuning parameters


You can change the three Global Mirror tuning parameters by pausing and resuming Global Mirror. In Example 25-20 we changed the Consistency Group interval time parameter from zero to 60 seconds.
Example 25-20 Change the Consistency Group interval (CG Interval) time parameter
dscli> showgmir -dev IBM.2107-7520781 IBM.2107-7520781/10
Date/Time: November 11, 2005 1:15:50 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
ID                         IBM.2107-7520781/10
Master Count               1
Master Session ID          0x02
Copy State                 Running
Fatal Reason               Not Fatal
CG Interval (seconds)      0
XDC Interval(milliseconds) 50
CG Drain Time (seconds)    30
Current Time               11/11/2005 01:20:41 JST
CG Time                    11/11/2005 01:20:41 JST
Successful CG Percentage   100
FlashCopy Sequence Number  0x43737359
Master ID                  IBM.2107-7520781
Subordinate Count          0
Master/Subordinate Assoc   -
dscli>
dscli> pausegmir -dev IBM.2107-7520781 -lss IBM.2107-7520781/10 -session 02
Date/Time: November 11, 2005 1:23:10 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
CMUC00163I pausegmir: Global Mirror for session 02 successfully paused.
dscli>
dscli> resumegmir -dev IBM.2107-7520781 -lss IBM.2107-7520781/10 -session 02 -cginterval 60
Date/Time: November 11, 2005 1:23:35 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
CMUC00164I resumegmir: Global Mirror for session 02 successfully resumed.
dscli>
dscli> showgmir -dev IBM.2107-7520781 IBM.2107-7520781/10
Date/Time: November 11, 2005 1:23:39 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
ID                         IBM.2107-7520781/10
Master Count               1
Master Session ID          0x02
Copy State                 Running
Fatal Reason               Not Fatal
CG Interval (seconds)      60
XDC Interval(milliseconds) 50
CG Drain Time (seconds)    30
Current Time               11/11/2005 01:28:29 JST
CG Time                    11/11/2005 01:28:25 JST
Successful CG Percentage   100
FlashCopy Sequence Number  0x43737529
Master ID                  IBM.2107-7520781
Subordinate Count          0
Master/Subordinate Assoc   -
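The same pause and resume pattern applies to the other two tuning parameters, the coordination time and the Consistency Group drain time; their exact parameter names on the resumegmir command should be verified in the DS CLI reference for your code level, so the following sketch only repeats the sequence that was shown for the CG interval and is meant as a template rather than a complete reference.

<< Pause the master, then resume it with the new tuning value >>
dscli> pausegmir -dev IBM.2107-7520781 -lss IBM.2107-7520781/10 -session 02
dscli> resumegmir -dev IBM.2107-7520781 -lss IBM.2107-7520781/10 -session 02 -cginterval 0
<< A CG interval of 0 means that a new Consistency Group is formed as soon as the previous one completes >>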

25.3.3 Stopping and starting Global Mirror


To stop Global Mirror means to stop the Global Mirror master on an LSS, using the rmgmir command; see Example 25-21. You will stop Global Mirror when, for example, you want to start using a different topology; see 22.3.4, Global Mirror environment topology changes on page 331.


You do not need to remove the Global Copy and FlashCopy relationships to stop the Global Mirror master. After the master is stopped, Consistency Group formation no longer takes place, which means that the FlashCopy sequence number at the remote site does not increment.

Although stopping Global Mirror with the rmgmir command might interrupt the formation of a Consistency Group, every attempt is made to preserve the previous consistent copy of the data on the FlashCopy target volumes. If, due to failures, the rmgmir command cannot complete without compromising the consistent copy, the command stops processing and an error code is issued. If this occurs, reissue the rmgmir command with the -force parameter to force the command to stop the Global Mirror process.

The mkgmir command restarts the Global Mirror master, and you can specify another LSS on which the master is to run. In Example 25-21, we stop the Global Mirror master running on LSS10 and then start it again on LSS11.
Example 25-21 Stop and start Global Mirror dscli> lssession 10-12
Date/Time: November 11, 2005 9:42:17 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 LSS ID Session Status Volume VolumeStatus PrimaryStatus SecondaryStatus FirstPassComplete AllowCascading ======================================================================================================================== 10 02 CG In Progress 1000 Active Primary Copy Pending Secondary Simplex True Disable 10 02 CG In Progress 1001 Active Primary Copy Pending Secondary Simplex True Disable 11 02 CG In Progress 1100 Active Primary Copy Pending Secondary Simplex True Disable 11 02 CG In Progress 1101 Active Primary Copy Pending Secondary Simplex True Disable dscli>

dscli> rmgmir -quiet -dev IBM.2107-7520781 -lss IBM.2107-7520781/10 -session 02


Date/Time: November 11, 2005 9:42:20 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00165I rmgmir: Global Mirror for session 02 successfully stopped. dscli>

dscli> showgmir -dev IBM.2107-7520781 IBM.2107-7520781/10


Date/Time: November 11, 2005 9:42:40 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID IBM.2107-7520781/10 Master Count Master Session ID Copy State Fatal Reason CG Interval (seconds) XDC Interval(milliseconds) CG Drain Time (seconds) Current Time CG Time Successful CG Percentage FlashCopy Sequence Number Master ID Subordinate Count Master/Subordinate Assoc dscli>

dscli> lssession 10-12


Date/Time: November 11, 2005 9:42:47 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 LSS ID Session Status Volume VolumeStatus PrimaryStatus SecondaryStatus FirstPassComplete AllowCascading ================================================================================================================= 10 02 Normal 1000 Active Primary Copy Pending Secondary Simplex True Disable 10 02 Normal 1001 Active Primary Copy Pending Secondary Simplex True Disable 11 02 Normal 1100 Active Primary Copy Pending Secondary Simplex True Disable 11 02 Normal 1101 Active Primary Copy Pending Secondary Simplex True Disable dscli>

dscli> mkgmir -dev IBM.2107-7520781 -lss IBM.2107-7520781/11 -session 02


Date/Time: November 11, 2005 9:43:30 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00162I mkgmir: Global Mirror for session 02 successfully started. dscli> dscli> lssession 10-12


Date/Time: November 11, 2005 9:43:35 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 LSS ID Session Status Volume VolumeStatus PrimaryStatus SecondaryStatus FirstPassComplete AllowCascading ======================================================================================================================== 10 02 CG In Progress 1000 Active Primary Copy Pending Secondary Simplex True Disable 10 02 CG In Progress 1001 Active Primary Copy Pending Secondary Simplex True Disable 11 02 CG In Progress 1100 Active Primary Copy Pending Secondary Simplex True Disable 11 02 CG In Progress 1101 Active Primary Copy Pending Secondary Simplex True Disable dscli>

dscli> showgmir -dev IBM.2107-7520781 IBM.2107-7520781/11


Date/Time: November 11, 2005 9:43:46 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID IBM.2107-7520781/11 Master Count 1 Master Session ID 0x02 Copy State Running Fatal Reason Not Fatal CG Interval (seconds) 0 XDC Interval(milliseconds) 50 CG Drain Time (seconds) 30 Current Time 11/11/2005 21:49:09 JST CG Time 11/11/2005 21:49:08 JST Successful CG Percentage 100 FlashCopy Sequence Number 0x43749344 Master ID IBM.2107-7520781 Subordinate Count 0 Master/Subordinate Assoc -

25.3.4 Adding and removing A volumes to the Global Mirror environment


First you create the Global Copy (A to B) and FlashCopy (B to C) relationships for the A volume that you want to add to the Global Mirror environment. After this, you add the A volume to the Global Mirror session, using the chsession -action add -volume command. In Example 25-22 we added A volume 1002, which is associated to B volume 2002 and C volume 2202.
Example 25-22 Add an A volume to the Global Mirror environment << Preparing an A volume >> dscli> mkpprc -remotedev IBM.2107-75ABTV1 -type gcp 1002:2002 Date/Time: November 11, 2005 1:39:03 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1002:2002 successfully created. dscli> dscli> lspprc -l 1002
Date/Time: November 11, 2005 1:39:12 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type Out Of Sync Tracks Tgt Read Src Cascade Tgt Cascade Date Suspended SourceLSS ======================================================================================================================== 1002:2002 Copy Pending Global Copy 36222 Disabled Disabled invalid 10 dscli>

dscli> lspprc -l 1002


Date/Time: November 11, 2005 1:39:46 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type Out Of Sync Tracks Tgt Read Src Cascade Tgt Cascade Date Suspended SourceLSS ======================================================================================================================== 1002:2002 Copy Pending Global Copy 0 Disabled Disabled invalid 10

dscli>
dscli> mkremoteflash -tgtinhibit -nocp -record -conduit IBM.2107-7520781/10 -dev IBM.2107-75ABTV1 2002:2202
Date/Time: November 11, 2005 1:40:01 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
CMUC00173I mkremoteflash: Remote FlashCopy volume pair 2002:2202 successfully created. Use the lsremoteflash command to determine copy completion.
dscli>
dscli> lsremoteflash -conduit IBM.2107-7520781/10 -dev IBM.2107-75ABTV1 2002:2202
Date/Time: November 11, 2005 1:40:21 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
ID SrcLSS SequenceNum ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled
======================================================================================================
2002:2202 20 Disabled Enabled Enabled Disabled Enabled Disabled
dscli>

<< Add the A volume to Global Mirror >> dscli> lssession 10 Date/Time: November 11, 2005 1:41:46 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 LSS ID Session Status Volume VolumeStatus PrimaryStatus SecondaryStatus FirstPassComplete AllowCas =========================================================================================================== 10 02 Normal 1000 Active Primary Copy Pending Secondary Simplex True Disable 10 02 Normal 1001 Active Primary Copy Pending Secondary Simplex True Disable dscli> dscli> chsession -dev IBM.2107-7520781 -lss IBM.2107-7520781/10 -action add -volume 1002 02 Date/Time: November 11, 2005 1:42:14 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00147I chsession: Session 02 successfully modified. dscli> dscli> lssession 10 Date/Time: November 11, 2005 1:42:19 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 LSS ID Session Status Volume VolumeStatus PrimaryStatus SecondaryStatus FirstPassComplete AllowCas =========================================================================================================== 10 02 Normal 1000 Active Primary Copy Pending Secondary Simplex True Disable 10 02 Normal 1001 Active Primary Copy Pending Secondary Simplex True Disable 10 02 Normal 1002 Join Pending Primary Copy Pending Secondary Simplex True Disable dscli> dscli> lssession 10 Date/Time: November 11, 2005 1:43:24 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 LSS ID Session Status Volume VolumeStatus PrimaryStatus SecondaryStatus FirstPassComplete AllowCas =========================================================================================================== 10 02 Normal 1000 Active Primary Copy Pending Secondary Simplex True Disable 10 02 Normal 1001 Active Primary Copy Pending Secondary Simplex True Disable 10 02 Normal 1002 Active Primary Copy Pending Secondary Simplex True Disable

To be added to a Global Mirror session, the A volumes can be in any state, such as simplex (no relationship), copy pending, or suspended. Volumes that have not completed their initial copy phase (also called first pass) stay in a Join Pending state until the first pass is complete. You can check the first pass status with the lspprc -l command; see Example 25-23. The First Pass Status field reports this information, where True means that the Global Copy first pass has been completed.
Example 25-23 Check the first pass completion for the Global Copy initial copy dscli> lspprc -l -fmt stanza 1002
Date/Time: November 11, 2005 9:04:50 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781

ID State Reason Type Out Of Sync Tracks Tgt Read Src Cascade Tgt Cascade Date Suspended SourceLSS Timeout (secs) Critical Mode First Pass Status

1002:2002 Copy Pending Global Copy 0 Disabled Disabled invalid 10 unknown Disabled True


When you want to remove an A volume from the Global Mirror environment, you can use the chsession -action remove -volume command. You first remove the A volume from the Global Mirror session and then remove its Global Copy and FlashCopy relationships. See Example 25-24.
Example 25-24 Remove an A volume from the Global Mirror environment dscli> lssession 10-11
Date/Time: November 11, 2005 5:44:52 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 LSS ID Session Status Volume VolumeStatus PrimaryStatus SecondaryStatus FirstPassComplete AllowCascadin ======================================================================================================================== 10 02 CG In Progress 1000 Active Primary Copy Pending Secondary Simplex True Disable 10 02 CG In Progress 1001 Active Primary Copy Pending Secondary Simplex True Disable 10 02 CG In Progress 1002 Active Primary Copy Pending Secondary Simplex True Disable 11 02 CG In Progress 1100 Active Primary Copy Pending Secondary Simplex True Disable 11 02 CG In Progress 1101 Active Primary Copy Pending Secondary Simplex True Disable dscli>

dscli>

chsession -dev IBM.2107-7520781 -lss IBM.2107-7520781/10 -action remove -volume 1002 02

Date/Time: November 11, 2005 5:46:11 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00147I chsession: Session 02 successfully modified. dscli>

dscli> lssession 10-11


Date/Time: November 11, 2005 5:49:43 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 LSS ID Session Status Volume VolumeStatus PrimaryStatus SecondaryStatus FirstPassComplete AllowCascadin ======================================================================================================================== 10 02 CG In Progress 1000 Active Primary Copy Pending Secondary Simplex True Disable 10 02 CG In Progress 1001 Active Primary Copy Pending Secondary Simplex True Disable 11 02 CG In Progress 1100 Active Primary Copy Pending Secondary Simplex True Disable 11 02 CG In Progress 1101 Active Primary Copy Pending Secondary Simplex True Disable

Attention: Suspending or removing even one Global Copy pair that belongs to an active Global Mirror session will impact the formation of Consistency Groups. If you suspend or remove the Global Copy relationship from the A volume without removing the volume from the Global Mirror session, it will cause Consistency Group formation to fail, and periodical SNMP alerts will be issued.
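After the A volume is out of the Global Mirror session, its remaining relationships can be cleaned up without affecting Consistency Group formation. The following hypothetical sketch continues the scenario of Example 25-24 for A volume 1002, B volume 2002, and C volume 2202:

<< The volume was already removed from the session with chsession -action remove >>
<< Now remove the B to C FlashCopy relationship (inband) and the A to B Global Copy pair >>
dscli> rmremoteflash -quiet -conduit IBM.2107-7520781/10 -dev IBM.2107-75ABTV1 2002:2202
dscli> rmpprc -quiet -remotedev IBM.2107-75ABTV1 1002:2002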

25.3.5 Adding and removing an LSS to an existing Global Mirror environment


First you create the Global Copy (A to B) and FlashCopy (B to C) relationships for the LSS, and for the A volumes in the LSS that you want to add to the Global Mirror environment. After this, you add the LSS with the mksession command, and then you add the A volume. You can also use the mksession command to add the LSS along with the A volume; see Example 25-25.
Example 25-25 Add an LSS to the Global Mirror session << Prepare the A volume >>
dscli> mkpprcpath -remotedev IBM.2107-75ABTV1 -remotewwnn 5005076303FFC663 -srclss 12 -tgtlss 24 i0143:i0010 i0213:i0140 Date/Time: November 11, 2005 6:54:57 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00149I mkpprcpath: Remote Mirror and Copy path 12:24 successfully established. dscli>

dscli> lspprcpath -fullid 12


Date/Time: November 11, 2005 6:55:11 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 Src Tgt State SS Port Attached Port Tgt WWNN =================================================================================================================== IBM.2107-7520781/12 IBM.2107-75ABTV1/24 Success FF24 IBM.2107-7520781/I0143 IBM.2107-75ABTV1/I0010 5005076303FFC663 IBM.2107-7520781/12 IBM.2107-75ABTV1/24 Success FF24 IBM.2107-7520781/I0213 IBM.2107-75ABTV1/I0140 5005076303FFC663 dscli>

dscli> mkpprc -remotedev IBM.2107-75ABTV1 -type gcp 1200:2400


Date/Time: November 11, 2005 6:55:42 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781


CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1200:2400 successfully created. dscli>

dscli> lspprc -l 1200


Date/Time: November 11, 2005 6:55:50 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type Out Of Sync Tracks Tgt Read Src Cascade Tgt Cascade Date Suspended SourceLSS ======================================================================================================================== 1200:2400 Copy Pending Global Copy 37844 Disabled Disabled invalid 12 dscli>

dscli> lspprc -l 1200


Date/Time: November 11, 2005 6:56:27 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type Out Of Sync Tracks Tgt Read Src Cascade Tgt Cascade Date Suspended SourceLSS ======================================================================================================================== 1200:2400 Copy Pending Global Copy 0 Disabled Disabled invalid 12 dscli>

dscli> mkremoteflash -tgtinhibit -nocp -record -conduit IBM.2107-7520781/10 -dev IBM.2107-75ABTV1 2400:2600
Date/Time: November 11, 2005 6:57:17 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 CMUN03044E mkremoteflash: 2400:2600: Copy Services operation failure: path not available dscli>

dscli> mkremoteflash -tgtinhibit -nocp -record -conduit IBM.2107-7520781/12 -dev IBM.2107-75ABTV1 2400:2600
Date/Time: November 11, 2005 6:57:35 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 CMUC00173I mkremoteflash: Remote FlashCopy volume pair 2400:2600 successfully created. Use the lsremoteflash command to determine copy completion. dscli>

dscli> lsremoteflash -l -conduit IBM.2107-7520781/10 -dev IBM.2107-75ABTV1 2400


Date/Time: November 11, 2005 6:57:55 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 CMUC00234I lsremoteflash: No Remote Flash Copy found. dscli>

dscli> lsremoteflash -l -conduit IBM.2107-7520781/12 -dev IBM.2107-75ABTV1 2400


Date/Time: November 11, 2005 6:58:03 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
ID SrcLSS SequenceNum ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy OutOfSyncTracks ========================================================================================================================================== 2400:2600 24 0 Disabled Enabled Enabled Disabled Enabled Disabled Disabled 61036

dscli>

<< Add a LSS and an A volume to the Global Mirror >> dscli> lssession 12
Date/Time: November 11, 2005 6:58:29 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00234I lssession: No Session found. dscli>

dscli> mksession -dev IBM.2107-7520781 -lss IBM.2107-7520781/12 -volume 1200 02


Date/Time: November 11, 2005 7:01:08 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00145I mksession: Session 02 opened successfully. dscli>

dscli> lssession 12
Date/Time: November 11, 2005 7:01:12 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 LSS ID Session Status Volume VolumeStatus PrimaryStatus SecondaryStatus FirstPassComplete AllowCascading ================================================================================================================= 12 02 Normal 1200 Join Pending Primary Copy Pending Secondary Simplex True Disable dscli>

dscli> lssession 10-12


Date/Time: November 11, 2005 7:09:13 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 LSS ID Session Status Volume VolumeStatus PrimaryStatus SecondaryStatus FirstPassComplete AllowCascading ================================================================================================================= 10 02 Normal 1000 Active Primary Copy Pending Secondary Simplex True Disable 10 02 Normal 1001 Active Primary Copy Pending Secondary Simplex True Disable 11 02 Normal 1100 Active Primary Copy Pending Secondary Simplex True Disable 11 02 Normal 1101 Active Primary Copy Pending Secondary Simplex True Disable 12 02 Normal 1200 Active Primary Copy Pending Secondary Simplex True Disable

When you remove an LSS from a Global Mirror environment, you have to first remove all the A volumes on the LSS with the chsession command and then remove the LSS with the rmsession command; see Example 25-26.


Example 25-26 Remove an LSS from a Global Mirror session dscli> lssession 10-12
Date/Time: November 11, 2005 7:15:55 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 LSS ID Session Status Volume VolumeStatus PrimaryStatus SecondaryStatus FirstPassComplete AllowCascadin ======================================================================================================================== 10 02 CG In Progress 1000 Active Primary Copy Pending Secondary Simplex True Disable 10 02 CG In Progress 1001 Active Primary Copy Pending Secondary Simplex True Disable 11 02 CG In Progress 1100 Active Primary Copy Pending Secondary Simplex True Disable 11 02 CG In Progress 1101 Active Primary Copy Pending Secondary Simplex True Disable 12 02 CG In Progress 1200 Active Primary Copy Pending Secondary Simplex True Disable dscli>

dscli>

chsession -dev IBM.2107-7520781 -lss IBM.2107-7520781/12 -action remove -volume 1200 02

Date/Time: November 11, 2005 7:16:35 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00147I chsession: Session 02 successfully modified. dscli>

dscli> lssession 10-12


Date/Time: November 11, 2005 7:16:40 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 LSS ID Session Status Volume VolumeStatus PrimaryStatus SecondaryStatus FirstPassComplete AllowCascadin ======================================================================================================================== 10 02 Normal 1000 Active Primary Copy Pending Secondary Simplex True Disable 10 02 Normal 1001 Active Primary Copy Pending Secondary Simplex True Disable 11 02 CG In Progress 1100 Active Primary Copy Pending Secondary Simplex True Disable 11 02 CG In Progress 1101 Active Primary Copy Pending Secondary Simplex True Disable 12 02 dscli>

dscli> rmsession -dev IBM.2107-7520781 -lss IBM.2107-7520781/12 02


Date/Time: November 11, 2005 7:17:17 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00148W rmsession: Are you sure you want to close session 02? [y/n]:y CMUC00146I rmsession: Session 02 closed successfully. dscli>

dscli> lssession 10-12


Date/Time: November 11, 2005 7:17:23 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
LSS ID Session Status Volume VolumeStatus PrimaryStatus SecondaryStatus FirstPassComplete AllowCascading ======================================================================================================================== 10 02 CG In Progress 1000 Active Primary Copy Pending Secondary Simplex True Disable 10 02 CG In Progress 1001 Active Primary Copy Pending Secondary Simplex True Disable 11 02 CG In Progress 1100 Active Primary Copy Pending Secondary Simplex True Disable 11 02 CG In Progress 1101 Active Primary Copy Pending Secondary Simplex True Disable

25.3.6 Adding and removing a subordinate disk subsystem


In order to add or remove a subordinate storage disk subsystem from an existing Global Mirror environment, you have to stop the Global Mirror session and start it again with or without the subordinate specification. This task is in fact a topology change of the Global Mirror configuration, which requires that you stop Global Mirror first in order to re-start it again with the new configuration; see 22.3.4, Global Mirror environment topology changes on page 331. For examples of Global Mirror stop and start tasks see 25.3.3, Stopping and starting Global Mirror on page 385.
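To make this topology change more concrete, the following hypothetical sketch removes the subordinate DS8000#3 from the configuration that was started in Example 25-10 and later adds it back. The commands are the same rmgmir and mkgmir commands discussed in 25.3.3; only the master/subordinate association at the end of the command changes, and the Global Mirror control paths to the subordinate must still exist when it is added back.

<< Stop the session, naming the existing master/subordinate association >>
dscli> rmgmir -quiet -dev IBM.2107-7520781 -lss IBM.2107-7520781/10 -session 02 IBM.2107-7520781/10:IBM.2107-7503461/90
<< Start the session again without the subordinate >>
dscli> mkgmir -dev IBM.2107-7520781 -lss IBM.2107-7520781/10 -session 02
<< Later: stop again and start with the association to bring the subordinate back in >>
dscli> rmgmir -quiet -dev IBM.2107-7520781 -lss IBM.2107-7520781/10 -session 02
dscli> mkgmir -dev IBM.2107-7520781 -lss IBM.2107-7520781/10 -session 02 IBM.2107-7520781/10:IBM.2107-7503461/90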

25.4 Recovery scenario after local site failure with the DS CLI
The example presented in this section discusses how to perform the required steps to recover from a production site failure using DS CLI commands. For a detailed discussion of the internal Global Mirror considerations, refer to 22.5, Recovery scenario after production site failure on page 336.


For this example we use the configuration that we set up in 25.1, Setting up a Global Mirror environment using the DS CLI on page 368. Figure 25-3 shows the configuration during normal operations.

(The figure shows the application server and DS CLI client at the local site and the application backup server and DS CLI client at the remote site. DS8000#1, -dev IBM.2107-7520781, holds the A volumes 1000-1001 in LSS10, where the Global Mirror master runs, and 1100-1101 in LSS11; they are Global Copy sources in Copy Pending state. DS8000#2, -dev IBM.2107-75ABTV1, holds the B volumes 2000-2001 in LSS20 and 2100-2101 in LSS21, which are Global Copy targets in Copy Pending state and FlashCopy sources, and the C volumes 2200-2201 in LSS22 and 2300-2301 in LSS23 as FlashCopy targets.)

Figure 25-3 Global Mirror example before unplanned production site failure

Summary of the recovery scenario


The typical recovery scenario after the production site failure is:
1. Stop Global Mirror processing.
2. Perform Global Copy Failover from B to A.
3. Verify for valid Consistency Group state.
4. Create consistent data on the B volumes (Reverse FlashCopy from B to C).
5. Re-establish the FlashCopy relationship from B to C.
6. Restart the application at the remote site.

The following sections discuss each of the listed tasks of the recovery scenario.

25.4.1 Stopping Global Mirror processing


Depending on the state of the Global Mirror local disk subsystem where the master was running, you might be able to stop the Global Mirror session. The rmgmir command stops Global Mirror processing. You give this command to the DS HMC connected to the local DS8000 (DS8000#1); see Example 25-27 on page 392.
Example 25-27 Terminate Global Mirror dscli> rmgmir -dev IBM.2107-7520781 -lss IBM.2107-7520781/10 -session 02 Date/Time: November 11, 2005 11:28:47 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00166W rmgmir: Are you sure you want to stop the Global Mirror session 02:? [y/n]:y CMUC00165I rmgmir: Global Mirror for session 02 successfully stopped.


25.4.2 Performing Global Copy Failover from B to A


A Failover operation (Copy Services Failover function) on the Global Copy target B volumes turns these volumes into source volumes and also suspends them immediately. You can use the failoverpprc command to do this. The Failover operation sets the stage for change recording when application updates start changing the B volumes. Change recording in turn allows you to resynchronize just the changes from the B to the A volumes later, before returning to the local site. At this stage, however, the B volumes do not yet contain consistent data; we have only changed their Global Copy state from target to suspended source. Figure 25-4 shows the DS8000 environment after the Failover operation.

(The figure shows the same configuration as Figure 25-3 after the failoverpprc command has been issued from B to A. The A volumes 1000-1001 and 1100-1101 on DS8000#1, -dev IBM.2107-7520781, are still Global Copy sources in Copy Pending state, but the B volumes 2000-2001 and 2100-2101 on DS8000#2, -dev IBM.2107-75ABTV1, are now suspended Global Copy sources; they remain the FlashCopy sources for the C volumes 2200-2201 and 2300-2301.)

Figure 25-4 Site swap scenario after failoverpprc

Example 25-28 shows the command for this operation. You can check the result with the lspprc command.
Example 25-28 failoverpprc command example dscli> lspprc 2000-2001 2100-2101 Date/Time: November 11, 2005 11:30:34 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ========================================================================================================= 1000:2000 Target Copy Pending Global Copy 10 unknown Disabled Invalid 1001:2001 Target Copy Pending Global Copy 10 unknown Disabled Invalid 1100:2100 Target Copy Pending Global Copy 11 unknown Disabled Invalid 1101:2101 Target Copy Pending Global Copy 11 unknown Disabled Invalid dscli> dscli> failoverpprc -remotedev IBM.2107-7520781 -type gcp 2000-2001:1000-1001 2100-2101:1100-1101 Date/Time: November 11, 2005 11:30:53 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 CMUC00196I failoverpprc: Remote Mirror and Copy pair 2000:1000 successfully reversed. CMUC00196I failoverpprc: Remote Mirror and Copy pair 2001:1001 successfully reversed.


CMUC00196I failoverpprc: Remote Mirror and Copy pair 2100:1100 successfully reversed. CMUC00196I failoverpprc: Remote Mirror and Copy pair 2101:1101 successfully reversed. dscli> dscli> lspprc 2000-2001 2100-2101 Date/Time: November 11, 2005 11:30:58 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ==================================================================================================== 2000:1000 Suspended Host Source Global Copy 20 unknown Disabled True 2001:1001 Suspended Host Source Global Copy 20 unknown Disabled True 2100:1100 Suspended Host Source Global Copy 21 unknown Disabled True 2101:1101 Suspended Host Source Global Copy 21 unknown Disabled True

25.4.3 Verifying for valid Consistency Group state


Now you have to investigate whether all FlashCopy relationships are in a consistent state. This means that you have to query all FlashCopy relationships between B and C, which are part of the Consistency Group, to determine the state of each relationship. Global Mirror might have been in the middle of forming a Consistency Group, and FlashCopy might not have completed the creation of a complete set of consistent C volumes. Each FlashCopy pair needs a FlashCopy query to identify its state: you use the lsflash command to check the SequenceNum and Revertible fields. Example 25-29 shows that the Revertible status of all the FlashCopy pairs is Disabled (that is, non-revertible) and that the SequenceNums of all relationships are equal. Therefore, you do not need to take any action in this case. You can find a detailed discussion of this verification process in 22.5.4, Verifying for valid Consistency Group state on page 338.
Example 25-29 Verify the FlashCopy state dscli> lsflash 2000-2001 2100-2101 Date/Time: November 11, 2005 11:31:12 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
ID SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy ==================================================================================================================================== 2000:2200 20 4374ABB7 300 Disabled Enabled Enabled Disabled Enabled Disabled Disabled 2001:2201 20 4374ABB7 300 Disabled Enabled Enabled Disabled Enabled Disabled Disabled 2100:2300 21 4374ABB7 300 Disabled Enabled Enabled Disabled Enabled Disabled Disabled 2101:2301 21 4374ABB7 300 Disabled Enabled Enabled Disabled Enabled Disabled Disabled

Example 25-30 shows a hypothetical situation where all FlashCopy relationships are in the revertible state and have the same sequence number. In this case, you should execute a revertflash to all the FlashCopy relationships.
Example 25-30 All revertible and SeqNum is equal, then revertflash dscli> lsflash 2000-2001 2100-2101 Date/Time: November 14, 2005 11:20:48 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
ID SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy ==================================================================================================================================== 2000:2200 20 437895B2 300 Disabled Enabled Enabled Enabled Disabled Disabled Disabled 2001:2201 20 437895B2 300 Disabled Enabled Enabled Enabled Disabled Disabled Disabled 2100:2300 21 437895B2 300 Disabled Enabled Enabled Enabled Disabled Disabled Disabled 2101:2301 21 437895B2 300 Disabled Enabled Enabled Enabled Disabled Disabled Disabled

dscli> dscli> revertflash 2000-2001 2100-2101 Date/Time: November 14, 2005 11:21:09 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
CMUC00171I revertflash: FlashCopy volume pair 2000:2000 successfully reverted.
CMUC00171I revertflash: FlashCopy volume pair 2001:2001 successfully reverted.
CMUC00171I revertflash: FlashCopy volume pair 2100:2100 successfully reverted.
CMUC00171I revertflash: FlashCopy volume pair 2101:2101 successfully reverted.

dscli>


dscli> lsflash 2000-2001 2100-2101 Date/Time: November 14, 2005 11:21:11 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
ID SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy ==================================================================================================================================== 2000:2200 20 437895B2 300 Disabled Enabled Enabled Disabled Enabled Disabled Disabled 2001:2201 20 437895B2 300 Disabled Enabled Enabled Disabled Enabled Disabled Disabled 2100:2300 21 437895B2 300 Disabled Enabled Enabled Disabled Enabled Disabled Disabled 2101:2301 21 437895B2 300 Disabled Enabled Enabled Disabled Enabled Disabled Disabled dscli>

If some FlashCopy pairs are revertible and others are not revertible while their sequence numbers are equal, you should execute a commitflash command to the FlashCopy relationships that have the revertible status; see Example 25-31. When the FlashCopy relationship is not in a revertible state, the commit operation is not possible. When you give this command to FlashCopy pairs that are non-revertible, you are going to see only an error message, but no action is performed. To make the task easier, you can run a commitflash command to all FlashCopy pairs. In Example 25-31, 2000 and 2001 are not in the revertible state; therefore, we see error messages.
Example 25-31 Some are revertible and SeqNum are equal, then commitflash dscli> lsflash 2000-2001 2100-2101 Date/Time: November 14, 2005 11:18:21 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
ID SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy ==================================================================================================================================== 2000:2200 20 437895B2 300 Disabled Enabled Enabled Disabled Enabled Disabled Disabled 2001:2201 20 437895B2 300 Disabled Enabled Enabled Disabled Enabled Disabled Disabled 2100:2300 21 437895B2 300 Disabled Enabled Enabled Enabled Disabled Disabled Disabled 2101:2301 21 437895B2 300 Disabled Enabled Enabled Enabled Disabled Disabled Disabled

dscli> dscli> commitflash 2000-2001 2100-2101 Date/Time: November 14, 2005 11:18:56 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
CMUN03054E commitflash: 2000:2000: Copy Services operation failure: invalid revertible specification
CMUN03054E commitflash: 2001:2001: Copy Services operation failure: invalid revertible specification
CMUC00170I commitflash: FlashCopy volume pair 2100:2100 successfully committed.
CMUC00170I commitflash: FlashCopy volume pair 2101:2101 successfully committed.

dscli> dscli> lsflash 2000-2001 2100-2101 Date/Time: November 14, 2005 11:19:04 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
ID SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy ==================================================================================================================================== 2000:2200 20 437895B2 300 Disabled Enabled Enabled Disabled Enabled Disabled Disabled 2001:2201 20 437895B2 300 Disabled Enabled Enabled Disabled Enabled Disabled Disabled 2100:2300 21 437895B2 300 Disabled Enabled Enabled Disabled Enabled Disabled Disabled 2101:2301 21 437895B2 300 Disabled Enabled Enabled Disabled Enabled Disabled Disabled

Example 25-32 shows a hypothetical situation where some FlashCopy pairs are revertible and others are non-revertible. The sequence numbers of the revertible pairs are equal among themselves, and so are those of the non-revertible pairs, but the two groups do not match. In this case, you should issue a revertflash command to the FlashCopy relationships that have the revertible status. When a FlashCopy relationship is non-revertible, the revert operation is not possible: issuing the command against non-revertible FlashCopy pairs produces only an error message, and no action is performed. To make the task easier, you can run a revertflash command against all FlashCopy pairs. In Example 25-32, 2000 and 2001 are not in the revertible state; therefore, we see error messages.


Example 25-32 Some are revertible and SeqNum are not equal, then revertflash dscli> lsflash 2000-2001 2100-2101 Date/Time: November 14, 2005 10:49:47 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
ID SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy ==================================================================================================================================== 2000:2200 20 437895B2 300 Disabled Enabled Enabled Disabled Enabled Disabled Disabled 2001:2201 20 437895B2 300 Disabled Enabled Enabled Disabled Enabled Disabled Disabled 2100:2300 21 437895B3 300 Disabled Enabled Enabled Enabled Disabled Disabled Disabled 2101:2301 21 437895B3 300 Disabled Enabled Enabled Enabled Disabled Disabled Disabled

dscli> dscli> revertflash 2000-2001:2200-2201 2100-2101:2300-2301 Date/Time: November 14, 2005 11:13:28 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
CMUC00083E revertflash: Invalid volume 2000-2001:2200-2201.

dscli> dscli> revertflash 2000-2001 2100-2101 Date/Time: November 14, 2005 11:14:14 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
CMUN03054E revertflash: 2000:2000: Copy Services operation failure: invalid revertible specification
CMUN03054E revertflash: 2001:2001: Copy Services operation failure: invalid revertible specification
CMUC00171I revertflash: FlashCopy volume pair 2100:2100 successfully reverted.
CMUC00171I revertflash: FlashCopy volume pair 2101:2101 successfully reverted.

dscli> dscli> lsflash 2000-2001 2100-2101 Date/Time: November 14, 2005 11:14:25 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
ID SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy ==================================================================================================================================== 2000:2200 20 437895B2 300 Disabled Enabled Enabled Disabled Enabled Disabled Disabled 2001:2201 20 437895B2 300 Disabled Enabled Enabled Disabled Enabled Disabled Disabled 2100:2300 21 437895B2 300 Disabled Enabled Enabled Disabled Enabled Disabled Disabled 2101:2301 21 437895B2 300 Disabled Enabled Enabled Disabled Enabled Disabled Disabled

After these actions, all FlashCopy pairs are non-revertible and all sequence numbers are equal, so now you can proceed to the next step.
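The decision logic just described lends itself to a small script. The following is a minimal sketch, assuming the DS CLI can be driven in single-shot mode through a connection profile; the profile name remote.profile and the awk field positions (taken from the lsflash layout shown in Example 25-29, where SequenceNum is the third field and Revertible the eighth) are assumptions and not part of the original procedure.

#!/bin/sh
# Hedged sketch: automate the commit-or-revert decision of 25.4.3.
DSCLI="dscli -cfg remote.profile"        # assumed connection profile for the remote DS8000
PAIRS="2000-2001 2100-2101"

# Keep only the data rows of lsflash (IDs of the form xxxx:yyyy)
rows=$($DSCLI lsflash $PAIRS | awk '$1 ~ /^[0-9a-fA-F]+:[0-9a-fA-F]+$/')

rev_seq=$(echo "$rows"    | awk '$8 == "Enabled"  {print $3}' | sort -u)   # revertible pairs
nonrev_seq=$(echo "$rows" | awk '$8 == "Disabled" {print $3}' | sort -u)   # non-revertible pairs

if [ -z "$rev_seq" ]; then
    echo "All pairs are non-revertible: nothing to do."
elif [ -z "$nonrev_seq" ]; then
    echo "All pairs are revertible with sequence number(s) $rev_seq: reverting."
    $DSCLI revertflash $PAIRS
elif [ "$rev_seq" = "$nonrev_seq" ]; then
    echo "Mixed state, equal sequence numbers: committing the revertible pairs."
    $DSCLI commitflash $PAIRS            # errors on the non-revertible pairs are expected
else
    echo "Mixed state, unequal sequence numbers: reverting the revertible pairs."
    $DSCLI revertflash $PAIRS            # errors on the non-revertible pairs are expected
fi

The comparison of the two sequence-number sets reproduces the four cases walked through in Example 25-29 to Example 25-32.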

25.4.4 Reversing FlashCopy from B to C


At this point only the C volumes (logical data) comprise a set of consistent data volumes, although the physical data of the C volumes might be spread over the physical B and C volumes. The B volumes (logical data) do not provide consistent data because Global Copy does not provide data consistency. We want to have two good copies of the data at the recovery site: a consistent set of volumes to work with, while still keeping a good copy to which we can resort if needed. The next step is therefore to create the same consistency on the B volumes as we have on the C volumes; see Figure 25-5 on page 397. This can be achieved with the reverseflash -fast command. This operation is called Fast Reverse Restore (FRR). You have to use the -tgtpprc parameter with the reverseflash -fast command because the B volume is also a Global Copy source at this step. Note: Though the Fast Reverse Restore operation starts a background copy from the C to the B volumes, in the reverseflash command you have to specify the B volumes as the FlashCopy sources and the C volumes as the FlashCopy targets.


Figure 25-5 shows the remote DS8000 environment after the reverseflash command was issued. From the moment this command executes until the C to B background copy completes, the C volumes are the FlashCopy source and the B volumes are the FlashCopy target.
Figure 25-5 Site swap scenario after the reverseflash command was issued (background copy from C to B in progress; A: Global Copy source, copy pending; B: Global Copy source, suspended, FlashCopy target; C: FlashCopy source)

Example 25-33 shows the results of the reverseflash command. The lsflash command shows volume 2200 (C volume) as the FlashCopy source.
Example 25-33 The reverseflash B to C dscli> reverseflash -fast -tgtpprc 2000-2001:2200-2201 2100-2101:2300-2301 Date/Time: November 11, 2005 11:44:33 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
CMUC00169I reverseflash: FlashCopy volume pair 2000:2200 successfully reversed.
CMUC00169I reverseflash: FlashCopy volume pair 2001:2201 successfully reversed.
CMUC00169I reverseflash: FlashCopy volume pair 2100:2300 successfully reversed.
CMUC00169I reverseflash: FlashCopy volume pair 2101:2301 successfully reversed.

dscli> dscli> lsflash -l 2000-2001 2100-2101 Date/Time: November 11, 2005 11:44:45 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
ID SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy OutOfSyncTracks DateCreated ================================================================================================================================================================ 2200:2000 22 4374ABB7 300 Enabled Disabled Disabled Disabled Enabled Enabled Enabled 40102 Fri Nov 11 2201:2001 22 4374ABB7 300 Enabled Disabled Disabled Disabled Enabled Enabled Enabled 41505 Fri Nov 11 2300:2100 23 4374ABB7 300 Enabled Disabled Disabled Disabled Enabled Enabled Enabled 42152 Fri Nov 11 2301:2101 23 4374ABB7 300 Enabled Disabled Disabled Disabled Enabled Enabled Enabled 40489 Fri Nov 11

dscli> dscli> lsflash 2000-2001 2100-2101 Date/Time: November 11, 2005 11:44:52 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
ID SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy ==================================================================================================================================== 2200:2000 22 4374ABB7 300 Enabled Disabled Disabled Disabled Enabled Enabled Enabled 2201:2001 22 4374ABB7 300 Enabled Disabled Disabled Disabled Enabled Enabled Enabled 2300:2100 23 4374ABB7 300 Enabled Disabled Disabled Disabled Enabled Enabled Enabled 2301:2101 23 4374ABB7 300 Enabled Disabled Disabled Disabled Enabled Enabled Enabled dscli>


The above Fast Reverse Restore (FRR) operation does a background copy of all tracks that changed on the B volumes since the last CG formation. This results in the B volumes becoming equal to the image that was present on the C volumes. This is the logical view. From the physical data placement point of view, the C volumes do not have meaningful data after the FlashCopy relationship ends. Because you do not specify the -persist parameter, the FlashCopy relationship ends after the background copy from C to B completes; see Figure 25-6.
Figure 25-6 After the completion of the background copy (A: Global Copy source, copy pending; B: Global Copy source, suspended, no FlashCopy relationship; C: no FlashCopy relationship)

You have to wait until all Fast Reverse Restore operations and their background copies complete successfully before you proceed with the next step. Again, when the background copy completes, the FlashCopy relationship ends. Therefore, you can determine when all Fast Reverse Restore operations have completed by checking whether any FlashCopy relationships remain; see Example 25-34. This example shows the result of the lsflash command after the reverseflash background copy completes.
Example 25-34 The lsflash command to confirm the completion of the background copy dscli> lsflash 2000-2001 2100-2101 Date/Time: November 11, 2005 11:57:17 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 CMUC00234I lsflash: No Flash Copy found.
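Rather than re-issuing lsflash by hand, a small polling loop can wait for the Fast Reverse Restore background copy to finish. This is only a sketch; the connection profile name and the 30-second poll interval are assumptions.

#!/bin/sh
# Poll until lsflash no longer reports any FlashCopy relationships for these pairs,
# which is the sign that all Fast Reverse Restore background copies have completed
# (compare the CMUC00234I message in Example 25-34).
DSCLI="dscli -cfg remote.profile"
PAIRS="2000-2001 2100-2101"

while $DSCLI lsflash $PAIRS | grep -q '^[0-9a-fA-F][0-9a-fA-F]*:[0-9a-fA-F]'; do
    echo "Background copy still running..."
    sleep 30
done
echo "Fast Reverse Restore complete: the FlashCopy relationships are gone."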


25.4.5 Re-establishing the FlashCopy relationship from B to C


Figure 25-7 After mkflash B to C (A: Global Copy source, copy pending; B: Global Copy source, suspended, FlashCopy source; C: FlashCopy target)

In this step you re-create the FlashCopy relationships between the B and C volumes, as they were at the beginning when you set up the Global Mirror environment; see Figure 25-7. This step is in preparation for returning later to production at the local site. The mkflash command used in this step is illustrated in Example 25-35, and is exactly the same FlashCopy command you might have used when you initially created the Global Mirror environment. See 21.3.4, Introducing FlashCopy on page 316. In a disaster situation you might prefer not to use the -nocp option for the FlashCopy from B to C: performing a full background copy removes the FlashCopy copy-on-write overhead when the application starts.
Example 25-35 Re-establish the FlashCopy relationships from B to C dscli> mkflash -tgtinhibit -nocp -record 2000-2001:2200-2201 2100-2101:2300-2301 Date/Time: November 11, 2005 11:59:31 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 CMUC00137I mkflash: FlashCopy pair 2000:2200 successfully created. CMUC00137I mkflash: FlashCopy pair 2001:2201 successfully created. CMUC00137I mkflash: FlashCopy pair 2100:2300 successfully created. CMUC00137I mkflash: FlashCopy pair 2101:2301 successfully created. dscli> dscli> lsflash 2000-2001 2100-2101 Date/Time: November 11, 2005 11:59:38 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
ID SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy ==================================================================================================================================== 2000:2200 20 0 300 Disabled Enabled Enabled Disabled Enabled Disabled Disabled 2001:2201 20 0 300 Disabled Enabled Enabled Disabled Enabled Disabled Disabled 2100:2300 21 0 300 Disabled Enabled Enabled Disabled Enabled Disabled Disabled 2101:2301 21 0 300 Disabled Enabled Enabled Disabled Enabled Disabled Disabled


25.4.6 Restarting the application at the remote site


Depending on your operating system, it might be necessary to rescan the Fibre Channel devices (to remove stale device objects and recognize the new source volumes) and mount the B volumes, and then start all applications at the recovery site. Once the application has started at the remote site, all write I/Os to the new source volumes (that is, the B volumes) are tracked in bitmaps as a result of the Failover operation. Figure 25-8 shows this environment.
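As an illustration only, the host-side part of this step could look as follows on an AIX backup server; the volume group name datavg, the hdisk number, and the mount point are hypothetical, and other operating systems need their own rescan and mount procedure.

# Hedged sketch of the host-side actions on the AIX backup server.
cfgmgr                      # rescan Fibre Channel devices so the B volumes show up
importvg -y datavg hdisk4   # import the volume group from one of the B volumes
                            # (skip if the volume group is already known to this host)
varyonvg datavg             # bring the volume group online
mount /data                 # mount the file system(s)
# ... start the applications at the recovery site ...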

Figure 25-8 After the application start (application I/O now runs against the B volumes; A: Global Copy source, copy pending; B: Global Copy source, suspended, FlashCopy source; C: FlashCopy target)

25.5 Returning to the local site


The return to the normal production site typically follows this scenario:
1. Create paths from B to A.
2. Perform Global Copy Failback from B to A.
3. Query for the Global Copy first pass completion.
4. Quiesce the application at the remote site.
5. Query the Out Of Sync Tracks until it shows zero.
6. Create paths from A to B if they do not exist.
7. Perform Global Copy Failover from A to B.
8. Perform Global Copy Failback from A to B.
9. Start Global Mirror.
10.Start the application at the local site.
In the following sections we discuss each of these steps. A condensed DS CLI command sketch of this sequence follows.
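The following sketch condenses the sequence into the DS CLI commands that the next sections describe in detail. The device IDs, WWNN, port pairs, LSS numbers, and volume ranges are the ones used throughout this chapter's examples; the comments indicate which DS HMC each command is issued to. Treat this as an outline rather than a ready-to-run script.

# Remote DS HMC (DS8000#2): paths B to A, then failback B to A
mkpprcpath -remotedev IBM.2107-7520781 -remotewwnn 5005076303FFC1A5 -srclss 20 -tgtlss 10 i0010:i0143 i0140:i0213
mkpprcpath -remotedev IBM.2107-7520781 -remotewwnn 5005076303FFC1A5 -srclss 21 -tgtlss 11 i0010:i0143 i0140:i0213
failbackpprc -remotedev IBM.2107-7520781 -type gcp 2000-2001:1000-1001 2100-2101:1100-1101
# Remote DS HMC: monitor first pass / Out Of Sync Tracks, then quiesce the application
lspprc -l 2000-2001 2100-2101
# Local DS HMC (DS8000#1): check or create paths A to B, then failover and failback A to B
lspprcpath -fullid 10-11
failoverpprc -remotedev IBM.2107-75ABTV1 -type gcp 1000-1001:2000-2001 1100-1101:2100-2101
failbackpprc -remotedev IBM.2107-75ABTV1 -type gcp 1000-1001:2000-2001 1100-1101:2100-2101
# Local DS HMC: restart the Global Mirror session, then restart the application locally
mkgmir -dev IBM.2107-7520781 -lss IBM.2107-7520781/10 -session 02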


25.5.1 Creating paths from B to A


The local site is operational again. If the local site did not lose the data at the time when the swap to the remote site occurred, then it is possible to re-synchronize the changed data from B to A in preparation for returning to the local site. Before doing this failback process, we need paths to be defined from B to A. For this task you use the lsavailpprcport, mkpprcpath, and lspprcpath commands; see Example 25-36. You give these commands to the DS HMC connected to the remote DS8000#2.
Example 25-36 Create paths from B to A dscli> lsavailpprcport -l -remotedev IBM.2107-7520781 -remotewwnn 5005076303FFC1A5 20:10
Date/Time: October 27, 2005 8:37:57 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 Local Port Attached Port Type Switch ID Switch Port =================================================== I0010 I0143 FCP NA NA I0140 I0213 FCP NA NA dscli>

dscli> lsavailpprcport -l -remotedev IBM.2107-7520781 -remotewwnn 5005076303FFC1A5 21:11


Date/Time: October 27, 2005 8:38:03 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 Local Port Attached Port Type Switch ID Switch Port =================================================== I0010 I0143 FCP NA NA I0140 I0213 FCP NA NA dscli>

dscli> mkpprcpath -remotedev IBM.2107-7520781 -remotewwnn 5005076303FFC1A5 -srclss 20 -tgtlss 10 i0010:i0143 i0140:i0213
Date/Time: October 27, 2005 8:39:26 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 CMUC00149I mkpprcpath: Remote Mirror and Copy path 20:10 successfully established. dscli>

dscli> mkpprcpath -remotedev IBM.2107-7520781 -remotewwnn 5005076303FFC1A5 -srclss 21 -tgtlss 11 i0010:i0143 i0140:i0213
Date/Time: October 27, 2005 8:39:38 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 CMUC00149I mkpprcpath: Remote Mirror and Copy path 21:11 successfully established. dscli>

dscli> lspprcpath -fullid 20-21


Date/Time: November 12, 2005 12:03:24 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 Src Tgt State SS Port Attached Port Tgt WWNN =================================================================================================================== IBM.2107-75ABTV1/20 IBM.2107-7520781/10 Success FF10 IBM.2107-75ABTV1/I0010 IBM.2107-7520781/I0143 5005076303FFC1A5 IBM.2107-75ABTV1/20 IBM.2107-7520781/10 Success FF10 IBM.2107-75ABTV1/I0140 IBM.2107-7520781/I0213 5005076303FFC1A5 IBM.2107-75ABTV1/21 IBM.2107-7520781/11 Success FF11 IBM.2107-75ABTV1/I0010 IBM.2107-7520781/I0143 5005076303FFC1A5 IBM.2107-75ABTV1/21 IBM.2107-7520781/11 Success FF11 IBM.2107-75ABTV1/I0140 IBM.2107-7520781/I0213 5005076303FFC1A5

25.5.2 Performing Global Copy Failback from B to A


After defining the paths from B to A, you use the failbackpprc command to re-synchronize the changed data from B to A. The failbackpprc command is issued with the B volumes as the source and the A volumes as the target; see Example 25-37. This process changes the A volumes from their previous (source) copy pending state to target copy pending; see Figure 25-9. You have to specify the -type gcp parameter on the failbackpprc command to request Global Copy mode.


Figure 25-9 Site swap scenario after Global Copy Failback from B to A (application running at the remote site; changed data sent from B to A; A: Global Copy target, copy pending; B: Global Copy source, copy pending, FlashCopy source; C: FlashCopy target)

The failbackpprc initialization mode re-synchronizes the volumes in this manner:
- If a volume at the production site is in the simplex state (no relationship), all of the data for that volume is sent from the recovery site to the production site.
- If a volume at the production site is in the copy pending or suspended state and has no changed tracks, only the data modified on the volume at the recovery site is sent to the volume at the production site.
- If a volume at the production site is in a suspended state and has tracks on which data has been written, the volume at the recovery site determines which tracks were modified at either site and sends both the tracks changed at the production site and the tracks marked at the recovery site. The volume at the production site becomes a write-inhibited target volume.
This action is performed on an individual volume basis. Example 25-37 shows the commands issued in our example: first we listed the status of the B volumes and then performed the Global Copy Failback operation. The command is given to the DS HMC connected to the remote DS8000#2.
Example 25-37 Perform Global Copy Failback from B to A << Before the failbackpprc B to A >> << B volume status >> dscli> lspprc 2000-2001 2100-2101 Date/Time: November 11, 2005 11:30:58 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ==================================================================================================== 2000:1000 Suspended Host Source Global Copy 20 unknown Disabled True 2001:1001 Suspended Host Source Global Copy 20 unknown Disabled True 2100:1100 Suspended Host Source Global Copy 21 unknown Disabled True 2101:1101 Suspended Host Source Global Copy 21 unknown Disabled True


<< The failbackpprc B to A >> dscli> failbackpprc -remotedev IBM.2107-7520781 -type gcp 2000-2001:1000-1001 2100-2101:1100-1101 Date/Time: November 12, 2005 12:04:37 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 CMUC00197I failbackpprc: Remote Mirror and Copy pair 2000:1000 successfully failed back. CMUC00197I failbackpprc: Remote Mirror and Copy pair 2001:1001 successfully failed back. CMUC00197I failbackpprc: Remote Mirror and Copy pair 2100:1100 successfully failed back. CMUC00197I failbackpprc: Remote Mirror and Copy pair 2101:1101 successfully failed back. << B volume status >> dscli> lspprc 2000-2001 2100-2101 Date/Time: November 12, 2005 12:05:32 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ================================================================================================== 2000:1000 Copy Pending Global Copy 20 unknown Disabled False 2001:1001 Copy Pending Global Copy 20 unknown Disabled False 2100:1100 Copy Pending Global Copy 21 unknown Disabled False 2101:1101 Copy Pending Global Copy 21 unknown Disabled False << A volume status >> dscli> lspprc 1000-1001 1100-1101 Date/Time: November 12, 2005 12:06:01 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ========================================================================================================= 2000:1000 Target Copy Pending Global Copy 20 unknown Disabled Invalid 2001:1001 Target Copy Pending Global Copy 20 unknown Disabled Invalid 2100:1100 Target Copy Pending Global Copy 21 unknown Disabled Invalid 2101:1101 Target Copy Pending Global Copy 21 unknown Disabled Invalid

Notes on the failbackpprc command


If the server at the production site is still online and accessing the disk, or if it crashed while holding a SCSI persistent reserve on the previous source disk, the failbackpprc command fails because the production server still locks the target with that reserve. After the SCSI persistent reserve is released, with the varyoffvg command in the case of AIX, the failbackpprc command completes successfully. The failbackpprc command also has a -resetreserve parameter, which resets the reserved state so that the failback operation can complete. In a failback after a real disaster you can use this parameter, because the server might have gone down while the SCSI persistent reserve was set on the A volume. If the server at the production site is operational, for example when you are testing the Global Mirror failback operation, you must not use this parameter: the server at the local site still owns the A volume and might be using it, and the failback operation would suddenly change the contents of the volume. This could result in corruption of the server's file systems.
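The following hedged sketch illustrates the two alternatives; the volume group name is hypothetical, and -resetreserve should only ever be used when the production server is known to be down, as explained above.

# Alternative 1: the production (AIX) server is reachable - release the reserve there first
varyoffvg datavg            # releases the SCSI persistent reserve on the A volumes
# then, on the remote DS HMC (DS8000#2):
failbackpprc -remotedev IBM.2107-7520781 -type gcp 2000-2001:1000-1001 2100-2101:1100-1101

# Alternative 2: the production server is down after a real disaster - force the reserve reset
failbackpprc -remotedev IBM.2107-7520781 -type gcp -resetreserve 2000-2001:1000-1001 2100-2101:1100-1101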

25.5.3 Querying for the Global Copy first pass completion


The first pass of Global Copy is the first phase of the re-synchronization process, when all the data that has changed while the B volumes were suspended is copied to the A volumes. As long as the first pass copy process continues, the Out Of Sync Tracks does not show zero. Therefore, depending on your failback scenario, you can continue to run the application at the remote site until the Global Copy first pass process completes.


You can query this status with the lspprc command; see Example 25-38. The First Pass Status field indicates the status of the first pass, where True means that the first pass has completed.
Example 25-38 Query for the Global Copy first pass completion dscli> lspprc -l 2000-2001 2100-2101 Date/Time: November 12, 2005 12:06:11 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
ID State Reason Type Out Of Sync Tracks Tgt Read Src Cascade Tgt Cascade Date Suspended SourceLSS Timeout (secs) Critical Mode First Pass Status =============================================================================================================================================================== 2000:1000 Copy Pending Global Copy 0 Disabled Disabled invalid 20 unknown Disabled True 2001:1001 Copy Pending Global Copy 0 Disabled Disabled invalid 20 unknown Disabled True 2100:1100 Copy Pending Global Copy 0 Disabled Disabled invalid 21 unknown Disabled True 2101:1101 Copy Pending Global Copy 0 Disabled Disabled invalid 21 unknown Disabled True

dscli>

25.5.4 Quiescing the application at the remote site


Before returning to normal operation at the local site, the application (still updating the B volumes at the recovery site) must be quiesced so that no further write I/O updates the B volumes. Depending on the host operating system, it might be necessary to dismount the B volumes.

25.5.5 Querying the Out Of Sync Tracks until the result shows zero
After quiescing the application, wait until the Out Of Sync Tracks count for the Global Copy pairs shows zero to ensure that all remaining updates have been copied from the B to the A volumes. You can check this status with the lspprc -l command; see Example 25-39. You have to issue this command to the DS HMC connected to the remote DS8000 (DS8000#2).
Example 25-39 Query the Global Copy Out Of Sync Tracks until the result shows zero dscli> lspprc -l 2000-2001 2100-2101 Date/Time: November 12, 2005 12:06:11 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
ID State Reason Type OutOfSyncTracks Tgt Read Src Cascade Tgt Cascade Date Suspended SourceLSS Timeout (secs) Critical Mode First Pass ================================================================================================================================================================ 2000:1000 Copy Pending Global Copy 0 Disabled Disabled invalid 20 unknown Disabled True 2001:1001 Copy Pending Global Copy 0 Disabled Disabled invalid 20 unknown Disabled True 2100:1100 Copy Pending Global Copy 0 Disabled Disabled invalid 21 unknown Disabled True 2101:1101 Copy Pending Global Copy 0 Disabled Disabled invalid 21 unknown Disabled True dscli>
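A polling loop such as the following sketch can wait for the drain to finish; the connection profile, the poll interval, and the way the Out Of Sync Tracks value is picked out of the lspprc -l output (the first purely numeric field of each data row) are assumptions.

#!/bin/sh
# Poll the remote DS HMC until every Global Copy pair reports zero Out Of Sync Tracks.
DSCLI="dscli -cfg remote.profile"
PAIRS="2000-2001 2100-2101"

while :; do
    left=$($DSCLI lspprc -l $PAIRS | awk '
        $1 ~ /^[0-9a-fA-F]+:[0-9a-fA-F]+$/ {
            for (i = 2; i <= NF; i++)            # first numeric field of a data row
                if ($i ~ /^[0-9]+$/) { sum += $i; break }
        }
        END { print sum + 0 }')
    [ "$left" -eq 0 ] && break
    echo "Out Of Sync Tracks remaining: $left"
    sleep 30
done
echo "All Global Copy pairs are fully drained."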

25.5.6 Creating paths from A to B if they do not exist


Most likely there are no paths from A to B at this point. You can check the current path status with the lspprcpath command. We recommend that you run this command with the -fullid flag so that you get fully qualified IDs in the output report. The fully qualified ID information helps you identify whether you have paths between the correct DS8000s (DS8000#1 to DS8000#2 in this example). You have to run this command on the DS HMC connected to the local DS8000 (DS8000#1). In our example, the paths were still available; see Example 25-40. If there are no available paths, you must define them now using the mkpprcpath command; Example 25-1 on page 369 shows how to do this task.
Example 25-40 Check available paths from A to B dscli> lspprcpath -fullid 10-11 Date/Time: November 12, 2005 12:10:36 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
Src Tgt State SS Port Attached Port Tgt WWNN =================================================================================================================== IBM.2107-7520781/10 IBM.2107-75ABTV1/20 Success FF20 IBM.2107-7520781/I0143 IBM.2107-75ABTV1/I0010 5005076303FFC663


IBM.2107-7520781/10 IBM.2107-75ABTV1/20 Success FF20 IBM.2107-7520781/I0213 IBM.2107-75ABTV1/I0140 5005076303FFC663 IBM.2107-7520781/11 IBM.2107-75ABTV1/21 Success FF21 IBM.2107-7520781/I0143 IBM.2107-75ABTV1/I0010 5005076303FFC663 IBM.2107-7520781/11 IBM.2107-75ABTV1/21 Success FF21 IBM.2107-7520781/I0213 IBM.2107-75ABTV1/I0140 5005076303FFC663

25.5.7 Performing Global Copy Failover from A to B


In order to return to the original configuration we have to return the A volumes to their original Global Copy (source) copy pending volume state. This is a two-step procedure. First, the failoverpprc command converts the state of the A volumes from target copy pending to (source) suspended. The state of the B volumes is preserved; see Figure 25-10.

Figure 25-10 Site swap scenario after Global Copy Failover from A to B (application still running at the remote site; A: Global Copy source, suspended; B: Global Copy source, copy pending, FlashCopy source; C: FlashCopy target)

Example 25-41 shows the result of the failoverpprc command we used in our example, and the volume state after this command is issued. You have to give this command to the DS HMC connected to the local DS8000 (DS8000#1).
Example 25-41 Global Copy Failover from A to B << DS8000 #1 >> dscli> failoverpprc -remotedev IBM.2107-75ABTV1 -type gcp 1000-1001:2000-2001 1100-1101:2100-2101 Date/Time: November 12, 2005 12:14:28 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00196I failoverpprc: Remote Mirror and Copy pair 1000:2000 successfully reversed. CMUC00196I failoverpprc: Remote Mirror and Copy pair 1001:2001 successfully reversed. CMUC00196I failoverpprc: Remote Mirror and Copy pair 1100:2100 successfully reversed. CMUC00196I failoverpprc: Remote Mirror and Copy pair 1101:2101 successfully reversed. dscli> dscli> lspprc 1000-1001 1100-1101 Date/Time: November 12, 2005 12:14:34 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ====================================================================================================


1000:2000 Suspended Host Source Global Copy 10 unknown Disabled True
1001:2001 Suspended Host Source Global Copy 10 unknown Disabled True
1100:2100 Suspended Host Source Global Copy 11 unknown Disabled True
1101:2101 Suspended Host Source Global Copy 11 unknown Disabled True

<< DS8000 #2 >> dscli> lspprc 2000-2001 2100-2101 Date/Time: November 12, 2005 12:14:41 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ================================================================================================== 2000:1000 Copy Pending Global Copy 20 unknown Disabled True 2001:1001 Copy Pending Global Copy 20 unknown Disabled True 2100:1100 Copy Pending Global Copy 21 unknown Disabled True 2101:1101 Copy Pending Global Copy 21 unknown Disabled True

25.5.8 Performing Global Copy Failback from A to B


Next we return the Global Copy pairs to the original configuration with the failbackpprc command. Figure 25-11 shows the configuration after this command is executed.

Figure 25-11 Site swap scenario after failbackpprc A to B (application still running at the remote site; A: Global Copy source, copy pending; B: Global Copy target, copy pending, FlashCopy source; C: FlashCopy target)

Example 25-42 shows the result of the failbackpprc command used in our example, and the volume state after this command is issued. You have to give this command to the DS HMC connected to the local DS8000 (DS8000#1).
Example 25-42 Global Copy Failback from A to B << DS8000 #1 >> dscli> failbackpprc -remotedev IBM.2107-75ABTV1 -type gcp 1000-1001:2000-2001 1100-1101:2100-2101 Date/Time: November 12, 2005 12:15:30 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781


CMUC00197I failbackpprc: Remote Mirror and Copy pair 1000:2000 successfully failed back. CMUC00197I failbackpprc: Remote Mirror and Copy pair 1001:2001 successfully failed back. CMUC00197I failbackpprc: Remote Mirror and Copy pair 1100:2100 successfully failed back. CMUC00197I failbackpprc: Remote Mirror and Copy pair 1101:2101 successfully failed back. dscli> dscli> lspprc 1000-1001 1100-1101 Date/Time: November 12, 2005 12:15:36 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ================================================================================================== 1000:2000 Copy Pending Global Copy 10 unknown Disabled True 1001:2001 Copy Pending Global Copy 10 unknown Disabled True 1100:2100 Copy Pending Global Copy 11 unknown Disabled True 1101:2101 Copy Pending Global Copy 11 unknown Disabled True << DS8000 #2 >> dscli> lspprc 2000-2001 2100-2101 Date/Time: November 12, 2005 12:15:43 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ========================================================================================================= 1000:2000 Target Copy Pending Global Copy 10 unknown Disabled Invalid 1001:2001 Target Copy Pending Global Copy 10 unknown Disabled Invalid 1100:2100 Target Copy Pending Global Copy 11 unknown Disabled Invalid 1101:2101 Target Copy Pending Global Copy 11 unknown Disabled Invalid

25.5.9 Starting Global Mirror


Figure 25-12 shows what is entailed in starting the Global Mirror session.

Figure 25-12 Start Global Mirror (Global Mirror master started on LSS10; A: Global Copy source, copy pending; B: Global Copy target, copy pending, FlashCopy source; C: FlashCopy target)


The last step before starting the application at the production site is to start the Global Mirror session again; see Figure 25-12. If you did not already create the FlashCopy relationships from the B to the C volumes, you have to do so before starting Global Mirror. To start the Global Mirror session, use the mkgmir command. Before starting Global Mirror, you can check the status of the Global Mirror session on each LSS with the lssession command. After starting Global Mirror, you can use the showgmir command to check its status. Example 25-43 shows the commands used in our example and the corresponding results. You run these commands on the DS HMC connected to the local DS8000 (DS8000#1).
Example 25-43 Start Global Mirror dscli> lssession 10-11 Date/Time: November 12, 2005 12:15:50 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 LSS ID Session Status Volume VolumeStatus PrimaryStatus SecondaryStatus FirstPassComplete AllowCas =========================================================================================================== 10 02 Normal 1000 Active Primary Copy Pending Secondary Simplex True Disable 10 02 Normal 1001 Active Primary Copy Pending Secondary Simplex True Disable 11 02 Normal 1100 Active Primary Copy Pending Secondary Simplex True Disable 11 02 Normal 1101 Active Primary Copy Pending Secondary Simplex True Disable dscli> dscli> mkgmir -dev IBM.2107-7520781 -lss IBM.2107-7520781/10 -session 02 Date/Time: November 12, 2005 12:16:42 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00162I mkgmir: Global Mirror for session 02 successfully started. dscli> dscli> showgmir -dev IBM.2107-7520781 IBM.2107-7520781/10 Date/Time: November 12, 2005 12:16:57 AM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID IBM.2107-7520781/10 Master Count 1 Master Session ID 0x02 Copy State Running Fatal Reason Not Fatal CG Interval (seconds) 0 XDC Interval(milliseconds) 50 CG Drain Time (seconds) 30 Current Time 11/12/2005 00:22:19 JST CG Time 11/12/2005 00:22:19 JST Successful CG Percentage 91 FlashCopy Sequence Number 0x4374B72B Master ID IBM.2107-7520781 Subordinate Count 0 Master/Subordinate Assoc -
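If you script this step, a short check such as the following sketch confirms that the session has returned to the Running state before you hand control back to the application; the connection profile name and the poll interval are assumptions.

#!/bin/sh
# Wait until showgmir reports Copy State Running for the restarted session
# (compare the showgmir output in Example 25-43).
DSCLI="dscli -cfg local.profile"

until $DSCLI showgmir -dev IBM.2107-7520781 IBM.2107-7520781/10 | grep 'Copy State' | grep -q 'Running'; do
    echo "Global Mirror session not yet running..."
    sleep 10
done
echo "Global Mirror session 02 is running again."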


25.5.10 Starting the application at the local site


Now we have an environment in which we can start the application at the original local site. Depending on your operating system, it might be necessary to rescan the Fibre Channel devices and mount the new source volumes (the A volumes) at the local site. Start all applications and check for consistency; see Figure 25-13. Depending on your path design, delete the paths from the recovery to the production LSSs.
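If your path design calls for removing the recovery-to-production paths, the DS CLI rmpprcpath command can be used for that. The following single line is a sketch only, issued to the DS HMC connected to the remote DS8000; verify its exact parameters against your DS CLI reference before using it.

rmpprcpath -remotedev IBM.2107-7520781 20:10 21:11    # remove the B-to-A paths created in 25.5.1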

Figure 25-13 Start the application at the local site (application I/O runs against the A volumes again; Global Mirror master on LSS10; A: Global Copy source, copy pending; B: Global Copy target, copy pending, FlashCopy source; C: FlashCopy target)

25.6 Practicing disaster recovery readiness


In this section we discuss how to practice your disaster recovery readiness without stopping the application at the production site. You can use the same procedure to perform a test or to make a regular backup copy at the remote site. The typical scenario for this activity is the following:
1. Query the Global Mirror environment to have a look at the situation.
2. Pause Global Mirror and check its completion.
3. Pause Global Copy pairs.
4. Perform Global Copy Failover from B to A.
5. Create consistent data on B volumes (reverse FlashCopy from B to C).
6. Wait for the FlashCopy background copy to complete.
7. Re-establish FlashCopy pairs B to C with original Global Mirror options.
8. Take FlashCopy from B to (newly-created) D.
9. Perform the disaster recovery testing using the D volume.
10.Perform Global Copy Failback from A to B.
11.Resume Global Mirror.
A condensed DS CLI command sketch of this sequence follows.
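The following sketch condenses the test sequence into the DS CLI commands that the next sections describe; the volume ranges and storage image IDs are those of this chapter's examples, and the comments note which DS HMC each command is issued to. The resumegmir command is shown with the same parameters as pausegmir; verify it against your DS CLI reference before relying on it.

# Local DS HMC (DS8000#1): pause Global Mirror and the Global Copy pairs
pausegmir -dev IBM.2107-7520781 -lss IBM.2107-7520781/10 -session 02
pausepprc -remotedev IBM.2107-75ABTV1 1000-1001:2000-2001 1100-1101:2100-2101
# Remote DS HMC (DS8000#2): fail over B, build consistency on B, re-create B to C, copy B to D
failoverpprc -remotedev IBM.2107-7520781 -type gcp 2000-2001:1000-1001 2100-2101:1100-1101
reverseflash -fast -tgtpprc 2000-2001:2200-2201 2100-2101:2300-2301
lsflash 2000-2001 2100-2101          # repeat until no FlashCopy relationships remain
mkflash -tgtinhibit -nocp -record 2000-2001:2200-2201 2100-2101:2300-2301
mkflash -nocp 2000-2001:2400-2401 2100-2101:2500-2501
# ... perform the disaster recovery test on the D volumes ...
# Local DS HMC: resynchronize A to B and resume Global Mirror
failbackpprc -remotedev IBM.2107-75ABTV1 -type gcp 1000-1001:2000-2001 1100-1101:2100-2101
resumegmir -dev IBM.2107-7520781 -lss IBM.2107-7520781/10 -session 02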


Many steps in this scenario are the same as those we discussed in 25.4, Recovery scenario after local site failure with the DS CLI on page 391, and 25.5, Returning to the local site on page 400. For steps that are the same, we simply provide pointers to the corresponding sections.

25.6.1 Querying the Global Mirror environment to look at the situation


Next we describe several commands that you can use to look at the situation.

Query the Global Copy status


The lspprcpath and lspprc commands; see 25.1.4, Creating Global Copy relationships: A to B volumes on page 369.

Query the FlashCopy status


The lsremoteflash (or lsflash) command; see 25.1.5, Creating FlashCopy relationships: B to C volumes on page 370.

Query the Global Mirror status


The lssession, showgmir, showgmir -metrics, and showgmiroos commands; see 25.1.6, Starting Global Mirror on page 371.

25.6.2 Pausing Global Mirror and checking its completion


Example 25-44 shows how to perform this task. For detailed discussion and considerations, see 25.3.1, Pausing and resuming Global Mirror Consistency Group formation on page 382. Give this command to the DS HMC connected to the local disk subsystem.
Example 25-44 Pause Global Mirror dscli> pausegmir -dev IBM.2107-7520781 -lss IBM.2107-7520781/10 -session 02 Date/Time: November 14, 2005 6:44:30 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00163I pausegmir: Global Mirror for session 02 successfully paused. dscli> dscli> showgmir -dev IBM.2107-7520781 IBM.2107-7520781/10 Date/Time: November 14, 2005 6:44:37 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID IBM.2107-7520781/10 Master Count 1 Master Session ID 0x02 Copy State Paused Fatal Reason Not Fatal CG Interval (seconds) 0 XDC Interval(milliseconds) 50 CG Drain Time (seconds) 30 Current Time 11/14/2005 18:50:04 JST CG Time 11/14/2005 18:49:59 JST Successful CG Percentage 100 FlashCopy Sequence Number 0x43785DC7 Master ID IBM.2107-7520781 Subordinate Count 0 Master/Subordinate Assoc dscli>


25.6.3 Pausing Global Copy pairs


The pausepprc command suspends the Global Copy pairs; see Example 25-45. Give this command to the DS HMC connected to the local disk subsystem.
Example 25-45 Pause Global Copy pairs << DS8000 #1 >> dscli> pausepprc -remotedev IBM.2107-75ABTV1 1000-1001:2000-2001 1100-1101:2100-2101 Date/Time: November 14, 2005 6:47:41 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 CMUC00157I pausepprc: Remote Mirror and Copy volume pair 1000:2000 relationship successfully paused. CMUC00157I pausepprc: Remote Mirror and Copy volume pair 1001:2001 relationship successfully paused. CMUC00157I pausepprc: Remote Mirror and Copy volume pair 1100:2100 relationship successfully paused. CMUC00157I pausepprc: Remote Mirror and Copy volume pair 1101:2101 relationship successfully paused. dscli> dscli> lspprc 1000-1001 1100-1101 Date/Time: November 14, 2005 6:47:47 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ==================================================================================================== 1000:2000 Suspended Host Source Global Copy 10 unknown Disabled True 1001:2001 Suspended Host Source Global Copy 10 unknown Disabled True 1100:2100 Suspended Host Source Global Copy 11 unknown Disabled True 1101:2101 Suspended Host Source Global Copy 11 unknown Disabled True dscli> << DS8000 #2 >> dscli> lspprc 2000-2001 2100-2101 Date/Time: November 14, 2005 6:55:44 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Stat =========================================================================================================== 1000:2000 Target Suspended Update Target Global Copy 10 unknown Disabled Invalid 1001:2001 Target Suspended Update Target Global Copy 10 unknown Disabled Invalid 1100:2100 Target Suspended Update Target Global Copy 11 unknown Disabled Invalid 1101:2101 Target Suspended Update Target Global Copy 11 unknown Disabled Invalid

25.6.4 Performing Global Copy Failover from B to A


Example 25-46 shows how to perform this task. For detailed discussion and considerations see 25.4.2, Performing Global Copy Failover from B to A on page 393. Give this command to the DS HMC connected to the remote disk subsystem.
Example 25-46 Perform Global Copy Failover from B to A << DS8000 #2 >> dscli> lspprc 2000-2001 2100-2101 Date/Time: November 14, 2005 7:04:45 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Stat =========================================================================================================== 1000:2000 Target Suspended Update Target Global Copy 10 unknown Disabled Invalid 1001:2001 Target Suspended Update Target Global Copy 10 unknown Disabled Invalid 1100:2100 Target Suspended Update Target Global Copy 11 unknown Disabled Invalid 1101:2101 Target Suspended Update Target Global Copy 11 unknown Disabled Invalid dscli> dscli> failoverpprc -remotedev IBM.2107-7520781 -type gcp 2000-2001:1000-1001 2100-2101:1100-1101 Date/Time: November 14, 2005 7:04:50 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 CMUC00196I failoverpprc: Remote Mirror and Copy pair 2000:1000 successfully reversed. CMUC00196I failoverpprc: Remote Mirror and Copy pair 2001:1001 successfully reversed. CMUC00196I failoverpprc: Remote Mirror and Copy pair 2100:1100 successfully reversed. CMUC00196I failoverpprc: Remote Mirror and Copy pair 2101:1101 successfully reversed. dscli>


dscli> lspprc 2000-2001 2100-2101 Date/Time: November 14, 2005 7:04:53 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ==================================================================================================== 2000:1000 Suspended Host Source Global Copy 20 unknown Disabled True 2001:1001 Suspended Host Source Global Copy 20 unknown Disabled True 2100:1100 Suspended Host Source Global Copy 21 unknown Disabled True 2101:1101 Suspended Host Source Global Copy 21 unknown Disabled True dscli>

25.6.5 Creating consistent data on B volumes


Example 25-47 shows how to perform this task. For detailed discussion and considerations, see 25.4.4, Reversing FlashCopy from B to C on page 396. Give the reverseflash command to the DS HMC connected to the remote disk subsystem.
Example 25-47 Reverse FlashCopy B to C dscli> reverseflash -fast -tgtpprc 2000-2001:2200-2201 2100-2101:2300-2301 Date/Time: November 14, 2005 7:11:26 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 CMUC00169I reverseflash: FlashCopy volume pair 2000:2200 successfully reversed. CMUC00169I reverseflash: FlashCopy volume pair 2001:2201 successfully reversed. CMUC00169I reverseflash: FlashCopy volume pair 2100:2300 successfully reversed. CMUC00169I reverseflash: FlashCopy volume pair 2101:2301 successfully reversed.

25.6.6 Waiting for the FlashCopy background copy to complete


After the FlashCopy background copy completes, the FlashCopy relationship ends. You can check this with the lsflash command. See Example 25-48.
Example 25-48 Check the FlashCopy background copy completion dscli> lsflash 2000-2001 2100-2101 Date/Time: November 14, 2005 7:11:30 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 CMUC00234I lsflash: No Flash Copy found.

25.6.7 Re-establishing the FlashCopy relationships


In order to resume the Global Mirror environment quickly, we re-establish the FlashCopy relationships from B to C with the original options for the Global Mirror environment; see Example 25-49. For detailed discussion and considerations see 25.4.5, Re-establishing the FlashCopy relationship from B to C on page 399. Give this command to the DS HMC connected to the remote disk subsystem.
Example 25-49 Reestablish the FlashCopy relationships dscli> mkflash -tgtinhibit -nocp -record 2000-2001:2200-2201 2100-2101:2300-2301 Date/Time: November 14, 2005 7:19:48 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 CMUC00137I mkflash: FlashCopy pair 2000:2200 successfully created. CMUC00137I mkflash: FlashCopy pair 2001:2201 successfully created. CMUC00137I mkflash: FlashCopy pair 2100:2300 successfully created. CMUC00137I mkflash: FlashCopy pair 2101:2301 successfully created. dscli> dscli> lsflash 2000-2001 2100-2101 Date/Time: November 14, 2005 7:19:51 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1


ID SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy ==================================================================================================================================== 2000:2200 20 0 300 Disabled Enabled Enabled Disabled Enabled Disabled Disabled 2001:2201 20 0 300 Disabled Enabled Enabled Disabled Enabled Disabled Disabled 2100:2300 21 0 300 Disabled Enabled Enabled Disabled Enabled Disabled Disabled 2101:2301 21 0 300 Disabled Enabled Enabled Disabled Enabled Disabled Disabled

25.6.8 Taking FlashCopy from B to D


In the previous step we created a consistent copy of the data on the B volumes. Now we make another copy of the B volumes for the disaster recovery testing. We call these FlashCopy targets the D volumes. In our example we have four D volumes, which are 2400, 2401, 2500, and 2501; see Figure 25-14.

Figure 25-14 After mkflash B to D (A: Global Copy source, copy pending; B: Global Copy source, suspended, FlashCopy source; C: FlashCopy target; D volumes 2400, 2401, 2500, and 2501: FlashCopy targets of the B volumes)

Example 25-50 shows the DS CLI log for this operation. We use the -nocp option for the FlashCopy. You can also use the copy option.
Example 25-50 Take FlashCopy B to D dscli> lsflash 2000-2001 2100-2101 Date/Time: November 14, 2005 7:30:46 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
ID SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy ==================================================================================================================================== 2000:2200 20 0 300 Disabled Enabled Enabled Disabled Enabled Disabled Disabled 2001:2201 20 0 300 Disabled Enabled Enabled Disabled Enabled Disabled Disabled 2100:2300 21 0 300 Disabled Enabled Enabled Disabled Enabled Disabled Disabled 2101:2301 21 0 300 Disabled Enabled Enabled Disabled Enabled Disabled Disabled

dscli> dscli> mkflash -nocp 2000-2001:2400-2401 2100-2101:2500-2501 Date/Time: November 14, 2005 7:30:52 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1 CMUC00137I mkflash: FlashCopy pair 2000:2400 successfully created. CMUC00137I mkflash: FlashCopy pair 2001:2401 successfully created. CMUC00137I mkflash: FlashCopy pair 2100:2500 successfully created.


CMUC00137I mkflash: FlashCopy pair 2101:2501 successfully created. dscli> dscli> lsflash 2000-2001 2100-2101 Date/Time: November 14, 2005 7:30:57 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
ID SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy ==================================================================================================================================== 2000:2200 20 0 300 Disabled Enabled Enabled Disabled Enabled Disabled Disabled 2000:2400 20 0 300 Disabled Disabled Disabled Disabled Enabled Enabled Disabled 2001:2201 20 0 300 Disabled Enabled Enabled Disabled Enabled Disabled Disabled 2001:2401 20 0 300 Disabled Disabled Disabled Disabled Enabled Enabled Disabled 2100:2300 21 0 300 Disabled Enabled Enabled Disabled Enabled Disabled Disabled 2100:2500 21 0 300 Disabled Disabled Disabled Disabled Enabled Enabled Disabled 2101:2301 21 0 300 Disabled Enabled Enabled Disabled Enabled Disabled Disabled 2101:2501 21 0 300 Disabled Disabled Disabled Disabled Enabled Enabled Disabled

dscli>

25.6.9 Performing the disaster recovery testing using the D volume


Depending on your operating system and system environment, it might be necessary to rescan the Fibre Channel devices and mount the D volumes at the remote site. After this, you can perform your disaster recovery testing using the D volumes. You can also use the D volumes to take a backup, for example to tape.

25.6.10 Performing Global Copy Failback from A to B


In order to return to the normal Global Mirror environment, we have to resume the Global Copy pairs we had suspended in a previous step. Because the application at the production site keeps running and we must not lose these updates, we have to re-synchronize the A and the B volumes with the As being the source and the Bs being the target. Give this command to the DS HMC connected to the local DS8000 (DS8000#1); see Figure 25-15.

Figure 25-15 Perform Global Copy Failback A to B - test scenario (failbackpprc A to B issued at DS8000#1; volume states shown: A - GC Source, Copy Pending; B - GC Target, Copy Pending, and FC Source; C - FC Target; D - FC Target; DS8000#1 -dev IBM.2107-7520781, DS8000#2 -dev IBM.2107-75ABTV1)


For the failback operation, we use the failbackpprc command; see Example 25-51.

Important: Do not specify the B volumes as the source with the failbackpprc command (that is, do not issue it to DS8000#2); otherwise, data on the B volumes is copied to the A volumes and, unless the A volumes are protected by a reserve, data on the A volumes might be overwritten.
Example 25-51 Perform Global Copy Failback from A to B - test scenario
<< Before failbackpprc >>
<< DS8000 #1 >>
dscli> lspprc 1000-1001 1100-1101
Date/Time: November 14, 2005 6:51:47 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
ID        State     Reason      Type        SourceLSS Timeout (secs) Critical Mode First Pass Status
====================================================================================================
1000:2000 Suspended Host Source Global Copy 10        unknown        Disabled      True
1001:2001 Suspended Host Source Global Copy 10        unknown        Disabled      True
1100:2100 Suspended Host Source Global Copy 11        unknown        Disabled      True
1101:2101 Suspended Host Source Global Copy 11        unknown        Disabled      True
dscli>
<< DS8000 #2 >>
dscli> lspprc 2000-2001 2100-2101
Date/Time: November 14, 2005 8:56:22 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
ID        State     Reason      Type        SourceLSS Timeout (secs) Critical Mode First Pass Status
====================================================================================================
2000:1000 Suspended Host Source Global Copy 20        unknown        Disabled      True
2001:1001 Suspended Host Source Global Copy 20        unknown        Disabled      True
2100:1100 Suspended Host Source Global Copy 21        unknown        Disabled      True
2101:1101 Suspended Host Source Global Copy 21        unknown        Disabled      True

<< DS8000 #1 >>
dscli> failbackpprc -remotedev IBM.2107-75ABTV1 -type gcp 1000-1001:2000-2001 1100-1101:2100-2101
Date/Time: November 14, 2005 8:57:09 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
CMUC00197I failbackpprc: Remote Mirror and Copy pair 1000:2000 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 1001:2001 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 1100:2100 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 1101:2101 successfully failed back.
dscli>
dscli> lspprc 1000-1001 1100-1101
Date/Time: November 14, 2005 8:57:15 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
ID        State        Reason Type        SourceLSS Timeout (secs) Critical Mode First Pass Status
==================================================================================================
1000:2000 Copy Pending -      Global Copy 10        unknown        Disabled      True
1001:2001 Copy Pending -      Global Copy 10        unknown        Disabled      True
1100:2100 Copy Pending -      Global Copy 11        unknown        Disabled      True
1101:2101 Copy Pending -      Global Copy 11        unknown        Disabled      True
dscli>


25.6.11 Waiting for the Global Copy first pass to complete


The Global Copy first pass does not need to be complete before you resume Global Mirror; however, Consistency Group formation does not start until the first pass has completed. You can check the status with the lspprc command; see Example 25-52. The First Pass Status field indicates the status of the first pass, where True means that the first pass is complete. You can also use the lssession command; its FirstPassComplete output field shows the same status.
Example 25-52 Check the Global Copy first pass completion
dscli> lspprc 1000-1001 1100-1101
Date/Time: November 14, 2005 9:08:05 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
ID        State        Reason Type        SourceLSS Timeout (secs) Critical Mode First Pass Status
==================================================================================================
1000:2000 Copy Pending -      Global Copy 10        unknown        Disabled      True
1001:2001 Copy Pending -      Global Copy 10        unknown        Disabled      True
1100:2100 Copy Pending -      Global Copy 11        unknown        Disabled      True
1101:2101 Copy Pending -      Global Copy 11        unknown        Disabled      True
dscli>
dscli> lspprc -l 1000-1001 1100-1101
Date/Time: November 14, 2005 9:08:08 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
ID        State        Reason Type        Out Of Sync Tracks Tgt Read Src Cascade Tgt Cascade Date Suspended SourceLSS Timeout (secs) Critical Mode First Pass
==============================================================================================================================================================
1000:2000 Copy Pending -      Global Copy 0                  Disabled Disabled    invalid     -              10        unknown        Disabled      True
1001:2001 Copy Pending -      Global Copy 0                  Disabled Disabled    invalid     -              10        unknown        Disabled      True
1100:2100 Copy Pending -      Global Copy 0                  Disabled Disabled    invalid     -              11        unknown        Disabled      True
1101:2101 Copy Pending -      Global Copy 0                  Disabled Disabled    invalid     -              11        unknown        Disabled      True
dscli>
dscli> lssession 10-11
Date/Time: November 14, 2005 9:19:30 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
LSS ID Session Status Volume VolumeStatus PrimaryStatus        SecondaryStatus   FirstPassComplete AllowCascading
=================================================================================================================
10     02      Normal 1000   Active       Primary Copy Pending Secondary Simplex True              Disable
10     02      Normal 1001   Active       Primary Copy Pending Secondary Simplex True              Disable
11     02      Normal 1100   Active       Primary Copy Pending Secondary Simplex True              Disable
11     02      Normal 1101   Active       Primary Copy Pending Secondary Simplex True              Disable

25.6.12 Resuming Global Mirror


Now you can resume Global Mirror with the resumegmir command and verify the result with the showgmir command; see Example 25-53. For a detailed discussion and considerations, see 25.3.1, Pausing and resuming Global Mirror Consistency Group formation on page 382. Issue this command to the DS HMC connected to the local DS8000 (DS8000#1). In our example, the first showgmir output shows Running in the Copy State field, but no Consistency Group has been formed yet, which you can tell by comparing the Current Time and the CG Time fields. The second showgmir output shows that at least one Consistency Group has been formed since Global Mirror was resumed.
Example 25-53 Resume Global Mirror
dscli> resumegmir -dev IBM.2107-7520781 -lss IBM.2107-7520781/10 -session 02
Date/Time: November 14, 2005 9:25:45 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
CMUC00164I resumegmir: Global Mirror for session 02 successfully resumed.
dscli>
dscli> showgmir -dev IBM.2107-7520781 IBM.2107-7520781/10
Date/Time: November 14, 2005 9:25:48 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
ID                         IBM.2107-7520781/10
Master Count               1
Master Session ID          0x02
Copy State                 Running
Fatal Reason               Not Fatal
CG Interval (seconds)      0
XDC Interval(milliseconds) 50
CG Drain Time (seconds)    30
Current Time               11/14/2005 21:31:15 JST
CG Time                    11/14/2005 18:49:59 JST
Successful CG Percentage   99
FlashCopy Sequence Number  0x43785DC7
Master ID                  IBM.2107-7520781
Subordinate Count          0
Master/Subordinate Assoc   -
dscli> showgmir -dev IBM.2107-7520781 IBM.2107-7520781/10
Date/Time: November 14, 2005 9:26:03 PM JST IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7520781
ID                         IBM.2107-7520781/10
Master Count               1
Master Session ID          0x02
Copy State                 Running
Fatal Reason               Not Fatal
CG Interval (seconds)      0
XDC Interval(milliseconds) 50
CG Drain Time (seconds)    30
Current Time               11/14/2005 21:31:30 JST
CG Time                    11/14/2005 21:31:30 JST
Successful CG Percentage   99
FlashCopy Sequence Number  0x437883A2
Master ID                  IBM.2107-7520781
Subordinate Count          0
Master/Subordinate Assoc   -

25.7 DS Storage Manager GUI: Examples


In the following sections we explain how to create and manage a Global Mirror session using the DS Storage Manager (DS SM) graphical user interface (GUI). In these examples we use two DS8000s, serial numbers 7503461 and 75ABTV1, in a configuration quite similar to the one used in the previous examples. Note that in this case 7503461 is at the local production site and 75ABTV1 is at the remote backup site. On each machine we use volumes in LSS 47. Figure 25-16 shows the DS8000 setup used in this example.

Figure 25-16 DS8000 configuration in the GUI example for Global Mirror (A volumes 4710-4711 in LSS 47 on DS8000#1 are the Global Mirror master and the Global Copy source; B volumes 4730-4731 are the Global Copy targets and FlashCopy sources, and C volumes 4750-4751 are the FlashCopy targets, all in LSS 47 on DS8000#2; the Global Copy PPRC path runs over the physical Fibre Channel connection between the two machines)


In this Global Mirror example, we use Space Efficient volumes as FlashCopy targets (C volumes); this configuration step is described in 25.8.3, Creating FlashCopy relationships on page 427. You can check the IBM Storage support Web site for the availability of Copy Services features. We had Ethernet connectivity between the sites to connect to the storage complexes, and we performed the procedure to add the remote storage complex to the local storage complex before starting.

25.8 Setting up a Global Mirror environment using the DS GUI


To set up a Global Mirror environment, we follow a similar procedure as with the DS CLI.

25.8.1 Defining paths


With the DS Storage Manager, creating new paths between two LSSs on two different storage disk subsystems is a six-step wizard. You need to repeat this wizard for each data path between a source LSS on the storage image at site 1 and a target LSS on the storage image at site 2, and for each control path between the master storage image LSS and the subordinate storage image LSSs. To launch the wizard, go first to the Paths panel under the Copy Services menu of the DS Storage Manager GUI; see Figure 25-17. Select from the pull-down lists the storage complex, the storage unit, the storage image, and finally the LSS that contains the source volumes of the Global Copy pairs you want to create. In the Select Action pull-down list, choose Create to proceed with the first step of the wizard.

Figure 25-17 Global Copy paths creation - Launch the creation process

Now the creation wizard is displaying the Select source LSS panel; see Figure 25-18. Here you select from the pull-down list, the LSS that contains the source volumes of the Global Copy pairs you are going to create.


Figure 25-18 Global Copy paths creation - step 1: select the source LSS

Click Next to proceed with the second step of this wizard. When the creation wizard displays the Select target LSS panel, select from the pull-down lists the storage complex, then the storage unit, then the storage image, and finally the LSS that contains the target volumes; see Figure 25-19.

Figure 25-19 Global Copy paths creation - step 2: select the target LSS

Click Next to proceed with the third step of this wizard. When the creation wizard displays the Select source I/O ports panel, select (using the check boxes) at least one I/O port to use for Global Copy replication; two are better for redundancy. See Figure 25-20.


In the Location column, four digits identify the location of a port:
The first digit (R) identifies the frame.
The second digit (E) identifies the I/O enclosure.
The third digit (C) identifies the adapter.
The fourth digit (P) identifies the adapter's port.

Figure 25-20 Global Copy paths creation - step 3: select the source I/O ports

Click Next to proceed with the fourth step of this wizard. When the creation wizard displays the Select target I/O ports panel, select from the pull-down list the target I/O port for each I/O port selected during the third step; see Figure 25-21. Refer to step 3 of this wizard for an explanation of the RECP digits. In this example no choice is presented, because there is only one possible path for our port.

Figure 25-21 Global Copy paths creation - step 4: select the target I/O ports


Click Next to proceed with the fifth step of this wizard. When the creation wizard displays the Select path options panel, you can select using the check box the option Define as Consistency Group; see Figure 25-22. This option is not mandatory since the Global Mirror session will handle the consistency of the data across the set of volumes.

Figure 25-22 Global Copy paths creation - step 5: select the paths options

Click Next to proceed with the sixth and last step of this wizard. Figure 25-23 shows the Verification panel. Here you check all the components of your paths configuration and, if necessary, click Back to correct any of them or click Finish to validate the configuration and end the wizard.

Figure 25-23 Global Copy paths creation - step 6: verification
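The same path definitions can also be scripted with the DS CLI instead of the GUI. The following is only a sketch for the configuration of Figure 25-16; the remote WWNN and the I/O port pair are placeholders that you would replace with the values reported by lsavailpprcport for your own machines:

dscli> lsavailpprcport -dev IBM.2107-7503461 -remotedev IBM.2107-75ABTV1 -remotewwnn <remote_wwnn> 47:47
dscli> mkpprcpath -dev IBM.2107-7503461 -remotedev IBM.2107-75ABTV1 -remotewwnn <remote_wwnn> -srclss 47 -tgtlss 47 <local_port>:<remote_port>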


25.8.2 Creating Global Copy pairs


With the DS Storage Manager, creating new Global Copy relationships for one or several volume pairs is a five-step process.

Note: If the Global Copy pairs are spread over several LSSs, do not forget to select all of them during this wizard; otherwise you have to run the wizard again for each LSS. If the pairs are spread over several storage units, run the wizard again on each of them.

To launch this wizard, go first to the Metro Mirror panel under the Copy Services menu of the DS Storage Manager GUI; see Figure 25-24. In the Select Action pull-down list, choose Create.

Figure 25-24 Global Copy creation - Launch the creation process


When the creation wizard displays the Volume Pairing Method, select the radio button Manual volume pair assignment; see Figure 25-25.

Figure 25-25 Global Copy creation - step 1: choose volume pairing method

Click Next to proceed with the second step of this wizard. When the creation wizard displays the Select source volumes panel, select from the pull-down lists the storage complex, the storage unit, the storage image, the resource type, and if necessary its appropriate parameter to display the list of volumes. See Figure 25-26.


Figure 25-26 Global Copy creation - step 2: select the source volumes

If you have chosen the resource type LSS, select from the pull-down list the LSS number that contains the source volumes you are going to use for Global Mirror, and then check the boxes of the selected volumes.

Note: At this step, if the paths have not yet been created, you can click Create Paths to launch the wizard described in 25.8.1, Defining paths on page 418.

Click Next to proceed with the third step of this wizard. When the creation wizard displays the Select target volumes panel, notice that only one source volume is indicated at the top of the panel; see Figure 25-27 on page 425. This means that this third step is repeated for each volume selected during the second step.


Figure 25-27 Global Copy creation - Step 3: select the target volume 1 (first view)

Figure 25-28 Global Copy creation - Step 3: select the target volume 1 (second view)

From the pull-down lists, select the storage complex, then the storage unit, then the storage image, then the resource type, and if required, its appropriate parameter to display the list of volumes. If necessary, scroll down to the required LSS. Then, on the required LSS, check the box for the selected target volume.


Click Next to proceed with the selection of the second target volume; see Figure 25-29.

Figure 25-29 Global Copy creation - Step 3: select the target volume 2

When the creation wizard once again displays the Select target volumes panel, notice that only the source volume indicated at the top of the panel is different. Once again, select from the pull-down lists the storage complex, the storage unit, the storage image, the resource type, and, if necessary, its appropriate parameter to display the list of volumes. Then, on the required LSS, check the box for the selected target volume. If we had selected more source volumes, we would proceed to the next target volume selection panel in the same way.

Because this is the second and last volume in our selection, click Next to proceed with the fourth step of this wizard. When the creation wizard displays the Select copy options panel, select the radio button Global Copy to define the type of replication, then check the box for Permit read access from target, and, if this is the first synchronization between the source and target volumes, check the box for Perform initial copy. See Figure 25-30.


Figure 25-30 Global Copy creation - Step 4: select the copy options

Click Next to proceed with the fifth and last step of this wizard. In the Verification panel, verify all the components of your Global Copy session configuration and, if necessary, click Back to correct any of them or click Finish to validate; see Figure 25-31.

Figure 25-31 Global Copy creation - Step 5: verification
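For comparison, the same Global Copy pairs can be created with a single DS CLI command. A sketch for the volumes of this GUI example (A volumes 4710-4711 to B volumes 4730-4731), assuming an initial full copy is wanted, could be:

dscli> mkpprc -dev IBM.2107-7503461 -remotedev IBM.2107-75ABTV1 -type gcp -mode full 4710-4711:4730-4731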

25.8.3 Creating FlashCopy relationships


With the DS Storage Manager, creating new FlashCopy relationships for one or several pairs of volumes is a five-step process.

Note: If the FlashCopy pairs are spread over several LSSs, do not forget to select all of them during this wizard; otherwise you have to run the wizard again for each LSS. If the pairs are spread over several storage images, run the wizard again on each of them.


Because they are for a Global Mirror environment, these FlashCopy pairs are created on the remote machine using the Global Copy targets as the FlashCopy source. This means that we have to select the remote storage complex and the remote storage unit. To launch this wizard, you first need to go to the FlashCopy panel under the Copy Services menu of the DS Storage Manager GUI. Select from the pull-down lists the storage complex, then the storage unit, then the storage image, and finally the LSS that contains the Global Copy target volumes. In the Select Action pull-down list, choose Create to proceed with the first step of the wizard; see Figure 25-32.

Figure 25-32 FlashCopy creation - Launch the creation process

When the creation wizard displays the Define relationship type panel, select the radio button A single source with a single target, because we have to activate the change recording option in the fourth step of this wizard; see Figure 25-33.

Figure 25-33 FlashCopy creation - Step 1: Select the relationship type

Click Next to proceed with the second step of this wizard.


When the creation wizard displays the Select source volumes panel, select from the pull-down lists the storage complex, then the storage unit, then the storage image, then the resource type, and if it is necessary its appropriate parameter to display the list of volumes. If you have chosen the resource type LSS, select from the pull-down lists the LSS number that contains the source volumes you want to use for the Global Mirror environment. Then check the boxes of the selected source volumes. See Figure 25-34.

Figure 25-34 FlashCopy creation - Step 2: select the source volumes

Click Next to proceed with the third step of this wizard. When the creation wizard displays the Select target volumes panel, select from the pull-down lists the resource type and, if it is necessary, its appropriate parameter to display the list of volumes. If you have chosen the resource type LSS, select from the pull-down lists the LSS number that contains the target volumes you want to use. Then check the boxes of the selected target volumes. See Figure 25-35.

Figure 25-35 FlashCopy creation - Step 3: select the target volumes


Click Next to proceed with the fourth step of this wizard. When the creation wizard displays the Select common options panel, check the box for Enable change recording. This automatically checks the box for Make relationship(s) persistent. You can leave the *Sequence number field at its default value, because Global Mirror takes care of it automatically. See Figure 25-36.

Note: Do not check the box for Initiate background copy. You might have to clear it.

Figure 25-36 FlashCopy creation - Step 4: select the common option

Click Next to proceed with the fifth and last step of this wizard. In the Verification panel, verify all the components of the FlashCopy configurations, and if necessary, click Back to correct any of them or click Finish to validate. See Figure 25-37.


Figure 25-37 FlashCopy creation - Step 5: verification
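The options chosen in this wizard correspond broadly to the mkflash parameters used in the DS CLI examples of this book: change recording, a persistent relationship, no background copy, and target write inhibit. A sketch for the B-to-C pairs of this GUI example could be the following; if the C volumes are Space Efficient, check the mkflash help at your DS CLI code level for the additional Space Efficient target parameter:

dscli> mkflash -dev IBM.2107-75ABTV1 -record -persist -nocp -tgtinhibit 4730-4731:4750-4751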

25.8.4 Creating the Global Mirror session


With the DS Storage Manager, to create a Global Mirror session you go through a three-step process. To launch this wizard you need to go first to the Global Mirror panel under the Copy Services menu of DS Storage Manager GUI; see Figure 25-38. Select from the pull-down lists the storage complex, then the storage unit, and then the storage image that contains the source volumes that will be part of the Global Mirror. Then from the Select Action pull-down list, choose Create.

Figure 25-38 Global Mirror creation - Launch the creation process


The creation wizard displays the Select volumes panel; see Figure 25-39.

Figure 25-39 Global Mirror creation - Step 1: select volumes (first view)

Figure 25-40 Global Mirror creation - Step 1: select volumes (second view)

When the creation wizard displays the Select volumes panel, click the required storage unit, if necessary scroll down to the required LSS (Figure 25-40), then select the required LSS, and check the boxes for the Global Copy source volumes you want to be part of the Global Mirror session.

432

IBM System Storage DS8000: Copy Services in Open Environments

Note: At this step, if the Global Copy pairs and FlashCopy pairs are not yet created, you can click Create Metro Mirror to launch the wizard described in 25.8.2, Creating Global Copy pairs on page 422, and you can click Create FlashCopy to launch the wizard described in 25.8.3, Creating FlashCopy relationships on page 427.

Click Next to proceed with the second step of this wizard (Figure 25-41).

Figure 25-41 Global Mirror creation - Step 2: define properties

When the creation wizard displays the Define properties panel, the session number field should be filled with the appropriate session number. We can click Get Available Session Numbers if we have forgotten the one we want to use. See Figure 25-41. There are also three important fields that can be set at this moment using this panel; these are the Global Mirror session tuning parameters. For a detailed understanding of these options, see 21.4.2, Consistency Group parameters on page 321. Following is a brief discussion of each of them:

Consistency Group interval time (seconds): specifies how long to wait between the formation of Consistency Groups. If this number is not specified or is set to zero, Consistency Groups are formed continuously.

Maximum coordination interval (milliseconds): indicates the maximum time that Global Mirror can queue I/Os in the source disk subsystem to start forming a Consistency Group.

Maximum time writes inhibited to remote site (seconds): specifies the maximum amount of time allowed for the consistent set of data to drain to the remote site before failing the current Consistency Group.

Click Next to proceed with the third and last step of this wizard.


In the Verification panel, check all the components of your Global Mirror session configuration and, if necessary, click Back to correct any of them or click Finish to validate; see Figure 25-42.

Figure 25-42 Global Mirror creation - Step 3 - Verification

To view the status of the Global Mirror session, go to the Global Mirror panel under the Copy Services menu of the DS Storage Manager GUI. Select from the pull-down lists the storage complex, the storage unit, and the storage image that is the master Global Mirror session manager, and wait until the screen refreshes. If necessary, click Refresh to refresh the panel. See Figure 25-43.

Figure 25-43 Global Mirror - Visualize the session status
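The GUI steps above correspond to the DS CLI mksession and mkgmir commands. A sketch for this example, assuming that session number 02 is chosen and that the A volumes 4710-4711 in LSS 47 of the master DS8000 are added to the session, could be:

dscli> mksession -dev IBM.2107-7503461 -lss 47 -volume 4710-4711 02
dscli> mkgmir -dev IBM.2107-7503461 -lss 47 -session 02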


25.9 Managing the Global Mirror environment with the DS GUI


In this section we discuss and give examples of how to perform common Global Mirror control tasks using the DS GUI. The following management activities are presented:
Query a Global Mirror environment.
Pause a Global Mirror session.
Resume a Global Mirror session.
Add or remove volumes from an LSS in the Global Mirror session.
Many of the tasks described in this section were already covered in previous sections as part of procedures such as the site swap procedure or the disaster recovery test procedure.

25.9.1 Viewing settings and error information of the Global Mirror session
To see session information, first go to the Global Mirror panel under the Copy Services menu of the DS Storage Manager GUI. Then check the box for the Global Mirror session whose properties you want to display, and in the Select Action pull-down list, choose Properties; see Figure 25-44.

Figure 25-44 View Global Mirror sessions properties: Launch the viewing process


The Global Mirror session properties: Real-time panel will be displayed. In the General tab, you can review the settings for this session; see Figure 25-45.

Figure 25-45 View Global Mirror session settings: General tab

You can then click Failures to view the Failures tab; see Figure 25-46. In the Global Mirror session properties: Real-time panel, on the Failures tab, you can request selected information using the radio buttons Most recent failure, Previous failure, and First failure. When you are done reviewing the information, click OK to finish and go back to the main Global Mirror panel.

Figure 25-46 View Global Mirror session errors information - Failures tab
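The same settings and failure information can also be queried with the DS CLI showgmir command, as in Example 25-53. A sketch for this GUI example, assuming LSS 47 on the master DS8000, could be:

dscli> showgmir -dev IBM.2107-7503461 IBM.2107-7503461/47

The Copy State, Fatal Reason, Current Time, and CG Time fields correspond to the state and failure information shown in these panels.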


25.9.2 Viewing information of volumes in the Global Mirror session


To launch this viewing action, you need to first go to the Global Mirror panel under the Copy Services menu of the DS Storage Manager GUI. Check the box for the Global Mirror session for which you want to display its associated volumes information, and in the Select Action pull-down list, choose View session volumes to proceed. The Global Mirror session volumes: Real-time panel will be displayed and it will show information about the volumes associated with the Global Mirror session; see Figure 25-47. You can either download or print this information table. Then click OK to finish and go back to the main Global Mirror panel.

Figure 25-47 Global Mirror session volumes information
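From the DS CLI, similar volume-level information is returned by the lssession command (compare Example 25-52). A sketch for this GUI example could be:

dscli> lssession -dev IBM.2107-7503461 47

The output lists each volume in the session together with its VolumeStatus, PrimaryStatus, SecondaryStatus, and FirstPassComplete values.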

25.9.3 Pausing a Global Mirror session


To pause a Global Mirror session, first go to the Global Mirror panel under the Copy Services menu of the DS Storage Manager GUI and select the Global Mirror session you want to work with. Then, in the Select Action pull-down list, choose Pause. When the Global Mirror: Real-time panel displays the warning message shown in Figure 25-48, either click Cancel to return to the main Global Mirror panel without pausing the Global Mirror session, or click OK to pause the Global Mirror session and return to the main Global Mirror panel.

Figure 25-48 Pause Global Mirror: Confirm the pause of the Global Mirror session


When the main Global Mirror panel is displayed, note that the state of the Global Mirror session is now Paused; see Figure 25-49.

Figure 25-49 Pause Global Mirror: View session status paused
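For reference, the DS CLI equivalent of this GUI action is the pausegmir command, discussed in 25.3.1, Pausing and resuming Global Mirror Consistency Group formation on page 382. A sketch for this GUI example, assuming session number 02, could be:

dscli> pausegmir -dev IBM.2107-7503461 -lss 47 -session 02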

25.9.4 Resuming a Global Mirror session


To resume a Global Mirror session first go to the Global Mirror panel under the Copy Services menu of the DS Storage Manager GUI. Then select the Global Mirror session that you are going to resume. Then in the Select Action pull-down list choose Resume to proceed. When the Global Mirror Real-time panel displays the warning message (see Figure 25-50), either click Cancel to return to the main Global Mirror panel without resuming the Global Mirror session or click OK to resume the Global Mirror session and to return to the main Global Mirror panel. When the main Global Mirror panel is displayed, the state of the Global Mirror session will now show Running.

Figure 25-50 Resume Global Mirror - Confirm the resume of the Global Mirror session


25.9.5 Modifying a Global Mirror session


When using the DS Storage Manager to modify an existing Global Mirror session, either to add or remove one or several sets of volumes or to change the Global Mirror settings, it is necessary to go through a three-step process.

To launch the wizard, first go to the Global Mirror panel under the Copy Services menu of the DS Storage Manager GUI. Then select the Global Mirror session you want to work with and, in the Select Action pull-down list, choose Modify to proceed with the first step of the wizard; see Figure 25-44 on page 435.

When the modification wizard displays the Select volumes panel, click the required storage unit, then the required LSS, and either select or clear the check box for the source volumes you want to add to or remove from the Global Mirror session. The panel looks similar to Figure 25-39 on page 432.

Note: At this step, if Global Copy pairs and FlashCopy pairs have not been created yet, you can click Create Metro Mirror to launch the wizard described in 25.8.2, Creating Global Copy pairs on page 422, and you can click Create FlashCopy to launch the wizard described in 25.8.3, Creating FlashCopy relationships on page 427.

Click Next to proceed with the second step of this wizard. When the wizard displays the Define properties panel, the session number field should be filled with the appropriate session number. This panel looks similar to Figure 25-41 on page 433. In this panel there are three important fields you can modify: Consistency Group interval time (seconds), Maximum coordination interval (milliseconds), and Maximum time writes inhibited to remote site (seconds). For a detailed discussion of these Global Mirror tuning parameters, see 21.4.2, Consistency Group parameters on page 321.

Click Next to proceed with the third and last step of this wizard, the Verification panel. Here you verify all the components of your Global Mirror session configuration and, if necessary, click Back to correct any of them or click Finish to validate.
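Volumes can also be added to or removed from a running session with the DS CLI chsession command. The following is only a sketch; the session number 02 and the additional volume 4712 are hypothetical values for the GUI example used here, so verify the chsession parameters with the DS CLI help at your code level:

dscli> chsession -dev IBM.2107-7503461 -lss 47 -action add -volume 4712 02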


Part 8

Metro/Global Mirror

In this part we describe a new function for the DS8000, Metro/Global Mirror, a 3-site disaster recovery solution.


Chapter 26. Metro/Global Mirror overview


In this chapter we give you an overview of the characteristics and operation of IBM System Storage Metro/Global Mirror, a 3-site, high availability, disaster recovery implementation. Also discussed are the considerations for its implementation on the IBM System Storage DS8000.

Note: Metro/Global Mirror is not supported on the DS6000.


26.1 Metro/Global Mirror overview


Metro/Global Mirror is a 3-site, multi-purpose, replication solution for both System z and open systems data. As shown in Figure 26-1, Metro Mirror provides high availability replication from local site (site A) to intermediate site (site B), while Global Mirror provides long distance disaster recovery replication from intermediate site (site B) to remote site (site C).

Figure 26-1 Metro/Global Mirror elements (Metro Mirror: synchronous, short distance network from the A volumes at the local site (site A) to the B volumes at the intermediate site (site B); Global Mirror: asynchronous, long distance network from the B volumes to the C volumes at the remote site (site C), with an incremental NOCOPY FlashCopy from C to D)

26.1.1 Metro Mirror and Global Mirror: Comparison


Both Metro Mirror and Global Mirror are well established replication solutions. Metro/Global Mirror combines Metro Mirror and Global Mirror to incorporate the best features of the two solutions, as follows:

Metro Mirror:
Synchronous operation supports zero data loss.
The opportunity to locate the intermediate site disk subsystems close to the local site disk subsystems allows the use of the intermediate site disk subsystems in a high availability configuration.

Note: Metro Mirror is supported to a distance of up to 300 km but, when in a Metro/Global Mirror implementation, a shorter distance might be more appropriate in support of the high availability functionality.

Global Mirror:
Asynchronous operation supports long distance replication for disaster recovery.
The Global Mirror methodology has no impact on applications at the local site.
It provides a recoverable, restartable, consistent image at the remote site with a Recovery Point Objective (RPO) typically in the 3 to 5 second range.

This chapter provides a high level overview of Metro/Global Mirror. The details of the individual processes/elements of the solution, for example, Metro Mirror and Global Mirror, are not repeated here because they are already described in great detail in respective parts of this book.

26.1.2 Metro/Global Mirror design objectives


The development of the Metro/Global Mirror solution was based upon the following requirements and principles:

System z data, open systems data, or both: Metro Mirror and Global Mirror are replication technologies that support both System z data and open systems data, separately or intermixed on the same disk subsystems.

Proven technology: Both Metro Mirror and Global Mirror have existed in client production environments for an extended period.

Consistent, recoverable data:
Metro Mirror: synchronous copy and Freeze/Run support this requirement.
Global Mirror: momentarily pausing the primary application write I/Os for approximately 3 ms every 3 to 5 seconds allows the formation of consistent groups of updates that are drained to the remote site and FlashCopied to create the appropriate image.

High availability, with mirrored data immediately available and accessible by local hosts: The Metro Mirror image is a zero data loss image that can be placed in or near the local site and can be quickly made available to the local site applications.

Disaster recovery, with mirrored data at long distance for regional disasters: Global Mirror is asynchronous and the remote site can be at continental distance.

Recovery Point Objective (RPO) in single digit seconds:
Metro Mirror: RPO = zero.
Global Mirror: RPO typically 3 to 5 seconds.

Scalable solution that accommodates growth across many disk subsystems:
Metro Mirror: there are no architectural limits on the number of disk subsystems.
Global Mirror: there are some current limits on the total number of disk subsystems.

26.2 Metro/Global Mirror processes


Figure 26-2 illustrates our overview of Metro/Global Mirror processes. You can understand the Metro/Global Mirror process better if you understand the component processes. There is one notable difference in that the intermediate site volumes (site B volumes) are special because they act as both source (GM) and target (MM) volumes at the same time. The local site (site A) to intermediate site (site B) component is identical to Metro Mirror. Application writes are synchronously copied to the intermediate site before write complete is signaled to the application. All writes to the local site volumes in the mirror are treated in exactly the same way. This is explained in great detail in Chapter 13, Metro Mirror overview on page 173, of this book.


The intermediate site (site B) to remote site (site C) component is identical to Global Mirror, except that:
The writes to intermediate site volumes are Metro Mirror secondary writes and not application primary writes.
The intermediate site volumes are both source (GM) and target (MM) at the same time.

The intermediate site disk subsystems are collectively paused by the Global Mirror master disk subsystem to create the Consistency Group (CG) set of updates. This pause would normally take 3 ms every 3 to 5 seconds. After the CG set is formed, the Metro Mirror writes, from local site (site A) volumes to intermediate site (site B) volumes, are allowed to continue. Also, the CG updates continue to drain to remote site (site C) volumes. The intermediate site to remote site drain is expected to take only a few seconds to complete, perhaps as few as 2 or 3 seconds.

When all updates are drained to the remote site, all changes since the last FlashCopy from the C volumes to the D volumes are logically (NOCOPY) FlashCopied to the D volumes. After the logical FlashCopy is complete, the intermediate site to remote site Global Copy data transfer is resumed until the next formation of a Global Mirror Consistency Group.

The process described above is repeated every 3 to 5 seconds if the interval for Consistency Group formation is set to zero. Otherwise it is repeated at the specified interval plus 3 to 5 seconds. The Global Mirror processes are presented in much greater detail in Chapter 21, Global Mirror overview on page 303.

Figure 26-2 Metro/Global Mirror overview diagram (Metro Mirror write sequence: 1. application write to volume A; 2. A copied to volume B; 3. write complete to A; 4. write complete to the application. Global Mirror Consistency Group formation: a. write updates to the B volumes are paused (less than 3 ms) to create the CG; b. the CG updates to the B volumes are drained to the C volumes; c. after all updates are drained, the changed data is FlashCopied from the C to the D volumes)


IBM offers services and solutions for the automation and management of the Metro/Global Mirror environment. These include GDPS for System z and TPC for Replication (see Chapter 6, IBM TotalStorage Productivity Center for Replication on page 43). More details about GDPS can be found at the following Web site:
http://www-03.ibm.com/systems/z/gdps/



Chapter 27. Configuration and setup


In this chapter we give you an overview of possible Metro/Global Mirror setups and explain how they can be applied in an open systems environment. We also provide a hands-on section on how to set up a simple Metro/Global Mirror environment.


27.1 Metro/Global Mirror configuration


As described in Chapter 26, Metro/Global Mirror overview on page 443, the Metro/Global Mirror is a cascade of a Metro Mirror and a Global Mirror. The Metro Mirror covers the data replication between local site and intermediate site. The Global Mirror is cascaded from the secondary volumes of the Metro Mirror and continues to copy the data from intermediate site to remote site.

27.1.1 Metro/Global Mirror with additional Global Mirror


A recovery of the storage at the remote site could be initiated for various reasons, such as a disaster situation at the local site, or a planned failover to avoid the risk of production impact during maintenance operations at the local site. After a recovery at the remote site, and once production has been started there, it is likely that this status will persist for an extended period of time. During that time, the data at the remote site is not protected against a disaster situation that might occur at the remote site.

One method to be prepared for a disaster at the remote site is to use the storage at the intermediate site as the copy target. Because the bandwidth between the remote and the intermediate site will generally be too low to set up a Metro Mirror, the only practical way to copy the data is to establish a Global Copy relationship. Global Copy by itself does not guarantee that the copied data is consistent, but a Global Mirror relationship can provide the necessary data consistency. To set up a Global Mirror between the remote and intermediate sites, an additional set of FlashCopy volumes is required at the intermediate site. Figure 27-1 illustrates the volume setup for Metro/Global Mirror with an additional Global Mirror.

Figure 27-1 Metro/Global Mirror with additional Global Mirror from remote to intermediate site (Metro Mirror from the A volumes at the local site to the B volumes at the intermediate site; Global Mirror from the B volumes to the C and D volumes at the remote site; an additional Global Mirror can be established from the remote site back to the intermediate site)

450

IBM System Storage DS8000: Copy Services in Open Environments

27.1.2 Metro/Global Mirror with multiple storage subsystems


Based on the capabilities of Global Mirror, it is possible to span the Metro/Global Mirror across multiple storage subsystems. At the intermediate site, one of the storage subsystems has the role of the Global Mirror master and the others are subordinates. The Global Mirror master controls the formation of the Consistency Group and requires additional RMC (PPRC) links between the master and each subordinate. See Figure 27-2. The Metro Mirror relationships are set up from the multiple storage subsystems at the local site to the Global Mirror primary subsystems at the intermediate site. The Metro Mirror paths must be established with the consistency group option enabled.

Figure 27-2 Setup of a Metro/Global Mirror with multiple storage subsystems (several Metro Mirror relationships run from the local site to the intermediate site, and Global Mirror runs from the intermediate site to the remote site; one intermediate site storage subsystem acts as the Global Mirror master and the others as subordinates)

27.2 Configuration examples


In an open systems environment, large applications are typically distributed across multiple servers according to their components and functionality. For example, database-related applications typically consist of a database engine and a certain number of application servers acting as front ends. In larger configurations, you might find additional servers that offer file services. All these components can be part of a single application.

In a disaster situation where a failover to the intermediate or remote site has to be done, all servers and storage that are required to start up an application are subject to the takeover. Thus a disaster recovery concept must be worked out for each application individually; it defines which components and functions must be recovered at the intermediate or remote site in case of a disaster.

Figure 27-3 illustrates a possible Metro/Global Mirror configuration for an open systems environment. It is divided into a primary production site, a secondary production site, and a remote site. Between the primary and secondary production sites, Metro Mirror relationships can be configured in either direction. In both production sites a Global Mirror is configured to the remote site. All connections are set up with redundant inter-switch links between Fibre Channel directors or switches.

Note: The schematic in Figure 27-3 shows only one Fibre Channel director per site. In a real implementation, the connections between the sites should be realized by two redundant fabrics across all locations.

Figure 27-3 Configuration example for an open systems environment (primary and secondary production sites, each with a DS8000 and a 2109-M14 director, connected by Metro Mirror in both directions; Global Mirror from both production sites to the DS8000 at the remote site; P590 and P550 hosts form HACMP stretched clusters across the production sites, with remote production servers for the stretched clusters at the remote site)

When an application uses a Metro Mirror from the primary production site to the secondary production site, the primary production site is the Metro/Global Mirror local site and the secondary production site is the intermediate site; the roles are reversed for applications mirrored in the other direction.


This setup allows you to configure cluster systems that span both production sites, offering more flexibility for high availability setups. The failure of a server at one production site can be handled automatically at the other production site using the automatic takeover feature of the cluster software, while the primary storage is still being used. Otherwise, if a single host system, as shown in Figure 27-3 on page 452, fails at the primary production site, the only possibility to bring up production again is a recovery of the storage and the server at the remote site.

A failure of the storage can be seen as a failure of an infrastructure component, and this can be categorized as at least a partial disaster. In this case, a recovery of storage at the intermediate or remote site has to be performed. The storage at the intermediate site can be accessed by the server at the local site when the bandwidth between both production sites is high enough. This implies that a takeover of the servers is not necessarily required, which offers more flexibility in how to start up the applications.

For cost effectiveness, it is possible to consolidate the needed disk capacity at the remote site. Because of the distance, a stretched cluster environment might not be possible, so single host systems or locally clustered systems are implemented at the remote site. A large scale server capable of providing multiple logical partitions could be used to run the multiple applications from the production sites. It is also possible to equip the storage subsystems at the remote site with disk drive modules of higher capacity to reduce the number of installed storage subsystems.

27.3 Initial setup of Metro/Global Mirror


In this section we describe how to set up a Metro/Global Mirror environment. For each step we provide sample commands and their output using the DS CLI. Figure 27-4 shows the logical setup of the Metro/Global Mirror. It also shows the steps in the order of the operational sequence. Considering the potentially large amount of data that has to be copied, we recommend that you first set up Global Copy with the NOCOPY option. When the Metro Mirror is created and starts to synchronize from the local site to the intermediate site, the Global Copy passes the tracks to the remote site. To minimize the impact to a running production system, we recommend that you start the Metro Mirror initial copy when the production write activity is low.

Figure 27-4 Set up Metro/Global Mirror (the numbered steps 1 to 6 across the local, intermediate, and remote sites correspond to the setup sequence listed below)


As shown in Figure 27-4 on page 453, the setup of the Metro/Global Mirror is done in the following steps:
1. Set up all Metro Mirror and Global Mirror paths.
2. Set up Global Copy with NOCOPY from the intermediate site to the remote site.
3. Set up Metro Mirror between the local and intermediate sites. We recommend that you let the initial Metro Mirror copy complete before proceeding to the next step.
4. Set up FlashCopy at the remote site.
5. Create a Global Mirror session and add volumes to the session at the intermediate site.
6. Start Global Mirror at the intermediate site.

27.3.1 Identifying the PPRC ports


For performance reasons, it is very important that the links between the sites are used exclusively for PPRC paths. In general, it is possible to mix ports for PPRC paths with host connections, but because of the high link utilization for PPRC communication, this might impact host I/Os. Use the lsavailpprcport command to list all possible routes for the mirror links; it displays the local and remote ports that have a physical connection, either a point-to-point link or a zoned connection through a storage area network. Example 27-1 shows the PPRC links (physical paths) that are available in our configuration. These connections can be used to create the PPRC paths according to your installation plan.
Example 27-1 Display available PPRC ports
dscli> lsavailpprcport -dev IBM.2107-75ABTV1 -remotedev IBM.2107-75ABTV2 -remotewwnn 5005076303FFCE63 -fullid 70:72
Date/Time: June 19, 2006 2:48:43 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV1
Local Port             Attached Port          Type
==================================================
IBM.2107-75ABTV1/I0033 IBM.2107-75ABTV2/I0233 FCP
IBM.2107-75ABTV1/I0102 IBM.2107-75ABTV2/I0301 FCP

27.3.2 Step 1: Set up all Metro Mirror and Global Mirror paths
Based on the PPRC links that are discovered, the paths have to be created for each LSS pair in the Metro/Global Mirror configuration. There are two prerequisites which have to be fulfilled to create PPRC paths successfully:
The physical connection must be available, either by an appropriate zone in a storage area network or by point-to-point links.
The LSSs must exist in both storage devices, and at least one volume must exist in each of the designated LSSs.
It is a good practice to create the PPRC paths for both directions. In case of a disaster, all paths in all directions are available and do not have to be created in the critical situation of the disaster.


Example 27-2 shows the setup of Metro Mirror paths between the local and intermediate sites. The mkpprcpath command is always executed at the DS8000 where the source volumes are for each direction. This means that for the direction local to intermediate, the mkpprcpath is executed at the local site and vice versa for the opposite direction.
Example 27-2 Set up PPRC paths between local and intermediate site
At local site:
dscli> mkpprcpath -dev IBM.2107-75ABTV1 -remotedev IBM.2107-75ABTV2 -remotewwnn 5005076303FFCE63 -srclss 70 -tgtlss 72 -consistgrp I0033:I0233 I0102:I0301
Date/Time: June 19, 2006 3:22:45 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV1
CMUC00149I mkpprcpath: Remote Mirror and Copy path 70:72 successfully established.

At intermediate site:
dscli> mkpprcpath -dev IBM.2107-75ABTV2 -remotedev IBM.2107-75ABTV1 -remotewwnn 5005076303FFC663 -srclss 72 -tgtlss 70 -consistgrp I0233:I0033 I0301:I0102
Date/Time: June 19, 2006 3:26:05 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV2
CMUC00149I mkpprcpath: Remote Mirror and Copy path 72:70 successfully established.

Note: To ensure the consistency of the Metro Mirror, its PPRC paths have to be created with the -consistgrp option. Between the intermediate site and the remote site, Global Mirror is responsible for the creation of the Consistency Group, which means that the paths for the Global Copy must not be set up with the -consistgrp option.

Example 27-3 shows the mkpprcpath commands for the Global Mirror leg. Again, the paths are created for both directions, and each command is executed at the DS8000 where the source volumes are.
Example 27-3 Set up PPRC paths between intermediate site and remote site
At intermediate site:
dscli> mkpprcpath -dev IBM.2107-75ABTV2 -remotedev IBM.2107-75BYGT1 -remotewwnn 5005076304FFC671 -srclss 72 -tgtlss 74 I0230:I0231 I0303:I0301
Date/Time: June 19, 2006 3:28:46 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV2
CMUC00149I mkpprcpath: Remote Mirror and Copy path 72:74 successfully established.

At remote site:
dscli> mkpprcpath -dev IBM.2107-75BYGT1 -remotedev IBM.2107-75ABTV2 -remotewwnn 5005076303FFCE63 -srclss 74 -tgtlss 72 I0231:I0230 I0301:I0303
Date/Time: June 19, 2006 3:31:37 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75BYGT1
CMUC00149I mkpprcpath: Remote Mirror and Copy path 74:72 successfully established.


27.3.3 Step 2: Set up Global Copy NOCOPY from intermediate to remote sites
The Global Copy relationship will be cascaded from a Metro Mirror relationship. To enable the cascade, the option -cascade has to be supplied with the mkpprc command. In Example 27-4 the complete command is shown. In step 3 the Metro Mirror will be created and will copy all data from local site volumes to intermediate site volumes. The data are then forwarded by the Global Copy to remote site volumes. To avoid copying the data twice, the initial setup of the Global Copy is initiated with NOCOPY mode.
Example 27-4 Set up Global Copy from intermediate to remote sites
dscli> mkpprc -dev IBM.2107-75ABTV2 -remotedev IBM.2107-75BYGT1 -type gcp -mode nocp -cascade 7200-7203:7400-7403
Date/Time: June 19, 2006 3:57:00 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV2
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 7200:7400 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 7201:7401 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 7202:7402 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 7203:7403 successfully created.

27.3.4 Step 3: Set up Metro Mirror between local and intermediate sites
To set up the Metro Mirror the mkpprc command is used, where the option -type mmir signifies that a Metro Mirror will be created. For the initial setup the pairs are created in the full copy mode. Example 27-5 shows the mkpprc command, which is executed at the local site.
Example 27-5 Set up Metro Mirror from the local site to the intermediate sites
dscli> mkpprc -dev IBM.2107-75ABTV1 -remotedev IBM.2107-75ABTV2 -type mmir -mode full 7000-7003:7200-7203
Date/Time: June 19, 2006 3:58:51 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV1
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 7000:7200 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 7001:7201 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 7002:7202 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 7003:7203 successfully created.
dscli>

The volumes at the intermediate site are target volumes for Metro Mirror and source volumes for Global Copy at the same time. When the lspprc command is executed against these volumes, it shows the pair status of both the Metro Mirror and Global Copy relationships, as illustrated in Example 27-6.
Example 27-6 Query the PPRC relationship at the intermediate site
dscli> lspprc -dev IBM.2107-75ABTV2 -remotedev IBM.2107-75BYGT1 -fullid -fmt default 7200-7203
Date/Time: June 19, 2006 4:02:26 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV2
ID                                          State              Reason Type         SourceLSS           Timeout (secs) Critical Mode First Pass Status
=====================================================================================================================================================
IBM.2107-75ABTV1/7000:IBM.2107-75ABTV2/7200 Target Full Duplex -      Metro Mirror IBM.2107-75ABTV1/70 unknown        Disabled      Invalid
IBM.2107-75ABTV1/7001:IBM.2107-75ABTV2/7201 Target Full Duplex -      Metro Mirror IBM.2107-75ABTV1/70 unknown        Disabled      Invalid
IBM.2107-75ABTV1/7002:IBM.2107-75ABTV2/7202 Target Full Duplex -      Metro Mirror IBM.2107-75ABTV1/70 unknown        Disabled      Invalid
IBM.2107-75ABTV1/7003:IBM.2107-75ABTV2/7203 Target Full Duplex -      Metro Mirror IBM.2107-75ABTV1/70 unknown        Disabled      Invalid
IBM.2107-75ABTV2/7200:IBM.2107-75BYGT1/7400 Copy Pending       -      Global Copy  IBM.2107-75ABTV2/72 unknown        Disabled      True
IBM.2107-75ABTV2/7201:IBM.2107-75BYGT1/7401 Copy Pending       -      Global Copy  IBM.2107-75ABTV2/72 unknown        Disabled      True
IBM.2107-75ABTV2/7202:IBM.2107-75BYGT1/7402 Copy Pending       -      Global Copy  IBM.2107-75ABTV2/72 unknown        Disabled      True
IBM.2107-75ABTV2/7203:IBM.2107-75BYGT1/7403 Copy Pending       -      Global Copy  IBM.2107-75ABTV2/72 unknown        Disabled      True
27.3.5 Step 4: Set up FlashCopy at remote site


When the Metro Mirror relations have reached the Full Duplex state, set up the FlashCopy at the remote site. Starting with DS8000 LIC Release 3, the FlashCopy target (D) volumes in a Metro/Global Mirror environment can be either standard volumes or Space Efficient volumes (IBM FlashCopy SE). The creation and the handling of the Global Mirror is almost identical for FlashCopy and FlashCopy SE; only the creation and removal of FlashCopy SE pairs use dedicated parameters.
Note that this FlashCopy relationship has certain attributes that are typical and required when creating a Global Mirror. These attributes are:
- Inhibit target write: Protect the FlashCopy target volume from being modified by anyone other than Global Mirror related actions.
- Enable change recording: Apply only those changes to the target volume that occurred on the source volume between FlashCopy establish operations, except for the first time when the FlashCopy is initially established.
- Make relationship persistent: Keep the FlashCopy relationship until it is explicitly or implicitly terminated. This attribute is set automatically because of the nocopy property.
- Nocopy: Do not initiate a background copy from source to target, but keep the set of FlashCopy bitmaps required for tracking the source and target volumes. These bitmaps are established the first time a FlashCopy relationship is created with the nocopy attribute. Before a track on the FlashCopy source volume is modified between Consistency Group creations, the track is copied to the FlashCopy target volume to preserve the previous point-in-time copy. This includes updates to the corresponding bitmaps to reflect the new location of the track that belongs to the point-in-time copy. Note that each Global Copy write to its target volume within the window between two adjacent Consistency Groups can cause FlashCopy I/O operations.
- Space Efficient target: Use Space Efficient volumes as FlashCopy targets, which means that FlashCopy SE is used in the Global Mirror setup. Virtual capacity was allocated in a Space Efficient repository when these volumes were created. One repository volume per extent pool provides the physical storage for all Space Efficient volumes in that extent pool. Background copy is not allowed if Space Efficient targets are used.
- Target out of space: Indicates the action to be taken if the Space Efficient repository runs out of space. We recommend that you use the value fail for this parameter, which causes the FlashCopy pair relationship to fail if the repository runs out of space.
For a detailed description of FlashCopy SE, refer to Chapter 10, IBM FlashCopy SE on page 129. You can check the IBM Storage support Web site for the availability of Copy Services features.


Example 27-7 shows how to create the standard FlashCopy for the Global Mirror at the remote site.
Example 27-7 Create the FlashCopy for the Global Mirror at the remote site
dscli> mkflash -dev IBM.2107-75BYGT1 -tgtinhibit -record -persist -nocp 7400-7403:7600-7603
Date/Time: June 19, 2006 4:04:17 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75BYGT1
CMUC00137I mkflash: FlashCopy pair 7400:7600 successfully created.
CMUC00137I mkflash: FlashCopy pair 7401:7601 successfully created.
CMUC00137I mkflash: FlashCopy pair 7402:7602 successfully created.
CMUC00137I mkflash: FlashCopy pair 7403:7603 successfully created.

27.3.6 Step 5: Create Global Mirror session and add volumes to session
This is done at the intermediate site. The session is created with the mksession command. A session has to be created for each LSS, and it is identified by a session number. Only one session can be active in a storage subsystem, which means that all LSSs in a Global Mirror must use the same session number. Example 27-8 shows that an empty session is created first. The second command populates the session with the primary volumes of the Global Copy. As long as the Global Mirror has not been started and no consistency group has been formed, the status of the volumes is Join Pending.
Example 27-8 Create the sessions and add the volumes
dscli> mksession -dev IBM.2107-75ABTV2 -lss 72 1
Date/Time: June 19, 2006 4:06:16 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV2
CMUC00145I mksession: Session 1 opened successfully.
dscli> chsession -dev IBM.2107-75ABTV2 -lss 72 -action add -volume 7200-7203 1
Date/Time: June 19, 2006 4:07:27 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV2
CMUC00147I chsession: Session 1 successfully modified.
dscli> lssession -dev IBM.2107-75ABTV2 72
Date/Time: June 19, 2006 4:07:45 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV2
LSS ID Session Status Volume VolumeStatus PrimaryStatus        SecondaryStatus       FirstPassComplete AllowCascading
======================================================================================================================
72     01      Normal 7200   Join Pending Primary Copy Pending Secondary Full Duplex True              Enable
72     01      Normal 7201   Join Pending Primary Copy Pending Secondary Full Duplex True              Enable
72     01      Normal 7202   Join Pending Primary Copy Pending Secondary Full Duplex True              Enable
72     01      Normal 7203   Join Pending Primary Copy Pending Secondary Full Duplex True              Enable

27.3.7 Step 6: Start Global Mirror at intermediate site


Finally, the Global Mirror is started. The mkgmir command requires the session number and one of the LSSs that are configured in the session. In Example 27-9, the lssession command shows that the volumes are now in the active state.


Example 27-9 Start the Global Mirror
dscli> mkgmir -dev IBM.2107-75ABTV2 -lss 72 -session 1
Date/Time: June 19, 2006 4:10:57 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV2
CMUC00162I mkgmir: Global Mirror for session 1 successfully started.
dscli> lssession -dev IBM.2107-75ABTV2 72
Date/Time: June 19, 2006 4:11:23 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV2
LSS ID Session Status         Volume VolumeStatus PrimaryStatus        SecondaryStatus       FirstPassComplete AllowCascading
==============================================================================================================================
72     01      CG In Progress 7200   Active       Primary Copy Pending Secondary Full Duplex True              Enable
72     01      CG In Progress 7201   Active       Primary Copy Pending Secondary Full Duplex True              Enable
72     01      CG In Progress 7202   Active       Primary Copy Pending Secondary Full Duplex True              Enable
72     01      CG In Progress 7203   Active       Primary Copy Pending Secondary Full Duplex True              Enable

With the command showgmir, you can verify if the Global Mirror was successfully created. The Copy State should show Running. See Example 27-10.
Example 27-10 Monitor Global Mirror
dscli> showgmir -dev IBM.2107-75ABTV2 72
Date/Time: June 19, 2006 4:20:57 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV2
ID                         IBM.2107-75ABTV2/72
Master Count               1
Master Session ID          0x01
Copy State                 Running
Fatal Reason               Not Fatal
CG Interval (seconds)      0
XDC Interval(milliseconds) 50
CG Drain Time (seconds)    30
Current Time               06/19/2006 16:18:55 CEST
CG Time                    06/19/2006 16:18:55 CEST
Successful CG Percentage   100
FlashCopy Sequence Number  0x4496B24F
Master ID                  IBM.2107-75ABTV2
Subordinate Count          0
Master/Subordinate Assoc   -

When the option -metrics is supplied with the showgmir command, the progress of the Consistency Group formation can be monitored. The entry Total Successful CG Count shows the current number of successfully created Consistency Groups. While the Global Mirror is running, this number grows steadily between successive invocations of the showgmir command. See Example 27-11.
Example 27-11 Show progress of consistency formation
dscli> showgmir -dev IBM.2107-75ABTV2 -metrics 72
Date/Time: June 19, 2006 4:21:57 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV2
ID                            IBM.2107-75ABTV2/72
Total Failed CG Count         0
Total Successful CG Count     660
Successful CG Percentage      100
Failed CG after Last Success  0
Last Successful CG Form Time  06/19/2006 16:19:55 CEST
Coord. Time (seconds)         50
Interval Time (seconds)       0
Max Drain Time (seconds)      30
First Failure Control Unit    -
First Failure LSS             -
First Failure Status          No Error
First Failure Reason          -
First Failure Master State    -
Last Failure Control Unit     -
Last Failure LSS              -
Last Failure Status           No Error
Last Failure Reason           -
Last Failure Master State     -
Previous Failure Control Unit -
Previous Failure LSS          -
Previous Failure Status       No Error
Previous Failure Reason       -
Previous Failure Master State -
dscli> showgmir -dev IBM.2107-75ABTV2 -metrics 72
Date/Time: June 19, 2006 4:22:00 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV2
ID                            IBM.2107-75ABTV2/72
Total Failed CG Count         0
Total Successful CG Count     663
Successful CG Percentage      100
Failed CG after Last Success  0
Last Successful CG Form Time  06/19/2006 16:19:58 CEST
Coord. Time (seconds)         50
Interval Time (seconds)       0
Max Drain Time (seconds)      30
First Failure Control Unit    -
First Failure LSS             -
First Failure Status          No Error
First Failure Reason          -
First Failure Master State    -
Last Failure Control Unit     -
Last Failure LSS              -
Last Failure Status           No Error
Last Failure Reason           -
Last Failure Master State     -
Previous Failure Control Unit -
Previous Failure LSS          -
Previous Failure Status       No Error
Previous Failure Reason       -
Previous Failure Master State -

The command showgmiroos displays the number of tracks which are out of synchronization. Example 27-12 shows the OutOfSyncTracks of LSS 72.
Example 27-12 Display the number of out-of-sync tracks
dscli> showgmiroos -dev IBM.2107-75ABTV2 -lss 72 -scope lss 1
Date/Time: June 19, 2006 4:25:54 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV2
Scope           IBM.2107-75ABTV2/72
Session         01
OutOfSyncTracks 0


The status of the session is displayed with the command lssession. It displays the session status of all volumes of an LSS. The command can take a list of LSSs to inspect the session status of volumes from multiple LSSs.
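For example, if the session spans volumes in more than one LSS, the status of all of them can be inspected in one invocation. The following is only a sketch: it assumes that LSSs 72 and 73 both contain volumes of the session, where the second LSS is an assumption for illustration only.
dscli> lssession -dev IBM.2107-75ABTV2 72 73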

27.4 Going from Metro Mirror to Metro/Global Mirror


Many installations today use Metro Mirror to provide high availability replication across two sites. This setup can be extended to provide long-distance disaster recovery replication to a third site using Metro/Global Mirror. In this section we describe how to migrate from a two-site Metro Mirror setup (local site to intermediate site) to a three-site Metro/Global Mirror setup (local site to intermediate site to remote site) using the DS CLI. This can be done while production is up and running at the local site.
Depending on the capability of the link between the intermediate site and the remote site, the initial copy of the data is more or less influenced by the write activity of the applications. If possible, the initial copy should be started during a period of low application write activity. In any case, the first pass of the Global Copy must be completed before the Global Mirror is able to form consistency groups.
Note: To ensure consistency from the local to the remote site, it is essential that the Metro Mirror paths are created with consistency enabled. If this was not the case for the existing Metro Mirror, the paths for each LSS can be changed using the chlss command.
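One possible way to end up with consistency-enabled paths is to re-establish them with the -consistgrp option of the mkpprcpath command. The following is only a sketch: the remote WWNN and the I/O port pair shown are placeholders and must be replaced with the values of your own configuration.
dscli> mkpprcpath -dev IBM.2107-75ABTV1 -remotedev IBM.2107-75ABTV2 -remotewwnn 5005076303FFC000 -srclss 70 -tgtlss 72 -consistgrp I0100:I0200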

Figure 27-5 Migrating from Metro Mirror to Metro/Global Mirror

As shown in Figure 27-5, the steps required to extend an existing Metro Mirror to a three-site setup using Metro/Global Mirror are:
1. Set up PPRC paths from the intermediate site to the remote site.
2. Set up Global Copy with COPY from the intermediate site to the remote site. In contrast to the initial Metro/Global Mirror setup, the Global Copy is established in full copy mode to ensure that all data is copied from the intermediate site to the remote site (see the sketch after this list).
3. Set up FlashCopy at the remote site.
4. Create a Global Mirror session and add volumes to the session at the intermediate site.
5. Start Global Mirror at the intermediate site.
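The following sketch illustrates step 2. It reuses the storage image IDs and volume ranges from the examples in 27.3; in a real migration, substitute the identifiers of your own configuration.
dscli> mkpprc -dev IBM.2107-75ABTV2 -remotedev IBM.2107-75BYGT1 -type gcp -mode full -cascade 7200-7203:7400-7403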


The steps are described in detail in 27.3, Initial setup of Metro/Global Mirror on page 453.

27.5 Recommendations for setting up Metro/Global Mirror


In this section we give you some brief recommendations for planning the setup of Metro/Global Mirror:
- Use LSSs to group volumes logically by application. Because some Copy Services functions operate on complete LSSs, for ease of management we recommend that you dedicate LSSs to one application. For example, a freezepprc command freezes all volumes of the specified LSS (see the sketch after this list). To manage applications independently, each application should use dedicated LSSs.
- Use at least two PPRC links. We strongly recommend that you implement at least two independent PPRC links each for Metro Mirror and Global Mirror for redundancy. Capacity planning for Metro Mirror might yield a requirement for more than two links, because the synchronous relationship requires a higher bandwidth. With Global Copy only the last modified version of a track is transmitted to the secondary volumes, so the bandwidth requirement is not as high as it is for Metro Mirror. However, to provide redundancy, at least two independent links should also be deployed for the Global Copy.
- Use at least two LSSs per host (one per server node). For performance, we recommend that you use at least two LSSs per host so that the workload is processed by both server nodes of the DS8000. The volume identifiers that are assigned at volume creation time determine this distribution: even-numbered LSSs are processed by one server node and odd-numbered LSSs by the other.
- Plan the volume placement for FlashCopy at the remote site. FlashCopy is a copy process that copies data within the storage subsystem. FlashCopy target volumes should use LSSs that are processed by the same server node as the FlashCopy source volumes; otherwise the data has to be passed to the other server node.
- Use the same number of ports at the local and remote sites. Make sure that the number of ports at the local and remote sites is balanced. If there are two ports at the local site, there should be two ports at the remote site.
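For illustration, assuming an application whose volumes are isolated in LSS 70 at the local site and mirrored to LSS 72 at the intermediate site, that application could be frozen on its own without affecting other applications. The device IDs below are taken from earlier examples and the LSS pair is an assumption.
dscli> freezepprc -dev IBM.2107-75ABTV1 -remotedev IBM.2107-75ABTV2 70:72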


Chapter 28. General Metro/Global Mirror operations


In this chapter we discuss general considerations for Metro Mirror and Global Mirror when used within the context of Metro/Global Mirror. We also provide hints and tips related to the specific operation of Metro/Global Mirror.


28.1 Definitions
For reference, we define some terms used in this chapter:
Host: A host is a server where applications or components of applications are running. Hosts can be implemented as single servers or can be clustered with other servers.
Applications: An application comprises all software components that are used to build a self-contained solution for the user's business. The different software components run on one or more servers.
Application takeover: Applications that run on clustered servers can take advantage of the takeover procedures offered by the cluster software. In a failure situation the cluster software stops the application where it is currently running and starts it automatically on the other clustered server.
Primary storage: In a remote copy relation the primary storage is where the regular production resides; it is the source for the data replication.
Secondary storage: In a remote copy relation the secondary storage is where the data is replicated to; it is the target of the copy relation.
Storage failover: A storage failover changes the access point of the data from the primary to the secondary storage subsystem. The application is started using the secondary storage subsystem.

28.2 General considerations for storage failover


A failover of the storage has a serious impact on a running production environment. It results in down time for the applications, because they must be restarted with the storage of the secondary site. Depending on the configuration of the storage environment, access to the secondary storage has to be configured after the failover and before the applications can be started. Thus a storage failover is only performed for situations such as:
- A disaster, where required infrastructure components are no longer accessible. See Chapter 31, Unplanned scenarios on page 503.
- Maintenance operations at the local site.
- Migration of applications.
Disaster recovery tests can be performed without failover of the applications.

28.3 Checking pair status before failover


Before doing a failover of the Metro Mirror or the Global Copy relationship, it is essential that the status of the pair relations is checked for synchronicity. For a planned failover the status of the Metro Mirror at the primary and secondary sites must be in the Full Duplex mode. If the status is still Copy Pending, it means that the data is not completely copied to the secondary volumes.


In Example 28-1 all volumes of the Metro Mirror are in Full Duplex mode.
Example 28-1 Full Duplex Mode of a Metro Mirror dscli>lspprc -dev IBM.2107-7520781 -remotedev IBM.2107-75ABTV1 -fullid -fmt default 6000-6003 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status =========================================================================================== =================================================== IBM.2107-7520781/6000:IBM.2107-75ABTV1/6200 Full Duplex Metro Mirror IBM.2107-7520781/60 unknown Disabled Invalid IBM.2107-7520781/6001:IBM.2107-75ABTV1/6201 Full Duplex Metro Mirror IBM.2107-7520781/60 unknown Disabled Invalid IBM.2107-7520781/6002:IBM.2107-75ABTV1/6202 Full Duplex Metro Mirror IBM.2107-7520781/60 unknown Disabled Invalid IBM.2107-7520781/6003:IBM.2107-75ABTV1/6203 Full Duplex Metro Mirror IBM.2107-7520781/60 unknown Disabled Invalid

The Global Copy volume status will always be Copy Pending. To verify that all tracks were copied to the secondary site, the number of Out of Sync Tracks must be checked. The Global Copy is synchronized when the Out of Sync Tracks value is zero for all volume pairs, as shown in Example 28-2.
Example 28-2 Synchronized Global Copy relation dscli> lspprc -dev IBM.2107-75ABTV2 -remotedev IBM.2107-75ABTV1 -l -fullid -fmt default 6400-6403 ID State Reason Type Out Of Sync Tracks Tgt Read Src Cascade Tgt Cascade Date Suspended SourceLSS Timeout (secs) Critical Mode First Pass Status =========================================================================================== =========================================================================================== =========================== IBM.2107-75ABTV2/6400:IBM.2107-75ABTV1/6200 Copy Pending Global Copy 0 Disabled Enabled invalid IBM.2107-75ABTV2/64 unknown Disabled True IBM.2107-75ABTV2/6401:IBM.2107-75ABTV1/6201 Copy Pending Global Copy 0 Disabled Enabled invalid IBM.2107-75ABTV2/64 unknown Disabled True IBM.2107-75ABTV2/6402:IBM.2107-75ABTV1/6202 Copy Pending Global Copy 0 Disabled Enabled invalid IBM.2107-75ABTV2/64 unknown Disabled True IBM.2107-75ABTV2/6403:IBM.2107-75ABTV1/6203 Copy Pending Global Copy 0 Disabled Enabled invalid IBM.2107-75ABTV2/64 unknown Disabled True

In a Metro/Global Mirror the volumes at the intermediate site are in a different state than in a conventional Metro Mirror or Global Mirror. Because the Global Copy is cascaded from the Metro Mirror, the volumes at the intermediate site are target and source at the same time. Thus the lspprc command issued at the intermediate site shows both the Metro Mirror and the Global Copy relationships, as shown in Example 28-3.
Example 28-3 Status of the intermediate volumes in a Metro/Global Mirror dscli> lspprc -dev IBM.2107-75ABTV1 -remotedev IBM.2107-7520781 -fullid -fmt default 6200-6203 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status


=========================================================================================== ========================================================== IBM.2107-7520781/6000:IBM.2107-75ABTV1/6200 Target Full Duplex Metro Mirror IBM.2107-7520781/60 unknown Disabled Invalid IBM.2107-7520781/6001:IBM.2107-75ABTV1/6201 Target Full Duplex Metro Mirror IBM.2107-7520781/60 unknown Disabled Invalid IBM.2107-7520781/6002:IBM.2107-75ABTV1/6202 Target Full Duplex Metro Mirror IBM.2107-7520781/60 unknown Disabled Invalid IBM.2107-7520781/6003:IBM.2107-75ABTV1/6203 Target Full Duplex Metro Mirror IBM.2107-7520781/60 unknown Disabled Invalid IBM.2107-75ABTV1/6200:IBM.2107-75ABTV2/6400 Copy Pending Global Copy IBM.2107-75ABTV1/62 unknown Disabled True IBM.2107-75ABTV1/6201:IBM.2107-75ABTV2/6401 Copy Pending Global Copy IBM.2107-75ABTV1/62 unknown Disabled True IBM.2107-75ABTV1/6202:IBM.2107-75ABTV2/6402 Copy Pending Global Copy IBM.2107-75ABTV1/62 unknown Disabled True IBM.2107-75ABTV1/6203:IBM.2107-75ABTV2/6403 Copy Pending Global Copy IBM.2107-75ABTV1/62 unknown Disabled True

28.4 Freezing and unfreezing Metro Mirror volumes


The freezepprc and unfreezepprc commands work in a Metro Mirror when the PPRC paths are created with the -consistgrp option. The freezepprc command does two things:
- It sets the extended long busy condition on the primary volumes of the given LSS. This causes the I/O to these volumes to be blocked. All host I/O waits until the long busy condition is removed by the unfreezepprc command or until the long busy time-out has been exceeded. The default time-out value is 120 seconds. Because this time-out is an attribute of the LSS, it can be changed with the chlss command using the option -extlongbusy.
- It removes the paths between the primary site and the secondary site.
For a detailed description of how consistency is provided in a Metro Mirror see 14.4, Consistency Group function on page 180. As shown in Example 28-4, the pair status at the primary site after the freezepprc is Suspended with the reason Freeze. At the secondary site the pair status is still Target Full Duplex.
Example 28-4 Example of a freezepprc and the pair status at the primary site At the local DS8000: dscli> freezepprc -dev IBM.2107-7503461 -remotedev IBM.2107-75ABTV1 68:62 CMUC00161W freezepprc: Remote Mirror and Copy consistency group 68:62 successfully created. dscli> lspprc -dev IBM.2107-7503461 -remotedev IBM.2107-75ABTV1 -fmt default 6800-6803 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status =========================================================================================== ===== 6800:6200 Suspended Freeze Metro Mirror 68 unknown Disabled Invalid 6801:6201 Suspended Freeze Metro Mirror 68 unknown Disabled Invalid 6802:6202 Suspended Freeze Metro Mirror 68 unknown Disabled Invalid 6803:6203 Suspended Freeze Metro Mirror 68 unknown Disabled Invalid At the intermediate DS8000:


dscli> lspprc -dev IBM.2107-75ABTV1 -remotedev IBM.2107-7503461 -fmt default 6200-6203 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status =========================================================================================== ============== 6800:6200 Target Full Duplex Metro Mirror 68 unknown Disabled Invalid 6801:6201 Target Full Duplex Metro Mirror 68 unknown Disabled Invalid 6802:6202 Target Full Duplex Metro Mirror 68 unknown Disabled Invalid 6803:6203 Target Full Duplex Metro Mirror 68 unknown Disabled Invalid

The unfreezepprc command removes the long busy status from the primary volumes, and I/O continues. The pair status of the primary volumes is still Suspended, as shown in Example 28-5.
Example 28-5 Unfreezepprc after the freezepprc dscli> unfreezepprc -dev IBM.2107-7503461 -remotedev IBM.2107-75ABTV1 68:62 CMUC00198I unfreezepprc: Remote Mirror and Copy pair 68:62 successfully thawed. dscli> lspprc -dev IBM.2107-7503461 -remotedev IBM.2107-75ABTV1 -fmt default 6800-6803 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status =========================================================================================== ===== 6800:6200 Suspended Freeze Metro Mirror 68 unknown Disabled Invalid 6801:6201 Suspended Freeze Metro Mirror 68 unknown Disabled Invalid 6802:6202 Suspended Freeze Metro Mirror 68 unknown Disabled Invalid 6803:6203 Suspended Freeze Metro Mirror 68 unknown Disabled Invalid

A freezepprc command can be issued against a running application, although it has an impact on the application: after the freezepprc, the application waits to continue its I/O until this is either enabled by the unfreezepprc command or until the 120-second time-out expires. This method is used to provide a consistent copy of the data at the secondary site without stopping the applications at the primary site.
Important: We recommend that you run both commands in one script to ensure that the time delay between the freeze and the unfreeze is as small as possible.
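One way to keep that delay short is to put both commands into a DS CLI script file and run them in a single DS CLI invocation. The following is only a sketch, reusing the device IDs of Example 28-4; the script file name and the connection parameters are placeholders.
Contents of freeze_thaw.cli:
freezepprc -dev IBM.2107-7503461 -remotedev IBM.2107-75ABTV1 68:62
unfreezepprc -dev IBM.2107-7503461 -remotedev IBM.2107-75ABTV1 68:62
Invocation:
dscli -script freeze_thaw.cli -hmc1 <hmc_address> -user <user> -passwd <password>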

28.5 Suspending volumes before failover


When a failoverpprc command is issued against the secondary volumes of a Metro Mirror that is in Full Duplex mode, it changes the status of the secondary volumes from Target Full Duplex to Host Suspended, while the primary volumes remain in Full Duplex mode. This has two implications:
- If host I/O is still ongoing, the primary storage subsystem tries to copy the data to the secondary volumes. Because they are suspended after the failover, this I/O fails, since it finds the secondary volumes in a wrong state. The primary volumes then go into the suspended state as well. This failure causes an error message to be sent to the host; the host then retries and I/O continues.
- If the Metro Mirror is to be re-established from the primary volumes to the secondary volumes, the failbackpprc command will fail, because it requires that the primary volumes are in the suspended state. The primary volumes must be put into the suspended state with a pausepprc command with the option -unconditional issued at the primary site.

A good practice is therefore to suspend the Metro Mirror or Global Copy before any failover operation, and to always perform a freeze before the failover to guarantee consistency.
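As an illustration only, suspending the Metro Mirror pairs of the configuration shown in Example 28-1 before a failover could look like the following sketch; the device IDs and volume ranges are taken from that example and must be adapted to your environment.
dscli> pausepprc -dev IBM.2107-7520781 -remotedev IBM.2107-75ABTV1 6000-6003:6200-6203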

28.6 Removing volumes from the session


The Global Mirror forms consistency groups for all volumes in the active Global Mirror session. This requires that the Global Copy volumes are in the Copy Pending state and that the FlashCopy relation exists. If one or more volumes in an active session are suspended or even removed, the Global Mirror is no longer able to form consistency groups, because one or more members of the session are in the wrong state.
Assuming an environment as described in 27.2, Configuration examples on page 451, multiple applications can be managed by the Global Mirror. In planned scenarios only one or a few applications, but not all applications, are processed for a failover. It is important that the Global Mirror is able to continue forming Consistency Groups for those applications that remain in production at the local site. To achieve this, the failed-over volumes have to be removed from the Global Mirror session. This removes these volumes from the focus of the Global Mirror, and the remaining volumes continue to be processed by the Global Mirror.
To ensure that the volumes that have to be removed from the session are consistent, the Global Mirror should be paused until all volumes are removed. After the Global Mirror is resumed, Consistency Groups are formed for the remaining volumes and the failover procedure can continue. Example 28-6 shows the commands.
Example 28-6 Removing volumes from the Global Mirror session
dscli> pausegmir -dev IBM.2107-75ABTV1 -session 1 -lss 54
Date/Time: December 11, 2005 4:02:20 PM CET IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
CMUC00163I pausegmir: Global Mirror for session 1 successfully paused.
dscli> chsession -dev IBM.2107-75ABTV1 -action remove -volume 54d4-54d7 -lss 54 1
Date/Time: December 11, 2005 4:03:14 PM CET IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
CMUC00147I chsession: Session 1 successfully modified.
dscli> resumegmir -dev IBM.2107-75ABTV1 -session 1 -lss 54
Date/Time: December 11, 2005 4:03:28 PM CET IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-75ABTV1
CMUC00164I resumegmir: Global Mirror for session 1 successfully resumed.

28.7 Checking consistency at the remote site


The Metro Mirror from the local to the intermediate site ensures consistency using the freeze and unfreeze functionality. The paths must be created with the -consistgrp option. The consistency is provided to the Metro Mirror by an extended long busy condition, which is set to the primary volumes, when all the Metro Mirror links have failed. When I/O to the primary volumes can resume, an unfreezepprc command is issued to remove the extended long busy condition. The Global Mirror is a process running at the intermediate site, which uses the FlashCopy at the remote site. When more than one storage subsystem is used at the intermediate site, one of these is denoted as the master where the Global Mirror resides (see 27.1.2, Metro/Global Mirror with multiple storage subsystems on page 451). The other storage subsystems are


the subordinates. The Global Mirror coordinates the consistency formation for all volumes of the master and the subordinates that are joined into the Global Mirror session. The consistency is formed in three steps:
1. The master coordinates with all subordinates to stop the I/O to all primary volumes for 35 msec to form a Consistency Group.
2. The Consistency Group is transmitted to the secondary volumes.
3. The FlashCopy copies all tracks that have been changed since the last write operation to the secondary volumes.
See 21.4, Consistency Groups on page 319, for a detailed description of consistency formation in a Global Mirror.

Figure 28-1 Phases of consistency group formation

The consistency of the data can be preserved in each phase of the consistency group formation process, as shown in Figure 28-1:
- When the failure occurs during the coordination time or while the data is being drained to the remote site, consistency is still available on the FlashCopy volumes, because the FlashCopy has not yet started.
- When the failure happens while the FlashCopy command is executed, a manual intervention is required to either revert or commit the consistency group before continuing with the recovery or restarting Global Mirror. The action that is indicated depends on the current status of the FlashCopy, where the sequence numbers and the revertible flag are of special interest: when the sequence numbers of the FlashCopy are different, the copy process has not started for all the volumes. In this case, the most recent FlashCopy is inconsistent and cannot be used. You must roll back with the revertflash command, which removes all uncommitted sequences from the FlashCopy target.


When the sequence numbers are all equal and there is a mix of revertible and non-revertible volumes, the copy to the FlashCopy targets has taken place but the process has not finished for some volumes. In this case the most recent FlashCopy targets are usable, and the process has to be completed manually with the commitflash command. Example 28-7 shows a commit situation. All volumes shown in the lsflash output have the same sequence number. Volumes 50E8 and above have the Revertible flag enabled, while for the volumes before them the flag is disabled. In this case the commitflash command must be issued against volumes 50E8 and above to re-establish the consistency.
Example 28-7 Commit situation
dscli> lsflash -dev IBM.2107-75ABTV2 -l -fmt default 5000-50ff 5100-51ff 5300-5344
ID        SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy OutOfSyncTracks DateCreated                  DateSynced
============================================================================================================================================================================================
5000:5A00 50     437DABED    300     Disabled   Enabled   Enabled    Disabled   Enabled            Disabled           Disabled       15259           Fri Nov 18 09:22:09 CET 2005 Fri Nov 18 11:24:44 CET 2005
5001:5A01 50     437DABED    300     Disabled   Enabled   Enabled    Disabled   Enabled            Disabled           Disabled       15259           Fri Nov 18 09:22:09 CET 2005 Fri Nov 18 11:24:44 CET 2005
.....
50E7:5AE7 50     437DABED    300     Disabled   Enabled   Enabled    Disabled   Enabled            Disabled           Disabled       15259           Fri Nov 18 09:22:09 CET 2005 Fri Nov 18 11:24:44 CET 2005
50E8:5AE8 50     437DABED    300     Disabled   Enabled   Enabled    Enabled    Disabled           Disabled           Disabled       15259           Fri Nov 18 09:22:09 CET 2005 Fri Nov 18 11:24:44 CET 2005
50E9:5AE9 50     437DABED    300     Disabled   Enabled   Enabled    Enabled    Disabled           Disabled           Disabled       15259           Fri Nov 18 09:22:09 CET 2005 Fri Nov 18 11:24:44 CET 2005
.....
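A minimal sketch of the corresponding commit is shown below. It assumes that the revertible volumes are 50E8 and above; the exact source volume range must be taken from the lsflash output of your own environment.
dscli> commitflash -dev IBM.2107-75ABTV2 50E8-50FF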

Example 28-8 shows a revertible situation. The sequence numbers have two different values; for the lower sequence number the revertible flag is disabled, which shows that for these volumes the FlashCopy process has not yet started. The volumes with the higher sequence number have the revertible flag enabled, which means that the relationship exists but is not committed. The correct action to bring back consistency is to issue a revertflash command.
Example 28-8 Revertible situation
dscli> lsflash -dev IBM.2107-75ABTV2 -l -fmt default 5000-50ff 5100-51ff 5300-5344
ID        SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy OutOfSyncTracks DateCreated                  DateSynced
============================================================================================================================================================================================
5000:5A00 50     437DC7BD    300     Disabled   Enabled   Enabled    Enabled    Disabled           Disabled           Disabled       15259           Fri Nov 18 09:22:09 CET 2005 Fri Nov 18 13:23:24 CET 2005
5001:5A01 50     437DC7BD    300     Disabled   Enabled   Enabled    Enabled    Disabled           Disabled           Disabled       15259           Fri Nov 18 09:22:09 CET 2005 Fri Nov 18 13:23:24 CET 2005
.....
50E3:5AE3 50     437DC7BD    300     Disabled   Enabled   Enabled    Enabled    Disabled           Disabled           Disabled       15259           Fri Nov 18 09:22:09 CET 2005 Fri Nov 18 13:23:24 CET 2005
50E4:5AE4 50     437DC7BC    300     Disabled   Enabled   Enabled    Disabled   Enabled            Disabled           Disabled       15259           Fri Nov 18 09:22:09 CET 2005 Fri Nov 18 13:23:23 CET 2005
50E5:5AE5 50     437DC7BD    300     Disabled   Enabled   Enabled    Enabled    Disabled           Disabled           Disabled       15259           Fri Nov 18 09:22:09 CET 2005 Fri Nov 18 13:23:24 CET 2005
50E6:5AE6 50     437DC7BC    300     Disabled   Enabled   Enabled    Disabled   Enabled            Disabled           Disabled       15259           Fri Nov 18 09:22:09 CET 2005 Fri Nov 18 13:23:23 CET 2005
50E7:5AE7 50     437DC7BD    300     Disabled   Enabled   Enabled    Enabled    Disabled           Disabled           Disabled       15259           Fri Nov 18 09:22:09 CET 2005 Fri Nov 18 13:23:24 CET 2005
50E8:5AE8 50     437DC7BC    300     Disabled   Enabled   Enabled    Disabled   Enabled            Disabled           Disabled       15259           Fri Nov 18 09:22:09 CET 2005 Fri Nov 18 13:23:23 CET 2005
50E9:5AE9 50     437DC7BC    300     Disabled   Enabled   Enabled    Disabled   Enabled            Disabled           Disabled       15259           Fri Nov 18 09:22:09 CET 2005 Fri Nov 18 13:23:23 CET 2005
50EA:5AEA 50     437DC7BD    300     Disabled   Enabled   Enabled    Enabled    Disabled           Disabled           Disabled       15259           Fri Nov 18 09:22:09 CET 2005 Fri Nov 18 13:23:24 CET 2005
50EB:5AEB 50     437DC7BD    300     Disabled   Enabled   Enabled    Enabled    Disabled           Disabled           Disabled       15259           Fri Nov 18 09:22:09 CET 2005 Fri Nov 18 13:23:24 CET 2005
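A minimal sketch of the corresponding back-out is shown below. First list the volumes that are still revertible, then revert those source volumes; the placeholder in the second command stands for the source volume IDs reported by the first one.
dscli> lsflash -dev IBM.2107-75ABTV2 -revertible 5000-50ff 5100-51ff 5300-5344
dscli> revertflash -dev IBM.2107-75ABTV2 <revertible_source_volume_IDs>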

When the option -revertible is supplied with the lsflash command, only the revertible volumes are listed.
As a final step to provide complete and consistent data after a failover to the remote site, the data located on the FlashCopy target volumes might have to be reversed to the source volumes. After the failover, host access is gained to the secondary volumes of the Global Copy relation, which are also the FlashCopy source volumes. However, the last saved consistent data might be located on the FlashCopy target volumes. In this case the FlashCopy target volumes must be reversed to their sources, which are also the secondary Global Copy volumes. Example 28-9 shows the command for the fast reverse restore.
Note: When the application has been stopped and two consistency groups have been formed, we can assume that the data on the FlashCopy source and on the FlashCopy target are the same. In this case a fast reverse restore is not necessary.
Example 28-9 Fast reverse restore of the FlashCopy target volumes
dscli> reverseflash -dev IBM.2107-75ABTV2 -fast 6400-6403:6600-6603
CMUC00169I reverseflash: FlashCopy volume pair 6400:6600 successfully reversed.
CMUC00169I reverseflash: FlashCopy volume pair 6401:6601 successfully reversed.
CMUC00169I reverseflash: FlashCopy volume pair 6402:6602 successfully reversed.
CMUC00169I reverseflash: FlashCopy volume pair 6403:6603 successfully reversed.


28.8 Setting up an additional Global Mirror from remote site


When a planned failover of the production to the remote site has been completed, production might, depending on the reason for the failover, remain at the remote site for an extended period of time. If the intermediate site is still available, it is possible to set up an additional Global Mirror from the remote site to the intermediate site to protect the data against a possible disaster at the remote site. This requires additional volumes at the intermediate site to act as FlashCopy targets, with the existing volumes at the intermediate site as the FlashCopy source volumes.
After the failover to the remote site is completed (see 29.4, Recovery at remote site on page 484, for the details), a failback of the Global Copy is used to copy the data to the intermediate site. Finally, a session with the related volumes must be created and the Global Mirror must be started at the remote site. Figure 28-2 shows the steps to set up the additional Global Mirror.

Figure 28-2 Set up an additional Global Mirror after failover to remote site

The setup of the additional Global Mirror consists of the following steps:
1. Create or fail back the Global Copy from the remote site to the intermediate site.
2. Establish FlashCopy to the additional volumes at the intermediate site.
3. Create a session and start up the Global Mirror at the remote site.


28.8.1 Step 1: Create Global Copy from remote to intermediate site


The volumes at C (see Figure 28-2) are now a valid source for the Global Mirror, which is established from the remote site to the intermediate site. The first step is to set up the Global Copy from the remote volumes to the intermediate volumes. Example 28-10 shows how to set up the Global Copy. Because the data at the remote site is the same as at the intermediate site, the Global Copy can be established with the option -mode nocp to avoid background copy. The -cascade option has to be omitted because the Global Copy is now not a cascaded relation.
Example 28-10 Set up Global Copy from intermediate to remote site At the remote site: dscli> mkpprc -dev IBM.2107-75ABTV2 -remotedev IBM.2107-75ABTV1 -type gcp -mode nocp 6400-6403:6200-6203 xecuting command: CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 6400:6200 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 6401:6201 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 6402:6202 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 6403:6203 successfully created. dscli> dscli> lspprc -dev IBM.2107-75ABTV2 -remotedev IBM.2107-75ABTV1 -l -fullid -fmt default 6400-6403 ID State Reason Type Out Of Sync Tracks Tgt Read Src Cascade Tgt Cascade Date Suspended SourceLSS Timeout (secs) Critical Mode First Pass Status =========================================================================================== =========================================================================================== =========================== IBM.2107-75ABTV2/6400:IBM.2107-75ABTV1/6200 Copy Pending Global Copy 0 Disabled Enabled invalid IBM.2107-75ABTV2/64 unknown Disabled True IBM.2107-75ABTV2/6401:IBM.2107-75ABTV1/6201 Copy Pending Global Copy 0 Disabled Enabled invalid IBM.2107-75ABTV2/64 unknown Disabled True IBM.2107-75ABTV2/6402:IBM.2107-75ABTV1/6202 Copy Pending Global Copy 0 Disabled Enabled invalid IBM.2107-75ABTV2/64 unknown Disabled True IBM.2107-75ABTV2/6403:IBM.2107-75ABTV1/6203 Copy Pending Global Copy 0 Disabled Enabled invalid IBM.2107-75ABTV2/64 unknown Disabled True

28.8.2 Step 2: Create FlashCopy at the intermediate site


The Global Mirror at the remote site requires a FlashCopy relation at the secondary site from the volumes at B to a new set of volumes named E (see Figure 28-2 on page 472).


Example 28-11 shows how to set up the FlashCopy.


Example 28-11 Set up FlashCopy at intermediate site
dscli> mkflash -dev IBM.2107-75ABTV1 -tgtinhibit -record -persist -nocp 6200-6203:6300-6303
CMUC00137I mkflash: FlashCopy pair 6200:6300 successfully created.
CMUC00137I mkflash: FlashCopy pair 6201:6301 successfully created.
CMUC00137I mkflash: FlashCopy pair 6202:6302 successfully created.
CMUC00137I mkflash: FlashCopy pair 6203:6303 successfully created.

28.8.3 Step 3: Create session and Global Mirror at remote site


To complete the setup of the Global Mirror, a session has to be created at the remote site and the Global Copy source volumes have to be added into the session. Finally, the Global Mirror is started. Example 28-12 shows the setup of the session and the Global Mirror and how to check if the Global Mirror is running properly.
Example 28-12 Create session and start up Global Mirror
dscli> mksession -dev IBM.2107-75ABTV2 -lss 64 1
CMUC00145I mksession: Session 1 opened successfully.
dscli> chsession -dev IBM.2107-75ABTV2 -lss 64 -action add -volume 6400-6403 1
CMUC00147I chsession: Session 1 successfully modified.
dscli> lssession 64
LSS ID Session Status         Volume VolumeStatus PrimaryStatus        SecondaryStatus   FirstPassComplete AllowCascading
==========================================================================================================================
64     01      CG In Progress 6400   Active       Primary Copy Pending Secondary Simplex True              Enable
64     01      CG In Progress 6401   Active       Primary Copy Pending Secondary Simplex True              Enable
64     01      CG In Progress 6402   Active       Primary Copy Pending Secondary Simplex True              Enable
64     01      CG In Progress 6403   Active       Primary Copy Pending Secondary Simplex True              Enable
dscli> mkgmir -dev IBM.2107-75ABTV2 -lss 64 -session 1
CMUC00162I mkgmir: Global Mirror for session 1 successfully started.
dscli> showgmir 64
ID                         IBM.2107-75ABTV2/64
Master Count               1
Master Session ID          0x01
Copy State                 Running
Fatal Reason               Not Fatal
CG Interval (seconds)      0
XDC Interval(milliseconds) 50
CG Drain Time (seconds)    30
Current Time               11/14/2005 10:38:21 CET
CG Time                    11/14/2005 10:38:21 CET
Successful CG Percentage   10
FlashCopy Sequence Number  0x43785B0D
Master ID                  IBM.2107-75ABTV2
Subordinate Count          0
Master/Subordinate Assoc   -


Chapter 29. Planned recovery scenarios


In this chapter we describe planned recovery scenarios. For each scenario, we describe in detail all operations in a step-by-step approach. You can use this chapter as a cookbook for Metro/Global Mirror operations.


29.1 Overview
Planned recovery scenarios are a series of operations initiated by the user and are based on the failover/failback advanced copy function features. A storage failover always implies an impact on production. For this reason all failover operations need to be planned very carefully by the user in terms of integrity of procedures, time schedule, and availability of the applications.
In a planned failover the application has to be stopped before recovery at the intermediate site. The host is then given access to the volumes at the intermediate site and the application can be started using these volumes. A reason for a planned recovery at the intermediate site could be maintenance activities at the local site that impact production (see 28.2, General considerations for storage failover on page 464). A recovery at the intermediate site minimizes the impact on the production normally running at the local site.
In a large data center, many applications running on different servers can participate in the Metro/Global Mirror. In this case, it is most likely that failover operations will not be performed against the complete environment but rather for specific applications. The failover operations then have to be applied in a way that does not affect the copy relations of the remaining applications. This situation is accounted for in all the scenarios presented in this chapter.
All the scenarios presented here have been practically tested and represent the best practice for the situations they address. However, additional or alternate scenarios remain possible, depending on particular circumstances within your data center.
Note: If other scenarios are required, we strongly recommend that you test them extensively before they are exercised in the production environment.

29.2 Recovery at intermediate site


A recovery at the intermediate site (Figure 29-1) is indicated when the applications can be started on servers that are located at the intermediate site. The advantage of a recovery at the intermediate site is that the resynchronization between the local and the intermediate site (during the failback) is quite fast, because the bandwidth for the Metro Mirror is usually higher than the bandwidth between the intermediate and the remote site.
When the servers at the local site have connectivity to the storage at the intermediate site, it is also possible to start the applications at the local site. This makes sense when the maintenance operations at the local site only affect the storage system, but not the servers. In this case the recovery process for the application becomes simpler.
A recovery at the intermediate site is in fact a simple failover operation of the Metro Mirror. The Global Mirror remains untouched and continues to copy data to the remote site.


Figure 29-1 Recovery of production to the intermediate site

The steps (Figure 29-1) to fail over the production to the intermediate site are as follows:
1. Stop the production application at the local site.
2. Suspend the Metro Mirror.
3. Fail over the Metro Mirror to the intermediate site.
4. Start the application at the intermediate site.

29.2.1 Step 1: Stop the production application at the local site


When the production application has to move to the intermediate site the application I/O needs to be stopped. It will be restarted after failover of the volumes at the intermediate site. To prepare for the failback at a later point in time, the primary volumes must be released by the host operating system. For the AIX operating system, all volume groups containing volumes that belong to the Metro Mirror must be varied off.
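On AIX, for example, releasing the volumes might look like the following sketch; the mount point /data and the volume group name datavg are assumptions for illustration only.
umount /data
varyoffvg datavg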

29.2.2 Step 2: Suspend the Metro Mirror


Suspending the Metro Mirror is not mandatory, but is recommended prior to any failover operations. See 28.5, Suspending volumes before failover on page 467, for details. The failoverpprc command will change the status of the target (secondary) volumes but not the source (primary) volumes. A suspend right before the failover will set both the primary and the secondary volumes into the suspend state. Example 29-1 illustrates how to bring the Metro Mirror pair into the suspended state. Before the failoverpprc command is issued, the status of the volume pairs at the local and the intermediate sites will be checked.
Example 29-1 Suspend the Metro Mirror
At the local site:
dscli> pausepprc -dev IBM.2107-75ABTV1 -remotedev IBM.2107-75ABTV2 7000-7003:7200-7203


Date/Time: June 20, 2006 1:51:05 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV1 CMUC00157I pausepprc: Remote Mirror and Copy volume pair 7000:7200 relationship successfully paused. CMUC00157I pausepprc: Remote Mirror and Copy volume pair 7001:7201 relationship successfully paused. CMUC00157I pausepprc: Remote Mirror and Copy volume pair 7002:7202 relationship successfully paused. CMUC00157I pausepprc: Remote Mirror and Copy volume pair 7003:7203 relationship successfully paused. dscli> lspprc -dev IBM.2107-75ABTV1 -remotedev IBM.2107-75ABTV2 -fullid 7000-7003 Date/Time: June 20, 2006 1:52:26 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV1 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status =========================================================================================== ====================================================== IBM.2107-75ABTV1/7000:IBM.2107-75ABTV2/7200 Suspended Host Source Metro Mirror IBM.2107-75ABTV1/70 300 Disabled Invalid IBM.2107-75ABTV1/7001:IBM.2107-75ABTV2/7201 Suspended Host Source Metro Mirror IBM.2107-75ABTV1/70 300 Disabled Invalid IBM.2107-75ABTV1/7002:IBM.2107-75ABTV2/7202 Suspended Host Source Metro Mirror IBM.2107-75ABTV1/70 300 Disabled Invalid IBM.2107-75ABTV1/7003:IBM.2107-75ABTV2/7203 Suspended Host Source Metro Mirror IBM.2107-75ABTV1/70 300 Disabled Invalid At the intermediate site: dscli> lspprc -dev IBM.2107-75ABTV2 -remotedev IBM.2107-75BYGT1 -fullid 7200-7203 Date/Time: June 20, 2006 1:53:42 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV2 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status =========================================================================================== =============================================================== IBM.2107-75ABTV1/7000:IBM.2107-75ABTV2/7200 Target Suspended Update Target Metro Mirror IBM.2107-75ABTV1/70 unknown Disabled Invalid IBM.2107-75ABTV1/7001:IBM.2107-75ABTV2/7201 Target Suspended Update Target Metro Mirror IBM.2107-75ABTV1/70 unknown Disabled Invalid IBM.2107-75ABTV1/7002:IBM.2107-75ABTV2/7202 Target Suspended Update Target Metro Mirror IBM.2107-75ABTV1/70 unknown Disabled Invalid IBM.2107-75ABTV1/7003:IBM.2107-75ABTV2/7203 Target Suspended Update Target Metro Mirror IBM.2107-75ABTV1/70 unknown Disabled Invalid IBM.2107-75ABTV2/7200:IBM.2107-75BYGT1/7400 Copy Pending Global Copy IBM.2107-75ABTV2/72 unknown Disabled True IBM.2107-75ABTV2/7201:IBM.2107-75BYGT1/7401 Copy Pending Global Copy IBM.2107-75ABTV2/72 unknown Disabled True IBM.2107-75ABTV2/7202:IBM.2107-75BYGT1/7402 Copy Pending Global Copy IBM.2107-75ABTV2/72 unknown Disabled True IBM.2107-75ABTV2/7203:IBM.2107-75BYGT1/7403 Copy Pending Global Copy IBM.2107-75ABTV2/72 unknown Disabled True

29.2.3 Step 3: Failover the intermediate site


After the Metro Mirror has been suspended a failover to the secondary volumes can be issued. When the failoverpprc command has successfully completed, the volumes at the intermediate site can be accessed by the host.


Note: To fail over to the intermediate site you have to specify the secondary volumes as the source and the primary volumes as the target in the failoverpprc command.
Example 29-2 shows how to fail over to the intermediate site using the failoverpprc command.
Example 29-2 Failoverpprc at the intermediate site dscli> failoverpprc -dev IBM.2107-75ABTV2 -remotedev IBM.2107-75ABTV1 -type mmir 7200-7203:7000-7003 Date/Time: June 20, 2006 1:55:48 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV2 CMUC00196I failoverpprc: Remote Mirror and Copy pair 7200:7000 successfully reversed. CMUC00196I failoverpprc: Remote Mirror and Copy pair 7201:7001 successfully reversed. CMUC00196I failoverpprc: Remote Mirror and Copy pair 7202:7002 successfully reversed. CMUC00196I failoverpprc: Remote Mirror and Copy pair 7203:7003 successfully reversed. dscli> lspprc -dev IBM.2107-75ABTV2 -remotedev IBM.2107-75BYGT1 -fullid 7200-7203 Date/Time: June 20, 2006 1:56:40 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV2 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status =========================================================================================== ============================================================= IBM.2107-75ABTV1/7000:IBM.2107-75ABTV2/7200 Target Suspended Host Target Metro Mirror IBM.2107-75ABTV1/70 unknown Disabled Invalid IBM.2107-75ABTV1/7001:IBM.2107-75ABTV2/7201 Target Suspended Host Target Metro Mirror IBM.2107-75ABTV1/70 unknown Disabled Invalid IBM.2107-75ABTV1/7002:IBM.2107-75ABTV2/7202 Target Suspended Host Target Metro Mirror IBM.2107-75ABTV1/70 unknown Disabled Invalid IBM.2107-75ABTV1/7003:IBM.2107-75ABTV2/7203 Target Suspended Host Target Metro Mirror IBM.2107-75ABTV1/70 unknown Disabled Invalid IBM.2107-75ABTV2/7200:IBM.2107-75BYGT1/7400 Copy Pending Global Copy IBM.2107-75ABTV2/72 unknown Disabled True IBM.2107-75ABTV2/7201:IBM.2107-75BYGT1/7401 Copy Pending Global Copy IBM.2107-75ABTV2/72 unknown Disabled True IBM.2107-75ABTV2/7202:IBM.2107-75BYGT1/7402 Copy Pending Global Copy IBM.2107-75ABTV2/72 unknown Disabled True IBM.2107-75ABTV2/7203:IBM.2107-75BYGT1/7403 Copy Pending Global Copy IBM.2107-75ABTV2/72 unknown Disabled True

Note that after a failover, the status of the Metro Mirror secondary volumes used in a Metro/Global Mirror is different from the status after a failover in a conventional Metro Mirror. In a conventional Metro Mirror the secondary volumes will be in the status Suspended with the reason Host Source. Because in a Metro/Global Mirror the Metro Mirror secondary volumes are still cascaded to the remote volumes, the status of these volumes will not be Suspended Host Source. In the current implementation the status of the secondary volumes is displayed as Target Suspended with the reason Host Target.
Restriction: In an MGM configuration, once you have effectively done the failover at the intermediate site (failover from B to A) while production was still running on A, you will not be able to fail back from A to B and resynchronize the B volumes with the A volumes.


29.2.4 Step 4: Start the production application at the intermediate site


After you have varied on the volume groups and mounted the file systems (in the case of the AIX operating system), the applications can be started.
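For AIX, for example, this might look like the following sketch; the volume group name datavg, the hdisk number, and the mount point are assumptions, and the importvg step is only needed if the volume group is not yet known to the host at the intermediate site.
importvg -y datavg hdisk4
varyonvg datavg
mount /data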

29.3 Return to local from intermediate site


In this section we describe how to move the production back to the local site after a transition to the intermediate site. It applies to the scenario described in 29.2, Recovery at intermediate site on page 476, or to the situation where production was taken over to the intermediate site after a failure at the local site.
The return from the intermediate site to the local site requires that the Global Copy be suspended, because the intermediate volumes cannot be the source for the remote site and for the local site at the same time. This leaves two possibilities for stopping the applications at the intermediate site:
1. When the data at the local site is too old because production was kept at the intermediate site for a relatively long time, it is safer to rely on the data in the Global Mirror. The Global Mirror was always active, and the most current consistent data is at the remote site. In case of a problem during the failback, the data can be recovered from the Global Mirror. For this approach the applications should be stopped before the failback is issued. The applications can be started at the local site when the volume status of the Metro Mirror is Full Duplex.
2. If the application downtime must be as short as possible, the applications can be stopped after the failback to the local site, when all volumes are in Full Duplex. This, however, requires terminating the Global Copy. During the time when the intermediate volumes are synchronized with the local volumes, the applications have no valid copy.
In Figure 29-2 we illustrate the steps to move back to the local site, whereby the application is stopped first, to ensure that there is always a valid copy of the data available.

Figure 29-2 Return from intermediate site to local site


In the following sections, we describe the main steps shown in Figure 29-2:
1. Stop I/O at the intermediate site.
2. Terminate Global Mirror or remove volumes from the Global Mirror session.
3. Suspend Global Copy.
4. Fail back Metro Mirror to the local site and wait for Full Duplex volume status.
5. Suspend Metro Mirror.
6. Fail over to the local site.
7. Fail back Metro Mirror from the local site to the intermediate site.
8. Resume Global Copy.
9. Start I/O at the local site.
10. Start Global Mirror or add volumes to the session.

29.3.1 Step 1: Stop I/O at the intermediate site


The Global Copy has to be suspended to enable the failback from the intermediate volumes to the local volumes. In order to keep a valid and consistent copy of the data at the remote site, the applications running at the intermediate site need to be stopped at this point. The volumes must also be released by the host to enable the failback.
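On an AIX host, for example, releasing the volumes might look like the following sketch; the file system and volume group names (/data, datavg) are assumptions for this example.

<stop the application>
umount /data
varyoffvg datavg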

29.3.2 Step 2: Terminate Global Mirror or remove volumes from the session
Typically, Metro/Global Mirror is deployed in an environment with more than one host connected to the Metro/Global Mirror storage. Terminate the Global Mirror session if all hosts are failed over to the intermediate site. If not all hosts are failed over to the intermediate site, the Global Mirror should be kept up and running during the failback to the local site to ensure that the data for those hosts that remain at the local site is still copied in a consistent way. Prior to the failback, the volumes belonging to the hosts that will fail back have to be removed from the session. When this is done, the Global Copy for these volumes can be suspended. See Example 29-3.
Example 29-3 Remove the volumes from the session and pause the Global Copy dscli> chsession -dev IBM.2107-75ABTV2 -lss 72 -action remove -volume 7200-7203 1 Date/Time: June 20, 2006 2:25:12 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV2 CMUC00147I chsession: Session 1 successfully modified.
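If, instead, all hosts have been failed over and the complete Global Mirror session is to be terminated, the rmgmir DS CLI command can be used. The following is only a sketch, assuming the same master LSS and session number as the surrounding examples; verify the exact parameters (for example, any subordinate control paths) against the DS CLI reference for your configuration.

dscli> rmgmir -quiet -dev IBM.2107-75ABTV2 -lss 72 -session 1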

29.3.3 Step 3: Suspend Global Copy


Before the failback of the Metro Mirror from the intermediate site to the local site can be accomplished, the status of the volumes at the intermediate site needs to be Suspended Host Source. This implies that the Global Copy between the intermediate and the remote site must be suspended. Example 29-4 shows how to suspend the Global Copy.
Example 29-4 Suspend Global Copy
dscli> pausepprc -dev IBM.2107-75ABTV2 -remotedev IBM.2107-75BYGT1 7200-7203:7400-7403 Date/Time: June 20, 2006 2:27:01 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV2 CMUC00157I pausepprc: Remote Mirror and Copy volume pair 7200:7400 relationship successfully paused.


CMUC00157I pausepprc: Remote Mirror and Copy volume pair 7201:7401 relationship successfully paused. CMUC00157I pausepprc: Remote Mirror and Copy volume pair 7202:7402 relationship successfully paused. CMUC00157I pausepprc: Remote Mirror and Copy volume pair 7203:7403 relationship successfully paused.

29.3.4 Step 4: Fail back Metro Mirror to local site and wait for Full Duplex
Now the failback to the local site can be started. The failbackpprc command is executed at the intermediate site, as shown in Example 29-5. Note that the secondary volumes are specified as the source volumes and the primary volumes as the target volumes. Also, check that the Metro Mirror volumes are in the Full Duplex mode before proceeding to the next step.
Example 29-5 Fail back to the local site and check the status dscli> failbackpprc -dev IBM.2107-75ABTV2 -remotedev IBM.2107-75ABTV1 -type mmir 7200-7203:7000-7003 Date/Time: June 20, 2006 2:29:43 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV2 CMUC00197I failbackpprc: Remote Mirror and Copy pair 7200:7000 successfully failed back. CMUC00197I failbackpprc: Remote Mirror and Copy pair 7201:7001 successfully failed back. CMUC00197I failbackpprc: Remote Mirror and Copy pair 7202:7002 successfully failed back. CMUC00197I failbackpprc: Remote Mirror and Copy pair 7203:7003 successfully failed back. dscli> lspprc -dev IBM.2107-75ABTV2 -remotedev IBM.2107-75ABTV1 -fullid -fmt default 7200-7203 Date/Time: June 20, 2006 2:30:53 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV2 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status =========================================================================================== =================================================== IBM.2107-75ABTV2/7200:IBM.2107-75ABTV1/7000 Full Duplex Metro Mirror IBM.2107-75ABTV2/72 300 Disabled Invalid IBM.2107-75ABTV2/7201:IBM.2107-75ABTV1/7001 Full Duplex Metro Mirror IBM.2107-75ABTV2/72 300 Disabled Invalid IBM.2107-75ABTV2/7202:IBM.2107-75ABTV1/7002 Full Duplex Metro Mirror IBM.2107-75ABTV2/72 300 Disabled Invalid IBM.2107-75ABTV2/7203:IBM.2107-75ABTV1/7003 Full Duplex Metro Mirror IBM.2107-75ABTV2/72 300 Disabled Invalid

29.3.5 Steps 5 and 6: Suspend Metro Mirror and fail over to the local site
When all the volumes at the local site are synchronized with the intermediate site, the Metro Mirror must be failed over to the local site before the applications can be started. In Example 29-6, prior to the failover, the Metro Mirror is suspended with a pausepprc command according to the recommendation in 28.5, Suspending volumes before failover on page 467. At this point the I/O to the volumes has been stopped, and the direction of the mirror will be changed with the next failback. Note: To fail over to the local site, you have to specify the secondary volumes as the source and the primary volumes as targets in the failoverpprc command.
Example 29-6 Suspend and fail over to the local site dscli> pausepprc -dev IBM.2107-75ABTV2 -remotedev IBM.2107-75ABTV1 7200-7203:7000-7003 Date/Time: June 20, 2006 2:32:58 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV2


CMUC00157I pausepprc: Remote Mirror and Copy volume pair 7200:7000 relationship successfully paused. CMUC00157I pausepprc: Remote Mirror and Copy volume pair 7201:7001 relationship successfully paused. CMUC00157I pausepprc: Remote Mirror and Copy volume pair 7202:7002 relationship successfully paused. CMUC00157I pausepprc: Remote Mirror and Copy volume pair 7203:7003 relationship successfully paused. dscli> failoverpprc -dev IBM.2107-75ABTV1 -remotedev IBM.2107-75ABTV2 -type mmir 7000-7003:7200-7203 Date/Time: June 20, 2006 2:36:04 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV1 CMUC00196I failoverpprc: Remote Mirror and Copy pair 7000:7200 successfully reversed. CMUC00196I failoverpprc: Remote Mirror and Copy pair 7001:7201 successfully reversed. CMUC00196I failoverpprc: Remote Mirror and Copy pair 7002:7202 successfully reversed. CMUC00196I failoverpprc: Remote Mirror and Copy pair 7003:7203 successfully reversed.

29.3.6 Step 7: Fail back Metro Mirror from the local site to the intermediate site
The failback from the local to the intermediate site triggers the Metro Mirror to copy the tracks that are out-of-sync immediately after I/O to the primary volumes has been started. Example 29-7 shows the failback to the intermediate site. This command is executed at the local site.
Example 29-7 Fail back from the local to the intermediate site
dscli> failbackpprc -dev IBM.2107-75ABTV1 -remotedev IBM.2107-75ABTV2 -type mmir 7000-7003:7200-7203
Date/Time: June 20, 2006 2:37:09 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV1
CMUC00197I failbackpprc: Remote Mirror and Copy pair 7000:7200 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 7001:7201 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 7002:7202 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 7003:7203 successfully failed back.

29.3.7 Step 8: Resume Global Copy


Because the Global Copy was suspended (see 29.3.3, Step 3: Suspend Global Copy on page 481) the Global Copy must now be resumed with the resumepprc command, which is shown in Example 29-8.
Example 29-8 Resume the Global Copy dscli> resumepprc -dev IBM.2107-75ABTV2 -remotedev IBM.2107-75BYGT1 -type gcp -cascade 7200-7203:7400-7403 Date/Time: June 20, 2006 2:38:34 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV2 CMUC00158I resumepprc: Remote Mirror and Copy volume pair 7200:7400 relationship successfully resumed. This message is being returned before the copy completes. CMUC00158I resumepprc: Remote Mirror and Copy volume pair 7201:7401 relationship successfully resumed. This message is being returned before the copy completes. CMUC00158I resumepprc: Remote Mirror and Copy volume pair 7202:7402 relationship successfully resumed. This message is being returned before the copy completes. CMUC00158I resumepprc: Remote Mirror and Copy volume pair 7203:7403 relationship successfully resumed. This message is being returned before the copy completes.


29.3.8 Step 9: Start I/O at the local site


Now that the Metro Mirror has been reversed, the applications can be started at the local site. The Metro Mirror is ready to copy tracks. Because the Global Mirror is not yet forming Consistency Groups and the volumes still need to be added to the session, we recommend that the remaining steps be completed promptly, so that the Global Copy does not accumulate too many out-of-sync tracks while production is running. This also ensures that Consistency Groups are formed as soon as possible.

29.3.9 Step 10: Start Global Mirror or add volumes to the session
In order to form Consistency Groups, start the Global Mirror session if it was terminated (see 29.3.2, Step 2: Terminate Global Mirror or remove volumes from the session on page 481) or add the volumes to the session again if the Global Mirror is still up and running (as would be the case when other applications that are not the subject of the failover/failback scenarios are present and need to have the Global Mirror kept in operation). The volumes are added to the session with the chsession command using the option -action add, as shown in Example 29-9. When done, the volume status will be Join Pending as long as the first pass copy of the Global Copy has not been finished. Use the lssession command to query the status of the volumes in an LSS.
Example 29-9 Add volumes to the session and check the session dscli> chsession -dev IBM.2107-75ABTV2 -lss 72 -action add -volume 7200-7203 1 Date/Time: June 20, 2006 2:41:38 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV2 CMUC00147I chsession: Session 1 successfully modified. dscli> lssession -dev IBM.2107-75ABTV2 -fmt default 72 Date/Time: June 20, 2006 2:42:18 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV2 LSS ID Session Status Volume VolumeStatus PrimaryStatus SecondaryStatus FirstPassComplete AllowCascading =========================================================================================== ================================== 72 01 CG In Progress 7200 Active Primary Copy Pending Secondary Full Duplex True Enable 72 01 CG In Progress 7201 Active Primary Copy Pending Secondary Full Duplex True Enable 72 01 CG In Progress 7202 Active Primary Copy Pending Secondary Full Duplex True Enable 72 01 CG In Progress 7203 Active Primary Copy Pending Secondary Full Duplex True Enable

29.4 Recovery at remote site


As described in 29.2, Recovery at intermediate site on page 476, one possible reason for a recovery of the production is to minimize impact to the production at the local site, for example, in case of maintenance at the local site. The recovery at the remote site is indicated when no servers are available at the intermediate site.


Figure 29-3 illustrates the steps it takes to recover to the remote site. This scenario includes the setup of a Global Mirror from the remote site to the intermediate site in order to be prepared for keeping the production at the remote site for an extended period of time. Therefore the data will be replicated to the intermediate site.

Figure 29-3 Recovery of production to remote site

When the application will remain at the remote site for an extended period of time and the intermediate site is still available, it is possible to create an additional Global Mirror from the remote site to the intermediate site in order to protect the data against a possible disaster at the remote site. See 28.8, Setting up an additional Global Mirror from remote site on page 472, for the details.
The recovery of the production to the remote site (Figure 29-3) consists of these steps:
1. Stop I/O at the local site.
2. Terminate Global Mirror or remove volumes from the session.
3. Terminate Global Copy.
4. Suspend Metro Mirror.
5. Fail over Metro Mirror to the intermediate site.
6. Establish Global Copy from the remote site to the intermediate site.
7. Start I/O at the remote site.

29.4.1 Step 1: Stop I/O at the local site


Before any failover can happen, the I/O to the primary volumes must be stopped. For this purpose, applications running on hosts that are the subject of the failover should be stopped immediately. Data must be identical at the local, intermediate, and remote sites. During the whole scenario, make sure that data is not changing at the local site. The volumes must also be released by the host to enable the failbackpprc of the Metro Mirror when production returns to the local site (see 29.5.1, Step 1: Stop I/O at remote site on page 490).


29.4.2 Step 2: Terminate Global Mirror or remove volumes from session


As described in 29.2, Recovery at intermediate site on page 476, terminate the Global Mirror if all applications at the local site are to be recovered at the remote site. If not all applications from the local site will be transferred to the remote site, the volumes used by the transferred applications must be removed from the Global Mirror session to ensure that Consistency Groups continue to be formed for the applications that remain at the local site. Example 29-10 shows how to remove volumes from the session.
Example 29-10 Remove volumes from the session dscli> pausegmir -dev IBM.2107-75ABTV2 -session 1 -lss 72 Date/Time: June 21, 2006 10:34:41 AM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV2 CMUC00163I pausegmir: Global Mirror for session 1 successfully paused. dscli> chsession -dev IBM.2107-75ABTV2 -lss 72 -action remove -volume 7200-7203 1 Date/Time: June 21, 2006 10:35:43 AM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV2 CMUC00147I chsession: Session 1 successfully modified. dscli> resumegmir -dev IBM.2107-75ABTV2 -session 1 -lss 72 Date/Time: June 21, 2006 10:36:15 AM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV2 CMUC00164I resumegmir: Global Mirror for session 1 successfully resumed.

29.4.3 Step 3: Terminate Global Copy


To prepare for the return to the local site, and in case the intermediate site is still available, the Global Copy will be reversed from the remote site to the intermediate site. Because of the cascading status of the intermediate volumes, a reverse of the Global Copy using failover and failback would result in a situation where the intermediate volumes are target volumes for the Metro Mirror and the Global Copy at the same time. Since this is not supported, the Global Copy must be terminated in this step. Later in this scenario (see 29.4.6, Step 6: Establish Global Copy from remote to intermediate site on page 488) the Global Copy is recreated in no-copy mode between the remote site and the intermediate site. Attention: Between the current step (step 3) and up through step 6, make sure that all I/O to the volumes is suspended. If not, data can be corrupted due to the no-copy option, which would then require a full copy. Example 29-11 shows how to remove the Global Copy. Important: Before the Global Copy is removed, check that all tracks were copied to the secondary site. Use the lspprc -l command to obtain the number of out-of-sync tracks.
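A check of the remaining out-of-sync tracks might look like the following sketch, which reuses the same volume ranges as the other examples in this chapter; the out-of-sync track count should be 0 for every pair before the Global Copy is removed.

dscli> lspprc -l -dev IBM.2107-75ABTV2 -remotedev IBM.2107-75BYGT1 7200-7203:7400-7403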

Example 29-11 Remove Global Copy dscli> rmpprc -quiet -dev IBM.2107-75ABTV2 -remotedev IBM.2107-75BYGT1 7200-7203:7400-7403 Date/Time: June 21, 2006 10:38:25 AM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV2 CMUC00155I rmpprc: Remote Mirror and Copy volume pair 7200:7400 relationship successfully withdrawn. CMUC00155I rmpprc: Remote Mirror and Copy volume pair 7201:7401 relationship successfully withdrawn.


CMUC00155I rmpprc: Remote Mirror and Copy volume pair 7202:7402 relationship successfully withdrawn. CMUC00155I rmpprc: Remote Mirror and Copy volume pair 7203:7403 relationship successfully withdrawn.

29.4.4 Step 4: Suspend Metro Mirror


It is not necessary to suspend the Metro Mirror before the failover, because when production is later returned to the local site, the failback will change the status of the secondary volumes to Host Source Suspended. The status of the primary volumes will remain unchanged. However, to be prepared for the failback that will be issued later, it is good practice to suspend the Metro Mirror and bring the status of both the primary and the secondary volumes into the suspended state before the failover. Important: Before the Metro Mirror is suspended, check that all tracks were copied to the intermediate site. Use the lspprc -l command for obtaining the out-of-sync tracks.

Note: If the links between the local and the intermediate site have failed, suspending the Metro Mirror is not applicable, because the pair status has already changed to Suspended. Example 29-1 on page 477 illustrates how to bring the Metro Mirror pair into the suspended state. Before the failoverpprc command is issued, the status of the volume pairs at the local and the intermediate site must be checked.
Example 29-12 Suspend the Metro Mirror At local site: dscli> pausepprc -dev IBM.2107-75ABTV1 -remotedev IBM.2107-75ABTV2 7000-7003:7200-7203 Date/Time: June 21, 2006 10:41:37 AM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV1 CMUC00157I pausepprc: Remote Mirror and Copy volume pair 7000:7200 relationship successfully paused. CMUC00157I pausepprc: Remote Mirror and Copy volume pair 7001:7201 relationship successfully paused. CMUC00157I pausepprc: Remote Mirror and Copy volume pair 7002:7202 relationship successfully paused. CMUC00157I pausepprc: Remote Mirror and Copy volume pair 7003:7203 relationship successfully paused.

29.4.5 Step 5: Fail over Metro Mirror to intermediate site


The volumes at the intermediate site will become secondary volumes for the Global Copy, which will subsequently be set up between the remote and intermediate sites. Therefore the Metro Mirror must be failed over to the intermediate site. The Metro Mirror failover can also be seen as a preparation for the failback to the local site. The reversed Metro Mirror will act as a cascade from the reversed Global Copy between the remote and intermediate sites. For this reason, the Metro Mirror secondary volumes failover is issued with the -cascade option. Also, to avoid extended response times at the host side, a cascaded copy relation should not be set up as a synchronous relation. Thus, the failoverpprc command for the Metro Mirror will be executed with the option -type gcp.


Example 29-13 shows the failover of the Metro Mirror. Note: To fail over to the intermediate site you must specify the secondary volumes as the source and the primary volumes as targets in the failoverpprc command.
Example 29-13 Failover of the Metro Mirror
At intermediate site:
dscli> failoverpprc -dev IBM.2107-75ABTV2 -remotedev IBM.2107-75ABTV1 -type gcp -cascade 7200-7203:7000-7003
Date/Time: June 21, 2006 10:43:50 AM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV2
CMUC00196I failoverpprc: Remote Mirror and Copy pair 7200:7000 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 7201:7001 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 7202:7002 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 7203:7003 successfully reversed.

29.4.6 Step 6: Establish Global Copy from remote to intermediate site


In anticipation of returning production to the local site, and if the volumes at the intermediate site and the links are still available, it is now possible to establish the Global Copy from the remote to the intermediate site. If additional volumes can be provided at the intermediate site for a new FlashCopy relation to the intermediate volumes, it is possible to set up an additional Global Mirror. Assuming that production might remain at the remote site for a longer period of time, this approach would provide consistent data at the intermediate site and protect production against a possible disaster at the remote site. See 28.8, Setting up an additional Global Mirror from remote site on page 472, for details about how to set it up. Example 29-14 shows how to establish the Global Copy. The -cascade option has to be omitted because the Global Copy is no longer a cascaded relation. Since the volumes at the remote and intermediate sites are identical, use the -mode nocp option in order to avoid a full copy.
Example 29-14 Establish Global Copy to the intermediate site dscli> mkpprc -dev IBM.2107-75BYGT1 -remotedev IBM.2107-75ABTV2 -type gcp -mode nocp 7400-7403:7200-7203 Date/Time: June 21, 2006 10:49:53 AM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75BYGT1 CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 7400:7200 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 7401:7201 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 7402:7202 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 7403:7203 successfully created.

29.4.7 Step 7: Start I/O at the remote site


At this point all necessary actions to recover the production application from the local to the remote site have been successfully completed. The application can now be started against the volumes at the remote site (see Figure 29-3 on page 485).

29.5 Return from remote site


The scenario described in this section includes the return from the remote site after a planned recovery, which was discussed in 29.4, Recovery at remote site on page 484. This scenario also applies after a recovery at the remote site has taken place due to a failure situation at the local site. If the recovery at the remote site has been accomplished because of a failure situation at the local site, the return to the local site can only be accomplished when all required resources are available again. As part of the recovery at the remote site scenario a Global Copy was established from the remote to the intermediate site to ensure that the data was copied to the intermediate site while production was running at the remote site. If this step had been omitted it must be applied now. Ensure that all data has been drained to the intermediate site and then to the local site before the applications are started again at the local site. If an additional Global Mirror has been set up as described in 28.8, Setting up an additional Global Mirror from remote site on page 472, then it has to be removed before you proceed to perform the steps to return to the local site. In Figure 29-4 we illustrate the steps to return to the local site.

Figure 29-4 Return from the remote to the local site

The steps to return production from remote to local are as follows:
1. Stop I/O at the remote site.
2. Fail back Metro Mirror from the intermediate site to the local site and wait until pairs are Full Duplex.
3. Terminate Global Copy from the remote site to the intermediate site.
4. Suspend Metro Mirror.
5. Fail over to the local site.
6. Fail back Metro Mirror from the local site to the intermediate site.
7. Establish Global Copy from the intermediate site to the remote site.
8. Start I/O at the local site.
9. Start Global Mirror or add volumes to the session.

29.5.1 Step 1: Stop I/O at remote site


To return the production back to the local site, the applications have to be stopped at a certain point in time. It is better to stop the I/O to the remote volumes right now. This guarantees that the data will remain identical in each location while performing this scenario.

29.5.2 Step 2: Fail back Metro Mirror from the intermediate site to the local site
The data of the applications running at the remote site has been replicated because a Global Copy was established during the failover procedure (see 29.4.6, Step 6: Establish Global Copy from remote to intermediate site on page 488). When the local site becomes available again, the Metro Mirror can be failed back. The Metro Mirror failover described in 29.4.5, Step 5: Fail over Metro Mirror to intermediate site on page 487, must be issued with the option -cascade, since with the failback to the local site the Metro Mirror is now cascaded again to the Global Copy. This also implies that a cascaded copy relation to the local site must not be a synchronous relation. This is realized with the option -type gcp, which turns the Metro Mirror into Global Copy mode. If the failover was not executed with these two options during the failover to the remote site scenario, these options can be supplied now with the failbackpprc command. In Example 29-15 the failbackpprc command is issued with the options -cascade and -type gcp. It is executed at the intermediate site. Before taking any further action, make sure that all data was replicated to the local site. Check this with the lspprc command.
Example 29-15 Fail back Metro Mirror to local site dscli> failbackpprc -dev IBM.2107-75ABTV2 -remotedev IBM.2107-75ABTV1 -type gcp -cascade 7200-7203:7000-7003 Date/Time: June 21, 2006 11:01:24 AM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV2 CMUC00197I failbackpprc: Remote Mirror and Copy pair 7200:7000 successfully failed back. CMUC00197I failbackpprc: Remote Mirror and Copy pair 7201:7001 successfully failed back. CMUC00197I failbackpprc: Remote Mirror and Copy pair 7202:7002 successfully failed back. CMUC00197I failbackpprc: Remote Mirror and Copy pair 7203:7003 successfully failed back. dscli> lspprc -dev IBM.2107-75ABTV2 -remotedev IBM.2107-75ABTV1 7200-7203:7000-7003 Date/Time: June 21, 2006 11:03:07 AM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV2 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status =========================================================================================== ======= 7200:7000 Copy Pending Global Copy 72 300 Disabled True 7201:7001 Copy Pending Global Copy 72 300 Disabled True 7202:7002 Copy Pending Global Copy 72 300 Disabled True 7203:7003 Copy Pending Global Copy 72 300 Disabled True


29.5.3 Step 3: Terminate Global Copy from remote to intermediate site


Reversing the Global Copy using the failover and failback functions would result in having the intermediate volumes as sources for the reversed Metro Mirror and the Global Copy at the same time, which is not permitted. For this reason the Global Copy is terminated in this step. In a later step (see 29.5.7, Step 7: Create Global Copy from intermediate to remote site on page 492) the Global Copy is recreated from the intermediate site to the remote site in no-copy mode. Attention: Make sure that between this step and step 6, no I/O happens to the volumes. Otherwise data can be corrupted due to the no-copy option, which would then require a full copy. Example 29-16 shows the DS CLI command to remove the Global Copy. Important: Before the Global Copy is removed, check whether all tracks were copied to the secondary site. Use the lspprc -l command to verify that there are no remaining out-of-sync tracks.
Example 29-16 Remove the Global Copy from remote to intermediate site dscli> rmpprc -quiet -dev IBM.2107-75BYGT1 -remotedev IBM.2107-75ABTV2 7400-7403:7200-7203 Date/Time: June 21, 2006 12:44:33 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75BYGT1 CMUC00155I rmpprc: Remote Mirror and Copy volume pair 7400:7200 relationship successfully withdrawn. CMUC00155I rmpprc: Remote Mirror and Copy volume pair 7401:7201 relationship successfully withdrawn. CMUC00155I rmpprc: Remote Mirror and Copy volume pair 7402:7202 relationship successfully withdrawn. CMUC00155I rmpprc: Remote Mirror and Copy volume pair 7403:7203 relationship successfully withdrawn.

29.5.4 Step 4: Suspend Metro Mirror


It is not necessary to suspend the Metro Mirror before the failover, because when production is shifted to the local site, the failback will change the status of the secondary volumes to Host Source Suspended. The status of the primary volumes will remain unchanged. However, in preparation for the failback, it is good practice to suspend the Metro Mirror and bring the status of both the primary and the secondary volumes to suspended prior to the failover.
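If you choose to suspend the pairs at this point, the command might look like the following sketch, issued against the intermediate storage image with the same volume ranges as the earlier examples in this chapter.

dscli> pausepprc -dev IBM.2107-75ABTV2 -remotedev IBM.2107-75ABTV1 7200-7203:7000-7003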

29.5.5 Step 5: Fail over to local site


Before the failover at the local site, ensure that all tracks were transmitted to the local site. The lspprc command issued at the intermediate site should show that there are no more out-of-sync tracks. Example 29-17 shows how to fail over to the local site. The command is executed at the local site. Note: To fail over to the local site, you must specify the secondary volumes as the source and the primary volumes as targets in the failoverpprc command.


Example 29-17 Failover at the local site
dscli> failoverpprc -dev IBM.2107-75ABTV1 -remotedev IBM.2107-75ABTV2 -type mmir 7000-7003:7200-7203
Date/Time: June 21, 2006 12:47:59 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV1
CMUC00196I failoverpprc: Remote Mirror and Copy pair 7000:7200 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 7001:7201 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 7002:7202 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 7003:7203 successfully reversed.

29.5.6 Step 6: Fail back Metro Mirror from the local site to the intermediate site
The failback enables tracks to be copied from the local to the intermediate site. The Metro Mirror is no longer in a cascaded relation, so the copy type is now mmir again. Because the applications have not yet been started, the contents of the volumes at the local and the intermediate sites are the same, and the status of the Metro Mirror should be Full Duplex. Example 29-18 shows the failback to the intermediate site, executed at the local site.
Example 29-18 Fail back the Metro Mirror from the local site to the intermediate site dscli> failbackpprc -dev IBM.2107-75ABTV1 -remotedev IBM.2107-75ABTV2 -type mmir 7000-7003:7200-7203 Date/Time: June 21, 2006 12:49:33 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV1 CMUC00197I failbackpprc: Remote Mirror and Copy pair 7000:7200 successfully failed back. CMUC00197I failbackpprc: Remote Mirror and Copy pair 7001:7201 successfully failed back. CMUC00197I failbackpprc: Remote Mirror and Copy pair 7002:7202 successfully failed back. CMUC00197I failbackpprc: Remote Mirror and Copy pair 7003:7203 successfully failed back. dscli> lspprc -dev IBM.2107-75ABTV1 -remotedev IBM.2107-75ABTV2 -fullid -fmt default 7000-7003 Date/Time: June 21, 2006 12:50:59 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV1 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status =========================================================================================== =================================================== IBM.2107-75ABTV1/7000:IBM.2107-75ABTV2/7200 Full Duplex Metro Mirror IBM.2107-75ABTV1/70 300 Disabled Invalid IBM.2107-75ABTV1/7001:IBM.2107-75ABTV2/7201 Full Duplex Metro Mirror IBM.2107-75ABTV1/70 300 Disabled Invalid IBM.2107-75ABTV1/7002:IBM.2107-75ABTV2/7202 Full Duplex Metro Mirror IBM.2107-75ABTV1/70 300 Disabled Invalid IBM.2107-75ABTV1/7003:IBM.2107-75ABTV2/7203 Full Duplex Metro Mirror IBM.2107-75ABTV1/70 300 Disabled Invalid

29.5.7 Step 7: Create Global Copy from intermediate to remote site


After the Metro Mirror has been prepared for production at the local site, the Global Copy has to be created from the intermediate site to the remote site with the -cascade option. Since the volumes at the remote and intermediate sites are identical, the Global Copy should be established with the -mode nocp option (NO COPY). Important: Be sure that there is no active I/O to the volumes between step 3 and step 7. Otherwise data can be corrupted due to the nocopy option, which would then require a full copy.


Example 29-19 shows the DS CLI command to create the Global Copy.
Example 29-19 Create Global Copy from the intermediate site to the remote site dscli> mkpprc -dev IBM.2107-75ABTV2 -remotedev IBM.2107-75BYGT1 -type gcp -mode nocp -cascade 7200-7203:7400-7403 Date/Time: June 21, 2006 12:53:07 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV2 CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 7200:7400 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 7201:7401 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 7202:7402 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 7203:7403 successfully created.

29.5.8 Step 8: Start I/O


Now that the Metro Mirror and Global Copy are fully re-established from the local to the intermediate to the remote site, the application can be started at the local site.

29.5.9 Step 9: Start Global Mirror or add volumes to the session


Start the Global Mirror if it was stopped in the corresponding recovery scenario described in 29.4.2, Step 2: Terminate Global Mirror or remove volumes from session on page 486. Or add the volumes to the session again if the Global Mirror is still up and running. This is the case whenever other applications that are not the subject of the failover/failback scenarios need the Global Mirror to remain in operation. Example 29-20 shows how to add the volumes to the session and how to monitor the session. Because the applications have not been started, the volumes should go immediately into the active state.
Example 29-20 Add volumes to the session and check them dscli> chsession -dev IBM.2107-75ABTV2 -lss 72 -action add -volume 7200-7203 1 Date/Time: June 21, 2006 12:57:54 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV2 CMUC00147I chsession: Session 1 successfully modified. dscli> lssession -dev IBM.2107-75ABTV2 72 Date/Time: June 21, 2006 12:58:23 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV2 LSS ID Session Status Volume VolumeStatus PrimaryStatus SecondaryStatus FirstPassComplete AllowCascading =========================================================================================== ================================== 72 01 CG In Progress 7200 Active Primary Copy Pending Secondary Full Duplex True Enable 72 01 CG In Progress 7201 Active Primary Copy Pending Secondary Full Duplex True Enable 72 01 CG In Progress 7202 Active Primary Copy Pending Secondary Full Duplex True Enable 72 01 CG In Progress 7203 Active Primary Copy Pending Secondary Full Duplex True Enable


Chapter 30. Disaster recovery test scenarios


In this chapter we describe several disaster recovery test scenarios. For each scenario, we describe all operations in detail using a step-by-step approach.


30.1 Overview
Disaster recovery test scenarios are operations initiated by the user and based on failover/failback advanced copy function features. Disaster recovery test scenarios are used to practice readiness for a disaster. They have minimal impact on the existing Metro/Global Mirror and no impact on the production at the local site. We describe two disaster recovery test scenarios, fail over to the intermediate site and fail over to the remote site. Also, all of the scenarios presented here have been practically tested and represent the best practice for the situations they address. However, additional or alternate scenarios remain possible depending on particular circumstances within your data center. Note: We strongly recommend that you test all procedures extensively before deployment to the production environment.

30.2 Disaster recovery test at the intermediate site


In this section we describe the sequence of steps required to perform a disaster recovery test at the intermediate site, while the production keeps running at the local site. It is assumed that the storage at the intermediate site is accessible to a host that will start up the application for the disaster recovery test purpose. Figure 30-1 illustrates the steps to perform a failover to the intermediate site, while the production remains at the local site.

Local Site
3 4

Intermediate Site

Remote Site

A
1

B E

C D

Production Host

Disaster Recovery Test Host

Figure 30-1 Failover to intermediate site for disaster recovery test

The steps for this scenario of the failover to the intermediate site are as follows:
1. Prepare the failover by issuing a freeze and unfreeze of the Metro Mirror.
   Note: During the freeze/unfreeze interval no data will be copied to the remote site.
2. Establish FlashCopy to the additional volumes at the intermediate site.
3. Set up PPRC paths from the local site to the intermediate site.
4. Resume Metro Mirror.
5. Start I/O at the disaster recovery host.

30.2.1 Step 1: Prepare the failover for disaster recovery test


Because the applications will continue after the failover, you must ensure that the data is consistent at the intermediate site. To provide consistency at the intermediate site, prior to the failover a freeze of the Metro Mirror pairs is issued in combination with an unfreeze (see 28.4, Freezing and unfreezing Metro Mirror volumes on page 466). The freeze of the Metro Mirror will set the queue full state on the primary volumes and terminate the links between the local and the intermediate site. The unfreeze that follows immediately will remove the queue full state from the volumes, so that the volumes become writable again. Note: During the queue full state the I/O from the application is blocked and waits until the volumes are writable again. This implies an interruption of the application I/O for the time it takes to terminate the PPRC links. The application does not have to be stopped. Example 30-1 shows the usage of freezepprc and unfreezepprc for the Metro Mirror. These commands are issued at the local site. A subsequent lspprc command at the local storage subsystem shows that the primary volumes went to a Suspended state as a result of Freeze Metro Mirror. An lspprc command issued at the intermediate site shows the status of the secondary volumes, which are still in Target Full Duplex mode.
Example 30-1 Freeze and unfreeze prior to the failover At the local site: dscli> freezepprc -dev IBM.2107-75ABTV1 -remotedev IBM.2107-75ABTV2 70:72 Date/Time: June 22, 2006 11:41:01 AM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV1 CMUC00161W freezepprc: Remote Mirror and Copy consistency group 70:72 successfully created. dscli> lspprc -dev IBM.2107-75ABTV1 -remotedev IBM.2107-75ABTV2 -fmt default 7000-7003 Date/Time: June 22, 2006 11:41:19 AM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV1 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status =========================================================================================== ===== 7000:7200 Suspended Freeze Metro Mirror 70 300 Disabled Invalid 7001:7201 Suspended Freeze Metro Mirror 70 300 Disabled Invalid 7002:7202 Suspended Freeze Metro Mirror 70 300 Disabled Invalid 7003:7203 Suspended Freeze Metro Mirror 70 300 Disabled Invalid At the intermediate site: dscli> lspprc -dev IBM.2107-75ABTV2 -remotedev IBM.2107-75ABTV1 -fmt default 7200-7203 Date/Time: June 22, 2006 11:42:15 AM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV2 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status =========================================================================================== ============== 7000:7200 Target Full Duplex Metro Mirror 70 unknown Disabled Invalid

7001:7201 Target Full Duplex Metro Mirror 70 unknown Disabled Invalid
7002:7202 Target Full Duplex Metro Mirror 70 unknown Disabled Invalid
7003:7203 Target Full Duplex Metro Mirror 70 unknown Disabled Invalid
7200:7400 Copy Pending Global Copy 72 unknown Disabled True
7201:7401 Copy Pending Global Copy 72 unknown Disabled True
7202:7402 Copy Pending Global Copy 72 unknown Disabled True
7203:7403 Copy Pending Global Copy 72 unknown Disabled True

To quickly re-enable I/O to the primary volumes, the unfreezepprc command is issued immediately after the freezepprc. Example 30-2 illustrates the unfreezepprc command.
Example 30-2 Unfreeze the primary volumes and recreate PPRC paths dscli> unfreezepprc -dev IBM.2107-75ABTV1 -remotedev IBM.2107-75ABTV2 70:72 Date/Time: June 22, 2006 11:43:00 AM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV1 CMUC00198I unfreezepprc: Remote Mirror and Copy pair 70:72 successfully thawed.

30.2.2 Step 2: Set up FlashCopy to the additional volumes


The freezepprc has terminated the PPRC links and suspended the primary volumes. The volumes at the intermediate site are now ready for a FlashCopy. Example 30-3 shows the mkflash command, which is executed at the intermediate site.
Example 30-3 Set up FlashCopy on intermediate site dscli> mkflash -dev IBM.2107-75ABTV2 7200-7203:7204-7207 Date/Time: June 22, 2006 1:24:48 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV2 CMUC00137I mkflash: FlashCopy pair 7200:7204 successfully created. CMUC00137I mkflash: FlashCopy pair 7201:7205 successfully created. CMUC00137I mkflash: FlashCopy pair 7202:7206 successfully created. CMUC00137I mkflash: FlashCopy pair 7203:7207 successfully created. dscli> lsflash -dev IBM.2107-75ABTV2 7200-7203:7204-7207 Date/Time: June 22, 2006 1:26:14 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV2 ID SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy =========================================================================================== ========================================= 7200:7204 72 0 300 Enabled Disabled Disabled Disabled Enabled Enabled Enabled 7201:7205 72 0 300 Disabled Disabled Disabled Disabled Enabled Enabled Enabled 7202:7206 72 0 300 Disabled Disabled Disabled Disabled Enabled Enabled Enabled 7203:7207 72 0 300 Disabled Disabled Disabled Disabled Enabled Enabled Enabled


30.2.3 Step 3: Set up PPRC paths from the local site to the intermediate site
After the FlashCopy has been initiated, we can set up the PPRC paths again. More details about the PPRC paths set up can be found in 27.3.2, Step 1: Set up all Metro Mirror and Global Mirror paths on page 454. When the paths are recreated the Metro Mirror will not copy data because the primary volumes are still in Suspend mode. Example 30-4 shows the mkpprcpath and lspprcpath commands, which are executed on the local site.
Example 30-4 Set up PPRC paths from the local site to the intermediate site and check the paths dscli> mkpprcpath -dev IBM.2107-75ABTV1 -remotedev IBM.2107-75ABTV2 -remotewwnn 5005076303FFCE63 -srclss 70 -tgtlss 72 -consistgrp I0033:I0233 I0102:I0301 Date/Time: June 22, 2006 1:40:36 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV1 CMUC00149I mkpprcpath: Remote Mirror and Copy path 70:72 successfully established. dscli> lspprcpath -dev IBM.2107-75ABTV1 70 Date/Time: June 22, 2006 1:52:07 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV1 Src Tgt State SS Port Attached Port Tgt WWNN ========================================================= 70 72 Success FF72 I0033 I0233 5005076303FFCE63 70 72 Success FF72 I0102 I0301 5005076303FFCE63

30.2.4 Step 4: Resume Metro Mirror


Now we can resume the Metro Mirror, which was suspended because of the freezepprc in 30.2.1, Step 1: Prepare the failover for disaster recovery test on page 497. Once the Metro Mirror has resumed, the Metro/Global Mirror is active again. Example 30-5 shows the resumepprc and lspprc commands.
Example 30-5 Resume PPRC pairs and check the state At the local site: dscli> resumepprc -dev IBM.2107-75ABTV1 -remotedev IBM.2107-75ABTV2 -type mmir 7000-7003:7200-7203 Date/Time: June 22, 2006 2:10:35 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV1 CMUC00158I resumepprc: Remote Mirror and Copy volume pair 7000:7200 relationship successfully resumed. This message is being returned before the copy completes. CMUC00158I resumepprc: Remote Mirror and Copy volume pair 7001:7201 relationship successfully resumed. This message is being returned before the copy completes. CMUC00158I resumepprc: Remote Mirror and Copy volume pair 7002:7202 relationship successfully resumed. This message is being returned before the copy completes. CMUC00158I resumepprc: Remote Mirror and Copy volume pair 7003:7203 relationship successfully resumed. This message is being returned before the copy completes. dscli> lspprc -dev IBM.2107-75ABTV1 -remotedev IBM.2107-75ABTV2 7000-7003 Date/Time: June 22, 2006 2:10:58 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV1 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status =========================================================================================== =======

7000:7200 Full Duplex Metro Mirror 70 300 Disabled Invalid
7001:7201 Full Duplex Metro Mirror 70 300 Disabled Invalid
7002:7202 Full Duplex Metro Mirror 70 300 Disabled Invalid
7003:7203 Full Duplex Metro Mirror 70 300 Disabled Invalid

At the intermediate site: dscli> lspprc -dev IBM.2107-75ABTV2 -remotedev IBM.2107-75ABTV1 7200-7203 Date/Time: June 22, 2006 2:11:23 PM CEST IBM DSCLI Version: 5.1.600.196 DS: IBM.2107-75ABTV2 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status =========================================================================================== ============== 7000:7200 Target Full Duplex Metro Mirror 70 unknown Disabled Invalid 7001:7201 Target Full Duplex Metro Mirror 70 unknown Disabled Invalid 7002:7202 Target Full Duplex Metro Mirror 70 unknown Disabled Invalid 7003:7203 Target Full Duplex Metro Mirror 70 unknown Disabled Invalid 7200:7400 Copy Pending Global Copy 72 unknown Disabled True 7201:7401 Copy Pending Global Copy 72 unknown Disabled True 7202:7402 Copy Pending Global Copy 72 unknown Disabled True 7203:7403 Copy Pending Global Copy 72 unknown Disabled True

30.2.5 Step 5: Start I/O on the disaster recovery host


After you have varied on the volume groups and mounted the file systems (in the case of an AIX operating system), the applications can be started on the disaster recovery host. When you have finished the disaster recovery test, you can unmount the file systems, vary off the volume groups on the host, and withdraw the FlashCopy relationships to the target volumes at the intermediate DS8000, so that you are in the same state you were in at the beginning of the disaster recovery test.
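Withdrawing the FlashCopy relationships might look like the following sketch, which reuses the source and target volumes from Example 30-3; verify the options against the DS CLI reference for your code level.

dscli> rmflash -quiet -dev IBM.2107-75ABTV2 7200-7203:7204-7207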

30.3 Disaster recovery test at remote site


In this section we describe the steps required to perform a disaster recovery test at the remote site, while the production continues at the local site. The storage at the remote site must be accessible to a host on which to start the application.


Figure 30-2 illustrates the steps to perform a failover to the remote site, while the production remains at the local site.

Figure 30-2 Failover for disaster recovery test to remote

The steps are as follows:
1. Freeze and unfreeze the Metro Mirror.
   Note: During the freeze/unfreeze interval, no data will be copied to the remote site.
2. Establish FlashCopy to the additional volumes at the remote site.
3. Set up PPRC paths from the local site to the intermediate site.
4. Resume Metro Mirror.
5. Start I/O at the disaster recovery host.
For this scenario, we assume that an additional host is available to start the application from the additional volumes that have been copied with FlashCopy at the remote site. The steps are explained in detail in 30.2, Disaster recovery test at the intermediate site on page 496. The only difference is that steps 2 and 5 will be executed on the remote site.



Chapter 31. Unplanned scenarios
In this chapter we present some possible impacts to the different data center locations that can be managed by Metro/Global Mirror. We describe various ways to fail over to the intermediate site or the remote site. Each scenario points to an entry into the planned scenarios described in Chapter 29, Planned recovery scenarios on page 475, which are used to complete the failover process.


31.1 Overview
Unplanned scenarios are operations that apply when an exceptional event affects normal operations at the local site and requires a failover to storage resources at the intermediate or remote site. Depending on the kind of incident, different actions are required. It is therefore essential to analyze the current situation very carefully in order to make the right decisions for the storage takeover. When the situation is understood, actions should be managed so that one of the planned scenarios described in Chapter 29, Planned recovery scenarios on page 475, can be used to bring the system into a defined failover status. Some situations are easier to manage when a disaster occurs. For example, when the local site is completely destroyed, it is obvious that the storage and the applications have to be failed over to the intermediate or the remote site. Other situations might be more complex, for example, when some components of the local site are still in operation and other components are damaged or inaccessible. In this case a careful analysis of the situation has to be done before the failover processes can be accomplished. In the following sections several types of incidents and disaster situations are discussed, though many variations are possible.

31.2 Server outages


This situation can occur when one or more servers are down at the local site but the storage system is not affected by the outage. A server outage is, in general, addressed through a high availability solution like HACMP for AIX or other cluster solutions implemented at the local site. When the intermediate site is not too far away from the local site and the bandwidth is high enough, a cluster across the local and the intermediate site can be implemented as described in 27.2, Configuration examples on page 451.


When there is no cluster implementation and a server outage happens, the situation can be categorized as a disaster situation where the application and the storage have to be taken over. This is the situation discussed in this section and illustrated in Figure 31-1.

Figure 31-1 Failover after server outage at local site

In this case the storage focal point is failed over to the remote site, since the second server will typically reside at the remote site. To achieve the failover to the remote site the scenario described in 29.4, Recovery at remote site on page 484, can be used.

31.3 Link failures


A link failure is detected when a storage system cannot see its counterpart through the PPRC links. Typically more than one PPRC link is in use, which offers, besides higher bandwidth, resiliency against single link failures. If not all links have failed, the Metro Mirror or the Global Copy continues. As long as there is no massive performance impact, a failover is not needed.

31.3.1 Metro Mirror link failure


If the performance degrades or all links are broken, depending on where the secondary server resides, a failover to the intermediate or the remote site is required. In this case the planned scenarios in 29.2, Recovery at intermediate site on page 476, or in 29.4, Recovery at remote site on page 484, can be used. Using the Incremental Resync functionality it is possible to set up a Global Mirror to the remote site, which bypasses the intermediate site, without any interruption of the data replication. See Chapter 32, MGM Incremental Resync on page 511, for more information.


Figure 31-2 Link failure of the Metro Mirror

When all links have failed in a Metro Mirror, the pairs will go into the suspend mode after a time-out of 30 seconds (which is the default). To fail over to the intermediate site, see 29.2.3, Step 3: Failover the intermediate site on page 478, and proceed with this step. When a failover to the remote site has to be done, because the secondary production server is located at the remote site, the processing should start with 29.4.2, Step 2: Terminate Global Mirror or remove volumes from session on page 486, and the following steps. If the Metro Mirror links are back before a failover has been started, the Metro Mirror pairs have to be resumed manually with the resumepprc command.
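Such a manual resume might look like the following sketch, issued at the local site with the volume ranges used throughout these chapters (compare Example 30-5).

dscli> resumepprc -dev IBM.2107-75ABTV1 -remotedev IBM.2107-75ABTV2 -type mmir 7000-7003:7200-7203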

31.3.2 Global Copy link failures


When the Global Copy links have failed completely, the copy pairs will go into the suspend mode after the time-out of 30 seconds. The Metro Mirror is still active. A failover to the intermediate or the remote site is not indicated. When the links are back again, the Global Copy resumes automatically. The Global Mirror will also automatically start forming Consistency Groups. In this case no action other than fixing the link problems is required.

31.4 Partial disasters


A disaster always has a serious impact on the infrastructure of a data center. Systems installed in a data center can sometimes rely on redundancies provided by the data center infrastructure. For example, server, storage, and network components are in general equipped with two independent power supplies, connected to two independent power paths. Redundancy provides an overall resilient environment for each data center, and is enough to resist most incidents against single components such as the servers, storage, network, or parts of the infrastructure. But serious incidents, usually beyond your control, such as a massive lightning strike, can cause an outage of more than one component of the infrastructure. Other parts of the data center might still be faultless. Figure 31-3 illustrates an example of a partial disaster in a data center caused by multiple outages of the electrical power distribution.
Figure 31-3 Example for a partial disaster

In this example, three secondary power distributors are out of order. The result is that all components connected to SD1 only, to SD4 only, or to both SD1 and SD4 have no electrical power. All other components continue to operate. Assume that Server1 is configured into several Logical Partitions (LPARs) and that in each LPAR a different application or a part of an application, for example, a database engine, is running. The storage for Server1 can be supplied by the storage subsystems Storage1, Storage2, and Storage3. In order to make the correct decisions for recovery, a detailed and careful analysis of this situation is essential. Remember that not only the servers and the storage subsystems are the subject of the recovery, but that the application must be made available at the recovery site. With Metro/Global Mirror you can bring applications back very quickly, and in a flexible way, while keeping data consistent.

In our example, the following types of situations are possible:
- All servers with a total loss of power or without network access are subject to a failover. For the servers using Storage1, a storage failover has to be executed to the intermediate or remote site, depending on where the secondary server is installed. For the servers that are clustered with a server at the intermediate site and that are supplied by another storage subsystem, a cluster failover can be applied. For those servers that have their secondary at the remote site, a storage failover to the remote site has to be done.
- All servers that are still available but use Storage1 are subject to a storage failover. For the servers that are clustered with the intermediate site, the storage can be failed over to the intermediate site. The server can remain at the local site, but could also be taken over to the intermediate site. For the servers having their secondary at the remote site, a storage failover to the remote site has to be done.

Based on these conclusions, the following decision table can help you find the correct scenario for the Metro/Global Mirror operations.
Table 31-1 Failover decision table

Server status | Storage status | Location of secondary server | Apply storage failover | Apply server failover | Applicable scenario starting at
Running | Available | Intermediate | N/A | N/A | N/A
Running | Available | Remote | N/A | N/A | N/A
Running | Failed | Intermediate | To intermediate | Yes/no (a) | 29.2.3, Step 3: Failover the intermediate site on page 478
Running | Failed | Remote | To remote | Implicit (b) | 29.4.2, Step 2: Terminate Global Mirror or remove volumes from session on page 486
Failed | Available | Intermediate | Yes/no (c) | Yes | 29.2.3, Step 3: Failover the intermediate site on page 478
Failed | Available | Remote | To remote | Implicit | 29.4.2, Step 2: Terminate Global Mirror or remove volumes from session on page 486
Failed | Failed | Intermediate | To intermediate | Yes | 29.2.3, Step 3: Failover the intermediate site on page 478
Failed | Failed | Remote | To remote | Implicit | 29.4.2, Step 2: Terminate Global Mirror or remove volumes from session on page 486

a. Depends on the decision of the client, when the server is clustered to the intermediate site.
b. A storage failover to the remote site implies a server failover to the remote site, since there is no stretched cluster between the local and remote sites.
c. Not recommended when the server is clustered to the intermediate site.

A partial disaster, as can occur in large data centers, is the most challenging kind of disaster because of the many different situations that can result. The complexity is apparent even when only 20% of the applications are affected. This percentage is too low to fail over a complete data center. But when a data center hosts 100 applications, the client has to manage 20 different critical situations.

31.5 Data center outages


A data center outage occurs when a massive external incident completely impacts the data center. This does not necessarily mean that the center is completely destroyed, but that all the essential infrastructure components, such as electrical power, network, storage, or the data center air conditioning, have completely failed.

For a Metro/Global Mirror implementation, disaster recovery is only indicated when there is an outage of the data center at the local site. In this case an unplanned failover scenario like that described in 31.3.1, Metro Mirror link failure on page 505, can be applied. When the intermediate or the remote site is down, production at the local site can still continue. Only the disaster recovery capabilities are degraded. When the remote site is down, Metro Mirror to the intermediate site is still available.

When the intermediate site is down and a connection with sufficient bandwidth is available between the local and the remote site, a Global Mirror to the remote site can be set up. In this case a Global Copy between the Metro Mirror primary volumes and the Global Copy secondary volumes can be established. This requires that at the local and the remote site the current pair relation is removed using the rmpprc command with the options -at src and -unconditional. The links between the storage subsystems at the local and the remote site have to be established and the paths have to be created according to 27.3.2, Step 1: Set up all Metro Mirror and Global Mirror paths on page 454. The steps required to set up the Global Mirror are described in 27.3.3, Step 2: Set up Global Copy NOCOPY from intermediate to remote sites on page 456, and in 27.3.6, Step 5: Create Global Mirror session and add volumes to session on page 458, and the following sections.

When there is a mix of primary and secondary production between the local and the intermediate site, as shown in the example configurations in Chapter 27, Configuration and setup on page 449, the applications of the failed data center have to be failed over to their secondary site. The Global Mirror for these applications is already established because it is part of the normal setup of the Metro/Global Mirror.
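As a minimal sketch of the first actions for bypassing a failed intermediate site, the following commands remove the existing Metro Mirror pair relation at the local site and create one of the required paths from the local to the remote site. The storage image IDs, WWNN, LSS numbers, I/O port IDs, and volume ranges are hypothetical placeholders; the number of paths and pairs depends on your configuration.

# Remove the current Metro Mirror pair relation at the local site (placeholder values)
dscli> rmpprc -dev IBM.2107-75LOC01 -remotedev IBM.2107-75INT01 -at src -unconditional 2000-2007:2000-2007
# Create a PPRC path from the local to the remote site (placeholder WWNN and port IDs)
dscli> mkpprcpath -dev IBM.2107-75LOC01 -remotedev IBM.2107-75RMT01 -remotewwnn 5005076303FFC111 -srclss 20 -tgtlss 20 I0010:I0110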


Chapter 32. MGM Incremental Resync


In this chapter we explain and illustrate the Incremental Resynchronization (or Incremental Resync) feature now available in the context of a Metro/Global Mirror environment. We start with an overview and functional description of Incremental Resync and discuss the new DSCLI options that support it. This is followed by a section explaining how to set up Incremental Resync for Metro/Global Mirror. Finally, we present scenarios with detailed operations presented in a step-by-step approach and covering the following situations:
- Failure at the local site
- Failure at the intermediate site
- Returning to normal operations


32.1 Overview
We know that in a Metro/Global Mirror environment, data is copied from the local to the intermediate site and then cascaded from the intermediate site to the remote site. Obviously, if there is a storage failure (or disaster) at the intermediate or local site, or even simply a loss of connectivity between the local and intermediate sites, data can no longer be cascaded to the remote site. However, if we also establish physical connectivity directly between the local and remote sites, we can, in case of a failure at the intermediate site, still copy data from the local to the remote site. Incremental Resync provides the capability to use this connection with a Global Mirror relationship between the local and remote sites to copy data after such a failure, without having to recopy all the data. Figure 32-1 shows how the Incremental Resync is used.
Figure 32-1 Setup of a Metro/Global Mirror for Incremental Resync (the figure shows three DS8000 systems at the local, intermediate, and remote sites, connected through 2109-M14 switches, with Metro Mirror from the local to the intermediate site, Global Mirror from the intermediate to the remote site, and an additional Global Mirror path from the local to the remote site)

32.1.1 Functional description


A prerequisite to Incremental Resync is to have established paths from the local site to the remote site. These paths are required for the Global Mirror which will be created when the failing intermediate site is bypassed. Depending on the nature of the failure at the intermediate site, it could be that the Global Mirror between the local and remote site will need to be used for an extended period of time. To be prepared for future failover and failback operations we recommend that you establish the paths in both directions. To enable the Incremental Resync capability, a specific option must be specified when initially establishing the Metro Mirror relationship between the local and intermediate sites. If the Metro Mirror is already established and in full duplex, the Incremental Resync is enabled by issuing the mkpprc command against the existing relation with the specific option to enable Incremental Resync.
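As an illustration only, establishing one such path in each direction between the local and remote sites could look like the following sketch. The storage image IDs, WWNNs, LSS numbers, and I/O port IDs are hypothetical placeholders and must be replaced with the values of your environment.

# Path from the local to the remote site (placeholder values)
dscli> mkpprcpath -dev IBM.2107-75LOC01 -remotedev IBM.2107-75RMT01 -remotewwnn 5005076303FFC222 -srclss 20 -tgtlss 20 I0010:I0110
# Path from the remote back to the local site, for later failover and failback operations (placeholder values)
dscli> mkpprcpath -dev IBM.2107-75RMT01 -remotedev IBM.2107-75LOC01 -remotewwnn 5005076303FFC111 -srclss 20 -tgtlss 20 I0110:I0010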


When you specify Incremental Resync, in addition to the existing Change Recording bitmap of the Global Mirror at the intermediate site, another Change Recording bitmap is created at the Metro Mirror primary volumes. This bitmap keeps track of any incoming I/O from the time a consistency group is copied from the intermediate site until it has arrived at the FlashCopy target volumes at the remote site. The Global Mirror at the intermediate site is queried periodically for the current status of the tracks between the intermediate and the remote site. The change recording restarts from a cleared change recording bitmap when the next consistency group is formed at the intermediate site. Figure 32-2 shows an overview of the different phases of the Incremental Resync.

Figure 32-2 Incremental Resync phases (the figure shows, over time, the Metro Mirror, Global Mirror coordination time, Global Mirror drain time, and Global Mirror FlashCopy phases across the local, intermediate, and remote sites, with the Incremental Resync bitmap at the local site being updated by periodically querying the Global Mirror bitmaps at the intermediate site)

To illustrate the functionality of Incremental Resync, a failure of the intermediate site is assumed, which means that no more data can be copied to the remote site. If there is a failure of the intermediate site, the Metro Mirror primary volumes go into the suspended state. With Incremental Resync enabled, the tracks that could not be copied to the intermediate site are recorded in the change recording bitmap at the primary site. To bypass the failed intermediate site, a new Global Copy relationship should now be established from the local volumes to the FlashCopy source volumes at the remote site. At this point in time, however, these volumes are still the Global Copy target volumes of the relation from the intermediate site. Therefore, a failover of the existing Global Copy at the remote site has to be applied before the new Global Copy between the local and remote sites can be established.


The new Global Copy from the local site to the remote site must be established with the Incremental Resync option. As a result, all the tracks recorded in the change recording bitmap and in the out-of-sync recording of the Metro Mirror are copied to the remote site. When all out-of-sync tracks have been sent from the local to the remote site, a Global Mirror can be started at the local site using the remote volumes as the Global Mirror target volumes (see Figure 32-3).
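The following sketch outlines these actions. The storage image IDs and volume ranges are hypothetical placeholders, the -mode value is an assumption, and the exact options used in a real recovery are shown in the detailed scenarios later in this chapter.

# Failover of the existing Global Copy target volumes at the remote site (placeholder values)
dscli> failoverpprc -dev IBM.2107-75RMT01 -remotedev IBM.2107-75INT01 -type gcp 2000-2007:2000-2007
# New Global Copy from local to remote, using the Incremental Resync recover value (placeholder values; -mode nocp is an assumption)
dscli> mkpprc -dev IBM.2107-75LOC01 -remotedev IBM.2107-75RMT01 -type gcp -mode nocp -incrementalresync recover 2000-2007:2000-2007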

Figure 32-3 Incremental Resync overview (the figure shows the primary production host at the local site copying data directly to the C and D volumes at the remote site, bypassing the failed intermediate site)

When the intermediate site is available again, the Metro/Global Mirror must be reconfigured back to normal operation. Any surviving configuration of the Global Mirror at the intermediate site must be cleared. To copy the data missing at the intermediate site since the failure, a failback from the remote site to the intermediate site is performed. When the incremental copy is completed, the procedure to return to normal operations from local to remote via the intermediate site can be initiated. The Global Copy between the intermediate and the remote site has to be re-established in its original direction. Then the Metro Mirror can be recreated using the Incremental Resync option, which copies only the data changed since the Global Copy from the local to the remote site was removed. The final step is to recreate the Global Mirror at the intermediate site, which enables consistency groups to be formed at the intermediate site and drained to the remote site.

32.1.2 Options for DSCLI


The Incremental Resync is enabled using the mkpprc command with the option -incrementalresync. The option requires one of the following values to specify the different behaviors of Incremental Resync:

enable        This enables the Incremental Resync functionality. It creates initialized change recording bitmaps. (Bitmaps consist initially of all ones. As consistency groups are formed in Global Mirror, the bitmaps will change to reflect only the changed data.)

enablenoinit  This enables the Incremental Resync functionality and creates change recording bitmaps. The bitmaps will not be initialized, that is, they will be all zeroes. This option should only be used in certain cases as part of the return scenario for Incremental Resync.

disable       This stops the Incremental Resync mechanism.

recover       This brings in relations of local to remote volumes as new pairs. The devices specified in the command are checked to ensure that they currently have a target device in common. In other words, it verifies that the devices are part of an existing Metro/Global Mirror relationship.

override      This is the same as recover, but the volumes cannot be part of a Metro/Global Mirror. As opposed to recover, in this case the checking is not done.
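For illustration, the following sketch shows how one of these values could be passed. We assume here that disable, like enable, is issued against an existing full duplex relationship with -mode nocp; the storage image IDs and volume range are hypothetical placeholders.

# Stop the Incremental Resync mechanism for an existing Metro Mirror relationship (placeholder values)
dscli> mkpprc -dev IBM.2107-75LOC01 -remotedev IBM.2107-75INT01 -type mmir -mode nocp -incrementalresync disable 2000-2007:2000-2007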

32.2 Setting up Metro/Global Mirror with Incremental Resync


In this section we describe how to set up Metro/Global Mirror with incremental re-synchronization, either from scratch or from an existing 2-site Global Mirror. The initial setup procedure (from scratch) assumes that no Metro Mirror or Global Copy relationships exist yet. The setup procedure from an existing Global Mirror environment explains how to set up a Metro/Global Mirror with Incremental Resync environment by adding an intermediate site.

32.2.1 Setup of Incremental Resync Metro/Global Mirror


To set up Incremental Resync Metro/Global Mirror, follow the steps in 27.3, Initial setup of Metro/Global Mirror on page 453. The steps must be executed in the same order. However, 27.3.4, Step 3: Set up Metro Mirror between local and intermediate sites on page 456, is modified as follows to implement Metro/Global Mirror with Incremental Resync. Specify the -incrementalresync enable option in conjunction with -type mmir options when executing the mkpprc command at the local site to create the Metro Mirror relationship. This enables the Incremental Resync function and creates change recording bitmaps for the Metro Mirror primary volumes running at the local site. When the Metro Mirror is already established, but Incremental Resync is not yet enabled, the option -mode nocp is required to enable Incremental Resync. Example 32-1 shows how to set up incremental re-synchronization for Metro/Global Mirror.
Example 32-1 Setup of Incremental Resync Metro/Global Mirror
dscli> mkpprc -dev IBM.2107-1301651 -remotedev IBM.2107-1301261 -type mmir -mode nocp -incrementalresync enable 2000-2007:2000-2007 CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2000:2000 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2001:2001 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2002:2002 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2003:2003 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2004:2004 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2005:2005 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2006:2006 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2007:2007 successfully created. dscli> dsclI> lspprc -dev IBM.2107-1301651 -l 2000-2007 ID State Reason Type Out Of Sync Tracks Tgt Read Src Cascade Tgt Cascade Date Suspended SourceLSS Timeout (secs) Critical Mode First Pass Status Incremental Resync Tgt Write GMIR CG PPRC CG


========================================================================================================================================== ========================================================================= 2000:2000 Full Duplex Metro Mirror 0 Disabled Disabled Invalid 20 300 Disabled Invalid Enabled Disabled Disabled Enabled 2001:2001 Full Duplex Metro Mirror 0 Disabled Disabled Invalid 20 300 Disabled Invalid Enabled Disabled Disabled Enabled 2002:2002 Full Duplex Metro Mirror 0 Disabled Disabled Invalid 20 300 Disabled Invalid Enabled Disabled Disabled Enabled 2003:2003 Full Duplex Metro Mirror 0 Disabled Disabled Invalid 20 300 Disabled Invalid Enabled Disabled Disabled Enabled 2004:2004 Full Duplex Metro Mirror 0 Disabled Disabled Invalid 20 300 Disabled Invalid Enabled Disabled Disabled Enabled 2005:2005 Full Duplex Metro Mirror 0 Disabled Disabled Invalid 20 300 Disabled Invalid Enabled Disabled Disabled Enabled 2006:2006 Full Duplex Metro Mirror 0 Disabled Disabled Invalid 20 300 Disabled Invalid Enabled Disabled Disabled Enabled 2007:2007 Full Duplex Metro Mirror 0 Disabled Disabled Invalid 20 300 Disabled Invalid Enabled Disabled Disabled Enabled dscli> dscli>

The remaining steps in 27.3, Initial setup of Metro/Global Mirror on page 453, are followed to complete the setup for Metro/Global Mirror with incremental resync.

32.2.2 Going from Global Mirror to Incremental Resync Metro/Global Mirror


Incremental Resync Metro/Global Mirror can be initialized if production is already running in a Global Mirror environment. The transition from the Global Mirror environment to an Incremental Resync Metro/Global Mirror environment is performed with a full copy from the remote site to the intermediate site. In this scenario the new site will be the intermediate site. Figure 32-4 illustrates the steps to transition from a current Global Mirror environment to a Metro/Global Mirror with Incremental Resync environment.
Figure 32-4 Moving from Global Mirror to Incremental Resync Metro/Global Mirror (the figure shows the primary production host and the A volumes at the local site, the B volumes at the new intermediate site, and the C and D volumes at the remote site, together with the numbered steps 1 to 8 of the transition)


The steps for setting up and initializing Metro/Global Mirror with Incremental Resync if Global Mirror is already running are as follows (we assume that there is an existing Global Mirror between the A and C volumes, and an intermediate site (B volumes) is added to the configuration):
1. Set up all PPRC paths.
2. Start Global Copy from the remote to the new intermediate site.
3. Start incremental re-synchronization from the local site.
4. Terminate Global Mirror and suspend Global Copy at the local site.
5. Terminate the Global Copy local to remote at the remote site.
6. Reverse Global Copy to run from the intermediate to the remote site.
7. Set up Metro Mirror from the local to the intermediate site.
8. Set up Global Mirror at the intermediate site.

Step 1: Set up all PPRC paths


Before setting up the Metro/Global Mirror with a new intermediate site, all paths should be established. Before establishing PPRC paths, the PPRC ports should first be identified. See 27.3.1, Identifying the PPRC ports on page 454, to determine which PPRC ports are available and can be used as links for the Metro/Global Mirror environment. When the PPRC ports are identified, PPRC paths can then be created from the local to the intermediate site and another from the intermediate to the remote site. It is also good practice to create the PPRC paths in the opposite directions, from the intermediate to the local site and from the remote to the intermediate site. There should be a total of four new PPRC paths created. Prerequisites for creating a successful PPRC path and reasons for creating PPRC in both directions are explained in 27.3.2, Step 1: Set up all Metro Mirror and Global Mirror paths on page 454. To create the new PPRC paths, the command mkpprcpath is executed from each of the sites.
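As an illustration, creating one of the four new paths, from the local to the new intermediate site, could look like the following sketch. The WWNN and the I/O port IDs are hypothetical placeholders, while the storage image IDs and LSS numbers follow the examples used in this section.

# Path from the local site (LSS 62) to the new intermediate site (LSS 66); WWNN and ports are placeholders
dscli> mkpprcpath -dev IBM.2107-7503461 -remotedev IBM.2107-7520781 -remotewwnn 5005076303FFC999 -srclss 62 -tgtlss 66 I0143:I0010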

Step 2: Start Global Copy from the remote to the intermediate site
To migrate the data from the remote site to the intermediate site, Global Copy is started. This takes the data that is currently mirrored from the local to the remote site and copies it (or rather cascades it) to the intermediate site. The mkpprc command with the option -type gcp is executed from the remote site to the intermediate site. Because there is already a Global Copy relationship from the local to the remote site, the -cascade option must also be used with the mkpprc command. This command creates a Global Copy relationship from the remote site volumes that are currently Global Copy secondaries to the intermediate site volumes. Example 32-2 shows the DSCLI command to create the Global Copy. The subsequent lspprc command shows the cascading from local to remote to intermediate.
Example 32-2 Start Global Copy from the remote to the intermediate site
dscli> mkpprc -dev IBM.2107-75DNXC1 -remotedev IBM.2107-7520781 -type gcp -mode full -cascade 6000-6003:6600-6603
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 6000:6600 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 6001:6601 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 6002:6602 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 6003:6603 successfully created.
dscli>
dscli> lspprc -dev IBM.2107-75DNXC1 -fullid 6000-6003
ID                                          State               Reason Type        SourceLSS           Timeout (secs) Critical Mode First Pass Status
=====================================================================================================================================================
IBM.2107-7503461/6200:IBM.2107-75DNXC1/6000 Target Copy Pending -      Global Copy IBM.2107-7503461/62 unknown        Disabled      Invalid
IBM.2107-7503461/6201:IBM.2107-75DNXC1/6001 Target Copy Pending -      Global Copy IBM.2107-7503461/62 unknown        Disabled      Invalid
IBM.2107-7503461/6202:IBM.2107-75DNXC1/6002 Target Copy Pending -      Global Copy IBM.2107-7503461/62 unknown        Disabled      Invalid
IBM.2107-7503461/6203:IBM.2107-75DNXC1/6003 Target Copy Pending -      Global Copy IBM.2107-7503461/62 unknown        Disabled      Invalid
IBM.2107-75DNXC1/6000:IBM.2107-7520781/6600 Copy Pending        -      Global Copy IBM.2107-75DNXC1/60 unknown        Disabled      False
IBM.2107-75DNXC1/6001:IBM.2107-7520781/6601 Copy Pending        -      Global Copy IBM.2107-75DNXC1/60 unknown        Disabled      False
IBM.2107-75DNXC1/6002:IBM.2107-7520781/6602 Copy Pending        -      Global Copy IBM.2107-75DNXC1/60 unknown        Disabled      False
IBM.2107-75DNXC1/6003:IBM.2107-7520781/6603 Copy Pending        -      Global Copy IBM.2107-75DNXC1/60 unknown        Disabled      False
dscli>
dscli>

Step 3: Start incremental re-synchronization from the local site


Before terminating the original Global Mirror that is running from the local to the remote site, Incremental Resync should be enabled at the local site. This takes the current Global Mirror relationship and creates change recording bitmaps at the local site to keep track of all updates occurring from production at the local site. The updates at the local site will be needed when Metro Mirror with Incremental Resync is established later in Step 7: Set up Metro Mirror from the local to the intermediate site on page 520. The mkpprc command with the option -incrementalresync enablenoinit is executed from the local site using the current Global Mirror relationship from the local to the remote site as shown in Example 32-3.
Example 32-3 Start incremental re-synchronization from the local site
dscli> mkpprc -dev IBM.2107-7503461 -remotedev IBM.2107-75DNXC1 -type gcp -mode nocp -incrementalresync enablenoinit 6200-6203:6000-6003 CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 6200:6000 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 6201:6001 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 6202:6002 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 6203:6003 successfully created. dscli> dscli> lspprc -dev IBM.2107-7503461 -l 6200-6203 ID State Reason Type Out Of Sync Tracks Tgt Read Src Cascade Tgt Cascade Date Suspended SourceLSS Timeout (secs) Critical Mode Fi rst Pass Status Incremental Resync Tgt Write GMIR CG PPRC CG ====================================================================================================================================================== ============================================================== 6200:6000 Copy Pending Global Copy 0 Disabled Enabled Invalid 62 120 Disabled Tr ue Enabled Disabled Disabled Disabled 6201:6001 Copy Pending Global Copy 0 Disabled Enabled Invalid 62 120 Disabled Tr ue Enabled Disabled Disabled Disabled 6202:6002 Copy Pending Global Copy 0 Disabled Enabled Invalid 62 120 Disabled Tr ue Enabled Disabled Disabled Disabled 6203:6003 Copy Pending Global Copy 0 Disabled Enabled Invalid 62 120 Disabled Tr ue Enabled Disabled Disabled Disabled dscli> dscli>

Step 4: Terminate Global Mirror and suspend Global Copy at local


Now that the current active volumes are being updated according to the change recording bitmaps on the local site, Global Mirror can be terminated at the local site. This will cause the FlashCopy targets on the remote site to age. The rmgmir command is executed at the local site to terminate the Global Mirror from the local to remote site. The pausepprc command is used to suspend the Global Copy relationship from the local site to the remote site. By suspending the Global Copy relationship, the local and remote site volumes will both be in a suspended state and the change recording bitmaps will remain active at the local site. By suspending Global Copy from the local to the remote site, data at the intermediate site will no longer be in sync with data at the local site. When Global Copy is suspended, data is no longer moving to the intermediate site via the remote site. Therefore the data at the intermediate site begins to age as production continues running at the local site. The change recording bitmaps at the local site are keeping track of all updates from production.


In Example 32-4, the Global Mirror and the sessions are removed at the local site. Finally, the Global Copy from local to remote is suspended.
Example 32-4 Terminate Global Mirror and suspend Global Copy at local
dscli> rmgmir -dev IBM.2107-7503461 -quiet -lss 62 -session 2
CMUC00165I rmgmir: Global Mirror for session 2 successfully stopped.
dscli> chsession -dev IBM.2107-7503461 -lss 62 -action remove -volume 6200-6203 2
CMUC00147I chsession: Session 10 successfully modified.
dscli> rmsession -dev IBM.2107-7503461 -quiet -lss 62 2
CMUC00146I rmsession: Session 2 closed successfully.
dscli>
dscli> pausepprc -dev IBM.2107-7503461 -remotedev IBM.2107-75DNXC1 6200-6203:6000-6003
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 6200:6000 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 6201:6001 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 6202:6002 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 6203:6003 relationship successfully paused.
dscli>
dscli>

Step 5: Terminate Global Copy at the remote site


Even though the Global Copy has been suspended from the local to the remote site, the volumes at the remote site are still Global Copy secondaries. For the remote site to lose its knowledge of being a secondary of the local site, Global Copy is terminated at the remote site. The command rmpprc with the -unconditional -at tgt options is executed at the remote site. This termination does not affect the local site's state. Therefore it allows the out-of-sync bitmaps to remain in operation at the local site. The state of the local site will remain primary suspended, while the remote site will no longer show as a suspended target. This step is necessary to allow the failback of the intermediate site to the remote site when reversing Global Copy in the next step (Step 6: Reverse Global Copy to run from intermediate to remote site on page 520). Note: All remaining out-of-sync tracks are transferred at this point from the remote to the intermediate site and must be drained before proceeding to the next step. To query out-of-sync tracks, issue the command lspprc -l at the remote site. When the out-of-sync tracks are drained, the remote site and the intermediate site will be in sync. In Example 32-5, we show the DSCLI command to terminate the Global Copy at the remote site. The lspprc command at the local site shows that the relations at this site are untouched, while at the remote site only the Global Copy from remote to intermediate exists.
Example 32-5 Terminate Global Copy local to remote at remote site
dscli> rmpprc -quiet -dev IBM.2107-75DNXC1 -unconditional -at tgt 6000-6003 CMUC00155I rmpprc: Remote Mirror and Copy volume pair :6000 relationship successfully withdrawn. CMUC00155I rmpprc: Remote Mirror and Copy volume pair :6001 relationship successfully withdrawn. CMUC00155I rmpprc: Remote Mirror and Copy volume pair :6002 relationship successfully withdrawn. CMUC00155I rmpprc: Remote Mirror and Copy volume pair :6003 relationship successfully withdrawn. dscli> dscli> lspprc -dev IBM.2107-7503461 -remotedev IBM.2107-75DNXC1 -fullid -fmt default 6200-6203 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ========================================================================================================================================== ====== IBM.2107-7503461/6200:IBM.2107-75DNXC1/6000 Suspended Host Source Global Copy IBM.2107-7503461/62 120 Disabled True IBM.2107-7503461/6201:IBM.2107-75DNXC1/6001 Suspended Host Source Global Copy IBM.2107-7503461/62 120 Disabled True IBM.2107-7503461/6202:IBM.2107-75DNXC1/6002 Suspended Host Source Global Copy IBM.2107-7503461/62 120 Disabled True IBM.2107-7503461/6203:IBM.2107-75DNXC1/6003 Suspended Host Source Global Copy IBM.2107-7503461/62 120 Disabled True dscli>


# At remote site #
dscli> lspprc -dev IBM.2107-75DNXC1 -fullid 6000-6003 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ========================================================================================================================================== ==== IBM.2107-75DNXC1/6000:IBM.2107-7520781/6600 Copy Pending Global Copy IBM.2107-75DNXC1/60 120 Disabled True IBM.2107-75DNXC1/6001:IBM.2107-7520781/6601 Copy Pending Global Copy IBM.2107-75DNXC1/60 120 Disabled True IBM.2107-75DNXC1/6002:IBM.2107-7520781/6602 Copy Pending Global Copy IBM.2107-75DNXC1/60 120 Disabled True IBM.2107-75DNXC1/6003:IBM.2107-7520781/6603 Copy Pending Global Copy IBM.2107-75DNXC1/60 120 Disabled True dscli> dscli>

Step 6: Reverse Global Copy to run from intermediate to remote site


When the out-of-sync tracks are drained, the Global Copy relationship initially established from the remote to the intermediate site can be reversed. To reverse the Global Copy relationship, a failover and a failback must first take place. To fail over to the intermediate site, the failoverpprc command with the option -type gcp is executed at the intermediate site. Because the Global Copy that is currently running from the remote to the intermediate site is cascaded, the -cascade option is also specified with the failoverpprc command. When the failoverpprc command is executed with the -type gcp and -cascade options, the remote site volumes' status becomes primary suspended. Next we fail back Global Copy at the intermediate site by using the failbackpprc command with the option -type gcp at the intermediate site. Once again, since Global Copy is in a cascaded relation, the -cascade option is also specified. Failover and failback have now successfully reversed the Global Copy relationship, and Global Copy can be started from the intermediate site. To start Global Copy, the mkpprc command with the -type gcp and -cascade options is executed. Once again, the -cascade option is used in preparation for the next step, where the Global Copy primary is also going to be a Metro Mirror secondary. A minimal command sketch for this sequence follows.
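The following sketch illustrates this sequence with the device and volume IDs used in this section; the -mode value for the final mkpprc is an assumption and might differ in your environment.

# Failover at the intermediate site (reverses the direction of the remote-to-intermediate Global Copy)
dscli> failoverpprc -dev IBM.2107-7520781 -remotedev IBM.2107-75DNXC1 -type gcp -cascade 6600-6603:6000-6003
# Failback at the intermediate site
dscli> failbackpprc -dev IBM.2107-7520781 -remotedev IBM.2107-75DNXC1 -type gcp -cascade 6600-6603:6000-6003
# Start Global Copy from the intermediate to the remote site (-mode nocp is an assumption)
dscli> mkpprc -dev IBM.2107-7520781 -remotedev IBM.2107-75DNXC1 -type gcp -mode nocp -cascade 6600-6603:6000-6003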

Step 7: Set up Metro Mirror from the local to the intermediate site
Metro Mirror from the local to the intermediate site can now be created. The command mkpprc with the -type mmir option is executed at the local site. In Step 3: Start incremental re-synchronization from the local site on page 518 change recording bitmaps were created to keep track of all updates from production. To recover and restore these updates, the -incrementalresync recover option is specified with the mkpprc command. The recover parameter of the -incrementalresync will establish the Metro Mirror relationship after doing a check at the intermediate site (the Metro Mirror secondary) for a relationship. The change recording bitmaps created in Step 3: Start incremental re-synchronization from the local site on page 518 are then merged with the out-of-sync bitmaps at the local site. Note: To prepare for a disaster, the next step would be to start Metro Mirror with Incremental Resync enabled. The command mkpprc with the options -type mmir and -incrementalresync enable would be executed at the local site. This creates new change recording bitmaps for all the Metro Mirror primary volumes.

Note: Metro Mirror must become full duplex, and Global Copy must complete first pass. To query the status of Global Mirror and Metro Mirror, use the lspprc -l command at the local site to query Metro Mirror, and at the intermediate site to query Global Copy.


In Example 32-6, we create the Metro Mirror in the first step, using the option -incrementalresync recover, which initiates copying all tracks that are marked in the change recording bitmap at the local site. When all the tracks have drained to the intermediate site, the incremental re-synchronization is restarted.
Example 32-6 Set up Metro Mirror from local to intermediate site
dscli> mkpprc -dev IBM.2107-7503461 -remotedev IBM.2107-7520781 -type mmir -mode full -incrementalresync recover 6200-6203:6600-6603 CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 6200:6600 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 6201:6601 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 6202:6602 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 6203:6603 successfully created. dscli> dscli> lspprc -dev IBM.2107-7503461 -l -fullid 6200-6203 ID State Reason Type Out Of Sync Tracks Tgt Read Src Cascade Tgt Cascade Date Suspended SourceLSS Timeout (secs) Critical Mode First Pass Status Incremental Resync Tgt Write GMIR CG PPRC CG ========================================================================================================================================== ====================================================================================================================== IBM.2107-7503461/6200:IBM.2107-7520781/6600 Full Duplex Metro Mirror 0 Disabled Enabled Invalid IBM.2107-7503461/62 120 Disabled Invalid Disabled Disabled Disabled Disabled IBM.2107-7503461/6201:IBM.2107-7520781/6601 Full Duplex Metro Mirror 0 Disabled Enabled Invalid IBM.2107-7503461/62 120 Disabled Invalid Disabled Disabled Disabled Disabled IBM.2107-7503461/6202:IBM.2107-7520781/6602 Full Duplex Metro Mirror 0 Disabled Enabled Invalid IBM.2107-7503461/62 120 Disabled Invalid Disabled Disabled Disabled Disabled IBM.2107-7503461/6203:IBM.2107-7520781/6603 Full Duplex Metro Mirror 0 Disabled Enabled Invalid IBM.2107-7503461/62 120 Disabled Invalid Disabled Disabled Disabled Disabled dscli> dscli> mkpprc -dev IBM.2107-7503461 -remotedev IBM.2107-7520781 -type mmir -mode nocp -incrementalresync enable 6200-6203:6600-6603 CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 6200:6600 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 6201:6601 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 6202:6602 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 6203:6603 successfully created. dscli> dscli>lspprc -dev IBM.2107-7503461 -l 6200-6203 ID State Reason Type Out Of Sync Tracks Tgt Read Src Cascade Tgt Cascade Date Suspended SourceLSS Timeout (secs) Critical Mode First Pass Status Incremental Resync Tgt Write GMIR CG PPRC CG ========================================================================================================================================== ========================================================================== 6200:6600 Full Duplex Metro Mirror 0 Disabled Enabled Invalid 62 120 Disabled Invalid Enabled Disabled Disabled Disabled 6201:6601 Full Duplex Metro Mirror 0 Disabled Enabled Invalid 62 120 Disabled Invalid Enabled Disabled Disabled Disabled 6202:6602 Full Duplex Metro Mirror 0 Disabled Enabled Invalid 62 120 Disabled Invalid Enabled Disabled Disabled Disabled 6203:6603 Full Duplex Metro Mirror 0 Disabled Enabled Invalid 62 120 Disabled Invalid Enabled Disabled Disabled Disabled dscli> dscli>

Step 8: Set up Global Mirror at the intermediate site


When Metro Mirror is in full duplex, Global Mirror can be started at the intermediate site and consistency groups will start forming successfully. Global Mirror is started by executing the command mkgmir at the intermediate site. Important: Verify that the out-of-sync tracks have drained for Metro Mirror before starting Global Mirror. Otherwise the consistency groups will start to fail when Global Mirror is started with the mkgmir command. Example 32-7 shows the steps to set up the Global Mirror. The showgmir commands at the end verify that the Global Mirror is forming consistency groups.
Example 32-7 Set up Global Mirror at intermediate site
dscli> mksession -dev IBM.2107-7520781 -lss 66 2 CMUC00145I mksession: Session 2 opened successfully. dscli> dscli> chsession -dev IBM.2107-7520781 -lss 66 -action add -volume 6600-6603 2 CMUC00147I chsession: Session 2 successfully modified. dscli> dscli> mkgmir -dev IBM.2107-7520781 -lss 66 -session 2 CMUC00162I mkgmir: Global Mirror for session 2 successfully started. dscli>


dscli> lssession -dev IBM.2107-7520781 66 2 LSS ID Session Status Volume VolumeStatus PrimaryStatus SecondaryStatus FirstPassComplete AllowCascading ============================================================================================================================= 66 02 CG In Progress 6600 Active Primary Copy Pending Secondary Full Duplex True Enable 66 02 CG In Progress 6601 Active Primary Copy Pending Secondary Full Duplex True Enable 66 02 CG In Progress 6602 Active Primary Copy Pending Secondary Full Duplex True Enable 66 02 CG In Progress 6603 Active Primary Copy Pending Secondary Full Duplex True Enable dscli> dscli> showgmir -metrics 66 ID IBM.2107-7520781/66 Total Failed CG Count 0 Total Successful CG Count 56 Successful CG Percentage 100 Failed CG after Last Success 0 Last Successful CG Form Time 11/17/2006 23:20:06 CET Coord. Time (seconds) 50 Interval Time (seconds) 0 Max Drain Time (seconds) 30 First Failure Control Unit First Failure LSS First Failure Status No Error First Failure Reason First Failure Master State Last Failure Control Unit Last Failure LSS Last Failure Status No Error Last Failure Reason Last Failure Master State Previous Failure Control Unit Previous Failure LSS Previous Failure Status No Error Previous Failure Reason Previous Failure Master State dscli> dscli> showgmir -metrics 66 ID IBM.2107-7520781/66 Total Failed CG Count 0 Total Successful CG Count 69 Successful CG Percentage 100 Failed CG after Last Success 0 Last Successful CG Form Time 11/17/2006 23:20:19 CET Coord. Time (seconds) 50 Interval Time (seconds) 0 Max Drain Time (seconds) 30 First Failure Control Unit First Failure LSS First Failure Status No Error First Failure Reason First Failure Master State Last Failure Control Unit Last Failure LSS Last Failure Status No Error Last Failure Reason Last Failure Master State Previous Failure Control Unit Previous Failure LSS Previous Failure Status No Error Previous Failure Reason Previous Failure Master State dscli> dscli>

32.3 Failure at the local site scenario


The failure at the local site scenario is used when the local site becomes unavailable or inaccessible. The steps that we use for this scenario proceed as follows:
- The loss of the local site or a planned outage, where production is moved to the intermediate site and Global Mirror continues from the intermediate to the remote site in a 2-site environment.
- Re-introducing the local site when it is back, where the current 2-site environment is moved back to a Metro/Global Mirror environment with production remaining at the intermediate site.
- Moving production back to the local site and going back to the original Metro/Global Mirror configuration.

32.3.1 Local site fails


This scenario starts with Metro/Global Mirror with Incremental Resync running from local to intermediate and cascading from intermediate to the remote site. When there is an unplanned or planned outage at the local site, the Metro/Global Mirror can then be transitioned to a 2-site mode running from the intermediate to the remote site. To get to this state, a series of tasks must be performed. The corresponding steps are first outlined without a detailed description or illustration of the commands. The steps are depicted in Figure 32-5.

Figure 32-5 Local site failure (the figure shows the local site with the primary production host and the A volumes, the intermediate site with the secondary production host and the B volumes, and the remote site with the C and D volumes, together with the numbered steps 1 to 3 of the failover)

You must perform the following steps to transition to a 2-site environment after the local site failure:
1. Fail over to the remote site.
2. Terminate Metro Mirror at the intermediate site.
3. Start applications at the intermediate site.

Step 1: Fail over to the remote site


First, a failover with the -force option is executed to the remote site devices. This allows changes to be tracked for later re-synchronization when going back to the local site devices. The failover with the -force option means that there is no validation of the remote site devices to check that they are secondary devices of the local site devices. Doing this will later allow us to make the remote site the primary for the local site devices (see Step 1: Failback Global Copy from remote to local site on page 526). When this is done, the Global Copy from the remote site to the local site will be in a cascade relation with the Global Copy from the intermediate to the remote site. Thus the -cascade option must be specified in this step as well.


Example 32-8 shows the execution of the failoverpprc command with the -force and -cascade options. The lspprc command, issued at the remote site, shows that now the Global Copy is cascaded to the Global Copy from intermediate to remote.
Example 32-8 Forced failoverpprc command from local to remote
dscli> failoverpprc -dev IBM.2107-1300561 -remotedev IBM.2107-1301651 -type gcp -cascade -force 2000-2007:2000-2007 CMUC00196I failoverpprc: Remote Mirror and Copy pair 2000:2000 successfully reversed. CMUC00196I failoverpprc: Remote Mirror and Copy pair 2001:2001 successfully reversed. CMUC00196I failoverpprc: Remote Mirror and Copy pair 2002:2002 successfully reversed. CMUC00196I failoverpprc: Remote Mirror and Copy pair 2003:2003 successfully reversed. CMUC00196I failoverpprc: Remote Mirror and Copy pair 2004:2004 successfully reversed. CMUC00196I failoverpprc: Remote Mirror and Copy pair 2005:2005 successfully reversed. CMUC00196I failoverpprc: Remote Mirror and Copy pair 2006:2006 successfully reversed. CMUC00196I failoverpprc: Remote Mirror and Copy pair 2007:2007 successfully reversed. dscli> dscli> lspprc -dev IBM.2107-1300561 -fullid 2000-2007 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass S tatus ================================================================================================================================================= ===== IBM.2107-1300561/2000:IBM.2107-1301651/2000 Suspended Host Source Global Copy IBM.2107-1300561/20 unknown Disabled True IBM.2107-1300561/2001:IBM.2107-1301651/2001 Suspended Host Source Global Copy IBM.2107-1300561/20 unknown Disabled True IBM.2107-1300561/2002:IBM.2107-1301651/2002 Suspended Host Source Global Copy IBM.2107-1300561/20 unknown Disabled True IBM.2107-1300561/2003:IBM.2107-1301651/2003 Suspended Host Source Global Copy IBM.2107-1300561/20 unknown Disabled True IBM.2107-1300561/2004:IBM.2107-1301651/2004 Suspended Host Source Global Copy IBM.2107-1300561/20 unknown Disabled True IBM.2107-1300561/2005:IBM.2107-1301651/2005 Suspended Host Source Global Copy IBM.2107-1300561/20 unknown Disabled True IBM.2107-1300561/2006:IBM.2107-1301651/2006 Suspended Host Source Global Copy IBM.2107-1300561/20 unknown Disabled True IBM.2107-1300561/2007:IBM.2107-1301651/2007 Suspended Host Source Global Copy IBM.2107-1300561/20 unknown Disabled True IBM.2107-1301261/2000:IBM.2107-1300561/2000 Target Copy Pending Global Copy IBM.2107-1301261/20 unknown Disabled Invalid IBM.2107-1301261/2001:IBM.2107-1300561/2001 Target Copy Pending Global Copy IBM.2107-1301261/20 unknown Disabled Invalid IBM.2107-1301261/2002:IBM.2107-1300561/2002 Target Copy Pending Global Copy IBM.2107-1301261/20 unknown Disabled Invalid IBM.2107-1301261/2003:IBM.2107-1300561/2003 Target Copy Pending Global Copy IBM.2107-1301261/20 unknown Disabled Invalid IBM.2107-1301261/2004:IBM.2107-1300561/2004 Target Copy Pending Global Copy IBM.2107-1301261/20 unknown Disabled Invalid IBM.2107-1301261/2005:IBM.2107-1300561/2005 Target Copy Pending Global Copy IBM.2107-1301261/20 unknown Disabled Invalid IBM.2107-1301261/2006:IBM.2107-1300561/2006 Target Copy Pending Global Copy IBM.2107-1301261/20 unknown Disabled Invalid IBM.2107-1301261/2007:IBM.2107-1300561/2007 Target Copy Pending Global Copy IBM.2107-1301261/20 unknown Disabled Invalid dscli> dscli>

Step 2: Terminate Metro Mirror at intermediate site


Next we terminate Metro Mirror at the intermediate site. By terminating Metro Mirror, the intermediate site devices will no longer have any knowledge of being a secondary device from the local site devices, which will allow production to be started and recovered at the intermediate site. In Example 32-9 the DSCLI command to terminate the Metro Mirror at the intermediate site is shown. With the following lspprc command issued at the intermediate site, you can see that only the relation to the remote site remains.
Example 32-9 Terminate Metro Mirror at intermediate site
dscli> rmpprc -quiet -dev IBM.2107-1301261 -unconditional -at tgt 2000-2007 CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2000 relationship successfully withdrawn. CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2001 relationship successfully withdrawn. CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2002 relationship successfully withdrawn. CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2003 relationship successfully withdrawn. CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2004 relationship successfully withdrawn. CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2005 relationship successfully withdrawn. CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2006 relationship successfully withdrawn. CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2007 relationship successfully withdrawn. dscli> dscli> lspprc -dev IBM.2107-1301261 -fullid 2000-2007 ID State Reason Type SourceLSS Timeout (secs) Critical Mode First Pass Status ========================================================================================================================================== ==== IBM.2107-1301261/2000:IBM.2107-1300561/2000 Copy Pending Global Copy IBM.2107-1301261/20 120 Disabled True IBM.2107-1301261/2001:IBM.2107-1300561/2001 Copy Pending Global Copy IBM.2107-1301261/20 120 Disabled True IBM.2107-1301261/2002:IBM.2107-1300561/2002 Copy Pending Global Copy IBM.2107-1301261/20 120 Disabled True IBM.2107-1301261/2003:IBM.2107-1300561/2003 Copy Pending Global Copy IBM.2107-1301261/20 120 Disabled True IBM.2107-1301261/2004:IBM.2107-1300561/2004 Copy Pending Global Copy IBM.2107-1301261/20 120 Disabled True IBM.2107-1301261/2005:IBM.2107-1300561/2005 Copy Pending Global Copy IBM.2107-1301261/20 120 Disabled True IBM.2107-1301261/2006:IBM.2107-1300561/2006 Copy Pending Global Copy IBM.2107-1301261/20 120 Disabled True IBM.2107-1301261/2007:IBM.2107-1300561/2007 Copy Pending Global Copy IBM.2107-1301261/20 120 Disabled True dscli>

dscli>


Step 3: Start applications to the intermediate site


Production can now be moved from the local site to the intermediate site and applications can be started. The out-of-sync bitmaps will have continued to be in use. Global Mirror will continue to operate from the intermediate site to the remote site until the local site is brought back and made available again.

32.3.2 Local site is back


When the local site is available again, Metro/Global Mirror with Incremental Resync can be restarted. When the 2-site mode is transitioned back to Incremental Resync Metro/Global Mirror, production will remain running at the intermediate site. In this configuration, Metro Mirror runs from the intermediate to the local site, and Global Mirror runs from the local to the remote site. Figure 32-6 illustrates the steps to start up Metro/Global Mirror with Incremental Resync when the local site is available to use.

Figure 32-6 Local site is back (the figure shows the secondary production host and the B volumes at the intermediate site, the A volumes at the local site, and the C and D volumes at the remote site, together with the numbered steps 1 to 8 of the procedure)

You would take the following steps when re-introducing the local site and moving back to a Metro/Global Mirror with Incremental Resync environment:
1. Fail back Global Copy from the remote to the local site.
2. Start incremental re-synchronization at the intermediate site.
3. Terminate Global Mirror and suspend Global Copy from the intermediate to the remote site.
4. Suspend Global Copy from the remote site to the local site.
5. Terminate Global Copy at the remote site.
6. Reverse Global Copy to run from the local site to the remote site.
7. Start Metro Mirror from the intermediate site to the local site.
8. Start Global Mirror at the local site.

Step 1: Failback Global Copy from remote to local site


When the local site is available again and all relationships are cleaned up, re-synchronization of the local site can be started by starting a Global Copy from the remote to the local site (this is possible because of the forced failover previously executed from the remote to the local site; see Step 1: Fail over to the remote site on page 523). The Global Mirror from the intermediate to the remote site will provide the recovery capability to resynchronize the local site. The local site's out-of-sync bitmaps are used to re-synchronize changes that might have occurred during the failure. The remote site's out-of-sync bitmaps contain changes made after the failure. There might still be changes at the intermediate site that have not made it to the remote site yet. In this case, a full copy is performed. Note: Remote to local site out-of-sync tracks will need to be drained completely before continuing to the next step. This will ensure that the local site has been fully re-synchronized.

Tip: When the local site is back, in addition to cleaning up the remaining relations, it might also be necessary to clear SCSI reservations remaining from the host access before the failure occurred. Example 32-10 shows the command execution.
Example 32-10 Failback Global Copy from remote to local
dscli> failbackpprc -dev IBM.2107-1301651 -remotedev IBM.2107-1300561 -type gcp -cascade 2000-2007:2000-2007
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2000:2000 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2001:2001 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2002:2002 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2003:2003 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2004:2004 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2005:2005 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2006:2006 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2007:2007 successfully failed back.
dscli>
dscli>

Step 2: Start incremental re-synchronization at the intermediate site


Before terminating the Global Mirror relationship from the intermediate to the remote site, incremental re-synchronization is enabled without initialization from the intermediate to the local site. This utilizes the current Global Mirror relationship and creates change recording bitmaps at the intermediate site to keep track of all updates occurring from production at the intermediate site. The updates at the intermediate site will be used when the local site is re-synchronized after the remote site is no longer getting data from the intermediate site, as we will see in Step 7: Start Metro Mirror from intermediate to local site on page 530. To enable incremental re-synchronization, the option -mode nocp must be used with the mkpprc command. Example 32-11 shows the command to enable incremental re-synchronization at the intermediate site. To verify, the lspprc -l command is issued.


Example 32-11 Start incremental re-synchronization at intermediate site


dscli> mkpprc -dev IBM.2107-1301261 -remotedev IBM.2107-1301651 -type mmir -mode nocp -incrementalresync enable 2000-2007:2000-2007 CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2000:2000 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2001:2001 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2002:2002 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2003:2003 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2004:2004 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2005:2005 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2006:2006 successfully created. CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2007:2007 successfully created. dscli> dscli> lspprc -dev IBM.2107-1301261 -l 2000-2007 ID State Reason Type Out Of Sync Tracks Tgt Read Src Cascade Tgt Cascade Date Suspended SourceLSS Timeout (secs) Critical Mode F irst Pass Status Incremental Resync Tgt Write GMIR CG PPRC CG ===================================================================================================================================================== =============================================================== 2000:2000 Full Duplex Metro Mirror 0 Disabled Enabled Invalid 20 120 Disabled I nvalid Enabled Disabled Disabled Disabled 2001:2001 Full Duplex Metro Mirror 0 Disabled Enabled Invalid 20 120 Disabled I nvalid Enabled Disabled Disabled Disabled 2002:2002 Full Duplex Metro Mirror 0 Disabled Enabled Invalid 20 120 Disabled I nvalid Enabled Disabled Disabled Disabled 2003:2003 Full Duplex Metro Mirror 0 Disabled Enabled Invalid 20 120 Disabled I nvalid Enabled Disabled Disabled Disabled 2004:2004 Full Duplex Metro Mirror 0 Disabled Enabled Invalid 20 120 Disabled I nvalid Enabled Disabled Disabled Disabled 2005:2005 Full Duplex Metro Mirror 0 Disabled Enabled Invalid 20 120 Disabled I nvalid Enabled Disabled Disabled Disabled 2006:2006 Full Duplex Metro Mirror 0 Disabled Enabled Invalid 20 120 Disabled I nvalid Enabled Disabled Disabled Disabled 2007:2007 Full Duplex Metro Mirror 0 Disabled Enabled Invalid 20 120 Disabled I nvalid Enabled Disabled Disabled Disabled dscli> dscli>

Step 3: Terminate Global Mirror and suspend Global Copy


Now that updates to the active volumes are being recorded in the change recording bitmaps at the intermediate site, the Global Mirror from the intermediate to the remote site can be terminated. When this scenario is executed only for the volumes that belong to one or more application hosts, but not for all volumes, the Global Mirror cannot be stopped, because it is still used to form consistency groups at the remote site for the remaining applications. In this case the volumes are instead removed from the Global Mirror session. The consistency groups of those volumes at the FlashCopy targets on the remote site will begin to age. In addition, to stop data transfer to the remote site, the Global Copy relationship from the intermediate to the remote site is suspended. By suspending the Global Copy relationship, the intermediate and remote site volumes will both be in a suspended state and the change recording bitmaps will remain active at the intermediate site. In Example 32-12 the volumes of a certain application host are removed from the session. To ensure that a final consistency group is formed, the Global Mirror is paused first. When the volumes have been removed from the session, the Global Mirror is resumed. Finally, the Global Copy from the intermediate to the remote site is suspended.
Example 32-12 Removing volumes from the session and suspending the Global Copy intermediate to remote
dscli> pausegmir -dev IBM.2107-1301261 -lss 20 -session ad
CMUC00163I pausegmir: Global Mirror for session ad successfully paused.
dscli>
dscli> chsession -dev IBM.2107-1301261 -lss 20 -action remove -volume 2000-2007 ad
CMUC00147I chsession: Session ad successfully modified.
dscli>
dscli> resumegmir -dev IBM.2107-1301261 -lss 20 -session ad
CMUC00164I resumegmir: Global Mirror for session ad successfully resumed.
dscli>
dscli> lssession -dev IBM.2107-1301261 20 ad
LSS ID Session Status Volume VolumeStatus PrimaryStatus SecondaryStatus FirstPassComplete AllowCascading
========================================================================================================
20     AD
dscli>
dscli> pausepprc -dev IBM.2107-1301261 -remotedev IBM.2107-1300561 2000-2007:2000-2007
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2000:2000 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2001:2001 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2002:2002 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2003:2003 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2004:2004 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2005:2005 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2006:2006 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2007:2007 relationship successfully paused.
dscli>
dscli> lspprc -dev IBM.2107-1301261 -remotedev IBM.2107-1300561 -fullid -fmt default 2000-2007
ID                                          State     Reason      Type        SourceLSS           Timeout (secs) Critical Mode First Pass Status
=================================================================================================================================================
IBM.2107-1301261/2000:IBM.2107-1300561/2000 Suspended Host Source Global Copy IBM.2107-1301261/20 120            Disabled      True
IBM.2107-1301261/2001:IBM.2107-1300561/2001 Suspended Host Source Global Copy IBM.2107-1301261/20 120            Disabled      True
IBM.2107-1301261/2002:IBM.2107-1300561/2002 Suspended Host Source Global Copy IBM.2107-1301261/20 120            Disabled      True
IBM.2107-1301261/2003:IBM.2107-1300561/2003 Suspended Host Source Global Copy IBM.2107-1301261/20 120            Disabled      True
IBM.2107-1301261/2004:IBM.2107-1300561/2004 Suspended Host Source Global Copy IBM.2107-1301261/20 120            Disabled      True
IBM.2107-1301261/2005:IBM.2107-1300561/2005 Suspended Host Source Global Copy IBM.2107-1301261/20 120            Disabled      True
IBM.2107-1301261/2006:IBM.2107-1300561/2006 Suspended Host Source Global Copy IBM.2107-1301261/20 120            Disabled      True
IBM.2107-1301261/2007:IBM.2107-1300561/2007 Suspended Host Source Global Copy IBM.2107-1301261/20 120            Disabled      True
dscli>

Step 4: Suspend Global Copy remote to local site


In preparation for reversing Global Copy at the remote site, the Global Copy from the remote to the local site that was created in Step 1: Failback Global Copy from remote to local site on page 526 is suspended. This should be done after the remote-to-local out-of-sync bitmaps have completely drained the last of the updates from the remote site.

Note: The remote-to-local out-of-sync tracks must be drained completely before continuing to the next step. This ensures that the local site has been fully re-synchronized.

In Example 32-13, we show the command to suspend the Global Copy from remote to local.
Example 32-13 Suspend Global Copy from remote to local site
dscli> pausepprc -dev IBM.2107-1300561 -remotedev IBM.2107-1301651 2000-2007:2000-2007
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2000:2000 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2001:2001 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2002:2002 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2003:2003 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2004:2004 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2005:2005 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2006:2006 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2007:2007 relationship successfully paused.
dscli>
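Before issuing the suspend shown in Example 32-13, the drain of the out-of-sync tracks can also be confirmed with a small script instead of repeated manual queries. The following is only a minimal sketch: it assumes the DS CLI single-shot invocation syntax, a profile file (here called remote.profile, a hypothetical name) that points to the remote site HMC and holds the credentials, and that -fmt delim produces comma-delimited output on the installed DS CLI level.

#!/bin/sh
# Poll the remote-to-local Global Copy pairs until the Out Of Sync Tracks
# column (the fifth column of the lspprc -l output shown above) is 0 for all pairs.
while :; do
    total=$(dscli -cfg remote.profile lspprc -dev IBM.2107-1300561 -l -fmt delim 2000-2007 |
            awk -F',' '$5 ~ /^[0-9]+$/ { sum += $5 } END { print sum + 0 }')
    echo "$(date) remaining out-of-sync tracks: $total"
    [ "$total" -eq 0 ] && break
    sleep 60
done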

Step 5: Terminate Global Copy intermediate to remote at the remote site


Although Global Copy was suspended from the intermediate to the remote site, the remote site volumes still have the status of Global Copy secondary devices. For the remote site devices to lose knowledge of being a secondary of the intermediate site, the intermediate-to-remote Global Copy relationship is terminated at the remote site. This termination does not change the intermediate site's state and therefore allows the out-of-sync bitmaps to remain in operation at the intermediate site. The state of the intermediate site remains primary suspended, while the remote site no longer shows as a suspended target and is terminated. This step is necessary to allow the failback to the remote site when reversing Global Copy in Step 6: Reverse Global Copy to run local to remote site on page 529.


Example 32-14 shows how to terminate the Global Copy at the remote site.
Example 32-14 Terminate Global Copy from intermediate to remote at remote
dscli> rmpprc -quiet -dev IBM.2107-1300561 -unconditional -at tgt 2000-2007
CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2000 relationship successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2001 relationship successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2002 relationship successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2003 relationship successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2004 relationship successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2005 relationship successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2006 relationship successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2007 relationship successfully withdrawn.
dscli>

Step 6: Reverse Global Copy to run local to remote site


When the Global Copy out-of-sync tracks have been transferred and drained to the local site, the Global Copy relationship from the remote to the local site can be reversed. To reverse the relationship, a failover and a failback must be done before creating the Global Copy in the reverse direction. The local-to-remote failover is executed at the local site with Global Copy mode and the cascading option. The failover changes the status of the local site volumes to primary suspended. Next, the failback from the local to the remote site is executed at the local site, again with cascading allowed and Global Copy mode. A successful failover and failback reverses the Global Copy, which now runs from the local to the remote site. Example 32-15 shows how to reverse the direction of the Global Copy between local and remote.
Example 32-15 Reverse the direction of the Global Copy between local and remote
dscli> failoverpprc -dev IBM.2107-1301651 -remotedev IBM.2107-1300561 -type gcp -cascade 2000-2007:2000-2007
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2000:2000 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2001:2001 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2002:2002 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2003:2003 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2004:2004 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2005:2005 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2006:2006 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2007:2007 successfully reversed.
dscli>
dscli> failbackpprc -dev IBM.2107-1301651 -remotedev IBM.2107-1300561 -type gcp -cascade 2000-2007:2000-2007
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2000:2000 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2001:2001 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2002:2002 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2003:2003 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2004:2004 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2005:2005 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2006:2006 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2007:2007 successfully failed back.
dscli>


Step 7: Start Metro Mirror from intermediate to local site


Metro Mirror is now established in the direction from the intermediate to the local site, first with the Incremental Resync override option (which forces the establish without checking the local site for existing relationships) and then again with Incremental Resync initialization. The command to establish Metro Mirror is executed at the intermediate site. The first establish also merges the change recording bitmaps that were created in Step 2: Start incremental re-synchronization at the intermediate site on page 526 with the out-of-sync bitmaps at the intermediate site. Then Metro Mirror is established with Incremental Resync initialized, which creates new change recording bitmaps for Metro Mirror at the intermediate site.

Note: At this point, Metro Mirror must become full duplex, and Global Copy must complete a first pass before going on to the next step.

In Example 32-16 the Metro Mirror from intermediate to local is established with the option -incrementalresync override, which causes the copying of all tracks recorded in the change recording bitmap at the intermediate site. This can be monitored with the lspprc -l command, shown next in the example. When all tracks have been drained, the incremental re-synchronization is enabled with the option -incrementalresync enable.
Example 32-16 Start Metro Mirror from intermediate to local with incremental re-synchronization
dscli> mkpprc -dev IBM.2107-1301261 -remotedev IBM.2107-1301651 -type mmir -mode nocp -incrementalresync override 2000-2007:2000-2007
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2000:2000 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2001:2001 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2002:2002 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2003:2003 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2004:2004 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2005:2005 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2006:2006 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2007:2007 successfully created.
dscli> lspprc -dev IBM.2107-1301651 -l 2000-2007
ID State Reason Type Out Of Sync Tracks Tgt Read Src Cascade Tgt Cascade Date Suspended SourceLSS Timeout (secs) Critical Mode First Pass Status Incremental Resync Tgt Write GMIR CG PPRC CG
=============================================================================================================================================================================================
2000:2000 Target Full Duplex Metro Mirror 0 Disabled Invalid Enabled 20 unknown Disabled Invalid Disabled Disabled Disabled Disabled
2000:2000 Copy Pending Global Copy 9 Disabled Enabled Invalid 20 unknown Disabled True Disabled Disabled Disabled Disabled
2001:2001 Target Full Duplex Metro Mirror 0 Disabled Invalid Enabled 20 unknown Disabled Invalid Disabled Disabled Disabled Disabled
2001:2001 Copy Pending Global Copy 7 Disabled Enabled Invalid 20 unknown Disabled True Disabled Disabled Disabled Disabled
2002:2002 Target Full Duplex Metro Mirror 0 Disabled Invalid Enabled 20 unknown Disabled Invalid Disabled Disabled Disabled Disabled
2002:2002 Copy Pending Global Copy 9 Disabled Enabled Invalid 20 unknown Disabled True Disabled Disabled Disabled Disabled
2003:2003 Target Full Duplex Metro Mirror 0 Disabled Invalid Enabled 20 unknown Disabled Invalid Disabled Disabled Disabled Disabled
2003:2003 Copy Pending Global Copy 7 Disabled Enabled Invalid 20 unknown Disabled True Disabled Disabled Disabled Disabled
2004:2004 Target Full Duplex Metro Mirror 0 Disabled Invalid Enabled 20 unknown Disabled Invalid Disabled Disabled Disabled Disabled
2004:2004 Copy Pending Global Copy 8 Disabled Enabled Invalid 20 unknown Disabled True Disabled Disabled Disabled Disabled
2005:2005 Target Full Duplex Metro Mirror 0 Disabled Invalid Enabled 20 unknown Disabled Invalid Disabled Disabled Disabled Disabled
2005:2005 Copy Pending Global Copy 11 Disabled Enabled Invalid 20 unknown Disabled True Disabled Disabled Disabled Disabled
2006:2006 Target Full Duplex Metro Mirror 0 Disabled Invalid Enabled 20 unknown Disabled Invalid Disabled Disabled Disabled Disabled
2006:2006 Copy Pending Global Copy 6 Disabled Enabled Invalid 20 unknown Disabled True Disabled Disabled Disabled Disabled
2007:2007 Target Full Duplex Metro Mirror 0 Disabled Invalid Enabled 20 unknown Disabled Invalid Disabled Disabled Disabled Disabled
2007:2007 Copy Pending Global Copy 14 Disabled Enabled Invalid 20 unknown Disabled True Disabled Disabled Disabled Disabled
dscli>

#
# Wait until all Out Of Sync Tracks has drained
#

dscli> mkpprc -dev IBM.2107-1301261 -remotedev IBM.2107-1301651 -type mmir -mode nocp -incrementalresync enable 2000-2007:2000-2007
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2000:2000 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2001:2001 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2002:2002 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2003:2003 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2004:2004 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2005:2005 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2006:2006 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2007:2007 successfully created.
dscli>
dscli> lspprc -dev IBM.2107-1301261 -l 2000-2007
ID State Reason Type Out Of Sync Tracks Tgt Read Src Cascade Tgt Cascade Date Suspended SourceLSS Timeout (secs) Critical Mode First Pass Status Incremental Resync Tgt Write GMIR CG PPRC CG
=============================================================================================================================================================================================
2000:2000 Full Duplex Metro Mirror 0 Disabled Enabled Invalid 20 120 Disabled Invalid Enabled Disabled Disabled Disabled
2001:2001 Full Duplex Metro Mirror 0 Disabled Enabled Invalid 20 120 Disabled Invalid Enabled Disabled Disabled Disabled
2002:2002 Full Duplex Metro Mirror 0 Disabled Enabled Invalid 20 120 Disabled Invalid Enabled Disabled Disabled Disabled
2003:2003 Full Duplex Metro Mirror 0 Disabled Enabled Invalid 20 120 Disabled Invalid Enabled Disabled Disabled Disabled
2004:2004 Full Duplex Metro Mirror 0 Disabled Enabled Invalid 20 120 Disabled Invalid Enabled Disabled Disabled Disabled
2005:2005 Full Duplex Metro Mirror 0 Disabled Enabled Invalid 20 120 Disabled Invalid Enabled Disabled Disabled Disabled
2006:2006 Full Duplex Metro Mirror 0 Disabled Enabled Invalid 20 120 Disabled Invalid Enabled Disabled Disabled Disabled
2007:2007 Full Duplex Metro Mirror 0 Disabled Enabled Invalid 20 120 Disabled Invalid Enabled Disabled Disabled Disabled
dscli>
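Before continuing with Step 8, the two preconditions from the note above can be checked quickly. The following is only a sketch: the profile names are hypothetical, and the grep counts simply have to equal the number of volume pairs (eight in this scenario).

# Metro Mirror intermediate-to-local pairs must all report Full Duplex.
dscli -cfg intermediate.profile lspprc -dev IBM.2107-1301261 -l 2000-2007 | grep -c "Full Duplex"

# Global Copy local-to-remote pairs must all report First Pass Status = True.
dscli -cfg local.profile lspprc -dev IBM.2107-1301651 -l 2000-2007 | grep -c " True "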

Step 8: Start Global Mirror at local site


When Metro Mirror is full duplex, Global Mirror can be started at the local site. If not already done, a Global Mirror session must be created and populated with the Global Copy primary volumes at the local site. When Global Mirror is started at the local site, consistency groups are formed and the FlashCopy targets at the remote site are refreshed.

Note: In some cases, production can remain running at the intermediate site with a 3-site Metro/Global Mirror disaster recovery solution in place. In this case, no further steps are taken. In other cases, it might be necessary to restore production at the local site. In that case, proceed to 32.3.3, Returning to the original configuration on page 531.
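For the start of Global Mirror described in this step, no example of its own is given; the following sketch outlines the commands that are typically involved. The session ID 10 matches the session that is removed again in Example 32-22, but the mksession and mkgmir forms shown here are only the basic ones and should be verified against the installed DS CLI level (tuning options such as the consistency group interval are omitted).

dscli> mksession -dev IBM.2107-1301651 -lss 20 10
dscli> chsession -dev IBM.2107-1301651 -lss 20 -action add -volume 2000-2007 10
dscli> mkgmir -dev IBM.2107-1301651 -lss 20 -session 10
dscli> showgmir -dev IBM.2107-1301651 20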

32.3.3 Returning to the original configuration


In this section we discuss the steps for moving production back to the local site from the intermediate site and then restoring the original Incremental Resync Metro/Global Mirror environment.


Figure 32-7 illustrates the steps to move Incremental Resync Metro/Global Mirror from the intermediate site to the local site.


Figure 32-7 Move incremental Resync Metro/Global Mirror back to local site

The following steps move production back to the local site and restore Incremental Resync Metro/Global Mirror to its original configuration:
1. Stop applications at the intermediate site.
2. Suspend Metro Mirror from the intermediate site to the local site.
3. Fail over Global Copy from the remote to the intermediate site.
4. Terminate Metro Mirror at the local site.
5. Start applications at the local site.
6. Fail back Global Copy from the remote to the intermediate site.
7. Start Incremental Resync at the local site.
8. Terminate Global Mirror at the local site.
9. Suspend and terminate Global Copy from the local to the remote site.
10. Suspend Global Copy from the remote to the intermediate site.
11. Reverse Global Copy to run from the intermediate to the remote site.
12. Establish Metro Mirror from the local to the intermediate site.
13. Start Global Mirror at the intermediate site.

Step 1: Stop applications at the intermediate site


To prepare for moving production back to the local site, all applications that are currently running at the intermediate site are stopped.

Step 2: Suspend Metro Mirror from intermediate to local


Before moving back to the local site, Metro Mirror is suspended from the intermediate to the local site. This stops data from being copied to the local site.


Example 32-17 shows the DSCLI command to suspend the Metro Mirror.
Example 32-17 Suspend Metro Mirror from intermediate to local
dscli> pausepprc -dev IBM.2107-1301261 -remotedev IBM.2107-1301651 2000-2007:2000-2007
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2000:2000 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2001:2001 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2002:2002 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2003:2003 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2004:2004 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2005:2005 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2006:2006 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2007:2007 relationship successfully paused.
dscli>

Step 3: Fail over Global Copy from remote to intermediate site


Next, a failover with the force and cascading options is issued from the remote to the intermediate site. The force option bypasses the validation at the remote site that would otherwise check whether the remote is a secondary of the intermediate, thus allowing the failover to succeed. The effect of the failover is to change the state of the remote site devices to suspended primary. Example 32-18 shows the DSCLI command.
Example 32-18 Fail over Global Copy from remote to intermediate site
dscli> failoverpprc -dev IBM.2107-1300561 -remotedev IBM.2107-1301261 -type gcp -cascade -force 2000-2007:2000-2007
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2000:2000 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2001:2001 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2002:2002 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2003:2003 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2004:2004 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2005:2005 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2006:2006 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2007:2007 successfully reversed.
dscli>

Step 4: Terminate Metro Mirror at local


In Step 7: Start Metro Mirror from intermediate to local site on page 530 the volumes at the local site became Metro Mirror secondaries. In the next step the applications should be started at the local site, which is not possible as long as these volumes are secondary volumes of the Metro Mirror. Because the Metro Mirror was created with Incremental Resync enabled, a failover of the Metro Mirror to the local site would discard the change recording bitmap at the intermediate site. To avoid this, the Metro Mirror is now terminated only at the local site. This clears the local site from having any knowledge of being a secondary to the intermediate site. Example 32-19 shows the command.
Example 32-19 Terminate Metro Mirror at local
dscli> rmpprc -quiet -dev IBM.2107-1301651 -unconditional -at tgt 2000-2007
CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2000 relationship successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2001 relationship successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2002 relationship successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2003 relationship successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2004 relationship successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2005 relationship successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2006 relationship successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2007 relationship successfully withdrawn.
dscli>

Step 5: Start applications at local site


Production can now be moved to the local site by restarting applications at the local site.

Step 6: Fail back remote to intermediate


With production now running at the local site and Global Copy still running from the local to the remote site, the next steps prepare for reinstating Global Copy at the intermediate site. A failback with the force option is executed from the remote to the intermediate site. The local site's out-of-sync bitmaps track changes that might have occurred since the swap, and the remote site's out-of-sync bitmaps contain the changes made after the move back to the local site. The intermediate site's Incremental Resync change recording bitmaps are released during the failback. There might still be changes at the remote site that have not yet made it to the intermediate site and still need to be transferred.

Note: The writes from the remote to the intermediate site must be completely drained, or the first pass must be complete, before continuing to the next step. All changes at the local site should already have been updated to the remote site, which in turn updates the intermediate site through this failback with force.

Example 32-20 shows how to issue the failbackpprc command with the -force option. To monitor the Out Of Sync Tracks, an lspprc -l command is issued at the remote site. The output shows that the Global Copy from remote to intermediate is still a cascaded relation of the Global Copy from local to remote.
Example 32-20 Fail back remote to intermediate
dscli> failbackpprc -dev IBM.2107-1300561 -remotedev IBM.2107-1301261 -type gcp -force 2000-2007:2000-2007
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2000:2000 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2001:2001 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2002:2002 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2003:2003 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2004:2004 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2005:2005 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2006:2006 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2007:2007 successfully failed back.
dscli>
dscli> lspprc -dev IBM.2107-1300561 -l 2000-2007
ID State Reason Type Out Of Sync Tracks Tgt Read Src Cascade Tgt Cascade Date Suspended SourceLSS Timeout (secs) Critical Mode First Pass Status Incremental Resync Tgt Write GMIR CG PPRC CG
=============================================================================================================================================================================================
2000:2000 Target Copy Pending Global Copy 0 Disabled Invalid Enabled 20 unknown Disabled Invalid Disabled Disabled Disabled Disabled
2000:2000 Copy Pending Global Copy 0 Disabled Enabled Invalid 20 unknown Disabled True Disabled Disabled Disabled Disabled
2001:2001 Target Copy Pending Global Copy 0 Disabled Invalid Enabled 20 unknown Disabled Invalid Disabled Disabled Disabled Disabled
2001:2001 Copy Pending Global Copy 0 Disabled Enabled Invalid 20 unknown Disabled True Disabled Disabled Disabled Disabled
2002:2002 Target Copy Pending Global Copy 0 Disabled Invalid Enabled 20 unknown Disabled Invalid Disabled Disabled Disabled Disabled
2002:2002 Copy Pending Global Copy 0 Disabled Enabled Invalid 20 unknown Disabled True Disabled Disabled Disabled Disabled
2003:2003 Target Copy Pending Global Copy 0 Disabled Invalid Enabled 20 unknown Disabled Invalid Disabled Disabled Disabled Disabled
2003:2003 Copy Pending Global Copy 0 Disabled Enabled Invalid 20 unknown Disabled True Disabled Disabled Disabled Disabled
2004:2004 Target Copy Pending Global Copy 0 Disabled Invalid Enabled 20 unknown Disabled Invalid Disabled Disabled Disabled Disabled
2004:2004 Copy Pending Global Copy 0 Disabled Enabled Invalid 20 unknown Disabled True Disabled Disabled Disabled Disabled
2005:2005 Target Copy Pending Global Copy 0 Disabled Invalid Enabled 20 unknown Disabled Invalid Disabled Disabled Disabled Disabled
2005:2005 Copy Pending Global Copy 0 Disabled Enabled Invalid 20 unknown Disabled True Disabled Disabled Disabled Disabled
2006:2006 Target Copy Pending Global Copy 0 Disabled Invalid Enabled 20 unknown Disabled Invalid Disabled Disabled Disabled Disabled
2006:2006 Copy Pending Global Copy 0 Disabled Enabled Invalid 20 unknown Disabled True Disabled Disabled Disabled Disabled
2007:2007 Target Copy Pending Global Copy 0 Disabled Invalid Enabled 20 unknown Disabled Invalid Disabled Disabled Disabled Disabled
2007:2007 Copy Pending Global Copy 0 Disabled Enabled Invalid 20 unknown Disabled True Disabled Disabled Disabled Disabled
dscli>
dscli> lspprc -dev IBM.2107-1300561 -fullid 2000-2007
ID                                          State               Reason Type        SourceLSS           Timeout (secs) Critical Mode First Pass Status
=====================================================================================================================================================
IBM.2107-1300561/2000:IBM.2107-1301261/2000 Copy Pending               Global Copy IBM.2107-1300561/20 unknown        Disabled      True
IBM.2107-1300561/2001:IBM.2107-1301261/2001 Copy Pending               Global Copy IBM.2107-1300561/20 unknown        Disabled      True
IBM.2107-1300561/2002:IBM.2107-1301261/2002 Copy Pending               Global Copy IBM.2107-1300561/20 unknown        Disabled      True
IBM.2107-1300561/2003:IBM.2107-1301261/2003 Copy Pending               Global Copy IBM.2107-1300561/20 unknown        Disabled      True
IBM.2107-1300561/2004:IBM.2107-1301261/2004 Copy Pending               Global Copy IBM.2107-1300561/20 unknown        Disabled      True
IBM.2107-1300561/2005:IBM.2107-1301261/2005 Copy Pending               Global Copy IBM.2107-1300561/20 unknown        Disabled      True
IBM.2107-1300561/2006:IBM.2107-1301261/2006 Copy Pending               Global Copy IBM.2107-1300561/20 unknown        Disabled      True
IBM.2107-1300561/2007:IBM.2107-1301261/2007 Copy Pending               Global Copy IBM.2107-1300561/20 unknown        Disabled      True
IBM.2107-1301651/2000:IBM.2107-1300561/2000 Target Copy Pending        Global Copy IBM.2107-1301651/20 unknown        Disabled      Invalid
IBM.2107-1301651/2001:IBM.2107-1300561/2001 Target Copy Pending        Global Copy IBM.2107-1301651/20 unknown        Disabled      Invalid
IBM.2107-1301651/2002:IBM.2107-1300561/2002 Target Copy Pending        Global Copy IBM.2107-1301651/20 unknown        Disabled      Invalid
IBM.2107-1301651/2003:IBM.2107-1300561/2003 Target Copy Pending        Global Copy IBM.2107-1301651/20 unknown        Disabled      Invalid
IBM.2107-1301651/2004:IBM.2107-1300561/2004 Target Copy Pending        Global Copy IBM.2107-1301651/20 unknown        Disabled      Invalid
IBM.2107-1301651/2005:IBM.2107-1300561/2005 Target Copy Pending        Global Copy IBM.2107-1301651/20 unknown        Disabled      Invalid
IBM.2107-1301651/2006:IBM.2107-1300561/2006 Target Copy Pending        Global Copy IBM.2107-1301651/20 unknown        Disabled      Invalid
IBM.2107-1301651/2007:IBM.2107-1300561/2007 Target Copy Pending        Global Copy IBM.2107-1301651/20 unknown        Disabled      Invalid
dscli>
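As the note above states, the writes from the remote to the intermediate site should be drained, or the first pass complete, before continuing. The following one-liner is only a sketch of such a check: the profile name is hypothetical, comma-delimited output from -fmt delim is assumed, and the column positions (5 for Out Of Sync Tracks, 13 for First Pass Status) follow the lspprc -l header shown in the example above.

dscli -cfg remote.profile lspprc -dev IBM.2107-1300561 -remotedev IBM.2107-1301261 -l -fmt delim 2000-2007 |
    awk -F',' '$5 ~ /^[0-9]+$/ { pairs++; oos += $5; if ($13 == "True") done++ }
               END { printf "pairs=%d first-pass-complete=%d out-of-sync-tracks=%d\n", pairs, done, oos }'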

Step 7: Start incremental re-synchronization at local site


Incremental re-synchronization is established with Global Copy for the local-to-remote relationship. The Incremental Resync function is enabled without initialization of the change recording bitmap (enablenoinit option). This tracks the changes made at the local site so that the intermediate site can be re-synchronized in a later step, after the remote site is no longer receiving updates from the local site. Example 32-21 shows the DSCLI command to start incremental re-synchronization at the local site.
Example 32-21 Start incremental resync at local site
dscli> mkpprc -dev IBM.2107-1301651 -remotedev IBM.2107-1300561 -type gcp -mode nocp -cascade -incrementalresync enablenoinit 2000-2007:2000-2007
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2000:2000 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2001:2001 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2002:2002 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2003:2003 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2004:2004 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2005:2005 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2006:2006 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2007:2007 successfully created.
dscli>

Step 8: Terminate Global Mirror at local site


To begin the process of moving back to a Metro/Global Mirror environment with production running at the local site, Global Mirror is terminated. Global Mirror is currently running from the local to the remote site, with the FlashCopy targets being updated at the remote site. When this relationship is terminated, the consistency groups at the remote FlashCopy targets begin to age. Example 32-22 shows the commands to stop Global Mirror and remove the session.
Example 32-22 Terminate Global Mirror at local
dscli> rmgmir -dev IBM.2107-1301651 -quiet -lss 20 -session 10
CMUC00165I rmgmir: Global Mirror for session 10 successfully stopped.
dscli> chsession -dev IBM.2107-1301651 -lss 20 -action remove -volume 2000-2007 10
CMUC00147I chsession: Session 10 successfully modified.
dscli> rmsession -dev IBM.2107-1301651 -quiet -lss 20 10
CMUC00146I rmsession: Session 10 closed successfully.
dscli>

Step 9: Suspend and remove Global Copy local to remote site


Now that Global Mirror (local to remote) has been terminated, the Global Copy running from the local to the remote site can be suspended and terminated. By suspending Global Copy, data stops being copied to the remote site. This allows the re-synchronization from the remote to the intermediate site to complete. When the re-synchronization of the intermediate site is complete, the Global Copy from the local to the remote site can be terminated at the remote site. By terminating it only at the remote site, the remote site no longer has the status of a Global Copy secondary, which allows a failback from the intermediate to the remote site in a later step. In addition, the local site continues to have out-of-sync bitmaps in operation, with its status being suspended primary.

In Example 32-23 the Global Copy from the local to the remote site is suspended. Subsequently, the Global Copy from remote to intermediate is queried to check that all Out Of Sync Tracks have been drained. If this is the case, the Global Copy relation at the remote site is removed.
Example 32-23 Suspend and remove Global Copy local to remote at remote
dscli> pausepprc -dev IBM.2107-1301651 -remotedev IBM.2107-1300561 2000-2007:2000-2007
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2000:2000 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2001:2001 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2002:2002 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2003:2003 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2004:2004 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2005:2005 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2006:2006 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2007:2007 relationship successfully paused.
dscli>

#
# Wait until all OOS has drained from remote to intermediate
#

dscli> lspprc -dev IBM.2107-1300561 -remotedev IBM.2107-1301261 -l -fmt default 2000-2007
ID State Reason Type Out Of Sync Tracks Tgt Read Src Cascade Tgt Cascade Date Suspended SourceLSS Timeout (secs) Critical Mode First Pass Status Incremental Resync Tgt Write GMIR CG PPRC CG
=============================================================================================================================================================================================
2000:2000 Target Suspended Update Target Global Copy 0 Disabled Invalid Enabled 20 unknown Disabled Invalid Disabled Disabled Disabled Disabled
2000:2000 Copy Pending Global Copy 0 Disabled Enabled Invalid 20 unknown Disabled True Disabled Disabled Disabled Disabled
2001:2001 Target Suspended Update Target Global Copy 0 Disabled Invalid Enabled 20 unknown Disabled Invalid Disabled Disabled Disabled Disabled
2001:2001 Copy Pending Global Copy 0 Disabled Enabled Invalid 20 unknown Disabled True Disabled Disabled Disabled Disabled
2002:2002 Target Suspended Update Target Global Copy 0 Disabled Invalid Enabled 20 unknown Disabled Invalid Disabled Disabled Disabled Disabled
2002:2002 Copy Pending Global Copy 0 Disabled Enabled Invalid 20 unknown Disabled True Disabled Disabled Disabled Disabled
2003:2003 Target Suspended Update Target Global Copy 0 Disabled Invalid Enabled 20 unknown Disabled Invalid Disabled Disabled Disabled Disabled
2003:2003 Copy Pending Global Copy 0 Disabled Enabled Invalid 20 unknown Disabled True Disabled Disabled Disabled Disabled
2004:2004 Target Suspended Update Target Global Copy 0 Disabled Invalid Enabled 20 unknown Disabled Invalid Disabled Disabled Disabled Disabled
2004:2004 Copy Pending Global Copy 0 Disabled Enabled Invalid 20 unknown Disabled True Disabled Disabled Disabled Disabled
2005:2005 Target Suspended Update Target Global Copy 0 Disabled Invalid Enabled 20 unknown Disabled Invalid Disabled Disabled Disabled Disabled
2005:2005 Copy Pending Global Copy 0 Disabled Enabled Invalid 20 unknown Disabled True Disabled Disabled Disabled Disabled
2006:2006 Target Suspended Update Target Global Copy 0 Disabled Invalid Enabled 20 unknown Disabled Invalid Disabled Disabled Disabled Disabled
2006:2006 Copy Pending Global Copy 0 Disabled Enabled Invalid 20 unknown Disabled True Disabled Disabled Disabled Disabled
2007:2007 Target Suspended Update Target Global Copy 0 Disabled Invalid Enabled 20 unknown Disabled Invalid Disabled Disabled Disabled Disabled
2007:2007 Copy Pending Global Copy 0 Disabled Enabled Invalid 20 unknown Disabled True Disabled Disabled Disabled Disabled
dscli>
dscli> rmpprc -quiet -dev IBM.2107-1300561 -remotedev IBM.2107-1301651 -unconditional -at tgt 2000-2007
CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2000 relationship successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2001 relationship successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2002 relationship successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2003 relationship successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2004 relationship successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2005 relationship successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2006 relationship successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2007 relationship successfully withdrawn.
dscli>

Step 10: Suspend Global Copy remote to intermediate


The process of reversing the Global Copy that runs from the remote to the intermediate site is initiated by first suspending that Global Copy relationship. This should be done after the remote-to-intermediate out-of-sync bitmaps have completely drained the last of the updates from the remote site, because the Global Copy from the local site has already been suspended. Example 32-24 shows the command.
Example 32-24 Suspend Global Copy remote to intermediate
dscli> pausepprc -dev IBM.2107-1300561 -remotedev IBM.2107-1301261 2000-2007:2000-2007
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2000:2000 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2001:2001 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2002:2002 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2003:2003 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2004:2004 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2005:2005 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2006:2006 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2007:2007 relationship successfully paused.
dscli>

Step 11: Reverse Global Copy to run intermediate to remote


To reverse the Global Copy relationship, a failover and then a failback must be done before creating the Global Copy in the reverse direction. A failover of Global Copy is first executed with cascading allowed from the intermediate to the remote site. The failover causes the intermediate site volumes to become primary suspended and prepares for the local-to-intermediate and intermediate-to-remote connections. Next, the failback from the intermediate to the remote site is executed at the intermediate site with cascading allowed and Global Copy mode. The failover and failback together reverse the Global Copy relationship so that it runs from the intermediate to the remote site. Example 32-25 shows the commands.
Example 32-25 Reverse Global Copy to run intermediate to remote
dscli> failoverpprc -dev IBM.2107-1301261 -remotedev IBM.2107-1300561 -type gcp -cascade 2000-2007:2000-2007
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2000:2000 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2001:2001 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2002:2002 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2003:2003 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2004:2004 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2005:2005 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2006:2006 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2007:2007 successfully reversed.
dscli> failbackpprc -dev IBM.2107-1301261 -remotedev IBM.2107-1300561 -type gcp -cascade 2000-2007:2000-2007
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2000:2000 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2001:2001 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2002:2002 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2003:2003 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2004:2004 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2005:2005 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2006:2006 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2007:2007 successfully failed back.
dscli>

Step 12: Establish Metro Mirror from local to intermediate site


Metro Mirror is now established in the direction from the local to the intermediate site, first with the Incremental Resync override option and then again with Incremental Resync initialization. The command to establish Metro Mirror is executed at the local site. The first establish also causes the change recording bitmaps that were created in Step 7: Start incremental re-synchronization at local site on page 535 to be merged with the out-of-sync bitmaps at the local site. Then Metro Mirror is established with Incremental Resync initialized, which creates new change recording bitmaps for Metro Mirror at the local site.

Note: At this point, Metro Mirror must become full duplex, and Global Copy must complete its first pass before going on to the next step.

Tip: If a freezepprc command was issued when the local site failed (see 32.3.1, Local site fails on page 523), the paths from the local to the intermediate site were removed. These paths must be re-established now for the Metro Mirror establish from the local to the intermediate site to succeed. A sketch of re-creating the paths follows Example 32-26.

Example 32-26 shows that the option -incrementalresync override is used first to copy the tracks marked in the change recording bitmap at the local site. Wait until all Out Of Sync Tracks have drained to the intermediate site. When done, the incremental re-synchronization is started with empty bitmaps at the local site.
Example 32-26 Establish Metro Mirror from local to intermediate site
dscli> mkpprc -dev IBM.2107-1301651 -remotedev IBM.2107-1301261 -type mmir -incrementalresync override 2000-2007:2000-2007
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2000:2000 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2001:2001 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2002:2002 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2003:2003 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2004:2004 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2005:2005 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2006:2006 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2007:2007 successfully created.
dscli>

#
# Wait until all OOS has drained
#

dscli> lspprc -dev IBM.2107-1301651 -l 2000-2007
ID State Reason Type Out Of Sync Tracks Tgt Read Src Cascade Tgt Cascade Date Suspended SourceLSS Timeout (secs) Critical Mode First Pass Status Incremental Resync Tgt Write GMIR CG PPRC CG
=============================================================================================================================================================================================
2000:2000 Full Duplex Metro Mirror 0 Disabled Enabled Invalid 20 120 Disabled Invalid Disabled Disabled Disabled Enabled
2001:2001 Full Duplex Metro Mirror 0 Disabled Enabled Invalid 20 120 Disabled Invalid Disabled Disabled Disabled Enabled
2002:2002 Full Duplex Metro Mirror 0 Disabled Enabled Invalid 20 120 Disabled Invalid Disabled Disabled Disabled Enabled
2003:2003 Full Duplex Metro Mirror 0 Disabled Enabled Invalid 20 120 Disabled Invalid Disabled Disabled Disabled Enabled
2004:2004 Full Duplex Metro Mirror 0 Disabled Enabled Invalid 20 120 Disabled Invalid Disabled Disabled Disabled Enabled
2005:2005 Full Duplex Metro Mirror 0 Disabled Enabled Invalid 20 120 Disabled Invalid Disabled Disabled Disabled Enabled
2006:2006 Full Duplex Metro Mirror 0 Disabled Enabled Invalid 20 120 Disabled Invalid Disabled Disabled Disabled Enabled
2007:2007 Full Duplex Metro Mirror 0 Disabled Enabled Invalid 20 120 Disabled Invalid Disabled Disabled Disabled Enabled
dscli>
dscli> mkpprc -dev IBM.2107-1301651 -remotedev IBM.2107-1301261 -type mmir -mode nocp -incrementalresync enable 2000-2007:2000-2007
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2000:2000 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2001:2001 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2002:2002 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2003:2003 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2004:2004 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2005:2005 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2006:2006 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2007:2007 successfully created.
dscli>
dscli> lspprc -dev IBM.2107-1301651 -l 2000-2007
ID State Reason Type Out Of Sync Tracks Tgt Read Src Cascade Tgt Cascade Date Suspended SourceLSS Timeout (secs) Critical Mode First Pass Status Incremental Resync Tgt Write GMIR CG PPRC CG
=============================================================================================================================================================================================
2000:2000 Full Duplex Metro Mirror 0 Disabled Enabled Invalid 20 120 Disabled Invalid Enabled Disabled Disabled Enabled
2001:2001 Full Duplex Metro Mirror 0 Disabled Enabled Invalid 20 120 Disabled Invalid Enabled Disabled Disabled Enabled
2002:2002 Full Duplex Metro Mirror 0 Disabled Enabled Invalid 20 120 Disabled Invalid Enabled Disabled Disabled Enabled
2003:2003 Full Duplex Metro Mirror 0 Disabled Enabled Invalid 20 120 Disabled Invalid Enabled Disabled Disabled Enabled
2004:2004 Full Duplex Metro Mirror 0 Disabled Enabled Invalid 20 120 Disabled Invalid Enabled Disabled Disabled Enabled
2005:2005 Full Duplex Metro Mirror 0 Disabled Enabled Invalid 20 120 Disabled Invalid Enabled Disabled Disabled Enabled
2006:2006 Full Duplex Metro Mirror 0 Disabled Enabled Invalid 20 120 Disabled Invalid Enabled Disabled Disabled Enabled
2007:2007 Full Duplex Metro Mirror 0 Disabled Enabled Invalid 20 120 Disabled Invalid Enabled Disabled Disabled Enabled
dscli>
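If the paths from the local to the intermediate site were removed by a freezepprc at the time of the failure (see the Tip before Example 32-26), they have to be re-created before the mkpprc commands above can succeed. The following sketch shows the usual sequence; the WWNN and the I/O port pair are placeholders only and must be replaced with the values reported for your configuration (lssi shows the WWNN of the intermediate storage image, and lsavailpprcport lists the usable port pairs).

dscli> lsavailpprcport -dev IBM.2107-1301651 -remotedev IBM.2107-1301261 -remotewwnn 5005076303FFXXXX 20:20
dscli> mkpprcpath -dev IBM.2107-1301651 -remotedev IBM.2107-1301261 -remotewwnn 5005076303FFXXXX -srclss 20 -tgtlss 20 I0001:I0001
dscli> lspprcpath -dev IBM.2107-1301651 20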

Step 13: Start Global Mirror


When Metro Mirror is full duplex and Global Copy has completed a first pass, Global Mirror can be started at the intermediate site and consistency groups start forming successfully. By starting Global Mirror at the intermediate site, the FlashCopy targets at the remote site start to refresh.

Note: You can now verify that consistency groups are forming successfully.

In Example 32-27 we just add the volumes back to the existing Global Mirror session, because in Step 3: Terminate Global Mirror and suspend Global Copy on page 527 we removed the application host's volumes from the session instead of terminating the whole Global Mirror. Whether consistency group formation is ongoing is then verified with two consecutive showgmir -metrics commands, checking that the Total Successful CG Count is increasing.
Example 32-27 Add volumes to session and check Global Mirror
dscli> chsession -dev IBM.2107-1301261 -lss 20 -action add -volume 2000-2007 ad
CMUC00147I chsession: Session ad successfully modified.
dscli>
dscli> lssession -dev IBM.2107-1301261 20 ad
LSS ID Session Status         Volume VolumeStatus PrimaryStatus        SecondaryStatus       FirstPassComplete AllowCascading
==============================================================================================================================
20     AD      CG In Progress 2000   Active       Primary Copy Pending Secondary Full Duplex True              Enable
20     AD      CG In Progress 2001   Active       Primary Copy Pending Secondary Full Duplex True              Enable
20     AD      CG In Progress 2002   Active       Primary Copy Pending Secondary Full Duplex True              Enable
20     AD      CG In Progress 2003   Active       Primary Copy Pending Secondary Full Duplex True              Enable
20     AD      CG In Progress 2004   Active       Primary Copy Pending Secondary Full Duplex True              Enable
20     AD      CG In Progress 2005   Active       Primary Copy Pending Secondary Full Duplex True              Enable
20     AD      CG In Progress 2006   Active       Primary Copy Pending Secondary Full Duplex True              Enable
20     AD      CG In Progress 2007   Active       Primary Copy Pending Secondary Full Duplex True              Enable
dscli>
dscli> showgmir -metrics 20
ID                             IBM.2107-1301261/20
Total Failed CG Count          3
Total Successful CG Count      19378
Successful CG Percentage       99
Failed CG after Last Success   0
Last Successful CG Form Time   11/09/2006 20:40:31 MST
Coord. Time (seconds)          50
Interval Time (seconds)        0
Max Drain Time (seconds)       30
First Failure Control Unit     IBM.2107-1301261
First Failure LSS              Not Available
First Failure Status           Error
First Failure Reason           Members in Incorrect State
First Failure Master State     Drain in Progress
Last Failure Control Unit      IBM.2107-1301261
Last Failure LSS               0x40
Last Failure Status            Error
Last Failure Reason            Global Mirror Consistency Cannot be Maintained
Last Failure Master State      Global Mirror Start Increment in Progress
Previous Failure Control Unit  IBM.2107-1301261
Previous Failure LSS           0x20
Previous Failure Status        Error
Previous Failure Reason        Session or Session Members not in Correct State
Previous Failure Master State  Global Mirror Run in Progress
dscli>
dscli> showgmir -metrics 20
ID                             IBM.2107-1301261/20
Total Failed CG Count          3
Total Successful CG Count      19384
Successful CG Percentage       99
Failed CG after Last Success   0
Last Successful CG Form Time   11/09/2006 20:40:38 MST
Coord. Time (seconds)          50
Interval Time (seconds)        0
Max Drain Time (seconds)       30
First Failure Control Unit     IBM.2107-1301261
First Failure LSS              Not Available
First Failure Status           Error
First Failure Reason           Members in Incorrect State
First Failure Master State     Drain in Progress
Last Failure Control Unit      IBM.2107-1301261
Last Failure LSS               0x40
Last Failure Status            Error
Last Failure Reason            Global Mirror Consistency Cannot be Maintained
Last Failure Master State      Global Mirror Start Increment in Progress
Previous Failure Control Unit  IBM.2107-1301261
Previous Failure LSS           0x20
Previous Failure Status        Error
Previous Failure Reason        Session or Session Members not in Correct State
Previous Failure Master State  Global Mirror Run in Progress
dscli>
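The same verification can be scripted. The sketch below compares the Total Successful CG Count between two showgmir -metrics queries taken a minute apart; the profile name is hypothetical and the DS CLI single-shot invocation is assumed.

#!/bin/sh
# Report whether Global Mirror consistency groups are still being formed.
count() {
    dscli -cfg intermediate.profile showgmir -metrics 20 |
        awk '/Total Successful CG Count/ { print $NF }'
}
c1=$(count)
sleep 60
c2=$(count)
if [ "$c2" -gt "$c1" ]; then
    echo "Consistency groups are forming ($c1 -> $c2)"
else
    echo "No new consistency groups formed; check Global Mirror with showgmir"
fi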

32.4 Failure at the intermediate site scenario


The failure at the intermediate site scenario is used when the intermediate site becomes unavailable or is inaccessible to the Metro/Global Mirror session. This scenario is divided into four parts:
1. When a disaster occurs at the intermediate site, a recovery is done so that Global Mirror runs from the local site to the remote site.
2. When the intermediate site is available again for Metro/Global Mirror, it is cleaned up.
3. When the intermediate site is cleaned up, it is re-synchronized with the data that was being copied from the local site to the remote site.
4. When the intermediate site is fully re-synchronized, the original configuration can be restored and Global Mirror is started at the intermediate site.

32.4.1 Intermediate site failure


This section describes the steps for recovery from a failure at the intermediate site while production continues at the local site. When the storage at the intermediate site is no longer accessible, data is copied from the local to the remote site using the Incremental Resync implementation that is in place. When transitioning from the original Metro/Global Mirror configuration to the local-to-remote configuration, consistency groups are formed at the local site and drained to the remote site, while data consistency is retained between the two sites. Production can continue at the local site until the intermediate site is operational again. Figure 32-8 illustrates the steps to recover from a failure at the intermediate site while production continues at the local site.


Figure 32-8 Failure at the intermediate

The steps for recovery after failure at the intermediate site are as follows:
1. Suspend Metro Mirror at local site.
2. Clean up surviving components of Global Mirror (if possible).
3. Fail over Global Copy at remote site.
4. Verify Global Mirror consistency group.
5. Start Global Copy from local to remote site.
6. Create sessions and restart Global Mirror at local site.


Step 1: Suspend Metro Mirror at local site


Metro Mirror at the local site might already be suspended because of the failure of the intermediate site. However, when no I/O is running, Metro Mirror does not suspend on its own. In this case, the pausepprc command can be used to suspend all volume pairs. Because the intermediate site might not be accessible due to the failure, the -unconditional -at src options suspend the primary volumes and ignore the target volumes, which allows the suspension at the local site. This step ensures that all remaining pairs are suspended at the local site. Example 32-28 shows the command to suspend the Metro Mirror at the local site.
Example 32-28 Suspend Metro Mirror at local site
dscli> pausepprc -dev IBM.2107-1301651 -unconditional -at src 2000-2007
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2000: relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2001: relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2002: relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2003: relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2004: relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2005: relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2006: relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2007: relationship successfully paused.
dscli>

Step 2: Clean up surviving components of Global Mirror (if possible)


Depending on the type of failure at the intermediate site, there might be some components of Global Mirror that can be cleaned up. If the intermediate site is still partially accessible, you can attempt to terminate Global Mirror by executing the rmgmir command. This command might not succeed, depending on the nature of the failure. If the command fails, it could mean that the subordinates are orphaned because the master at the intermediate site no longer has access to them. In this case, the rmgmir command can be attempted at any of the subordinates. If none of these attempts succeed and Global Mirror or any of its components cannot be cleaned up, the cleanup is done once the intermediate site is accessible again, as described in 32.4.2, Intermediate site is back on page 545. Note: Depending on the extent of the failure at the intermediate site, Global Mirror might no longer be running, and might show FATAL or failing consistency groups. If Global Mirror was in the middle of a FlashCopy, the consistency group might need to be verified at some point. To check the status of Global Mirror and see whether consistency groups are failing, the showgmir -metrics command will list the current status.
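As a minimal sketch of this status check, assuming the intermediate site device ID (IBM.2107-1301261) and LSS 20 used in the other examples of this chapter, and assuming that the intermediate site is still at least partially reachable:

dscli> showgmir -dev IBM.2107-1301261 -metrics 20

If this query shows failing consistency groups or a FATAL status, or if the master cannot be reached at all, proceed with the rmgmir cleanup attempts described above.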

Step 3: Fail over Global Copy at remote site


Global Copy is failed over at the remote site with cascading allowed, to change the state of the volumes at the remote site from secondary duplex pending (or suspended) to suspended host source. Changed data will be registered in the out-of-sync bitmaps and used during the re-synchronization of the intermediate site in 32.4.3, Re-synchronization at intermediate on page 548. The command issued at the remote site is failoverpprc with the -cascade option, as shown in Example 32-29.


Example 32-29 Fail over Global Copy at remote site
dscli> failoverpprc -dev IBM.2107-1300561 -remotedev IBM.2107-1301261 -type gcp -cascade 2000-2007:2000-2007
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2000:2000 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2001:2001 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2002:2002 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2003:2003 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2004:2004 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2005:2005 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2006:2006 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2007:2007 successfully reversed.
dscli>

Step 4: Verify Global Mirror consistency group


It is essential to check the Global Mirror consistency at the remote site. This step is necessary in case the intermediate site failed in the middle of a consistency group formation, which would mean that the FlashCopy of the Global Mirror needs to be reverted or committed. To verify the Global Mirror consistency, see 28.7, Checking consistency at the remote site on page 468, which describes how to verify the consistency group and determine whether any action needs to be taken. Tip: The lsflash -l command shows the FlashCopy status, which is helpful when verifying the Global Mirror consistency group.
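A minimal sketch of this check, assuming the remote site device ID (IBM.2107-1300561) used in this scenario and assuming that the C volumes 2000-2007 at the remote site are the FlashCopy sources, as in the configuration used throughout this chapter:

dscli> lsflash -dev IBM.2107-1300561 -l 2000-2007

The detailed output includes, among other fields, the sequence number and the revertible state of each FlashCopy relationship, which indicate whether a commit or revert is required, as described in 28.7.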

Step 5: Start Global Copy from local to remote site


The incremental resync with copy option is established from the local to the remote site by issuing the mkpprc command with the option -incrementalresync recover. The recover parameter for the -incrementalresync option checks whether there was a former relationship at the remote site. In Step 3: Fail over Global Copy at remote site on page 542, the remote site was failed over, so it is no longer in a relationship with the local site. When the Global Copy with incremental resync is completely established, the incremental resync function that was running previously at the local site is stopped. The current change recording bitmaps at the local site are retained and merged with the out-of-sync bitmaps. If there was a former incremental resync relationship at the remote site, then the override parameter must be used with the -incrementalresync option when establishing Global Copy. Important: All writes are transferred at this point from the local to the remote site, and all the out-of-sync tracks must have drained before continuing to the next step. To query out-of-sync tracks, issue the lspprc -l command at the local site. In Example 32-30, the first lspprc command issued at the local site shows the Metro Mirror relationship from the local to the intermediate site. After the mkpprc command with -incrementalresync recover is issued, the pair relationship changes to the Global Copy relationship from the local to the remote site.


Example 32-30 Start Global Copy from local to remote with incremental re-synchronization
dscli> lspprc -dev IBM.2107-1301651 -fullid 2000-2007
ID                                          State     Reason                     Type         SourceLSS           Timeout (secs) Critical Mode First Pass Status
=================================================================================================================================================================
IBM.2107-1301651/2000:IBM.2107-1301261/2000 Suspended Internal Conditions Target Metro Mirror IBM.2107-1301651/20 300            Disabled      Invalid
IBM.2107-1301651/2001:IBM.2107-1301261/2001 Suspended Internal Conditions Target Metro Mirror IBM.2107-1301651/20 300            Disabled      Invalid
IBM.2107-1301651/2002:IBM.2107-1301261/2002 Suspended Internal Conditions Target Metro Mirror IBM.2107-1301651/20 300            Disabled      Invalid
IBM.2107-1301651/2003:IBM.2107-1301261/2003 Suspended Internal Conditions Target Metro Mirror IBM.2107-1301651/20 300            Disabled      Invalid
IBM.2107-1301651/2004:IBM.2107-1301261/2004 Suspended Internal Conditions Target Metro Mirror IBM.2107-1301651/20 300            Disabled      Invalid
IBM.2107-1301651/2005:IBM.2107-1301261/2005 Suspended Internal Conditions Target Metro Mirror IBM.2107-1301651/20 300            Disabled      Invalid
IBM.2107-1301651/2006:IBM.2107-1301261/2006 Suspended Internal Conditions Target Metro Mirror IBM.2107-1301651/20 300            Disabled      Invalid
IBM.2107-1301651/2007:IBM.2107-1301261/2007 Suspended Internal Conditions Target Metro Mirror IBM.2107-1301651/20 300            Disabled      Invalid
dscli>
dscli> mkpprc -dev IBM.2107-1301651 -remotedev IBM.2107-1300561 -type gcp -mode full -incrementalresync recover 2000-2007:2000-2007
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2000:2000 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2001:2001 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2002:2002 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2003:2003 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2004:2004 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2005:2005 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2006:2006 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2007:2007 successfully created.
dscli>
dscli> lspprc -dev IBM.2107-1301651 -l -fullid 2000-2007
ID State Reason Type Out Of Sync Tracks Tgt Read Src Cascade Tgt Cascade Date Suspended SourceLSS Timeout (secs) Critical Mode First Pass Status Incremental Resync Tgt Write GMIR CG PPRC CG
=============================================================================================================================================================================================
IBM.2107-1301651/2000:IBM.2107-1300561/2000 Copy Pending Global Copy 9  Disabled Disabled Invalid IBM.2107-1301651/20 300 Disabled True Disabled Disabled Disabled Disabled
IBM.2107-1301651/2001:IBM.2107-1300561/2001 Copy Pending Global Copy 10 Disabled Disabled Invalid IBM.2107-1301651/20 300 Disabled True Disabled Disabled Disabled Disabled
IBM.2107-1301651/2002:IBM.2107-1300561/2002 Copy Pending Global Copy 8  Disabled Disabled Invalid IBM.2107-1301651/20 300 Disabled True Disabled Disabled Disabled Disabled
IBM.2107-1301651/2003:IBM.2107-1300561/2003 Copy Pending Global Copy 5  Disabled Disabled Invalid IBM.2107-1301651/20 300 Disabled True Disabled Disabled Disabled Disabled
IBM.2107-1301651/2004:IBM.2107-1300561/2004 Copy Pending Global Copy 0  Disabled Disabled Invalid IBM.2107-1301651/20 300 Disabled True Disabled Disabled Disabled Disabled
IBM.2107-1301651/2005:IBM.2107-1300561/2005 Copy Pending Global Copy 1  Disabled Disabled Invalid IBM.2107-1301651/20 300 Disabled True Disabled Disabled Disabled Disabled
IBM.2107-1301651/2006:IBM.2107-1300561/2006 Copy Pending Global Copy 9  Disabled Disabled Invalid IBM.2107-1301651/20 300 Disabled True Disabled Disabled Disabled Disabled
IBM.2107-1301651/2007:IBM.2107-1300561/2007 Copy Pending Global Copy 13 Disabled Disabled Invalid IBM.2107-1301651/20 300 Disabled True Disabled Disabled Disabled Disabled
dscli>

Step 6: Create session and start Global Mirror at local site


For the Global Copy relation from the local site to the remote site, we create a session and add volumes to the session prior to starting the new Global Mirror at the local site. The session is created using the mksession command, and the volumes can then be added to the session by issuing the chsession command with the option -action add at the local site. The Global Mirror session can now be started by issuing the mkgmir command at the local site. This configuration remains unchanged until the intermediate site is available again for Metro/Global Mirror. Production continues to run at the local site without interruption while the original Metro/Global Mirror configuration of local to intermediate to remote site transitions to a local to remote site configuration. Example 32-31 shows the steps to create the session, start Global Mirror, and check its status.


Example 32-31 Create session and start Global Mirror at local site
dscli> mksession -dev IBM.2107-1301651 -lss 20 10
CMUC00145I mksession: Session 10 opened successfully.
dscli> chsession -dev IBM.2107-1301651 -lss 20 -action add -volume 2000-2007 10
CMUC00147I chsession: Session 10 successfully modified.
dscli> lssession -dev IBM.2107-1301651 20 10
LSS ID Session Status Volume VolumeStatus PrimaryStatus        SecondaryStatus   FirstPassComplete AllowCascading
==================================================================================================================
20  10 Normal         2000   Join Pending Primary Copy Pending Secondary Simplex True              Disable
20  10 Normal         2001   Join Pending Primary Copy Pending Secondary Simplex True              Disable
20  10 Normal         2002   Join Pending Primary Copy Pending Secondary Simplex True              Disable
20  10 Normal         2003   Join Pending Primary Copy Pending Secondary Simplex True              Disable
20  10 Normal         2004   Join Pending Primary Copy Pending Secondary Simplex True              Disable
20  10 Normal         2005   Join Pending Primary Copy Pending Secondary Simplex True              Disable
20  10 Normal         2006   Join Pending Primary Copy Pending Secondary Simplex True              Disable
20  10 Normal         2007   Join Pending Primary Copy Pending Secondary Simplex True              Disable
dscli> mkgmir -dev IBM.2107-1301651 -lss 20 -session 10
CMUC00162I mkgmir: Global Mirror for session 10 successfully started.
dscli> showgmir -metrics 20
ID                            IBM.2107-1301651/20
Total Failed CG Count         1
Total Successful CG Count     39
Successful CG Percentage      97
Failed CG after Last Success  0
Last Successful CG Form Time  11/07/2006 13:46:01 MST
Coord. Time (seconds)         50
Interval Time (seconds)       0
Max Drain Time (seconds)      30
First Failure Control Unit
First Failure LSS
First Failure Status          No Error
First Failure Reason
First Failure Master State
Last Failure Control Unit
Last Failure LSS
Last Failure Status           No Error
Last Failure Reason
Last Failure Master State
Previous Failure Control Unit
Previous Failure LSS
Previous Failure Status       No Error
Previous Failure Reason
Previous Failure Master State
dscli> lssession -dev IBM.2107-1301651 20 10
LSS ID Session Status Volume VolumeStatus PrimaryStatus        SecondaryStatus   FirstPassComplete AllowCascading
==================================================================================================================
20  10 CG In Progress 2000   Active       Primary Copy Pending Secondary Simplex True              Disable
20  10 CG In Progress 2001   Active       Primary Copy Pending Secondary Simplex True              Disable
20  10 CG In Progress 2002   Active       Primary Copy Pending Secondary Simplex True              Disable
20  10 CG In Progress 2003   Active       Primary Copy Pending Secondary Simplex True              Disable
20  10 CG In Progress 2004   Active       Primary Copy Pending Secondary Simplex True              Disable
20  10 CG In Progress 2005   Active       Primary Copy Pending Secondary Simplex True              Disable
20  10 CG In Progress 2006   Active       Primary Copy Pending Secondary Simplex True              Disable
20  10 CG In Progress 2007   Active       Primary Copy Pending Secondary Simplex True              Disable
dscli>

32.4.2 Intermediate site is back


In this section we describe the process of cleaning up the intermediate site once it becomes available again. The cleanup is required before re-synchronizing the intermediate site, as discussed in 32.4.3, Re-synchronization at intermediate on page 548. There might be Metro Mirror relationships, Global Mirror relationships, or both that need to be terminated. Depending on how the intermediate site was lost, these relationships might not have been cleaned up in Step 2: Clean up surviving components of Global Mirror (if possible) on page 542. The components that were not terminated at that stage must be terminated now.


Figure 32-9 illustrates the steps to clean up the intermediate once it is recovered.

Figure 32-9 Cleanup of the intermediate when it becomes available

The steps to clean up the intermediate site after it is fully recovered are as follows:
1. Remove Metro Mirror.
2. Suspend Global Copy intermediate to remote at intermediate.
3. Terminate former Global Mirror.

Step 1: Remove Metro Mirror


Metro Mirror, previously running from the local to the intermediate site, will need to be terminated at the intermediate site. When the intermediate site is available, the volumes might still show as Target Full Duplex from the former Metro Mirror relationship. Removing the Metro Mirror will allow the failback from the remote to the intermediate site to be done in a later step. The rmpprc command issued at the intermediate site will terminate all Metro Mirror relationships at the intermediate site. Tip: To remove the Metro Mirror pairs, the communication between the local and intermediate sites must work. We recommend that you check the PPRC paths in both directions once the intermediate site is available again.


In Example 32-32, the paths between the local and intermediate sites are checked first. After the Metro Mirror is removed, the Global Copy between the local and remote sites is shown with the subsequent lspprc command issued at the local site.
Example 32-32 Check paths and remove Metro Mirror
dscli> lspprcpath -fmt default -fullid -dev IBM.2107-1301651 20
Src                 Tgt                 State   SS   Port                   Attached Port          Tgt WWNN
===================================================================================================================
IBM.2107-1301651/20 IBM.2107-1301261/20 Success FF20 IBM.2107-1301651/I0003 IBM.2107-1301261/I0200 5005076303FFC04D
IBM.2107-1301651/20 IBM.2107-1301261/20 Success FF20 IBM.2107-1301651/I0200 IBM.2107-1301261/I0330 5005076303FFC04D
IBM.2107-1301651/20 IBM.2107-1300561/20 Success FF20 IBM.2107-1301651/I0000 IBM.2107-1300561/I0200 5005076303FFC02C
IBM.2107-1301651/20 IBM.2107-1300561/20 Success FF20 IBM.2107-1301651/I0003 IBM.2107-1300561/I0003 5005076303FFC02C
dscli>
dscli> rmpprc -quiet -dev IBM.2107-1301261 -remotedev IBM.2107-1301651 -unconditional -at tgt 2000-2007
CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2000 relationship successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2001 relationship successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2002 relationship successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2003 relationship successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2004 relationship successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2005 relationship successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2006 relationship successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2007 relationship successfully withdrawn.
dscli>
dscli> lspprc -dev IBM.2107-1301651 -remotedev IBM.2107-1301261 -fullid -fmt default 2000-2007
ID                                          State        Reason Type        SourceLSS           Timeout (secs) Critical Mode First Pass Status
===============================================================================================================================================
IBM.2107-1301651/2000:IBM.2107-1300561/2000 Copy Pending        Global Copy IBM.2107-1301651/20 300            Disabled      True
IBM.2107-1301651/2001:IBM.2107-1300561/2001 Copy Pending        Global Copy IBM.2107-1301651/20 300            Disabled      True
IBM.2107-1301651/2002:IBM.2107-1300561/2002 Copy Pending        Global Copy IBM.2107-1301651/20 300            Disabled      True
IBM.2107-1301651/2003:IBM.2107-1300561/2003 Copy Pending        Global Copy IBM.2107-1301651/20 300            Disabled      True
IBM.2107-1301651/2004:IBM.2107-1300561/2004 Copy Pending        Global Copy IBM.2107-1301651/20 300            Disabled      True
IBM.2107-1301651/2005:IBM.2107-1300561/2005 Copy Pending        Global Copy IBM.2107-1301651/20 300            Disabled      True
IBM.2107-1301651/2006:IBM.2107-1300561/2006 Copy Pending        Global Copy IBM.2107-1301651/20 300            Disabled      True
IBM.2107-1301651/2007:IBM.2107-1300561/2007 Copy Pending        Global Copy IBM.2107-1301651/20 300            Disabled      True
dscli>

Step 2: Suspend Global Copy intermediate to remote at intermediate


The volumes at the intermediate site might still have Global Copy relationships with the remote site. In some cases, these Global Copy relations might already be suspended due either to the failure of the intermediate or from the steps taken in Step 2: Clean up surviving components of Global Mirror (if possible) on page 542. The command pausepprc issued at the intermediate site will suspend the Global Copy relations between the intermediate and the remote site. The -unconditional -at src option must be used with the pausepprc command, because Global Copy needs only to be suspended at the intermediate, the Global Copy primary. The previous Global Copy secondary at the remote site is already in a failover state, and thus is primary suspended. Example 32-33 shows how to suspend Global Copy at the intermediate site.
Example 32-33 Suspend Global Copy intermediate to remote
dscli> pausepprc -dev IBM.2107-1301261 -unconditional -at src 2000-2007
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2000: relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2001: relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2002: relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2003: relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2004: relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2005: relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2006: relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2007: relationship successfully paused.
dscli>


Step 3: Terminate former Global Mirror


The former Global Mirror session will also have to be terminated at the intermediate site. The rmgmir command will terminate Global Mirror when issued at the intermediate site. However, if this operation was already successfully executed at Step 2: Clean up surviving components of Global Mirror (if possible) on page 542, it will fail if attempted again because the Global Mirror session was already terminated. If it was not terminated before and fails now, the failure could be caused by orphaned subordinates. In this case, Global Mirror must be terminated at the orphaned subordinate by using the rmgmir command. Note: The failover state at the remote site will prevent any previous Global Mirror configuration from operating.
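A sketch of this termination, assuming the intermediate site device ID (IBM.2107-1301261), LSS 20, and the session ID (AD) used in the other examples of this chapter:

dscli> rmgmir -dev IBM.2107-1301261 -quiet -lss 20 -session ad

If the command fails because of orphaned subordinates, issue the same rmgmir command with -dev pointing to the affected subordinate.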

32.4.3 Re-synchronization at intermediate


When the cleanup of the intermediate is complete, re-synchronization at the intermediate can begin. During the re-synchronization, the data will be copied from the remote to the intermediate. Figure 32-10 illustrates the steps to re-synchronize the intermediate before restoring the original configuration.

Figure 32-10 Re-synchronization of the intermediate

The steps to re-synchronize the intermediate are as follows:
1. Fail back Global Copy remote to intermediate site.
2. Start incremental re-synchronization at local site.


Step 1: Fail back Global Copy remote to intermediate site


In Step 3: Fail over Global Copy at remote site on page 542, the remote site volumes were changed to suspended primary volumes in preparation for this failback. We can now issue a failbackpprc command at the remote site, which begins copying data from the remote site to the intermediate site. Note: Waiting for the initial pass of the re-synchronization to complete before restarting incremental re-synchronization is good practice to reduce the number of updates sent later when Metro Mirror is started with incremental re-synchronization and force at the local site. To query the out-of-sync status, the lspprc -l command is issued at the intermediate site. Example 32-34 shows the command to fail back the Global Copy.
Example 32-34 Fail back Global Copy remote to intermediate site
dscli> failbackpprc -dev IBM.2107-1300561 -remotedev IBM.2107-1301261 -type gcp 2000-2007:2000-2007
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2000:2000 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2001:2001 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2002:2002 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2003:2003 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2004:2004 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2005:2005 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2006:2006 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2007:2007 successfully failed back.
dscli>

Step 2: Start Incremental Resync at local site


Incremental re-synchronization can now be started at the local site with the no initialization option. The mkpprc command with the option -incrementalresync enablenoinit will start Incremental Resync at the local site without enabling the bitmaps, as shown in Example 32-35. Note: This step is necessary to allow the Metro Mirror relationship at the local site to be restored in a later step using the -incrementalresync override option.
Example 32-35 Start incremental resync at local site
dscli> mkpprc -dev IBM.2107-1301651 -remotedev IBM.2107-1300561 -type gcp -mode nocp -incrementalresync enablenoinit 2000-2007:2000-2007
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2000:2000 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2001:2001 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2002:2002 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2003:2003 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2004:2004 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2005:2005 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2006:2006 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2007:2007 successfully created.
dscli>


32.4.4 Restoring the original configuration


When re-synchronization at the intermediate has completed, the original configuration can be restored without interrupting the production at the local site. Figure 32-11 illustrates the steps.
Figure 32-11 Restoring the Metro/Global Mirror configuration

The steps to restore the original configuration are as follows:
1. Stop Global Mirror at the local site.
2. Suspend Global Copy at local to remote site.
3. Stop Global Copy local to remote at the remote site.
4. Fail over Global Copy from the remote to intermediate site.
5. Fail back Global Copy at the intermediate to remote site.
6. Create Metro Mirror with Incremental Resync at the local site.
7. Start Global Mirror at the intermediate site.

Step 1: Stop Global Mirror at local site


Global Mirror has been running from the local to the remote site while the intermediate site was being recovered. Before restoring the original configuration, the Global Mirror with Incremental Resync from the local site to the remote site is terminated. The rmgmir command is issued at the local site, followed by chsession and rmsession to remove the session, as shown in Example 32-36. As a result, the remote FlashCopy target will begin to age while the transition back to the original configuration is in progress. Note: The swap back to the intermediate site can be done at any time but would normally be done at a planned time.


Example 32-36 Stop Global Mirror at local site
dscli> rmgmir -dev IBM.2107-1301651 -quiet -lss 20 -session 10
CMUC00165I rmgmir: Global Mirror for session 10 successfully stopped.
dscli> chsession -dev IBM.2107-1301651 -lss 20 -action remove -volume 2000-2007 10
CMUC00147I chsession: Session 10 successfully modified.
dscli> rmsession -dev IBM.2107-1301651 -quiet -lss 20 10
CMUC00146I rmsession: Session 10 closed successfully.
dscli>

Step 2: Suspend Global Copy at local to remote


To stop data being copied to the remote site and to allow the re-synchronization to complete between the remote and intermediate sites, Global Copy at the local site is suspended. The pausepprc command is issued at the local site, which suspends the primary volumes at the local site and their corresponding volumes at the remote site. Important: Out-of-sync tracks need to be drained completely to the remote site. Wait until this is the case before continuing to the next step. To query the out-of-sync tracks, the lspprc -l command can be issued from the remote site. Example 32-37 shows the DSCLI command to suspend the Global Copy.
Example 32-37 Suspend Global Copy at local to remote
dscli> pausepprc -dev IBM.2107-1301651 -remotedev IBM.2107-1300561 2000-2007:2000-2007
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2000:2000 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2001:2001 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2002:2002 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2003:2003 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2004:2004 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2005:2005 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2006:2006 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 2007:2007 relationship successfully paused.
dscli>
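The Important note in this step calls for the out-of-sync tracks to drain completely before continuing. A minimal sketch of this check from the remote site, assuming the remote device ID (IBM.2107-1300561) used in this scenario, is to repeat the following query until the Out Of Sync Tracks column shows 0 for all pairs:

dscli> lspprc -dev IBM.2107-1300561 -l 2000-2007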

Step 3: Stop Global Copy local to remote at remote site


So that the remote site loses its knowledge of being a secondary of the local site, Global Copy is terminated at the remote site only. The rmpprc command with the -unconditional -at tgt option is issued at the remote site. This termination does not affect the local site's state and therefore allows the out-of-sync bitmaps to remain in operation at the local site. The local volumes remain suspended primaries, while the remote volumes no longer show as suspended targets. This step is necessary to allow the failback at the intermediate to the remote site in a later step. Note: The local site has updates for the intermediate and remote sites being recorded in the incremental resync change recording and out-of-sync bitmaps.


Example 32-38 shows the command to remove the Global Copy at the remote site. The lspprc command, which was issued at the local site, shows that the relationship at the local site is still there. It is still required for the incremental re-synchronization in Step 6: Create Metro Mirror with incremental resync at local site on page 553.
Example 32-38 Stop Global Copy local to remote at remote site
dscli> rmpprc -quiet -dev IBM.2107-1300561 -unconditional -at tgt 2000-2007
CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2000 relationship successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2001 relationship successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2002 relationship successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2003 relationship successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2004 relationship successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2005 relationship successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2006 relationship successfully withdrawn.
CMUC00155I rmpprc: Remote Mirror and Copy volume pair :2007 relationship successfully withdrawn.
dscli>

#
# At Local Site
#
dscli> lspprc -dev IBM.2107-1301651 -remotedev IBM.2107-1300561 -fullid -fmt default 2000-2007
ID                                          State     Reason      Type        SourceLSS           Timeout (secs) Critical Mode First Pass Status
=================================================================================================================================================
IBM.2107-1301651/2000:IBM.2107-1300561/2000 Suspended Host Source Global Copy IBM.2107-1301651/20 300            Disabled      True
IBM.2107-1301651/2001:IBM.2107-1300561/2001 Suspended Host Source Global Copy IBM.2107-1301651/20 300            Disabled      True
IBM.2107-1301651/2002:IBM.2107-1300561/2002 Suspended Host Source Global Copy IBM.2107-1301651/20 300            Disabled      True
IBM.2107-1301651/2003:IBM.2107-1300561/2003 Suspended Host Source Global Copy IBM.2107-1301651/20 300            Disabled      True
IBM.2107-1301651/2004:IBM.2107-1300561/2004 Suspended Host Source Global Copy IBM.2107-1301651/20 300            Disabled      True
IBM.2107-1301651/2005:IBM.2107-1300561/2005 Suspended Host Source Global Copy IBM.2107-1301651/20 300            Disabled      True
IBM.2107-1301651/2006:IBM.2107-1300561/2006 Suspended Host Source Global Copy IBM.2107-1301651/20 300            Disabled      True
IBM.2107-1301651/2007:IBM.2107-1300561/2007 Suspended Host Source Global Copy IBM.2107-1301651/20 300            Disabled      True
dscli>

Step 4: Fail over Global Copy remote to intermediate site


In this step the Global Copy between the intermediate and the remote site is prepared to be reversed into the initial direction, from intermediate to remote. The failoverpprc command is issued at the intermediate site. Because the volumes at the intermediate site are cascaded, the -cascade option is used with the failoverpprc command. This makes the intermediate volumes suspended primaries. Note: In Step 2: Suspend Global Copy at local to remote on page 551, Global Copy was suspended at the local site, thus stopping any new data from going to the remote site. At this point, the out-of-sync tracks should be drained. To check the out-of-sync tracks, issue the lspprc -l command at the remote site. Example 32-39 shows the appropriate DSCLI command to fail over the Global Copy.
Example 32-39 Fail over Global Copy remote to intermediate site
dscli> failoverpprc -dev IBM.2107-1301261 -remotedev IBM.2107-1300561 -type gcp -cascade 2000-2007:2000-2007
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2000:2000 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2001:2001 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2002:2002 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2003:2003 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2004:2004 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2005:2005 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2006:2006 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 2007:2007 successfully reversed.
dscli>

Step 5: Fail back Global Copy at intermediate to remote site


The failbackpprc command is issued at the intermediate site to fail back Global Copy at the intermediate site. Because Global Copy is in a cascaded relationship, the failbackpprc command is issued with the copy type gcp and the -cascade option, as shown in Example 32-40. This starts Global Copy from the intermediate to the remote site.
Example 32-40 Fail back Global Copy at intermediate to remote site
dscli> failbackpprc -dev IBM.2107-1301261 -remotedev IBM.2107-1300561 -type gcp -cascade 2000-2007:2000-2007
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2000:2000 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2001:2001 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2002:2002 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2003:2003 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2004:2004 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2005:2005 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2006:2006 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 2007:2007 successfully failed back.
dscli>

Step 6: Create Metro Mirror with incremental resync at local site


The command to create Metro Mirror with the incremental resync option is issued twice in this step, using two different parameters. First, to stop incremental resync from the local to the remote site and to be able to move it to the intermediate site, Metro Mirror with incremental resync is established by issuing the mkpprc command with the -incrementalresync override option. By using the override parameter for -incrementalresync, Metro Mirror with incremental resync will be established without doing a check at the intermediate site (the Metro Mirror secondary). The change recording bitmaps are also merged with the out-of-sync bitmaps at the local site during this step. Next, to monitor and track data as it is being written on the primary volumes at the local site, a Metro Mirror relationship with incremental resync is created from the local to the intermediate site by using the mkpprc command with the -incrementalresync enable option. By using the enable parameter for -incrementalresync, incremental resync is initialized by creating a new change recording bitmap on the local site. Note: Both Metro Mirror and Global Copy might still be in first pass. To query the status, use the lspprc -l command at the local site to query Metro Mirror and at the intermediate site to query Global Copy. Example 32-41 shows the DSCLI commands to create the Metro Mirror, first with the -incrementalresync override option. During this phase, the tracks marked for incremental re-synchronization are not yet enabled. When all tracks have been drained to the intermediate site, incremental re-synchronization is enabled with the second mkpprc command.
Example 32-41 Create Metro Mirror with incremental resync at local site
dscli> mkpprc -dev IBM.2107-1301651 -remotedev IBM.2107-1301261 -type mmir -mode nocp -incrementalresync override 2000-2007:2000-2007
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2000:2000 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2001:2001 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2002:2002 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2003:2003 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2004:2004 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2005:2005 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2006:2006 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2007:2007 successfully created.
dscli>
dscli> lspprc -dev IBM.2107-1301651 -remotedev IBM.2107-1301261 -l -fmt default 2000-2007
ID State Reason Type Out Of Sync Tracks Tgt Read Src Cascade Tgt Cascade Date Suspended SourceLSS Timeout (secs) Critical Mode First Pass Status Incremental Resync Tgt Write GMIR CG PPRC CG
=============================================================================================================================================================================================
2000:2000 Full Duplex Metro Mirror 0 Disabled Disabled Invalid 20 300 Disabled Invalid Disabled Disabled Disabled Enabled
2001:2001 Full Duplex Metro Mirror 0 Disabled Disabled Invalid 20 300 Disabled Invalid Disabled Disabled Disabled Enabled
2002:2002 Full Duplex Metro Mirror 0 Disabled Disabled Invalid 20 300 Disabled Invalid Disabled Disabled Disabled Enabled
2003:2003 Full Duplex Metro Mirror 0 Disabled Disabled Invalid 20 300 Disabled Invalid Disabled Disabled Disabled Enabled
2004:2004 Full Duplex Metro Mirror 0 Disabled Disabled Invalid 20 300 Disabled Invalid Disabled Disabled Disabled Enabled
2005:2005 Full Duplex Metro Mirror 0 Disabled Disabled Invalid 20 300 Disabled Invalid Disabled Disabled Disabled Enabled
2006:2006 Full Duplex Metro Mirror 0 Disabled Disabled Invalid 20 300 Disabled Invalid Disabled Disabled Disabled Enabled
2007:2007 Full Duplex Metro Mirror 0 Disabled Disabled Invalid 20 300 Disabled Invalid Disabled Disabled Disabled Enabled
dscli>
dscli> mkpprc -dev IBM.2107-1301651 -remotedev IBM.2107-1301261 -type mmir -mode nocp -incrementalresync enable 2000-2007:2000-2007
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2000:2000 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2001:2001 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2002:2002 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2003:2003 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2004:2004 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2005:2005 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2006:2006 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 2007:2007 successfully created.
dscli>
dscli> lspprc -dev IBM.2107-1301651 -remotedev IBM.2107-1301261 -l -fmt default 2000-2007
ID State Reason Type Out Of Sync Tracks Tgt Read Src Cascade Tgt Cascade Date Suspended SourceLSS Timeout (secs) Critical Mode First Pass Status Incremental Resync Tgt Write GMIR CG PPRC CG
=============================================================================================================================================================================================
2000:2000 Full Duplex Metro Mirror 0 Disabled Disabled Invalid 20 300 Disabled Invalid Enabled Disabled Disabled Enabled
2001:2001 Full Duplex Metro Mirror 0 Disabled Disabled Invalid 20 300 Disabled Invalid Enabled Disabled Disabled Enabled
2002:2002 Full Duplex Metro Mirror 0 Disabled Disabled Invalid 20 300 Disabled Invalid Enabled Disabled Disabled Enabled
2003:2003 Full Duplex Metro Mirror 0 Disabled Disabled Invalid 20 300 Disabled Invalid Enabled Disabled Disabled Enabled
2004:2004 Full Duplex Metro Mirror 0 Disabled Disabled Invalid 20 300 Disabled Invalid Enabled Disabled Disabled Enabled
2005:2005 Full Duplex Metro Mirror 0 Disabled Disabled Invalid 20 300 Disabled Invalid Enabled Disabled Disabled Enabled
2006:2006 Full Duplex Metro Mirror 0 Disabled Disabled Invalid 20 300 Disabled Invalid Enabled Disabled Disabled Enabled
2007:2007 Full Duplex Metro Mirror 0 Disabled Disabled Invalid 20 300 Disabled Invalid Enabled Disabled Disabled Enabled
dscli>

Step 7: Start Global Mirror at intermediate site


When the original configuration is restored and Metro Mirror is full duplex, Global Mirror can be started and it will start forming consistency groups successfully. Global Mirror is started by issuing the mkgmir command at the intermediate site. Depending on the Global Mirror configuration remaining after the intermediate site became available again, it is possible that the session also has to be created. Important: Verify that the out-of-sync tracks for Metro Mirror have drained completely before starting Global Mirror. Otherwise, the consistency groups will start to fail when Global Mirror is started with the mkgmir command.


Example 32-42 shows how to create the session and start Global Mirror at the intermediate site.
Example 32-42 Start Global Mirror at intermediate site
dscli> mksession -dev IBM.2107-1301261 -lss 20 ad
CMUC00145I mksession: Session AD opened successfully.
dscli> chsession -dev IBM.2107-1301261 -lss 20 -action add -volume 2000-2007 ad
CMUC00147I chsession: Session AD successfully modified.
dscli> lssession -dev IBM.2107-1301261 20 ad
LSS ID Session Status Volume VolumeStatus PrimaryStatus        SecondaryStatus       FirstPassComplete AllowCascading
======================================================================================================================
20  AD CG In Progress 2000   Active       Primary Copy Pending Secondary Full Duplex True              Enable
20  AD CG In Progress 2001   Active       Primary Copy Pending Secondary Full Duplex True              Enable
20  AD CG In Progress 2002   Active       Primary Copy Pending Secondary Full Duplex True              Enable
20  AD CG In Progress 2003   Active       Primary Copy Pending Secondary Full Duplex True              Enable
20  AD CG In Progress 2004   Active       Primary Copy Pending Secondary Full Duplex True              Enable
20  AD CG In Progress 2005   Active       Primary Copy Pending Secondary Full Duplex True              Enable
20  AD CG In Progress 2006   Active       Primary Copy Pending Secondary Full Duplex True              Enable
20  AD CG In Progress 2007   Active       Primary Copy Pending Secondary Full Duplex True              Enable
dscli> mkgmir -dev IBM.2107-1301261 -lss 20 -session ad
CMUC00162I mkgmir: Global Mirror for session AD successfully started.
dscli>



Chapter 33. Metro/Global Mirror with IBM TotalStorage Productivity Center for Replication


In this chapter we give you an overview of how to set up Metro/Global Mirror with IBM TotalStorage Productivity Center for Replication (TPC-R). Included is a section on how to set up a simple Metro/Global Mirror environment. Metro/Global Mirror is a 3-site continuous copy solution for the DS8000 that combines Metro Mirror synchronous copy and Global Mirror asynchronous copy into one session. To use Metro/Global Mirror with TPC-R, you need the Three Site Business Continuity (BC) license. By default, TPC-R uses incremental resync for Metro/Global Mirror.


33.1 Metro/Global Mirror: Additional references


For an introduction and overview of TPC-R, refer to Chapter 6, IBM TotalStorage Productivity Center for Replication on page 43. For further details on TPC-R, refer to the following Redbooks publications:
- IBM TotalStorage Productivity Center for Replication on Windows 2003, SG24-7250
- IBM TotalStorage Productivity Center for Replication on AIX, SG24-7407
- IBM TotalStorage Productivity Center for Replication on Linux, SG24-7411
Note: With TPC-R, it is only possible to configure (and not to manage) Metro/Global Mirror on a DS8000 storage server.

33.2 Metro/Global Mirror scenario


In this section we use the TPC-R graphical user interface (GUI) to configure a Metro/Global Mirror between three DS8000 storage subsystems. To keep it simple, only a few volumes are used to show the principle of working with TPC-R. In addition to the GUI, you can also configure a Metro/Global Mirror session with the TPC-R command line interface; refer to 6.16, Command line interface to TPC for Replication on page 75. In the example illustrated by Figure 33-1, two volumes from DS8000#1 are mirrored to DS8000#2 with Metro Mirror and then mirrored from DS8000#2 to DS8000#3 with Global Mirror.

Figure 33-1 Metro/Global Mirror setup (H1 volumes 4800-4801 in LSS 48 on DS8000#1, -dev IBM.2107-75V6501; Metro Mirror to H2 volumes 4900-4901 in LSS 49 on DS8000#2, -dev IBM.2107-7520781; Global Mirror to H3 volumes 6000-6001 in LSS 60 with J3 journal volumes B000-B001 in LSS B0 on DS8000#3, -dev IBM.2107-7503461; physical Fibre Channel paths connect the sites)


33.2.1 Creating a session for Metro/Global Mirror


Create a Metro/Global Mirror session by performing the following steps:
1. Log in to the TPC-R GUI using a Web browser (Internet Explorer or Mozilla). Enter the URL https://<TPC-R server IP address>:9443/CSM; see Figure 33-2.

Figure 33-2 Starting TPC-R GUI

2. After entering the user ID and password, the Health Overview panel is displayed. As Figure 33-3 shows, you always start from My Work and click the Sessions hyperlink. This provides an overview of all defined sessions. At this point there are no defined sessions.

Figure 33-3 Session panel

3. Clicking the Create Session... button opens the Create Session wizard. From the drop-down menu, select the Metro Global Mirror session type; see Figure 33-4. The drop-down menu displays all the available session types in TPC-R.


Figure 33-4 choose the session type

4. Only one session type can be associated with any single session. In our example we choose Metro Global Mirror as the session type. Note that this requires a TPC-R Three Site BC license.
5. Figure 33-5 shows the chosen session type as Metro Global Mirror. Continue by clicking Next.

Figure 33-5 Session type Metro Global Mirror

6. After you have selected a given session type, the diagram displayed on the right side symbolizes the sites involved and their volume types. Here, with Metro/Global Mirror, Site 1 is shown with H1 volumes, Site 2 with H2 volumes, and Site 3 with H3 volumes and the J3 journal volumes. When you define Copy Sets, this view helps you locate and understand where you define the Copy Sets. The arrows indicate the copy direction, and the triangle over the H1 volumes shows where the production can run.


7. The Properties panel is also important because it requires that you specify at least a name for the session which is about to be created; see Figure 33-6. You can add a location for each storage server and a date when the session is created or changed.

Figure 33-6 Session properties

8. As a next step in the Create Session wizard, you can specify a dedicated location which is considered as Site 1. Note that each location in TPC-R has a number of Storage Systems assigned, and only volumes from the assigned Storage Systems will be selectable when creating Copy Sets. The location concept prevents mis-selection of volumes for the dedicated site. If you do not want to limit the selectable volumes per site, choose None as location.
9. In this example the DS8000 V6501 is specified for Site 1 from the drop-down menu; see Figure 33-7. Continue by clicking Next.


Figure 33-7 Specify the DS8000 for Site 1

10.Next, we specify a dedicated location for Site 2. The storage subsystem for Site 2 is selected from the drop-down menu as shown in Figure 33-8. Click Next to continue.

Figure 33-8 Specify the DS8000 for Site 2

11.We now specify a dedicated location for Site 3. The storage subsystem for Site 3 is selected from the drop-down menu as shown in Figure 33-9. Click Next to continue.


Figure 33-9 Specify the DS8000 for Site 3

12. Figure 33-10 shows the Results panel, which contains the message that the session ITSO_MGM was successfully created. When you click the Finish button, the wizard process ends.

Figure 33-10 Session results window

13.The application displays the Session overview panel as shown in Figure 33-11, and you can see the newly created Metro/Global Mirror session.


Figure 33-11 Defined Metro/Global Mirror session

This session, with its name, is just a unique configuration container and represents a Metro/Global Mirror Copy Services type. There are no volumes associated with the ITSO_MGM session at the moment.

33.2.2 Creating paths for the Metro/Global Mirror session


Before you can start data replication, PPRC paths have to exist over a Fibre Channel connection between the three DS8000 storage subsystems. A PPRC path is a logical entity and is defined over a physical connection, which is called a PPRC link or FCP link. PPRC paths are defined automatically by TPC-R when you start the session if the paths do not exist. Note that TPC-R automatically establishes only one PPRC path per LSS pair used in the session. An automatically generated path is shown in TPC-R with the auto-generated flag. You can also manually define a PPRC path in TPC-R between a primary LSS and the corresponding secondary LSS. To define paths manually, select ESS/DS Path on the Health Overview panel. From the ESS/DS Path panel, click the Manage Paths button. This opens the wizard shown in Figure 33-13 on page 566 (a DSCLI sketch of an equivalent path definition appears at the end of this section).
1. In our example, a path is specified between Site 1 and Site 2. The LSSs to be used are LSS 48 for the DS8000 IBM.2107-75V6501 and LSS 49 for the DS8000 IBM.2107-7520781.


Note: For Disaster Recovery reasons, a physical Fibre Channel link should exist between Site 1 and Site 3. If Site 2 goes down, a Global Mirror can be established between Site 1 and Site 3 with incremental resynchronization capability. The same physical Fibre Channel link between Site 1 and Site 3 is also required to re-integrate the failed site(s) after a recovery was done on either Site 2 or Site 3. This link is required by TPC-R to utilize the Metro/Global Mirror incremental resynchronization capabilities and to avoid the recoverability impact of an initial full copy to the site that is reassigned.
2. From the Health Overview panel, select the ESS/DS Path link, which displays the panel shown in Figure 33-12.

Figure 33-12 Path management - Storage subsystems overview

3. Clicking the Manage Paths button guides you through a set of panels to create the desired PPRC paths. The TPC-R GUI assists you in determining which PPRC links actually exist between the selected storage servers and takes you through an easy-to-understand panel sequence.
4. Figure 33-13 shows the first panel in the path management sequence. Here you first select the source storage server. We choose the DS8000 V6501 as the source storage server and the DS8000 20781 as the target storage server.


Figure 33-13 Path Management - specify storage servers and LSSs

5. On the next panel, the available PPRC source and target ports have to be specified. See Figure 33-14. On this panel, additional logical paths for one LSS pair can be defined by using different ports to increase path redundancy. Click Add to accept the source and target port relationship, then click Next.

Figure 33-14 Path Management - Specify the Source Port and Target Port


6. Figure 33-15 shows the final overview panel which displays all selected paths. Continue by clicking Next to display the confirmation panel. The path will be established between source port I0003 and target port I0043.

Figure 33-15 path confirmation panel

7. After completing the last steps, the application goes back to the ESS/DS Paths overview panel seen in Figure 33-16. This panel now shows the source DS8000 with the defined PPRC paths.

Figure 33-16 Path overview panel

For further information about path management, refer to the IBM TotalStorage Productivity Center for Replication Users Guide, SC32-0103.
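As background, the path that TPC-R defines in this sequence corresponds to a PPRC path that could also be created with the DSCLI mkpprcpath command. A minimal sketch, assuming the source and target ports selected above (I0003 and I0043), the LSSs 48 and 49, and the device IDs from Figure 33-1, with <target_WWNN> as a placeholder for the WWNN of the target storage image:

dscli> mkpprcpath -dev IBM.2107-75V6501 -remotedev IBM.2107-7520781 -remotewwnn <target_WWNN> -srclss 48 -tgtlss 49 I0003:I0043

This is only an illustration of what the wizard creates; when TPC-R manages the paths, there is no need to define them manually with the DSCLI.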


33.2.3 Adding Copy Sets to the Metro/Global Mirror session


Now that the session and paths are defined, the volumes can be assigned to the session. In our example, two volumes are replicated from Site 1 to Site 2 with Metro Mirror and then from Site 2 to Site 3 with Global Mirror. At Site 3 we need two additional volumes for the FlashCopy relationship that is required for Global Mirror. We follow these steps:
1. From the Sessions panel, select the previously created Metro/Global Mirror session ITSO_MGM. Select the radio button next to the ITSO_MGM session as shown in Figure 33-17.

Figure 33-17 Select the session ITSO_MGM for adding Copy Sets

2. Select Add Copy Sets as shown in Figure 33-18 to start the Add Copy Set wizard.


Figure 33-18 Select action for MGM session

3. The Host 1 volumes are defined in Figure 33-19. We select the Host 1 (H1) storage subsystem, which is the V6501 storage server. As Host1 logical storage subsystem, select the LSS 48 and then select All Volumes. Clicking Next opens the next panel.

Figure 33-19 Defining the host volume for H1


Importing lists of copy sets from comma-separated values (CSV) files


It is also possible to import copy sets from a CSV file. Select the check box next to Use a CSV file to import copy sets. You can either enter the full path name of the CSV file in the text box, or click Browse to open a file dialog box and select the CSV file. Follow these steps:
1. This feature allows you to import one or many copy sets from a CSV file. It is useful when multiple copy sets have to be assigned to a session. Example 33-1 shows the CSV file for the ITSO_MGM session.
Example 33-1 CSV file example of two copy sets
#ITSO_MGM
#Metro Global Mirror
#Nov 2, 2007 11:17:47 AM
H1,H2,H3,J3
DS8000:2107.V6501:VOL:4800,DS8000:2107.20781:VOL:4900,DS8000:2107.03461:VOL:6000,DS8000:2107.03461:VOL:B000
DS8000:2107.V6501:VOL:4801,DS8000:2107.20781:VOL:4901,DS8000:2107.03461:VOL:6001,DS8000:2107.03461:VOL:B001

2. The Host 2 volumes are defined for the DS8000 20781 storage server as shown in Figure 33-20. For Host2 logical storage subsystem we select the LSS 49 and then select All Volumes. Clicking Next opens the next panel.

Figure 33-20 Defining the host volume for H2

3. The Host 3 volumes are defined for the DS8000 03461 storage server (see Figure 33-21). We select the LSS 60 and then select All Volumes. Clicking Next opens the next panel.


Figure 33-21 Defining the host volume for H3

4. In Figure 33-22, the journal volumes (J3) are defined. Select the Host 3 (H3) storage subsystem, which is the 03461 storage server. For Journal3 logical storage subsystem, select the LSS B0 and then select All Volumes. Click Next to go to the next panel.


Figure 33-22 Defining the journal volumes for H3

7. Select the check boxes next to the copy sets that should be added. You can also click Select All to select all copy sets, or Deselect All to deselect all copy sets; see Figure 33-23. Then click Next.

Figure 33-23 Select Copy Sets


8. Finally, the confirmation panel is displayed and the wizard adds the copy sets to the session.

Figure 33-24 MGM session with Copy Sets

The session overview panel in Figure 33-24 now displays the session ITSO_MGM with two Copy Sets defined. These Copy Sets are just database entries in the TPC-R database; nevertheless, TPC-R verifies that all volumes are available and ready to become active members of the session.

33.2.4 Managing Metro/Global Mirror through the GUI


When a Metro/Global Mirror session has been defined and populated with Copy Sets, the session is ready to start. If specific I/O ports are required for the paths, they should also be specified before the session starts. Follow these steps:
1. To activate the Metro/Global Mirror session, select the action Start H1->H2->H3. This configures the DS8000 subsystems and initiates a Metro Mirror from the local site (H1) to the intermediate site (H2) and a Global Mirror from the intermediate site (H2) to the remote site (H3); see Figure 33-25 on page 574. Click the Go button to start the Metro/Global Mirror session.
Note: Detailed session management and monitoring can be done from the Session Details panel, which is displayed after clicking the session name link.


Figure 33-25 Starting the Metro/Global Mirror session

2. After you click the Go button to start the session, all volumes in the session change from the SIMPLEX state to the DUPLEX PENDING state. TPC-R reflects this pending state with a Warning status and a Preparing state, as shown in Figure 33-26.

Figure 33-26 Session in State: Preparing


3. As soon as the session status is Normal and the state is Prepared, Metro/Global Mirror is running; see Figure 33-27. The session state Prepared means that the source-to-target data transfer is active. In Metro/Global Mirror, this means that data written to the source is transferred to the targets, and all volumes are consistent and recoverable.

Figure 33-27 Metro/Global Mirror is active

33.2.5 Disaster Recovery with TPC-R


Disaster recovery scenarios describe the steps to recover from the loss of a complete site and to restart operations after a disaster or disruption. The following example shows the steps for one disaster recovery scenario: an unplanned outage of the local site (H1) and the intermediate site (H2), with a production move to the remote site (H3).

Unplanned outage of local (H1) and intermediate (H2), with move to (H3)
Follow these steps:
1. Start H1->H2->H3
Issue this command to begin Metro/Global Mirror while production I/O is running on H1. From the drop-down menu, select Start H1->H2->H3 and click Go, as shown in Figure 33-28. As soon as the session is in the Prepared state, the volumes are recoverable.


Figure 33-28 Start Metro/Global Mirror Session

2. Unplanned outage of H1 and H2
An unplanned outage of H1 and H2 has occurred, which causes the session to go into the Severe status. The session is recoverable if it was in the Prepared state when the outage occurred. If the state was Preparing, the session consistency is questionable, which is indicated by Recoverable = No. From the drop-down menu, select Suspend and click Go; see Figure 33-29.


Figure 33-29 Unplanned outage of H1 and H2

3. Recover H3
This command makes H3 target-available. In our example, it makes the ATS DS8000 03461 volumes usable and establishes change recording on the hardware for session ITSO_MGM. From the drop-down menu, select Recover H3 and click Go. When the session state is Target Available at H3, production can restart on H3. This is shown in Figure 33-30.

Figure 33-30 Target H3 is available for production


4. Start H3->H1->H2
Issue this command when H1 and H2 are available and ready to be brought back into the configuration. Changes from production at the remote site (H3) flow to the local site (H1) and then to the intermediate site (H2). From the drop-down menu, select Start H3->H1->H2 and click Go. The H3-H1 and H1-H2 pairs are Global Copy. This is an extended distance copy, and the pairs do not become consistent until a Suspend is manually issued against the session. This is shown in Figure 33-31.

Figure 33-31 Start H3->H1->H2

5. Suspend
This command brings the Global Copy relationships H3->H1 and H1->H2 to full duplex and then suspends the H3->H1 relationship in a consistent way (Freeze/Unfreeze). From the drop-down menu, select Suspend and click Go. When this step is complete, the session state is Suspended, leaving a recoverable copy of the data for session ITSO_MGM; see Figure 33-32.


Figure 33-32 Suspended

6. Recover
This command makes H1 target-available. When the session state is Target Available at H1, production can run on H1 again. From the drop-down menu, select Recover and click Go. The DS8000 V6501 volumes become usable again and TPC-R establishes change recording on the hardware for session ITSO_MGM; see Figure 33-33.

Figure 33-33 Recover H1


7. Start H1->H2->H3
This command restores the original configuration. From the drop-down menu, select Start H1->H2->H3 and click Go. It initiates the copying of data from DS8000 V6501 to ATS DS8000 20781 and ATS DS8000 03461 for session ITSO_MGM, overwriting any data on ATS DS8000 20781 and ATS DS8000 03461 for any inactive copy sets.
In this sample scenario, several TPC-R steps were shown: first the failover to the remote site (H3), and then the restoration of the original configuration. Compared to the steps that would have been necessary with the DS command-line interface, Copy Services management with TPC-R is less complex. For more detailed Metro/Global Mirror scenarios, refer to the IBM TotalStorage Productivity Center for Replication User's Guide, SC32-0103.
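As a rough illustration of that comparison, the single TPC-R action Recover H3 corresponds at the DS CLI level to at least a Global Copy failover issued against the H3 volumes, along the lines of the following hedged sketch. The volume ranges come from the copy sets in Example 33-1; the full storage image IDs are assumed to follow the form used in the DS CLI examples later in this book, and the exact parameters depend on the configuration. The complete manual procedure would additionally involve verifying Global Mirror consistency and handling the J3 FlashCopy volumes, all of which TPC-R automates.

# Hedged sketch only: make the H3 (03461) volumes usable after the outage of H1 and H2
failoverpprc -dev IBM.2107-7503461 -remotedev IBM.2107-7520781 -type gcp 6000-6001:4900-4901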


Chapter 34. MGC Incremental Resync


In this chapter we explain and illustrate the Incremental Resynchronization (or Incremental Resync) feature when it is used in a Metro/Global Copy environment.


34.1 Overview
The Metro/Global Copy scenario is not intended for production use as such. However, this combination of Metro Mirror and Global Copy is quite useful when you have an existing Metro Mirror configuration and you plan to replace the secondary storage subsystem with a new one (see Figure 34-1). Of course, you could simply delete the old pairs and create new ones on the new secondary storage system. But a complete resynchronization, particularly for a large-capacity system, might take too long, especially if your disaster recovery requirements do not allow for such a long time without data protection.

Figure 34-1 Migration scenario with Metro/Global Copy (existing Metro Mirror from local site A to the old system at remote site B, Global Copy established from B to the new system at remote site C, new Metro Mirror from A to C established with resync, and finally removal of the old B system)

The Metro/Global Mirror license is required for the storage system at the local site A. For this migration scenario, we cannot use the procedure outlined in Chapter 32, MGM Incremental Resync on page 511. The Metro/Global Mirror implementation requires the maintenance of several bitmaps to keep track of the changed tracks at the primary, intermediate, and remote sites. Global Mirror plays an important role because it resets the bits in the bitmap when data is secured at the remote site. With Metro/Global Copy, there is no component that resets these bits. Therefore, we have to follow another procedure if we want to resynchronize a copy operation from the A volumes to the C volumes.

34.2 Creating a Global Copy relationship between B and C


First we must install and configure the new C storage subsystem. We also have to establish paths from the B LSSs to the C LSSs, but this step is not shown here. We then create the cascaded Global Copy relationships between the volumes in the old Metro Mirror secondary box (B volumes) and the corresponding volumes in the new storage system (C volumes). Refer to Example 34-1.


Example 34-1 Setting up the Global Copy relationship
mkpprc -dev IBM.2107-7503461 -remotedev IBM.2107-75ABTV1 -type gcp -cascade B000:C000

Note that we used the -cascade parameter. Before we can proceed, we have to wait until the initial copy phase from the B to C volumes has completed.
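One way to monitor the progress of this initial copy is to query the pair status and the remaining out-of-sync tracks with lspprc. The following is a hedged sketch; in particular, the -l option for the long listing that includes the out-of-sync track count is an assumption to be verified against your DS CLI level.

# Hedged sketch: watch the B to C Global Copy pair until the out-of-sync track count approaches zero
lspprc -dev IBM.2107-7503461 -remotedev IBM.2107-75ABTV1 -l B000:C000

For a Global Copy pair the state remains Copy Pending; the initial copy phase is essentially complete when the reported out-of-sync track count stays near zero.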

34.3 Modifying an existing Metro Mirror relationship


In a normal Metro Mirror relationship, an out-of-sync bitmap is maintained when a pair is suspended so that a resync can be done later. For Metro/Global Mirror or Metro/Global Copy, however, another bitmap is required, which we call the incremental resync bitmap. Hence, when we plan to do a resync from A to C, we have to modify the existing Metro Mirror relationship to create that bitmap. This is shown in Example 34-2. The command can be applied to an active Metro Mirror relationship; there is no need to suspend the Metro Mirror relationship between the participating primary and secondary devices before using this command.
Example 34-2 Modifying an existing Metro Mirror relationship for incremental resync
mkpprc -dev IBM.2107-7520781 -remotedev IBM.2107-7503461 -type mmir -mode nocp -incrementalresync enablenoinit A000:B000

Note that we specified the parameters -mode nocp -incrementalresync enablenoinit. In a Metro/Global Mirror relationship you would specify -incrementalresync enable. The -incrementalresync option determines how the incremental resync bitmaps are initialized. The parameter -incrementalresync enable initializes the bitmap such that everything must be copied. In Metro/Global Mirror, it is the Global Mirror function at the B site that informs the A site about updates that have been sent to the C site, so Global Mirror resets the A site's bitmaps to indicate that the data has been copied. When updates to the A volumes occur, the incremental resync bitmaps at A reflect this, and when the updates have arrived at the C volumes, Global Mirror resets these bits for the A volumes.
Because we do not have Global Mirror running at the B site, there is nothing to reset the incremental resync bits for the A volumes. When -incrementalresync enablenoinit is used, the bitmaps for the A volumes are initialized to indicate that nothing has to be copied. When updates to the A volumes occur, they are reflected in the bitmaps, and the corresponding tracks are transmitted to the C volumes when we later do an incremental resync from A to C. These bits are not reset; changed bits accumulate until we do the incremental resync. Therefore, timing is crucial when we plan to do an incremental resync in a Metro/Global Copy relationship. We have to modify the existing relationship shortly before we switch from the A-to-B pairs to the A-to-C pairs. If we modify the existing Metro Mirror relationship a long time before the switch, the amount of data that must be resynchronized will be high.


When we issued the command to modify the Metro Mirror relationship with the -incrementalresync enablenoinit parameter, we told the system to assume that all A, B, and C volumes contain the same data. However, there might still be some data in flight, so you should wait some time before switching the pairs to ensure that in-flight updates have reached the C volumes.

34.4 Suspending Metro Mirror between A and B


Now we are ready for the switch. We have to suspend the Metro Mirror relationships (see Example 34-3).
Example 34-3 Suspending the Metro Mirror relationship
pausepprc -dev IBM.2107-7520781 -remotedev IBM.2107-7503461 A000:B000

34.5 Failover from C to B


To make the C volumes usable, we have to perform a failover operation from C to B. This puts the C volumes in a primary suspended state, while the status of the B volumes is not changed (see Example 34-4).
Example 34-4 Failover from C to B
failoverpprc -dev IBM.2107-75ABTV1 -remotedev IBM.2107-7503461 -type gcp -cascade C000:B000

From now on, the B volumes are no longer needed. The volumes, however, are still in a Metro Mirror state, and paths still exist from the B system to the C system. You should delete the pairs and remove the paths before removing the system.

34.6 Incremental resync from A to C


Now we can do the incremental resync from A to C. Before we can do this, we have to establish paths from A to C (not shown here). Then we establish pairs between the A and C volumes with the option -incrementalresync override (see Example 34-5).
Example 34-5 Incremental resync from A to C
mkpprc -dev IBM.2107-7520781 -remotedev IBM.2107-75ABTV1 -type mmir -incrementalresync override A000:C000

We now have Metro Mirror pairs between A and C volumes and the old storage subsystem can be removed.
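Before the B system is physically removed, the new A-to-C pairs can be verified and the leftover relationships and paths that still involve the old system can be cleaned up with the DS CLI. The following is only a hedged sketch: the remote WWNN is a placeholder, the LSS pair B0:C0 is simply inferred from the volume IDs used in this chapter, and the exact rmpprc and rmpprcpath options should be verified against your DS CLI documentation.

# Hedged sketch: verify the new A to C Metro Mirror pair
lspprc -dev IBM.2107-7520781 -remotedev IBM.2107-75ABTV1 A000:C000
# Remove the remaining B to C relationship and the B to C path (placeholder WWNN)
rmpprc -dev IBM.2107-7503461 -remotedev IBM.2107-75ABTV1 B000:C000
rmpprcpath -dev IBM.2107-7503461 -remotedev IBM.2107-75ABTV1 -remotewwnn 5005076303FFxxxx B0:C0

Any suspended pairs that still reference the B volumes can be removed in the same way with rmpprc before the old subsystem is taken out of the configuration.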


Part 9. Copy Services with System i


In this part of the book we examine the DS8000 Copy Services when used in a System i environment. We discuss the interoperability of the various Copy Services functions on the DS8000, and also the interoperability of the DS8000 with other IBM System Storage and TotalStorage disk subsystems in Copy Services implementations.


Chapter 35. Copy Services with System i5


In this chapter we describe business continuity solutions for System i5 that are based on the DS8000 Copy Services. In this chapter, System i refers to the entire product line, including older iSeries models, while System i5 refers only to models based on the POWER5 processor. We differentiate System i5 because some of the solutions we discuss use Boot from SAN, and this function can only be realized on System i5. Other solutions discussed here rely on Independent Auxiliary Storage Pools (IASPs), and these can be implemented on older models as well. Only in instances where we want to point out a characteristic that applies to all models do we use the name System i.


35.1 Introduction
System i servers have been able to take advantage of the Copy Services provided by external disk storage (ESS and then DS8000) since 2001. Over time, with evolving System i technologies and new external storage offerings, more and more System i clients have addressed their business continuity requirements with external storage. Today, System i5 clients have a variety of choices for external storage solutions and features. With the DS8000 storage systems, they can use Metro Mirror or Global Mirror for disaster recovery, they can replicate all disks or just application data, and they can use FlashCopy to minimize downtime when taking backups.
This chapter presents and describes solutions designed to minimize downtime. First we review the System i5 functions that are relevant to implementing external storage. An understanding of those functions is critical to properly design and manage external storage based solutions and scenarios.
When discussing business continuity solutions for a System i5 audience, we use terms such as continuous operations, high availability, and continuous availability, which are commonly used within the System i community. We have included some formal definitions here.
Definitions:
Continuous Operations (CO): The attribute of a system to continuously operate and mask planned outages from end users. It employs nondisruptive hardware and software changes, nondisruptive configuration, and software coexistence.
High Availability (HA): The attribute of a system to provide service during defined periods, at acceptable or agreed upon levels, and mask unplanned outages from end users. It employs fault tolerance, automated failure detection, recovery, bypass reconfiguration, testing, and problem and change management.
Continuous Availability (CA): The attribute of a system to deliver nondisruptive service to the end user 7 days a week, 24 hours a day (there are no planned or unplanned outages).
Continuous Operations + High Availability = Continuous Availability


35.2 System i5 functions and external storage


To better understand solutions using System i5 and DS storage systems, it is necessary to have basic knowledge of the System i5 functions and features that enable external storage implementation and usage. The following functions are discussed in this section:
- System i5 structure
- Single-level storage
- Input output processors (IOPs)
- Clusters
- Independent Auxiliary Storage Pools (IASPs)

35.2.1 System i5 structure


System i5 is the newest generation of System i (formerly iSeries) servers. It is based on the POWER5 processor. A System i5 can run one or multiple logical partitions, managed by a hypervisor that resides in main memory. A partition can host one of the following operating systems: i5/OS (formerly OS/400), Linux, or AIX. Partitions are configured and managed through a Hardware Management Console (HMC) that is connected to the System i5 through an Ethernet connection. System i5 partitions and the HMC are depicted in Figure 35-1.

Figure 35-1 System i5 partitions (i5/OS, AIX, and Linux partitions running on SLIC and firmware, with a desktop or rack-mounted Hardware Management Console attached over a private or public network)

In this chapter we discuss solutions with i5/OS partitions and DS systems. In the remainder of this chapter, we refer to an i5/OS partition in System i5 simply as a partition.


35.2.2 Single-level storage


For storage, the System i5 with i5/OS uses the same architectural concept already used by iSeries and AS/400: single-level storage. This means that the System i5 sees all disk space and the main memory as one storage area, and uses the same set of virtual addresses to cover both main memory and disk space. Paging in this virtual address space is performed in 4 KB pages. Single-level storage is depicted in Figure 35-2.

Figure 35-2 Single-level storage (the i5/OS partition addresses main memory and disk space as one single-level storage area)

When an application performs an input output (I/O) operation, the portion of the program that contains the read or write instructions is first brought into main memory, where the instructions are then executed. For a read request, the virtual addresses of the needed record are resolved, and for each needed page, storage management first checks whether it is in main memory. If the page is there, it is used to resolve the read request. If the corresponding page is not in main memory, it must be retrieved from disk (page fault). When a page is retrieved, it replaces a page that was not recently used; the replaced page is swapped to disk.
Similarly, writing a new record or updating an existing record is done in main memory, and the affected pages are marked as changed. A changed page remains in main memory until it is swapped to disk as a result of a page fault. Pages are also written to disk when a file is closed or when a write to disk is forced by a user through commands and parameters. Database journals are also written to disk.
An object in System i5 is anything that exists and occupies space in storage and on which operations can be performed. For example, a library, a database file, a user profile, and a program are objects in System i5.

35.2.3 Input Output Processors


An Input Output Processor (IOP) is a special card in the System i5 to which input output adapters for disk, tape, LAN communication, and other similar devices are attached. The function of the IOP is to provide a certain degree of management of I/O operations, and consequently to offload some of the I/O-related work from the System i5 central processor.
Fibre Channel (FC) adapters, which are used to connect external storage, attach to IOPs, and the I/O operations to and from external storage are managed by the IOPs. Some of the functions performed by an IOP with an attached FC adapter include translating a request for a block of data so that the adapter can understand it, and informing the adapter of the location of a data block in main memory. The IOP concept is shown in Figure 35-3.
Figure 35-3 Input Output Processors (IOPs): in the System i5 partition, FC adapters attach to IOPs on the PCI-X bus below the RIO loop, and the adapters connect through the SAN to the DS external storage

35.2.4 Clusters
Many System i5 continuous availability solutions that use external storage are based on clusters and independent disk pools (independent auxiliary storage pools). Clusters provide continuous availability in several ways: they use techniques such as switched disks, replicating data with cross-site mirroring, or replicating data with external disk systems using the Copy Services functions they provide. Continuous availability solutions from ISVs also use System i5 clusters. A System i5 cluster is a group of one or more systems or logical partitions that work together as a single system. The basic concepts related to clusters include cluster nodes, cluster resource groups, and domains:
A cluster node is either a system or a logical partition that is a member of the cluster. When you create a cluster, you specify the systems or logical partitions that you want to include in the cluster as nodes. The primary node is the cluster node that is the primary point of access for the resilient cluster resource.

A backup node is a cluster node that takes over the role of primary access if the present primary node fails or a manual switchover is initiated.
A cluster resource group (CRG) is an object in System i5 that represents a set of cluster resources that are used to manage events that occur in a clustered environment. Different types of CRGs are used to represent different resources. For example, application CRGs are used to handle applications at different cluster events, data CRGs are used for data files, and device CRGs are used for devices. Within these types of CRGs there are two common elements: a recovery domain and an exit program.
A recovery domain defines the role of each node in the CRG. When you create a CRG in a cluster, the CRG object is created on all nodes specified to be included in the recovery domain. A recovery domain specifies the order of recovery for the nodes in the cluster.
A device domain is a subset of nodes in a cluster that share device resources. Nodes that are in a device domain can participate in the recovery of a group of devices.
The elements of a cluster are shown in Figure 35-4.

Figure 35-4 Elements of a System i5 cluster (cluster, cluster nodes, device domain, cluster resource groups A, B, and C, recovery domain, and cluster resources such as an IASP)

Once the cluster is set up, the necessary information is maintained on all cluster nodes through heartbeating, which runs over an IP connection among the nodes. If a system outage or a site loss occurs, the functions that are provided on a system or partition within a cluster can be accessed through other systems or partitions that have been defined in the cluster. When maintenance is needed on the production partition, another node in the cluster can take over the resources of the production partition and continue the production work. This functionality is achieved through cluster events such as failover, switchover, replication, and rejoin. Solutions that are based on the Copy Services provided by external storage systems and Independent Auxiliary Storage Pools (IASPs) require that all involved systems are in a cluster. For more information about System i5 clusters, refer to the iSeries Information Center at the following Web page:
http://publib.boulder.ibm.com/iseries/


Also refer to the Redbooks publication, Clustering and IASPs for Higher Availability on the IBM eServer iSeries Server, SG24-5194.

35.2.5 Independent Auxiliary Storage Pools (IASPs)


System i has a rich storage management heritage. From the start, the System i platform made managing storage simple through the use of disk pools. For most customers, this meant a single pool of disks called the System Auxiliary Storage Pool (ASP). Automatic use of newly added disk units, RAID protection, automatic data spreading, load balancing, and performance management make this single disk pool concept the right choice. However, for many years customers have found a need for additional storage granularity, including the need to sometimes isolate data into a separate disk pool. Since the first AS/400, many customers have used user ASPs to isolate disk units for journaling data (logging), saving data to disk, and isolating slower, denser, less costly disk units for archive needs. User ASPs provide the same automation and ease-of-use benefits as the system ASP, but provide additional storage isolation when needed.
With software level Version 5, IBM took this storage granularity option a huge step forward with the availability of Independent Auxiliary Storage Pools (IASPs). An IASP is a collection of disk units that can be brought online or taken offline independently of the rest of the storage on a system. An IASP can be switched between systems or logical partitions. Unlike user ASPs, IASPs coupled with i5/OS clusters allow granular replication based on the IASP, using storage options such as i5/OS Cross Site Mirroring or IBM Storage Systems Copy Services. IASPs represent a strategic direction for System i storage management. IASPs help keep the System i promise of simplified IT management, while addressing the need for more storage granularity, flexibility, and storage options. ASPs and IASPs in System i5 are shown in Figure 35-5.

Figure 35-5 IASPs in System i5 (the system ASP, disk pool 1, and the traditional user ASPs, basic pools 2-32, together form sysbas; independent ASPs use pool numbers 33-255)

A system or partition always has a system ASP where vital system data resides. It can also have user ASPs and IASPs. In a partition with IASPs, we use the term sysbas, which refers to the system ASP plus the user ASPs. When talking about solutions in such a partition, we usually distinguish between the IASPs and sysbas. IASPs can be on internal disks in the System i5, or they can be on external storage (containing volumes from an ESS, DS6000, or DS8000). When an IASP resides on external storage, it can be replicated by DS6000 or DS8000 Copy Services. The copy of the IASP can then be varied on to another System i or partition in the cluster, making the applications and data in that IASP available to the other system. Continuous availability solutions for IASPs residing on DS6000 or DS8000 external storage are based on Metro Mirror or Global Mirror of the IASP volumes. Solutions for minimizing downtime for backups are based on FlashCopy of the IASP volumes.


35.3 Metro Mirror for an IASP


This section describes Metro Mirror operations with an Independent Auxiliary Storage Pool (IASP). In a System i5 environment, Metro Mirror is typically used to replicate data over distances of up to 50 km. For longer distances between the local and remote sites, we recommend using Global Mirror. For more information about Metro Mirror, refer to Part 4, Metro Mirror on page 171. For more information about IASPs, refer to Independent Auxiliary Storage Pools (IASPs) on page 593.
When describing the functions of solutions based on DS Copy Services, we use the terms planned outage, unplanned outage, and switchover. We have included some formal definitions here.
Note: During a planned outage (also called a scheduled outage), you deliberately make your system unavailable to users. You might use a scheduled outage to run batch work, back up your server, or apply fixes. An unplanned outage (also called an unscheduled outage) is usually caused by a failure. You can recover from some unplanned outages (such as a disk failure, system failure, power failure, program failure, or human error) by using backups or a disaster recovery solution. An unplanned outage that causes a complete system loss, such as a tornado or fire, requires a disaster recovery solution and a detailed disaster recovery plan in place in order to recover. A switchover is a manual process initiated by the user to move server functions to the backup server during a planned outage of the primary server. This process involves quiescing the application environment and system functions on the primary server and initiating the application and system resources on the backup system. A switchover can take anywhere from minutes to hours, depending on the complexity of the infrastructure and application environment.

35.3.1 Solution description


Our setup for this solution consists of a local partition and a remote partition in separate System i5 servers; both partitions are grouped in a cluster. Critical application data resides in an IASP in the local partition, and the IASP contains volumes that reside on an external disk system. A Metro Mirror relationship is established between the volumes in that IASP and volumes in another, remote disk system to which the remote partition has access. The Metro Mirror secondary volumes (in other words, the exact copy of the production IASP) can be varied on to the recovery partition. Note that only the IASP on both partitions must reside on external storage; sysbas in each partition can be on internal disks or on an external disk system. For more information about sysbas, refer to Independent Auxiliary Storage Pools (IASPs) on page 593.


Figure 35-6 illustrates the setup for Metro Mirror of an IASP.

Figure 35-6 Metro Mirror of an IASP (the production and recovery partitions are connected for clustering, and the production IASP volumes are replicated to the recovery site IASP with Metro Mirror)

This solution provides continuous availability for planned and unplanned outages. For each planned or unplanned outage, the Metro Mirror copy of the production IASP is made available to the remote partition, which continues to run the production application from the data in the mirrored IASP. When the planned outage is finished or the cause of the unplanned outage has been fixed, the Metro Mirror direction is reversed (from the secondary volumes to the primary volumes) to transfer the data updated in the remote IASP back to the local partition. When the transfer is complete, the Metro Mirror relationship is established again in the original direction (from the primary to the secondary volumes) and production can resume at the primary site. This solution is implemented with the System i Copy Services Toolkit, available for purchase from the System i Client Technology Center in Rochester.
Note: The System i Copy Services Toolkit is a software product that is installed on all partitions in a FlashCopy or PPRC clustered environment.
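Under the covers, the toolkit drives the DS CLI; the scripts it generates (shown later in Figure 35-22 and Figure 35-23) contain commands of this kind. The following lines are only a hedged sketch of how the Metro Mirror relationship for the IASP volumes might be established manually: the remote WWNN and the port pair are placeholders, the LSS value 12 is simply derived from the volume IDs 1200-1204, and only the storage image IDs and the volume range are taken from the generated script shown in Figure 35-23.

# Hedged sketch; placeholder WWNN and port pair, LSS derived from volume IDs 1200-1204
mkpprcpath -dev IBM.1750-6877801 -remotedev IBM.1750-13ADLWA -remotewwnn 500507630EFFxxxx -srclss 12 -tgtlss 12 I0001:I0101
mkpprc -dev IBM.1750-6877801 -remotedev IBM.1750-13ADLWA -type mmir 1200-1204:1200-1204

In practice, the toolkit builds and runs the equivalent DS CLI scripts for you, so these commands are shown only to make the underlying mechanism visible.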

35.3.2 Solution benefits


Metro Mirror of an IASP offers several benefits:
- Recovery basically takes only as long as is needed to vary on the IASP. Recovery time is therefore significantly shorter than with solutions that use Boot from SAN, or an external load source, where an IPL is needed for recovery. Recovery time with Metro Mirror of an IASP is comparable to the recovery time of high availability solutions provided by ISVs. Note that Boot from SAN is the IBM implementation of an external boot disk in System i; in this implementation the boot disk is connected through a special input output processor (IOP) and an FC adapter in the System i5. For more information about Boot from SAN, refer to the Redbooks publication, IBM System Storage DS6000 Series: Architecture and Implementation, SG24-6781. An external load source (boot disk) is also implemented by competitors' external storage solutions, although it does not use the Boot from SAN IOP.
- Due to the synchronous copying of writes, Metro Mirror always has some impact on production performance. With this solution we only use Metro Mirror for the data in the IASP, so the impact on performance is relatively small. In contrast, solutions with Boot from SAN or an external load source replicate the entire partition disk space, including temporary files, job queues, and so on, and the performance degradation of the production system can be more significant.
- This solution only affects the performance of write operations, and it does not use any System i5 CPU resources. This is different from some high availability implementations provided by other vendors, where there is a performance impact due to the CPU usage that is part of their solution.
- With this solution we can run a non-production workload, such as test or development tasks, in the remote partition. This is again different from an implementation using Boot from SAN, where the recovery partition has to be in standby, typically without any workload.
- The System i Copy Services Toolkit that is used for this solution provides full automation of the steps performed at planned or unplanned outages. All the needed steps for failover and failback are initiated by one command in a System i5 partition.
- Once this solution is established, it requires very little maintenance and administration. Therefore some customers might prefer it to other high availability solutions available for System i5.

35.3.3 Planning and requirements


The following points should be taken into account when planning for Metro Mirror of an IASP:
- The customer application must be implemented with the IASP before installing the System i Copy Services Toolkit. The customer can engage IBM services for this. In any case, the application setup in an IASP must be well planned before establishing the Metro Mirror. For a successful implementation, it is essential to contact the Rochester Toolkit team well in advance. Alternatively, you can contact Advanced Technical Support.
Note: For information about the Toolkit, contact the Client Technology Center via the following Internet page: http://www-03.ibm.com/servers/eserver/services/iseriesservices.html
Solutions that use the System i Copy Services Toolkit cannot be sold to the customer without prior approval by the Client Technology Center.
- The solution requires a System i and a DS6000, DS8000, or ESS at both the local and remote sites. It also requires appropriate FC connections between the DS systems for Metro Mirror and connections between the System i5 servers for cluster communication via TCP/IP.
- Each IASP has its own Fibre Channel attachment cards. This is valid for both System i5 and earlier System i models such as the 8xx and 270.
- A Metro Mirror license on both the local and remote disk systems is required.
- i5/OS software for clustering (5722-SS1 option 41) must be installed in both the production and recovery partitions. This prerequisite applies to System i5 and to earlier System i models such as the 8xx and 270.
- The following are software prerequisites on both System i5 and earlier System i models such as the 8xx and 270:
  i5/OS licensed product JDK 1.4.2 (5722-JV1 Option 6) and the latest Java group PTF
  i5/OS licensed product Crypto Access Provider 128-bit (5722-AC3)
  i5/OS software Portable Application Solutions Environment (PASE) (5722-SS1 Option 33)

  i5/OS licensed product IBM HTTP Server (5722-DG1)
  i5/OS licensed product Digital Certificate Manager (5722-SS1 Option 34)
  The i5/OS licensed product Secure Shell, SSH (5733-SC1), is needed on System i5. However, it is not needed on earlier System i models.
- Careful sizing of the links for Metro Mirror is needed. Follow these steps to size the Metro Mirror links (for detailed information, refer to the Redbooks publication, iSeries and IBM TotalStorage: A Guide to Implementing External Disk on IBM eServer i5, SG24-7120); a short worked example follows after this list:
  a. Collect System i5 performance data. It is best to collect it over one week and, if needed, during a heavy workload, such as when running end-of-month jobs.
  b. If the customer has already established the IASP, observe in the performance reports the number of writes that go to the IASP. If the IASP is not set up yet, observe the number of writes that go to the database.
  c. Multiply the writes to the IASP by the reported transfer size to get the write rate (MB/sec) for the entire period of collecting performance data.
  d. Look for the highest reported write rate.
  e. Size the number of Metro Mirror links so that their bandwidth can accommodate the highest write rate.

The Client Technology Center can perform an in-depth analysis of bandwidth requirements by using testing workload PEX if more accurate sizing is needed.
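As a purely hypothetical illustration of this calculation (the numbers are invented, not measured values): if the performance reports show a peak of 2,500 writes per second to the IASP at an average transfer size of 14 KB, the peak write rate is approximately

   2,500 writes/sec x 14 KB per write = 35,000 KB/sec, or roughly 35 MB/sec

The Metro Mirror links would then be sized so that their combined effective bandwidth comfortably exceeds this peak of roughly 35 MB/sec.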

35.3.4 Considerations
The following points should be considered with this solution:
- An application in System i5 typically consists of different object types, such as programs, data files, data areas, spool files, user profiles, and others. After the application is set up in an IASP, the majority of the application objects reside in the IASP. However, some objects can reside in sysbas. For example, the data files that belong to the application go into the IASP, while the programs and job queues can stay in sysbas. It is necessary to keep the application objects in sysbas on the production and recovery partitions in sync, so that the application can run in the recovery partition after the Metro Mirror copy of the production IASP is made available to it. As experienced with many applications, maintenance of sysbas is rather simple and straightforward: it is achieved by i5/OS change control, or by installing new application versions on both partitions at the same time. However, some customers might want to use other ways of maintaining sysbas, such as regularly saving the objects in the production partition and restoring them in the recovery partition, or using a limited version of an ISV solution. For more information about i5/OS change control, refer to the iSeries Information Center at the following Web page and look for Management Central: http://publib.boulder.ibm.com/infocenter/iseries/v5r4/index.jsp
- i5/OS release V5R4 brings a significant improvement in replicating the i5/OS user profiles that reside in sysbas. With this release it is possible to define a new type of cluster domain, the administrative domain, which enables changes to user profiles to be automatically propagated from the local sysbas to the remote sysbas. For more information about cluster domains, refer to Clusters on page 591.
- We strongly recommend journaling the application objects when using Metro Mirror for an IASP, or even Metro Mirror of the entire disk space.


Important: When using Copy Services functions such as Metro Mirror, Global Mirror, or FlashCopy for the replication of the load source unit or other i5/OS disk units within the same DS6000/DS8000 or between two or more DS6000 and DS8000 systems, the source volume and the target volume characteristics must be identical: the target and source must have matching capacities and matching protection types. Additionally, once a volume is assigned to an i5/OS partition and added to that partition's configuration, its characteristics must not be changed. If there is a requirement to change some characteristic of a configured volume, it must first be completely removed from the i5/OS configuration. After the characteristic changes are made (for example, protection type or capacity), by destroying and recreating the volume or by utilizing the DS CLI, the volume can be reassigned to the i5/OS configuration. To simplify the configuration, we recommend a symmetrical configuration between the two IBM Storage systems, creating the same volumes with the same volume IDs, which determine the LSS IDs.

35.3.5 Implementation and usage


This solution is implemented by services that come with the System i Copy Services Toolkit. The following steps are performed by the toolkit services to implement the solution:
1. Enable the System i5 Hardware Management Console (HMC) to communicate with the Toolkit.
2. Install the needed software products on both the production and recovery System i5.
3. Configure communication between the Toolkit code and the HMC.
4. Configure the needed TCP/IP server.
5. Set up the production and remote partitions in a cluster.
6. Create the IASP on the recovery partition.
7. Install the toolkit programming code.
8. Create the needed cluster resources.
9. Install and configure the DS CLI on System i5, and create the needed DS CLI profile, password file, and DS CLI scripts on System i5.
After the toolkit is installed and tested, you can perform switchover and failback during planned and unplanned outages by using toolkit commands. The following toolkit commands are used:
swpprc - You use this command with the parameter *scheduled or *unscheduled, depending on whether a planned or unplanned outage is being handled. The swpprc command initiates a sequence of steps in the Toolkit by which the required actions (switchover, failback, and other actions) are achieved.
You can also check whether the solution is in the correct status by using the System i5 chkpprc and dspessdta commands from the toolkit. The results of Toolkit actions at switch time and when checking status can be observed in the Toolkit log by using the viewlog command. The listed commands are described later in this section.
When such a System i5 command is executed, the Toolkit triggers the different tasks by communicating with the System i5 HMC, performing System i5 commands within the partition, and communicating with the external disk systems by using the DS CLI. This is illustrated in Figure 35-7.


Figure 35-7 Functioning of the System i Copy Services Toolkit (the Toolkit on each node of the iSeries cluster, device domain, and data CRG communicates with the HMCs over SSH and TCP/IP and with the DS or ESS systems through the DS CLI; the IASPs are attached through Fibre Channel IOAs and linked by the PPRC or FlashCopy relationship)

Next we describe the sequence of steps required for a switchover, for checking the Metro Mirror relationship status, and for checking the toolkit setup. Note that in the examples shown next, the partition ITCHA2 is the local partition, ITCHA3 is the remote partition, and the IASP name is DS6000.

Checking the solution components


You might want to check the status of the following solution components:
- IASP
- Metro Mirror
- Toolkit
- DS CLI scripts
- Toolkit log

Checking IASP status


To observe the logical volumes in the IASP, use the System i5 graphical interface known as iSeries Navigator, which is part of the iSeries Access for Windows licensed product. Instructions on how to install iSeries Access for Windows can be found in the iSeries Information Center located at: http://publib.boulder.ibm.com/infocenter/iseries/v5r4/index.jsp In iSeries Navigator, make a connection to the System i5 that contains the IASP. Note: The System i5 user ID and password are required to connect to the System i5.


After the connection is established, look for the system in the left pane of the iSeries Navigator window and expand it. Then expand Configuration and Service -> Hardware -> Disk Units -> Disk Pools. Note that to expand the disk units you need a System i5 Service Tools user ID and password. Observe that IASPs are numbered 33 and higher, while traditional disk pools (user ASPs) have numbers below 33. Click the relevant IASP to show its disks in the right pane. This is shown in Figure 35-8.

Figure 35-8 Checking status of IASP

To check the status of the IASP, you can also use the i5/OS wrkcfgsts command. After entering this command, press F4, which brings up a screen where you can insert parameters. For Type, insert *dev; for Configuration description, insert the name of the IASP; leave the Output parameter as *, as shown in Figure 35-9 on page 602, and press Enter twice. Attention: In the example shown in Figure 35-9, the name of the IASP is DS6000. Be aware that DS6000 does not indicate a specific disk system here; it is only the name given to the System i5 disk pool.


Work with Configuration Status (WRKCFGSTS) Type choices, press Enter. Type . . . . . . . . . . . . . . Configuration description . . . Output . . . . . . . . . . . . . *dev ds6000 * *NWS, *NWI, *LIN, *CTL, *DEV Name, generic*, *ALL, *CMN... *, *PRINT

F3=Exit F4=Prompt F5=Refresh F13=How to use this display

F10=Additional parameters F24=More keys

Bottom F12=Cancel

Figure 35-9 wrkcfgsts

The next screen, shown in Figure 35-10, displays the name and status of the IASP.

Work with Configuration Status                                 ITCHA2
Position to . . . . .        Starting characters      06/25/06  11:51:29

Type options, press Enter.
1=Vary on   2=Vary off   5=Work with job   8=Work with description
9=Display mode status   13=Work with APPN status...

Opt   Description   Status       -------------Job--------------
      DS6000        AVAILABLE

                                                                  Bottom
Parameters or command
===>
F3=Exit   F4=Prompt   F12=Cancel   F23=More options   F24=More keys

Figure 35-10 Status of IASP

Checking the Toolkit setup


To observe the currently used Toolkit options, execute the dspessdta command on either the production or remote partition of a System i5. To achieve this, establish a Telnet connection using IBM Personal Communications, with the appropriate System i5 partition. For more information about how to use IBM Personal Communications to Telnet to a System i5, refer to the iSeries Information Center located at: http://publib.boulder.ibm.com/infocenter/iseries/v5r4/index.jsp


When the Telnet connection is established, specify the System i5 user ID and password to sign in. Type dspessdta in the command line of the System i5 Telnet screen, as shown in Figure 35-11, and press Enter. * MAIN Select one of the following: 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. User tasks Office tasks General system tasks Files, libraries, and folders Programming Communications Define or change the system Problem handling Display a menu Information Assistant options iSeries Access tasks OS/400 Main Menu System: ITCHA1

90. Sign off Selection or command ===> dspessdta F3=Exit F4=Prompt F9=Retrieve F23=Set initial menu F12=Cancel F13=Information Assistant

Figure 35-11 dspessdta

This brings up the screen shown in Figure 35-12 where you insert the IASP name and then press Enter. Change or Display ESS IASP Data Type IASP name, press Enter. Independent ASP Name .. . . . . DS6000 Name

F1=Help

F3=Exit

F12=Cancel

Figure 35-12 IASP name


A new screen now displays the IASP name and other options or status information, such as the Metro Mirror direction, auto-start of the cluster after a link failure, and so on. This is shown in Figure 35-13. We recommend using this command and reviewing the options and status before performing a switchover. After you have checked the values displayed for the different fields, press F3 or F12 to exit this screen and return to the i5/OS command line.
Display ESS IASP Data Press Enter to continue. Independent ASP Name . . Copy type . . . . . . . Request type . . . . . . FlashCopy status . . . . PPRC status . . . . . . PPRC direction . . . . . Auto start cluster . . Enable FlashCopy scripts Enable PPRC scripts . . Automatic PPRC Replicate Multipath . . . . . . . Warm FlashCopy . . . . . Device CRG name . . . . Wait time . . . . . . . Message Queue . . . . . Library . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . : : : : : : : : : : : : : : : : DS6000 *BOTH 0 *NONE *READY *NORMAL *YES *YES *YES *YES *YES *YES 60 *SYSOPR

Bottom F1=Help F3=Exit F12=Cancel

Figure 35-13 Display Toolkit options

Checking the Metro Mirror status


To verify that the DS storage systems are in the correct status prior to a switchover, use the System i5 chkpprc command and specify the IASP name, as shown in Figure 35-14. We recommend running the chkpprc command regularly to be sure that the DS systems are in the correct status for switching to the remote site. You can execute this command in either the local or the recovery partition.


MAIN Select one of the following: 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11.

i5/OS Main Menu System: ITCHA2

User tasks Office tasks General system tasks Files, libraries, and folders Programming Communications Define or change the system Problem handling Display a menu Information Assistant options iSeries Access tasks

90. Sign off Selection or command ===> CHKPPRC ds6000 F3=Exit F4=Prompt F9=Retrieve F23=Set initial menu Figure 35-14 chkpprc F12=Cancel F13=Information Assistant

During execution of this command, the Toolkit checks connection to the DS system and displays online information at the bottom of the screen, as shown in Figure 35-15.
MAIN Select one of the following: 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. User tasks Office tasks General system tasks Files, libraries, and folders Programming Communications Define or change the system Problem handling Display a menu Information Assistant options iSeries Access tasks i5/OS Main Menu System: ITCHA2

90. Sign off Selection or command ===> CHKPPRC ds6000 F3=Exit F4=Prompt F9=Retrieve F12=Cancel F13=Information Assistant F23=Set initial menu Checking lspprc DSCLI script for production node. Figure 35-15 Checking PPRC


Finally, you are presented with a message that PPRC was successfully checked, or you are informed that the check failed. This is shown in Figure 35-16.
MAIN Select one of the following: 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. User tasks Office tasks General system tasks Files, libraries, and folders Programming Communications Define or change the system Problem handling Display a menu Information Assistant options iSeries Access tasks i5/OS Main Menu System: ITCHA2

90. Sign off Selection or command ===> CHKPPRC ds6000 F3=Exit F4=Prompt F9=Retrieve F12=Cancel F13=Information Assistant F23=Set initial menu A PPRC check for IASP CRG DS6000 completed successfully. Figure 35-16 chkpprc completed

You can also check results in the log after the chkpprc command has been executed. To achieve this, enter the viewlog command at the System i5 command line as shown in Figure 35-17.
MAIN Select one of the following: 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. User tasks Office tasks General system tasks Files, libraries, and folders Programming Communications Define or change the system Problem handling Display a menu Information Assistant options iSeries Access tasks i5/OS Main Menu System: ITCHA2

90. Sign off Selection or command ===> viewlog F3=Exit F4=Prompt F9=Retrieve F23=Set initial menu Figure 35-17 viewlog F12=Cancel F13=Information Assistant


You are presented with the Toolkit log, where you can see the results of checking the Metro Mirror status, as shown in Figure 35-18.
Edit File: /tmp/qzrdiash.log Record : 1721 of 1733 by Control :

Column :

246 by 74

CMD ....+....1....+....2....+....3....+....4....+....5....+....6....+....7....+ Fri Jun 23 08:01:16 2006 workWithHmcProfile: FYI, /QIBM/Qzrdiash5/hmc/DS600 "21020003/none/1,21030002/none/1,21050002/none/0,21010003/none/1,21030003/ Fri Jun 23 08:01:16 2006 doPPRCScript: Executing: QDSCLI/DSCLI SCRIPT('/QIB Fri Jun 23 08:01:25 2006 checkThoseResults: Processing file /QIBM/Qzrdiash5 Fri Jun 23 08:01:25 2006 checkThoseResults: Strings Full Duplex -Target Ful Fri Fri Fri Fri Jun Jun Jun Jun 23 23 23 23 08:01:25 08:01:25 08:01:33 08:01:33 2006 2006 2006 2006 doPPRCScript: Results ok. doPPRCScript: Executing: QDSCLI/DSCLI SCRIPT('/QIB checkThoseResults: Processing file /QIBM/Qzrdiash5 checkThoseResults: Strings Target Full Duplex Metr

Fri Jun 23 08:01:33 2006 doPPRCScript: Results ok. Fri Jun 23 08:01:33 2006 chkpprc: SUCCESSFUL, ready for the SWPPRC command. ************End of Data********************

F2=Save F3=Save/Exit F12=Exit F15=Services F17=Repeat change F19=Left F20=Right Figure 35-18 Toolkit log

F16=Repeat find

Checking DS CLI profile


To check the DS CLI profile in the System i5 partition, perform the following steps: 1. Enter the viewprof command. This brings up a screen where you can insert the IASP name, as shown in Figure 35-19. Press Enter.
View Profiles (VIEWPROF) Type choices, press Enter. Independent ASP name . . . . . . System name . . . . . . . . . . ds6000 *LOCAL Name Character value

F3=Exit F4=Prompt F5=Refresh F24=More keys Parameter IASPNAME required. Figure 35-19 Viewprof

F12=Cancel

Bottom F13=How to use this display

2. Next you are presented with the screen showing profiles used in the partition. Look for the profile you want to check and type 5 in the Opt field next to that profile name, as is shown in Figure 35-20.


Directory: /qibm/qzrdiash5/profiles/DS6000 Position to : Record : 1 of 4 New File : 2=Edit 4=Delete File 5=Display 6=Path Size 9=Recursive Delete Opt Name 5 pprc_A.profile pprc_B.profile pprc_Asec.dat pprc_Bsec.dat Size 8K 8K 8K 8K Owner QSECOFR QSECOFR QSECOFR QSECOFR Changed 06/22/06 06/22/06 06/22/06 06/22/06 Used 07/06/06 07/05/06 07/05/06 07/05/06

17:02 17:02 10:58 10:58

11:22 10:34 10:32 10:34

Bottom F3=Exit F5=Refresh F12=Cancel F22=Display entire field Figure 35-20 DS CLI profiles on i5/OS F16=Sort F17=Position to

3. Press Enter to get the screen showing the selected DS CLI profile, as shown in Figure 35-21. Browse : /qibm/qzrdiash5/profiles/DS6000/pprc_A.profile Record : 1 of 82 by 14 Column : Control :

68 by

79

....+....1....+....2....+....3....+....4....+....5....+....6....+....7....+.... ************Beginning of data************** # # DS CLI Profile # # # Management Console/Node IP Address(es) # hmc1 and hmc2 are equivalent to -hmc1 and -hmc2 command options. hmc1: 9.5.110.66 #hmc2: 127.0.0.1 pwfile: /qibm/qzrdiash5/profiles/DS6000/pprc_Asec.dat # # Default target Storage Image ID F3=Exit F10=Display Hex F19=Left F20=Right
Figure 35-21 Display DS CLI profile

F12=Exit

F15=Services

F16=Repeat find


Checking a DS CLI script


To check a DS CLI script in a System i5 partition, enter the viewscript command with the IASP name as the parameter. You are presented with a screen listing the DS CLI scripts used in this partition. Look for the script you want to examine and type 5 in the Opt field next to this script, as is shown in Figure 35-22. Press Enter.
Directory: /qibm/qzrdiash5/scripts/DS6000

2=Edit   4=Delete File   5=Display   6=Path Size   9=Recursive Delete

Opt  Name
     lspprc_A.script
     lspprc_B.script
     <overtask_A_1.script
     <overtask_A_2.script
     <overtask_B_1.script
     <overtask_B_2.script
     mkpprc_A.script
     pausepprc_A.script
     resumepprc_A.script
     lspprcpath_A.script
 5   pausepprc_B.script
     resumepprc_B.script
     lspprcpath_B.script
     lspprc_A.result
                                                              More...
Figure 35-22 Select DS CLI script on i5/OS

The selected script is displayed, as shown in Figure 35-23.

Browse : /qibm/qzrdiash5/scripts/DS6000/pausepprc_B.script
************Beginning of data**************
# Generated: Thu Jun 22 16:23:10 2006
pausepprc -dev IBM.1750-6877801 -remotedev IBM.1750-13ADLWA 1200-1204:1200-1204
************End of Data********************

Figure 35-23 DS CLI script on i5/OS


Switch to remote site at planned outages


To switch the production system to the remote site in planned outage situations, use the swpprc command in the remote partition and press F4, as shown in Figure 35-24.
MAIN                          OS/400 Main Menu
                                                    System:   ITCHA1
Select one of the following:
     1. User tasks
     2. Office tasks
     3. General system tasks
     4. Files, libraries, and folders
     5. Programming
     6. Communications
     7. Define or change the system
     8. Problem handling
     9. Display a menu
    10. Information Assistant options
    11. iSeries Access tasks
    90. Sign off

Selection or command
===> swpprc

Figure 35-24 swpprc

This brings up a screen where you can insert the IASP name and specify the parameter *scheduled or *unscheduled. Specify *scheduled when you are switching because of a planned outage, or *unscheduled when you are switching because of an unplanned outage. Press Enter to execute the command. This is shown in Figure 35-25.
Switch PPRC (SWPPRC)

Type choices, press Enter.

Independent ASP name . . . . . .   ds6000        Name
Switch type  . . . . . . . . . .   *SCHEDULED    *SCHEDULED, *UNSCHEDULED...

Figure 35-25 Parameter scheduled


The swpprc toolkit command executes all the required actions to switch over to the IASP at the recovery site, for both systems and external storage. The following actions take place during the execution of swpprc *scheduled:
1. Run the chkpprc command.
2. Vary off the IASP on the local partition.
3. Perform Metro Mirror failover to the secondary site. For more information about Metro Mirror failover, refer to 14.3, Failover and failback on page 179.
4. Perform Metro Mirror failback from the secondary to the primary site. For more information about Metro Mirror failback, refer to 14.3, Failover and failback on page 179.
5. Vary on the IASP for the remote partition.
6. By using System i5 logical partitioning functions, remove IOPs and adapters logically from the former production system and modify the description of the local System i5 partition.

During its execution, the swpprc command displays messages about the currently executed step at the bottom of the screen, as shown in Figure 35-26.
Switch PPRC (SWPPRC)

Type choices, press Enter.

Independent ASP name . . . . . .   ds6000        Name
Switch type  . . . . . . . . . .   *SCHEDULED    *SCHEDULED, *UNSCHEDULED...

Checking lspprc DSCLI script for production node.

Figure 35-26 Messages about progress-2

Some other messages shown during execution of swpprc are as follows:
- Sent inquiry message to production system operator.
- Starting DS CLI Failover of ITCHA2 to ITCHA3.
- Starting DS CLI Replicate of ITCHA3 to ITCHA2.
- Adding IOAs to the backup HMC Profile.
- Removing IOAs from the production cluster node.
- Removing IOAs from the production HMC Profile.
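Conceptually, the Metro Mirror failover and failback that the toolkit performs in steps 3 and 4 of the list above correspond to the failoverpprc and failbackpprc DS CLI commands. The following lines are only an illustrative sketch, with made-up device IDs and volume ranges; the toolkit generates and runs its own DS CLI scripts, so the commands used in your environment will differ:

# Sketch only - example device IDs and volume range, not the toolkit's generated scripts
# Issued against the remote (secondary) DS system:
failoverpprc -dev IBM.2107-75XXXX2 -remotedev IBM.2107-75XXXX1 -type mmir 1200-1204:1200-1204
# Start replicating changes from the remote site back to the original production site:
failbackpprc -dev IBM.2107-75XXXX2 -remotedev IBM.2107-75XXXX1 -type mmir 1200-1204:1200-1204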


After the swpprc *scheduled command has completed, you can use the viewlog command to check whether it was successfully executed. An example is shown in Figure 35-27.
Edit File: /tmp/qzrdiash.log

Fri Jun 23 09:13:22 2006 workWithHmcProfile: FYI, issued command: chsyscfg
  -r prof -m "ITCHA3" -i "name=ITCHA3,lpar_name=ITCHA3,io_slots=\\\
Fri Jun 23 09:14:23 2006 startDasdMgmtOperaton: First of 2 ASP records retu
Fri Jun 23 09:14:23 2006 startDasdMgmtOperaton: MultiPath has been reset fo
Fri Jun 23 09:14:23 2006 startDasdMgmtOperaton: Record 2 of 2 ASP records r
Fri Jun 23 09:14:23 2006 startDasdMgmtOperaton: MultiPath has been reset fo
Fri Jun 23 09:15:40 2006 addRemoveHmcDevices: Removed device at bus 2, slot
Fri Jun 23 09:15:40 2006 addRemoveHmcDevices: Removed a total of 1 device(s
Fri Jun 23 09:15:41 2006 workWithHmcProfile: FYI, /QIBM/Qzrdiash5/hmc/DS600
  "21030002/none/0,21020003/none/1,21050002/none/0,21010003/none/1,21040002/
Fri Jun 23 09:15:41 2006 workWithHmcProfile: FYI, device on ITCHA2 bus 2 sl
Fri Jun 23 09:15:53 2006 workWithHmcProfile: FYI, issued command: chsyscfg
  -r prof -m "ITCHA2" -i "name=ITCHA2,lpar_name=ITCHA2,io_slots=\\\
Fri Jun 23 09:15:53 2006 swpprc: SUCCESSFUL.
************End of Data********************

Figure 35-27 Log of SWPPRC

You can check the status of the IASP copy, which is now available at the remote partition. Proceed as follows:
1. In the remote partition, enter the command wrkcfgsts cfgtype(*dev) cfgd(ds6000). This brings up a screen showing the IASP status. After a successful switchover, the IASP should have a status of AVAILABLE, as shown in Figure 35-28.
Work with Configuration Status                        System: ITCHA3
Position to  . . . . .                Starting characters

Opt   Description   Status        -------------Job--------------
      DS6000        AVAILABLE

Figure 35-28 Metro Mirror copy of IASP after switchover

2. Since varying off the production IASP is done as a part of swpprc, the IASP is no longer available to the local partition after the switchover. By using the command wrkcfgsts cfgtype(*dev) cfgd(ds6000) in the production partition, we can verify that the production IASP is no longer available. This is shown in Figure 35-29.


Work with Configuration Status                        System: ITCHA2
                                                      06/23/06  08:58:54
Position to  . . . . .                Starting characters

Type options, press Enter.
  1=Vary on   2=Vary off   5=Work with job   8=Work with description
  9=Display mode status   13=Work with APPN status...

Opt   Description   Status        -------------Job--------------
      DS6000        VARIED OFF

Figure 35-29 Original IASP after switchover

3. You can now observe the toolkit status and options by entering the command dspessdta with the IASP name (DS6000 in our case) as the parameter. As shown in Figure 35-30, the direction of Metro Mirror is now reversed, and the current production partition is ITCHA3, which was originally the recovery partition.
Display ESS IASP Data

Press Enter to continue.

  Independent ASP Name . . . . :   DS6000
  Copy type  . . . . . . . . . :   *BOTH
  Request type . . . . . . . . :   0
  FlashCopy status . . . . . . :   *NONE
  PPRC status  . . . . . . . . :   *READY
  PPRC direction . . . . . . . :   *REVERSED
  Current Production node  . . :   ITCHA3
  Auto start cluster . . . . . :   *YES
  Enable FlashCopy scripts . . :   *YES
  Enable PPRC scripts  . . . . :   *YES
  Automatic PPRC Replicate . . :   *YES
  Multipath  . . . . . . . . . :   *YES
  Warm FlashCopy . . . . . . . :   *YES
  Device CRG name  . . . . . . :
  Wait time  . . . . . . . . . :   60
  Message Queue  . . . . . . . :   *SYSOPR
  Library  . . . . . . . . . . :

Figure 35-30 dspessdta after switchover

Switch back to local site after planned outages


After a planned outage, switch back to the production system by issuing the swpprc command in the production partition. After issuing the command, a screen is displayed where you can specify how to switch back: *scheduled for planned outages or *unscheduled for unplanned outages. Specify *scheduled, as shown in Figure 35-31.


Switch PPRC (SWPPRC)

Type choices, press Enter.

Independent ASP name . . . . . .   ds6000        Name
Switch type  . . . . . . . . . .   *SCHEDULED    *SCHEDULED, *UNSCHEDULED...

Parameter IASPNAME required.

Figure 35-31 swpprc back to production

During execution of this command, the following actions are performed:
1. Run the chkpprc command.
2. Vary off the IASP on the recovery system.
3. Perform Metro Mirror failover to the primary site.
4. Perform Metro Mirror failback from the primary to the secondary site.
5. Vary on the IASP on the production system.
6. By using the System i5 logical partitioning functions, remove IOPs and adapters logically from the recovery system and modify the System i5 remote partition description.

While swpprc executes, messages indicating progress with the above actions appear at the bottom of the screen. When the swpprc execution has completed, you can examine the toolkit log to see if switchback to production was successful. To do this, use the viewlog command on the production partition. An example of a log after switchback is shown in Figure 35-32.
Edit File: /tmp/qzrdiash.log

Fri Jun 23 09:38:17 2006 startDasdMgmtOperaton: First of 2 ASP records retu
Fri Jun 23 09:38:17 2006 startDasdMgmtOperaton: MultiPath has been reset fo
Fri Jun 23 09:38:17 2006 startDasdMgmtOperaton: Record 2 of 2 ASP records r
Fri Jun 23 09:38:17 2006 startDasdMgmtOperaton: MultiPath has been reset fo
Fri Jun 23 09:39:39 2006 addRemoveHmcDevices: Removed device at bus 2, slot
Fri Jun 23 09:39:39 2006 addRemoveHmcDevices: Removed a total of 1 device(s
Fri Jun 23 09:39:40 2006 workWithHmcProfile: FYI, /QIBM/Qzrdiash5/hmc/DS600
  "21020003/none/1,21030002/none/1,21050002/none/0,21010003/none/1,21040002/
Fri Jun 23 09:39:40 2006 workWithHmcProfile: FYI, device on ITCHA3 bus 2 sl
Fri Jun 23 09:39:52 2006 workWithHmcProfile: FYI, issued command: chsyscfg
  -r prof -m "ITCHA3" -i "name=ITCHA3,lpar_name=ITCHA3,io_slots=\\\
Fri Jun 23 09:39:52 2006 swpprc: SUCCESSFUL.
************End of Data********************

Figure 35-32 Toolkit log after successful switchback


Check the status of the production IASP by entering the command wrkcfgsts cfgtype(*dev) cfgd(ds6000) on the local partition. The status should be AVAILABLE, as shown in Figure 35-33.
Work with Configuration Status                        System: ITCHA2
                                                      06/23/06  10:07:21
Position to  . . . . .                Starting characters

Type options, press Enter.
  1=Vary on   2=Vary off   5=Work with job   8=Work with description
  9=Display mode status   13=Work with APPN status...

Opt   Description   Status        -------------Job--------------
      DS6000        AVAILABLE

Figure 35-33 Status of IASP on production after switchback

You can also check the toolkit status by entering the dspessdta command with the IASP name as the parameter. As shown in Figure 35-34, the direction of the Metro Mirror is now normal.
Display ESS IASP Data

Press Enter to continue.

  Independent ASP Name . . . . :   DS6000
  Copy type  . . . . . . . . . :   *BOTH
  Request type . . . . . . . . :   0
  FlashCopy status . . . . . . :   *NONE
  PPRC status  . . . . . . . . :   *READY
  PPRC direction . . . . . . . :   *NORMAL
  Auto start cluster . . . . . :   *YES
  Enable FlashCopy scripts . . :   *YES
  Enable PPRC scripts  . . . . :   *YES
  Automatic PPRC Replicate . . :   *YES
  Multipath  . . . . . . . . . :   *YES
  Warm FlashCopy . . . . . . . :   *YES
  Device CRG name  . . . . . . :
  Wait time  . . . . . . . . . :   60
  Message Queue  . . . . . . . :   *SYSOPR
  Library  . . . . . . . . . . :

Figure 35-34 Toolkit status after switchback to production

The application in the local partition can now resume work with the local IASP.


Re-trying SWPPRC
If for any reason executing swpprc fails before completion, you must run it again with the *retry parameter to complete the switchover or switchback process.

Switch to remote site at unplanned outages


To perform a switch during unplanned outages, use swpprc with the parameter *unscheduled. This command attempts to complete all the needed steps for a scheduled switch, but it allows the switch to happen even if the following errors are detected:
- Production node HMC failure
- Production node failure
- Production DS/ESS failure (in other words, if the failback task cannot be run)

An unscheduled switch is always an incomplete switch because of the failures. The command swpprc with the parameter *complete must be run once the failures have been corrected. This completes the Metro Mirror failover. These commands are described next.

Use the following steps to switch to the remote site in unplanned outage situations:
1. At the failure of the production site, use the command swpprc *unscheduled in the remote partition, as shown in Figure 35-35.
Switch PPRC (SWPPRC)

Type choices, press Enter.

Independent ASP name . . . . . . > DS6000          Name
Switch type  . . . . . . . . . .   *unscheduled    *SCHEDULED, *UNSCHEDULED...

Figure 35-35 swpprc *unscheduled


Shortly after the command starts executing, the message Waiting for reply to message on message queue QSYSOPR appears, as shown in Figure 35-36.

Switch PPRC (SWPPRC)

Type choices, press Enter.

Independent ASP name . . . . . . > DS6000          Name
Switch type  . . . . . . . . . .   *unscheduled    *SCHEDULED, *UNSCHEDULED...

Waiting for reply to message on message queue QSYSOPR.

Figure 35-36 Message Waiting for reply

2. To reply to the message perform the following:
a. Make sure that a window with the Telnet session is open, press and hold the Shift key, and press the Esc key. This brings up a line at the bottom of the Telnet session. Make sure that the cursor is located on this line and press Enter.
b. This brings up a screen named System Request. In this screen, specify option 6, Display system operator messages, as shown in Figure 35-37.
System Request
                                                    System:   ITCHA3
Select one of the following:
     1. Display sign on for secondary job
     2. End previous request
     3. Display current job
     4. Display messages
     5. Send a message
     6. Display system operator messages
     7. Display work station user
    80. Disconnect job
    90. Sign off

Selection
     6

Figure 35-37 Display operator message


c. This brings up the screen with operator messages. In this screen confirm that you want to perform an unscheduled switch by replying g to the message An Unscheduled SWPPRC command was issued for IASP device DS6000? (G C), as shown in Figure 35-38. After replying to the message, press F3 twice to get back to the original Telnet screen.
Display Messages
Queue . . . . . :   QSYSOPR                        System:     ITCHA3
  Library . . . :   QSYS               Program . . . . :   *DSPMSG
Severity  . . . :   99                 Delivery  . . . :   *HOLD

Type reply (if required), press Enter.
  (earlier system messages omitted)
  An Unscheduled SWPPRC command was issued for IASP device DS6000? (G C)
    Reply . . .   g

Figure 35-38 Confirming Unscheduled switch by replying the message

After you confirm the unscheduled switch, the command continues to execute. During execution, messages indicating the actions performed are displayed at the bottom of the screen. The messages are similar to the ones shown in Switch to remote site at planned outages on page 610.


3. Once the command swpprc *unscheduled has completed, use the command dspessdta with the IASP name as the parameter to display the Metro Mirror status. Output of this command is shown in Figure 35-39. Observe that the Metro Mirror status is incomplete. The command dspessdta is described in Checking the Toolkit setup on page 602.
Display ESS IASP Data

Press Enter to continue.

  Independent ASP Name . . . . :   DS6000
  Copy type  . . . . . . . . . :   *BOTH
  Request type . . . . . . . . :   0
  FlashCopy status . . . . . . :   *NONE
  PPRC status  . . . . . . . . :   *INCOMPLETE
  PPRC direction . . . . . . . :   *REVERSED
  Current Production node  . . :   ITCHA3
  Auto start cluster . . . . . :   *YES
  Enable FlashCopy scripts . . :   *YES
  Enable PPRC scripts  . . . . :   *YES
  Automatic PPRC Replicate . . :   *NO
  Multipath  . . . . . . . . . :   *YES
  Warm FlashCopy . . . . . . . :   *YES
  Device CRG name  . . . . . . :
  Wait time  . . . . . . . . . :   10
  Message Queue  . . . . . . . :   *SYSOPR
  Library  . . . . . . . . . . :

Figure 35-39 dspessdta after unscheduled switch

You can also look at the toolkit log to check for successful execution of the swpprc *unscheduled command. For information about how to view the log, refer to Checking the Toolkit setup on page 602.
4. After the local site is operational again, perform the command swpprc *complete at the local site to complete the switch. The command is shown in Figure 35-40.
Switch PPRC (SWPPRC)

Type choices, press Enter.

Independent ASP name . . . . . . > DS6000          Name
Switch type  . . . . . . . . . . > *COMPLETE       *SCHEDULED, *UNSCHEDULED...

Figure 35-40 SWPPRC *COMPLETE

During the command execution, the message Waiting for reply to message on message queue QSYSOPR is displayed at the bottom of the screen, as shown in Figure 35-41.


Switch PPRC (SWPPRC)

Type choices, press Enter.

Independent ASP name . . . . . .   ds6000          Name
Switch type  . . . . . . . . . .   *complete       *SCHEDULED, *UNSCHEDULED...

Waiting for reply to message on message queue QSYSOPR.

Figure 35-41 Waiting for reply to operator message

5. To answer the message, open another telnet session with the local partition, enter the command dspmsg qsysopr, and reply g. This is illustrated in Figure 35-42.
Display Messages
Queue . . . . . :   QSYSOPR                        System:     ITCHA2
  Library . . . :   QSYS               Program . . . . :   *DSPMSG
Severity  . . . :   99                 Delivery  . . . :   *HOLD

Type reply (if required), press Enter.
  (earlier system messages omitted)
  An Unscheduled SWPPRC command was issued for IASP device DS6000? (G C)
    Reply . . :   G
  Perform an automated Replicate from current Production node ITCHA3 to
    current Backup node ITCHA2 for IASP device DS6000? (G C)
    Reply . . .   g

Figure 35-42 Confirm replicate from remote site to local site

6. After the command swpprc *complete has completed, the switch to the remote site is fully done. To check, use the command dspessdta on the local or remote partition. Verify that the Metro Mirror direction is reversed, and that the remote partition ITCHA3 is currently the production node. This is shown in Figure 35-43.


Display ESS IASP Data

Press Enter to continue.

  Independent ASP Name . . . . :   DS6000
  Copy type  . . . . . . . . . :   *BOTH
  Request type . . . . . . . . :   0
  FlashCopy status . . . . . . :   *NONE
  PPRC status  . . . . . . . . :   *READY
  PPRC direction . . . . . . . :   *REVERSED
  Current Production node  . . :   ITCHA3
  Auto start cluster . . . . . :   *YES
  Enable FlashCopy scripts . . :   *YES
  Enable PPRC scripts  . . . . :   *YES
  Automatic PPRC Replicate . . :   *NO
  Multipath  . . . . . . . . . :   *YES
  Warm FlashCopy . . . . . . . :   *YES
  Device CRG name  . . . . . . :
  Wait time  . . . . . . . . . :   10
  Message Queue  . . . . . . . :   *SYSOPR
  Library  . . . . . . . . . . :

Figure 35-43 Status after switch is completed

7. You can also check the status of IASP in both the local and the remote partition by using the wrkcfgsts command, as described in Checking the Toolkit setup on page 602. Check that the IASP at the remote site is available and that the IASP at the local site is not available (varied off).

Switchback to local site at unplanned outages


To accomplish the switchback to the local site in an unplanned outage situation, issue the command swpprc *scheduled in the local partition. This command performs the actions needed to establish the local partition as production, and establish Metro Mirror from the local to the remote DS system. For instructions on how to run this command, refer to Switch back to local site after planned outages on page 613.


35.4 Global Mirror for an IASP


This section describes Global Mirror operations with an Independent Auxiliary Storage Pool. In a System i5 environment, Global Mirror is typically considered for distances greater than 50 km. Since the performance impact of Global Mirror on production is very small, this mode of replication can be used for replicating IASPs, or an entire disk space with Boot from SAN. For information about Global Mirror with Boot from SAN, refer to Global Mirror for the entire disk space on page 637. For more information about Global Mirror, refer to Part 6, Global Mirror on page 301. For more information about IASPs, refer to Independent Auxiliary Storage Pools (IASPs) on page 593.

35.4.1 Solution description


This solution consists of a System i5 local partition and a System i5 remote partition grouped in a cluster. Critical application data reside in an IASP in the local partition, with the IASP located on volumes of an external disk system. A Global Mirror relationship for the IASP volumes is established with a remote disk system. The System i5 remote partition has access to this remote disk system. In other words, the Global Mirror secondary volumes (the copy of the production IASP volumes) can be varied on for the remote partition. For more information about System i5 clusters, refer to Clusters on page 591.

Note that it is only necessary for the IASP on both partitions (local and remote) to reside on external storage. The sysbas of each partition can be on internal disks or on external disk systems. For more information about sysbas, refer to Independent Auxiliary Storage Pools (IASPs) on page 593.

The solution is implemented with the System i Copy Services Toolkit, available for purchase from the System i Client Technology Center in Rochester. For more information about the Toolkit, refer to Solution description on page 595 and Implementation and usage on page 599.

The solution provides continuous availability in case of planned and unplanned outages at the local partition. When a planned outage is scheduled for the local partition, the following actions are done by the Toolkit:
a. Synchronize Global Copy on the secondary volumes.
b. Vary on the IASP (Global Mirror secondary volumes) at the remote partition.
c. Once the IASP is available to the remote partition, start Global Copy to replicate the IASP back to the local partition.
d. After the planned outage is finished, vary on the local IASP, which now contains the updates that were made at the recovery site, at the local partition.
e. Start the replication again in the original direction.

For unplanned outages the toolkit performs the following actions:
a. Ensure that the Global Mirror secondary volumes are consistent and as up-to-date as possible by performing a fast reverse restore from the FlashCopy target volumes to the Global Mirror secondary volumes.
b. Vary on the IASP (Global Mirror secondary volumes) at the remote partition, where the production application continues to run using this IASP.


c. When the local site is back, replicate the updated IASP to the local site using a Global Copy.
d. Vary on the IASP at the local partition, where production can now resume.

An overview of IASP Global Mirror is shown in Figure 35-44.

Figure 35-44 Global Mirror of IASP (the diagram shows the production IASP replicated with Global Mirror to the GM secondary IASP at the recovery site, where a FlashCopy of the secondary volumes is kept; the production and recovery partitions are connected for the cluster)

35.4.2 Solution benefits


Global Mirror of an IASP includes the following benefits:
- Since replication to the remote site is done asynchronously, the impact on application response time is minimal.
- For System i5 clients who require long-distance replication, Global Mirror, thanks to its inherent asynchronous mode, is the most suitable of the long-distance replication offerings.
- Clients can balance RPO against bandwidth: if they can cope with a somewhat bigger RPO, they can use a relatively narrow bandwidth for the links between the two sites. On the other hand, if they use a large bandwidth, the RPO will be very small and the loss of data at recovery is almost negligible.
- Basically, recovery takes as long as is needed to vary on an IASP. This recovery is significantly shorter with Global Mirror than with solutions relying on Boot from SAN, or an external loadsource, where an IPL is needed for recovery.
- In the production partition we run critical applications, while in the remote partition we can run a non-production workload such as testing or development. This is different from an implementation using Boot from SAN, where the recovery partition has to remain in standby, typically without any workload.
- Global Mirror requires a carefully performed sequence of steps during the recovery procedure in unplanned outage situations. For clients with a huge number of volumes, it is virtually impossible to perform these steps manually. The System i Copy Services Toolkit, which is used as part of the solution, provides full automation of all the tasks needed to recover the IASP.
- Once this solution is established it requires very little maintenance and administration. Therefore clients might prefer it to other high availability solutions available for System i5.


35.4.3 Planning and requirements


When clients plan for Global Mirror, their principal considerations are the recovery point objective (RPO) and the needed bandwidth between the local and remote DS systems. Expectations about RPO should be well understood, and careful sizing is needed to provide enough bandwidth to achieve the expected RPO.

To size the Global Mirror links in a System i5 environment, follow these steps (for detailed information, refer to the Redbooks publication, iSeries and IBM TotalStorage: A Guide to Implementing External Disk on eServer i5, SG24-7120):
1. Collect System i5 performance data. It is best to collect it over one week and, if needed, during heavy workload such as when running end-of-month jobs.
2. If the customer has already established an IASP, observe in the performance reports the number of writes that go to the IASP. If the IASP is not set up yet, observe the number of writes that go to the database:
   a. Multiply the writes to the IASP by the reported transfer size to get the write rate (MB/sec) for the entire period of collecting performance data.
   b. Look for the highest reported write rate. Calculate the needed bandwidth as follows (a worked example follows at the end of this section):
      i. Assume 10 bits per byte for network overhead.
      ii. If the compression of the devices for the remote links is known, you can apply it. If it is not known, you can assume a 2:1 compression.
      iii. Assume a maximum 80% utilization of the network.
      iv. Apply a 10% uplift factor to the result to account for peaks within the 5-minute intervals.

The Client Technology Center can perform an in-depth analysis of bandwidth requirements by using the testing workload PEX, if more accurate sizing is needed.

Other planning considerations and requirements for this solution are the same as for the Metro Mirror of an IASP solution described in Planning and requirements on page 597, with the following change: for this solution a Global Mirror license is needed for the local and remote DS systems, and a FlashCopy license is needed for the remote DS system.

Note: For information about the toolkit contact the Client Technology Center via the following Internet page:
http://www-03.ibm.com/servers/eserver/services/iseriesservices.html
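The following worked example illustrates the calculation; all of the numbers are invented for the example. Assume that the performance reports show a peak of 500 writes per second to the IASP with an average transfer size of 16 KB:
1. Peak write rate: 500 writes/sec x 16 KB = approximately 8 MB/sec.
2. Network overhead at 10 bits per byte: 8 MB/sec corresponds to about 80 Mbit/sec.
3. Assuming 2:1 compression in the remote-link devices: 80 / 2 = 40 Mbit/sec.
4. Assuming a maximum of 80% network utilization: 40 / 0.8 = 50 Mbit/sec.
5. Adding the 10% uplift for peaks within the 5-minute intervals: 50 x 1.1 = 55 Mbit/sec.
In this example, roughly 55 Mbit/sec of network bandwidth between the sites would be needed to sustain the peak write rate with the expected RPO.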

35.4.4 Considerations
The same considerations as in Considerations on page 598 for Metro Mirror of an IASP also apply for the Global Mirror of an IASP.

Important: When using Copy Services functions such as Metro Mirror, Global Mirror, or FlashCopy for the replication of the load source unit or other i5/OS disk units within the same DS6000/DS8000 or between two or more DS6000 and DS8000 systems, the source volume and the target volume characteristics must be identical. The target and source must be of matching capacities and matching protection types.


Also, once a volume is assigned to an i5/OS partition and added to that partition's configuration, its characteristics must not be changed. If there is a requirement to change some characteristic of a configured volume, it must first be completely removed from the i5/OS configuration. After the characteristic changes are made, for example, protection type, capacity, and so on, by destroying and recreating the volume or by utilizing the DS CLI, the volume can be reassigned to the i5/OS configuration. To simplify the configuration, we recommend a symmetrical configuration between the two IBM storage systems, creating the same volumes with the same volume IDs (the volume ID determines the LSS ID).

35.4.5 Implementation and usage


This solution is implemented as a service included with the System i Copy Services Toolkit. The installation steps for the Toolkit for Global Mirror are similar to the steps explained for Metro Mirror in Implementation and usage on page 625. After the Toolkit for Global Mirror is installed and tested, you are able to perform the switch to the remote site during planned and unplanned outages.

The switch to the remote site and the failback to the production site are achieved with a series of actions performed by the toolkit. They are started by the toolkit commands swpprc and falovrgmir. The Global Mirror status and the toolkit setup status can be checked by issuing the commands chkpprc and dspessdta, respectively. The results of the toolkit actions during the switch, or for checking the status after the switch, can be observed in the Toolkit log by using the viewlog command.

The Toolkit triggers the different tasks by communicating with the System i5 HMC, performing System i5 commands within the partition, and communicating with the external disk systems by using the DS CLI. This is illustrated in Figure 35-7 on page 600.

Checking status
For a description on how to check the IASP status, refer to Implementation and usage on page 599.
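In addition to the toolkit commands, you can check the state of the Global Mirror environment directly on the DS systems with the DS CLI. The following lines are only an example sketch; the storage image ID, the volume range, and the LSS number are placeholders that you must replace with the values of your own configuration:

# Example status checks (placeholder storage image ID, volumes, and LSS)
# Global Copy pair states of the IASP volumes
lspprc -dev IBM.2107-75XXXX1 1200-1204
# Session membership of the volumes in LSS 12
lssession -dev IBM.2107-75XXXX1 12
# Status of the Global Mirror session whose master runs in LSS 12
showgmir -dev IBM.2107-75XXXX1 12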

Switch to remote site at planned outages


To switch to the remote system at planned outages use the command swpprc *scheduled. For information about how to enter this command refer to Implementation and usage on page 599.

Switch to remote site at unplanned outages


To switch to the remote site in case of a failure of the System i5 or the DS storage at the local site, or a disaster at the local site, use the following commands:
1. In the remote partition perform the command swpprc *unscheduled. This command places the cluster and the IOPs in the partitions in the appropriate status for a failover.
2. In the remote partition enter the Toolkit command falovrgmir. In the Global Mirroring Failover screen, specify the IASP name, as shown in Figure 35-45.


Global Mirroring Failover (FALOVRGMIR)

Type choices, press Enter.

Environment name . . . . . . . .   gmir          Character value

Parameter ENV required.

Figure 35-45 falovrgmir

While executing this command, the toolkit performs the following actions:
a. Run the chkpprc command.
b. Vary off the IASP at the local partition.
c. Fail over Global Copy to the remote site.
d. Check the revertible status of the volumes at the remote site and programmatically create the correct scripts to revert FlashCopy on the remote volumes.
e. Reverse FlashCopy on the remote volumes.
f. Vary on the IASP on the Global Mirror secondary volumes to the remote partition.
3. Once the command falovrgmir has been executed, the production application continues to run in the remote partition with the Global Mirror copy of the IASP.
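The toolkit builds and runs its own DS CLI scripts for these actions. As an orientation only, steps c through e above correspond to a DS CLI sequence of the following kind, shown here with made-up device IDs and volume ranges (the A, B, and C volumes are the local, remote, and FlashCopy target volumes of the Global Mirror configuration); do not take this as the toolkit's actual implementation:

# Conceptual sketch only - not the scripts generated by the toolkit
# Step c: fail over Global Copy at the remote site (the B volumes become suspended primaries)
failoverpprc -dev IBM.2107-75XXXX2 -remotedev IBM.2107-75XXXX1 -type gcp 1200-1204:1200-1204
# Step d: check the state of the FlashCopy relationships between the B and C volumes
lsflash -dev IBM.2107-75XXXX2 1200-1204
# Depending on whether the relationships are revertible, a commitflash or revertflash is
# issued against the B volumes to make them consistent.
# Step e: a fast reverse restore (reverseflash with the -fast option) then copies the
# consistent data from the C volumes back onto the B volumes.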

Switch back to the local site at unplanned outages


Failback to the local partition is achieved by running DS CLI scripts, which are contained in the toolkit.
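As an orientation only, the replication back to the local site (described in 35.4.1) amounts to a Global Copy resynchronization from the remote DS system to the local DS system. A minimal sketch, with made-up device IDs and volume ranges, is shown next; use the scripts delivered with the toolkit for the actual failback:

# Conceptual sketch only - use the DS CLI scripts provided with the toolkit
# Resynchronize the changes made at the remote site back to the local volumes with Global Copy
failbackpprc -dev IBM.2107-75XXXX2 -remotedev IBM.2107-75XXXX1 -type gcp 1200-1204:1200-1204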


35.5 Metro Mirror for the entire disk space


In this section we discuss Metro Mirror for a System i5 partition entire disk space. We also assume that Boot from SAN (BfS) is used for both local and remote partitions. For more information about Metro Mirror refer to Part 4, Metro Mirror on page 171. For more information about Boot from SAN refer to the Redbooks publication, IBM System Storage DS6000 Series: Architecture and Implementation, SG24-6781.

35.5.1 Solution description


Because of the System i5 single level storage architecture, it is required that all disk units for the local partition reside on external storage and that all of them are replicated (with Metro Mirror) to the remote site. In the solution we review here, the loadsource is an external disk, is connected through Boot from SAN IOP, and is also replicated to the remote site using Metro Mirror. We assume that the standby partition in the System i5 (recovery partition) at the remote site has access to the copy of the production volumes (Metro Mirror targets), and that Boot from SAN is done from the Metro Mirror copy of the loadsource. This solution is depicted in Figure 35-46.

Figure 35-46 Metro Mirror of entire disk space (the diagram shows the production and recovery partitions, each using Boot from SAN, with the entire disk space replicated by Metro Mirror from the production DS system to the recovery DS system)

In case of a planned or unplanned outage at the production site, the standby partition at the recovery site uses Boot from SAN to start the system from the Metro Mirror copy of the production volumes. After the boot is completed, the recovery system (standby partition) has access to an exact clone of the production system, including database files, application programs, user profiles, job queues, data areas, and so on. Critical applications can continue to run in this clone of the production system.

After the planned outage is over, or the unplanned outage is fixed, a Metro Mirror copy of all the disk space is performed from the recovery site back to the local site. This copies back to the production system all the updates that occurred during the outage. The original local partition can then be rebooted (Boot from SAN) from the updated primary volumes, so it also gets access to the changes that occurred during the outage.

For more information about System i5 single level storage, refer to Single-level storage on page 590. For more information about Boot from SAN, refer to the Redbooks publication, IBM System Storage DS6000 Series: Architecture and Implementation, SG24-6781.

35.5.2 Solution benefits


This solution offers the following major benefits:
- It can be implemented without any updates or changes to the production i5/OS. If for any reason you are not yet ready to set up your application in an IASP, you can still get the benefits of a Metro Mirror solution by applying it to the partition's entire disk space.
- The solution does not require any special maintenance on the production or standby system partition. Practically, the only required task is to set up the Metro Mirror relations for all the volumes making up the partition's entire disk space. Once that is done, no further actions are required.
- With the Metro Mirror technology and mechanisms, the performance impact on the production system is usually smaller than with similar solutions from other external storage vendors. However, the minimum impact on production performance, even when using Metro Mirror, is achieved when using IASPs.
- Since Metro Mirror works completely within the DS systems, this scenario does not use any processor or memory resources of the production and remote partitions in the System i5. This is different from other i5/OS replication solutions, which use some CPU resources in the production and recovery partitions.

35.5.3 Planning and requirements


The following points should be taken into consideration when planning a Metro Mirror of the entire disk space:
- It is essential to properly size the primary and secondary DS storage systems.
- Careful sizing of the Metro Mirror links is needed. To size the number of links, use the following steps:
  a. Collect System i5 performance data. It is best to collect the data over a one-week period and, if applicable, during heavy workload such as when running end-of-month jobs.
  b. Multiply the writes/sec by the reported transfer size to get the write rate (MB/sec) for the entire period over which the performance data were collected.
  c. Look for the highest reported write rate. Size the number of Metro Mirror links so that their bandwidth can accommodate the highest write rate.
- Since Boot from SAN is used in this solution, the hardware and software requirements for Boot from SAN need to be taken into consideration. We strongly recommend using two external load sources mirrored by i5/OS mirroring, each of them connected via BfS.

For guidelines on how to size external storage for System i5, for information about how to collect System i5 performance data and which reports are needed for sizing external storage, and for the requirements for BfS, refer to the IBM Redbook iSeries and IBM TotalStorage: A Guide to Implementing External Disk on Eserver i5, SG24-7120.

Important: When using Copy Services functions such as Metro Mirror, Global Mirror, or FlashCopy for the replication of the load source unit or other i5/OS disk units within the same DS6000/DS8000 or between two or more DS6000 and DS8000 systems, the source volume and the target volume characteristics must be identical. The target and source must be of matching capacities and matching protection types.

Also, once a volume is assigned to an i5/OS partition and added to that partition's configuration, its characteristics must not be changed. If there is a requirement to change some characteristic of a configured volume, it must first be completely removed from the i5/OS configuration. After the characteristic changes are made, for example, protection type, capacity, and so on, by destroying and recreating the volume or by utilizing the DS CLI, the volume can be reassigned to the i5/OS configuration. To simplify the configuration, we recommend a symmetrical configuration between the two IBM storage systems, creating the same volumes with the same volume IDs (the volume ID determines the LSS ID).

35.5.4 Considerations
Some other important considerations for this solution are as follows:
- When doing a Metro Mirror of all the disk units in the local partition, not only the critical application data but all i5/OS objects (including temporary objects, job queues, message queues, and so on) are synchronously copied to the remote site. This usually impacts production performance more than doing a Metro Mirror of only the objects in an IASP. Some installations can experience significant differences in performance between Metro Mirror of the entire disk space and Metro Mirror of an IASP. Also, it should be noted that Metro Mirror of the entire disk space sometimes causes the System i5 storage management to change the transfer sizes of I/O operations. The transfer size changes can have a significant performance impact on the production partition.
- Within the System i5 single level storage architecture, updates to application data are performed in main memory. Updated database rows usually stay in main memory for some time and are only written to disk as a result of a page swap. However, if a database is journaled, changes to journaled objects are written immediately to the journal receiver on disk. When a switch to the remote site is performed at an unplanned outage, the updates that were still in main memory and not yet written to disk are not reflected at the remote site, since Metro Mirror only replicates what is on disk. However, since the journaled objects are immediately written to disk and replicated by Metro Mirror, they will be available at the remote site. Thus we strongly recommend journaling application objects when using Metro Mirror. This applies whether you are mirroring the entire disk space or just an IASP. For more information about System i5 single level storage refer to Single-level storage on page 590.
- After a remote partition is IPLed from Metro Mirror secondary volumes, it has access to an exact clone of the production partition, including i5/OS attributes and descriptions. Usually some changes need to be made to the cloned partition for applications to run properly on the recovery System i5. For more information about this topic, refer to the IBM Redbook iSeries and IBM TotalStorage: A Guide to Implementing External Disk on Eserver i5, SG24-7120.

35.5.5 Implementation and usage


In this section we describe how to implement Metro Mirror for the entire System i5 disk space and how to use it to achieve Continuous Availability in planned and unplanned outage situations.


Implementation
Since the implementation includes several System i5 related tasks, we recommend that a System i5 specialist be involved in implementing it. The implementation is done according to the following steps:
1. For each of the System i5 production and remote partitions, install FC adapters and IOPs as needed to implement Boot from SAN.
2. On the DS system define logical volumes for the System i5 and connect them to the System i5.
3. Install i5/OS in the local partition or migrate an existing i5/OS to external disk.
4. Restore and start the System i5 production applications.
5. Start Metro Mirror for all System i5 volumes. For more information about how to do this refer to 14.1, Basic Metro Mirror operation on page 178. You can use the DS CLI on System i5 to perform this action (a minimal DS CLI sketch follows after this list).

For more information about the preceding topics refer to the IBM Redbooks iSeries and IBM TotalStorage: A Guide to Implementing External Disk on Eserver i5, SG24-7120, and IBM System Storage DS6000 Series: Architecture and Implementation, SG24-6781.
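For step 5, a minimal DS CLI sketch is shown next. The storage image IDs, the WWNN, the I/O ports, the LSS numbers, and the volume ranges are placeholders only; replace them with the values of your own configuration (mkpprcpath and mkpprc are described in the Metro Mirror chapters of this book):

# Sketch only - placeholder device IDs, WWNN, ports, LSS, and volumes
# Create the Remote Mirror and Copy paths between the local and remote LSS
mkpprcpath -dev IBM.2107-75XXXX1 -remotedev IBM.2107-75XXXX2 -remotewwnn 5005076303FFXXXX -srclss 37 -tgtlss 37 I0001:I0101
# Establish Metro Mirror pairs for all of the System i5 volumes
mkpprc -dev IBM.2107-75XXXX1 -remotedev IBM.2107-75XXXX2 -type mmir 3700-3707:3700-3707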

Check tagging of external loadsource


In order to check the System i5 external loadsource, perform the following steps at the System i5 HMC: In the navigation area of the HMC GUI, expand Server and Partition and click Server Management to bring up the Server and Partition: Server Management panel. In this panel expand Server, expand Partitions, look for the icon of the local partition, expand it, right-click the lower icon (the profile title), and select Properties from the pull-down menu. This brings up the Logical Partition Profile Properties panel. In this panel click the Tagged I/O tab and note the location of the IOP or FC adapter for the loadsource, as shown in Figure 35-47.

Figure 35-47 Tagged loadsource

On the Tagged I/O tab click the Select button at the tagged load source. This brings up the panel Load Source Device. On the Load Source Device panel expand the unit, expand buses, look for the slot that is tagged, select the slot, and click Properties. This is shown in Figure 35-48.


Figure 35-48 External loadsource

This brings up the Physical I/O Properties panel, as shown in Figure 35-49.

Figure 35-49 Properties of loadsource

Click the i5/OS tab. You are presented with the description of the IOP, as shown in Figure 35-50. Note the feature code 2847, which indicates Boot from SAN.


Figure 35-50 IOP for Boot from SAN

Switch to remote site at planned outages


Note: The examples shown next were done on a DS8000, using LSS number 37. When performing these actions on a DS6000, another LSS number will be used since the DS6000 supports only LSS numbers 00 to 1F. Otherwise, the DS CLI scripts used for the DS6000 are the same as shown in these examples.

To achieve a switch to the remote site perform the following steps:
1. Power down the local System i5 partition. This ensures that all the data are written to disk and that the remote partition IPL will be normal.

Note: A normal IPL in System i5 occurs after the system is powered down with the Power Down System (PWRDWNSYS) command and no jobs have ended abnormally. All other IPLs are abnormal for i5/OS. An abnormal IPL takes longer because of the additional recovery and verification needed.

2. By using the DS GUI or DS CLI, list the status of all the volumes of the local partition and the recovery partition. The DS CLI command and a sample result are shown in Figure 35-51. Observe that both the Metro Mirror primary and secondary volumes are in the Full Duplex state.


dscli> lspprc 3700-3707
Date/Time: June 27, 2006 1:21:56 PM CEST IBM DSCLI Version: 5.1.600.260 DS: IBM.2107-7503461
ID        State       Reason Type         SourceLSS Timeout (secs) Critical Mode First Pass Status
===================================================================================================
3700:3700 Full Duplex -      Metro Mirror 37        300            Disabled      Invalid
3701:3701 Full Duplex -      Metro Mirror 37        300            Disabled      Invalid
3702:3702 Full Duplex -      Metro Mirror 37        300            Disabled      Invalid
3703:3703 Full Duplex -      Metro Mirror 37        300            Disabled      Invalid
3704:3704 Full Duplex -      Metro Mirror 37        300            Disabled      Invalid
3705:3705 Full Duplex -      Metro Mirror 37        300            Disabled      Invalid
3706:3706 Full Duplex -      Metro Mirror 37        300            Disabled      Invalid
3707:3707 Full Duplex -      Metro Mirror 37        300            Disabled      Invalid

Figure 35-51 LSPPRC

Note that in the example shown next, we used the same scripts to perform actions on primary and secondary DS systems. This can be done since the volumes on primary and secondary DS have the same volume IDs. The action was performed on one or the other DS by specifying relevant DS CLI profile as part of the DS CLI command. Each profile contains IP addresses and image IDs of the local and remote DS storage systems. Note: The DS CLI commands and scripts can be used from a System i5. To do this, you need a separate partition in the System i5 (not the production partition or recovery partition). The separate partition must have connectivity to both the local and remote DS systems. 3. Suspend Metro Mirror for the local partition volumes. The DS CLI script for suspending the volumes is shown in Figure 35-52. # # suspend Metro Mirror # pausePPRC 3700-3707:3700-3707
Figure 35-52 Script for suspend Metro mirror

The execution of the script and its results are shown in Figure 35-53. The suspend of Metro Mirror is performed on the primary DS system.
C:\Program Files\ibm\dscli>dscli -cfg "c:\R1_dscli.profile" -user admin -script "c:\PausePPRC.cli"
Date/Time: June 27, 2006 2:01:40 PM CEST IBM DSCLI Version: 5.1.600.260 DS: IBM.2107-7503461
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 3700:3700 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 3701:3701 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 3702:3702 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 3703:3703 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 3704:3704 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 3705:3705 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 3706:3706 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 3707:3707 relationship successfully paused.

Figure 35-53 Performing script for suspend Metro Mirror

4. Perform a Metro Mirror failover to the secondary DS system. For more information about failover refer to 14.3, Failover and failback on page 179. The DS CLI script for the failover and the results of running the script are shown in Figure 35-54 and Figure 35-55. The failover is performed on the secondary DS system.


#
# Residency Mainz - failover Metro Mirror
#
failoverPPRC -type mmir 3700-3707:3700-3707

Figure 35-54 Script for failover of Metro Mirror

C:\Program Files\ibm\dscli>dscli -cfg "c:\R2_dscli.profile" -user admin -script "c:\Failover.cli"
Date/Time: June 27, 2006 2:18:51 PM CEST IBM DSCLI Version: 5.1.600.260 DS: IBM.2107-7520781
CMUC00196I failoverpprc: Remote Mirror and Copy pair 3700:3700 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 3701:3701 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 3702:3702 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 3703:3703 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 3704:3704 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 3705:3705 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 3706:3706 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 3707:3707 successfully reversed.

Figure 35-55 Performing script for Failover of Metro Mirror

Observe that both primary volumes (volumes belonging to the production partition) and secondary volumes (belonging to the recovery partition) are in Source suspended state, as shown in Figure 35-56 and Figure 35-57 on page 634. Note that in our examples the DS primary system is 7503461 and the DS secondary system is 7520781.

dscli> lspprc 3700-3707
Date/Time: June 27, 2006 2:28:03 PM CEST IBM DSCLI Version: 5.1.600.260 DS: IBM.2107-7503461
ID        State     Reason      Type         SourceLSS Timeout (secs) Critical Mode First Pass Status
=====================================================================================================
3700:3700 Suspended Host Source Metro Mirror 37        300            Disabled      Invalid
3701:3701 Suspended Host Source Metro Mirror 37        300            Disabled      Invalid
3702:3702 Suspended Host Source Metro Mirror 37        300            Disabled      Invalid
3703:3703 Suspended Host Source Metro Mirror 37        300            Disabled      Invalid
3704:3704 Suspended Host Source Metro Mirror 37        300            Disabled      Invalid
3705:3705 Suspended Host Source Metro Mirror 37        300            Disabled      Invalid
3706:3706 Suspended Host Source Metro Mirror 37        300            Disabled      Invalid
3707:3707 Suspended Host Source Metro Mirror 37        300            Disabled      Invalid

Figure 35-56 Status of primary volumes after suspend and failover

dscli> lspprc 3700-3707:3700-3707
Date/Time: June 27, 2006 2:33:18 PM CEST IBM DSCLI Version: 5.1.600.260 DS: IBM.2107-7520781
ID        State     Reason      Type         SourceLSS Timeout (secs) Critical Mode First Pass Status
=====================================================================================================
3700:3700 Suspended Host Source Metro Mirror 37        300            Disabled      Invalid
3701:3701 Suspended Host Source Metro Mirror 37        300            Disabled      Invalid
3702:3702 Suspended Host Source Metro Mirror 37        300            Disabled      Invalid
3703:3703 Suspended Host Source Metro Mirror 37        300            Disabled      Invalid
3704:3704 Suspended Host Source Metro Mirror 37        300            Disabled      Invalid
3705:3705 Suspended Host Source Metro Mirror 37        300            Disabled      Invalid
3706:3706 Suspended Host Source Metro Mirror 37        300            Disabled      Invalid
3707:3707 Suspended Host Source Metro Mirror 37        300            Disabled      Invalid

Figure 35-57 Status of secondary volumes after suspend and failover

5. IPL the remote partition from the secondary volumes. To perform this, the secondary volumes have to be connected to the standby partition. We recommend that the standby partition contain two Boot from SAN (BfS) IOPs, each of them connected to a copy of one of the mirrored load source mates. Make sure that one of the BfS IOPs is tagged as the load source in the System i5 HMC.


To IPL the remote partition perform the following steps from the System i5 HMC:
a. In the HMC navigation area expand Server and Partition and click Server management to bring up the Server and Partition: Server Management panel. In this panel expand Server, expand Partitions, and look for the remote partition icon.
b. Expand the view by clicking the plus sign next to the icon, then right-click the upper icon and select Activate from the pull-down menu. This is shown in Figure 35-58.

Figure 35-58 IPL partition

6. Once the remote partition is up and running, the applications can continue to run in the recovery partition.
7. When the planned outage is finished, perform a Metro Mirror failback to the primary volumes at the production site. Doing this ensures that the updates done at the remote partition during the outage are now replicated back to the production partition. You might want to do a Metro Mirror failback even before the outage is over, depending on the type of outage. The DS CLI script used for the failback and its execution are illustrated in Figure 35-59 on page 635 and Figure 35-60. Note that the failback is done on the secondary DS system.

#
# Residency Mainz - failback Metro Mirror
#
failbackPPRC -type mmir 3700-3707:3700-3707

Figure 35-59 Script for failback of Metro Mirror


C:\Program Files\ibm\dscli>dscli -cfg "c:\R2_dscli.profile" -user admin -script "c:\Failback.cli"
Date/Time: June 27, 2006 2:57:58 PM CEST IBM DSCLI Version: 5.1.600.260 DS: IBM.2107-7520781
CMUC00197I failbackpprc: Remote Mirror and Copy pair 3700:3700 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 3701:3701 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 3702:3702 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 3703:3703 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 3704:3704 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 3705:3705 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 3706:3706 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 3707:3707 successfully failed back.

Figure 35-60 Performing script for failback of Metro Mirror

Fail back to local partition


To switch back (fail back) to the local partition perform the following steps:
1. Power down the recovery partition.
2. Suspend the Metro Mirror on the secondary volumes. The relevant DS CLI script and the results of its execution are shown in Figure 35-52 on page 633 and Figure 35-61, respectively. This action is performed on the secondary DS system.
C:\Program Files\ibm\dscli>dscli -cfg "c:\R2_dscli.profile" -user admin -script "c:\PausePPRC.cli"
Date/Time: June 27, 2006 4:45:02 PM CEST IBM DSCLI Version: 5.1.600.260 DS: IBM.2107-7520781
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 3700:3700 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 3701:3701 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 3702:3702 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 3703:3703 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 3704:3704 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 3705:3705 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 3706:3706 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 3707:3707 relationship successfully paused.

Figure 35-61 Pause Metro Mirror on secondary DS

3. Fail over Metro Mirror to the primary volumes at the production site. The DS CLI script for this action and results of its execution are shown in Figure 35-54 on page 634 and Figure 35-62. This action is performed on the primary DS system.

C:\Program Files\ibm\dscli>dscli -cfg "c:\R1_dscli.profile" -user admin -script "c:\Failover.cli"
Date/Time: June 27, 2006 4:51:21 PM CEST IBM DSCLI Version: 5.1.600.260 DS: IBM.2107-7503461
CMUC00196I failoverpprc: Remote Mirror and Copy pair 3700:3700 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 3701:3701 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 3702:3702 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 3703:3703 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 3704:3704 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 3705:3705 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 3706:3706 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 3707:3707 successfully reversed.

Figure 35-62 Fail over to the primary DS

4. Fail back Metro Mirror from primary volumes to secondary volumes. The DS CLI script and result of its execution are shown in Figure 35-63. This is performed on the production DS system.


C:\Program Files\ibm\dscli>dscli -cfg "c:\R1_dscli.profile" -user admin -script "c:\Failback.cli"
Date/Time: June 27, 2006 4:59:02 PM CEST IBM DSCLI Version: 5.1.600.260 DS: IBM.2107-7503461
CMUC00197I failbackpprc: Remote Mirror and Copy pair 3700:3700 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 3701:3701 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 3702:3702 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 3703:3703 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 3704:3704 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 3705:3705 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 3706:3706 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 3707:3707 successfully failed back.

Figure 35-63 Failback on primary DS

5. IPL the production partition and start the applications at the production site. Make sure that the correct IOP for Boot from SAN is tagged for load source on the System i5 HMC, as described earlier in this section.

Switch to remote site at unplanned outages


In unplanned outage situations, such as a System i5 or DS storage system failure at the production site, perform the following steps to switch to the remote site (a minimal DS CLI sketch follows this list):
1. Perform a Metro Mirror failover to the secondary volumes at the remote site, as described earlier in this section.
2. Make sure that the secondary volumes are connected to the recovery partition, and that the correct BfS IOP is tagged as load source in the System i5 HMC at the recovery site. Perform an IPL of the recovery partition. Since the local partition was probably not powered down, the IPL at the recovery site will be abnormal. After the remote partition is IPLed, it becomes a clone of the local partition, and the production applications continue to run with this cloned system. For considerations regarding cloning, such as different serial numbers, i5/OS licenses, and others, refer to the IBM Redbooks publication iSeries and IBM TotalStorage: A Guide to Implementing External Disk on eServer i5, SG24-7120.
3. After the failure at the production site is fixed, perform a Metro Mirror failback to the primary volumes. If this is not possible, remove Metro Mirror on the secondary DS system and establish Metro Mirror from the secondary to the primary.
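The following is a minimal sketch of the DS CLI failover that step 1 refers to, run against the secondary DS system. The volume range is taken from the examples earlier in this section and is a placeholder for your own configuration.

# Sketch only - unplanned outage: make the Metro Mirror secondary volumes usable at the remote site
# (run against the secondary DS system, for example with the R2_dscli.profile used in this chapter)
failoverpprc -type mmir 3700-3707:3700-3707
# optionally verify the state of the pairs afterwards
lspprc 3700-3707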

35.6 Global Mirror for the entire disk space


In this section we look at doing a Global Mirror of the entire local partition disk space. This solution provides disaster recovery with a recovery site located at a long distance, while minimizing the impact on production performance. For more information about Global Mirror refer to Part 6, Global Mirror on page 301.

35.6.1 Solution description


In this solution, the entire local partition disk space resides on external storage. The load source is also external and connected to the System i5 via Boot from SAN (BfS). A Global Mirror relationship for all volumes belonging to the production partition is established with another DS storage system located at the remote site. An important consideration here is the balance between the bandwidth to be provided for Global Mirror and the recovery point objective (RPO). It is essential to understand this trade-off and set the right expectations when planning for this solution.


For more information about System i5 Boot from SAN refer to the Redbooks publication IBM System Storage DS6000 Series: Architecture and Implementation, SG24-6781. In planned or unplanned outage situations, a standby partition in the System i5 at the remote site is IPLed from the Global Mirror secondary volumes. The Global Mirror copy of the load source is also connected to the standby partition with Boot from SAN. To ensure that the data to recover from is as up-to-date as possible, the solution includes FlashCopy volumes of the Global Mirror targets: a fast restore from the FlashCopy targets of the Global Mirror secondary volumes is performed before the IPL. The solution is depicted in Figure 35-64.

Figure 35-64 Global Mirror of entire disk space

35.6.2 Solution benefits


Some of the benefits of establishing a Global Mirror for the entire disk space include:
- The solution provides replication of production data over long distances while minimizing the impact on production performance.
- The solution offers the possibility to balance RPO against bandwidth. If you can cope with a longer RPO, you can use relatively narrow bandwidth for the links between the two sites. On the other hand, if you have a large bandwidth, the RPO will be very small and the data loss at recovery will be almost negligible.
- The considerations about the impact on System i5 production performance that apply to Metro Mirror of the entire disk space do not apply to Global Mirror of the entire disk space. Therefore, this solution is recommended for System i5 customers. For more information about the Metro Mirror impact on System i5 performance refer to Considerations on page 629.
- The solution does not require any special maintenance on the production or standby partition. Practically, the only required task is to set up Global Mirror for the entire disk space.
- Since Global Mirror is entirely driven by the DS storage systems, this solution does not use any processor or memory resources of the System i5 production and remote partitions. This is different from other i5/OS replication solutions, which use some of the production and recovery partition resources.


35.6.3 Planning and requirements


Take the following points into account when planning for Global Mirror of an entire disk space:
- It is very important to carefully size the bandwidth of the connection links between the production and recovery sites. Proper sizing of the bandwidth can only be done based on System i5 performance data. The sizing guidelines and the instructions on how to collect System i5 performance data given in Planning and requirements on page 624 are also valid for Global Mirror of the entire disk space, with one difference: when sizing Global Mirror for the entire disk space you take into account all reported writes/sec; you do not apply the percentage of writes that go to the IASP.
- You must also properly size the primary and secondary DS systems.
- Since Boot from SAN is used in this solution, the hardware and software requirements for Boot from SAN need to be taken into consideration. We strongly recommend that you use two external load sources mirrored by i5/OS mirroring, each of them connected via BfS.
For more information about the preceding topics, refer to the Redbooks publication iSeries and IBM TotalStorage: A Guide to Implementing External Disk on eServer i5, SG24-7120.

35.6.4 Considerations
Other considerations for this solution are:
- Recovery from the Global Mirror secondary volumes in an unplanned outage situation requires additional steps compared to the recovery from Metro Mirror. The extra steps are required to ensure minimal data loss. In environments with a large capacity, and thus a large number of volumes, it is extremely difficult to perform these steps by only issuing DS CLI commands, and the use of an automation tool is recommended.
- We also recommend using i5/OS journaling with this solution. For more information about journaling, refer to Considerations on page 629.
Important: When using Copy Services functions such as Metro Mirror, Global Mirror, or FlashCopy for the replication of the load source unit or other i5/OS disk units within the same DS6000/DS8000, or between two or more DS6000 and DS8000 systems, the source volume and the target volume characteristics must be identical. The target and source must be of matching capacities and matching protection types. Also, once a volume is assigned to an i5/OS partition and added to that partition's configuration, its characteristics must not be changed. If there is a requirement to change some characteristic of a configured volume, it must first be completely removed from the i5/OS configuration. After the characteristic changes are made, for example, protection type, capacity, and so on, by destroying and recreating the volume or by utilizing the DS CLI, the volume can be reassigned to the i5/OS configuration. To simplify the configuration, we recommend a symmetrical configuration between the two IBM storage systems, creating the same volumes with the same volume IDs (the volume ID determines the LSS ID).


35.6.5 Implementation and usage


Customers who want to implement Global Mirror for the entire System i disk space can use the System i Copy Services Toolkit available for purchase from the System i Client Technology Center in Rochester. The toolkit is described in Global Mirror for an IASP on page 622. Once the Toolkit is installed, you can do a switchover to the remote site during planned and unplanned outages by using the Toolkit commands. Still, for smaller installations with only a few volumes, it is feasible to perform the switchover to the remote site by simply using the DS CLI commands. We describe the steps to perform for both situations: either using Toolkit or using DS CLI. For information about how to set up Global Mirror refer to Part 6, Global Mirror on page 301.
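As an orientation for the DS CLI approach, the following is a minimal sketch of a Global Mirror setup for the volume range used in this section. The I/O port IDs, the remote WWNN, the FlashCopy target range 3900-3907, and session number 01 are assumptions for illustration only; refer to Part 6, Global Mirror on page 301 for the complete procedure.

# Sketch only - establish Global Mirror for production volumes 3700-3707 (values are placeholders)
# 1. Paths from local LSS 37 to remote LSS 37
mkpprcpath -remotedev IBM.2107-7520781 -remotewwnn 5005076303FFC663 -srclss 37 -tgtlss 37 I0001:I0101
# 2. Global Copy pairs from the local to the remote DS
mkpprc -remotedev IBM.2107-7520781 -type gcp 3700-3707:3700-3707
# 3. On the remote DS: FlashCopy of the Global Copy targets with change recording
mkflash -record -persist -nocp -tgtinhibit 3700-3707:3900-3907
# 4. On the local DS: define the Global Mirror session and start it
mksession -lss 37 -volume 3700-3707 01
mkgmir -lss 37 -session 01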

Switch to the remote site at planned outages


First we describe the sequence of steps when using the Toolkit and then using the DS CLI.

Using the Toolkit


When using the Toolkit, follow these steps:
1. Power down the System i5 local partition to ensure that all data are written to disk and that the IPL of the remote partition will be normal. For more information about System i5 disk space and IPL refer to Single-level storage on page 590 and Switch to remote site at planned outages on page 625.
2. Use the toolkit command swpprc to fail over to the remote partition.
3. Use the toolkit command chghostvol to connect the Global Mirror secondary volumes to the remote partition. With this command you activate DS CLI scripts to connect the volumes to FC adapters in the System i5 remote partition.
4. Check that the external load source is tagged in the remote System i5 HMC and perform an IPL of the remote partition. For information about how to achieve this refer to Switch to remote site at planned outages on page 632.
If customers want all the steps listed to be performed automatically by one toolkit command, they can request this as an additional service with the toolkit services.

Using DS CLI
Note: The examples shown next were done on a DS8000, using LSS numbers 37 and 39. When performing these actions on a DS6000, another LSS number will be used since the DS6000 supports only LSS numbers 00 to 1F. Otherwise, the DS CLI scripts used on the DS6000 are the same as shown in these examples.
Perform the following steps to switch to the remote site:
1. Power down the System i5 local partition by issuing the command pwrdwnsys in a Telnet session. For more information about how to use IBM personal communication to Telnet to the System i5, refer to the iSeries Information Center located at the following Web page:
http://publib.boulder.ibm.com/infocenter/iseries/v5r4/index.jsp


2. Pause Global Mirror on the local DS storage system, as shown in Figure 35-65.

dscli> pausegmir -lss 37 -session 01
Date/Time: July 4, 2006 3:45:01 PM CEST IBM DSCLI Version: 5.1.600.260 DS: IBM.2107-7503461
CMUC00163I pausegmir: Global Mirror for session 01 successfully paused.
Figure 35-65 Pause GM

3. Pause Global Copy on the local system, as shown in Figure 35-66.


dscli> pausepprc 3700-3707:3700-3707
Date/Time: July 4, 2006 3:51:53 PM CEST IBM DSCLI Version: 5.1.600.260 DS: IBM.2107-7503461
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 3700:3700 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 3701:3701 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 3702:3702 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 3703:3703 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 3704:3704 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 3705:3705 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 3706:3706 relationship successfully paused.
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 3707:3707 relationship successfully paused.

Figure 35-66 Pause Global Copy

4. On the remote DS, perform a failover of Global Copy, as shown in Figure 35-67.


dscli> failoverpprc -type gcp 3700-3707:3700-3707
Date/Time: July 4, 2006 3:56:16 PM CEST IBM DSCLI Version: 5.1.600.260 DS: IBM.2107-7520781
CMUC00196I failoverpprc: Remote Mirror and Copy pair 3700:3700 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 3701:3701 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 3702:3702 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 3703:3703 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 3704:3704 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 3705:3705 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 3706:3706 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 3707:3707 successfully reversed.

Figure 35-67 Failover Global Copy

5. In planned outage situations, power down the local partition and suspend Global Mirror before the failover. This eliminates the need to perform a fast restore of the FlashCopy.
6. IPL the remote partition from the Global Mirror secondary volumes using Boot from SAN. For instructions on how to do this refer to Switch to remote site at planned outages on page 632. Once the partition is IPLed, it will be a clone of the local partition, and production work can continue in the cloned partition at the remote site.

Fail back to local site at planned outages


To do a failback to the local site at planned outages, use the steps described next. We describe the steps when using the toolkit and when using DS CLI successively.

Using the toolkit


The Toolkit provides DS CLI scripts for failback to the local site after a planned outage. The scripts reside at the remote partition.


Using DS CLI
To use the DS CLI, follow these steps:
1. On the remote DS storage system, perform a failback, as shown in Figure 35-68.
dscli> failbackpprc -type gcp 3700-3707:3700-3707
Date/Time: July 4, 2006 4:11:58 PM CEST IBM DSCLI Version: 5.1.600.260 DS: IBM.2107-7520781
CMUC00197I failbackpprc: Remote Mirror and Copy pair 3700:3700 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 3701:3701 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 3702:3702 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 3703:3703 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 3704:3704 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 3705:3705 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 3706:3706 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 3707:3707 successfully failed back.

Figure 35-68 Failback Global Copy

2. On the local DS storage system, perform a failover, as shown in Figure 35-69.


dscli> failoverpprc -type gcp 3700-3707:3700-3707
Date/Time: July 4, 2006 4:15:51 PM CEST IBM DSCLI Version: 5.1.600.260 DS: IBM.2107-7503461
CMUC00196I failoverpprc: Remote Mirror and Copy pair 3700:3700 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 3701:3701 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 3702:3702 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 3703:3703 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 3704:3704 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 3705:3705 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 3706:3706 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 3707:3707 successfully reversed.

Figure 35-69 Fail over to local site

3. IPL the local partition.
4. Perform a failback to the remote site, as shown in Figure 35-70.
dscli> failbackpprc -type gcp 3700-3707:3700-3707
Date/Time: July 4, 2006 4:18:47 PM CEST IBM DSCLI Version: 5.1.600.260 DS: IBM.2107-7503461
CMUC00197I failbackpprc: Remote Mirror and Copy pair 3700:3700 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 3701:3701 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 3702:3702 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 3703:3703 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 3704:3704 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 3705:3705 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 3706:3706 successfully failed back.
CMUC00197I failbackpprc: Remote Mirror and Copy pair 3707:3707 successfully failed back.

Figure 35-70 Failback Global Copy

5. On the local DS, resume Global Mirror, as shown in Figure 35-71.

dscli> resumegmir -lss 37 -session 01
Date/Time: July 4, 2006 4:21:12 PM CEST IBM DSCLI Version: 5.1.600.260 DS: IBM.2107-7503461
CMUC00164I resumegmir: Global Mirror for session 01 successfully resumed.
Figure 35-71 Resume Global Mirror


Switch to the remote site at unplanned outages


To switch to the remote site at unplanned outages, use the steps described next. We describe the steps when using the Toolkit and when using the DS CLI.

Using the Toolkit


To switch over to the remote site at unplanned outages, perform the following steps:
1. Use the toolkit command falovrgmir to fail over to the remote partition. During this command execution, the following actions are taken:
a. Run the command chkpprc to check for the correct status of the Global Mirror.
b. Fail over Global Copy to the remote site.
c. Check the revertible status of the volumes at the remote site and programmatically create the correct scripts to revert the FlashCopy on the remote volumes.
d. Reverse the FlashCopy on the remote volumes.
2. Use the Toolkit command chghostvol to connect the Global Mirror secondary volumes to the remote partition. With this command you activate DS CLI scripts to connect the volumes to FC adapters in the remote partition.
3. Check that the external load source is tagged at the remote System i5 HMC and perform an IPL of the remote partition. For information about how to achieve this, refer to Switch to remote site at planned outages on page 632.
If customers want all the steps listed to be performed automatically by one toolkit command, they can request this as an additional service with the Toolkit services.

Using the DS CLI


To use the DS CLI, follow these steps:
1. After a failure of the local System i5 or the DS storage system, suspend or stop Global Mirror on the local DS, if possible. We recommend checking the status of Global Copy and the paths as well.
2. Fail over Global Copy on the remote DS, as shown in Figure 35-72.
dscli> failoverpprc -type gcp 3700-3707:3700-3707
Date/Time: July 4, 2006 4:45:37 PM CEST IBM DSCLI Version: 5.1.600.260 DS: IBM.2107-7520781
CMUC00196I failoverpprc: Remote Mirror and Copy pair 3700:3700 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 3701:3701 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 3702:3702 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 3703:3703 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 3704:3704 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 3705:3705 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 3706:3706 successfully reversed.
CMUC00196I failoverpprc: Remote Mirror and Copy pair 3707:3707 successfully reversed.

Figure 35-72 Failover Global Copy


3. List FlashCopy volumes to see the status of sequences and revertible volumes. The DS CLI command and partial output are shown in Figure 35-73.
dscli> lsflash -l -fmt default 3700-3707:3900-3907
Date/Time: July 4, 2006 4:50:49 PM CEST IBM DSCLI Version: 5.1.600.260 DS: IBM.2107-7520781
ID        SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled
=====================================================================================================================
3700:3900 37     44AA7EA4    300     Disabled   Enabled   Enabled    Disabled   Enabled            Disabled
3701:3901 37     44AA7EA4    300     Disabled   Enabled   Enabled    Disabled   Enabled            Disabled
3702:3902 37     44AA7EA4    300     Disabled   Enabled   Enabled    Disabled   Enabled            Disabled
3703:3903 37     44AA7EA4    300     Disabled   Enabled   Enabled    Disabled   Enabled            Disabled
3704:3904 37     44AA7EA4    300     Disabled   Enabled   Enabled    Disabled   Enabled            Disabled
3705:3905 37     44AA7EA4    300     Disabled   Enabled   Enabled    Disabled   Enabled            Disabled
3706:3906 37     44AA7EA4    300     Disabled   Enabled   Enabled    Disabled   Enabled            Disabled
3707:3907 37     44AA7EA4    300     Disabled   Enabled   Enabled    Disabled   Enabled            Disabled
dscli>

Figure 35-73 LsFlash at GM unplanned outages

Examine the sequence numbers and the revertible status of the volumes, and consequently decide whether to revert the FlashCopy, commit the FlashCopy, or take no action. For more information refer to Chapter 22, Global Mirror options and configuration on page 325. In our example all sequence numbers are equal and we do not have revertible volumes, so we do not perform any action here.
4. Reverse the FlashCopy on the remote DS, as shown in Figure 35-74.
dscli> reverseflash -fast -tgtpprc 3700-3707:3900-3907
Date/Time: July 4, 2006 5:16:19 PM CEST IBM DSCLI Version: 5.1.600.260 DS: IBM.2107-7520781
CMUC00169I reverseflash: FlashCopy volume pair 3700:3900 successfully reversed.
CMUC00169I reverseflash: FlashCopy volume pair 3701:3901 successfully reversed.
CMUC00169I reverseflash: FlashCopy volume pair 3702:3902 successfully reversed.
CMUC00169I reverseflash: FlashCopy volume pair 3703:3903 successfully reversed.
CMUC00169I reverseflash: FlashCopy volume pair 3704:3904 successfully reversed.
CMUC00169I reverseflash: FlashCopy volume pair 3705:3905 successfully reversed.
CMUC00169I reverseflash: FlashCopy volume pair 3706:3906 successfully reversed.
CMUC00169I reverseflash: FlashCopy volume pair 3707:3907 successfully reversed.

Figure 35-74 Reverse FlashCopy

5. IPL the remote partition from the Global Mirror secondary volumes by using BfS. Once IPL is done, the production application can continue to run in the clone of the local partition.

35.7 FlashCopy of IASP


The downtime needed to save application data to tape is critical for many System i5 installations because saving to tape is usually a part of typical batch jobs running in those environments. For many clients, this downtime window is generally very limited. The solution described in this section is based on FlashCopy and ensures minimal downtime for saving application data to tape. For more information about FlashCopy, refer to Part 3, FlashCopy on page 77. For more information about IASP refer to Independent Auxiliary Storage Pools (IASPs) on page 593.


35.7.1 Solution description


The setup for this solution consists of two System i5 partitions. Usually both partitions are defined in the same physical System i5, but they could also reside in two separate units. The two partitions are grouped in a cluster. We call the partition where the production applications run the production partition. The other partition is called the backup partition; typically, clients run testing or development workloads in the backup partition. Both partitions are connected to the same DS system. The production application runs in an IASP that resides on the DS system and contains the FlashCopy primary volumes.
To take a backup of the application using the backup partition, follow these steps:
1. Perform a FlashCopy of the IASP.
2. Vary on the IASP FlashCopy targets in the backup partition.
3. Use the backup partition to save the data to tape without impacting the production partition.
During the backup operation, the production application continues to run in the production partition. We recommend varying off the IASP in the production partition just before taking the FlashCopy. Doing so minimizes the time needed to bring up the application in the backup partition. The solution is depicted in Figure 35-75.

Figure 35-75 FlashCopy of an IASP

The solution is implemented through the System i Copy Services Toolkit.


Note: For information about the Toolkit contact the Client Technology Center via the following Internet page: http://www.ibm.com/servers/eserver/services/iseriesservices.html

35.7.2 Solution benefits


The most important benefits are as follows:
- The production application downtime is only as long as it takes to vary off the IASP, perform a FlashCopy of the volumes in the IASP, and vary the IASP back on. Usually this time is estimated to be about 30 minutes. This is very little compared to the downtime normally experienced when saving to tape without a Save While Active function. For more information about Save While Active refer to the iSeries Information Center on the following Web page:
http://publib.boulder.ibm.com/iseries
- The performance impact on the production application during the save to tape operation is minimal, since it is only influenced by the FlashCopy activity, which is mostly confined within the DS system. Furthermore, this FlashCopy activity is typically very low since save to tape operations are typically performed at night or during light workload periods.
- This solution can be implemented together with Backup, Recovery and Media Services for iSeries (BRMS), a System i5 software product for saving application data to tape. For more information about BRMS refer to the iSeries Information Center at the following Web page:
http://publib.boulder.ibm.com/iseries/

35.7.3 Planning and requirements


The following points should be taken into account when planning for this solution:
- An important requirement is the need to run the application in an IASP. If you do not have your applications in IASPs yet, you must do so before installing the System i Copy Services Toolkit (you can also engage IBM services to set up your application for IASP).
- A solution that uses the System i Copy Services Toolkit must be approved by the Client Technology Center.
- Each IASP has its own Fibre Channel attachment cards. This is valid for both System i5 and earlier System i models like 8xx and 270.
- A FlashCopy license must be purchased for the DS or ESS system using this solution.
- For System i5 software prerequisites, the i5/OS software for clustering (5722-SS1 option 41) must be installed in both the production and recovery partitions. This prerequisite applies to each System i5 and earlier System i models like 8xx and 270.

35.7.4 Considerations
This solution requires you to vary off the production IASP every time a FlashCopy of the IASP is taken. Some clients prefer not to vary off the IASP before the FlashCopy, and therefore might experience a longer time to vary on the IASP in the backup partition. Other considerations listed under Considerations on page 598 for Metro Mirror of an IASP also apply here.


Important: When using Copy Services functions such as Metro Mirror, Global Mirror, or FlashCopy for the replication of the load source unit or other i5/OS disk units within the same DS6000/DS8000, or between two or more DS6000 and DS8000 systems, the source volume and the target volume characteristics must be identical. The target and source must be of matching capacities and matching protection types. Also, once a volume is assigned to an i5/OS partition and added to that partition's configuration, its characteristics must not be changed. If there is a requirement to change some characteristic of a configured volume, it must first be completely removed from the i5/OS configuration. After the characteristic changes are made, for example, protection type, capacity, and so on, by destroying and recreating the volume or by utilizing the DS CLI, the volume can be reassigned to the i5/OS configuration. To simplify the configuration, we recommend a symmetrical configuration between the two IBM storage systems, creating the same volumes with the same volume IDs (the volume ID determines the LSS ID).

35.7.5 Using Metro Mirror and FlashCopy of IASP in the same scenario
Many clients like to combine their disaster recovery solution with minimizing their backup window. In this case, they implement Metro Mirror or Global Mirror of an IASP along with FlashCopy of the IASP. Some might decide to use FlashCopy on the local DS, while others might decide to use FlashCopy on the remote DS and implement saving to tape at the remote site. The following should be taken into account when planning and implementing such combined solutions:
- The System i Copy Services Toolkit for both Metro Mirror (or Global Mirror) and FlashCopy should be purchased.
- If FlashCopy is implemented at the remote site, the solution typically needs two remote System i5 partitions: one for recovery from the Metro Mirror (or Global Mirror) targets and one for FlashCopy. Each of them requires dedicated FC adapters.
- Regardless of whether FlashCopy is implemented at the local or remote site, it requires that the production IASP be varied off before taking the FlashCopy. For more information refer to Considerations on page 646.

35.7.6 Implementation and usage


The solution is implemented through the System i Copy Services Toolkit. Installation of the toolkit code, creation of the DS CLI scripts, and testing are performed as part of the toolkit-provided services. The toolkit installation steps are described in Implementation and usage on page 599. Once the toolkit is installed and tested, the following actions are performed to check the solution components and perform a save to tape in the backup partition.

Checking solution components


You might want to check the following solution components:
- IASP status
- FlashCopy status
- Toolkit setup
- DS CLI scripts


Refer to Checking the solution components on page 600. To check the FlashCopy status use the command dspessdta.

Prepare for save


To prepare the partition for taking a backup (that is, saving to tape the production application data), use the toolkit command prpiaspbck, as described next. In a Telnet session with the System i5 partition, enter the command prpiaspbck. You are presented with a screen where you can insert the IASP name, as shown in Figure 35-76. Press Enter to start executing the command.

Prepare for IASP Backup (PRPIASPBCK)
Type choices, press Enter.
Independent ASP name . . . . . .   ds6000        Name

                                                                        Bottom
F3=Exit   F4=Prompt   F5=Refresh   F12=Cancel   F13=How to use this display
F24=More keys
Parameter IASPNAME required.

Figure 35-76 PRPIASPBCK

While executing the prpiaspbck command, the toolkit performs the following actions:
- Vary off the production IASP.
- Make a FlashCopy of the volumes in the DS system by running a DS CLI script in the backup partition (a sketch of such a script is shown after this list).
- Add the IOPs and FC adapters that attach the FlashCopy of the production IASP volumes to the backup partition. For more information about IOPs refer to Input Output Processors on page 590.
- Vary on the production IASP.
- Vary on the FlashCopy of the production IASP in the backup partition.
After completion of the prpiaspbck command, check the IASP status in the backup partition. For information about how to check this refer to Checking the solution components on page 600.
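As an illustration of the FlashCopy step performed by the toolkit script, a minimal DS CLI sketch could look as follows. The volume IDs are hypothetical placeholders for the IASP volumes and their FlashCopy targets; the actual script is generated and tested as part of the toolkit services.

# Sketch only - FlashCopy of the IASP volumes (volume IDs are placeholders)
mkflash -nocp 3800-3807:3A00-3A07
# verify the relationships
lsflash 3800-3807:3A00-3A07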

Saving to tape
At this stage the backup partition is ready for taking a backup of the production application based on the copy (FlashCopy) of the production IASP. The actual backup can be done with BRMS or by using i5/OS commands. For more information about System i5 backups refer to the iSeries Information Center at:
http://publib.boulder.ibm.com/iseries
You can also consult the manual IBM Systems - iSeries Backup and Recovery Version 5 Revision 4, SC41-5304-08.

For more information about BRMS refer to the iSeries Information Center and the manual IBM Systems - iSeries Backup Recovery and Media Services for iSeries Version 5, SC41-5345-05. Note that if BRMS is used to take backups in this solution, the BRMS systems in the production and backup partitions must be in a BRMS network. Backups taken in the backup partition can be restored to the production partition. Single System i5 objects from these backups can also be restored to the production partition, if needed.

Cleaning up the backup partition after a backup


After a backup is finished, certain cleanup activities are required in the backup partition to make sure that it is in an appropriate state for the next backup operation. These activities can be controlled and executed by the toolkit by issuing the command rsmiaspbck in the backup partition. In a Telnet session established with the System i5 partition, enter the command rsmiaspbck. You are presented with a screen where you can insert the IASP name, as shown in Figure 35-77. Press Enter to start executing the command.
MAIN                           i5/OS Main Menu                 System: ITCHA3
Select one of the following:
 1. User tasks
 2. Office tasks
 3. General system tasks
 4. Files, libraries, and folders
 5. Programming
 6. Communications
 7. Define or change the system
 8. Problem handling
 9. Display a menu
10. Information Assistant options
11. iSeries Access tasks

90. Sign off

Selection or command
===> rsmiaspbck

F3=Exit   F4=Prompt   F9=Retrieve   F12=Cancel   F13=Information Assistant
F23=Set initial menu

Figure 35-77 Executing the command rsmiaspbck

While the rsmiaspbck command executes, the following actions are performed by the toolkit:
- Vary off the FlashCopy of the production IASP, which is attached to the backup node.
- Remove the FlashCopy for the volumes in the DS unit by running the DS CLI rmflash script.
- IPL the backup node.
Once rsmiaspbck has completed its execution, the backup partition is in the appropriate state for the next backup through the prpiaspbck command.


35.8 FlashCopy of the entire disk space


In this solution, FlashCopy is implemented for the entire disk space attached to a System i5 partition. This solution minimizes the production partition downtime for taking backups. For more information about FlashCopy refer to Chapter 7, FlashCopy overview on page 79.

35.8.1 Solution description


The setup for this solution consists of two System i5 partitions. Usually both partitions reside on the same physical System i5, but they could also be on different units. We call the partition where the production applications run the production partition. The other partition is called the backup partition. The backup partition is on standby and typically does not run any workload. All disks attached to the production partition reside on the external DS storage system and Boot from SAN is implemented. The standby backup partition also contains a Boot from SAN IOP. For more information about Boot from SAN, refer to the Redbooks publication IBM System Storage DS6000 Series: Architecture and Implementation, SG24-6781.
To take a backup, perform the following steps:
1. Power down the production partition to ensure that all data is flushed out to disk and that a subsequent IPL in the backup partition will be normal.
2. Do a FlashCopy of all production volumes.
3. IPL the production partition.
4. IPL the backup partition off the FlashCopy target volumes by using BfS.
After being IPLed, the backup partition is a clone of the production partition. Any backup taken in the backup partition will reflect actual and current production data, and later it will be possible to restore this backup to the production partition. For more information about the previous considerations, refer to the Redbooks publication iSeries and IBM TotalStorage: A Guide to Implementing External Disk on eServer i5, SG24-7120.


The solution is shown in Figure 35-78.

Figure 35-78 FlashCopy of entire System i5 disk space

35.8.2 Solution benefits


This solution offers the following major benefits:
- The production application downtime is only as long as it takes to power down the production partition, take a FlashCopy of the production volumes, and IPL the production partition (the IPL is normal). Usually, this time is much shorter than the downtime experienced when saving to tape without a Save While Active function. For more information about Save While Active refer to the iSeries Information Center on the following Web page:
http://publib.boulder.ibm.com/iseries/
- The performance impact on the production application during the save to tape operation is minimal, since it is only influenced by the FlashCopy activity, which is mostly confined within the DS system. Furthermore, this FlashCopy activity is typically very low since save to tape operations are normally performed at night or during light workload periods.
- This solution can be implemented together with Backup, Recovery, and Media Services for iSeries (BRMS), a System i5 software product for saving application data to tape. For more information about BRMS refer to the iSeries Information Center at the following Web page:
http://publib.boulder.ibm.com/iseries/


35.8.3 Planning and requirements


It is important to adequately size the primary and secondary DS systems. Also, since Boot from SAN is used in this solution, the hardware and software requirements for Boot from SAN need to be taken into consideration for both the production and backup partitions. For guidelines on how to size external storage for System i5, as well as more information about the requirements for BfS, refer to the Redbooks publication iSeries and IBM TotalStorage: A Guide to Implementing External Disk on eServer i5, SG24-7120.

35.8.4 Considerations
This solution requires powering down the production partition each time a FlashCopy is taken, and then IPLing the partition after completion of the FlashCopy. This procedure is strongly recommended in order to ensure a correct IPL of the backup partition later in the process. Some installations cannot tolerate a daily IPL of their i5/OS production system, and consequently cannot stick to the recommended procedure. In this case we suggest that they implement an IASP and use the solution described in FlashCopy of IASP on page 644.
Important: When using Copy Services functions such as Metro Mirror, Global Mirror, or FlashCopy for the replication of the load source unit or other i5/OS disk units within the same DS6000/DS8000, or between two or more DS6000 and DS8000 systems, the source volume and the target volume characteristics must be identical. The target and source must be of matching capacities and matching protection types. Also, once a volume is assigned to an i5/OS partition and added to that partition's configuration, its characteristics must not be changed. If there is a requirement to change some characteristic of a configured volume, it must first be completely removed from the i5/OS configuration. After the characteristic changes are made, for example, protection type, capacity, and so on, by destroying and recreating the volume or by utilizing the DS CLI, the volume can be reassigned to the i5/OS configuration. To simplify the configuration, we recommend a symmetrical configuration between the two IBM storage systems, creating the same volumes with the same volume IDs (the volume ID determines the LSS ID).

35.8.5 Implementation and usage


In this section we describe how to implement FlashCopy of the entire disk space and use it for taking backups of the production partition.

Implementation
Since the implementation of this solution implies several System i5 related tasks, we recommend involving a System i5 specialist. The solution is implemented through the following steps (a DS CLI sketch of step 2 follows this list):
1. In the production and backup partitions, install FC adapters and IOPs as needed to implement Boot from SAN.
2. On the DS system, define logical volumes for the System i5 and connect them to the System i5.
3. Install i5/OS in the local partition or migrate an existing i5/OS to external disk. Restore and start the System i5 production applications.
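As an orientation for step 2, the following is a minimal DS CLI sketch. The extent pool, volume group, volume range, and the -os400 model value are hypothetical placeholders; choose the model that matches the capacity and protection type required for your i5/OS volumes.

# Sketch only - create i5/OS volumes and assign them to an existing volume group (values are placeholders)
mkfbvol -extpool P4 -os400 A05 -volgrp V5 3700-3707
# verify the new volumes
lsfbvol 3700-3707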


For more information about the preceding topics refer to the IBM Redbooks publications iSeries and IBM TotalStorage: A Guide to Implementing External Disk on eServer i5, SG24-7120, and IBM System Storage DS6000 Series: Architecture and Implementation, SG24-6781.

Check Boot from SAN tagging


Check that the correct IOPs or FC adapters are tagged as load source devices, in each of the production and backup partitions.

Cloning the production partition to a backup partition


In the following example, we use FlashCopy with the nocopy option. Note that if BRMS is used for saving to tape from the backup partition, some additional steps are required to ensure that the backups will restore correctly in the production partition. These steps are described in the Redbooks publication iSeries and IBM TotalStorage: A Guide to Implementing External Disk on eServer i5, SG24-7120. To clone the production partition, proceed with the following steps.
Note: The examples shown here were done on a DS8000, using LSS numbers 37 and 39. When performing these actions on a DS6000, another LSS number will be used since the DS6000 supports only LSS numbers 00 to 1F. However, the DS CLI scripts used on the DS6000 are the same as those shown in these examples.
The steps are:
1. Power down the production partition by issuing the command pwrdwnsys in a Telnet session. For more information about how to use IBM personal communication to Telnet to a System i5, refer to the iSeries Information Center on the following Web page:
http://publib.boulder.ibm.com/infocenter/iseries/v5r4/index.jsp
2. Perform a FlashCopy of the production volumes. To do this you can either run a DS CLI script in a Windows environment or directly execute the DS CLI script on the System i5. When run from the System i5, the script must reside in a third, separate partition (that is, not the production partition or the backup partition). A script for making the FlashCopy and the command to invoke it in Windows, as well as the output of the command, are shown in Example 35-1 and Example 35-2.
Example 35-1 Script for making FlashCopy

#
# - mkflash
#
mkflash -nocp 3700-3707:3900-3907


Example 35-2 Make FlashCopy
C:\Program Files\ibm\dscli>dscli -cfg "c:\R2_dscli.profile" -script "c:\R2_mkflash.cli" -user admin
Date/Time: July 4, 2006 10:44:16 AM CEST IBM DSCLI Version: 5.1.600.260 DS: IBM.2107-7520781
CMUC00137I mkflash: FlashCopy pair 3700:3900 successfully created.
CMUC00137I mkflash: FlashCopy pair 3701:3901 successfully created.
CMUC00137I mkflash: FlashCopy pair 3702:3902 successfully created.
CMUC00137I mkflash: FlashCopy pair 3703:3903 successfully created.
CMUC00137I mkflash: FlashCopy pair 3704:3904 successfully created.


CMUC00137I mkflash: FlashCopy pair 3705:3905 successfully created.
CMUC00137I mkflash: FlashCopy pair 3706:3906 successfully created.
CMUC00137I mkflash: FlashCopy pair 3707:3907 successfully created.

3. After the FlashCopy is performed, you might want to list the FlashCopy relationships of the production volumes. The DS CLI script to perform this and the command to invoke the script in Windows, as well as the output of the command, are shown in Example 35-3 and Example 35-4.
Example 35-3 Script for list FlashCopy

# lsflash
#
lsflash 3700-3707:3900-3907


Example 35-4 lsflash
C:\Program Files\ibm\dscli>dscli -cfg "c:\R2_dscli.profile" -script "c:\R2_lsflsh.cli" -user admin
Date/Time: July 4, 2006 10:52:39 AM CEST IBM DSCLI Version: 5.1.600.260 DS: IBM.2107-7520781
ID        SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled BackgroundCopy
====================================================================================================================================
3700:3900 37     0           300     Disabled   Disabled  Disabled   Disabled   Enabled            Enabled            Disabled
3701:3901 37     0           300     Disabled   Disabled  Disabled   Disabled   Enabled            Enabled            Disabled
3702:3902 37     0           300     Disabled   Disabled  Disabled   Disabled   Enabled            Enabled            Disabled
3703:3903 37     0           300     Disabled   Disabled  Disabled   Disabled   Enabled            Enabled            Disabled
3704:3904 37     0           300     Disabled   Disabled  Disabled   Disabled   Enabled            Enabled            Disabled
3705:3905 37     0           300     Disabled   Disabled  Disabled   Disabled   Enabled            Enabled            Disabled
3706:3906 37     0           300     Disabled   Disabled  Disabled   Disabled   Enabled            Enabled            Disabled
3707:3907 37     0           300     Disabled   Disabled  Disabled   Disabled   Enabled            Enabled            Disabled

4. IPL the production partition and start the production applications. For instructions on how to do this refer to Switch to remote site at planned outages on page 632.
5. Next, IPL the backup partition from the FlashCopy target volumes, via BfS. Once the partition is IPLed, save the entire system or specific objects to tape using BRMS or native i5/OS commands. For more information about System i5 backups, refer to the iSeries Information Center on the following Web page:
http://publib.boulder.ibm.com/iseries
You can also consult the manual IBM Systems - iSeries Backup and Recovery Version 5 Revision 4, SC41-5304-08.


6. Once the backup is taken, remove the FlashCopy relationship. The DS CLI script to perform this and the command to invoke the script in Windows, as well as the output of the command, are shown in Example 35-5 and Example 35-6.
Example 35-5 Script for rmflash

# rmflash
#
rmflash -quiet 3700-3707:3900-3907


Example 35-6 Remove FlashCopy
C:\Program Files\ibm\dscli>dscli -cfg "c:\R2_dscli.profile" -script "c:\R2_rmflash.cli" -user admin
Date/Time: July 4, 2006 12:48:34 PM CEST IBM DSCLI Version: 5.1.600.260 DS: IBM.2107-7520781
CMUC00140I rmflash: FlashCopy pair 3700:3900 successfully removed.
CMUC00140I rmflash: FlashCopy pair 3701:3901 successfully removed.
CMUC00140I rmflash: FlashCopy pair 3702:3902 successfully removed.
CMUC00140I rmflash: FlashCopy pair 3703:3903 successfully removed.
CMUC00140I rmflash: FlashCopy pair 3704:3904 successfully removed.
CMUC00140I rmflash: FlashCopy pair 3705:3905 successfully removed.
CMUC00140I rmflash: FlashCopy pair 3706:3906 successfully removed.
CMUC00140I rmflash: FlashCopy pair 3707:3907 successfully removed.

The system is now ready for the next backup.


35.9 FlashCopy SE with System i partition


In this section we describe the planning and implementation of FlashCopy SE with i5/OS. As with classic FlashCopy, you can use FlashCopy SE to minimize production downtime when performing backups. We describe a solution with FlashCopy SE of the entire System i disk space. In this scenario, a clone of the production system is available in a standby backup partition. While a clone made with classic FlashCopy can be used for backups, testing, or development, we recommend using a clone made with FlashCopy SE mostly for backups of the production system, in order to maintain good performance and keep the capacity growth of the target volumes minimal.

35.9.1 Overview of FlashCopy SE


FlashCopy SE allows the amount of physical space allocated to the target volumes to be proportional to the amount of write activity on the FlashCopy source and target, while at the same time providing most of the functions available for classic FlashCopy. For a detailed description of FlashCopy SE, refer to Chapter 10, IBM FlashCopy SE on page 129.

35.9.2 Scenario and usage


Note: The scenario and required actions for taking System i backups with FlashCopy SE are very similar to the ones with classic FlashCopy. However, using FlashCopy SE requires an additional activity: monitoring and maintaining the occupation of the FlashCopy SE repository.
The production System i partition is connected to a DS8000, with all the disk space on external storage; Boot from SAN is used to connect the external load source unit. For more information about Boot from SAN refer to the Redbooks publication iSeries and IBM TotalStorage: A Guide to Implementing External Disk on IBM eServer i5, SG24-7120. Space Efficient volumes of the same capacity and protection as the System i production volumes are defined on the DS8000 as FlashCopy SE targets. A standby backup System i partition is connected to the FlashCopy SE target volumes, the target of the load source being connected with Boot from SAN. The backup partition usually resides on the same i5 server as the production partition, but it can also be on another i5 server. Typically, this partition does not run any workload.
To take a System i backup by using FlashCopy SE, you first need to power down the production partition to ensure that all data is drained to disk. Then you perform a FlashCopy SE of all production volumes to the Space Efficient targets, and IPL the backup partition, which is connected to the targets with Boot from SAN. After the IPL, the backup partition contains a clone of the production system, and it can therefore serve to take a backup of the entire production partition, or of a specific production library. The solution provides full integration with Backup Recovery and Media Services (BRMS) if necessary. As soon as the FlashCopy SE is taken, you can IPL the production partition and continue with the usual workload. Since performing the FlashCopy SE usually lasts only a few seconds, the downtime is about as long as it takes to power down and IPL; typically it is significantly shorter than the downtime that you would need to back up from the production partition. The scenario using FlashCopy SE of the entire System i disk space is shown in Figure 35-79.


Figure 35-79 FlashCopy SE of entire System i disk space

Given the nature of FlashCopy SE, the physical disk capacity needed for the target volumes can be much lower than with classic FlashCopy, which reduces the price of the needed external storage. On the other hand, implementing FlashCopy SE requires the following activity: you have to control the occupation (allocation) of the repository so that it does not exceed the repository size. Later in this section we describe how to control the allocation of the repository.
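As an orientation, the following is a minimal DS CLI sketch of the commands involved in such a FlashCopy SE setup and in checking the repository. The extent pool, capacities, and volume ranges are hypothetical placeholders, and the exact options depend on your DS CLI level; refer to Chapter 10, IBM FlashCopy SE on page 129 for the authoritative procedure.

# Sketch only - repository, FlashCopy SE relationships, and repository monitoring (values are placeholders)
# create the Space Efficient repository in extent pool P4
mksestg -repcap 280 -vircap 1125 -extpool P4
# establish FlashCopy SE from the production volumes to the Space Efficient targets
mkflash -nocp -tgtse 3700-3707:3A00-3A07
# check how much of the repository is currently allocated
showsestg P4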

35.9.3 Planning
As with any scenario involving i5/OS and external storage, this solution requires planning in both areas, the System i partition as well as the DS8000. While planning for the System i partition, consider the following topics:
- Planning for Boot from SAN
- Planning and sizing external storage for i5/OS
- The need to power down the production System i partition before taking the FlashCopy SE
- The possible need to adjust the clone system so that it runs correctly along with the production system
- If Backup Recovery and Media Services (BRMS) is used for backups, the steps needed to back up from the clone by using BRMS
For more information about all the listed topics refer to the Redbooks publication iSeries and IBM TotalStorage: A Guide to Implementing External Disk on IBM eServer i5, SG24-7120.
During planning for the DS8000, consider the following topics:
- Sizing of the DS8000 for i5/OS and modelling with Disk Magic

For more information about this topic refer to the Redbooks publication IBM System Storage DS8000 Series: Architecture and Implementation, SG24-6786.
- Sizing for the FlashCopy SE repository
For more information refer to Sizing for FlashCopy SE repository on page 658.
- Obtaining the proper licenses for FlashCopy SE
For more information about the needed licenses refer to the Redbooks publication IBM System Storage DS8000 Series: Architecture and Implementation, SG24-6786.

35.9.4 Sizing for FlashCopy SE repository


With solutions using FlashCopy SE, the FlashCopy SE repository should provide enough capacity so that it does not get totally consumed. Remember also that you cannot expand the repository after it is initially created. We recommend sizing the repository capacity based on the writes/sec or the Access Density (AD) of the workload. We also recommend adding some contingency, say 50%, to the calculated capacity: this allows you to smoothly use FlashCopy SE volumes even when your workload experiences some peaks, or to cope with possible workload growth. For detailed information about the FlashCopy SE repository and out-of-space conditions, refer to 10.3, Repository for Space Efficient volumes on page 132, and 10.4.5, Monitoring repository space and out-of-space conditions on page 149.

Sizing guidelines when writes/sec are known


Here we list the guidelines to size the FlashCopy SE repository for a System i workload:
- If you have the System i licensed product IBM Performance Tools for iSeries and reports are available, observe the average number of writes/sec in the Resource Report, section Disk Utilization. If you expect a workload peak while the FlashCopy SE target volumes are in use, you might want to consider the peak writes/sec figure rather than the average. For more information about obtaining System i performance reports refer to iSeries and IBM TotalStorage: A Guide to Implementing External Disk on IBM eServer i5, SG24-7120.
- Estimate the amount of time FlashCopy SE will be active (FlashCopy SE active time). For example, if you plan to use FlashCopy SE for a complete system backup that usually takes 3 hours, estimate the FlashCopy SE active time to be 3 hours.
- Use the following formula to calculate the needed capacity for the FlashCopy SE repository:
writes/sec x 0.67 x FlashCopy SE active time (sec) x 64 x 1.5 = repository capacity (KB)

A short explanation of the above formula: Whenever a write occurs, FlashCopy SE allocates one track of the repository capacity (one track is 64 KB for fixed block volumes). If multiple writes occur to the same track, FlashCopy SE allocates only one track, whatever the number of subsequent writes to that track. With random workloads we estimate about 33% of such re-writes, so take into account that 67% of the writes/sec will result in track allocations (writes/sec x 0.67). To calculate the capacity required, multiply this by the FlashCopy SE active time to obtain the total number of writes while FlashCopy SE is active (writes/sec x 0.67 x FlashCopy SE active time). Multiply the number of writes by the track size of 64 KB to obtain the capacity needed, and add 50% for contingency (writes/sec x 0.67 x FlashCopy SE active time x 64 x 1.5). The resulting capacity is measured in KB; divide it by one million to express the capacity in GB.

Note: In this formula we assume 1000 KB per MB, and 1000 MB per GB.
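To make the formula concrete, here is a small worked example with hypothetical workload figures (these numbers are illustrative and are not taken from our test environment): assume 300 writes/sec and an estimated FlashCopy SE active time of 2 hours (7200 sec):
300 x 0.67 x 7200 x 64 x 1.5 = 138 931 200 KB, which is approximately 139 GB
For this hypothetical workload you would therefore plan a repository of about 139 GB, already including the 50% contingency.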

Sizing guidelines when writes/sec are not known


If the System i workload statistics are not known at sizing time, you might want to assume that 50% of the IO/sec are writes/sec, and use the following formula for calculating the repository capacity:
Access Density x 0.5 x 0.67 x FlashCopy SE active time (sec) x 64 x 1.5 = number of KB/GB to be used for the repository
(Number of KB/GB) / 10 000 = percentage of production capacity needed for the repository
Note: For a System i workload on DS8000 you can assume AD = 1.
Short explanation of the above formula: The formula is similar to the one in Sizing guidelines when writes/sec are known, with the difference that it works with the density of writes (writes per GB). Calculate the writes/sec/GB that will result in track allocation by applying to AD the assumed percentage of writes (AD x 0.5), and taking into account an assumed 33% of re-writes (AD x 0.5 x 0.67). Multiply the number of writes/sec per GB you obtained by the FlashCopy SE active time (sec) to get the estimated allocated tracks per GB (AD x 0.5 x 0.67 x FlashCopy SE active time). Multiply the allocated tracks per GB by 64 KB and by the contingency factor of 1.5 to get the estimated allocated capacity per GB (measured in KB). To express this capacity in GB, divide it by one million; to obtain a percentage, multiply it by 100. These two operations combine into dividing by 10 000.
Note: In this formula we assume 1000 KB per MB, and 1000 MB per GB.
In Example 35-7, we illustrate our sizing of the capacity for a FlashCopy SE repository with the System i partition we set up in preparation for this book.
Example 35-7

The disk space consists of 32 * 35 GB LUNs on the DS8000, for a total capacity of 1125 GB assigned to the System i partition. For the System i workload we make the following assumptions:
- AD = 1
- 50% reads, 50% writes
- 33% re-writes
We estimate that FlashCopy SE will be active for 2 hours, that is, 7200 sec. To size the repository, we calculate the percentage of production capacity that will be needed for the repository, using the formula in Sizing guidelines when writes/sec are not known. After inserting the assumed values into the formula, the calculation is as follows:
1 x 0.5 x 0.67 x 7200 x 64 x 1.5 = 231552 KB/GB = 23%
So we need to allocate for the repository 23% of the capacity in the System i partition. For our tests we actually allocated 280 GB for the repository, which is about 25% of the 1125 GB System i production capacity.

Sizing guidelines for disk arms in repository


It is important to provide enough disk arms to the repository in order to achieve the best possible performance during the FlashCopy SE operation. We recommend the following guidelines for the number of disk arms used by the repository.
If the writes/sec are known, consider:
- Number of needed 15 K RPM disk drives in RAID5 = writes/sec x 0.67 / 25
- Number of needed 10 K RPM disk drives in RAID5 = writes/sec x 0.67 / 18
- Number of needed 15 K RPM disk drives in RAID10 = writes/sec x 0.67 / 50
- Number of needed 10 K RPM disk drives in RAID10 = writes/sec x 0.67 / 36
Short explanation of the above formulas: Each write in the production system, less the expected percentage of re-writes (x 0.67), results in a write operation to the repository. In RAID5, each write results in 4 disk operations. We assume a maximum of 100 disk operations/sec for a 15 K RPM disk drive, that is 100 / 4 = 25 writes/sec, and a maximum of 72 disk operations/sec for a 10 K RPM disk drive, that is 72 / 4 = 18 writes/sec. In RAID10, each write results in 2 disk operations, thus 100 / 2 = 50 writes/sec for a 15 K RPM disk drive and 72 / 2 = 36 writes/sec for a 10 K RPM disk drive.
Example: If the workload is 1000 writes/sec, consider 1000 x 0.67 / 25 = 27 x 15 K RPM disk drives in RAID5 (about 4 ranks in RAID5).
If the writes/sec are not known: Estimate the number of writes/sec as AD x percentage of writes x production disk space, and insert the result into the formulas used when writes/sec are known to get the number of disk drives needed for the repository. If AD is not known, you can assume AD = 1 for a System i workload, and if the read/write ratio is not known, you can assume a read/write ratio of 1 (50% writes) for a System i workload.
Example: The production disk space is 1125 GB, and we assume AD = 1 and a read/write ratio of 1. The calculated number of writes/sec is as follows: 1 x 0.5 x 1125 = 563. After inserting the writes/sec into the formula, we have 563 x 0.67 / 25 = 15 disk drives of 15 K RPM in RAID5 (about 2 ranks in RAID5).

FlashCopy SE Sizing Tool


To size the capacity required for the repository, you can ask your IBM Service Representative to use the FlashCopy Space Efficient Sizing Tool, which can be accessed in IBM Technical Documents at the following link: http://w3.ibm.com/support/techdocs/atsmastr.nsf/Web/Techdocs. A Metro Mirror or Global Mirror license is needed to run the tool (the tool can also be used in an environment with existing standard FlashCopy). Customers who do not have any of these licenses might consider arranging a temporary MM or GM license for running the tool. The tool collects information about the changed tracks in the production workload and reports various values, such as the number of out-of-sync tracks, the expected changed tracks, and the expected occupation of the repository, on different levels, during the collection period. In Figure 35-80 you can observe a sample report which shows the number of out-of-sync tracks during a System i Commercial Processing Workload (CPW), on the LSS level. The reported values can be used to size the repository without any further calculation.

Figure 35-80 Output of the FlashCopy Space Efficient Sizing Tool (chart of the sum of out-of-sync tracks, OOStrk, over time, per LSS)

35.9.5 Implementation
In this section we describe the steps to follow to set up FlashCopy SE for a System i partition, including the steps you already normally perform when taking backups.
Note: Information about how to access the System i partition and details about i5/OS commands can be found in the i5/OS Information Center at:
http://publib.boulder.ibm.com/infocenter/systems/scope/i5os/index.jsp

Set-up of FlashCopy SE for System i partition


Perform the following steps for the setup:
1. Plan the layout of the System i production volumes and FlashCopy SE volumes on the DS8000. To achieve the best possible performance, we recommend placing the FlashCopy SE target volumes in a different extent pool than the source volumes, with both extent pools being in the same rank group.

In our example we set up the production System i volumes (FlashCopy SE sources) and FlashCopy SE targets in 4 extent pools, each of them containing two RAID5 ranks in the DS8000. Two of the extent pools belong to rank group 0, and the other two belong to rank group 1. We define both FlashCopy SE source and target LUNs in each extent pool; the source volumes have corresponding targets in another extent pool which belongs to the same rank group as the extent pool with the sources. This layout is shown in Figure 35-81.

Figure 35-81 Layout of LUNs for FlashCopy SE (DS8000 with four extent pools: pools A and C in rank group 0, pools B and D in rank group 1; each pool contains production LUNs and SE target LUNs, and the FlashCopy SE relationships go between production LUNs in one pool and SE target LUNs in the other pool of the same rank group)

2. Define the extent pools and LUNs for the production System i partition, create the FlashCopy SE repository, and create the FlashCopy SE LUNs. To create the FlashCopy SE repository, use the DS CLI command mksestg with the following parameters:
-extpool   extent pool of the repository
-captype   (optional) denotes the type of the specified capacity (GB, cylinders, blocks)
-vircap    the amount of virtual capacity
-repcap    the size of the repository

For more information about how to create a repository, refer to Creating a repository for Space Efficient volumes on page 135. An illustration of the DS CLI commands used for creating a repository is shown in Figure 35-82.

dscli> mksestg -extpool P14 -captype gb -vircap 282 -repcap 70 Date/Time: October 30, 2007 2:17:46 PM CET IBM DSCLI Version: 5.3.0.977 DS: IBM.2107-7520781 CMUC00342I mksestg:: The space-efficient storage for the extent pool P14 has been created successfully. dscli> mksestg -extpool P15 -captype gb -vircap 282 -repcap 70 Date/Time: October 30, 2007 2:18:21 PM CET IBM DSCLI Version: 5.3.0.977 DS: IBM.2107-7520781 CMUC00342I mksestg:: The space-efficient storage for the extent pool P15 has been created successfully. dscli> mksestg -extpool P34 -captype gb -vircap 282 -repcap 70 Date/Time: October 30, 2007 2:19:03 PM CET IBM DSCLI Version: 5.3.0.977 DS: IBM.2107-7520781 CMUC00342I mksestg:: The space-efficient storage for the extent pool P34 has been created successfully. dscli> mksestg -extpool P47 -captype gb -vircap 282 -repcap 70 Date/Time: October 30, 2007 2:19:41 PM CET IBM DSCLI Version: 5.3.0.977 DS: IBM.2107-7520781 CMUC00342I mksestg:: The space-efficient storage for the extent pool P47 has been created successfully.
Figure 35-82 Creating the FlashCopy SE repository

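After creating the repositories and before defining any Space Efficient volumes, you might want to verify them from the DS CLI. A minimal sketch, reusing the lssestg command that also appears later in Figure 35-98:
dscli> lssestg -l
The repcap and vircap columns should reflect the values specified with mksestg, while repcapalloc and vircapalloc should still be 0 because no Space Efficient volumes exist yet.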
To define a Space Efficient LUN for System i using the DS CLI command mkfbvol, specify the parameter -sam tse in the command (-sam stands for Storage Allocation Method and tse denotes Target Space Efficient). Here is an illustration:
mkfbvol -extpool p14 -os400 A05 -sam tse -name Vol_SE_#h 1009-100f
For more information about creating Space Efficient LUNs, refer to 10.3.3, Creation of Space Efficient volumes on page 138. For our test, we created 4 extent pools and 8 * 35 GB System i LUNs in each extent pool. We created a 70 GB repository in each extent pool, which resulted in a total repository capacity of 280 GB for 1125 GB of production capacity. Next, we defined 8 * 35 GB Space Efficient LUNs (FlashCopy SE targets) in each extent pool. Figure 35-83 shows part of the output from the DS CLI command lsfbvol for one of the extent pools. Observe the standard and Space Efficient LUNs contained in the pool.
dscli> lsfbvol -l -extpool p14
Date/Time: October 29, 2007 8:28:15 PM CET IBM DSCLI Version: 5.3.0.977 DS: IBM.2107-7520781
Name            ID   accstate datastate configstate deviceMTM datatype extpool sam      captype
=================================================================================================
ITSO_St_LS_1000 1000 Online   Normal    Normal      2107-A85  FB 520U  P14     Standard iSeries
ITSO_St_1001    1001 Online   Normal    Normal      2107-A05  FB 520P  P14     Standard iSeries
ITSO_St_1002    1002 Online   Normal    Normal      2107-A05  FB 520P  P14     Standard iSeries
ITSO_St_1003    1003 Online   Normal    Normal      2107-A05  FB 520P  P14     Standard iSeries
ITSO_St_1004    1004 Online   Normal    Normal      2107-A05  FB 520P  P14     Standard iSeries
ITSO_St_1005    1005 Online   Normal    Normal      2107-A05  FB 520P  P14     Standard iSeries
ITSO_St_1006    1006 Online   Normal    Normal      2107-A05  FB 520P  P14     Standard iSeries
ITSO_St_1007    1007 Online   Normal    Normal      2107-A05  FB 520P  P14     Standard iSeries
ITSO_SE_LS_1008 1008 Online   Normal    Normal      2107-A85  FB 520U  P14     TSE      iSeries
ITSO_SE_1009    1009 Online   Normal    Normal      2107-A05  FB 520P  P14     TSE      iSeries
ITSO_SE_100A    100A Online   Normal    Normal      2107-A05  FB 520P  P14     TSE      iSeries
ITSO_SE_100B    100B Online   Normal    Normal      2107-A05  FB 520P  P14     TSE      iSeries
ITSO_SE_100C    100C Online   Normal    Normal      2107-A05  FB 520P  P14     TSE      iSeries
ITSO_SE_100D    100D Online   Normal    Normal      2107-A05  FB 520P  P14     TSE      iSeries
ITSO_SE_100E    100E Online   Normal    Normal      2107-A05  FB 520P  P14     TSE      iSeries
ITSO_SE_100F    100F Online   Normal    Normal      2107-A05  FB 520P  P14     TSE      iSeries
Figure 35-83 System i standard and Space Efficient LUNs

3. Set up a System i partition for which all the disk space is located on the DS8000 and with the external LoadSource connected via Boot from SAN. For more information about setting up a System i partition with external storage, refer to these Redbooks publications: IBM System Storage DS8000 Series: Architecture and Implementation, SG24-6786, Chapter 17, System i considerations, and iSeries and IBM TotalStorage: A Guide to Implementing External Disk on IBM eServer i5, SG24-7120.
In our example we set up a System i partition using 32 * 35 GB LUNs on the DS8000. The LUNs are connected via 4 System i FC adapters, two of them being attached to Boot from SAN IOPs. The external LoadSource is connected via a Boot from SAN IOP and is mirrored to an external LUN; all the other LUNs are connected in Multipath. Part of the production LUNs are visible in Figure 35-84.

Display Disk Configuration Status

ASP  Unit  Serial Number  Type  Model  Resource Name  Status
 1                                                    Mirrored
      1    50-1000781     2107  A85    DD019          Active
      1    50-1208781     2107  A85    DD020          Active
      14   50-120A781     2107  A05    DMP143         RAID-5/Active
      15   50-1304781     2107  A05    DMP195         RAID-5/Active
      16   50-1005781     2107  A05    DMP137         RAID-5/Active
      17   50-1508781     2107  A05    DMP191         RAID-5/Active
      18   50-150F781     2107  A05    DMP185         RAID-5/Active
      19   50-150B781     2107  A05    DMP183         RAID-5/Active
      20   50-1302781     2107  A05    DMP172         RAID-5/Active
      21   50-1004781     2107  A05    DMP159         RAID-5/Active
      22   50-1307781     2107  A05    DMP197         RAID-5/Active
      23   50-120B781     2107  A05    DMP109         RAID-5/Active
      24   50-150D781     2107  A05    DMP173         RAID-5/Active
                                                      More...
Press Enter to continue.
F3=Exit   F5=Refresh   F9=Display disk unit details
F11=Disk configuration capacity   F12=Cancel

Figure 35-84 System i production volumes

On the System i HMC, we tagged as the IPL device the FC adapter to which the external LoadSource is connected and which is attached to the Boot from SAN IOP, as shown in Figure 35-85.

Figure 35-85 Tagged Boot from SAN IOP

Using FlashCopy SE with i5/OS


To use FlashCopy SE volumes for a backup of the entire System i disk space, perform the following steps:
1. If BRMS is used for the backup, perform the commands to integrate BRMS with FlashCopy, as described in the Redbooks publication, iSeries and IBM TotalStorage: A Guide to Implementing External Disk on IBM eServer i5, SG24-7120.
2. Power down the System i partition by using the i5/OS command PWRDWNSYS, as shown in Figure 35-86.
Power Down System (PWRDWNSYS)

Type choices, press Enter.

How to end . . . . . . . . . . .   *CNTRLD     *CNTRLD, *IMMED
Controlled end delay time  . . .   10          Seconds, *NOLIMIT
Restart options:
  Restart after power down . . .   *NO         *NO, *YES
  Restart type . . . . . . . . .   *IPLA       *IPLA, *SYS, *FULL
IPL source . . . . . . . . . . .   *PANEL      *PANEL, A, B, D, *IMGCLG

                                                                  Bottom
F3=Exit   F4=Prompt   F5=Refresh   F10=Additional parameters   F12=Cancel
F13=How to use this display   F24=More keys

Figure 35-86 Power-down production system

3. Perform a FlashCopy SE of the production LUNs by using the DS CLI command mkflash with the following parameters:
-tgtse   denotes Space Efficient target LUNs
-nocp    specifies not to perform a background copy

An illustration of the DSCLI commands we used in our test is shown in Figure 35-87.
dscli> mkflash -tgtse -nocp 1000-1007:1200-1207
Date/Time: October 30, 2007 10:13:40 AM CET IBM DSCLI Version: 5.3.0.977 DS: IBM.2107-7520781
CMUC00137I mkflash: FlashCopy pair 1000:1200 successfully created.
CMUC00137I mkflash: FlashCopy pair 1001:1201 successfully created.
CMUC00137I mkflash: FlashCopy pair 1002:1202 successfully created.
CMUC00137I mkflash: FlashCopy pair 1003:1203 successfully created.
CMUC00137I mkflash: FlashCopy pair 1004:1204 successfully created.
CMUC00137I mkflash: FlashCopy pair 1005:1205 successfully created.
CMUC00137I mkflash: FlashCopy pair 1006:1206 successfully created.
CMUC00137I mkflash: FlashCopy pair 1007:1207 successfully created.
dscli> mkflash -tgtse -nocp 1208-120f:1008-100f
Date/Time: October 30, 2007 10:14:03 AM CET IBM DSCLI Version: 5.3.0.977 DS: IBM.2107-7520781
CMUC00137I mkflash: FlashCopy pair 1208:1008 successfully created.
CMUC00137I mkflash: FlashCopy pair 1209:1009 successfully created.
CMUC00137I mkflash: FlashCopy pair 120A:100A successfully created.
CMUC00137I mkflash: FlashCopy pair 120B:100B successfully created.
CMUC00137I mkflash: FlashCopy pair 120C:100C successfully created.
CMUC00137I mkflash: FlashCopy pair 120D:100D successfully created.
CMUC00137I mkflash: FlashCopy pair 120E:100E successfully created.
CMUC00137I mkflash: FlashCopy pair 120F:100F successfully created.
dscli> mkflash -tgtse -nocp 1300-1307:1500-1507
Date/Time: October 30, 2007 10:14:20 AM CET IBM DSCLI Version: 5.3.0.977 DS: IBM.2107-7520781
CMUC00137I mkflash: FlashCopy pair 1300:1500 successfully created.
CMUC00137I mkflash: FlashCopy pair 1301:1501 successfully created.
CMUC00137I mkflash: FlashCopy pair 1302:1502 successfully created.
CMUC00137I mkflash: FlashCopy pair 1303:1503 successfully created.
CMUC00137I mkflash: FlashCopy pair 1304:1504 successfully created.
CMUC00137I mkflash: FlashCopy pair 1305:1505 successfully created.
CMUC00137I mkflash: FlashCopy pair 1306:1506 successfully created.
CMUC00137I mkflash: FlashCopy pair 1307:1507 successfully created.
dscli> mkflash -tgtse -nocp 1508-150f:1308-130f
Date/Time: October 30, 2007 10:14:32 AM CET IBM DSCLI Version: 5.3.0.977 DS: IBM.2107-7520781
CMUC00137I mkflash: FlashCopy pair 1508:1308 successfully created.
CMUC00137I mkflash: FlashCopy pair 1509:1309 successfully created.
CMUC00137I mkflash: FlashCopy pair 150A:130A successfully created.
CMUC00137I mkflash: FlashCopy pair 150B:130B successfully created.
CMUC00137I mkflash: FlashCopy pair 150C:130C successfully created.

Figure 35-87 Make FlashCopy SE

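Before IPLing the backup partition, you can verify that all the FlashCopy SE relationships exist. A minimal sketch, reusing the lsflash command that also appears later in Figure 35-98; the volume range covers all the pairs created in Figure 35-87:
dscli> lsflash 1000-15ff
All 32 pairs should be listed; a missing pair would indicate that one of the mkflash commands did not complete successfully.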
4. IPL the backup System i partition (which is connected to the FlashCopy SE target volumes) by activating the partition from the HMC. Make sure to tag as the load source the Boot from SAN IOP to which the external LoadSource is connected. Alternatively, you can tag the FC adapter attached to this IOP. Activating the partition is shown in Figure 35-88.

Figure 35-88 Activate stand-by partition

IPLing the partition in the backup System i brings up a clone of the production partition, with the disk space residing on the FlashCopy SE target LUNs. In our example the backup partition connects to the FlashCopy SE targets shown in Figure 35-87. In i5/OS, the FlashCopy SE target LUNs are System i disk units, as can be seen in Figure 35-89. Observe the LUN ID, which is contained in characters 4 to 7 of the disk unit serial number.
Display Disk Configuration Status

ASP  Unit  Serial Number  Type  Model  Resource Name  Status
 1                                                    Mirrored
      1    50-1200781     2107  A85    DD019          Active
      1    50-1008781     2107  A85    DD020          Active
      14   50-100A781     2107  A05    DMP143         RAID-5/Active
      15   50-1504781     2107  A05    DMP195         RAID-5/Active
      16   50-1205781     2107  A05    DMP137         RAID-5/Active
      17   50-1308781     2107  A05    DMP191         RAID-5/Active
      18   50-130F781     2107  A05    DMP185         RAID-5/Active
      19   50-130B781     2107  A05    DMP183         RAID-5/Active
      20   50-1502781     2107  A05    DMP172         RAID-5/Active
      21   50-1204781     2107  A05    DMP159         RAID-5/Active
      22   50-1507781     2107  A05    DMP198         RAID-5/Active
      23   50-100B781     2107  A05    DMP109         RAID-5/Active
      24   50-130D781     2107  A05    DMP173         RAID-5/Active
                                                      More...
Press Enter to continue.
F3=Exit   F5=Refresh   F9=Display disk unit details
F11=Disk configuration capacity   F12=Cancel

Figure 35-89 FlashCopy SE targets in backup System i partition

Note: Usually it is necessary to change device descriptions and network attributes in the IPLed partition. If BRMS is used, it is also necessary to perform the BRMS and FlashCopy integration commands. This procedure can be automated as described in the book iSeries and IBM TotalStorage: A Guide to Implementing External Disk on IBM eServer i5, SG24-7120.
5. To keep the space occupied in the repository at a minimum level, remove the FlashCopy relationship and release the space in the repository once the backup is finished. To achieve this, use the DS CLI command rmflash with the parameter -tgtreleasespace. An illustration of the rmflash command is shown in Figure 35-90.
dscli> rmflash -tgtreleasespace 1000-1007:1200-1207
Date/Time: November 1, 2007 11:48:52 AM CET IBM DSCLI Version: 5.3.0.977 DS: IBM.2107-7520781
CMUC00144W rmflash: Are you sure you want to remove the FlashCopy pair 1000-1007:1200-1207:? [y/n]:y
CMUC00140I rmflash: FlashCopy pair 1000:1200 successfully removed.
CMUC00140I rmflash: FlashCopy pair 1001:1201 successfully removed.
CMUC00140I rmflash: FlashCopy pair 1002:1202 successfully removed.
CMUC00140I rmflash: FlashCopy pair 1003:1203 successfully removed.
CMUC00140I rmflash: FlashCopy pair 1004:1204 successfully removed.
CMUC00140I rmflash: FlashCopy pair 1005:1205 successfully removed.
CMUC00140I rmflash: FlashCopy pair 1006:1206 successfully removed.
CMUC00140I rmflash: FlashCopy pair 1007:1207 successfully removed.
Figure 35-90 Releasing repository space
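Figure 35-90 shows the command for the first volume range only. In our configuration, the same command would be repeated for the three remaining ranges used in Figure 35-87; a sketch:
dscli> rmflash -tgtreleasespace 1208-120f:1008-100f
dscli> rmflash -tgtreleasespace 1300-1307:1500-1507
dscli> rmflash -tgtreleasespace 1508-150f:1308-130f
After all the relationships are removed, the repcapalloc value reported by lssestg -l should drop back toward 0 as the repository space is released.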

35.9.6 Monitor use of repository space with workload CPW


How quickly the available repository space is consumed depends on the amount and characteristics of the writes in both the production and backup partitions while the FlashCopy SE relationship is in effect. When using FlashCopy SE to take backups, we expect a very limited amount of writes in the backup partition. Consequently, it is essentially the workload in the production partition that affects the space occupied in the repository. To get an impression of how the occupied repository space grows in a typical System i environment, we ran a test with the System i Commercial Processing Workload (CPW) during a FlashCopy SE operation. (We used CPW because it has the same workload patterns as experienced by many System i installations.)

Test description
Our production partition is on a System i5 model 570, and we allocated 4 processors and 12 GB of memory. For this partition we set up 1125 GB of disk capacity on the DS8000, and we created a FlashCopy SE repository of 280 GB. The layout of both the production and Space Efficient LUNs is described in Implementation on page 661. In the production partition, we set up CPW with a 20 000 users workload, which automatically allocates a System i memory pool of 5 GB. The following tools were used to monitor the workload, disks, and repository: the Performance Data Collection Utility (PDCU), IBM Performance Tools for iSeries, and the FlashCopy Space Efficient Sizing Tool. At the beginning of the test, the repository allocation was 0. We made a FlashCopy SE of all the production volumes and, at the same time, we started CPW and the monitoring tools. The test lasted for about 4 hours.

Results
CPW produced an average of 1500 IOPS, as can be seen in the PDCU output in Figure 35-91 (the PDCU capture interval was 15 minutes).

Figure 35-91 PDCU - IO/sec

During the workload, an average of about 900 writes/sec was experienced, as depicted in Figure 35-92.

Figure 35-92 PDCU - Writes/sec

Figure 35-93 shows how the space occupied in the repository grows over time (at 15-minute intervals). This graph was created by the FlashCopy Space Efficient Sizing Tool. Observe that initially the space occupied grew faster because of the higher writes/sec at the beginning of the CPW. After the CPW settled down, the growth became almost linear, with an increase of about 5 to 6 GB every 15 minutes.

Figure 35-93 Used repository capacity during CPW

The percentage of used repository space during CPW can be observed in Figure 35-94. Once CPW settles down, the occupied repository space grows at a rate of 2% per 15 minutes; in about 3 hours and 20 minutes, it had reached 25% of the repository capacity.

Figure 35-94 Percentage of used repository during CPW

Summary
In our test we experienced the following conditions:
- Production partition disk capacity: 1125 GB
- Repository size: 280 GB
- CPW running in the production partition produced about 1500 IO/sec with a read/write ratio of 0.6 (about 900 writes/sec)
- During FlashCopy SE, the repository occupied space grew from 0 to 70 GB in 3 hours and 20 minutes. This represents about 25% of the available repository capacity.

35.9.7 System behavior with a repository full condition


If you maintain the space occupied in the FlashCopy SE repository at a limited level by regularly executing the command to release the occupied space, the repository should never be completely filled up. However, if the occupied space reaches 85% of the available capacity, you will be warned by a Simple Network Management Protocol (SNMP) alert triggered in the DS8000. You can also change this 85% default warning threshold to another value more appropriate for you; you can do this with the DS CLI command chsestg and the -repcapthreshold parameter. For example, you can set the threshold value to 50% by using the command:
chsestg -repcapthreshold 50 P14
To get and handle SNMP alerts, the DS8000 and the SNMP manager must be properly set up. To observe SNMP alerts, connect directly or via Remote Desktop to the workstation where the SNMP manager resides, and open the SNMP Trap Watcher, as shown in Figure 35-95.

Figure 35-95 SNMP Trap Watcher

In the upper part of the SNMP Trap Watcher window, click the alert you want to look at; the description is shown in the bottom part of the window (see Figure 35-96).

Figure 35-96 SNMP Trap Watcher screen

For more information about configuring the DS8000, installing the SNMP agent, and handling alerts, refer to Appendix B, SNMP notifications on page 765, or the Redbooks publication, IBM System Storage DS8000 Series: Architecture and Implementation, SG24-6786. Figure 35-97 shows the SNMP alert when reaching the default 85% threshold for the repository. Observe the extent pool number, which is written in hexadecimal notation.

Figure 35-97 Repository Watermark warning in SNMP

If the repository fills up during a FlashCopy SE operation, the FlashCopy SE relationship fails. In Figure 35-98 you can see that the repository occupied space has reached 100% and that FlashCopy SE relationships have failed.

dscli> lssestg -l
Date/Time: November 6, 2007 5:31:05 PM CET IBM DSCLI Version: 5.3.0.977 DS: IBM.2107-7520781
extentpoolID stgtype datastate configstate repcapstatus %repcapthreshold repcap (2^30B) vircap repcapalloc vircapalloc
======================================================================================================================
P14          fb      Normal    Normal      below        0                70.0           282.0  70.0        264.0
P15          fb      Normal    Normal      below        0                70.0           282.0  70.0        264.0
P34          fb      Normal    Normal      below        0                70.0           282.0  70.0        264.0
P47          fb      Normal    Normal      below        0                70.0           282.0  70.0        264.0
dscli> lsflash 1000-15ff
Date/Time: November 6, 2007 5:31:16 PM CET IBM DSCLI Version: 5.3.0.977 DS: IBM.2107-7520781
ID        SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWriteEnabled TargetWriteEnabled Backgrou
==============================================================================================================================
1000:1200
1001:1201
1002:1202
1003:1203
1004:1204
1005:1205
1006:1206
1209:1009
120A:100A

Figure 35-98 FlashCopy SE failed due to full repository

A FlashCopy SE failure does not affect the System i production partition: there is no interruption, nor any message. However, the backup partition, which uses disk space on the FlashCopy SE target volumes, stops as soon as the FlashCopy relationship fails. In our testing, the backup partition stopped with SRC code A6020266 even slightly before the repository occupation reached 100%. The backup partition with the critical SRC code is shown in Figure 35-99.

Figure 35-99 Backup partition at repository full

35.10 TPC for Replication with Global Mirror for i5/OS


The IBM TotalStorage Productivity Center for Replication (TPC-R) is described in Chapter 6, IBM TotalStorage Productivity Center for Replication on page 43. In summary, TPC-R is designed to simplify the management of Copy Services for the DS8000, DS6000, ESS 800, and SVC in the following areas:
- Administration and configuration of Copy Services
- Starting, suspending, and resuming Copy Services tasks
- Managing planned and unplanned outages with Metro Mirror and Global Mirror
Based on your needs, you might decide to purchase and implement one or more of the following TPC-R licenses:
- TPC for Replication (5608-TRA)
- TPC for Replication Two Site Business Continuity (BC) (5608-TRB)
- TPC for Replication Three Site Business Continuity (BC) (5608-TRC)
Typically, you install TPC-R on a Windows workstation, on a Linux or AIX workstation, or in a System i5 partition. IP connections are required from the TPC-R workstation to the storage systems you want to manage and monitor. Once TPC-R is installed and running, you can access its Graphical User Interface (GUI) through a Web browser, or you can use the TPC-R Command Line Interface. For more information about TPC-R, refer to Chapter 6, IBM TotalStorage Productivity Center for Replication on page 43, and to the Redbooks publication, IBM TotalStorage Productivity Center for Replication on Windows 2003, SG24-7250. In this section we describe how to plan the use of TPC-R in the context of a System i host. We explain how to access and use TPC-R to set up Global Mirror for the entire System i disk space; we also explain how to switch over to the remote site during outages, and then fail back to the local site.

35.10.1 Planning
To add LUNs to a TPC-R session, we recommend that you define the LUNs on both the local and remote DS8000 by using the following guidelines:
- Use the minimal number of LSSs.
- Make sure that each relevant LSS contains only the volumes that will be managed by TPC-R.
- At the remote site, define the Global Copy targets and FlashCopy targets in different LSSes, having the last two digits of a volume ID the same as the volumes on the local site.
If for any reason you are not able to define the LUNs by adhering to the above guidelines, it might be a good idea to prepare a CSV file containing the LUN IDs, and import it into the TPC-R session, as described in Add Copy Sets to the TPC-R session on page 678.

35.10.2 Accessing the TPC-R GUI


To access the TPC-R GUI:
1. Start a Web browser on a workstation connected to the TPC-R server, using the link https://x.x.x.x:9443/CSM, where x.x.x.x is the IP address or DNS name of the TPC-R workstation.
2. In the TPC-R initial window that displays, enter the User ID and Password, and log in. The initial User ID and Password are provided with the TPC-R installation media. You can define additional User IDs on the TPC-R server, as described in the TPC-R Users Guide.
After a successful login, the TPC-R Health Overview window is displayed as illustrated (partial view) in Figure 35-100.

35.10.3 Create a TPC-R session


To create a TPC-R session, perform the following steps:
1. In the Health Overview window, click Sessions, as shown in Figure 35-100.

Figure 35-100 TPC-R Menu

2. The Sessions window is displayed. Click Create Session to start the Create Session wizard.
3. With the wizard, you first select the TPC-R session type you want to create from the pull-down in the Choose Session Type window. In our example we plan to use Global Mirror with failover to the remote site and then failback, so we choose Global Mirror Failover/Failback. A small picture of the chosen session type appears on the right side of the screen as soon as you select it (see Figure 35-101). Click Next.

Figure 35-101 Picture of chosen session type

Note: In a TPC-R session for Global Mirror, the source volumes are referred to as Host1 (H1) volumes, the Global Copy targets as Host2 (H2) volumes, and the FlashCopy targets as Journal2 (J2) volumes.
4. The next window that displays is shown in Figure 35-102. Specify the name and description of the session. You can also change the GM Consistency Group interval time from the default of 0 to a desired value. Click Next.

Figure 35-102 Session name

5. Next, you choose the Disk Systems for the volumes on the local and remote sites. Note that, at this point, you can choose from any Disk System registered in TPC-R, regardless of available GM links; links and paths will be checked by TPC-R when establishing the Global Mirror. An example of selecting the Disk System for H2 volumes (Global Copy target volumes within GM) is shown in Figure 35-103. Click Next, then click Finish.

Figure 35-103 Selecting Disk System for GM volumes

35.10.4 Add Copy Sets to the TPC-R session


We assume that both the local and remote DS8000 are configured and attached to System i partitions at this point. To add Copy Sets (add volumes) to a TPC-R session:
1. Open the Sessions window and click the relevant session to display its details. In the Session Details window, select Add Copy Sets from the pull-down, as shown in Figure 35-104. Click Go. Another way to do this is to select the relevant session in the Sessions window and choose Add Copy Sets from the pull-down in the same window.

Figure 35-104 Add Copy Sets

2. In the next set of windows, specify the Disk systems, LSSes, and LUNs to include in the TPC-R session as H1, H2, and J2 volumes.

If the LUNs are created according to the guidelines given under Planning on page 675, you can add 3 entire LSSes for the H1, H2, and J2 LUNs as one Copy Set. Otherwise, you add a set of 3 LUNs (H1, H2, and J2) as one Copy Set. For efficiency, it is a good idea to prepare a CSV file and import it into the session, as described in the following example. In our example we create one Copy Set containing one H1 volume, one H2 volume, and one J2 volume, export it as CSV, complete the CSV with the other LUNs, and import it into the TPC-R session. We perform the following steps:
Specify one volume from each site (H1, H2, and J2) for the Copy Set. An illustration of specifying the LUNs for H2 is shown in Figure 35-105.

Figure 35-105 Adding H2 LUNs to the Copy Set

After the LUNs are selected, you get the Select Copy Sets window, as is shown in Figure 35-106. You can click on the new Copy Set to display it, or just click Next. In the next window, confirm to add the Copy Set by clicking Next, and then click Finish in the subsequent window. Once the new Copy Set is added the wizard brings you to the Session Details window.

Figure 35-106 Display new copy set

From the pull-down in the Session Details window, select Export Copy Sets, as is shown in Figure 35-107. Click Go.

Figure 35-107 Export Copy Sets

You get a message indicating that the export was successful, together with a link from where you can download the CSV file, as shown in Figure 35-108. Open the link and download the file to your PC.

Figure 35-108 Download CSV file

The downloaded CSV file contains the volumes of the exported Copy Set. Open it with Windows Excel, enter the other volumes you want to add to the TPC-R session, and save the file.
Note: While inserting System i volumes, make sure that each LUN maps to a LUN of equal size and protection. For more information about the sizes and protection of System i LUNs, refer to the Redbooks publication, IBM System Storage DS8000 Series: Architecture and Implementation, SG24-6786.

The sample CSV file we used for our example is shown in Figure 35-109.

Figure 35-109 CSV file

To import the CSV file into TPC-R, select Add Copy Sets from the pull-down in the Session Details window, and click Go. In the Add Copy Sets window, check Use a CSV file to import copysets, click Browse, and select the CSV file from your PC, as depicted in Figure 35-110. Click Next in the following windows displayed by the wizard, or Finish as required.

Figure 35-110 Import CSV file

After the Copy Sets have been imported, you can examine them by clicking one of the Role Pairs in the Session Details window (see Figure 35-111). This brings up a list of all the volume pairs defined on the selected roles.

Figure 35-111 Select a role pair

In our example we can see all the H1 and H2 volume pairs, and their status in Global Mirror, as shown in Figure 35-112. Since GM is not yet started, all the volumes are in the Defined status; they are not yet Preparing or Prepared.

Figure 35-112 Volume pairs H1 and H2

35.10.5 Start Global Mirror


To start Global Mirror for the volumes you added to the TPC-R session:
1. In the Sessions window, click the session for which you want to start GM, then select Start H1->H2 from the pull-down in the Session Details window, as shown in Figure 35-113. Click Go.

Figure 35-113 Start H1->H2

In the window that is displayed next, observe the warning about the target LUNs being overwritten. If needed, check the LUN IDs again, and then confirm to start a Global Mirror session from H1 to H2. The status of the session changes to Preparing, and the picture indicates a data flow from H1 to H2 (see Figure 35-114). Global Copy between H1 and H2 is running at this time, but it is not yet synchronized. Also, the FlashCopy from H2 to J2 and the Global Mirror session are not yet created.

Figure 35-114 Observe starting of GM

2. After Global Copy is synchronized, TPC-R creates the FlashCopy relation from H2 to J2 and starts the Global Mirror session. As soon as this is done, the TPC-R session status changes from Preparing to Prepared (see Figure 35-115).

Figure 35-115 Prepared TPC-R session

In the Sessions window, click on the TPC-R session to display details. In our example the session is in Normal status and Prepared state. The small triangle in the picture denotes the set of recoverable volumes (at this stage local volumes H1 are recoverable). Refer to Figure 35-116.

Figure 35-116 TPC-R session - started GM

3. We recommend displaying the messages in the TPC-R console to detect any possible issues during the Global Mirror start. To display the messages, click Console in the My Work panel. The console messages for our example (see Figure 35-117) show that GM started successfully. The rather long time between the Preparing and Prepared session states is due to the Global Copy synchronization.

Figure 35-117 TPC-R console

35.10.6 Switch to Remote site at planned outages


To switch to the remote site at a planned outage, perform the following steps:
1. Power down the System i production partition, as shown in Figure 35-86 on page 665.
2. In the TPC-R Sessions window, click the relevant session to display the Session Details window. Select Suspend from the pull-down, then click Go. In the warning window that appears next, you can check the LUNs to be suspended, and confirm. TPC-R then suspends the Global Mirror session. The TPC-R session indicates Status Severe and State Suspended, as can be seen in Figure 35-118.

Figure 35-118 Suspended GM

3. From the pull-down select Recover, as shown in Figure 35-119. Click Go, and confirm at the warning.

Figure 35-119 Recover on remote site

During the recovery, TPC-R performs a failover to the GC target volumes (H2) and a Fast Reverse Restore from the FlashCopy target LUNs to the GC targets. The TPC-R session is now in the status Target Available. You can IPL the remote System i partition connected to the H2 LUNs. Notice that the H2 LUNs are now recoverable, as indicated by the triangle at H2 in the picture on the right hand side.
4. Check that the LoadSource tag on the System i HMC points to the IOP or FC adapter to which the external LoadSource is connected. IPL the recovery System i partition by activating it at the HMC, as shown in Figure 35-88 on page 667. After the IPL is complete, the recovery partition runs a clone of the production system. The disk units in the recovery partition are GM target LUNs, as can be seen in Figure 35-120 (the volume serial numbers of the disk units contain the volume IDs and the last 3 digits of the remote DS8000 image ID).

Display Disk Configuration Status

ASP  Unit  Serial Number  Type  Model  Resource Name  Status
 1                                                    Mirrored
      1    50-C000461     2107  A85    DD019          Active
      1    50-C001461     2107  A85    DD020          Active
      14   50-C009461     2107  A05    DMP143         RAID-5/Active
      15   50-C012461     2107  A05    DMP127         RAID-5/Active
      16   50-C006461     2107  A05    DMP137         RAID-5/Active
      17   50-C015461     2107  A05    DMP129         RAID-5/Active
      19   50-C018461     2107  A05    DMP163         RAID-5/Active
      20   50-C010461     2107  A05    DMP133         RAID-5/Active
      21   50-C005461     2107  A05    DMP159         RAID-5/Active
      23   50-C00A461     2107  A05    DMP109         RAID-5/Active
      24   50-C01A461     2107  A05    DMP155         RAID-5/Active
      26   50-C007461     2107  A05    DMP111         RAID-5/Active
      27   50-C00C461     2107  A05    DMP125         RAID-5/Active
                                                      More...
Press Enter to continue.
F3=Exit   F5=Refresh   F9=Display disk unit details
F11=Disk configuration capacity   F12=Cancel

Figure 35-120 Disk units in Recovery partition

5. In TPC-R, in the Session Details window, select Start H2->H1 from the pull-down, and click Go. This starts the Global Copy failback from the H2 LUNs to the H1 LUNs. The TPC-R session now shows the Preparing state and indicates a replication direction from H2 to H1, as can be seen in Figure 35-121.

Figure 35-121 Failback

35.10.7 Switch to Remote site at unplanned outages


In case of an unplanned outage, perform the same actions as for a planned outage, with the following exceptions:
- If the System i partition fails, you probably will not be able to power it down before switching to the recovery site.
- If the failure occurs on the production DS8000 or on the link, Global Mirror will automatically suspend, and you do not need to suspend it in TPC-R.
Therefore, you will most probably start with step 3, as described under Switch to Remote site at planned outages.

35.10.8 Fail back to local site


Once the outage is over, perform the following steps to fail back to the local site:
1. Power down the recovery System i partition, as shown in Figure 35-86 on page 665.
2. In TPC-R, in the Session Details window, select Suspend from the pull-down, click Go, and confirm the warning. Global Copy is now suspended, and the TPC-R session shows Status Severe, State Suspended.
3. Still in the Session Details window, select Recover from the pull-down, click Go, and confirm the warning. The status of the session is now Normal, and the H1 volumes are recoverable. In Figure 35-122, notice the TPC-R picture showing H1 as recoverable by the small triangle.

Figure 35-122 Recoverable volumes H1 in failback

4. At the HMC, IPL the production System i partition, which is connected to the H1 volumes. Once the production system is running, it contains the updates made to the recovery system during the outage.
5. In TPC-R, in the Session Details window, select Start H1->H2 from the pull-down, and click Go. In the warning window that appears, check that the replication direction and the Disk Systems are as you want, and confirm. This resumes Global Mirror in the initial direction, from H1 to H2.

Part 10

Interoperability
In this part of the book, we discuss the interoperability of the various Copy Services functions on the DS8000, and also the interoperability of the DS8000 with other IBM System Storage and TotalStorage disk subsystems in Copy Services implementations.


Chapter 36. Data migration through double cascading


In this chapter we discuss how to ensure a consistent data migration by combining copy services functions in a double cascading configuration.


36.1 Data migration with double cascading


Combining the Copy Services functions Metro Mirror and Global Copy in an environment with double cascading allows us to maintain data consistency while migrating data. There are two possibilities for accomplishing data consistency during a migration using an environment that has double cascading. The first possibility is to shut down all applications at the local site and let the out-of-sync tracks drain completely. The second possibility is to change the Global Copy relationships to Metro Mirror and then, when the out-of-sync tracks are approaching zero, issue a freeze to the changed relationships.

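With either approach, you need a way to confirm that the out-of-sync tracks of the Global Copy relationships have drained before you consider the data at the next site consistent. A minimal DS CLI sketch (the device IDs and volume range here are hypothetical placeholders, not taken from a real configuration):
dscli> lspprc -dev IBM.2107-75ABCD1 -remotedev IBM.2107-75ABCD2 -l 1000-100f
Watch the out-of-sync tracks column in the output; when it reports 0 for every pair, that relationship is fully drained.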
36.2 Double cascading example


Figure 36-1 shows double cascading with a Metro Mirror relationship from the local to the secondary and Global Copy from the secondary to the tertiary site and from the tertiary to the remote site. There is a cascading relationship at both the secondary and tertiary sites where the volumes are both secondaries and primaries. In this example, if all applications are stopped at the local site, then the local, secondary, tertiary and remote will reach a point after some time where they are all equal. Therefore they will be consistent and data has successfully been migrated to the remote site volumes.

Figure 36-1 Double cascading example (DS8000 A at the local site with Metro Mirror over DWDM at less than 50 km to B at the secondary site, Global Copy from B to C at the tertiary site, and Global Copy from C to D at the remote site)

In the second approach to providing data consistency during the migration, again looking at the example in Figure 36-1, a freeze is first issued to the Metro Mirror relationship from the local to the secondary site. Since Metro Mirror is running between the local and the secondary site, there is already a synchronous relationship, and therefore the secondary volumes are consistent with the local site volumes at this time. The next step is to change all the Global Copy relationships to Metro Mirror relationships and then issue a freeze when the out-of-sync tracks have reached zero. When the out-of-sync tracks are fully drained, there is a consistent relationship from the secondary to the remote site, and the migration to the remote site is complete. The data migration has sent a consistent snapshot of the data to the remote site.

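As a rough DS CLI sketch of this second approach (the device IDs, LSS numbers, and volume IDs are hypothetical placeholders; the -cascade option that the cascaded pairs were originally created with is omitted for brevity; and we assume that an existing Global Copy pair is converted to Metro Mirror by reissuing mkpprc with -type mmir against the same pair):
dscli> freezepprc -dev IBM.2107-75AAAA1 -remotedev IBM.2107-75BBBB1 10:10
dscli> mkpprc -dev IBM.2107-75BBBB1 -remotedev IBM.2107-75CCCC1 -type mmir 1000-100f:1000-100f
dscli> mkpprc -dev IBM.2107-75CCCC1 -remotedev IBM.2107-75DDDD1 -type mmir 1000-100f:1000-100f
dscli> lspprc -dev IBM.2107-75BBBB1 -remotedev IBM.2107-75CCCC1 -l 1000-100f
dscli> freezepprc -dev IBM.2107-75BBBB1 -remotedev IBM.2107-75CCCC1 10:10
dscli> freezepprc -dev IBM.2107-75CCCC1 -remotedev IBM.2107-75DDDD1 10:10
The first freezepprc freezes the local-to-secondary Metro Mirror relationship; the two mkpprc commands convert the cascaded Global Copy relationships to Metro Mirror; lspprc -l is then used to watch the out-of-sync tracks drain to zero; and the last two freezepprc commands are issued once the drain is complete.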

This process is used when production cannot be stopped, since production remains running at the local site. However, once the freeze is first issued on the Metro Mirror relationship, from that point forward the remote site is only consistent with the secondary site. During this process the local site is still being updated and is changing. The remote site holds a snapshot of the data as of the time the freeze was issued.

Figure 36-2 Global Copy changed to Metro Mirror and direction reversed (production now runs at the remote site: Metro Mirror from D at the remote site to C at the tertiary site, Global Copy from C to B at the secondary site, and Global Copy from B to A at the local site over DWDM at less than 50 km)

Once data migration and consistency have been accomplished at the remote site, you can start production at the remote site, if desired. In this case, all applications must first be stopped at the local site and the direction of the mirror reversed, so that Metro Mirror is now running from the remote to the tertiary site and Global Copy is running from the tertiary site to the secondary site and then to the primary site. Figure 36-2 shows the new configuration with the Metro Mirror and Global Copy directions reversed and production running at the remote site. This example shows that, to protect the production at the remote site, the remote copy relationships can be reversed to use the tertiary, secondary, or local volumes as targets. These copy relationships would be created before starting production at the remote site. Since the relationships from the secondary to the tertiary and from the tertiary to the remote are both cascaded relationships, to reverse the directions the Global Copy relationships must first be removed and then recreated in the reverse directions.
Note: No I/O should be running at any time at any of the sites while the reversal of the remote copy relationships is being done. Doing so would corrupt the data and consequently require a full copy.


Chapter 37. Interoperability between DS8000 and ESS 800


In this chapter we show the interoperability between the Copy Services functions in the DS8000 and the ESS 800. This chapter contains the following sections:
- DS8000 and ESS 800 Copy Services interoperability
- Preparing the environment
- RMC: Establishing paths between DS8000 and ESS 800
- Managing Metro Mirror or Global Copy pairs
- Managing ESS 800 Global Mirror
- Managing ESS 800 FlashCopy


37.1 DS8000 and ESS 800 Copy Services interoperability


Copy Services operations are supported between the DS8000 and the IBM Enterprise Storage Server Model 800 (ESS 800) and Model 750. For the rest of this chapter, all references to the ESS 800 also apply to the ESS 750.
Note: On the ESS 800, Remote Mirror and Copy (RMC) is called Peer-to-Peer Remote Copy (PPRC). All references to PPRC are interchangeable with RMC.

37.2 Preparing the environment


Before starting Copy Services operations in a mixed DS8000/ESS 800 environment, you need to ensure this environment is set up correctly.

37.2.1 Minimum microcode levels


To manage the ESS 800 Copy Services from the DS8000, you need to have your IBM Service Representative install licensed internal code Version 2.4.3.65 or later on the ESS 800, and DS8000 code bundle 6.0.500.52 or later on the DS8000.
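If you want to check the code level on the DS8000 side yourself, the DS CLI ver command lists the code versions of the DS CLI and of the system it is connected to; a minimal sketch:
dscli> ver -l
Whether this output is sufficient to identify the exact code bundle, or whether you need to confirm the level with your IBM Service Representative, depends on your environment.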

37.2.2 Hardware and licensing requirements


To establish PPRC pairs between an ESS 800 and a DS8000, the ESS 800 must have a PPRC license and the DS8000 must have a Remote Mirror and Copy (RMC) license. The ESS 800 must also have Fibre Channel adapters that have connectivity to the DS8000. ESCON adapters cannot be used to mirror an ESS 800 to a DS8000. You cannot have a Copy Services relationship between a 2105 E20 or F20 and a DS8000. You could, however, have a cascaded installation where an ESS F20 is mirrored to an ESS 800 that is then mirrored to a DS8000. The ESS F20 to ESS 800 relationship would have to be managed with ESS management tools, such as the ESS CLI or the ESS Web Copy Services GUI.

37.2.3 Network connectivity


Network connectivity requirements depend on how you want to manage the environment.
For FlashCopy management:
- If you want to use the DS8000 DS GUI to manage FlashCopy on the ESS 800, then you need network connectivity between the DS8000 HMCs and the ESS 800 Copy Services servers.
- If you want to be able to use the DS CLI to manage FlashCopy on the ESS 800, then you need network connectivity between the machine on which you are running the DS CLI and at least one of the ESS 800 Copy Services servers.
For creating Remote Mirror and Copy (RMC), also known as Peer-to-Peer Remote Copy (PPRC), paths and pairs:
- If you wish to use the ESS 800 as a PPRC source machine, then you will need network connectivity to the ESS 800 Copy Services servers, either from a machine running the DS CLI, or from the DS8000 HMC.


If, however, the ESS 800 will purely be a remote target for PPRC, and you do not plan to ever use it as a source machine and you do not intend to use the DS GUI to manage the pairs and paths, then you do not need to have network connectivity to the ESS 800 Copy Services servers. This is because all path and pair establishment is done by connecting to the source machine (which would be the DS8000). This setup is not recommended because it is less flexible.

37.2.4 Create matching user IDs and passwords


When you want to use the DS CLI or DS GUI to perform Copy Services operations, you need to authenticate with a valid user ID and password. When you use the DS GUI to perform an operation that requires it to issue a command to an ESS 800, it needs to authenticate with the ESS 800. To do this, it uses the DS user ID and password that you used to log on to the GUI. This means that this user ID and password need to be defined in the ESS Specialist. This task needs to be performed manually. If, instead of the DS GUI, you only use the DS CLI, then you will be logging onto the ESS 800 Copy Services server directly, so this requirement does not apply. For simplified management you may still want to create a matching user ID and password.

Create a user ID on the DS8000


The first step is to log on to the DS8000 HMC using the DS GUI and create a user ID that is in either the admin, op_storage or op_copy_services groups. Log off and then log on with that user ID and change the initial password.

Create a user ID on the ESS 800


Once you have created a DS user ID, you need to create a matching user ID using the ESS 800 Web Specialist:
1. Start a Web browser and connect to the IP address of either ESS 800 cluster.
2. Click ESS Specialist.
3. Log on with an ESS Specialist user ID that has admin privileges.
4. Click the Users tab.
5. Click Modify Users. Enter the user ID name and password you created in the DS CLI or GUI. It must be given the Administration access level.
6. Click Add to move the user ID to the right hand box.
7. Click Perform Configuration Update and wait for the completion message.
Important: The ESS Web Copy Services user IDs are not used by the DS CLI or DS GUI. You only need to create a matching ESS Specialist user ID.

37.2.5 Updating the DS CLI profile


If you plan to use the DS CLI to manage your ESS 800, you can create a profile to simplify connection, commands, and scripted operations. Add extra lines as shown in Example 37-1.

Example 37-1 Possible modification to a DS CLI profile
# ESS 800
# hmc1 is the Copy Services server A
hmc1: 10.0.0.100
# devid is the serial number of the ESS 800 - note there are only 5 digits after the 2105
devid: IBM.2105-22399
# remotedevid is the serial number of the DS8000
remotedevid: IBM.2107-7503461
# Username is a user created on the ESS Specialist that matches the userid on the DS8000
username:admin
# The password for the admin user id. Placing it here is not very secure.
password:passw0rd
# The password file is created using the managepwfile and is a better way to manage this.
# pwfile:security.dat

Putting the password in the profile is not very secure (because it is stored in plain text), but it can be more convenient. A password file can be created using the managepwfile command and is a better way to manage this. After creating the password file (which by default is called security.dat), you can remove the password from the profile and instead specify the pwfile file. The command to create a password file in this example is:
managepwfile -action add -name admin -pw passw0rd

A simple method when you have multiple machines to manage is to create multiple profile files. Then when starting the DS CLI, you can specify the profile file you wish to use with the -cfg parameter. In a Windows environment you could have multiple Windows batch files (BAT files), one for each machine you wish to manage. The profile shown in Example 37-1 on page 698 could be saved in the C:\program files\ibm\dscli\profile directory as 2105source.profile. Then you create a simple Windows BAT file with three lines, as shown in Example 37-2.
Example 37-2 Windows BAT file to start a specific profile
title DS CLI Local 2105 22399 Remote IBM.2107-7503461
cd C:\Program Files\ibm\dscli\profile
dscli -cfg 2105source.profile

We save the BAT file onto the Windows desktop and start it by double-clicking the icon. The DS CLI will open and will start the specified profile. By creating a BAT file and profile for each machine, we can simplify the process of starting the DS CLI. We can also specify the profile when starting the DS CLI from inside a script, or when running the DS CLI in script mode.
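If you also want a profile where the DS8000 is the source device (for example, to establish paths and pairs in the DS8000-to-ESS 800 direction), you could keep a second profile with the device IDs swapped. A sketch based on Example 37-1; the HMC IP address is a placeholder:
# DS8000 as the source
hmc1: 10.0.0.1
devid: IBM.2107-7503461
remotedevid: IBM.2105-22399
username:admin
pwfile:security.dat
A matching BAT file that points to this profile with dscli -cfg then gives you a one-click session for each direction.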

37.2.6 Adding the Copy Services Domain


If you wish to use the DS GUI to manage FlashCopy on an ESS 800, or Remote Mirror and Copy paths and pairs between an ESS 800 and a DS8000, you have to add the ESS Copy Services domain to the DS HMC. You need the IP address of the ESS 800 Copy Services servers (server A and, if desired, server B). You can get these by connecting with a Web browser to either cluster of the ESS and clicking the Copy Services tab. They are displayed on the first window you see. You add an ESS Copy Services domain with the DS Storage Manager to the DS8000 as follows:
1. Click Real-time manager.
2. Click Manage hardware.
IBM System Storage DS8000: Copy Services in Open Environments

3. Click Storage complexes. 4. Select Add 2105 Copy Services Domain. Figure 37-1 shows the Add 2105 Copy Services Domain panel.

Figure 37-1 Add 2105 Copy Services Domain

5. Enter the IP address of the ESS Copy Services server A in the Server 1 IP address box. If you have a Server B, select the Define a second Copy Services server box and enter the Server B IP address in the Server 2 IP address box. However, if Server B is running on a 2105-F20, you should not define it.
Having added the ESS 800 Copy Services domain to the DS GUI, you are now able to use the DS GUI to create paths and Remote Mirror and Copy pairs where the ESS 800 is the source device. You can also use the DS GUI to manage FlashCopy pairs on the ESS 800.
Note: The steps to add the ESS 800 Copy Services Domain to a DS8000 cannot be performed using the DS CLI. However, if you do not plan to use the GUI, this will not be an issue.

Restriction: You cannot use the ESS Copy Services Server Web GUI to manage Copy Services relationships between an ESS 800 and a DS8000. The Volumes tab will only show ESS 800 volumes, not DS8000 volumes. The Paths tab will not show paths between an ESS 800 and a DS8000; it will only show paths between ESS 800s. If you are using PPRC between a DS8000 and an ESS 800, all management of that PPRC relationship must be done with the DS CLI or the DS GUI.

Storage management
You cannot use the DS8000 DS GUI to perform storage configuration on an ESS 800. Likewise, you cannot use the ESS 800 Web Specialist to perform storage configuration on a DS8000. You can only perform Copy Services management tasks on the alternative device. If you are logged onto the DS8000 DS GUI and wish to configure some storage on the ESS 800, you will need to log on to the ESS 800 Web Specialist.

37.2.7 Volume size considerations for RMC (PPRC)


When volumes are created on an ESS 800 they are sized using decimal gigabytes. This means that when a request is made to create a 10 GB volume, the ESS 800 allocates a minimum of 10,000,000,000 bytes. This is a very important consideration when using PPRC between a DS8000 and an ESS 800. In PPRC it is not an issue if the target volume is larger than the source volume. The extra space on the target volume is simply not written to. However, if an attempt is made to reverse the relationship (so that the source becomes the
target), this attempt will fail because now the source is larger than the target. Clearly the best way to avoid this is to ensure the source and target are exactly the same size. When fixed block volumes are created on the DS8000, the user is given three size choices:
ds       The number of bytes allocated will be the requested capacity value times 2^30.
ess      The number of bytes allocated will be the requested capacity value times 10^9.
blocks   The number of bytes allocated will be the requested capacity value times 512 bytes (since each block is 512 bytes).

The correct method is to determine the ESS volume size and then create fixed block volumes that are sized using the -type ess parameter.

Determining ESS volume size


To view the ESS volume sizes you can use the ESS Specialist:
1. Start a Web Browser and connect to the IP address of either ESS 800 cluster.
2. Click ESS Specialist.
3. Log on with an ESS Specialist user ID.
4. Click the Storage Allocation tab.
5. Click the Open Systems Storage tab.
6. Click Modify Volumes Assignments.
7. Take note of the value in the size column for the volumes you are interested in, as shown in Figure 37-2.

Figure 37-2 Viewing ESS 800 volume size using ESS Specialist GUI

You can also use the ESS CLI to view the volume size, as shown in Example 37-3.
Example 37-3 View ESS 800 volume size using ESS CLI

C:\Program Files\ibm\ESScli>esscli -s 10.0.0.1 -u storwatch -p specialist list volume
Wed Nov 02 01:13:10 EST 2005 IBM ESSCLI 2.4.0

Volume  Cap  Units  VolType  LSS  VS   Serial    Label
------  ---  -----  -------  ---  ---  --------  -----
1000    1.1  GB     FB       10   vs0  00022399  ***
1001    1.1  GB     FB       10   vs0  00122399  ***
1002    1.1  GB     FB       10   vs0  00222399  ***
1003    1.1  GB     FB       10   vs0  00322399  ***


Creating matching fixed block volumes on the DS8000


In Figure 37-2 on page 700 and Example 37-3 on page 700 we listed four volumes, which are all shown as being 1.1 GB. In our example we are going to create volumes 1400 to 1403 on the DS8000 in Extent Pool P0 to use as RMC (PPRC) target volumes. We can use the following DS CLI command:
mkfbvol -extpool P0 -cap 1.1 -type ess 1400-1403

Example 37-4 shows the resulting volumes. Each volume shows in the cap (10^9B) column as being 1.1 GB.
Example 37-4 Resulting volumes

dscli> mkfbvol -extpool p0 -cap 1.1 -type ess 1400-1403
Date/Time: 27 October 2005 22:24:15 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00025I mkfbvol: FB volume 1400 successfully created.
CMUC00025I mkfbvol: FB volume 1401 successfully created.
CMUC00025I mkfbvol: FB volume 1402 successfully created.
CMUC00025I mkfbvol: FB volume 1403 successfully created.
dscli> lsfbvol
Date/Time: 2 November 2005 1:19:08 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
Name ID   accstate datastate configstate deviceMTM datatype extpool cap (2^30B) cap (10^9B) cap (blocks)
===========================================================================================================
-    1400 Online   Normal    Normal      2107-900  FB 512   P0      -           1.1         2148480
-    1401 Online   Normal    Normal      2107-900  FB 512   P0      -           1.1         2148480
-    1402 Online   Normal    Normal      2107-900  FB 512   P0      -           1.1         2148480
-    1403 Online   Normal    Normal      2107-900  FB 512   P0      -           1.1         2148480

Checking the block count at older code levels


Prior to DS8000 code bundle 6.0.500.31, fixed block volumes created on the DS8000 could be up to 32 KB larger than the equivalent sized volume on the ESS 800. If you are creating new volumes on a DS8000 that is at or above this code bundle, you do not need to worry about the block count. However, if you created volumes below this code level, and you foresee the possibility that you may want to use these DS8000 volumes as targets in a DS8000 to ESS 800 PPRC pair, then you should check the block count to ensure that the volume sizes truly match. To do this you first need to check the block count of the volume on the ESS 800 using the ESS Copy Services server:
1. Start a Web Browser and connect to the IP address of either ESS 800 cluster.
2. Click Copy Services.
3. Log on with an ESS Specialist user ID that has admin privileges.
4. Click the Volumes tab.
5. In the source pull-down, choose the LSS in which your particular volume is located.
6. When the volumes in that LSS are displayed, single left-click a particular volume to highlight it.
7. Now click the Information tab on the bottom right-hand corner of the window.
8. In the subsequent window, take note of the sectors count. This is actually the block count. In Figure 37-3 on page 702 the example is 2148480 sectors (or blocks).


9. Now take note of the block count from the output of the DS CLI command lsfbvol (for the proposed source volume on the DS8000). In Example 37-4 on page 701, the block count is 2148480, which is an exact match.

Figure 37-3 Displaying block count using Copy Services server

10. If the block counts do not match, then you must either:
- Remove the volume with rmfbvol and create it again (assuming you are now at bundle 6.0.500.31 or later).
- Create a new volume for the purposes of PPRC (again, assuming you are now at bundle 6.0.500.31 or later).
- Use this volume only as a PPRC target, since the equivalent ESS 800 source volume is slightly smaller.

Using a spreadsheet to check expected versus actual block count


You can also use a formula to calculate the correct block size for any DS8000 volume that has to be matched in size to an ESS 800 volume. You can use this in a spreadsheet to determine if your DS8000 has any volumes that are slightly too large. The formula you would use is:
=INT((INT(size*10^9/512)+63)/64)*64

Where the size parameter in the formula is the ESS 800 volume size. You could place the output of lsfbvol into a spreadsheet and calculate the expected block size for each volume using the contents of the 10^9 (decimal) cap column. Then compare the column that has the output of the formula to the block column, to ensure the two match. In Table 37-1 the output of the lsfbvol command has been modified and placed into a spreadsheet. The formula has been placed into the Expected Block Size column. It uses the values in the (10^9B) column as an input to determine the correct block size. One volume, 6002, is 64 blocks larger than it should be. If that volume is to be used for PPRC, it should be deleted and re-created (provided the data on it can be moved).
Table 37-1 Using a spreadsheet to calculate correct block size

ID    Data type  Extpool  Cap (10^9B)  Blocks    Expected block size
1000  FB 512     P2       5            9765632   9765632
1001  FB 512     P2       5            9765632   9765632
1002  FB 512     P2       5            9765632   9765632
1003  FB 512     P0       5            9765632   9765632
4400  FB 512     P0       10           19531264  19531264
5100  FB 512     P3       4.5          8789120   8789120
6000  FB 512     P2       9.4          18359424  18359424
6002  FB 512     P2       5.5          10742272  10742208
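As a quick check of how the formula works, consider volume 6002, which was created as a 5.5 (decimal) GB volume:

INT(5.5 * 10^9 / 512)      = 10742187
INT((10742187 + 63) / 64)  = 167847
167847 * 64                = 10742208 expected blocks

The lsfbvol output reports 10742272 blocks, which is 64 blocks more than expected. This matches the advice above that such a volume can only be used as a PPRC target unless it is deleted and re-created.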

Note: Do not apply this formula to System i volumes or to volumes that only have a size in the 2^30 column. This is because System i volumes are correctly sized using 520 byte blocks, while volumes that are binary sized (their size is listed in the 2^30 column) already have the correct number of blocks.
One option when creating fixed block volumes is to use the parameter -type blocks. The number of blocks can be calculated using the methods shown in Figure 37-3 on page 702 or Table 37-1 on page 702. An example command would be:
mkfbvol -extpool P0 -type blocks -cap 10742208 1404

37.2.8 Volume address considerations on the ESS 800


On the ESS 800, open systems volume IDs are given in an 8-digit format, xxx-sssss, where xxx is the LUN ID and sssss is the serial number of the ESS 800. An example of this is shown in Figure 37-2 on page 700, where the volumes shown are 000-22399 to 003-22399. These volumes are open systems, or fixed block, volumes. When referring to them in the DS CLI, you must add 1000 to the volume ID, so volume 000-22399 is volume ID 1000 and volume 001-22399 is volume ID 1001. This is very important because on the ESS 800, the following address ranges are actually used:
0000 to 0FFF   CKD volumes (4096 possible addresses)
1000 to 1FFF   Open systems fixed block LUNs (4096 possible addresses)

If we intend to use FlashCopy to copy ESS LUN 000-22399 onto 001-22399 using the DS CLI, we must specify 1000 and 1001. If instead we specify 0000 and 0001, we will actually run the FlashCopy against CKD volumes. This may result in an unplanned outage on the System z environment that was using CKD volume 0001.
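To make the addressing rule concrete, from a DS CLI session connected to the ESS 800 the copy of LUN 000-22399 onto 001-22399 would reference the 1xxx form of the IDs, along the following lines. This is only an illustrative sketch; the options you add (such as -nocp) depend on your requirements:

dscli> mkflash -nocp 1000:1001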

37.2.9 Establishment errors on newly created volumes


When volumes are created on an ESS 800, DS6000, or DS8000, an internal process is used to format the volumes. If you attempt to use these volumes as PPRC or FlashCopy targets while this process is occurring, the establishment will fail. Wait for the volume initialization process to finish and try again. The initialization time will vary depending on the size of the volume.

37.3 RMC: Establishing paths between DS8000 and ESS 800


To create a PPRC relationship between a DS8000 and an ESS 800 we first need to create logical paths (over the physical Fibre Channel connections). If we are using the DS GUI, then provided we added the ESS Copy Services Domain to the DS8000, we can establish paths in either direction (ESS 800 to DS8000 or DS8000 to ESS 800). If the ESS Copy Services domain was not added, then paths can only be established where the DS8000 is the source.


37.3.1 Decoding port IDs


When viewing port IDs in the DS GUI or DS CLI, the port IDs can be decoded to show which physical port on the DS8000 or ESS 800 is in use. For DS6000 or DS8000, port IDs look like I0000 or I0123, which actually breaks out as IEECP. The EE is the enclosure number (minus 1), C is the card slot number (minus 1), and P is the port number on the card. So I0123 is enclosure 2 (1+1), slot 3 (2+1), port 3. The ESS 800 port IDs do not follow the same rule. To decode them, take the last two digits in the port ID and then use Figure 37-4. So port ID I0020 is the adapter in host bay 2, slot 1 and port ID I00AC is the adapter in host bay 4, slot 4.

            Slot 1   Slot 2   Slot 3   Slot 4
Host Bay 1  00       04       08       0C
Host Bay 2  20       24       28       2C
Host Bay 3  80       84       88       8C
Host Bay 4  A0       A4       A8       AC

Figure 37-4 ESS 800 Port IDs decoded

37.3.2 Path creation using the DS GUI


In this example we show how to establish two RMC (PPRC) paths with the DS Storage Manager from DS8000 LSS 14 to ESS LSS 10.
Tip: Open systems LSSs on an ESS 800 are always LSS 10 to LSS 1F. If you select ESS 800 LSS 00 to ESS 800 LSS 0F, you are working with System z LSSs on the ESS 800.
1. Click Real-time manager.
2. Click Copy Services.
3. Click Paths.
4. Select the Storage Complex, Storage Unit, and Storage Image from which you want to create PPRC paths. For this path, this machine will be providing the source LSS.


5. Click Create; see Figure 37-5.

Figure 37-5 Path panel

6. You will now be prompted to select the source LSS of the DS8000 from which you want to establish the PPRC paths. You then click Next. In this example we want to establish PPRC paths from LSS 14. See Figure 37-6.

Figure 37-6 Select source LSS

7. Now you have to select the target LSS. First select in the Storage Complex pull-down menu the Storage Complex for the ESS. Then select from the Storage Unit pull-down menu the appropriate Storage Unit and the LSS from the Storage Unit. When you are finished, click Next to continue. In Figure 37-7 the target LSS on the ESS is LSS 10.

Figure 37-7 Select target LSS


8. The next panel shows you which Fibre Channel ports are available to establish Remote Mirror and Copy paths; see Figure 37-8. Select the ports and click Next. Because you want to establish two paths from the DS8000 to the ESS, you must select both I/O ports from the DS8000 that are available for RMC. You will only see physical connections that actually exist. To get connections listed, the HBAs in the DS8000 and ESS 800 have to either be zoned together or directly connected.

Figure 37-8 Select source I/O ports

9. In the next panel (see Figure 37-9) you have to select a target I/O port on the ESS for each source I/O port on the DS8000. When done, click Next.

Figure 37-9 Select target I/O ports

10. On the next two panels you will be asked whether you want to build a Consistency Group, and to verify the information you entered during the process. After you verify the information you entered, click Finish to establish the RMC paths.


11.Figure 37-10 shows the two RMC paths we have established for LSS 14. To manage the established paths (for example, delete path) you can select the path you want to manage and then select from the pull-down menu the appropriate action you want to perform.

Figure 37-10 Path panel

Paths in the reverse direction


The previous example showed how to create a path from the DS8000 to the ESS 800. In many cases it may be likely that you will need paths from the ESS 800 to the DS8000. Regardless, having established paths in one direction, you can then establish paths in the opposite direction. Fibre Channel allows for bidirectional mirroring over the same physical path.

Adding or deleting paths


You can add additional paths to an LSS pair by simply creating more paths. To remove a path, just display the Paths screen, select the relevant source machine and LSS, select the path you wish to delete, and use the delete option from the Select Action pull-down.

37.3.3 Establish logical paths between DS8000 and ESS 800 using DS CLI
You can also use the DS CLI to establish logical paths. One additional step is that you need to know the WWNN of the target storage image. The three commands we will use to establish a path are lssi, lsavailpprcport, and mkpprcpath.

Determining the remote device WWNN


First, we need to determine the WWNN of the remote storage device. If you started the DS CLI by connecting to the DS8000 HMC, then the remote storage device is the ESS 800. If you started the DS CLI by connecting to the ESS 800, then the remote Storage Image is the DS8000.

Determining the DS8000 WWNN using DS GUI


To do this:
1. Click Real-time manager.
2. Click Manage hardware.
3. Click Storage images (not the Storage Unit).
4. Click in the Select column for the DS8000 Storage Image, and from the Select Action pull-down, choose Properties.


5. The DS8000 Storage Image WWNN will be displayed on the subsequent properties screen.

Determining DS8000 WWNN using DS CLI


In Example 37-5 we show how to display the WWNN of a Storage Image using the lssi command.
Example 37-5 Determine the WWNN of a DS8000

dscli> lssi IBM.2107-7503461
Date/Time: 1 November 2005 3:08:52 IBM DSCLI Version: 5.1.0.204
Name ID               Storage Unit     Model WWNN             State  ESSNet
============================================================================
-    IBM.2107-7503461 IBM.2107-7503460 932   5005076303FFC08F Online Enabled

Determining the WWNN of the ESS 800 using the ESS Specialist
To do this:
1. Point your browser at the IP address of either cluster of the ESS 800.
2. On the opening screen, click the ESS Specialist button.
3. You will receive a logon prompt. Log on to the ESS Specialist using an ESS Specialist user ID and password.
4. On the welcome screen, you will see the WWNN, as shown in Figure 37-11. You need to write it down.

Figure 37-11 Using ESS 800 Specialist GUI to display the ESS 800 WWNN

Determining the WWNN of the ESS 800 using the ESS CLI
If you have the ESS CLI installed on a PC then you can use the list server command to display the ESS 800 WWNN.
Note: This is not the DS CLI. ESS CLI is a separate software package that you can get from your IBM Service Representative if you do not already have it.
An example of the command syntax is shown in Example 37-6. This technique has the advantage that you can copy and paste the output.
Example 37-6 Using ESS CLI to display the ESS 800 WWNN

C:\Program Files\ibm\ESScli>esscli -u storwatch -p specialist -s 10.0.0.1 list server
Tue Nov 01 03:28:59 EST 2005 IBM ESSCLI 2.4.0


Server     Model Mfg WWN              CodeEC   Cache NVS  Racks
---------- ----- --- ---------------- -------- ----- ---- -----
2105.22399 800   013 5005076300C09517 2.4.3.79 32GB  2GB  1

Listing the available ports using the DS CLI


Having determined the remote device's WWNN, we can now display the ports that are available to establish PPRC paths. In Example 37-7 we are logged on to the DS8000 using the DS CLI, so the remote device is the ESS 800.
Example 37-7 Displaying available ports for PPRC path establishment

dscli> lsavailpprcport -remotedev IBM.2105-22399 -remotewwnn 5005076300C09517 00:00
Date/Time: 1 November 2005 3:40:52 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
Local Port Attached Port Type
=============================
I0000      I000C         FCP
I0000      I00AC         FCP
I0002      I0088         FCP
I0003      I0084         FCP
I0130      I000C         FCP
I0130      I00AC         FCP
I0241      I00A4         FCP
I0311      I0024         FCP

Note: When issuing commands that refer to an ESS 800, the ESS 800 serial number is only five digits, not seven digits as on the DS6000 or DS8000. So in our examples, the serial number syntax we use is IBM.2105-22399, not IBM.2105-1322399.

Establishing the logical paths using the DS CLI


Having determined which ports are available for each LSS pair that you wish to copy and mirror between, you now establish a one-way path with the mkpprcpath command. You can then display established paths using the lspprcpath command. In Example 37-8, we first establish a single path between LSS 12 on the DS8000 and LSS 15 on the ESS 800. We then display the paths available for LSS 12 on the DS8000.
Example 37-8 Using the DS CLI to establish RMC (PPRC) paths

dscli> mkpprcpath -remotedev IBM.2105-22399 -remotewwnn 5005076300C09517 -srclss 12 -tgtlss 15 I0000:I000C
Date/Time: 1 November 2005 4:07:17 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00149I mkpprcpath: Remote Mirror and Copy path 12:15 successfully established.
dscli> lspprcpath 12
Date/Time: 1 November 2005 4:07:37 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
Src Tgt State   SS   Port  Attached Port Tgt WWNN
=========================================================
12  15  Success FF21 I0000 I000C         5005076300C09517

In Example 37-9 we add an additional path between LSS 12 on the DS8000 and LSS 15 on the ESS 800. Note that we include the existing path when creating a new path. Otherwise, the existing path is removed and only the new path will be available for use.
Example 37-9 Establishing additional paths using the DS CLI

dscli> mkpprcpath -remotedev IBM.2105-22399 -remotewwnn 5005076300C09517 -srclss 12 -tgtlss 15 I0000:I000C I0130:I000C
Date/Time: 1 November 2005 4:07:30 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00149I mkpprcpath: Remote Mirror and Copy path 12:15 successfully established.
dscli> lspprcpath 12
Date/Time: 1 November 2005 4:07:37 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
Src Tgt State   SS   Port  Attached Port Tgt WWNN
=========================================================
12  15  Success FF21 I0000 I000C         5005076300C09517
12  15  Success FF21 I0130 I000C         5005076300C09517

To establish paths where the ESS 800 is the source, we connect to the ESS 800 using the DS CLI and follow the same process, but specify the DS8000 as the remote device.
Important: When you connect using the DS CLI to the DS8000 HMC, the DS8000 is the local device and the ESS 800 is the remote device. If you connect using the DS CLI to the ESS 800, then the ESS 800 is the local device and the DS8000 is the remote device.

Attention: The rmpprcpath command will remove all paths between the source and target LSSs. To just reduce the path count, use the mkpprcpath command specifying only the paths you wish to continue using.
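As an illustration of this behavior, if LSS pair 12:15 currently has the two paths shown in Example 37-9 and you want to keep only the path through I0000:I000C, you would reissue mkpprcpath listing just that port pair. This sketch simply reuses the WWNN and ports from the earlier examples:

dscli> mkpprcpath -remotedev IBM.2105-22399 -remotewwnn 5005076300C09517 -srclss 12 -tgtlss 15 I0000:I000C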

37.4 Managing Metro Mirror or Global Copy pairs


Having established paths, we can now establish volume pairs.

37.4.1 Managing Metro Mirror or Global Copy pairs using the DS GUI
In this example we show how you can establish Metro Mirror volume pairs between a DS8000 and an ESS 800 with the DS Storage Manager. In this example we create two Metro Mirror pairs. Volumes 1401 and 1402 from the DS8000 are the source volumes, and the target volumes are 1000 and 1001 from the ESS. We would follow the same method to set up a Global Copy pair, except that Global Copy would be selected in Figure 37-17 on page 713.
1. Click Real-time manager.
2. Click Copy services.
3. Click Metro Mirror.
4. Select the Storage Complex, Storage Unit, and Storage Image on which the Metro Mirror source volumes are.


5. Click Create; see Figure 37-12.

Figure 37-12 Metro Mirror

The next panels guide you through the process of creating Metro Mirror volume pairs. In the last panel, you have the opportunity to review the information you entered before you establish the Metro Mirror pairs. Also, during the process, you can, at any time, go back to modify specifications that you have already done, or you can cancel the process. 6. You have the option to choose between Automated volume assignment and Manual volume pair assignment; see Figure 37-13. If you click the Automated volume pair assignment, the first selected source volume is paired automatically with the first selected target volume. If you click Manual volume pair assignment, you must select each specific target volume for each selected source volume. In this example we assign the volume pairs manually.

Figure 37-13 Pairing method


7. The source volumes we want to use are on the DS8000 in LSS 14. Select both volumes and click Next; see Figure 37-14. Note that you also have the option from this panel to create paths.

Figure 37-14 Select source volume

8. In the next panel, select the storage complex and storage unit for the LSS with the target volumes. Then select the target volume for the first source volume. When you are finished, click Next, as shown in Figure 37-15. To expand your choices, you must select the small blue boxes.

Figure 37-15 First Metro Mirror target volume


9. Next select the target volume for the second source volume, as shown in Figure 37-16. To expand your choices, you must select the small blue boxes.

Figure 37-16 Second Metro Mirror target volume

10.In the next panel you can specify various copy options, as shown in Figure 37-17. In this example we selected Metro Mirror under Define relationship type and Perform initial copy. If you wish to instead use Global Copy, this is where you select it.

Figure 37-17 Copy options


11.The Verify panel opens. Verify the information that you entered, and if everything is correct, click Finish to establish the Metro Mirror volume pairs. Figure 37-18 shows the Metro Mirror pairs for LSS 06 that we established. To manage the established pairs (for example, suspend pair) you can select the volume pair that you want to manage and then select the appropriate action that you want to perform from the menu.

Figure 37-18 Managing Metro Mirror pairs

37.4.2 Managing Metro Mirror pairs using the DS CLI


Having established paths, we can now establish volume pairs. In this example we show how you can establish Metro Mirror volume pairs between a DS8000 and an ESS 800 with the DS CLI. In this example we create two Metro Mirror pairs. Volumes 1401 and 1402 from the DS8000 are the source volumes, and the target volumes on the ESS are 1000 and 1001. In Example 37-10 we create two pairs using the mkpprc command. We then list them using the lspprc command. We then remove one pair using the rmpprc command.
Example 37-10 Creating Metro Mirror pairs using DS CLI

dscli> mkpprc -remotedev IBM.2105-22399 -type mmir -mode full 1401:1000 1402:1001
Date/Time: 1 November 2005 19:12:26 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1401:1000 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1402:1001 successfully created.
dscli> lspprc 1401-1402
Date/Time: 1 November 2005 19:13:30 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
ID        State       Reason Type         SourceLSS Timeout (secs) Critical Mode First Pass
===========================================================================================
1401:1000 Full Duplex -      Metro Mirror 14        unknown        Disabled      Invalid
1402:1001 Full Duplex -      Metro Mirror 14        unknown        Disabled      Invalid
dscli> rmpprc -remotedev IBM.2105-22399 1401:1000
Date/Time: 1 November 2005 19:14:50 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00160W rmpprc: Are you sure you want to delete the Remote Mirror and Copy volume pair relationship 1401:1000:? [y/n] :y
CMUC00155I rmpprc: Remote Mirror and Copy volume pair 1401:1000 relationship successfully withdrawn.

In Example 37-10 we connected to the DS8000 HMC using the DS CLI, so the ESS 800 is the remote device. If we wish to establish pairs where the ESS 800 is the source device, we need to connect to the ESS 800 using the DS CLI. This will make the DS8000 the remote device.
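As a sketch of that reversed direction, after connecting the DS CLI to the ESS 800 (for example, with the profile shown in Example 37-1), the equivalent Metro Mirror pair could be created with the ESS volume as the source. The volume IDs below simply mirror the ones used above and are for illustration only:

dscli> mkpprc -remotedev IBM.2107-7503461 -type mmir -mode full 1000:1401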


Important: If you specify the wrong -remotedev or you are using a profile where a different remote device from the one you intended to work on is specified, you may get an error message, CMUN03057, saying you are specifying an invalid subsystem ID. This may be because you are specifying the wrong remote device serial number. If you have multiple potential remote devices, do not specify a remotedev in your DS CLI profile.

37.4.3 Managing Global Copy pairs using the DS CLI


Having established paths, we can now establish volume pairs. In this example we show how you can establish Global Copy volume pairs between a DS8000 and an ESS 800 with the DS CLI. In this example we create two Global Copy pairs. Volumes 1401 and 1402 from the DS8000 are the target volumes, and the source volumes on the ESS are 1000 and 1001. In Example 37-11 we create two pairs using the mkpprc command. We then list them using the lspprc command. We then pause one pair using the pausepprc command.
Example 37-11 Using DS CLI to manage Global Copy

dscli> mkpprc -remotedev IBM.2107-7503461 -type gcp 1000:1401 1001:1402
Date/Time: 2 November 2005 23:16:24 IBM DSCLI Version: 5.1.0.204 DS: IBM.2105-22399
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1000:1401 successfully created.
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1001:1402 successfully created.
dscli> lspprc 1000-1001
Date/Time: 2 November 2005 23:16:58 IBM DSCLI Version: 5.1.0.204 DS: IBM.2105-22399
ID        State        Reason Type        SourceLSS Timeout (secs) Critical Mode First Pass
===========================================================================================
1000:1401 Copy Pending -      Global Copy 10        unknown        Disabled      False
1001:1402 Copy Pending -      Global Copy 10        unknown        Disabled      False
dscli> pausepprc 1000:1401
Date/Time: 2 November 2005 23:17:58 IBM DSCLI Version: 5.1.0.204 DS: IBM.2105-22399
CMUC00157I pausepprc: Remote Mirror and Copy volume pair 1000:1401 relationship successfully paused.
dscli> lspprc 1000
Date/Time: 2 November 2005 23:18:04 IBM DSCLI Version: 5.1.0.204 DS: IBM.2105-22399
ID        State     Reason      Type        SourceLSS Timeout (secs) Critical Mode First Pass
=============================================================================================
1000:1401 Suspended Host Source Global Copy 10        unknown        Disabled      True

37.5 Managing ESS 800 Global Mirror


The establishment of Global Mirror between a DS8000 and an ESS 800 is achieved using the same methods as explained in Chapter 25, Global Mirror examples on page 367.


37.5.1 Managing Global Mirror pairs using the DS CLI


In Example 37-12 we establish a Global Copy pair between volume 1000 on the ESS 800 and volume 1401 on the DS8000 and a remote FlashCopy to support it. We then create session 12 for LSS 10. We then create a Global Mirror using session 12 and LSS 10.
Example 37-12 Establishing Global Copy using DS CLI

dscli> lspprc 1000
Date/Time: 2 November 2005 21:47:03 IBM DSCLI Version: 5.1.0.204 DS: IBM.2105-22399
CMUC00234I lspprc: No Remote Mirror and Copy found.
dscli> mkpprc -type gcp -tgtread -mode full 1000:1401
Date/Time: 2 November 2005 21:47:21 IBM DSCLI Version: 5.1.0.204 DS: IBM.2105-22399
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1000:1401 successfully created.
dscli> lspprc 1000
Date/Time: 2 November 2005 21:48:38 IBM DSCLI Version: 5.1.0.204 DS: IBM.2105-22399
ID        State        Reason Type        SourceLSS Timeout (secs) Critical Mode First Pass Status
==================================================================================================
1000:1401 Copy Pending -      Global Copy 10        unknown        Disabled      False      -
dscli> mkremoteflash -dev IBM.2107-7503461 -conduit IBM.2105-22399/10 -record -nocp 1402:1403
Date/Time: 2 November 2005 21:49:00 IBM DSCLI Version: 5.1.0.204 DS: IBM.2105-22399
CMUC00173I mkremoteflash: Remote FlashCopy volume pair 1402:1403 successfully created. Use the lsremoteflash command to determine copy completion.
dscli> mksession -lss 10 -volume 1000 12
Date/Time: 2 November 2005 21:50:23 IBM DSCLI Version: 5.1.0.204 DS: IBM.2105-22399
CMUC00145I mksession: Session 12 opened successfully.
dscli> lssession 10
Date/Time: 2 November 2005 21:50:35 IBM DSCLI Version: 5.1.0.204 DS: IBM.2105-22399
LSS ID Session Status Volume VolumeStatus PrimaryStatus        SecondaryStatus   FirstPassComplete
===========================================================================================================
10     12      Normal 1000   Join Pending Primary Copy Pending Secondary Simplex True
dscli> mkgmir -lss 10 -session 12
Date/Time: 2 November 2005 21:51:04 IBM DSCLI Version: 5.1.0.204 DS: IBM.2105-22399
CMUC00162I mkgmir: Global Mirror for session 12 successfully started.
dscli> showgmir 10
Date/Time: 2 November 2005 21:51:39 IBM DSCLI Version: 5.1.0.204 DS: IBM.2105-22399
ID                         IBM.2105-22399/10
Master Count               1
Master Session ID          0x12
Copy State                 Running
Fatal Reason               Not Fatal
CG Interval (seconds)      0
XDC Interval(milliseconds) 50
CG Drain Time (seconds)    30
Current Time               11/02/2005 21:40:47 EST
CG Time                    11/02/2005 21:40:47 EST
Successful CG Percentage   100
FlashCopy Sequence Number
Master ID                  IBM.2105-22399
Subordinate Count          0
Master/Subordinate Assoc   -

37.6 Managing ESS 800 FlashCopy


If there is a FlashCopy license for the ESS 800, it is possible to manage ESS FlashCopy using the DS CLI or the DS Storage Manager GUI. The establishment of a FlashCopy pair on an ESS 800 using the DS GUI is no different than establishing a DS8000 FlashCopy pair; see 9.5, FlashCopy management using the DS GUI on page 118. The establishment of a FlashCopy pair on an ESS 800 using the DS CLI is also no different; see 9.3, Local FlashCopy using the DS CLI on page 104. In a situation where the ESS 800 is the target device in a PPRC relationship, it is also possible to create remote FlashCopies on the ESS 800 using the mkremoteflash command. In each case, creating, changing, and removing the FlashCopy is done using the same commands.

37.6.1 Creating an ESS 800 FlashCopy using the DS GUI


The steps to create an ESS 800 FlashCopy using the DS GUI are:
1. Click Real-time manager.
2. Click Copy Services.
3. Click FlashCopy.
4. From the Storage Complex menu, choose the ESS 800 Copy Services Domain.
5. From the Storage Units menu, select the ESS that you wish to perform the FlashCopy on.
6. Make selections from the Resource type and Specify LSS pull-downs.
7. From the Select Action menu, select Create, as shown in Figure 37-19.

Figure 37-19 Creating ESS 800 FlashCopy using DS GUI

8. Choose between A single source with a single target and A single source with multiple targets and click Next.


9. Select the source volumes, as shown in Figure 37-20. Note that some of the columns have Not Applicable for 2105 in them. This is normal.

Figure 37-20 Selecting ESS 800 source volumes for FlashCopy

10. Select the target volumes and click Next.
11. Select what options you plan to use and click Next.
12. When you get the verification screen, review your selections, and click Finish.

37.6.2 Creating an ESS 800 FlashCopy using DS CLI


Start the DS CLI and connect to the ESS 800. Then issue DS CLI commands as normal. An example is shown in Example 37-13.
Example 37-13 Using DS CLI to create an ESS 800 FlashCopy

dscli> mkflash -nocp -record -persist 1001:1002
Date/Time: 2 November 2005 20:04:33 IBM DSCLI Version: 5.1.0.204 DS: IBM.2105-22399
CMUC00137I mkflash: FlashCopy pair 1001:1002 successfully created.
dscli> lsflash 1001
Date/Time: 2 November 2005 20:04:37 IBM DSCLI Version: 5.1.0.204 DS: IBM.2105-22399
ID        SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible SourceWrite TargetWrite
===========================================================================================================
1001:1002 10     0           45      Disabled   Enabled   Enabled    Disabled   Enabled     Enabled
dscli> rmflash 1001:1002
Date/Time: 2 November 2005 20:04:54 IBM DSCLI Version: 5.1.0.204 DS: IBM.2105-22399
CMUC00144W rmflash: Are you sure you want to remove the FlashCopy pair 1001:1002:? [y/n]:y
CMUC00140I rmflash: FlashCopy pair 1001:1002 successfully removed.


37.6.3 Creating a remote FlashCopy on an ESS 800 using DS CLI


It is also possible to use DS CLI to create a remote FlashCopy on an ESS 800 where the source PPRC device is a DS8000. In Example 37-14, we connect using the DS CLI to a DS8000. We determine there are established paths from LSS 14 on the DS8000 to LSS 10 on the ESS 800. We create a Metro Mirror pair from volume 1401 on the DS8000 to volume 1001 on the ESS 800. We then create a remote FlashCopy on the ESS 800 between ESS 800 volumes 1001 and 1002. We then remove the remote FlashCopy.
Example 37-14 Creating a remote FlashCopy where the ESS 800 is the remote target

dscli> lspprcpath 14
Date/Time: 2 November 2005 20:23:23 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
Src Tgt State   SS   Port  Attached Port Tgt WWNN
=========================================================
14  10  Success FF16 I0000 I000C         5005076300C09517
14  10  Success FF16 I0130 I000C         5005076300C09517
dscli> mkpprc -remotedev IBM.2105-22399 -type mmir 1401:1001
Date/Time: 2 November 2005 20:14:14 IBM DSCLI Version: 5.1.0.204 DS: IBM.2107-7503461
CMUC00153I mkpprc: Remote Mirror and Copy volume pair relationship 1401:1001 successfully created.
dscli> mkremoteflash -dev IBM.2105-22399 -conduit IBM.2107-7503461/14 -persist -nocp 1001:1002
Date/Time: 2 November 2005 20:19:47 IBM DSCLI Version: 5.1.0.204 DS: IBM.2105-22399
CMUC00173I mkremoteflash: Remote FlashCopy volume pair 1001:1002 successfully created. Use the lsremoteflash command to determine copy completion.
dscli> lsremoteflash -dev IBM.2105-22399 -conduit IBM.2107-7503461/14 1001:1002
Date/Time: 2 November 2005 20:19:59 IBM DSCLI Version: 5.1.0.204 DS: IBM.2105-22399
ID        SrcLSS SequenceNum ActiveCopy Recording Persistent Revertible SourceWrite TargetWrite
===========================================================================================================
1001:1002 10     0           Disabled   Disabled  Enabled    Disabled   Enabled     Enabled
dscli> rmremoteflash -dev IBM.2105-22399 -conduit IBM.2107-7503461/14 1001:1002
Date/Time: 2 November 2005 20:20:10 IBM DSCLI Version: 5.1.0.204 DS: IBM.2105-22399
CMUC00179I rmremoteflash: Are you sure you want to remove the remote FlashCopy pair {0}? [y/n]:y
CMUC00180I rmremoteflash: Removal of the remote FlashCopy volume pair 1001:1002 has been initiated successfully. Use the lsremoteflash command to determine when the relationship is deleted.

You could also reverse the scenario and use the ESS 800 as the source machine in a PPRC pair and then create the remote FlashCopy on the DS8000.
Note: Commands that work with remote FlashCopies use the -dev parameter to define the machine on which the FlashCopy is to be performed. Other commands, such as mkpprc, refer to this device as the remote device, using -remotedev. However, because a FlashCopy must be sent to the remote site and then performed locally there, the use of the -dev parameter to refer to the remote machine is correct.
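As a sketch of that reversed setup, from a DS CLI session connected to the ESS 800 you would name the DS8000 with -dev and use the ESS-side PPRC path as the conduit, much as Example 37-12 does. The volume IDs here are illustrative only, and a PPRC path from the ESS LSS to the DS8000 LSS must already exist:

dscli> mkremoteflash -dev IBM.2107-7503461 -conduit IBM.2105-22399/10 -nocp 1402:1403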



38

Chapter 38.

Solutions
In this chapter we describe solutions for open systems environments that make use of the DS8000 Copy Services functions:
- Tivoli Storage Manager for Advanced Copy Services: a backup solution for databases and SAP
- HACMP/XD for Metro Mirror: a stretched cluster for AIX, based on DS8000 Metro Mirror
- Geographically Dispersed Open Clusters (GDOC): a dispersed cluster solution based on the Veritas Cluster Server


38.1 IBM Tivoli Storage Manager for Advanced Copy Services


This section discusses IBM Tivoli Storage Manager for Advanced Copy Services (TSM for Advanced Copy Services, TSM4ACS), which exploits the FlashCopy function to provide near zero-impact backup and near-instant restore for database and SAP environments. An IBM service offering based on TSM for Advanced Copy Services allows cloning of SAP environments.
A good starting point to learn more about this solution is:
http://www.ibm.com/software/tivoli/products/storage-mgr-advanced-copy-services/
For detailed technical information, visit the IBM Tivoli Storage Manager Information Center at:
http://publib.boulder.ibm.com/infocenter/tivihelp/v1r1/index.jsp

38.1.1 TSM for Advanced Copy Services Overview


IBM Tivoli Storage Manager for Advanced Copy Services helps protect your business-critical databases that require 24x7 availability. It is available for stand-alone DB2 UDB and Oracle database environments and for SAP landscapes using these database products. TSM for Advanced Copy Services provides a near zero-impact data backup and near-instant recovery solution and helps eliminate backup-related performance impact on the production database or ERP servers. It integrates the following products, functions, and features:
- FlashCopy capabilities of IBM disk storage subsystems (DS8000, DS6000, ESS 800) and storage virtualization software (SAN Volume Controller)
- Data protection components of IBM DB2 UDB and Oracle database software as well as SAP software, such as Oracle Recovery Manager (RMAN) and the SAP Backup/Restore Tools (BR*Tools)
- Tivoli Storage Manager backup/restore functions along with its SAP-specific and database-specific enhancements: TSM for Databases and TSM for ERP (Enterprise Resource Planning) Systems
Figure 38-1 shows a TSM backup and recovery environment for SAP using Tivoli Storage Manager, TSM for Advanced Copy Services, and TSM for ERP Systems.

Figure 38-1 Overview: TSM Backup & Recovery of SAP Environments


38.1.2 TSM for Advanced Copy Services Backup


In a DS8000 disk storage subsystem environment, TSM for Advanced Copy Services takes advantage of the DS8000 FlashCopy feature. In normal operation, production data is written to the source volumes, while dedicated target volumes are reserved for backup purposes within the storage system. Backups are taken from the copy. A production server and a backup server are connected through the SAN to one storage system. The FlashCopy source volumes are attached to the production server, and the target volumes are assigned to the backup server.
While the application continues to run (writing data to the source volumes), you can use TSM for Advanced Copy Services to activate the FlashCopy feature of the storage system. In essence, the FlashCopy command tells the storage system to create a time-zero copy of the source volumes to the target volumes. Even if the Background Copy option is used to replicate the data, the FlashCopy targets are immediately available, for example, for backup purposes. After TSM for Advanced Copy Services has imported the volume groups and mounted the file systems of the target volumes, the TSM for ERP or TSM for Databases product is called to back up the application's data to the TSM Server, reading the data that has just been FlashCopied. The complete process can run while the application is online.
TSM for Advanced Copy Services eliminates the backup-related I/O workload from the production server. It operates as a turnkey solution, which removes the need for manual scripting to drive the FlashCopy process on two servers. Note that for SAP landscapes, SAP's brbackup utility (which is part of the toolbox called BR*Tools) controls the complete backup process. This enables a smooth integration of the backup solution into the application environment.

38.1.3 TSM for Advanced Copy Services Restore


TSM for Advanced Copy Services provides a quick restore option to restore the database back to the original source volumes on the production server without having to restore the data from the TSM Server. To provide this capability, the previous backup has to be performed with the Background Copy option of FlashCopy, which ensures that the data was physically copied onto the target volumes and can be used for the restore. With the Quick Restore option, sometimes referred to as flashback, the last backup can be quickly restored by utilizing the FlashCopy function in reverse. In this case the data is copied from the target volumes back onto the source volumes. The product also allows you to retain multiple versions of the last backups.

38.1.4 Cloning of an SAP environment


The product can be enhanced with a service offering that allows cloning of SAP databases. With this feature you can create a copy, or clone, of your running SAP production database, using the storage system's FlashCopy capability to generate a physical copy of the production data. The clone can be imported and mounted on an auxiliary server, from where you can work with it. The clone can be used for development, test, integration, quality assurance, or other purposes. The offering includes functions to customize the new SAP environment, for example, changing IP addresses and SAP instance names.


The solution has been validated by SAP in an integration assessment. Note that the current version of TSM for Advanced Copy Services does not support Space Efficient FlashCopy.


38.2 HACMP/XD for Metro Mirror


HACMP is an IBM cluster solution for the AIX operating system. HACMP monitors entire systems, from the network through the hardware, operating system, and application software, and quickly restarts applications on designated backup hardware in the event of a failure. HACMP's Extended Distance option (HACMP/XD) extends the protection of HACMP for AIX to geographically remote sites to help ensure business continuity even if an entire site is disabled by catastrophe.
HACMP/XD for Metro Mirror exploits the DS8000 Metro Mirror function to replicate data between two sites that can be located up to 300 km from each other. Thus, it combines storage-based data replication and server-based cluster functionality to protect applications against potential disasters. The DS8000 does not require any changes to be used in this fashion. Figure 38-2 shows a four-node server cluster that is attached to two DS8000 disk storage subsystems.

Figure 38-2 HACMP/XD for DS8000 Metro Mirror: 4-node server cluster attached to IBM DS8000

HACMP/XD support is also offered for the IBM DS8000 and DS6000 series, SAN Volume Controller and ESS 800. A good starting point to find HACMP documentation is:
http://www.ibm.com/systems/p/advantages/ha/index.html


38.3 Geographically Dispersed Open Clusters (GDOC)


IBM Implementation Services for Geographically Dispersed Open Clusters (GDOC) addresses the automation, testing, and management requirements of a disaster-recovery solution using proven methods and automation software from Symantec. GDOC provides the mechanisms to automate, test, and manage disaster recovery, while allowing some flexibility in the product selection for storage and replication technology. It is also designed to assist you with:
- Implementing and managing data replication across sites
- Automating data recovery (wide-area failover and switchover) at a secondary site
- Managing, monitoring, and testing your disaster recovery solution
GDOC supports the following platforms:
- AIX
- HP-UX
- Red Hat Linux and Novell SUSE Linux
- Microsoft Windows
- Sun Solaris
The GDOC solution is designed to provide similar functionality for open systems to what GDPS provides for the IBM System z mainframe. The solution typically begins with a cluster of servers that provide high availability within a data center and is extended by adding additional copies of data and additional servers at a geographically remote data center. While clustering within a data center provides high availability, clustering across data centers, with an additional remote copy of data, provides important disaster recovery capability. The end result is an improved ability to recover from a disaster quickly with minimal data loss. This type of solution provides a recovery time, for critical business applications, that is typically much shorter and easier to achieve than recovering from tape backup or replicating data with manually initiated processes.
The GDOC solution is designed to coordinate data consistency and recovery for multiple vendor operating system platforms (AIX, Solaris, HP-UX, Linux, and Microsoft Windows) between data centers. Data between the data centers is kept in sync using a data replication solution. The GDOC solution allows for different replication technologies, including but not limited to IBM Metro Mirror, IBM DB2 HADR, Network Appliance SnapMirror, Veritas Volume Replicator, and others. The idea is very similar to HACMP/XD shown in Figure 38-2 on page 725.
GDOC is a services framework and methodology that includes the integration of Veritas Cluster Server and associated software modules from Symantec Corporation. The GDOC implementation service is designed to deliver an end-to-end solution that may provide:
- Server and data disaster recovery planning
- Business impact analysis
- Risk analysis
- Bandwidth analysis
- Enterprise architecture design
- Network design
This solution includes the following base set of services:
GDOC consulting and planning
An assessment of the high availability/rapid recovery solution requirements. During the planning stage, the desired state is defined, as well as the steps needed to achieve the desired state. This stage is often accomplished through a GDOC Technology Consulting Workshop (TCW) engagement.


GDOC conceptual, logical, and physical design
Designs the solution infrastructure to meet the business requirements and objectives. It also provides the technical definition for the physical infrastructure.
GDOC non-production solution build and test
Takes the conceptual, logical, and physical design, including prototype and proof of concept if applicable, and implements it in the customer's environment in a test capacity.
GDOC solution roll-out and deployment
Takes the prototype and test implementations and builds an enterprise-wide roll-out strategy that enables you, with IBM, to begin a production roll-out in your environment. IBM will educate you so that you can continue this roll-out process independently.
You can find additional technical information on GDOC by following these links:
External:
http://www.ibm.com/services/us/index.wss/offerfamily/gts/a1027708
IBM internal:
http://w3.ibm.com/services/salesone/ShowDoc.wss?docid=R626564C69002C83
You can also refer to the Redbooks publication, IBM System Storage Business Continuity Solutions Overview, SG24-6684. For more information, please contact your local IBM Global Services sales representative.



Appendix A.

Open systems specifics


In this appendix, we describe the basic tasks that you should perform on the individual host systems when using the DS Copy Services. We explain how to bring FlashCopy target volumes online to the same host as well as to a second host. This appendix covers various UNIX and Windows platforms. In addition, we give some recommendations to provide data consistency on the application side. This includes file system and database handling.


Database and file system specifics


In this section we show how data consistency can be achieved when creating point-in-time copies of active applications. If a FlashCopy of a database is created, particular attention must be paid to the consistency of the copy.
On the DS8000, two features can be used to provide consistent copies: Logical Subsystems (LSS) and Consistency Groups. With Consistency Groups, consistent point-in-time copies across multiple LUNs or logical volumes can be achieved in a backup copy. In addition, I/O activity to a LUN or volume can be frozen. An LSS is a logical construct used to group logical volumes. This means that DS8000 logical volumes can belong to the same LSS but still reside in multiple arrays or ranks. A logical volume's LSS is defined when the volume is created. Freezing of I/O to preserve data consistency across multiple copy pairs is done at the LSS level. If several applications share volumes in one LSS, the I/O freeze will affect all of these applications, because Consistency Group create commands are directed to each LSS that is involved in a Consistency Group.
In addition to these storage subsystem features, it is necessary to make arrangements on the application side to guarantee a smooth start of the application with the copied data. Application-specific inconsistencies cannot be completely precluded, because applications do not write directly to disk; memory structures such as database and file system buffers, as well as I/O queues, are used. This appendix provides some examples of database and file system handling. However, it should not be regarded as complete.

File system consistency


An easy method to provide file system consistency is to unmount the file system. However, this implies application downtime and is not practical in a production environment. Therefore we have to find methods to create a point-in-time copy of an open file system. If application files are allocated in file systems, create the file system log on one LUN only (that is, do not use striping for the file system log) or use FlashCopy Consistency Groups. If a point-in-time copy of an open file system is created, a file system check will be necessary on the copied LUNs before opening the file system copy. Especially for non-journaled file systems, such checks can be time-consuming. Some operating systems provide mechanisms to temporarily prevent file system I/O. This operation is commonly called freeze I/O. The restart of I/O in this context is called thaw I/O.


Table A-1 shows some commands of UNIX operating systems to freeze and thaw file system I/O operations.
Table A-1 Operating system commands to freeze and thaw file system I/O

Operating system  Freeze operation                      Thaw operation
AIX               chfs -a freeze=<timeout> <fs>         chfs -a freeze=off <fs>
Solaris           lockfs -w <fs>                        lockfs -u <fs>
HP-UX             A command interface is not            The VX_THAW ioctl
                  available; the VX_FREEZE ioctl        system call is used.
                  system call is used.

In addition, we recommend that you perform a file system sync before creating the point-in-time copy; this writes the contents of the file system buffers to disk.
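As a minimal sketch of this sequence on AIX (JFS2), assuming a file system /data whose source LUN 1401 is to be flashed to 1402, and a 60-second freeze timeout chosen purely for illustration:

sync
chfs -a freeze=60 /data
(create the FlashCopy, for example with the DS CLI mkflash command)
chfs -a freeze=off /data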

Database consistency
The easiest, but most unpopular, way to provide consistency is to stop the database before creating the FlashCopy pairs. If a database cannot be stopped for the FlashCopy, some pre- and post-processing actions have to be performed to create a consistent copy:
- Use database functions such as Oracle online backup mode or DB2 suspend I/O before the FlashCopy creation. After the FlashCopy process has finished, end backup mode or resume I/O (see the sketch after this list). Note that an I/O suspend is not required for Oracle if Oracle hot backup mode is enabled; Oracle handles the resulting inconsistencies during database recovery.
- Optionally, perform a file system freeze operation before and a thaw operation after the FlashCopy. If the file system freeze is omitted, file system checks will be required before mounting the file systems on the FlashCopy target volumes.
- Perform a file system sync before the FlashCopy creation.
- Use FlashCopy Consistency Groups if the file system log is allocated on multiple disks.
- First create FlashCopies of the data files, then switch the database log file, and finally create FlashCopies of the database logs.
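The following is only a sketch of the database-side commands; the exact syntax depends on your database version, and the tablespace name USERS is an example. For DB2 you must already be connected to the database being suspended.

DB2 UDB:
  db2 set write suspend for database
  (create the FlashCopy)
  db2 set write resume for database

Oracle (per-tablespace hot backup mode):
  ALTER TABLESPACE users BEGIN BACKUP;
  (create the FlashCopy)
  ALTER TABLESPACE users END BACKUP;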


AIX specifics
In this section we describe the steps needed to use volumes created by the DS Copy Services on AIX hosts.

AIX and FlashCopy


The FlashCopy functionality is to copy the entire contents of a source volume to a target volume. If the source volume is defined to the AIX Logical Volume Manager (LVM), all of its data structures and identifiers are copied to the target volume as well. This includes the Volume Group Descriptor Area (VGDA), which contains the Physical Volume Identifier (PVID) and Volume Group Identifier (VGID). For AIX LVM, it is currently not possible to activate a Volume Group with a physical volume (hdisk or vpath) that contains a VGID and a PVID that is already used in a Volume Group existing on the same server. The restriction still applies even if the hdisk PVID is cleared and reassigned with the two commands listed in Example A-1.
Example: A-1 Clearing PVIDs

#chdev -l <hdisk#> -a pv=clear
#chdev -l <hdisk#> -a pv=yes

Therefore, it is necessary to redefine the Volume Group information about the FlashCopy target volumes using special procedures or the recreatevg command. This will alter the PVIDs and VGIDs in all the VGDAs of the FlashCopy target volumes, so that there are no conflicts with existing PVIDs and VGIDs on existing Volume Groups that reside on the source volumes. If you do not redefine the Volume Group information prior to importing the Volume Group, then the importvg command will fail.

Accessing FlashCopy target volume from another AIX host


The following procedure makes the data of the FlashCopy target volume available to another AIX host that has no prior definitions of the target volume in its configuration database (ODM). The host that is receiving the FlashCopy target volumes can manage the access to these devices in two different ways:
- LVM or MPIO definitions
- SDD definitions

If the host is using LVM or MPIO definitions that work with hdisks only, follow these steps:
1. The target volume (hdisk) is new to AIX, and therefore the Configuration Manager should be run on the specific Fibre Channel adapter:
   #cfgmgr -l <fcs#>
2. Find out which of the physical volumes is your FlashCopy target volume:
   #lsdev -C | grep 2107
3. Certify that the PVIDs of all hdisks that will belong to the new Volume Group were set. Check this information using the lspv command. If they were not set, run the following command for each one to avoid the importvg command failing:
   #chdev -l <hdisk#> -a pv=yes
4. Import the target Volume Group:
   #importvg -y <volume_group_name> <hdisk#>


5. Vary on the Volume Group (the importvg command should vary on the Volume Group):
   #varyonvg <volume_group_name>
6. Verify the consistency of all file systems on the FlashCopy target volumes:
   #fsck -y <filesystem_name>
7. Mount all the target file systems:
   #mount <filesystem_name>

If the host is using SDD that works with vpath devices, follow these steps:
1. The target volume (hdisk) is new to AIX, and therefore the Configuration Manager should be run on all Fibre Channel adapters:
   #cfgmgr -l <fcs#>
2. Configure the SDD devices:
   #smitty datapath_cfgall
3. Find out which of the physical volumes is your FlashCopy target volume:
   #lsdev -C | grep 2107
4. Certify that the PVIDs of all hdisks that will belong to the new Volume Group were set. Check this information using the lspv command. If they were not set, run the following command on each one to avoid the importvg command failing:
   #chdev -l <hdisk#> -a pv=yes
5. Import the target Volume Group:
   #importvg -y <volume_group_name> <hdisk#>
6. Change the ODM definitions to work with the SDD devices that will provide you with the load balance and failover functions:
   #dpovgfix <volume_group_name>
7. Vary on the Volume Group (the importvg command should vary on the Volume Group):
   #varyonvg <volume_group_name>
8. Verify the consistency of all file systems on the FlashCopy target volumes:
   #fsck -y <filesystem_name>
9. Mount all the target file systems:
   #mount <filesystem_name>

The data is now available. You can, for example, back up the data residing on the FlashCopy target volume to a tape device. This procedure can be run once the relationship between the FlashCopy source and target volume is established, even if data is still being copied in the background.

The disks containing the target volumes may have been previously defined to an AIX system, for example, if you periodically create backups using the same set of volumes. In this case, there are two possible scenarios:
- If no Volume Group, file system, or logical volume structure changes were made, then use procedure 1 (Procedure 1 on page 734) to access the FlashCopy target volumes from the target system.
- If some modifications to the structure of the Volume Group were made, such as changing the file system size or the modification of logical volumes (LV), then it is recommended to use procedure 2 (Procedure 2 on page 734) and not procedure 1.


Procedure 1
For this procedure, follow these steps:
1. Unmount all the source file systems:
   #umount <source_filesystem>
2. Unmount all the target file systems:
   #umount <target_filesystem>
3. Deactivate the target Volume Group:
   #varyoffvg <target_volume_group_name>
4. Establish the FlashCopy relationships.
5. Mount all the source file systems:
   #mount <source_filesystem>
6. Activate the target Volume Group:
   #varyonvg <target_volume_group_name>
7. Perform a file system consistency check on the target file systems:
   #fsck -y <target_file_system_name>
8. Mount all the target file systems:
   #mount <target_filesystem>

Procedure 2
For this procedure, follow these steps (a combined script sketch follows these steps):
1. Unmount all the target file systems:
   #umount <target_filesystem>
2. Deactivate the target Volume Group:
   #varyoffvg <target_volume_group_name>
3. Export the target Volume Group:
   #exportvg <target_volume_group_name>
4. Establish the FlashCopy relationships.
5. Import the target Volume Group:
   #importvg -y <target_volume_group_name> <hdisk#>
6. Perform a file system consistency check on the target file systems:
   #fsck -y <target_file_system_name>
7. Mount all the target file systems:
   #mount <target_filesystem>
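As an illustration only, the steps of Procedure 2 could be combined with the DS CLI into one refresh script that is run on the target host each time a new backup copy is taken. The volume group, file system, hdisk, DS CLI profile, storage image ID, and volume pair names are hypothetical placeholders.

#release the old copy on the target host
umount /backup/data
varyoffvg tgtvg
exportvg tgtvg
#create (or refresh) the FlashCopy relationship on the DS8000
dscli -cfg /opt/dscli/profile/ds8000.profile "mkflash -dev IBM.2107-7512345 -wait 1000:1100"
#pick up the new copy
importvg -y tgtvg hdisk4
fsck -y /backup/data
mount /backup/data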

Accessing the FlashCopy target volume from the same AIX host
In this section we describe a method of accessing the FlashCopy target volume on a single AIX host while the source volume is still active on the same server. The procedure is intended to be used as a guide and may not cover all scenarios. If you are using the same host to work with source and target volumes, you have to use the recreatevg command.


The recreatevg command overcomes the problem of duplicated LVM data structures and identifiers caused by a disk duplication process such as FlashCopy. It is used to recreate an AIX Volume Group (VG) on a set of target volumes that are copied from a set of source volumes belonging to a specific VG. The command will allocate new physical volume identifiers (PVIDs) for the member disks and a new Volume Group identifier (VGID) to the Volume Group. The command also provides options to rename the logical volumes with a prefix you specify, and options to rename labels to specify different mount points for file systems.

Accessing FlashCopy target volume using the recreatevg command


In this example, we have a Volume Group containing two physical volumes (hdisks) and want to FlashCopy the volumes for the purpose of creating a backup. The source volume group is src_flash_vg, containing vpath2 and vpath3. The target volume group will be tgt_flash_vg, containing vpath4 and vpath5. Perform these tasks to run the FlashCopy and make the target volumes available to AIX:
1. Stop all I/O activities and applications that access the FlashCopy source volumes.
2. Establish the FlashCopy pairs with the No Background Copy option selected. Use the DS Storage Manager to establish the pairs or run a script with the DS CLI command.
3. Restart the applications that access the FlashCopy source volumes.
4. The target volumes, vpath4 and vpath5, now have the same volume group data structures as the source volumes vpath2 and vpath3. Clear the PVIDs from the target hdisks to allow a new Volume Group to be made:
   #chdev -l vpath4 -a pv=clear
   #chdev -l vpath5 -a pv=clear
   The output of the lspv command in Figure A-1 shows the result.

Figure A-1 lspv output before recreating the volume group

5. Create the target volume group, prefix all file system path names with /backup, and prefix all AIX logical volumes with bkup:
   recreatevg -y tgt_flash_vg -L /backup -Y bkup vpath4 vpath5
   You must specify the names of all disk volumes (hdisks or vpaths) participating in the volume group. The output from lspv shown in Figure A-2 illustrates the new volume group definition.

Figure A-2 lspv output after recreating the volume group


An extract from /etc/filesystems in Figure A-3 on page 736 shows how recreatevg generates a new file system stanza. The file system named /prodfs in the source Volume Group is renamed to /bkp/prodfs in the target volume group. Also, the directory /bkp/prodfs is created. Notice also that the logical volume and JFS log logical volume have been renamed. The remainder of the stanza is the same as the stanza for /prodfs.

Figure A-3 Target file system stanza

6. Perform a file system consistency check for all target file systems:
   #fsck -y <target_file_system_name>
7. Mount the new file systems belonging to the target volume group to make them accessible.

A combined script sketch of this procedure follows.
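The following is a minimal sketch only; the application quiesce step, volume group names, mount points, DS CLI profile, storage image ID, and volume pairs are hypothetical placeholders.

#quiesce the application that writes to src_flash_vg (placeholder)
#establish the FlashCopy pairs with no background copy
dscli -cfg /opt/dscli/profile/ds8000.profile "mkflash -dev IBM.2107-7512345 -nocp -wait 1200:1300 1201:1301"
#restart the application (placeholder)
#clear the PVIDs on the target devices and rebuild the volume group
chdev -l vpath4 -a pv=clear
chdev -l vpath5 -a pv=clear
recreatevg -y tgt_flash_vg -L /backup -Y bkup vpath4 vpath5
#check and mount the renamed file systems
fsck -y /backup/prodfs
mount /backup/prodfs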

AIX and Remote Mirror and Copy


When you have the primary and secondary volumes in a Remote Mirror and Copy relationship, it is not possible to read the secondary unless the Permit read access from target and Reset Reserve options were selected when establishing the relationship. To be able to read the secondary volumes, they must also be in the full duplex state (in addition to the Permit read access from target and Reset Reserve options). Therefore, if you are configuring the secondary volumes on the target server, it is necessary to terminate the copy pair relationship.

When the volumes are in the simplex state, the secondary volumes can be configured (cfgmgr) into the target system's customized device class (CuDv) of the ODM. This brings in the secondary volumes as hdisks that contain the same physical volume IDs (PVIDs) as the primary volumes. Because these volumes are new to the system, there is no conflict with existing PVIDs. The Volume Group on the secondary volumes containing the logical volume (LV) and file system information can now be imported into the Object Data Manager (ODM) and the /etc/filesystems file using the importvg command.

If the secondary volumes were previously defined on the target AIX system as hdisks or vpaths, but the original Volume Group was removed from the primary volumes, the old volume group and disk definitions must be removed (exportvg and rmdev) from the target volumes and redefined (cfgmgr) before running importvg again to get the new volume group definitions. If this is not done first, importvg will import the volume group improperly: the volume group data structures (PVIDs and VGID) in the ODM will differ from the data structures in the VGDAs and disk volume super blocks, and the file systems will not be accessible.

If the secondary volumes that are already configured on the target AIX server are in a Remote Mirror and Copy relationship and you do not have the Permit read access from target and Reset Reserve options enabled (and the volumes are not in the full duplex state), then after rebooting the target server the hdisks will be configured to AIX again. In other words, you will see each Remote Mirror and Copy secondary volume twice on the target server. The reason for this situation is as follows: AIX knows that these physical volumes already exist with entries in the Configuration Database (ODM). However, when the configuration manager runs during reboot, it cannot read their PVIDs because, as Remote Mirror and Copy targets, they are locked by the DS Copy Services server.

This results in AIX causing the original hdisks to be configured to a Defined state, and new (phantom) hdisks being configured and placed in an Available state. This is an undesirable condition that must be remedied before the secondary volumes can be accessed. To access the secondary volumes, the phantom hdisks must be removed and the real or original hdisks must be changed from a Defined state to an Available state. For example, vpath2 through vpath5 are assigned to a volume group, tgtvg. Each of the disk volumes is currently participating as a secondary volume in a Remote Mirror and Copy relationship. If the server is rebooted, four new vpaths are configured to AIX. These phantom disks, vpath6 through vpath9, appear in the output from lspv, as shown in Figure A-4.

Figure A-4 Remote Mirror and Copy phantom disks
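A minimal sketch of how the phantom devices could be cleaned up after such a reboot is shown here. The vpath names match the example in Figure A-4 and are otherwise hypothetical; verify the device mapping on your own system (lspv, lsvpcfg) before removing anything.

#remove the phantom vpath definitions that were created at reboot
rmdev -dl vpath6
rmdev -dl vpath7
rmdev -dl vpath8
rmdev -dl vpath9
#move the original definitions from the Defined state back to Available
mkdev -l vpath2
mkdev -l vpath3
mkdev -l vpath4
mkdev -l vpath5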

Making updates to the LVM information


When performing Remote Mirror and Copy between primary and secondary volumes, the primary AIX host may create, modify, or delete LVM information in a Volume Group. However, because the secondary volume is not accessible while in a Remote Mirror and Copy relationship, the LVM information on the secondary AIX host would become out of date. Therefore, scheduled periods should be allotted where write I/O to the primary Remote Mirror and Copy volumes can be quiesced and the file systems unmounted. At this point, the copy pair relationship can be terminated and the secondary AIX host can perform a learn on the volume group (importvg -L).

When the updates have been imported into the secondary AIX host's ODM, you can establish the Remote Mirror and Copy pair again. However, select Do not copy volume from the Select copy options when establishing the Remote Mirror and Copy pair. As soon as the Remote Mirror and Copy pair has been established, immediately suspend the Remote Mirror and Copy relationship. Because there was no write I/O to the primary volumes, both the primary and secondary are consistent. Now that the primary volume has been suspended, the file systems can be remounted and write I/O resumed. After write I/O has been running for a while, you can reestablish the relationship between the primary and secondary by choosing Copy out-of-sync cylinders only.

If the Permit read access from target and Reset Reserve options were selected when the Remote Mirror and Copy pair was established, it is advisable to suspend the primary volume (while in the full duplex state) and then perform the import learn function. When completed, all that is necessary is to reestablish the copy pair by only copying the out-of-sync cylinders.

The following example shows two systems, host1 and host2, where host1 has the primary volume vpath5 and host2 has the secondary volume vpath16. Both systems have had their ODMs populated with the volume group itsovg from their respective Remote Mirror and Copy volumes and, prior to any modifications, both systems' ODMs have the same time stamp, as shown in Figure A-5.


Figure A-5 Original time stamp

Volumes vpath5 and vpath16 are in the Remote Mirror and Copy duplex state, and the volume group itsovg on host1 is updated with a new logical volume. The time stamp on the VGDA of the volumes gets updated and so does the ODM on host1, but not on host2. See Figure A-6.

Figure A-6 Update source time stamp

To update the ODM on the secondary server, it is advisable to suspend the Remote Mirror and Copy pair prior to performing the importvg -L command to avoid any conflicts from LVM actions occurring on the primary server. Figure A-7 shows the updated ODM entry on host2.

Figure A-7 Update secondary server ODM

When the importvg -L command has completed, you can reestablish the Remote Mirror and Copy pairs and copy only the out-of-sync cylinders.
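As an illustration, the suspend, learn, and resume sequence for this example could look like the following minimal sketch. The DS CLI profile, storage image IDs, and volume pair are hypothetical placeholders; the importvg -L command is run on host2.

#suspend the Remote Mirror and Copy pair
dscli -cfg /opt/dscli/profile/ds8000.profile "pausepprc -dev IBM.2107-7512345 -remotedev IBM.2107-7567890 1500:1500"
#on host2: learn the updated volume group definitions
importvg -L itsovg vpath16
#resume the pair; only out-of-sync tracks are copied
dscli -cfg /opt/dscli/profile/ds8000.profile "resumepprc -dev IBM.2107-7512345 -remotedev IBM.2107-7567890 -type mmir 1500:1500"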

Windows and Remote Mirror and Copy


Windows 2000 and 2003 handle their disks differently than Windows NT does. They incorporate a stripped-down version of the VERITAS Volume Manager, called the Logical Disk Manager (LDM). With the LDM, you are able to create logical partitions, perform disk mounts, and create dynamic volumes. There are five types of dynamic volumes: Simple, spanned, mirrored, striped, and RAID 5.


On Windows NT, the information relating to the disks was stored in the Windows NT registry. With Windows 2000 and 2003, this information is stored on the disk drive itself in a partition called the LDM database, which is kept on the last few tracks of the disk. Each volume has its own 128-bit Globally Unique Identifier (GUID) and belongs to a disk group. This is similar to the concept of Physical Volume Identifier (PVID) and Volume Group in AIX. As the LDM is stored on the physical drive itself, with Windows 2000 it is possible to move disk drives between different computers.

Copy Services limitations with Windows 2000 and Windows 2003


Having the drive information stored on the disk itself imposes some limitations when using Copy Services functionality on a Windows system:
- The source and target volumes must be of the same physical size. Normally the target volume can be bigger than the source volume; with Windows, this is not the case, for two reasons:
  - The LDM database holds information relating to the size of the volume. As this is copied from the source to the target, if the target volume is a different size from the source, then the database information will be incorrect, and the host system will return an exception.
  - The LDM database is stored at the end of the volume. The copy process is a track-by-track copy; unless the target is an identical size to the source, the database will not be at the end of the target volume.
- It is not possible to have the source and target FlashCopy volumes on the same Windows system when they were created as Windows dynamic volumes. The reason is that each dynamic volume has to have its own 128-bit GUID. As its name implies, the GUID must be unique on one system. When you perform FlashCopy, the GUID gets copied as well, so if you tried to mount the source and target volume on the same host system, you would have two volumes with exactly the same GUID. This is not allowed, and you will not be able to mount the target volume.

Copy services with Windows volumes


In order to see target volumes on a second Windows host, you have to do these actions:
1. Perform the Remote Mirror and Copy/FlashCopy function onto the target volume. When using Remote Mirror and Copy, ensure that the primary and secondary volumes were in duplex mode and that write I/O was ceased prior to terminating the copy pair relationship.
2. Reboot the host machine on which you wish to mount the Copy Services target volume.
3. Open Computer Management, and then click Disk Management.
4. Find the disk that is associated with your volume. There are two panes for each disk; the left one should read Dynamic and Foreign. It is likely that no drive letter will be associated with that volume.
5. Right-click that pane and select Import Foreign Disks. Select OK, then OK again.

The volume now has a drive letter assigned to it, and is of Simple Layout and Dynamic Type. You can read/write to that volume.

Note: When using Windows dynamic disks, remember that if the FlashCopy pair has been rescanned or if the server reading the targets has been rebooted, the FlashCopy targets will appear as foreign disks to that server. Manual intervention will be required to re-import those disks and restore operation.


Tip: Disable the Fast-indexing option on the source disk; otherwise, operations to that volume get cached to speed up disk access. This means that data is not flushed from memory and the target disk may contain copies of files and folders that were deleted from the source system.

When performing subsequent Remote Mirror and Copy/FlashCopies to the target volume, it is not necessary to perform a reboot, because the target volume is still known to the target system. However, in order to detect any changes to the contents of the target volume, you should remove the drive letter from the target volume before doing the FlashCopy. Then, after carrying out the FlashCopy, you restore the drive letter in order for the host it is mounted on to be able to read/write to it.

There is a Windows utility, DiskPart, that enables you to script these operations so that FlashCopy can be carried out as part of an automated backup procedure. DiskPart can be found at the Microsoft download site with a search on the key word DiskPart:
http://www.microsoft.com/downloads
A description of the DiskPart commands can be found at the Web site:
http://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/diskpart.mspx
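As an illustration of scripting the drive letter handling, the following minimal sketch uses two DiskPart scripts around a DS CLI FlashCopy call. The volume number, drive letter, profile, storage image ID, and volume pair are hypothetical placeholders; identify the correct volume number with the DiskPart list volume command first.

rem --- remove_letter.txt: run with "diskpart /s remove_letter.txt" before the FlashCopy
select volume 3
remove letter=F

rem --- run the FlashCopy between the two scripts, for example:
rem ---   dscli -cfg ds8000.profile "mkflash -dev IBM.2107-7512345 1000:1100"

rem --- assign_letter.txt: run with "diskpart /s assign_letter.txt" to restore access
select volume 3
assign letter=F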

Extending simple volumes


The Copy Services source may initially be a single simple volume. However, as requirements change on the application server, the logical volume may be extended over two or more volumes. You should not independently extend the target volumes; instead, let Windows detect the correct sequence of the extended volumes during the import process. For this reason, the target volumes should be reverted back to basic disks prior to the initial FlashCopy after the source has been extended. The target server should also be rebooted for the disk manager to pick up the new volumes. After the reboot, the volumes will be recognized as foreign disks, and you can proceed to import them. A reboot of the target system on subsequent FlashCopies is not necessary until the source volume has been further extended.

When performing subsequent Remote Mirror and Copy/FlashCopy to the target volume, it is not necessary to perform a reboot because the target volume is still known to the target system. However, in order to detect any changes to the contents of the target volume, you should remove the drive letter from the target volume before doing the FlashCopy. Then, after carrying out the FlashCopy, you restore the drive letter in order for the host it is mounted on to be able to read/write to it. As before, the DiskPart utility enables you to script these operations so that FlashCopy can be carried out as part of an automated backup procedure. DiskPart can be found at the Microsoft download site with a search for the key word DiskPart:
http://www.microsoft.com/downloads

The DiskPart tool also provides a way to extend an existing partition into free space at the end of the same logical drive. A description of this procedure can be found in the Microsoft Knowledge Base, article 304736:
http://support.microsoft.com/?kbid=304736

Enlarging extended/spanned volumes


When you have extended or spanned disks, the logical drive may in time grow to include more of the initial volume (extended disk) or include additional volumes. When this occurs, it


is necessary, as before, to remove the target volume group information and revert the target volumes back to basic disks. On the initial FlashCopy, it is necessary to reboot the target server to configure the additional disks, and then import all the foreign disks that are part of the volume group.

When performing subsequent Remote Mirror and Copy/FlashCopy to the target volume, it is not necessary to perform a reboot, because the target volume is still known to the target system. However, in order to detect any changes to the contents of the target volume, you should remove the drive letter from the target volume before doing the FlashCopy. Then, after carrying out the FlashCopy, you restore the drive letter in order for the host it is mounted on to be able to read/write to it. Again, the DiskPart utility enables you to script these operations so that FlashCopy can be carried out as part of an automated backup procedure. You can find DiskPart at the Microsoft download site (search for the key word DiskPart):
http://www.microsoft.com/downloads

Microsoft Volume Shadow Copy Services (VSS)


The Microsoft Volume Shadow Copy Service (VSS) is a storage management interface for Microsoft Windows Server 2003. VSS enables your storage array to interact with third-party applications that use the VSS Application Programming Interface (API). Microsoft VSS is included in the Windows Server 2003 installation.

In Windows 2000, each storage area network (SAN) hardware vendor provided its own proprietary set of APIs for managing its hardware, which made it challenging to develop uniform SAN-management software. Windows Server 2003 introduced the Volume Shadow Copy Service as a general infrastructure for creating consistent point-in-time copies of data on a volume, generally referred to as shadow copies. VSS is used for a number of purposes, such as:
- Creating consistent backups of open files and applications
- Creating shadow copies for shared folders
- Backup, testing, and data mining

The DS8000 provides an integration with the Microsoft Volume Shadow Copy Service to produce consistent shadow copies. The IBM System Storage DS8000 series has FlashCopy functionality, which can integrate with the Microsoft Volume Shadow Copy Service. DS8000 FlashCopy enables you to create full volume copies of data in a Storage Unit. During a FlashCopy operation on the DS8000, it takes only a few seconds to complete the process of establishing the FlashCopy pair and creating the necessary control bitmaps.

Volume Shadow Copy Services product components


Prior to Microsoft VSS, if you did not have an online backup solution implemented, you either had to stop activities on your server during the Backup Process, or live with the side effects of an online backup, such as inconsistent data and open files that could not be backed up. With Windows Server 2003 and VSS enabled applications, online backup results in consistent data, and files that are open during the backup are never a problem. Microsoft Volume Shadow Copy Services (VSS) enables you to perform online backup of applications, which traditionally is not possible. VSS is supported on the DS8000 storage server subsystem with FlashCopy capabilities.


VSS accomplishes this by facilitating communications between the following three important entities:
- Requestors: Applications that request that a volume shadow copy be taken. These are applications, such as backup or storage management applications, that request a point-in-time copy of data, or shadow copy.
- Writers: Components of an application that store persistent information about one or more volumes that participate in shadow copy synchronization. Writers are software that is included in applications and services to help provide consistent shadow copies. Writers serve two main purposes: responding to signals provided by VSS to interface with applications to prepare for the shadow copy, and providing information about the application name, icons, files, and a strategy to restore the files. Writers prevent data inconsistencies.
- Providers: Components that create and maintain the shadow copies. The IBM VSS provider is the provider interface that interacts with the Microsoft Volume Shadow Copy Service and with the Common Information Model Agent (CIM Agent) on the master console.

Figure A-8 shows the Microsoft VSS architecture and how the software provider and hardware provider interact through the Volume Shadow Copy Service.

Figure A-8 Microsoft VSS architecture


Microsoft Volume Shadow Copy Service function


Microsoft VSS accomplishes the fast backup process when a backup application initiates a shadow copy backup. Microsoft VSS coordinates with the VSS-aware writers to briefly hold writes on the databases, applications, or both. Microsoft VSS flushes the file system buffers and asks a provider to initiate a FlashCopy of the data. When the FlashCopy is logically completed, Microsoft VSS allows writes to resume and notifies the requestor that the backup has completed successfully. The volumes are mounted, hidden, and read-only, to be used when a rapid restore is necessary. Alternatively, the volumes can be mounted on a different host and used for application testing or backup to tape.

The following list shows the Microsoft VSS FlashCopy process:
1. The requestor notifies Microsoft VSS to prepare for shadow copy creation.
2. Microsoft VSS notifies the application-specific writer to prepare its data for making a shadow copy.
3. The writer prepares the data for that application by completing all open transactions, flushing the cache, and writing in-memory data to disk.
4. When the data is prepared for the shadow copy, the writer notifies VSS, which relays the message to the requestor to initiate the commit copy phase.
5. VSS temporarily quiesces application I/O write requests for a few seconds and the hardware provider performs the FlashCopy on the Storage Unit.
6. After the completion of the FlashCopy, VSS releases the quiesce, and database writes resume.
7. VSS queries the writers to confirm that write I/Os were successfully held during the Microsoft Volume Shadow Copy.

Microsoft VSS with DS8000 FlashCopy


The following steps are performed by Microsoft VSS in conjunction with FlashCopy when a backup application initiates a request for a backup on a DS8000 Storage Unit:
1. VSS retrieves a list of volumes from the DS8000 and selects appropriate target volumes from the free pool (VSS_FREE).
2. VSS moves the target volumes to the reserved pool (VSS_RESERVED) and the database suspends writes.
3. VSS issues a FlashCopy from the source volumes to the target volumes, and the database resumes writes after completion of the FlashCopy.
4. VSS assigns the target volumes to the backup server's host bus adapters (HBAs), and Windows mounts the volumes on the backup server.
5. The requestor reads the data of the target volumes and copies it to tape.
6. When the tape copy completes, Windows unmounts the volumes and VSS unassigns the target volumes from the backup server's HBAs.
7. VSS assigns the target volumes back to the free pool (VSS_FREE).


Refer to the FlashCopy diagram shown in Figure A-9.

Figure A-9 Microsoft VSS with DS8000 FlashCopy

Additional information
Refer to IBM TotalStorage DS Open Application Programming Interface Reference, GC35-0493, for more technical and implementation information.

Refer to the following Web sites for more information about Microsoft VSS:
http://technet2.microsoft.com/WindowsServer/en/library/2b0d2457-b7d8-42c3-b6c9-59c145b7765f1033.mspx?mfr=true
http://www.microsoft.com/windowsserversystem/storage/technologies/vss/default.mspx

Microsoft Virtual Disk Service (VDS)


Microsoft Virtual Disk Service is designed to meet the disk and storage subsystem management needs for both small and midsize system configurations for direct attached or SAN-based storage. This interface is available in Microsoft Windows 2003 Server and it is designed to be able to scale up to enterprise configurations without compromising the complex SAN functionality enabled by hardware vendors.

Virtual Disk Service overview


With Windows Server 2003, Microsoft introduced the Virtual Disk Service (VDS). VDS is a single component within Windows Server 2003 that serves as a SAN storage management tool rather than a business continuity solution. It allows Windows administrators to perform functions such as creating and deleting volumes and managing the volumes assigned to a server.

VDS unifies storage management and provides a single interface for managing block storage virtualization. This interface is vendor and technology neutral, and it is independent of the layer where virtualization is done: operating system software, RAID storage hardware, or other storage virtualization engines. Microsoft's Virtual Disk Service enables the management of heterogeneous storage systems, while leveraging both the client and provider APIs. Microsoft VDS also supports automatic LUN configuration, which facilitates dynamic reconfiguration by hardware in response to load or fault handling.

Microsoft VDS is a software interface for managing block storage virtualization. The hardware provider is a vendor-specific DLL module that integrates the vendor-specific solution into the overall VDS architecture. This interface module makes it possible for the DS8000 storage server to interoperate with VDS-enabled applications that utilize the VDS API in a Windows environment. VDS hides the complexities associated with the storage implementation from applications and makes query and configuration operations common across all devices it manages.

VDS is a set of Application Programming Interfaces (APIs) that use two sets of providers to manage storage devices. The built-in VDS software providers enable you to manage disks and volumes at the operating system level. VDS hardware providers, supplied by the hardware vendor, enable you to manage hardware RAID arrays. Windows Server 2003 components that work with VDS include the Disk Management Microsoft Management Console (MMC) snap-in, the DiskPart command-line tool, and the DiskRAID command-line tool, which is available in the Windows Server 2003 Deployment Kit. VDS utilizes a standardized interface through a graphical user interface (GUI), at the command prompt, or through a command-line interface (CLI). Figure A-10 shows the Microsoft VDS architecture.

Figure A-10 Microsoft VDS architecture



The following minimum components are required for using the Microsoft Virtual Disk Service with the DS8000 on a Windows Server 2003 operating system:
- DS8000 Storage Unit
- Common Information Model (CIM) agent

Virtual Disk Service product components


Microsoft Virtual Disk Service (VDS) controls the process of making storage accessible to the systems that need it. It is transparent to applications (or users) how the data is stored, whether on a single physical disk or spanned across several disks (a logical unit), in terms of data protection and performance. VDS can present a physical disk or a logical disk to a server. VDS can:
- Create and delete logical units and assign logical unit numbers (LUNs).
- Unmask LUNs to a server.
- Create partitions and volumes.
- Format the file system.

VDS works with two types of disks:
- Basic disks. VDS is used to partition each physical disk and to create the volumes that can be mapped to drive letters for use. These volumes are known as simple volumes and do not span multiple disks. Basic disks are the legacy disks, and they do not offer the same performance and data protection that dynamic disks offer.
- Dynamic disks. VDS can be employed to create dynamic disks, which can consist of either simple volumes or multi-partition volumes. Multi-partition volumes physically span more than a single disk but are logically considered a single volume. Dynamic disks can be spanned, striped (RAID 0), mirrored (RAID 1), or striped with parity (RAID 5). VDS can be used to expand dynamic disks to make more space available to a volume.

The DS8000 interacts with the IBM VDS hardware provider to Microsoft VDS. The implementation is based on the DS CIM Agent and Microsoft VDS, using CIM technology to query storage subsystem information and manage LUNs.

Microsoft Virtual Disk Service together with Microsoft Volume Shadow Copy Service forms a unified heterogeneous storage systems solution that provides:
- Management of block storage virtualization
- Discovery of new storage
- Boot from SAN
- Shadow copy creation that relates to the storage subsystem's FlashCopy capability
- Creation of consistent backups of open files and applications
- Creation of shadow copies for shared folders
- Backup, testing, and data mining purposes


IBM uses QLogic SANsurfer VDS Manager to interact with Microsoft Virtual Disk Service. Figure A-11 shows the QLogic SANsurfer VDS Manager startup screen.

Figure A-11 QLogic SANsurfer VDS Manager startup

Microsoft Virtual Disk Service by itself, as a single component, does not provide you with a disaster recovery solution. VDS is primarily designed as a SAN management tool that allows you to perform disk management functions. VDS provides you with an effective tool with which you can manage multi-vendor storage systems through a single interface.


With Microsoft VDS and VSS, the most distinct advantages coupled with IBM TotalStorage products are booting from SAN and shadow copy (Microsoft Volume Shadow Copy Service) creation functionality. In a disaster recovery scenario, if you have DS8000 storage subsystems serving a Windows 2003 environment, you can integrate them with Microsoft VSS and VDS. This means that you can manage your overall SAN environment with little effort: creating FlashCopies and offloading backups to backup servers without shutting down your production application, as well as creating and assigning logical units and managing the SAN storage environment. Figure A-12 shows the overall Microsoft VDS and VSS architecture.

Figure A-12 Microsoft VDS and VSS architecture


Figure A-13 shows the Microsoft VDS and VSS architecture combined with FlashCopy.

Figure A-13 Microsoft VDS and VSS with DS8000 storage subsystem

Additional information
Refer to IBM TotalStorage DS Open Application Programming Interface Reference, GC35-0493, for more technical information and implementation. Refer to the following Web site for more information about the Microsoft Virtual Disk Service: http://www.microsoft.com/windowsserversystem/storage/technologies/vss/default.mspx

SUN Solaris and Copy Services


In the following section we describe the actions that should be taken to perform Copy Services functions and mount a target volume on a SUN Solaris server. A Copy Services target volume can be made available to the same server or to another server. You can use the DS CLI to create scripts that automate your procedures and prepare your target mount point.

FlashCopy without a volume manager


In this section we describe how to access FlashCopy volumes under SUN Solaris without volume manager software. Native commands are used to show how it is possible to access the target volume after the Copy Services function has completed. A FlashCopy is a point-in-time copy that immediately creates a copy of the current status of the data. To ensure that the copied data is a consistent copy, which enables you to start up

the application from the FlashCopy target, the application has to be stopped. To ensure that no other I/O is issued, the source volumes can be frozen before the FlashCopy is performed. The shell script to be run before the application that will use the FlashCopy target should include the operations shown in Example A-2.
Example: A-2 Backup preparation process

#quiesce an application
insert the quiescing script here
#freeze the source
lockfs -w /source
#start FlashCopy relationships
insert the FlashCopy DS CLI script here, using the option -wait on the mkflash command
#thaw the source
lockfs -u /source
#and resume the application
insert the resuming script here
#check the target for consistency
fsck -y /dev/rdsk/cXtYdZsN
#if OK mount it
mount /dev/dsk/cXtYdZsN /target

The foregoing steps for a FlashCopy operation can be done by a script, as shown in Example A-2, which gives an example of preparing a backup using FlashCopy. When the FlashCopy has been created, the target volumes can be mounted on a different host, which can take the data from the target volumes and send it to the final backup destination, for example through a backup server.

Remote copy without a Volume Manager


A remote copy relationship can be established at any time. Before the target volumes can be used, all data must be copied from the source to the target volumes. When this is finished, there are the following possibilities to make use of the target volumes (a DS CLI failover sketch follows this list):
- Stop the application at the primary site and terminate or fail over the Remote Copy.
- If the applications should stay running at the primary site, consistency can only be provided by Metro Mirror using a freeze prior to a failover or a terminate of the relationship, or by Global Mirror using a reverse of its FlashCopy at the remote site.

When the Remote Mirror has been failed over or terminated, the remote host can mount the target volumes and start the application.
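As an illustration of the failover case, the following minimal sketch shows a Metro Mirror failover issued with the DS CLI against the remote DS8000, followed by the usual Solaris consistency check and mount. The profile, storage image IDs, volume pair, and device names are hypothetical placeholders.

#run at the remote site: make the secondary volume usable (it becomes a suspended primary)
dscli -cfg /opt/dscli/profile/ds8000_remote.profile "failoverpprc -dev IBM.2107-7567890 -remotedev IBM.2107-7512345 -type mmir 1100:1000"
#check the file system on the former secondary and mount it
fsck -y /dev/rdsk/c2t1d0s6
mount /dev/dsk/c2t1d0s6 /target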

Copy Services using VERITAS Volume Manager


In the following section we describe how to perform FlashCopy and Remote Mirror and Copy on SUN Solaris systems with VERITAS Volume Manager (VxVM) support.

FlashCopy with VERITAS Volume Manager


In many cases, a user will make a copy of a volume so that the data can be used by a different machine. In other cases, a user may want to make the copy available to the same machine. VERITAS Volume Manager assigns each disk a unique global identifier. If the volumes are on different machines, this does not present a problem. However, if they are on the same machine, you have to take some precautions. For this reason, the steps that you should take are different for the two cases.


FlashCopy to a different server


One common method for making a FlashCopy of a VxVM volume is to first freeze the I/O to the source volume, issue the FlashCopy, and import the new FlashCopy onto a second server. In general, the steps for performing this process are as follows (a scripted sketch follows this list):
1. Unmount the target volume on Server B.
2. Freeze the I/O to the source volume on Server A.
3. Invoke the FlashCopy commands.
4. Thaw the I/O to the source volume on Server A.
5. Mount the target volume on Server B.
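As an illustration, the five steps could be scripted as in the following minimal sketch. The mount points, DS CLI profile, storage image ID, volume pair, disk group, and volume name are hypothetical placeholders, and Server B must already be able to see the target LUNs.

#on Server B: release the previous copy
umount /target
vxdg deport targetdg
#on Server A: freeze I/O to the source file system
lockfs -w /source
#on either server: invoke the FlashCopy with the DS CLI
dscli -cfg /opt/dscli/profile/ds8000.profile "mkflash -dev IBM.2107-7512345 -wait 1000:1100"
#on Server A: thaw I/O again
lockfs -u /source
#on Server B: import the copy, start the volume, check it, and mount it
vxdg -C import targetdg
vxrecover -g targetdg -sb
fsck -F vxfs /dev/vx/rdsk/targetdg/vol1
mount -F vxfs /dev/vx/dsk/targetdg/vol1 /target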

FlashCopy to the same server


The simplest way to make the copy available to the source machine is to export and offline the source volumes. In Example A-3, volume vol1 is contained in Disk Group DG1. This Disk Group consists of one device (c6t1d0s2). When that disk is taken offline, the FlashCopy target becomes available to the host and can be imported.
Example: A-3 Making a FlashCopy available by exporting the source volume

#halt I/O on the source by unmounting the volume
umount /vol1
#execute FlashCopy commands here
#deport the source volume group
vxdg deport DG1
#offline the source disk
vxdisk offline c6t1d0s2
#now only the target disk is online
#import the volume again
vxdg import DG1
#recover the copy
vxrecover -s Vol1
#re-mount the volume
mount /vol1

If you want to make both the source and target available to the machine at the same time, it is necessary to change the private region of the disk, so that VERITAS Volume Manager allows the target to be accessed as a different disk. Here we explain how to simultaneously mount DS FlashCopy source and target volumes on the same host without exporting the source volumes when using VERITAS Volume Manager. Check with VERITAS and IBM on the supportability of this method before using it.

It is assumed that the sources are constantly mounted to the SUN host, the FlashCopy is performed, and the goal is to mount the copy without unmounting the source or rebooting. After the target volumes have been assigned, it is necessary to reboot the SUN server using reboot -- -r or, if a reboot is not immediately possible, then issue devfsadm. However, a reboot is recommended for guaranteed results. It is also assumed that the appropriate actions in order to use the target volumes with the host have already taken place (that is, devfsadm, vxdctl enable, and so on).

The following procedure refers to these names:
- mydg: The name of the disk group that is being created.
- da_name: The disk name shown under the DISK column in the vxdisk list output.


- last_daname: The name by which the disk is known to VxVM, as shown under the DEVICE column in the vxdisk list output on SUN Solaris.

Use the following procedure to mount the targets to the same host:
1. Determine which disks have a copy of the disk group configuration in their private region. The following command will list the configuration and log disks:
   # vxdg list <disk group>
2. Determine the location of the private region (tag 15) on the disks (normally partition 3):
   # prtvtoc /dev/rdsk/c#t#d#s2
   Or use the following command to get the partition number for the private region:
   # vxdisk list c#t#d#s2 | grep priv
3. Dump the private region:
   # /usr/lib/vxvm/diag.d/vxprivutil dumpconfig /dev/rdsk/c#t#d#s3 > dg.dump
4. Create a script to initialize the disk group:
   # cat dg.dump | vxprint -D - -d -F "vxdg -g <mydg> adddisk %name=%last_da_name" > dg.sh
5. Edit the file dg.sh and change the first line to:
   # vxdg init <mydg> <daname>=<last_daname>
6. Make the file dg.sh executable:
   # chmod 755 dg.sh
7. Create a file that can be used to rebuild the VM configuration:
   # cat dg.dump | vxprint -D - -hvpsm > dg.maker
8. Initialize the disk group by executing dg.sh:
   # ./dg.sh
9. If this results in the error Disk is already in use by another system, then the private region on each disk that is to be added to the disk group will need to be initialized. This can be done with the following command:
   # vxdisksetup -i <da_name>
10. Rebuild the VM configuration:
   # vxmake -g <mydg> -d dg.maker
11. Start the volumes:
   # vxvol -g <mydg> start <volume>

Remote Mirror and Copy with VERITAS Volume Manager


In the previous section we described how to perform a FlashCopy and mount the source and target file system on the same server. Here we describe the steps necessary to mount a Remote Mirror and Copy secondary volume onto a server that does not have sight of the primary volume. It assumes that the Remote Mirror and Copy copy pair has been terminated prior to carrying out the procedure. After the secondary volumes have been assigned, it is necessary to reboot the SUN server using reboot -- -r or, if a reboot is not immediately possible, then issue devfsadm. However, a reboot is recommended for guaranteed results.


Use the following procedure to mount the secondary volumes to another host:
1. Scan devices in the operating system device tree:
   #vxdisk scandisks
2. List all known disk groups on the system:
   #vxdisk -o alldgs list
3. Import the Remote Mirror and Copy disk group information:
   #vxdg -C import <disk_group_name>
4. Check the status of volumes in all disk groups:
   #vxprint -Ath
5. Bring the disk group online:
   #vxvol -g <disk_group_name> startall
   or
   #vxrecover -g <disk_group_name> -sb
6. Perform a consistency check on the file systems in the disk group:
   #fsck -F vxfs /dev/vx/dsk/<disk_group_name>/<volume_name>
7. Mount the file system for use:
   #mount -F vxfs /dev/vx/dsk/<disk_group_name>/<volume_name> /<mount_point>

When you have finished with the Remote Mirror and Copy secondary volume, we recommend that you perform the following tasks:
1. Unmount the file systems in the disk group:
   #umount /<mount_point>
2. Take the volumes in the disk group offline:
   #vxvol -g <disk_group_name> stopall
3. Export the disk group information from the system:
   #vxdg deport <disk_group_name>

Tip: If you FlashCopy or Remote Mirror and Copy only one half of a RAID 1 mirror, it will be necessary to force the import of the disk group because not all of the disks are available. Therefore, it is necessary to issue the following command:
   vxdg -f import <disk_group>
However, be aware that this may cause disk group inconsistencies.

HP-UX and Copy Services


The following section describes how it is possible to access a source and target Copy Services volume on the same HP server.


HP-UX and FlashCopy


The following procedure must be followed to permit access to the FlashCopy source and destination simultaneously on an HP-UX host. It could be used to make an additional copy of a development database for testing or to permit concurrent development, to create a database copy for data mining that will be accessed from the same server as the OLTP data, or to create a Point-in-Time Copy of a database for archiving to tape from the same server. This procedure must be repeated each time you perform a FlashCopy and want to use the target physical volume on the same host where the FlashCopy source volumes are present in the Logical Volume Manager configuration.

Target preparation
In order to prepare the target system, carry out the following steps:
1. Vary off the source volume groups:
   #vgchange -a n /dev/<source_vg_name>
2. If you did not use the default Logical Volume names (lvolnn) when they were created, create a map file from your source volume group using the vgexport command:
   #vgexport -m <map file name> -p /dev/<source_vg_name>
   Tip: This map file needs to be ftped to the target host.
3. If the target volume group exists, remove it using the vgexport command. The target volumes cannot be members of a Volume Group when the vgimport command is run:
   #vgexport -m /dev/null /dev/<target_vg_name>
4. Shut down or quiesce any applications that are accessing the FlashCopy source.

FlashCopy execution
To execute the procedure, you must carry out the following steps:
1. Unmount all file systems in the source volume group.
2. Perform the FlashCopy using the option -wait.
3. Mount all the file systems in the source volume group.
4. When the FlashCopy is finished, change the Volume Group ID on each DS volume in the FlashCopy target. The volume ID for each volume in the FlashCopy target volume group must be modified on the same command line. Failure to do this will result in a mismatch of Volume Group IDs within the Volume Group. The only way to resolve this issue is to perform the FlashCopy again and reassign the Volume Group IDs using the same command line:
   vgchgid -f </dev/rdsk/c#t#d#_1> ... </dev/rdsk/c#t#d#_n>
   Note: This step is not needed if another host is used to access the target devices.
5. Create the Volume Group for the FlashCopy target:
   #mkdir /dev/<target_vg_name>
   #mknod /dev/<target_vg_name>/group c <lvm_major_no> <next_available_minor_no>
   Use the lsdev -C lvm command to determine what the major device number should be for Logical Volume Manager objects. To determine the next available minor number, examine the minor number of the group file in each volume group directory using the ls -l command.

6. Import the FlashCopy target volumes into the newly created volume group using the vgimport command:
   #vgimport -m <map file name> -v /dev/<target_vg_name> </dev/dsk/c#t#d#_1> ... </dev/dsk/c#t#d#_n>
7. Activate the new volume group:
   #vgchange -a y /dev/<target_vg_name>
8. Perform a full file system check on the logical volumes in the target volume group. This is necessary in order to apply any changes in the JFS intent log to the file system and mark the file system as clean:
   #fsck -F vxfs -o full -y /dev/<target_vg_name>/<logical volume name>
9. If the logical volume contains a VxFS file system, mount the target logical volumes on the server:
   #mount -F vxfs /dev/<target_vg_name>/<logical volume name> <mount point>

When access to the FlashCopy target volume is no longer required, unmount the file systems and vary off the volume group:
   #vgchange -a n /dev/<target_vg_name>

If no changes are made to the source volume group prior to the subsequent FlashCopy, then all that is needed is to vary on the volume group and perform a full file system consistency check, as shown in steps 7 to 9. A combined script sketch of these steps follows.
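This is a minimal sketch only; the file systems, volume group names, device files, major/minor numbers, DS CLI profile, storage image ID, and volume pair are hypothetical placeholders, and the map file is assumed to have been created during target preparation.

#unmount the source file systems
umount /prod/data
#run the FlashCopy and wait for the relationship to be established
dscli -cfg /opt/dscli/profile/ds8000.profile "mkflash -dev IBM.2107-7512345 -wait 1000:1100"
#remount the source file systems
mount /prod/data
#change the VGID on all target disks in one command
vgchgid -f /dev/rdsk/c10t0d1 /dev/rdsk/c10t0d2
#create and import the target volume group, then check and mount it
mkdir /dev/tgtvg
mknod /dev/tgtvg/group c 64 0x030000
vgimport -m srcvg.map -v /dev/tgtvg /dev/dsk/c10t0d1 /dev/dsk/c10t0d2
vgchange -a y /dev/tgtvg
fsck -F vxfs -o full -y /dev/tgtvg/lvol1
mount -F vxfs /dev/tgtvg/lvol1 /backup/data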

HP-UX with Remote Mirror and Copy


Using Remote Mirror and Copy with HP-UX is similar to using FlashCopy, apart from the fact that the volume group should be unique to the target server, so there is no need to run the vgchgid command to change the physical volume to volume group association. Here is the procedure to bring secondary volumes online to Remote Mirror and Copy target HP-UX hosts:
1. Allow the copy pair volumes to go into the duplex state using the catch-up operation or by leaving the volumes to become synchronized.
2. Quiesce the source HP-UX application to cease any updates to the primary volumes.
3. Terminate the Remote Mirror and Copy pair relationship.
4. Rescan for hardware configuration changes using the ioscan -fnC disk command. Check that the disks are CLAIMED using ioscan -funC disk. The reason for doing this is that the volume group may have been extended to include more physical volumes.
5. Create the Volume Group for the Remote Mirror and Copy secondary. Use the lsdev -C lvm command to determine what the major device number should be for Logical Volume Manager objects. To determine the next available minor number, examine the minor number of the group file in each volume group directory using the ls -l command.
6. Import the Remote Mirror and Copy secondary volumes into the newly created volume group using the vgimport command.
7. Activate the new volume group.
8. Perform a full file system check on the logical volumes in the target volume group. This is necessary in order to apply any changes in the JFS intent log to the file system and mark the file system as clean.
9. If the logical volume contains a VxFS file system, mount the target logical volumes on the server.


If changes are made to the source volume group, they should be reflected in the /etc/lvmtab of the target server. Therefore, it is recommended that periodic updates be made to keep the lvmtab on both source and target machines consistent. As with the AIX importvg, there are two alternatives (a DS CLI sketch of the second alternative follows this list):

Using the Permit read access from target option:
a. If you are using Global Copy, issue go-to-sync to allow the volumes to go to the duplex state.
b. When the volumes are in the duplex state, suspend the primary volume so that no updates are reflected on the secondary volumes.
c. Export the source volume group information into a map file.
d. Export the old volume group definitions from the target host.
e. Run an ioscan to identify any new volumes that have been assigned to the target hosts due to expansion of the source volume group.
f. Import the target volume group definition using the map file generated from the source host.
g. Reestablish the Remote Mirror and Copy relationship, only copying the out-of-sync cylinders.

Using a NOCOPY Remote Mirror and Copy establish:
a. Quiesce all write I/O to the primary volumes, and unmount the source file systems.
b. If you are using Global Copy, issue the go-to-sync command to allow all the secondary volumes to catch up.
c. When the volumes are in the duplex state, terminate the Remote Mirror and Copy relationship.
d. Export the source volume group information into a map file.
e. Export the old volume group definitions from the target host.
f. Run an ioscan to identify any new volumes that have been assigned to the target hosts due to expansion of the source volume group.
g. Import the target volume group definition using the map file generated from the source host.
h. Establish the pairs relationship with the NOCOPY option, so that the primary and secondary have a copy pair relationship without updates.
i. Immediately suspend the primary volumes.
j. Mount the file systems and start the application at the source.
k. Some time later, reestablish the pairs, only copying the out-of-sync cylinders.
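As an illustration of steps h through k of the NOCOPY alternative, the following minimal DS CLI sketch establishes the Metro Mirror pair without copying data, suspends it immediately, and later resumes it so that only out-of-sync tracks are copied. The profile, storage image IDs, and volume pair are hypothetical placeholders.

#establish the pair without a full copy (the volumes are already identical)
dscli -cfg /opt/dscli/profile/ds8000.profile "mkpprc -dev IBM.2107-7512345 -remotedev IBM.2107-7567890 -type mmir -mode nocp 1000:1100"
#suspend it right away so that the application can resume on the primary
dscli -cfg /opt/dscli/profile/ds8000.profile "pausepprc -dev IBM.2107-7512345 -remotedev IBM.2107-7567890 1000:1100"
#later: resume the pair; only out-of-sync tracks are transmitted
dscli -cfg /opt/dscli/profile/ds8000.profile "resumepprc -dev IBM.2107-7512345 -remotedev IBM.2107-7567890 -type mmir 1000:1100"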


VMware Virtual Infrastructure and Copy Services


With the number of different guest operating systems supported by VMware, it is possible to have a large number of scenarios where the DS8000 Advanced Copy Services can help users meet their business requirements. Furthermore, each of these scenarios has a number of possible permutations. This section is not intended to cover every possible use of Copy Services with VMware; rather, it is intended to provide hints and tips that will be useful in many different Copy Services scenarios.

When using Copy Services with the guest operating systems, the restrictions of the guest operating system still apply. For example, there are some restrictions when using Copy Services with Microsoft Windows dynamic disks, as discussed in "Windows and Remote Mirror and Copy" on page 738. These restrictions still apply when the guest operating system is Windows on VMware. In some cases, using Copy Services in a VMware environment may impose additional restrictions. Before using these techniques, check with IBM and the IBM System Storage DS8000 Host Systems Attachment Guide, SC26-7917, for the latest information about the support available for these solutions.

Virtual machine considerations regarding Copy Services


Before issuing the FlashCopy, it is important to prepare both the source and target machines. For the source machine, this typically means quiescing the applications, unmounting the source volumes, or flushing memory buffers to disk; see the appropriate sections for your operating system for more information. For the target machine, the target volumes typically must be unmounted. This prevents the operating system from accidentally corrupting the target volumes with buffered writes, and prevents users from accessing the target LUNs until the FlashCopy is logically complete.

With VMware, there is an additional restriction: the target virtual machine must be shut down before issuing the FlashCopy, because VMware performs its own caching in addition to any caching the guest operating system might do.

To be able to use the FlashCopy target volumes with ESX Server, you need to make sure that the ESX Server can see the target volumes. Besides checking the SAN zoning and the host attachment within the DS8000, you may need a SAN rescan issued from the VirtualCenter.
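If you work on the ESX Server service console rather than through the VirtualCenter GUI, a rescan can also be triggered per HBA. This is only a sketch; vmhba1 is an assumed adapter name, so substitute the adapter names of your host and repeat the command for each HBA:

  esxcfg-rescan vmhba1

Afterward, the new target LUNs should appear in the storage configuration of the host.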


If the FlashCopied LUNs contain a VMFS file system, the ESX host detects this on the target LUNs and adds them as a new datastore to its inventory. The VMs stored on this datastore can then be opened on the ESX host. To assign the existing virtual disks to new VMs, select Use an existing virtual disk in the Add Hardware Wizard panel and choose the .vmdk file you want to use. See Figure A-14.

If the FlashCopied LUNs were assigned as RDMs, the target LUNs can be assigned to a VM by creating a new RDM for this VM. In the Add Hardware Wizard panel, select Raw Device Mapping and use the same parameters as on the source VM.

Note: If you do not shut down the source VM, reservations may prevent you from using the target LUNs.

Figure A-14 Adding an existing virtual disk to a VM

VMware ESX server and FlashCopy


In general, there are two ways to use FlashCopy within the VMware Virtual Infrastructure: either on raw LUNs that are attached to a host via RDM, or on LUNs that are used to build VMFS datastores, which store VMs and virtual disks.

FlashCopy on LUNs used for VMFS datastores


Since version 3, all the files that make up a virtual machine (usually the configuration, the BIOS, and one or more virtual disks) are stored on VMFS partitions, so the whole VM is most commonly stored in one single location. Because FlashCopy operations always operate on a whole volume, this provides an easy way to create point-in-time backups of whole virtual machines. Nevertheless, you have to make sure that the data on the VMFS volume is consistent, so the VMs located on the datastore must be shut down before initiating the FlashCopy job. Because a VMFS datastore can contain more than one LUN, you must make sure that all participating LUNs are copied with FlashCopy to get a complete copy of the datastore.


Figure A-15 shows an ESX host with two virtual machines, each using one virtual disk. The ESX host has one VMFS datastore consisting of two DS8000 LUNs, LUN 1 and LUN 2. In order to get a complete copy of the VMFS datastore, both LUNs must be copied with FlashCopy (a DS CLI sketch follows Figure A-15). By using FlashCopy on VMFS LUNs, it is easy to create backups of whole VMs.

Figure A-15 Using FlashCopy on VMFS volumes
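As a minimal DS CLI sketch of this scenario, assume that LUN 1 and LUN 2 are volumes 1000 and 1001 and that their FlashCopy targets are 1100 and 1101 (all four volume IDs and the storage image ID are illustrative values only). Because the VMs are shut down first, both pairs can simply be established with one mkflash command:

  dscli> mkflash -dev IBM.2107-7520781 1000:1100 1001:1101
  dscli> lsflash -dev IBM.2107-7520781 1000:1100 1001:1101

The lsflash command confirms that both relationships exist before the target LUNs are presented to an ESX host.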

FlashCopy on LUNs used for RDM


Raw device mappings (RDMs) can be used in two ways. In physical compatibility mode, the LUN is mostly treated like any other physical LUN. In virtual compatibility mode, the virtualization layer provides features, such as snapshots, that are normally only available for virtual disks.

In virtual compatibility mode, you have to make sure that the LUN you are going to copy is in a consistent state. Depending on the disk mode and current usage, you may first have to commit the redo log to get a usable copy of the disk. If persistent or nonpersistent mode is used, the LUN can be handled like an RDM in physical compatibility mode. For details and restrictions, check the SAN Configuration Guide, at:
http://www.vmware.com/support/pubs/vi_pubs.html

The following sections apply to both compatibility modes. However, keep in mind that extra work on the ESX host or the VMs might be required for the virtual compatibility mode.


Using FlashCopy within a virtual machine


In Figure A-16, a LUN that is assigned to a VM via RDM is copied using FlashCopy on a DS8000. The target LUN is then assigned to the same VM by creating a second RDM. After the FlashCopy job is issued, HDD1 and HDD2 have the same content.

For virtual disks, the same result can be achieved by copying the .vmdk files on the VMFS datastore. However, the copy is not available instantly as it is with FlashCopy; instead, you have to wait until the copy job has finished duplicating the whole .vmdk file.

Figure A-16 Using FlashCopy within a VM - HDD1 is the source for target HDD2
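For the virtual disk alternative, one possible way to clone a .vmdk file is the vmkfstools -i (import/clone) command on the ESX 3 service console; the datastore and file names below are purely illustrative:

  vmkfstools -i /vmfs/volumes/datastore1/VM1/HDD1.vmdk /vmfs/volumes/datastore1/VM1/HDD1_copy.vmdk

Unlike a FlashCopy relationship, this command only returns after the whole virtual disk has been duplicated.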


Using FlashCopy between two virtual machines


This works in the same way as using FlashCopy within a virtual machine, but this time the target disks are assigned to another VM, which can be useful for creating clones of a VM. After the FlashCopy job is issued, the target LUN can be assigned to a second VM, which can then work with a copy of VM1's HDD1 (see Figure A-17).

Figure A-17 Using FlashCopy between two different VMs - VM1's HDD1 is the source for HDD2 in VM2

Using FlashCopy between ESX Server hosts


This scenario shows how to use the target LUNs on a different ESX Server host, which is especially useful for disaster recovery if one ESX Server host fails for any reason. If LUNs with VMFS are duplicated using FlashCopy, it is possible to create a copy of the whole virtual environment of one ESX Server host that can be migrated to another physical host with little effort.


To be able to do this, both ESX Server hosts must be attached to the same DS8000 so that each can access its respective LUNs (see Figure A-18).

Figure A-18 FlashCopy between 2 ESX hosts

In Figure A-18, we are using FlashCopy on two volumes. LUN 1 is used for a VMFS datastore, while LUN 2 is assigned to VM2 as an RDM. These two LUNs are copied with FlashCopy and the copies are attached to another ESX Server host. On ESX host 2, we assign the virtual disk that is stored on the VMFS partition on LUN 1' to VM3 and attach LUN 2' via RDM to VM4. By doing this, we create a copy of ESX host 1's virtual environment and use it on ESX host 2.

Note: If you use FlashCopy on VMFS volumes and assign the targets to the same ESX Server host, the server does not allow the targets to be used, because the VMFS volume identifiers have been duplicated. To circumvent this, VMware ESX Server provides VMFS volume resignaturing. For details about resignaturing, check page 112 and the following pages in the SAN Configuration Guide, available at:
http://www.vmware.com/support/pubs/vi_pubs.html

ESX and Remote Mirror and Copy


It is possible to use Remote Mirror and Copy with all three types of disks. However, in most environments, raw System LUNs in physical compatibility mode are preferred. As with FlashCopy, using VMware with Remote Mirror and Copy carries all the advantages and limitations of the guest operating system; see the individual guest operating system sections for relevant information.

Using VMware with Remote Mirror and Copy imposes some additional restrictions. One such limitation is that the mkpprc -tgtread parameter is not supported: VMware cannot use VMFS-formatted volumes or raw System LUNs in virtual mode without writing to the disk. However, it may be possible to use raw System LUNs in physical compatibility mode. Check with IBM on the supportability of this procedure.


At a high level, the steps for creating a Remote Mirror and Copy are as follows:
1. Shut down the guest operating system on the target ESX Server.
2. Establish Remote Mirror and Copy from the source volumes to the target volumes.
   Important: You should use the mkpprc -resetreserve flag when establishing the Remote Mirror and Copy. Otherwise, you may receive this message on the target machine: Cannot create partition table for disk vmhbax:x:x because geometry info is invalid. Please rescan. This flag removes hardware protection on the target device and should be used with caution.
3. When the initial copy has completed and the volumes are in Full Duplex mode, suspend or remove the Remote Mirror and Copy relationship.
4. Issue the Rescan command on the target ESX Server.
5. If they are not already assigned, assign the Remote Mirror and Copy volumes to the target virtual machine. Virtual disks on VMFS volumes should be assigned as existing volumes, while raw volumes should be assigned as RDMs using the same parameters as on the source host.
6. Start the virtual machine and, if necessary, mount the target volumes.

In Figure A-19 we have a scenario similar to Figure A-18 on page 762, but now the source and target volumes are located on two different DS8000 units. This setup can be used for disaster recovery solutions where ESX host 2 is located in the backup data center. A DS CLI sketch of steps 2 and 3 follows Figure A-19.

Figure A-19 Using Remote Mirror and Copy functions
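The following DS CLI sketch illustrates steps 2 and 3 for a single volume pair. The storage image IDs, WWNN, LSS IDs, I/O ports, and volume IDs are illustrative values only; determine the real ones with lssi, lsavailpprcport, and lsfbvol. The -resetreserve flag is the one recommended in the Important note above.

  dscli> mkpprcpath -dev IBM.2107-7520781 -remotedev IBM.2107-75ABTV1 -remotewwnn 5005076303FFC663 -srclss 10 -tgtlss 20 I0143:I0010
  dscli> mkpprc -dev IBM.2107-7520781 -remotedev IBM.2107-75ABTV1 -type mmir -mode full -resetreserve 1000:2000
  dscli> lspprc -dev IBM.2107-7520781 1000
  dscli> pausepprc -dev IBM.2107-7520781 -remotedev IBM.2107-75ABTV1 1000:2000

Wait until lspprc reports the pair in Full Duplex state before pausing (pausepprc) or removing (rmpprc) the relationship, as described in step 3.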


Appendix B. SNMP notifications
In this appendix, we describe the SNMP traps that are sent in a Remote Mirror and Copy environment. This appendix repeats some of the SNMP trap information that is available in the IBM Redbook IBM System Storage DS8000 Series: Architecture and Implementation, SG24-6786.


SNMP overview
The DS8000 sends out SNMP traps when a state change occurs in a remote Copy Services environment. For this purpose, a set of traps is implemented: the 1xx traps are sent for a state change of a physical link connection, and the 2xx traps are sent for state changes in the logical Copy Services setup. The DS HMC can be set up to send SNMP traps to up to two defined IP addresses. TPC for Replication (see Chapter 6, IBM TotalStorage Productivity Center for Replication on page 43) listens for the SNMP traps of the DS8000. In addition, a network management program, such as Tivoli NetView, can be used to catch and process the SNMP traps.

Physical connection events


With the 1xx range of traps, a state change of the physical links is reported. The trap is sent if the physical remote copy link is interrupted. The link trap is sent from the primary system. The PLink and SLink columns are only used by the 2105 ESS disk unit.

If one or several links (but not all links) are interrupted, a trap 100, as shown in Example B-1, is posted, indicating that the redundancy is degraded. The RC column in the trap represents the return code for the interruption of the link. The return codes are listed in Table B-1.
Example: B-1 Trap 100: Remote mirror and copy links degraded
PPRC Links Degraded
UNIT: Mnf Type-Mod SerialNm LS
PRI:  IBM 2107-922 75-20781 12
SEC:  IBM 2107-9A2 75-ABTV1 24
Path: Type  PP   PLink  SP   SLink  RC
1:    FIBRE 0143 XXXXXX 0010 XXXXXX 15
2:    FIBRE 0213 XXXXXX 0140 XXXXXX OK

If all links are interrupted, a trap 101, as shown in Example B-2, is posted. This event indicates that no communication between the primary and the secondary system is possible anymore.
Example: B-2 Trap 101: Remote mirror and copy links are inoperable
PPRC Links Down
UNIT: Mnf Type-Mod SerialNm LS
PRI:  IBM 2107-922 75-20781 10
SEC:  IBM 2107-9A2 75-ABTV1 20
Path: Type  PP   PLink  SP   SLink  RC
1:    FIBRE 0143 XXXXXX 0010 XXXXXX 17
2:    FIBRE 0213 XXXXXX 0140 XXXXXX 17

When the DS8000 can communicate again using any of the links, trap 102, as shown in Example B-3, is sent to indicate that one or more of the interrupted links are available again.
Example: B-3 Trap 102: Remote mirror and copy links are operational
PPRC Links Up
UNIT: Mnf Type-Mod SerialNm LS
PRI:  IBM 2107-9A2 75-ABTV1 21
SEC:  IBM 2107-000 75-20781 11
Path: Type  PP   PLink  SP   SLink  RC
1:    FIBRE 0010 XXXXXX 0143 XXXXXX OK
2:    FIBRE 0140 XXXXXX 0213 XXXXXX OK
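When such a link trap is received, the current state of the remote copy paths can be checked with the DS CLI. A minimal sketch, assuming the primary storage image and the source LSS from Example B-1 (the fully qualified device ID is an illustrative value):

  dscli> lspprcpath -dev IBM.2107-7520781 12

The command lists each configured path for the LSS together with its state, so degraded or failed paths can be identified and, if necessary, reestablished with mkpprcpath.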


Table B-1 shows the Remote mirror and copy return codes.
Table B-1 Remote mirror and copy return codes

Return code  Description
02   Initialization failed. ESCON link reject threshold exceeded when attempting to send ELP or RID frames.
03   Time out. No reason available.
04   There are no resources available in the primary storage unit for establishing logical paths because the maximum number of logical paths have already been established.
05   There are no resources available in the secondary storage unit for establishing logical paths because the maximum number of logical paths have already been established.
06   There is a secondary storage unit sequence number, or logical subsystem number, mismatch.
07   There is a secondary LSS subsystem identifier (SSID) mismatch, or failure of the I/O that collects the secondary information for validation.
08   The ESCON link is offline. This is caused by the lack of light detection coming from a host, peer, or switch.
09   The establish failed. It is retried until the command succeeds or a remove paths command is run for the path.
     Note: The attempt-to-establish state persists until the establish path operation succeeds or the remove remote mirror and copy paths command is run for the path.
0A   The primary storage unit port or link cannot be converted to channel mode if a logical path is already established on the port or link. The establish paths operation is not retried within the storage unit.
10   Configuration error. The source of the error is one of the following:
     - The specification of the SA ID does not match the installed ESCON adapter cards in the primary controller.
     - For ESCON paths, the secondary storage unit destination address is zero and an ESCON Director (switch) was found in the path.
     - For ESCON paths, the secondary storage unit destination address is not zero and an ESCON Director does not exist in the path.
     - The path is a direct connection.
14   The Fibre Channel path link is down.
15   The maximum number of Fibre Channel path retry operations has been exceeded.
16   The Fibre Channel path secondary adapter is not remote mirror and copy capable. This could be caused by one of the following conditions:
     - The secondary adapter is not configured properly or does not have the current firmware installed.
     - The secondary adapter is already a target of 32 different logical subsystems (LSSs).
17   The secondary adapter Fibre Channel path is not available.
18   The maximum number of Fibre Channel path primary login attempts has been exceeded.
19   The maximum number of Fibre Channel path secondary login attempts has been exceeded.
1A   The primary Fibre Channel adapter is not configured properly or does not have the correct firmware level installed.
1B   The Fibre Channel path was established but degraded due to a high failure rate.
1C   The Fibre Channel path was removed due to a high failure rate.


Remote Mirror and Copy events


If you have configured Consistency Groups and a volume within such a Consistency Group is suspended due to a write error to the secondary device, trap 200 is sent, as shown in Example B-4. One trap per LSS that is configured with the Consistency Group option is sent. This trap can be handled by automation software, such as TPC for Replication, to freeze the Consistency Group (a DS CLI sketch follows Example B-4). The SR column in the trap represents the suspension reason code, which explains the cause of the error that suspended the remote mirror and copy group; the suspension reason codes are listed in Table B-2 on page 771.
Example: B-4 Trap 200: LSS Pair Consistency Group remote mirror and copy pair error
LSS-Pair Consistency Group PPRC-Pair Error
UNIT: Mnf Type-Mod SerialNm LS LD SR
PRI:  IBM 2107-922 75-03461 56 84 08
SEC:  IBM 2107-9A2 75-ABTV1 54 84
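Automation software reacting to trap 200 typically freezes the affected LSS pairs and, once a consistent set of secondaries has been preserved, allows application I/O to continue. A minimal DS CLI sketch, using the LSS pair from Example B-4 (the fully qualified storage image IDs are illustrative):

  dscli> freezepprc -dev IBM.2107-7503461 -remotedev IBM.2107-75ABTV1 56:54
  dscli> unfreezepprc -dev IBM.2107-7503461 -remotedev IBM.2107-75ABTV1 56:54

freezepprc stops mirroring for the LSS pair and holds write I/O to the primary volumes (extended long busy), and unfreezepprc (consistency group created) releases the long busy condition afterwards.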

Trap 202, as shown in Example B-5, is sent if a remote copy pair goes into a suspended state. The trap contains the serial number (SerialNm) of the primary and secondary machines, the logical subsystem or LSS (LS), and the logical device (LD). To avoid SNMP trap flooding, the number of SNMP traps per LSS is throttled, and the complete suspended pair information is represented in the summary: the last row of the trap represents the suspend state for all pairs in the reporting LSS. This suspended pair information is a 64-character hexadecimal string. By converting this hex string into binary, each bit represents a single device: if the bit is 1, the device is suspended; otherwise, the device is still in full duplex mode. A small decoding sketch follows Example B-5.
Example: B-5 Trap 202: Primary remote mirror and copy devices on the LSS were suspended because of an error
Primary PPRC Devices on LSS Suspended Due to Error
UNIT: Mnf Type-Mod SerialNm LS LD SR
PRI:  IBM 2107-922 75-20781 11 00 03
SEC:  IBM 2107-9A2 75-ABTV1 21 00
Start: 2005/11/14 09:48:05 CST
PRI Dev Flags (1 bit/Dev, 1=Suspended):
C000000000000000000000000000000000000000000000000000000000000000
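The following shell sketch is one way to turn such a bitmap into a list of suspended device numbers. It is only an illustration of the bit-per-device encoding described above, using the bitmap from Example B-5:

  BITMAP=C000000000000000000000000000000000000000000000000000000000000000
  echo "$BITMAP" | fold -w1 | awk '
    { n = index("0123456789ABCDEF", toupper($1)) - 1     # value of this hex digit
      for (b = 3; b >= 0; b--) {                         # four bits per digit, high bit first
        if (int(n / 2^b) % 2 == 1) printf "device 0x%02X is suspended\n", dev
        n = n % 2^b
        dev++                                            # next device number in the LSS
      }
    }'

For the bitmap shown in Example B-5, this sketch reports devices 0x00 and 0x01 of the reporting LSS as suspended.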

Global Mirror related SNMP traps


Trap 210, as shown in Example B-6, is sent when a Consistency Group in a Global Mirror environment was successfully formed.
Example: B-6 Trap210: Global Mirror initial Consistency Group successfully formed 2005/11/14 15:30:55 CET Asynchronous PPRC Initial Consistency Group Successfully Formed UNIT: Mnf Type-Mod SerialNm IBM 2107-922 75-20781 Session ID: 4002

Trap 211, as shown in Example B-7, is sent if the Global Mirror setup gets into a severe error state, in which no further attempts are made to form a Consistency Group.


Example: B-7 Trap 211: Global Mirror Session is in a fatal state Asynchronous PPRC Session is in a Fatal State UNIT: Mnf Type-Mod SerialNm IBM 2107-922 75-20781 Session ID: 4002

Trap 212, as shown in Example B-8, is sent when a Consistency Group cannot be created in a Global Mirror relationship. Some of the reasons might be:
- Volumes have been taken out of a copy session.
- The remote copy link bandwidth might not be sufficient.
- The FC link between the primary and secondary system is not available.
Example: B-8 Trap 212: Global Mirror Consistency Group failure - Retry will be attempted Asynchronous PPRC Consistency Group Failure - Retry will be attempted UNIT: Mnf Type-Mod SerialNm IBM 2107-922 75-20781 Session ID: 4002

Trap 213, as shown in Example B-9, is sent when a Consistency Group in a Global Mirror environment can be formed after a previous Consistency Group formation failure.
Example: B-9 Trap 213: Global Mirror Consistency Group successful recovery Asynchronous PPRC Consistency Group Successful Recovery UNIT: Mnf Type-Mod SerialNm IBM 2107-9A2 75-ABTV1 Session ID: 4002

Trap 214, as shown in Example B-10, is sent if a Global Mirror Session is terminated using the DS CLI command rmgmir or the corresponding GUI function.
Example: B-10 Trap 214: Global Mirror Master terminated 2005/11/14 15:30:14 CET Asynchronous PPRC Master Terminated UNIT: Mnf Type-Mod SerialNm IBM 2107-922 75-20781 Session ID: 4002

Trap 215, as shown in Example B-11, is sent if, in the Global Mirror Environment, the Master detects a failure to complete the FlashCopy commit. The trap is sent after a number of commit retries have failed.
Example: B-11 Trap 215: Global Mirror FlashCopy at Remote Site unsuccessful Asynchronous PPRC FlashCopy at Remote Site Unsuccessful A UNIT: Mnf Type-Mod SerialNm IBM 2107-9A2 75-ABTV1 Session ID: 4002

Trap 216, as shown in Example B-12, is sent if a Global Mirror Master cannot terminate the Global Copy relationship at one of its Subordinates (slaves). This might occur if the Master is terminated with rmgmir but cannot terminate the copy relationship on the Subordinate. You might need to run rmgmir against the Subordinate to prevent any interference with other Global Mirror sessions.


Example: B-12 Trap 216: Global Mirror slave termination unsuccessful Asynchronous PPRC Slave Termination Unsuccessful UNIT: Mnf Type-Mod SerialNm Master: IBM 2107-922 75-20781 Slave: IBM 2107-921 75-03641 Session ID: 4002

Trap 217, as shown in Example B-13, is sent if a Global Mirror environment is paused by the DS CLI command pausegmir or the corresponding GUI function.
Example: B-13 Trap 217: Global Mirror paused Asynchronous PPRC Paused UNIT: Mnf Type-Mod SerialNm IBM 2107-9A2 75-ABTV1 Session ID: 4002

Trap 218, as shown in Example B-14, is sent if a Global Mirror has exceeded the allowed threshold for failed consistency group formation attempts.
Example: B-14 Trap 218: Global Mirror number of consistency group failures exceed threshold Global Mirror number of consistency group failures exceed threshold UNIT: Mnf Type-Mod SerialNm IBM 2107-9A2 75-ABTV1 Session ID: 4002

Trap 219, as shown in Example B-15, is sent if a Global Mirror has successfully formed a consistency group after one or more formation attempts had previously failed.
Example: B-15 Trap 219: Global Mirror first successful consistency group after prior failures Global Mirror first successful consistency group after prior failures UNIT: Mnf Type-Mod SerialNm IBM 2107-9A2 75-ABTV1 Session ID: 4002

Trap 220, as shown in Example B-16, is sent if a Global Mirror has exceeded the allowed threshold of failed FlashCopy commit attempts.
Example: B-16 Trap 220: Global Mirror number of FlashCopy commit failures exceed threshold Global Mirror number of FlashCopy commit failures exceed threshold UNIT: Mnf Type-Mod SerialNm IBM 2107-9A2 75-ABTV1 Session ID: 4002

Trap 221, as shown in Example B-17, is sent when the Repository has reached the user-defined warning watermark or when physical space is completely exhausted.
Example: B-17 Trap 221: Space Efficient Repository or Over-provisioned Volume has reached a warning watermark Space Efficient Repository or Over-provisioned Volume has reached a warning watermark UNIT: Mnf Type-Mod SerialNm IBM 2107-9A2 75-ABTV1 Session ID: 4002


Table B-2 shows the Copy Services suspension reason codes.


Table B-2 Copy Services suspension reason codes

SRC  Description
03   The host system sent a command to the primary volume of a remote mirror and copy volume pair to suspend copy operations. The host system might have specified either an immediate suspension or a suspension after the copy completed and the volume pair reached a full duplex state.
04   The host system sent a command to suspend the copy operations on the secondary volume. During the suspension, the primary volume of the volume pair can still accept updates but updates are not copied to the secondary volume. The out-of-sync tracks that are created between the volume pair are recorded in the change recording feature of the primary volume.
05   Copy operations between the remote mirror and copy volume pair were suspended by a primary storage unit secondary device status command. This system resource code can only be returned by the secondary volume.
06   Copy operations between the remote mirror and copy volume pair were suspended because of internal conditions in the storage unit. This system resource code can be returned by the control unit of either the primary volume or the secondary volume.
07   Copy operations between the remote mirror and copy volume pair were suspended when the secondary storage unit notified the primary storage unit of a state change transition to simplex state. The specified volume pair between the storage units is no longer in a copy relationship.
08   Copy operations were suspended because the secondary volume became suspended as a result of internal conditions or errors. This system resource code can only be returned by the primary storage unit.
09   The remote mirror and copy volume pair was suspended when the primary or secondary storage unit was rebooted or when the power was restored. The paths to the secondary storage unit might not be disabled if the primary storage unit was turned off. If the secondary storage unit was turned off, the paths between the storage units are restored automatically, if possible. After the paths have been restored, issue the mkpprc command to resynchronize the specified volume pairs. Depending on the state of the volume pairs, you might have to issue the rmpprc command to delete the volume pairs and reissue a mkpprc command to reestablish the volume pairs.
0A   The remote mirror and copy pair was suspended because the host issued a command to freeze the remote mirror and copy group. This system resource code can only be returned if a primary volume was queried.


Appendix C. CLI migration
In this appendix, we propose a way to migrate Copy Services tasks from the ESS environment to the DS Copy Services environment. The Copy Services functions described here cover the graphical user interface (GUI) and the command-line interface (CLI).


Migrating ESS CLI to DS CLI


With the introduction of the IBM DS8000 Storage Unit, a new Copy Services application is also introduced. The Copy Services functions can be issued via the DS graphical user interface (GUI) or via the DS CLI. The Advanced Copy Services functions that are available on the ESS 800 are also available on the DS8000. Although the functions are still available, there are some differences that need to be considered when replacing your ESS CLI with the DS CLI:
- Point-in-Time Copy (FlashCopy) does not support Consistency Groups on the GUI.
- Fibre Channel is used for Metro Mirror, Global Mirror, and Metro/Global Copy.
- The GUI runs real-time only (tasks cannot be saved), while the CLI can be invoked using a saved script.
- The DS CLI supports both the ESS 800/750 and the DS8000. The ESS 800 must be at Licensed Internal Code (LIC) level 2.4.3.15 or later.

Reviewing the ESS tasks to migrate


Review what Copy Services tasks you wish to migrate. You can check these tasks from your ESS GUI or ESS CLI. Example C-1 shows the esscli list command to display the tasks.
Example: C-1 esscli list task
esscli list task -s copy_services-server -u csadmin -p passw0rd
Wed Nov 24 10:29:31 EST 2004 IBM ESSCLI 2.4.0
Task Name          Type                 Status
------------------------------------------------------------------------
H_Epath_test16     PPRCEstablishPaths   NotRunning
H_Epath_test17     PPRCEstablishPaths   NotRunning
Brocade_pr_lss10   PPRCEstablishPair    NotRunning
Brocade_pr_lss11   PPRCEstablishPair    NotRunning
Flash10041005      FCEstablish          NotRunning

You can use ESS CLI to display the contents of each saved task and write the contents to a file. See Example C-2.
Example: C-2 esscli show task
esscli show task -s copy_services_server -u csadmin -p passw0rd -d name=Flash10041005
Wed Nov 24 10:37:17 EST 2004 IBM ESSCLI 2.4.0
Taskname=Flash10041005
Tasktype=FCEstablish
Options=NoBackgroundCopy
SourceServer=2105.23953
TargetServer=2105.23953
SourceVol   TargetVol
------------------------------------------------------------------------
1004        1005


You can also check your saved tasks via the ESS GUI. See Figure C-1.

Figure C-1 ESS Copy Services GUI tasks panel

Highlight the task and click the information panel. See Figure C-2.

Figure C-2 ESS task information

Also review the specific server scripts (depending on the OS) that set up and execute the saved ESS CLI tasks. You may need to edit or translate these scripts in order to run your DS CLI saved tasks.

Important: On the ESS 800, open systems volume IDs are given in an 8-digit format, xxx-sssss, where xxx is the LUN ID and sssss is the serial number of the ESS 800. In the example used in this appendix, the volumes shown are 004-23953 to 005-23953. These volumes are open systems or fixed block volumes. When referring to them in the DS CLI, you must add 1000 to the volume ID, so volume 004-23953 is volume ID 1004 and volume 005-23953 is volume ID 1005. This is very important because on the ESS 800, the following address ranges are actually used:
0000 to 0FFF   System z CKD volumes (4096 possible addresses)
1000 to 1FFF   Open systems fixed block LUNs (4096 possible addresses)

If we intend to use FlashCopy to copy ESS LUN 004-23953 onto 005-23953 using the DS CLI, we must specify 1004 and 1005. If instead we specify 0004 and 0005, we will actually run the FlashCopy against CKD volumes. This may result in an unplanned outage on the System z host that was using CKD volume 0005. The ESS CLI command, show task, will show the correct value for the volume ID.
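One way to double-check the mapping before running a FlashCopy is to list the fixed block volumes for the addresses in question with the DS CLI. This is only a sketch, reusing the example volume IDs from this appendix:

  dscli> lsfbvol -dev IBM.2105-23953 1004-1005

If the IDs are correct open systems volumes, they are listed here; CKD addresses such as 0004 and 0005 would not appear in the lsfbvol output.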


Converting the individual tasks


Choose the ESS CLI tasks that you need to translate to the DS CLI. Refer to the IBM System Storage DS8000 Command-Line Interface Users Guide, SC26-7916. You can then save each translated task and run it in the DS8000 CLI environment.

Note: The DS GUI (via the DS HMC) does not save Copy Services tasks like the ESS GUI does. You can only use the DS GUI for Copy Services in real-time mode.

Also translate (if needed) and edit the individual server scripts that set up and execute the saved DS CLI scripts. See Table C-1 for an example of command translation.
Table C-1 Converting ESS CLI to DS CLI

Task parameter           ESS CLI parameter   DS CLI conversion     Description
Tasktype                 FCEstablish         mkflash               Establish FlashCopy.
Options                  NoBackgroundCopy    -nocp                 No background copy upon FlashCopy.
SourceServer             2105.23953          -dev IBM.2105-23953   Device ID. Used only once in DS CLI.
TargetServer             2105.23953          N/A                   Device ID. Used only once in DS CLI.
Source and Target vols.  1004 1005           1004:1005             Separated by a colon in DS CLI.

DS CLI commands
Example C-3 is the translation to DS CLI commands.
Example: C-3 DS CLI mkflash command
dscli> mkflash -nocp -dev IBM.2105-23953 1004:1005
CMUC00137I mkflash: FlashCopy pair 1004:1005 successfully created.
dscli> lsflash -dev IBM.2105-23953 1004:1005
ID        SrcLSS SequenceNum Timeout ActiveCopy Recording Persistent Revertible
===============================================================================
1004:1005 10     0           120     Disabled   Disabled  Disabled   Disabled

See Figure C-3 for the GUI output.

Figure C-3 ESS GUI FlashCopy window


ESS/DS CLI comparison


Table C-2 shows a brief comparison of the major components between the ESS CLI and the DS CLI.
Table C-2 ESS and DS CLI commands and parameters comparison

ESS CLI                  DS CLI                                                   Comments
list server              lsserver                                                 Like the 2105, the 2107 storage facility image contains one pair of servers. See Note 1.
list volumespace         lsextpool, showextpool, lsrank, showrank, lsarray, showarray, lsarraysite
create volumespace       mkextpool, mkarray, mkrank
delete volumespace       rmrank, rmarray, rmextpool
list diskgroup           lsarraysite                                              A 2107 Array Site consists of eight disk drives (DDMs) that are made into a RAID array. The 2107 does not support the JBOD Array configuration.
list port                lsioport, showioport                                     The 2107 CLI lsioport and showioport commands include the metrics parameter, which returns the performance counter values for the respective I/O port IDs. The metrics parameter provides the means to monitor I/O port performance statistics. See Note 2.
set port                 setioport
list volume              lsfbvol, lsckdvol                                        See Note 3.
create volume            mkfbvol, mkckdvol
set volume               chfbvol, chckdvol
list pav                 lsckdvol, showckdvol
create pav               mkckdvol
delete pav               rmckdvol
list volumeaccess        lsvolgrp, showvolgrp                                     See Note 4. The 2107 CLI commands include the volume group ID parameter.
create volumeaccess      mkvolgrp, chvolgrp
delete volumeaccess      rmvolgrp
list hostconnection      lshostconnect, showhostconnect                           For 2107, the hostconnect commands concern SCSI-FCP host port connections to ESS I/O ports that are configured for SCSI-FCP and identified access mode.
create hostconnection    mkhostconnect
delete hostconnection    rmhostconnect
set hostconnection       chhostconnect
list log                 N/A
list featurecode         lsda, lshba, lsioencl, lsuser, mkuser, rmuser, chuser, lsstgencl    The 2107 CLI commands can display feature codes when the appropriate parameters are used with the commands.
list task                NA                                                       Unlike the 2105, the 2107 CLI Copy Services functions are not task oriented. The 2107 CLI provides a complete set of FlashCopy and PPRC make, change, remove, list, and show commands.
show task                NA
list pprcpaths           lspprcpath
rsExecuteTask            mkflash, rmflash, mkpprc, rmpprc, mkpprcpath, rmpprcpath, mksession, chsession, rmsession    The 2107 CLI provides a complete set of FlashCopy and PPRC commands that may be used in the coding of scripts that emulate 2105 Copy Services tasks.
rsList2105s              lssu                                                     The 2107 CLI lssu command returns both 2105 and 2107 objects that are contained by the ESS Network Interface domain.
rsQuery, rsQueryComplete, rsFlashCopyQuery    lsflash, lspprc                     These 2107 Copy Services CLI commands are equivalent to the respective 2105 CLI commands. The 2107 mkflash and mkpprc commands provide a wait flag that delays command response until copy complete status is achieved.
rsTestConnection         lsavailpprcport


Note 1: Volume space configuration is a primary difference between 2105 and 2107. Like the 2105, a 2107 Storage Facility Image volume space contains a RAID 5 or RAID 10 Array and Rank that are configured from an Array Site. For the 2105, one command configures an Array Site into a RAID array and Rank. For the 2107, one command configures an Array Site into an Array, and a second command configures an Array into a Rank. For 2105, a Rank is configured as fixed block or CKD. For the 2107, a Rank is assigned to a user-defined Extent Pool object, which the user defines as either the fixed block or CKD storage type. The interleave volume construct does not exist for 2107. For the 2105, a volume is configured from a specific Rank, and cannot span Rank boundaries. For the 2107, a volume is configured from an Extent Pool. An Extent Pool can contain multiple Ranks. A 2107 volume consists of one or more Extents that can be allocated from one or more Ranks. A fixed block Extent is 1 GB (128 logical blocks). Each block contains 512 bytes of usable data space. For 2105, a Rank is either assigned to server 0 or server 1, depending on the Array Site location. A 2105 Rank is assigned to one of 32 possible LSS IDs, depending on the device adapter pair location and storage type configuration. For 2107, an Extent Pool is assigned to server 0 or server 1. A Rank that is configured from any Array Site can be assigned to a server 0 or 1 Extent Pool. Array Site position and device adapter pairs are not factors for the Rank to Extent Pool assignment. A volume that is created from a server 0 Extent Pool is assigned to an even-numbered LSS ID. A volume created from a server 1 Extent Pool is assigned to an odd numbered LSS ID. A user must define at least two Extent Pools (0 and 1), but can define as many Extent Pools as there are Ranks. For 2105, a user can delete a Rank but cannot delete a volume. For 2107, a user can delete a single volume, Rank, or Extent Pool. The 2107 CLI showrank and showextpool commands include a metrics parameter that returns the performance counter values for a specified Rank or Extent Pool ID. The metrics parameter provides the means to monitor Rank and Extent Pool performance statistics.

Note 2: Like 2105, a 2107 Fibre Channel SCSI-FCP I/O port can be configured for either the point-to-point/switched fabric or FC-AL connection topologies. A port that uses the point-to-point/switched fabric topology can be simultaneously used for OS host system I/O and for PPRC path configurations. The Fibre Channel SCSI-FCP IO port only allows identified host system ports to access volumes. A host system port WWPN must be identified (registered) to each ESS IO port through which volume access is intended. Host system port WWPN identification is accomplished by the CLI mkhostconnect command.

Note 3: A 2107 storage facility image can contain up to 65,536 volumes. A 2105 Storage Unit can contain up to 4096 FB volumes and 4096 CKD volumes. Otherwise, the 2105 and 2107 volume definitions and characteristics are essentially identical. The 2107 CLI provides a specific set of volume commands for each storage type, fixed block or CKD, as a means to clarify input parameter and output device adapter definitions. The 2107 CLI showfbvol and showckdvol commands include a metrics parameter that returns the performance counter values for a specified volume ID. The metrics parameter provides the means to monitor volume performance statistics.


Note 4: The 2105 volumeaccess commands concern volume ID assignment to a SCSI-FCP host port initiator, or WWPN. For 2107, volume IDs are assigned to a user-defined volume group ID (mkvolgrp and chvolgrp). A volume group ID is then assigned to one or more host system ports (mkhostconnect and chhostconnect) as a means to complete the volume access configuration. The volume group construct also exists in the 2105 internal code, but the construct is not externalized by the 2105 Specialist or CLI commands. For 2107 fixed block volumes, a Volume Group must be configured as either SCSI-mask or SCSI-map-256, depending whether the volume group is accessed by a SCSI-FCP host port that used the report LUNs or poll LUNs access method protocol.


Related publications
The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this book.

IBM Redbooks
For information about ordering these publications, see How to get IBM Redbooks on page 782. Note that some of the documents referenced here may be available in softcopy only.
IBM System Storage DS8000 Series: Architecture and Implementation, SG24-6786
IBM System Storage Business Continuity Solutions Guide, SG24-6547
IBM System Storage Solutions Handbook, SG24-5250
IBM System Storage DS8000: Copy Services with System z, SG24-6787
IBM TotalStorage Productivity Center for Replication on Windows 2003, SG24-7250
IBM TotalStorage Productivity Center for Replication on AIX, SG24-7407
IBM TotalStorage Productivity Center for Replication on Linux, SG24-7411
If you are implementing Copy Services in a mixed technology environment, you may be interested in referring to the following manuals on the ESS and DS6000:
IBM TotalStorage Enterprise Storage Server Implementing ESS Copy Services in Open Environments, SG24-5757
IBM TotalStorage Enterprise Storage Server Implementing ESS Copy Services with IBM eServer zSeries, SG24-5680
The IBM TotalStorage DS6000 Series: Copy Services with IBM eServer zSeries, SG24-6782
DFSMShsm ABARS and Mainstar Solutions, SG24-5089
Practical Guide for SAN with pSeries, SG24-6050
Fault Tolerant Storage Multipathing and Clustering Solutions for Open Systems for the IBM ESS, SG24-6295
Implementing Linux with IBM Disk Storage, SG24-6261
Linux with zSeries and ESS: Essentials, SG24-7025

Other publications
These publications are also relevant as further information sources. Note that some of the documents referenced here may be available in softcopy only.
IBM System Storage DS8000 Command-Line Interface Users Guide, SC26-7916
IBM System Storage DS8000: Host Systems Attachment Guide, SC26-7917
IBM System Storage DS8000: Introduction and Planning Guide, GC35-0515
IBM System Storage Multipath Subsystem Device Driver Users Guide, SC30-4131


IBM System Storage DS8000: Users Guide, SC26-7915
IBM System Storage DS Open Application Programming Interface Reference, GC35-0516
IBM System Storage DS8000 Messages Reference, GC26-7914
z/OS DFSMS Advanced Copy Services, SC35-0248
Device Support Facilities: Users Guide and Reference, GC35-0033
IBM TotalStorage Productivity Center for Replication Users Guide, SC32-0103
IBM Systems - iSeries Backup and Recovery Version 5 Revision 4, SC41-5304-08
IBM Systems - iSeries Backup Recovery and Media Services for iSeries Version 5, SC41-5345-05

Online resources
These Web sites and URLs are also relevant as further information sources:
IBM Disk Storage Feature Activation (DSFA) Web site:
http://www.ibm.com/storage/dsfa
Documentation for the DS8000:
http://www.ibm.com/servers/storage/support/disk/2107.html
The Interoperability Matrix:
http://www.ibm.com/servers/storage/disk/ds8000/interop.html
Fibre Channel host bus adapter firmware and driver level matrix:
http://knowledge.storage.ibm.com/servers/storage/support/hbasearch/interop/hbaSearch.do
Emulex:
http://www.emulex.com/ts/dds.html
JNI:
http://www.jni.com/OEM/oem.cfm?ID=4
QLogic:
http://www.qlogic.com/support/ibm_page.html
IBM:
http://www.ibm.com/storage/ibmsan/products/sanfabric.html
McDATA:
http://www.mcdata.com/ibm/
Cisco:
http://www.cisco.com/go/ibm/storage

How to get IBM Redbooks


You can search for, view, or download Redbooks, Redpapers, Hints and Tips, draft publications and Additional materials, as well as order hardcopy Redbooks or CD-ROMs, at this Web site:
ibm.com/redbooks


Help from IBM


IBM Support and downloads
ibm.com/support

IBM Global Services


ibm.com/services


Index
A
Access Density (AD) 658 add paths to existing paths between LLSs 229 Add-on license 19 AIX 736 FlashCopy 732 allocation unit 132 AllowTgtSE 141 architecture Copy Services 9 asynchronous data replication 308 Asynchronous PPRC see Global Mirror attributes consistent asynchronous remote copy solution 311 synchronous data replication 308 authorized level licensing 22 automation and management 176 commands comparison for Global Copy 268 FlashCopy 105 remote FlashCopy 116 structure 34 commitflash 110, 148, 395, 470 connectivity for Global Mirror to remote site 314 Consistency Group 133, 182183, 319, 451, 469 data consistency 181 formation 320 parameters 321 verify state 338 what is it 180 Consistency Group (CG) 446 Consistency Group drain time 433 Consistency Group FlashCopy 93 Consistency Group interval time 433, 439 consistencygrp 466 consistgrp 455 Continuous Availability (CA) 588 Continuous Operations (CO) 588 coordinate 374 Copy Pending 465 copy pending 194 Copy Services 753 2105 ESS 800 14 architecture 9 differences of DS CLI and DS GUI DS CLI commands 35 DS8000 13 DS8000 and ESS interoperability 696 how the new structure of Copy Services works 12 HP-UX 753 introduction to new structure 10 network components 199 Remote Mirror and Copy between Storage Complexes 14 using VERITAS Volume Manager SUN Solaris 750 what is a Storage Complex 11 Windows volumes 739 Copy Services Domain 10 adding 698 Copy Services network components Metro Mirror 199 copy set 50 create Global Copy relationship between local and remote volume 315 Global Mirror environment 327 creating paths Metro Mirror 224 Cross Site Mirroring 591

B
B volumes consistent data 342 BackgroundCopy 141 bandwidth Metro Mirror 188 basic concepts FlashCopy 81 basic concepts of Global Mirror 312

C
capacity Global Copy 300 captype 135 cascade 456 cascading 691 cfgmgr 736 cginterval 374 change rate 134 charging example 22 chfbvol 146 chhostconnect 780 chkpprc 599 chlss 466 chsession 352, 379, 484 chsestg 135 chvolgrp 780 CLI migration convert the individual tasks 776 migrating ESS CLI to DS CLI 774 review ESS tasks to migrate 774 clustering 179 command-line configuration examples 201



D
data backup system 80 data consistency 175, 181182, 692 Consistency Group 181 data migration 692 data mining system 80 decoding port IDs 704 default profile 34 define Global Mirror session 317 delete existing FlashCopy relationship using the DS SM 128 delete paths 234 Dense Wave Division Multiplexor (DWDM) 265 Dense Wave Division Multiplexor or DWDM 174 dependent writes 304, 308 destage 132, 145 DFSA 22 Disaster Recovery practice readiness 409 Disaster Recovery test 497 disaster recovery test scenarios 495 Disk Storage Feature Activation application (DSFA) 22 distance Metro Mirror 188 double cascading 691692 drain 374 draining 469 DS API 102, 268 DS CLI 14, 102, 116, 120, 123, 268, 774 application 37 command structure 34 Copy Services commands 35 default profile 34 determine WWNN for ESS 800 708 establishing logical paths 709 functionality 32 Global Copy examples 270 Global Mirror 351 Global Mirror examples 368 help 40 highlights 32 interactive mode 37, 39 introduction 32 local FlashCopies 104 Metro Mirror 200 operating systems 32 profile 33 return codes 39 script command mode 38 script mode 37 single-shot mode 37 updating profile 697 user accounts 33 user assistance 40 DS Command-Line Interface see DS CLI DS front ends FlashCopy commands 103 DS GUI establish paths 286

Global Copy examples 286 Global Mirror 353 Global Mirror examples 417 manage Global Mirror environment 435 Metro Mirror 201 path creation 704 DS SM 102, 118, 120121, 123, 125, 128, 268 delete existing FlashCopy relationship 128 establish Global Mirror environment 418 resynchronize target 127 DS8000 Copy Services 13 Copy Services with ESS 696 create a user ID 697 data consistency 175 DS CLI highlights 32 interoperability with ESS 800 695 Remote Mirror and Copy 6 dspessdta 599, 602

E
Element Manager 13 environment remove Global Mirror 332 ESS Copy Services with DS8000 696 determining volume size 700 ESS 800 Copy Services 14 create a user ID 697 interoperability with DS8000 695 volume address considerations 703 ESS CLI 774 ESS tasks to migrate 774 establish a Global Mirror environment with DS SM 418 establish pair Global Copy 258 example failover/failback 348 examples add paths to existing paths between LLSs 229 command-line configuration 201 creating Metro Mirror pairs 236 creating paths 224 delete paths 234 DS Storage Manager GUI 286 establish a Global Mirror environment with DS SM 418 establish Global Copy pairs with DS GUI 290 establish paths with the DS Storage Manager GUI 286 failback 246 failover 244 FlashCopy 166 Global Copy and DS CLI 270 Global Mirror and DS CLI 368 manage Global Mirror environment with DS GUI 435 modify a Global Mirror session 439 pause a Global Mirror session 437 resume 242

786

IBM System Storage DS8000: Copy Services in Open Environments

resume a Global Mirror session 438 simplifying Metro Mirror commands 202 suspend 241 exportvg 736 extent pool 132, 135, 159 extlongbusy timeout 466

F
failback 179, 246 Metro Mirror 246 failed state 150 failover 179, 464, 476 B to A 338 Metro Mirror 244 failover/failback example 348 failoverpprc 393, 405, 477 fast reverse restore 98 FATA 160 Fibre Channel Metro Mirror links 186 FlashCopy 3, 113, 754 and Global Copy 88 and Global Mirror for open systems 89 and Metro Mirror 87 apply changes to existing FlashCopy relationship 125 basic concepts 81 commands and parameters used 105 commands in the DS front ends 103 commit data to target using commitflash 110 comparison of display properties from the DS CLI and DS SM 123 Consistency Group FlashCopy 93 data backup system 80 data mining system 80 delete existing FlashCopy relationship using the DS SM 128 display existing FlashCopies using lsflash 107 display properties of existing FlashCopy using the DS SM 121 establish 82 existing Global Copy primary 93 existing Metro Mirror 93 fast reverse restore 98 flow 102 Freeze FlashCopy Consistency Group 93 full volume copy 85 HP-UX 754 increment FlashCopy using resyncflash 111 Incremental FlashCopy 94 initiate background copy for persistent FlashCopy relationship 126 initiate using Create 119 initiate using mkflash 106 integration system 81 interfaces 100101 introduce 316 limitations with multiple FlashCopies 92 local 103 local commands 106 local FlashCopies using the DS CLI 104

managing for ESS 800 716 multiple relationship 92 nocopy option 86 operational areas 80 options 91, 100 overview 79 parameters for initial FC using DS SM and DS CLI 120 parameters used in remote FlashCopy 116 performance 154 Persistent 97 production backup system 80 reading from the source 83 reading from the target 84 remote 97, 104 remote FlashCopies using the DS CLI 116 remote FlashCopy commands 116 remove local FlashCopy using rmflash 115 remove relationship 331 reset FlashCopy Consistency Group using unfreezeflash 116 reset target to contents using revertflash 114 resynchronize target using the DS SM 127 reverse existing FlashCopy using the DS SM 125 reverse restore 98 run new background copy for persistent FlashCopy using rmflash 115 set an existing FlashCopy to revertible using setflashrevertible 109 terminating the FlashCopy relationship 85 test system 80 using the DS SM front end 118 with other Copy Services 87 writing to the source 83 writing to the target 85 FlashCopy examples create backup 167 create test system or integration system 166 creating a FlashCopy for backup purposes without volume copy 167 multiple setup of a test system with same contents 166 one time test system 166 using a target volume to restore its contents back to the source 169 using an Incremental FlashCopy for backup purposes 168 FlashCopy SE 5 flow FlashCopy 102 formation Consistency Group 320 freeze 184 Freeze FlashCopy Consistency Group 93 freezepprc 462, 466, 497 Full Duplex 464 full volume copy 85 fuzzy copy 259

Index

787

G
Geographically Dispersed Open Clusters (GDOC) 726 Global Copy 3, 6 adding and removing PPRC paths 280 capacity 300 changing mode to Metro Mirror 277, 279 clear environment 273 command comparison 268 consistent point-in-time copy 259 converting to Metro Mirror with DS GUI 295 create pairs 271 create PPRC paths 271 determine available fibre links 271 establish pair 258 establish pairs with DS GUI 290 examples 270 examples with DS GUI 286 monitoring the copy status with DS GUI 294 overview 254 pausepprc 276 performance 300 positioning 256 primary 93 remove pairs 273 remove PPRC paths 274 resume 276 resumepprc 276 rmpprc 273 scalability 300 setting up environment 271 state change logic 255 suspend 276 symmetrical configuration 265 Global Mirror 3, 7, 133, 151, 308, 435 add A volumes to session on each LSS 372 add and remove A volume to existing environment 387 add and remove LSS to existing Global Mirror environment 389 add and remove Subordinate 391 add or remove storage servers or LSSs 330 attributes of synchronous data replication 308 basic concepts 312 change tuning parameters 385 clear up environment with DS CLI 377 close session 380 connectivity to remote site 314 Consistency Group drain time 361 Consistency Group formation 320 Consistency Group parameters 321 consistent asynchronous remote copy solution 311 consistent data on B volumes 342 coordination time 360 create 327 create FlashCopy with DS GUI 427 create Global Copy with DS GUI 422 create PPRC paths from A to B 404 create PPRC paths from B to A 401 create session with DS GUI 431 define session 317

dependent writes 304, 308 DS CLI 351 DS GUI 353 DS GUI examples 417 establish environment with DS GUI 418 failback Global Copy from A to B 406, 414 failback Global Copy from B to A 401 failover 338 failover from B to A 393 failover Global Copy from A to B 405 failover Global Copy from B to A 411 failover/failback example 348 for open systems 89 form Consistency Group 319 interfaces 350 introduce FlashCopy 316 local site 345 manage environment 382 modify parameters 330 modify session 329, 439 multiple disk storage servers 332 non-revertible 339 operation 336 pause 410 pause and resume Consistency Group formation 382 pause Global Copy pairs from A to B 411 perform disaster recovery testing on D volume 414 performance considerations at coordination time 359, 722723 populate session with volumes 318 primary site failure 337 query for Global Copy first pass completion 403 query out-of-sync tracks for Global Copy 404 quiesce application at remote site 404 recovery after local site failure using DS CLI 391 Recovery Point Objective (RPO) 358 recovery scenario 336 reestablish FlashCopy from B to C 399 reestablish FlashCopy relationship 412 reestablish FlashCopy relationship between B and C volumes 343 relationship between local and remote volume 315 remote storage server configuration 361, 723 remove A volumes from session 379 remove environment 332 remove FlashCopy between B and C volumes 380 remove FlashCopy relationship 331 remove Global Copy pairs 381 remove PPRC paths 381 replicating data 304 restart application 344 restart application at remote site 400 resume 416 return to local site 400 reverse FlashCopy from B to C 396, 412 revert action 341 revertible 339 session 313, 319 simple configuration 314 start 407

788

IBM System Storage DS8000: Copy Services in Open Environments

start application at the local site 409 start for a specified session 373 start session on each LSS 371 start with Subordinate 376 summary of recovery scenario 392 synchronous data replication 304 take FlashCopy from B to D 413 terminate 378, 392 terminate and start 385 terminology 326 topology 331 verify for Consistency Group state 338 verify for valid Consistency Group state 394 view sessions volumes 437 volumes 329 wait for Global Copy first pass to complete 416 Global Mirror Master 451 Global Mirror session 313 pause 437 go-to-sync 6

DS6000 and ESS 800
   managing ESS 800 FlashCopy 716
   managing Metro Mirror or Global Copy pairs 710
DS8000 and ESS, Copy Services 696
ESS 800 and DS8000 695
   adding Copy Services Domain 698
   Copy Services 696
   volume size considerations for RMC 699

H
HACMP/XD 725
HACMP/XD for Metro Mirror 725
heartbeat 592
help 40
   DS CLI 40
High Availability (HA) 588
host volume 50
HP-UX 753-755
hypervisor 589

I
I/O port 419
IBM TotalStorage Productivity Center 133
IBM VDS hardware provider 746
IBM VSS Provider 742
importvg 736
Incremental FlashCopy 94
Incremental Resync 505
Independent Auxiliary Storage Pool (IASP) 592
indicator feature 18
individual tasks, CLI migration 776
initfbvol 146, 150
initial disk synchronization, Metro Mirror 178
initial synchronization 194
initiate FlashCopy using Create 119
Input output processor (IOP) 590
integration system 81
interactive mode, DS CLI 37, 39
interfaces
   FlashCopy 100-101
   Global Mirror 350
   remote FlashCopy 104
internal table 134
interoperability 585, 689
introduce FlashCopy 316
introduction, RMC 6
iSeries Access for Windows 600
iSeries Navigator 600

J
journal volume 51

L
license 82
licensing 18
   authorized level 22
   charging example 22
links, Metro Mirror 185
local 344
   FlashCopy 103
local FlashCopy commands, parameters 105
local site return 345
logical paths, Metro Mirror 187
logical size 132
long busy 466
lsflash 106-107, 111, 134, 141, 470
lspprc 465
LSS 330
   add or remove in Global Mirror 330
LSS design, Metro Mirror 188
lssession 352, 408, 461
lssestg 135
LVM 737

M
man page 40
manage Global Mirror environment with DS GUI 435
management console 10
managepwfile 34, 698
Master 468
Master disk subsystem 446
maximum coordination interval 433, 439
maximum time writes inhibited to remote site 439
metrics 459
Metro Mirror 3, 6, 93
   adding capacity in new DS6000s 195
   automation and management 176
   bandwidth 188
   changing options 233
   clustering 179
   Copy Services network components 199
   creating pairs 236
   creating paths 224
   data consistency 175
   distance 188
   DS CLI 200
   DS GUI 201
   examples 202
   failback 179, 246
   failover 179, 244
   failover and failback 179
   Fibre Channel links 186
   freeze 184
   initial disk synchronization 178
   initial synchronization 194
   links 185
   logical paths 187
   LSS design 188
   overview 174
   pausepprc 209
   performance 194
   resume example 242
   resumepprc 209
   rmpprc 206
   rolling disaster 176
   scalability 195
   symmetrical configuration 189
   traps 768
   volumes 190
Metro/Global Mirror 7, 444
MGM 444
   configuration 450
   multiple subsystems 451
   planned recovery 475
Microsoft Virtual Disk Service see VDS
Microsoft Volume Shadow Copy Services see VSS
mkextpool 150
mkflash 40, 106-107, 140
mkgmir 352, 408, 458
mkhostconnect 780
mkpprc 271
mkpprcpath 455
mkremoteflash 284
mksession 352, 458
mksestg 135
mkvolgrp 780
modify a Global Mirror session 329
modify a session, Global Mirror 439
modify Global Mirror session parameters 330
multiple relationship FlashCopy 92
multi-rank 160

N
Network Interface Server 13
nocopy 131, 134, 159
nocopy option 86
non-revertible 339

O
open systems
   AIX and FlashCopy 732
   AIX and Remote Mirror and Copy 736
   Copy Services using VERITAS Volume Manager, SUN Solaris 750
   Copy Services with Windows volumes 739
   HP-UX and Copy Services 753
   HP-UX and FlashCopy 754
   HP-UX with Remote Mirror and Copy 755
   Microsoft Virtual Disk Service (VDS) 744
   Microsoft Volume Shadow Copy Services (VSS) 741
   SUN Solaris and Copy Services 749
   Windows and Remote Mirror and Copy 738
open systems specifics
   Virtual Disk Service overview 744
operational areas 80
options, FlashCopy 91
Out of Sync Tracks 465
OutOfSyncTracks 134
overhead 132, 159
over-provisioned 149
overview 79
   Global Copy 254

P
page fault 590
parameters 105, 116, 330
   Consistency Group 321
   FlashCopy commands 105
paths
   add paths to existing paths between LSSs 229
   creating 224
   delete 234
pause a Global Mirror session 437
pausegmir 352, 382
pausepprc 467
   Global Copy 276
   Metro Mirror 209
performance 160
   FlashCopy overview 154
   Global Copy 300
   Metro Mirror 194
Persistent FlashCopy 97
physical capacity 134-135
physical size 132
planned outage 595
planned recovery 475
planning 160
point-in-time copy
   creating Global Copy 259
populate Global Mirror session 318
port IDs, decoding 704
positioning
   Global Copy 256
PPRC, volume size considerations 699
PPRC links 454
PPRC paths 454
PPRC-XD 254
PPRC-XD see Global Copy
Practice Sessions 51
primary site failure 337
procedure, periodic offsite backup 280
production backup system 80
profile, DS CLI 33

Q
queue full 175, 183, 497

R
RAID 10 160
RAID 5 160
ranks 132
recapthreshold 135
recovery, Global Mirror scenario 336
Recovery Point Objective 444
Recovery Point Objective (RPO), Global Mirror 358
recreatevg 735
Redbooks Web site 782
   Contact us xxii
reestablish FlashCopy relationship, B and C volumes 343
remote FlashCopy 97, 104
Remote Mirror and Copy 736, 755
   HP-UX 755
Remote Mirror and Copy see RMC
remove a Global Mirror environment 332
remove FlashCopy relationship 331
repcap 134-135
replicating data over a distance 304
repository 5, 130, 132, 135
repository overhead 134
repository size 133, 135
repoverh 134
reppercent 135
restart application at remote site 344
resume
   Global Mirror session 438
   Metro Mirror 242
resumegmir 352, 384
resumepprc 276
   Metro Mirror 209
resyncflash 111-112, 140
resynchronize target using DS SM 127
resyncremoteflash 284
return codes, DS CLI 39
return to local site 345
reverse restore 98
reverse source-target relationship using reverseflash 113
reverseflash 106, 113-114, 140
revertflash 114-115, 148, 395, 469
Revertible 470
revertible 109, 339, 394-395
RMC 3, 6, 699
   commands 37
   Global Copy 6
   Global Mirror 7
   Metro Mirror 6
rmfbvol 146
rmflash 115, 145-146
rmgmir 352, 378, 385
rmpprc
   Global Copy 273
   Metro Mirror 206
rmsession 352, 380
rmsestg 135
rolling disaster 176
RPO 444

S
scalability
   Global Copy 300
   Metro Mirror 195
scheduled outage 595
script command mode, DS CLI 38
script mode, DS CLI 37
session 319, 329
   modify 439
   pause 437
   view properties 435
   view volumes 437
setflashrevertible 109, 111, 148
SFI 10
showckdvol 779
showextpool 779
showfbvol 779
showgmir 352, 408, 459
showgmiroos 460
showsestg 135
Simple Network Management Protocol see SNMP
simplifying commands, Metro Mirror 202
single-level storage 590
single-shot mode, DS CLI 37
sizing 133
SNMP 149, 765
   Metro Mirror traps 768
   notification overview 766
   trap 101 766
   trap 202 768
   trap 210 768
   trap 211 768
   trap 212 769
   trap 213 769
   trap 214 769
   trap 215 769
   trap 216 769
   trap 217 770
   traps 765
Space Efficient volume 5
spindle 160
Standard FlashCopy 81
start Global Mirror session 319
state change logic, Global Copy 255
Storage Complex 10-11
Storage Facility Image 10, 90
storage LPAR 90
Storage Pool Striping 160
storage servers
   add or remove in Global Mirror 330
Storage Unit 10
stripe 132
Subordinates 469
SUN Solaris
   Copy Services 749
   FlashCopy with VERITAS Volume Manager 750
   Remote Mirror and Copy with VERITAS Volume Manager 752
suspend
   Metro Mirror 241
   Metro Mirror example 241
Suspended 466
switchover 595
swpprc 599, 610
symmetrical configuration
   Global Copy 265
   Metro Mirror 189
synchronous data replication 304
Synchronous PPRC see Metro Mirror
sysbas 595, 598
System i 588
   Copy Services Toolkit 596
   Disk Pool 593
System i5 logical partitions 589
System i5
   backup node 592
   cluster 591
   cluster node 591
   cluster resource group (CRG) 592
   device domain 592
   external storage 589
   Hardware Management Console (HMC) 589
   Input output processor (IOP) 590
   primary node 591
   recovery domain 592
   structure 589
   subject 590
System Storage Interoperation Center (SSIC) 191

T
Target Suspended 479
target volume 51, 131
terminology 326
test system 80
tgtreleasespace 146
tgtse 140
Three Site BC 55
threshold 149
topology 331
TPC 133
track 132
track space efficient 140
TSE 140
TSM for Advanced Copy Services 722
type mmir 456

U
unfreezeflash 116, 148
unfreezepprc 466, 497
unplanned outage 595
user groups 33
user accounts, DS CLI 33
User ASP 593
user assistance, DS CLI 40

V
VDS 744
   product components 746
VERITAS Volume Manager 738
view session properties 435
vircap 135
virtual 159
virtual capacity 134-135, 150
virtual size 81
virtual space 132
volume 129
volume characteristics 140
Volume Shadow Copy Services (VSS) 741
volumeaccess 780
volumes
   add or remove 329
   Metro Mirror 190
VSS 741
   components 741
   function 743

W
Windows 738
   enlarging extended/spanned volumes 740
   extending simple volumes 740
Windows dynamic disks 739
withdraw 145
workload 160
write inhibited 133
write-source inhibit 151
wrkcfgsts 601
WWNN 352
   determine for ESS 800 using DS CLI 708

Z
z/OS Global Mirror 3
zero data loss 444


Back cover

IBM System Storage DS8000: Copy Services in Open Environments

Configuration of Copy Services in heterogeneous environments
New IBM FlashCopy SE
TPC for Replication support
Copy Services with System i

In today's highly competitive and real-time environment, the ability to manage all IT operations on a continuous basis makes the creation of copies and backups of data a core requirement for any IT deployment. Furthermore, it is necessary to provide proactive, efficient Disaster Recovery strategies that can ensure continuous data availability for business operations. The Copy Services functions available with the IBM System Storage DS8000 are designed to be part of these strategies. This IBM Redbooks publication will help you plan, install, configure, and manage the Copy Services functions of the IBM System Storage DS8000 when used in Open Systems and System i environments. We give you the details necessary to implement and control each of the Copy Services functions. Numerous examples illustrate how to use the various interfaces with each of the Copy Services. This book also covers the 3-site Metro/Global Mirror with Incremental Resync feature and introduces the TotalStorage Productivity Center for Replication solution. It should be read in conjunction with The IBM System Storage DS8000 Series: Architecture and Implementation, SG24-6786. There is also a companion book that supports the configuration of the Copy Services functions in z/OS environments, The IBM System Storage DS8000 Series: Copy Services with IBM System z, SG24-6787.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE


IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

For more information: ibm.com/redbooks


SG24-6788-03 ISBN 0738431141
