Install and customize Productivity Center for Disk
Install and customize Productivity Center for Replication
Use Productivity Center to manage your storage
ibm.com/redbooks
International Technical Support Organization

Managing Disk Subsystems using IBM TotalStorage Productivity Center

September 2005
SG24-7097-01
Note: Before using this information and the product it supports, read the information in Notices on page ix.
Second Edition (September 2005)

This edition applies to Version 2 Release 1 of IBM TotalStorage Productivity Center (product number 5608-TC1, 5608-TC4, 5608-TC5).
© Copyright International Business Machines Corporation 2004, 2005. All rights reserved.

Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents
Notices
Trademarks

Preface
  The team that wrote this redbook
  Become a published author
  Comments welcome

Chapter 1. IBM TotalStorage Productivity Center overview
  1.1 Introduction to IBM TotalStorage Productivity Center
    1.1.1 Standards organizations and standards
  1.2 IBM TotalStorage Open Software family
  1.3 IBM TotalStorage Productivity Center
    1.3.1 Data subject matter expert: TotalStorage Productivity Center for Data
    1.3.2 Fabric subject matter expert: Productivity Center for Fabric
    1.3.3 Disk subject matter expert: TotalStorage Productivity Center for Disk
    1.3.4 Replication subject matter expert: Productivity Center for Replication
  1.4 IBM TotalStorage Productivity Center
    1.4.1 Productivity Center for Disk and Productivity Center for Replication
    1.4.2 Event services
  1.5 Taking steps toward an On Demand environment

Chapter 2. Key concepts
  2.1 Standards organizations and standards
    2.1.1 CIM/WEB management model
  2.2 Storage Networking Industry Association
    2.2.1 The SNIA Shared Storage Model
    2.2.2 SMI Specification
    2.2.3 Integrating existing devices into the CIM model
    2.2.4 CIM Agent implementation
    2.2.5 CIM Object Manager
  2.3 Common Information Model (CIM)
    2.3.1 How the CIM Agent works
  2.4 Service Location Protocol (SLP)
    2.4.1 SLP architecture
    2.4.2 SLP service agent
    2.4.3 SLP user agent
    2.4.4 SLP directory agent
    2.4.5 Why use an SLP DA?
    2.4.6 When to use DAs
    2.4.7 SLP configuration recommendation
    2.4.8 Setting up the Service Location Protocol Directory Agent
    2.4.9 Configuring SLP Directory Agent addresses
  2.5 Productivity Center for Disk and Replication architecture

Chapter 3. TotalStorage Productivity Center suite installation
  3.1 Installing the IBM TotalStorage Productivity Center
    3.1.1 Configurations
    3.1.2 Installation prerequisites
    3.1.3 TCP/IP ports used by TotalStorage Productivity Center
    3.1.4 Default databases created during install
  3.2 Pre-installation check list
    3.2.1 User IDs and security
    3.2.2 Certificates and key files
  3.3 Services and service accounts
    3.3.1 Starting and stopping the managers
    3.3.2 Uninstall Internet Information Services
    3.3.3 SNMP install
  3.4 IBM TotalStorage Productivity Center for Fabric
    3.4.1 The computer name
    3.4.2 Database considerations
    3.4.3 Windows Terminal Services
    3.4.4 Tivoli NetView
    3.4.5 Personal firewall
    3.4.6 Change the HOSTS file
  3.5 Installation process
    3.5.1 Prerequisite product install: DB2 and WebSphere
    3.5.2 Installing IBM Director
    3.5.3 Tivoli Agent Manager
    3.5.4 IBM TotalStorage Productivity Center for Disk and Replication Base
    3.5.5 IBM TotalStorage Productivity Center for Disk
    3.5.6 IBM TotalStorage Productivity Center for Replication
    3.5.7 IBM TotalStorage Productivity Center for Fabric

Chapter 4. CIMOM installation and configuration
  4.1 Introduction
  4.2 Planning considerations for Service Location Protocol
    4.2.1 Considerations for using SLP DAs
    4.2.2 SLP configuration recommendation
  4.3 General performance guidelines
  4.4 Planning considerations for CIMOM
    4.4.1 CIMOM configuration recommendations
  4.5 Installing CIM agent for ESS
    4.5.1 ESS CLI install
    4.5.2 ESS CIM Agent install
    4.5.3 Post Installation tasks
  4.6 Configuring the ESS CIM Agent for Windows
    4.6.1 Registering ESS Devices
    4.6.2 Register ESS server for Copy services
    4.6.3 Restart the CIMOM
    4.6.4 CIMOM User Authentication
  4.7 Verifying connection to the ESS
    4.7.1 Problem determination
    4.7.2 Confirming the ESS CIMOM is available
    4.7.3 Setting up the Service Location Protocol Directory Agent
    4.7.4 Configuring IBM Director for SLP discovery
    4.7.5 Registering the ESS CIM Agent to SLP
    4.7.6 Verifying and managing CIMOMs availability
  4.8 Installing CIM agent for IBM DS4000 family
    4.8.1 Verifying and Managing CIMOM availability
  4.9 Configuring CIMOM for SAN Volume Controller
    4.9.1 Adding the SVC TotalStorage Productivity Center for Disk user account
    4.9.2 Registering the SAN Volume Controller host in SLP
  4.10 Configuring CIMOM for TotalStorage Productivity Center for Disk summary
    4.10.1 SLP registration and slptool
    4.10.2 Persistency of SLP registration
    4.10.3 Configuring slp.reg file

Chapter 5. TotalStorage Productivity Center common base use
  5.1 Productivity Center common base: Introduction
  5.2 Launching TotalStorage Productivity Center
  5.3 Exploiting Productivity Center common base
    5.3.1 Configure MDM
    5.3.2 Launch Device Manager
    5.3.3 Discovering new storage devices
    5.3.4 Manage CIMOMs
    5.3.5 Manually removing old CIMOM entries
  5.4 Performing volume inventory
  5.5 Working with ESS
    5.5.1 Changing the display name of an ESS
    5.5.2 ESS Volume inventory
    5.5.3 Assigning and unassigning ESS volumes
    5.5.4 Creating new ESS volumes
    5.5.5 Launch device manager for an ESS device
  5.6 Working with SAN Volume Controller
    5.6.1 Changing the display name of a SAN Volume Controller
    5.6.2 Working with SAN Volume Controller mdisks
    5.6.3 Creating new Mdisks on supported storage devices
    5.6.4 Create and view SAN Volume Controller Vdisks
  5.7 Working with DS4000 family or FAStT storage
    5.7.1 Changing the display name of a DS4000 or FAStT
    5.7.2 Working with DS4000 or FAStT volumes
    5.7.3 Creating DS4000 or FAStT volumes
    5.7.4 Assigning hosts to DS4000 and FAStT volumes
    5.7.5 Unassigning hosts from DS4000 or FAStT volumes
  5.8 Event Action Plan Builder
    5.8.1 Applying an Event Action Plan to a managed system or group
    5.8.2 Exporting and importing Event Action Plans

Chapter 6. TotalStorage Productivity Center for Disk use
  6.1 Performance Manager GUI
  6.2 Exploiting Performance Manager
    6.2.1 Performance Manager data collection
    6.2.2 Using IBM Director Scheduler function
    6.2.3 Reviewing Data collection task status
    6.2.4 Managing Performance Manager Database
    6.2.5 Performance Manager gauges
    6.2.6 ESS thresholds
    6.2.7 Data collection for SAN Volume Controller
    6.2.8 SAN Volume Controller thresholds
  6.3 Exploiting gauges
    6.3.1 Before you begin
    6.3.2 Creating gauges example
    6.3.3 Zooming in on the specific time period
    6.3.4 Modify gauge to view array level metrics
    6.3.5 Modify gauge to review multiple metrics in same chart
  6.4 Performance Manager command line interface
    6.4.1 Performance Manager CLI commands
    6.4.2 Sample command outputs
  6.5 Volume Performance Advisor (VPA)
    6.5.1 VPA introduction
    6.5.2 The provisioning challenge
    6.5.3 Workload characterization and workload profiles
    6.5.4 Workload profile values
    6.5.5 How the Volume Performance Advisor makes decisions
    6.5.6 Enabling the Trace Logging for Director GUI Interface
  6.6 Getting started
    6.6.1 Workload profiles
    6.6.2 Using VPA with predefined Workload profile
    6.6.3 Launching VPA tool
    6.6.4 ESS User Validation
    6.6.5 Configuring VPA settings for the ESS diskspace request
    6.6.6 Choosing Workload Profile
    6.6.7 Choosing candidate locations
    6.6.8 Verify settings for VPA
    6.6.9 Approve recommendations
    6.6.10 VPA loopback after Implement Recommendations selected
  6.7 Creating and managing Workload Profiles
    6.7.1 Choosing Workload Profiles
  6.8 Remote Console installation for TotalStorage Productivity Center for Disk - Performance Manager
    6.8.1 Installing IBM Director Console
    6.8.2 Installing TotalStorage Productivity Center for Disk Base Remote Console
    6.8.3 Installing Remote Console for Performance Manager function
    6.8.4 Launching Remote Console for TotalStorage Productivity Center

Chapter 7. TotalStorage Productivity Center for Fabric use
  7.1 TotalStorage Productivity Center for Fabric overview
    7.1.1 Zoning overview
    7.1.2 Supported switches for zoning
    7.1.3 Deployment
    7.1.4 Enabling zone control
    7.1.5 TotalStorage Productivity Center for Disk eFix
    7.1.6 Installing the eFix
  7.2 Installing Fabric remote console
  7.3 TotalStorage Productivity Center for Disk integration
  7.4 Launching TotalStorage Productivity Center for Fabric

Chapter 8. TotalStorage Productivity Center for Replication use
  8.1 TotalStorage Productivity Center for Replication overview
    8.1.1 Supported Copy Services
    8.1.2 Replication session
    8.1.3 Storage group
    8.1.4 Storage pools
    8.1.5 Relationship of group, pool, and session
    8.1.6 Copyset and sequence concepts
  8.2 Exploiting TotalStorage Productivity Center for Replication
    8.2.1 Before you start
    8.2.2 Creating a storage group
    8.2.3 Modifying a storage group
    8.2.4 Viewing storage group properties
    8.2.5 Deleting a storage group
    8.2.6 Creating a storage pool
    8.2.7 Modifying a storage pool
    8.2.8 Deleting a storage pool
    8.2.9 Viewing storage pool properties
    8.2.10 Storage paths
    8.2.11 Point-in-Time Copy: Creating a session
    8.2.12 Creating a session: Verifying source-target relationship
    8.2.13 Continuous Synchronous Remote Copy: Creating a session
    8.2.14 Managing a Point-in-Time copy
    8.2.15 Managing a Continuous Synchronous Remote Copy
  8.3 Using Command Line Interface (CLI) for replication
    8.3.1 Session details
    8.3.2 Starting a session
    8.3.3 Suspending a session
    8.3.4 Terminating a session

Chapter 9. Problem determination
  9.1 Troubleshooting tips: Host configuration
    9.1.1 IBM Director logfiles
    9.1.2 Using Event Action Plans
    9.1.3 Restricting discovery scope in TotalStorage Productivity Center
    9.1.4 Following discovery using Windows raswatch utility
    9.1.5 DB2 database checking
    9.1.6 IBM WebSphere tracing and logfile browsing
    9.1.7 SLP and CIM Agent problem determination
    9.1.8 Enabling SLP tracing
    9.1.9 ESS registration
    9.1.10 Viewing Event entries
  9.2 Replication Manager problem determination
    9.2.1 Diagnosing an indications problem
    9.2.2 Restarting the replication environment
  9.3 Enabling trace logging
    9.3.1 Enabling WebSphere Application Server trace
  9.4 Enabling trace logging
    9.4.1 ESS user authentication problem
    9.4.2 SVC Data collection task failure due to previous running task

Chapter 10. Database management and reporting
  10.1 DB2 database overview
  10.2 Database purging in TotalStorage Productivity Center
    10.2.1 Performance Manager database panel
  10.3 IBM DB2 tool suite
    10.3.1 Command Line Tools
    10.3.2 Development Tools
    10.3.3 General Administration Tools
    10.3.4 Monitoring Tools
  10.4 DB2 Command Center overview
    10.4.1 Command Center navigation example
  10.5 DB2 Command Center custom report example
    10.5.1 Extracting LUN data report
    10.5.2 Command Center report
  10.6 Exporting collected performance data to a file
    10.6.1 Control Center
    10.6.2 Data extraction tools, tips and reporting methods
  10.7 Database backup and recovery overview
  10.8 Backup example

Appendix A. TotalStorage Productivity Center DB2 table formats
  A.1 Performance Manager tables
    A.1.1 VPVPD table
    A.1.2 VPCFG table
    A.1.3 VPVOL table
    A.1.4 VPCCH table

Appendix B. Worksheets
  B.1 User IDs and passwords
    B.1.1 Server information
    B.1.2 User IDs and passwords to lock the key files
  B.2 Storage device information
    B.2.1 IBM Enterprise Storage Server
    B.2.2 IBM FAStT
    B.2.3 IBM SAN Volume Controller

Appendix C. Event management
  C.1 Event management introduction
    C.1.1 Understanding events and event actions
    C.1.2 Understanding event filters
    C.1.3 Event Actions
    C.1.4 Event Data Substitution
    C.1.5 Updating Event Plans, Filters, and Actions

Related publications
  IBM Redbooks
  Other Publications
  Online resources
  How to get IBM Redbooks
  Help from IBM
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529
Notices
This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service. IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive Armonk, NY 10504-1785 U.S.A. The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you. This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice. 
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk. IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you. Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrates programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. 
You may copy, modify, and distribute these sample programs in any form without payment to IBM for the purposes of developing, using, marketing, or distributing application programs conforming to IBM's application programming interfaces.
Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:
Eserver, e-business on demand, iSeries, z/OS, AIX, Cloudscape, Cube Views, CICS, DataJoiner, DB2 Universal Database, DB2, Enterprise Storage Server, ESCON, FlashCopy, Informix, Intelligent Miner, IBM, Lotus, MVS, NetView, OS/390, QMF, Redbooks, Redbooks (logo), S/390, Tivoli Enterprise, Tivoli Enterprise Console, Tivoli, TotalStorage, WebSphere
The following terms are trademarks of other companies: Intel, Pentium, Xeon, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States, other countries, or both. Excel, Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. EJB, Java, JDBC, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. UNIX is a registered trademark of The Open Group in the United States and other countries. Linux is a trademark of Linus Torvalds in the United States, other countries, or both. SET, SET Secure Electronic Transaction, and the SET Logo are trademarks owned by SET Secure Electronic Transaction LLC. Other company, product, and service names may be trademarks or service marks of others.
Preface
IBM TotalStorage Productivity Center is designed to provide a single point of control for managing networked storage devices that implement the Storage Management Initiative Specification (SMI-S), including the IBM TotalStorage SAN Volume Controller, Enterprise Storage Server, and FAStT. TotalStorage Productivity Center includes the IBM Tivoli Bonus Pack for SAN Management, bringing together device management with fabric management, to help enable the storage administrator to manage the Storage Area Network from a central point. The storage administrator has the ability to configure storage devices, manage the devices, and view the Storage Area Network from a single point. This software offering is intended to complement other members of the IBM TotalStorage Virtualization family by simplifying and consolidating storage management activities. This IBM Redbook includes an introduction to the TotalStorage Productivity Center and its components. It provides detailed information about the installation and configuration of TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication and how to use them. It is intended for anyone wanting to learn about TotalStorage Productivity Center and how it complements an on demand environment and for those planning to install and use the product.
Thanks to the following people for their contributions to this project: Sangam Racherla International Technical Support Organization, San Jose Center Bob Haimowitz ITSO Raleigh Center Diana Duan Michael Liu Richard Kirchofer Paul Lee Thiha Than Bill Warren Martine Wedlake IBM San Jose, California Mike Griese Technical Support Marketing Lead Scott Drummond Program Director Storage Networking Curtis Neal Scott Venuti Open Systems Demo Center, San Jose Russ Smith Storage Software Project Management Jeff Ottman Systems Group TotalStorage Education Architect Doug Dunham Tivoli Swat Team
Ramani Routray Almaden Research Center The original authors of this book are: Ivan Aliprandi William Andrews John A. Cooper Daniel Demer Werner Eggli Tom Smythe Peter Zerbini
Comments welcome
Your comments are important to us! We want our Redbooks to be as helpful as possible. Send us your comments about this or other Redbooks in one of the following ways: Use the online Contact us review redbook form found at:
ibm.com/redbooks
Mail your comments to: IBM Corporation, International Technical Support Organization Dept. QXXE Building 80-E2 650 Harry Road San Jose, California 95120-6099
Chapter 1. IBM TotalStorage Productivity Center overview
Key standards for Storage Management are:
- Distributed Management Task Force (DMTF) Common Information Model (CIM) standards. This includes the CIM Device Model for Storage, which at the time of writing was Version 2.7.2 of the CIM schema.
- Storage Networking Industry Association (SNIA) Storage Management Initiative Specification (SMI-S).
Previously we discussed the next steps or entry points into an On Demand environment. The IBM software products which represent these entry points and which comprise the IBM TotalStorage Open Software Family are shown in Figure 1-3 on page 4.
1.3.1 Data subject matter expert: TotalStorage Productivity Center for Data
The Data subject matter expert has intimate knowledge of how storage is used, for example, whether the data is used by a file system or a database application. Figure 1-5 on page 6 shows the role of the Data subject matter expert, which is filled by the TotalStorage Productivity Center for Data (formerly the IBM Tivoli Storage Resource Manager).
Figure 1-5 Monitor and Configure the Storage Infrastructure Data area
Heterogeneous storage infrastructures, driven by growth in file and database data, consume increasing amounts of administrative time, as well as actual hardware resources. IT managers need ways to make their administrators more efficient and more efficiently utilize their storage resources. Tivoli Storage Resource Manager gives storage administrators the automated tools they need to manage their storage resources more cost-effectively. TotalStorage Productivity Center for Data allows you to identify different classes of data, report how much space is being consumed by these different classes, and take appropriate actions to keep the data under control. Features of the TotalStorage Productivity Center for Data are:
- Automated identification of the storage resources in an infrastructure and analysis of how effectively those resources are being used.
- File-system and file-level evaluation that uncovers categories of files that, if deleted or archived, can potentially represent significant reductions in the amount of data that must be stored, backed up and managed.
- Automated control through policies that are customizable with actions that can include centralized alerting, distributed responsibility and fully automated response.
- Prediction of future growth and future at-risk conditions with historical information.
Through monitoring and reporting, TotalStorage Productivity Center for Data helps the storage administrator prevent outages in the storage infrastructure. Armed with timely information, the storage administrator can take action to keep storage and data available to the application. TotalStorage Productivity Center for Data also helps to make the most efficient use of storage budgets, by allowing administrators to use their existing storage more efficiently, and more accurately predict future storage growth.
TotalStorage Productivity Center for Data monitors storage assets, capacity, and usage across an enterprise. TotalStorage Productivity Center for Data can look at:
- Storage from a host perspective: manage all the host-attached storage, capacity, and consumption attributed to file systems, users, directories, and files
- Storage from an application perspective: monitor and manage the storage activity inside different database entities, including instance, tablespace, and table
- Storage utilization, with chargeback information
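The chargeback idea above can be illustrated with a small Python sketch. This is a hypothetical aggregation, not the product's actual reporting API: it sums capacity per owner and applies a flat per-gigabyte rate.

```python
# Illustrative chargeback aggregation; the record layout and rate are
# hypothetical, not TotalStorage Productivity Center for Data structures.
from collections import defaultdict

def chargeback(file_records, rate_per_gb):
    """Sum capacity per owner and price it at a flat per-GB rate."""
    usage_gb = defaultdict(float)
    for owner, size_gb in file_records:
        usage_gb[owner] += size_gb
    return {owner: round(gb * rate_per_gb, 2) for owner, gb in usage_gb.items()}

records = [("alice", 120.0), ("bob", 40.0), ("alice", 80.0)]
print(chargeback(records, rate_per_gb=0.05))  # {'alice': 10.0, 'bob': 2.0}
```

In the real product the equivalent data would come from the Agents and the database repository rather than an in-memory list.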
Architecture
The TotalStorage Productivity Center for Data server system manages a number of Agents, which can be servers with storage attached, NAS systems, or database application servers. Information is collected from the Agents and stored in a database repository. The stored information can then be displayed from a native GUI client or a browser interface anywhere in the network. The GUI or browser interface gives access to the other functions of TotalStorage Productivity Center for Data, including creating and customizing a large number of different types of reports and setting up alerts. With TotalStorage Productivity Center for Data, you can:
- Monitor virtually any host
- Monitor local, SAN-attached, and Network Attached Storage from a browser anywhere on the network
For more information refer to the redbook IBM Tivoli Storage Resource Manager: A Practical Introduction, SG24-6886.
- Optimized Storage Resource Utilization, by reporting on storage network performance
- Enhanced Storage Personnel Productivity: Tivoli SAN Manager creates a single point of control, administration, and security for the management of heterogeneous storage networks
Figure 1-6 describes the requirements that must be addressed by the Fabric subject matter expert.
Figure 1-6 Monitor and Configure the Storage Infrastructure Fabric area
TotalStorage Productivity Center for Fabric monitors and manages switches and hubs, storage, and servers in a Storage Area Network. TotalStorage Productivity Center for Fabric can be used for both online monitoring and historical reporting. TotalStorage Productivity Center for Fabric:
- Manages fabric devices (switches) through outband management
- Discovers many details about a monitored server and its local storage through an Agent loaded onto a SAN-attached host (Managed Host)
- Monitors the network and collects events and traps
- Launches vendor-provided SAN element management applications from the TotalStorage Productivity Center for Fabric Console
- Discovers and manages iSCSI devices
- Provides a fault isolation engine for SAN problem determination (ED/FI - SAN Error Predictor)
TotalStorage Productivity Center for Fabric is compliant with the standards relevant to SAN storage and management.
In outband discovery, all communications occur over the IP network: TotalStorage Productivity Center for Fabric requests information from a switch using SNMP queries on the device, and the device returns the information to TotalStorage Productivity Center for Fabric, also over the IP network. Inband discovery is the process of discovering information about the SAN, including topology and attribute data, through the Fibre Channel data paths. In inband discovery, both the IP and Fibre Channel networks are used: TotalStorage Productivity Center for Fabric requests information (via the IP network) from a Tivoli SAN Manager agent installed on a Managed Host. That agent requests information over the Fibre Channel network from fabric elements and end points in the Fibre Channel network. The agent returns the information to TotalStorage Productivity Center for Fabric over the IP network. TotalStorage Productivity Center for Fabric collects, correlates, and displays information from all devices in the storage network, using both the IP network and the Fibre Channel network. If the Fibre Channel network is unavailable for any reason, monitoring can still continue over the IP network.
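The two discovery paths can be modeled with a short sketch. The data structures below are hypothetical (not the Fabric product's internals): outband records arrive over IP via SNMP, inband records come from a host agent over Fibre Channel, and when the FC path is down the IP-only view still provides monitoring.

```python
# Toy model of correlating outband (SNMP over IP) and inband (agent over
# Fibre Channel) discovery results; keyed by WWN. Illustrative only.
def merge_discovery(outband, inband, fc_available):
    """Correlate device records from both discovery paths."""
    topology = {wwn: dict(rec, source="outband") for wwn, rec in outband.items()}
    if fc_available:
        for wwn, rec in inband.items():
            # Inband data adds topology/attribute detail to the IP-side view.
            topology.setdefault(wwn, {})
            topology[wwn].update(rec, source="inband")
    return topology

outband = {"10:00:00:aa": {"type": "switch", "ports": 16}}
inband = {"10:00:00:aa": {"zone": "prod"}, "10:00:00:bb": {"type": "host"}}
# With the Fibre Channel network down, the IP-only view still covers the switch:
print(merge_discovery(outband, inband, fc_available=False))
```

With `fc_available=True` the same call would also surface the agent-discovered host and the zoning attribute, mirroring the graceful degradation described above.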
1.3.3 Disk subject matter expert: TotalStorage Productivity Center for Disk
The Disk subject matter expert manages the disk systems. It discovers and classifies all disk systems that exist and draws a picture of all discovered disk systems. The Disk subject matter expert provides the ability to monitor and configure disk systems, create disks, and perform LUN masking. It also performs performance trending and performance threshold I/O analysis for both real disks and virtual disks, and provides automated status and problem alerts via SNMP. This role is filled by the TotalStorage Productivity Center for Disk (formerly the IBM TotalStorage Multiple Device Manager Performance Manager component). The requirements addressed by the Disk subject matter expert are shown in Figure 1-7 on page 11. The disk systems monitoring and configuration needs must be covered by a comprehensive management tool like the TotalStorage Productivity Center for Disk.
Figure 1-7 Monitor and configure the Storage Infrastructure Disk area
The TotalStorage Productivity Center for Disk provides the raw capabilities of initiating and scheduling performance data collection on the supported devices, of storing the received performance statistics into database tables for later use, and of analyzing the stored data and generating reports for various metrics of the monitored devices. In conjunction with data collection, the TotalStorage Productivity Center for Disk is responsible for managing and monitoring the performance of the supported storage devices. This includes the ability to configure performance thresholds for the devices based on performance metrics, the generation of alerts when these thresholds are exceeded, the collection and maintenance of historical performance data, and the creation of gauges, or performance reports, for the various metrics to display the collected historical data to the end user. The TotalStorage Productivity Center for Disk enables you to perform sophisticated performance analysis for the supported storage devices.
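The collection scheduling just described (a start time, a stop time, and a sampling frequency) determines how many samples land in the database tables. A trivial illustrative calculation follows; the helper is hypothetical, not product code.

```python
# Illustrative: expected number of samples produced by one performance
# data collection run. Hypothetical helper, not a Productivity Center API.
def expected_samples(start_min, stop_min, frequency_min):
    """Samples collected between start and stop at a fixed sampling frequency."""
    if stop_min <= start_min or frequency_min <= 0:
        return 0
    return (stop_min - start_min) // frequency_min

# A collection running for 8 hours, sampling every 15 minutes:
print(expected_samples(start_min=0, stop_min=480, frequency_min=15))  # 32
```

This is why the sampling frequency matters: halving it doubles the rows stored per device, which in turn affects database growth and purge policy.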
Functions
TotalStorage Productivity Center for Disk provides the following functions:
Collect data from devices
The Productivity Center for Disk collects data from the IBM TotalStorage Enterprise Storage Server (ESS), SAN Volume Controller (SVC), DS4000 family, and SMI-S enabled devices. Each Performance Collector collects performance data from one or more storage groups, all of the same device type (for example, ESS or SAN Volume Controller). Each Performance Collection has a start time, a stop time, and a sampling frequency. The performance sample data is stored in DB2 database tables.
Configure performance thresholds
You can use the Productivity Center for Disk to set performance thresholds for each device type. Setting thresholds for certain criteria enables Productivity Center for Disk to notify you when a certain threshold has been exceeded, so that you can take action before a critical event occurs.
You can specify what action should be taken when a threshold-exceeded condition occurs. The action may be to log the occurrence or to trigger an event. The threshold settings can vary by individual device.
Monitor performance metrics across storage subsystems from a single console
Receive timely alerts to enable event action based on customer policies
View performance data from the Productivity Center for Disk database
You can view performance data from the Productivity Center for Disk database in both graphical and tabular forms. The Productivity Center for Disk allows a TotalStorage Productivity Center user to access recent performance data in terms of a series of values of one or more metrics, associated with a finite set of components per device. Only recent performance data is available for gauges. Data that has been purged from the database cannot be viewed. You can define one or more gauges by selecting certain gauge properties and saving them for later referral. Each gauge is identified through a user-specified name, and once defined, a gauge can be "started", which means it is then displayed in a separate window of the TotalStorage Productivity Center GUI. You can have multiple gauges active at the same time. Gauge definition is accomplished through a wizard, to aid in entering a valid set of gauge properties. Gauges are saved in the Productivity Center for Disk database and retrieved upon request. When you request data pertaining to a defined gauge, the Performance Manager builds a query to the database, retrieves and formats the data and returns it to you. Once started, a gauge is displayed in its own window, and displays all available performance data for the specified initial date/time range. The date/time range can be changed after the initial gauge window is displayed.
Focus on storage optimization through identification of the best LUN
The Volume Performance Advisor is an automated tool that helps the storage administrator pick the best possible placement of a new LUN to be allocated, that is, the best placement from a performance perspective. It uses the historical performance statistics collected from the supported devices to locate unused storage capacity on the SAN that exhibits the best (estimated) performance characteristics. Allocation optimization involves several user-controlled variables, such as the required performance level and the time of day/week/month of prevalent access. This function is fully integrated with the Device Manager function, so that when a new LUN is added, for example, to the ESS, the Performance Manager can seamlessly select the best possible LUN. For detailed information about how to use the functions of the TotalStorage Productivity Center for Disk refer to Chapter 6, TotalStorage Productivity Center for Disk use on page 227.
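A minimal sketch of the placement idea follows. The scoring function is hypothetical, not the actual Volume Performance Advisor algorithm: it ranks candidate pools with enough free capacity by their historical utilization and picks the least-loaded one.

```python
# Hypothetical best-LUN-placement scorer; not the real Volume
# Performance Advisor implementation.
def best_placement(pools, needed_gb):
    """Return the pool with enough free space and the lowest historical utilization."""
    candidates = [p for p in pools if p["free_gb"] >= needed_gb]
    if not candidates:
        raise ValueError("no pool has enough free capacity")
    return min(candidates, key=lambda p: p["avg_util_pct"])["name"]

# Pool names and statistics are made up for illustration:
pools = [
    {"name": "ess_pool_a", "free_gb": 500, "avg_util_pct": 62.0},
    {"name": "ess_pool_b", "free_gb": 800, "avg_util_pct": 35.5},
    {"name": "ess_pool_c", "free_gb": 50,  "avg_util_pct": 10.0},
]
print(best_placement(pools, needed_gb=100))  # ess_pool_b
```

The real advisor also weighs the user-controlled variables mentioned above, such as required performance level and time-of-day access patterns, rather than a single utilization figure.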
them and resume the replication. The Productivity Center for Replication provides complete management of the replication process. The requirements addressed by the Replication subject matter expert are shown in Figure 1-8. Replication in a complex environment needs to be addressed by a comprehensive management tool like the TotalStorage Productivity Center for Replication.
Figure 1-8 Monitor and Configure the Storage Infrastructure Replication area
Functions
Data replication is the core function required for data protection and disaster recovery. It provides advanced copy services functions for supported storage subsystems on the SAN. Replication Manager administers and configures the copy services functions and monitors the replication actions. Its capabilities consist of the management of two types of copy services: the Continuous Copy (also known as Peer-to-Peer, PPRC, or Remote Copy), and the Point-in-Time Copy (also known as FlashCopy). At this time TotalStorage Productivity Center for Replication supports the IBM TotalStorage ESS. Productivity Center for Replication includes support for replica sessions, which ensures that data on multiple related heterogeneous volumes is kept consistent, provided that the underlying hardware supports the necessary primitive operations. Productivity Center for Replication also supports the session concept, such that multiple pairs are handled as a consistent unit, and that Freeze-and-Go functions can be performed when errors in mirroring occur. Productivity Center for Replication is designed to control and monitor the copy services operations in large-scale customer environments. Productivity Center for Replication provides a user interface for creating, maintaining, and using volume groups and for scheduling copy tasks. The User Interface populates lists of volumes using the Device Manager interface. Some of the tasks you can perform with Productivity Center for Replication are:
- Create a replication group. A replication group is a collection of volumes grouped together so that they can be managed concurrently.
- Set up a group for replication.
- Create, save, and name a replication task.
- Schedule a replication session with the user interface: Create Session Wizard, Select Source Group, Select Copy Type, Select Target Pool, Save Session.
- Start a replication session.
A user can also perform these tasks with the Productivity Center for Replication command-line interface. For more information about the Productivity Center for Replication functions refer to Chapter 8, TotalStorage Productivity Center for Replication use on page 355.
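The session concept, including a Freeze-and-Go reaction to a mirroring error, can be sketched as a small state machine. The class, state names, and method names below are hypothetical, not Productivity Center for Replication terminology or API.

```python
# Toy replication-session state machine illustrating Freeze-and-Go;
# states and methods are hypothetical, not the product's interface.
class ReplicationSession:
    def __init__(self, name, pairs):
        self.name, self.pairs, self.state = name, pairs, "defined"

    def start(self):
        self.state = "replicating"

    def on_mirror_error(self):
        # Freeze all pairs as one consistent unit, then let host I/O
        # continue ("go") with a consistent recovery point preserved.
        if self.state == "replicating":
            self.state = "frozen"
            self.state = "suspended"

    def resume(self):
        self.state = "replicating"

s = ReplicationSession("payroll", pairs=[("vol01", "vol01'"), ("vol02", "vol02'")])
s.start()
s.on_mirror_error()
print(s.state)  # suspended
```

The point of the sketch is that the session, not the individual volume pair, is the unit of consistency: every pair freezes and resumes together.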
The IBM TotalStorage Productivity Center establishes the foundation for IBM's e-business On Demand technology. In an On Demand environment we need the ability to provide IT resources on demand - when the resources are needed by an application to support the customer's business process. Of course, we are able to provide or remove resources today, but the question is how: the process is expensive and time consuming. The IBM TotalStorage Productivity Center is the basis for the provisioning of storage resources to make the e-business On Demand environment a reality. In the future, more automation will be required to handle the huge amount of work in the provisioning area - more automation like the IBM TotalStorage Productivity Center launch pad provides. Automation means workflow. Workflow is the key to getting work automated. IBM has a long history and investment in building workflow engines and workflows. Today IBM is using the IBM Tivoli Intelligent Orchestrator and IBM Tivoli Provisioning Manager to satisfy resource requests in the e-business On Demand environment in the server arena.
1.4.1 Productivity Center for Disk and Productivity Center for Replication
The Productivity Center for Disk and Productivity Center for Replication are software designed to enable administrators to manage SANs and storage from a single console. This software solution is designed specifically for managing networked storage components based on the SMI-S, including:
- IBM TotalStorage SAN Volume Controller
- IBM TotalStorage Enterprise Storage Server (ESS)
- IBM TotalStorage Fibre Array Storage Technology (FAStT)
- IBM TotalStorage DS4000 series
- SMI-S enabled devices
Productivity Center for Disk and Productivity Center for Replication are built on IBM Director, a comprehensive server management solution. Using Director with the multiple device management solution enables administrators to consolidate the administration of IBM storage subsystems and provide advanced storage management functions (including replication and performance management) across multiple IBM storage subsystems. It interoperates with SAN Management and Enterprise System Resource Manager (ESRM) products from IBM, including TotalStorage Productivity Center for Data, and SAN Management products from other vendors. In a SAN environment, multiple devices work together to create a storage solution. The Productivity Center for Disk and Productivity Center for Replication provide integrated administration, optimization, and replication features for interacting SAN devices, including the SAN Volume Controller and DS4000 Family devices. They provide an integrated view of the underlying system so that administrators can drill down through the virtualized layers to easily perform complex configuration tasks and more productively manage the SAN infrastructure. Because the virtualization layers support advanced replication configurations, the Productivity Center for Disk and Productivity Center for Replication products offer features that simplify the configuration, monitoring, and control of disaster recovery and data migration solutions. In addition, specialized performance data collection, analysis, and optimization features are provided. As the SNIA standards mature, the Productivity Center view will be expanded to include CIM-enabled devices from other vendors, in addition to IBM storage. Figure 1-11 on page 17 provides an overview of Productivity Center for Disk and Productivity Center for Replication.
Figure 1-11 Productivity Center for Disk and Productivity Center for Replication overview (layers: Device Manager; IBM Director; IBM TotalStorage Productivity Center for Fabric; WebSphere Application Server; DB2)
The Productivity Center for Disk and Productivity Center for Replication provide support for configuration, tuning, and replication of the virtualized SAN. As with the individual devices, the Productivity Center for Disk and Productivity Center for Replication layers are open and can be accessed via a GUI, CLI, or standards-based Web Services. Productivity Center for Disk and Productivity Center for Replication provide the following functions:
- Device Manager: a common function provided when you install the base prerequisite products for either Productivity Center for Disk or Productivity Center for Replication
- Performance Manager: provided by Productivity Center for Disk
- Replication Manager: provided by Productivity Center for Replication
Device Manager
The Device Manager is responsible for the discovery of supported devices; collecting asset, configuration, and availability data from the supported devices; and providing a limited topology view of the storage usage relationships between those devices. The Device Manager builds on the IBM Director discovery infrastructure. Discovery of storage devices adheres to the SNIA SMI-S specification standards. Device Manager uses the Service Location Protocol (SLP) to discover SMI-S enabled devices. The Device Manager creates managed objects to represent these discovered devices. The discovered managed objects are displayed as individual icons in the Group Contents pane of the IBM Director Console as shown in Figure 1-12 on page 18.
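The SLP-based discovery step can be sketched as follows. An SLP reply carries service URLs; per the SMI-S convention the WBEM (CIM Agent) service type is `service:wbem`, and 5989 is the conventional WBEM HTTPS port. The parser below is an illustration, not the Device Manager's actual code.

```python
# Illustrative parsing of SLP service URLs to find SMI-S CIM Agent
# endpoints; a sketch, not Device Manager internals.
def cim_endpoints(slp_urls):
    """Extract (host, port) pairs from service:wbem SLP service URLs."""
    endpoints = []
    for url in slp_urls:
        if not url.startswith("service:wbem:"):
            continue                          # ignore non-WBEM services
        hostport = url.rsplit("//", 1)[1]
        host, _, port = hostport.partition(":")
        endpoints.append((host, int(port or 5989)))  # default WBEM HTTPS port
    return endpoints

urls = ["service:wbem:https://9.43.86.111:5989",
        "service:printer:lpr://printsrv:515"]
print(cim_endpoints(urls))  # [('9.43.86.111', 5989)]
```

Once the endpoints are known, the Device Manager contacts each CIM Agent to enumerate the device's managed objects.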
Device Manager provides a subset of configuration functions for the managed devices, primarily LUN allocation and assignment. Its function includes certain cross-device configuration, as well as the ability to show and traverse inter-device relationships. These services communicate with the CIM Agents that are associated with the particular devices to perform the required configuration. Devices that are not SMI-S compliant are not supported. The Device Manager also interacts and provides some SAN management functionality when IBM Tivoli SAN Manager is installed. The Device Manager health monitoring keeps you aware of hardware status changes in the discovered storage devices. You can drill down to the status of the hardware device, if applicable. This enables you to understand which components of a device are malfunctioning and causing an error status for the device.
SAN Management
When a supported SAN Manager is installed and configured, the Device Manager leverages the SAN Manager to provide enhanced function. Along with basic device configuration functions such as LUN creation, allocation, assignment, and deletion for single and multiple devices, basic SAN management functions such as LUN discovery, allocation, and zoning are provided in one step. IBM TotalStorage Productivity Center for Fabric (formerly IBM Tivoli SAN Manager) is currently the supported SAN Manager. The set of SAN Manager functions that will be exploited are:
- The ability to retrieve the SAN topology information, including switches, hosts, ports, and storage devices
- The ability to retrieve and to modify the zoning configuration on the SAN
- The ability to register for event notification, to ensure Productivity Center for Disk is aware when the topology or zoning changes as new devices are discovered by the SAN Manager, and when hosts' LUN configurations change
Functions
Collect data from devices
The Performance Manager collects data from the IBM TotalStorage Enterprise Storage Server (ESS), IBM TotalStorage SAN Volume Controller (SVC), IBM TotalStorage DS4000 series, and SMI-S enabled devices. The performance collection task collects performance data from one or more storage groups, all of the same device type (for example, ESS or SVC). Each performance collection task has a start time, a stop time, and a sampling frequency. The performance sample data is stored in DB2 database tables.
Configure performance thresholds
You can use the Performance Manager to set performance thresholds for each device type. Setting thresholds for certain criteria enables Performance Manager to notify you when a certain threshold has been exceeded, so that you can take action before a critical event occurs. You can specify what action should be taken when a threshold-exceeded condition occurs. The action may be to log the occurrence or to trigger an event. The threshold settings can vary by individual device. The eligible metrics for threshold checking are fixed for each storage device. If the threshold metrics are modified by the user, the modifications are accepted immediately and applied to checking being performed by active performance collection tasks. Examples of threshold metrics include:
- Disk utilization value
- Average cache hold time
- Percent of sequential I/Os
- I/O rate
- NVS full value
- Virtual disk I/O rate
- Managed disk I/O rate
A user interface supports threshold settings, enabling a user to:
- Modify a threshold property for a set of devices of like type.
- Modify a threshold property for a single device.
- Reset a threshold property to the IBM-recommended value (if defined) for a set of devices of like type. IBM-recommended critical and warning values will be provided for all thresholds known to indicate potential performance problems for IBM storage devices.
- Reset a threshold property to the IBM-recommended value (if defined) for a single device.
- Show a summary of threshold properties for all of the devices of like type.
- View performance data from the Performance Manager database.
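As an illustration of how such threshold checking might behave, consider the sketch below. The metric names and the warning/critical values are invented for this example, not the product's defaults, and the functions are not part of the Performance Manager API.

```python
# Hypothetical sketch of threshold checking on performance samples.
# Metric names and threshold values are illustrative only.
WARNING, CRITICAL = "warning", "critical"

THRESHOLDS = {
    "disk_utilization_pct": {"warning": 50.0, "critical": 80.0},
    "nvs_full_pct":         {"warning": 3.0,  "critical": 10.0},
}

def check_sample(metric, value):
    """Return None, 'warning', or 'critical' for one performance sample."""
    props = THRESHOLDS.get(metric)
    if props is None:
        return None                    # metric not enabled for threshold checking
    if value >= props["critical"]:
        return CRITICAL
    if value >= props["warning"]:
        return WARNING
    return None

def handle(metric, value, log):
    """Apply the configured action (here: log the occurrence)."""
    level = check_sample(metric, value)
    if level:
        log.append((level, metric, value))
    return level
```

Because the threshold table is a plain dictionary, a user modification to it takes effect on the next sample checked, mirroring the "accepted immediately" behavior described above.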
Gauges
The Performance Manager supports a performance-type gauge. The performance-type gauge presents sample-level performance data. The frequency at which performance data is sampled on a device depends on the sampling frequency that you specify when you define the performance collection task. The maximum and minimum values of the sampling frequency depend on the device type. The static display presents historical data over time. The refreshable display presents near real-time data from a device that is currently collecting performance data.

The Performance Manager enables a Productivity Center for Disk user to access recent performance data in terms of a series of values of one or more metrics associated with a finite set of components per device. Only recent performance data is available for gauges. Data that has been purged from the database cannot be viewed.

You can define one or more gauges by selecting certain gauge properties and saving them for later referral. Each gauge is identified through a user-specified name and, when defined, a gauge can be started, which means that it is then displayed in a separate window of the Productivity Center GUI. You can have multiple gauges active at the same time. Gauge definition is accomplished through a wizard to aid in entering a valid set of gauge properties. Gauges are saved in the Productivity Center for Disk database and retrieved upon request. When you request data pertaining to a defined gauge, the Performance Manager builds a query to the database, retrieves and formats the data, and returns it to you. When started, a gauge is displayed in its own window, and it displays all available performance data for the specified initial date/time range. The date/time range can be changed after the initial gauge window is displayed.
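The retrieval step — build a query for the requested date/time range, fetch, and return the data — can be sketched against an in-memory table. The table and column names here are hypothetical; the product stores its samples in DB2 with its own schema.

```python
# Toy sketch of gauge data retrieval from a performance-sample table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE perf_sample (device TEXT, metric TEXT, ts INTEGER, value REAL)")
rows = [("ESS-1", "io_rate", t, 100.0 + t) for t in range(10)]
conn.executemany("INSERT INTO perf_sample VALUES (?, ?, ?, ?)", rows)

def gauge_data(device, metric, start_ts, end_ts):
    """Query one metric for one device over a date/time range, as a gauge refresh would."""
    cur = conn.execute(
        "SELECT ts, value FROM perf_sample "
        "WHERE device = ? AND metric = ? AND ts BETWEEN ? AND ? ORDER BY ts",
        (device, metric, start_ts, end_ts))
    return cur.fetchall()
```

Changing the gauge's date/time range simply re-runs the query with new bounds, which is why only data still present in the database (not yet purged) can be displayed.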
For performance-type gauges, if a metric selected for display is associated with a threshold enabled for checking, the current threshold properties are also displayed in the gauge window and are updated each time the gauge data is refreshed.
Database information function Due to the amount of data collected by the Performance Manager function provided by Productivity Center for Disk, the database should be monitored to prevent it from running out of space. The database information function returns the percentage of the database that is full. This function can be invoked from either the Web user interface or the CLI.
generated and tracked (for example, notification that a Productivity Center component is not available). Productivity Center for Disk and Productivity Center for Replication take full advantage of, and build upon, the IBM Director Event Services. IBM Director includes sophisticated event-handling support. Event Action Plans can be set up that specify what steps, if any, should be taken when particular events occur in the environment. Director Event Management encompasses the following concepts:
- Events can be generated by any managed object. IBM Director receives such events and calls appropriate internal event handlers that have been registered.
- Actions are user-configured steps to be taken for a particular event or type of event. There can be zero or more actions associated with a particular action plan. System administrators can create their own actions by customizing particular predefined actions.
- Event Filters are a set of characteristics or criteria that determine whether an incoming event should be acted on.
- Event Action Plans are associations of one or more event filters with one or more actions. Event Action Plans become active when you apply them to a system or a group of systems.

The IBM Director Console includes an extensive set of GUI panels, called the Event Action Plan Builder, that enable the user to create action plans and event filters. Event Filters can be configured using the Event Action Plan Builder and set up with a variety of criteria, such as event types, event severities, day and time of event occurrence, and event categories. This allows control over exactly what action plans are invoked for each specific event. Productivity Center provides extensions to the IBM Director event management support. It takes full advantage of the IBM Director built-in support for event logging and viewing. It generates events that will be externalized. Action plans can be created based on filter criteria for these events.
The default action plan is to log all events in the event log. Productivity Center creates additional event families, and event types within those families, that are listed in the Event Action Plan Builder. Event actions that enable Productivity Center functions to be exploited from within action plans are provided. An example is the action to indicate the amount of historical data to be kept.
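The filter/action-plan relationship can be illustrated with a small sketch. The class and function names are invented for this example and are not IBM Director's actual API; the default "log all events" behavior is modeled as one action.

```python
# Illustrative model of event filters and action plans: a filter is a set
# of matching criteria, and a plan associates filters with actions.
def make_filter(**criteria):
    """Build a filter that matches events with all the given field values."""
    def matches(event):
        return all(event.get(k) == v for k, v in criteria.items())
    return matches

class ActionPlan:
    def __init__(self, filters, actions):
        self.filters, self.actions = filters, actions

    def apply(self, event):
        """Run every action if any filter accepts the incoming event."""
        if any(f(event) for f in self.filters):
            for act in self.actions:
                act(event)
            return True
        return False

event_log = []
plan = ActionPlan(
    filters=[make_filter(severity="critical")],
    actions=[event_log.append])        # default-style action: log the event
```

Applying the plan to a system amounts to routing that system's events through `plan.apply`; events that no filter accepts are simply not acted on.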
Resilient
Based on open standards

The move to an On Demand storage environment is an evolving one; it does not happen all at once. There are several next steps that you may take to move to the On Demand environment:
- Address constant changes to the storage infrastructure (upgrading or changing hardware, for example) with virtualization, which provides flexibility by hiding the hardware and software from users and applications.
- Empower administrators with automated tools for managing heterogeneous storage infrastructures, and eliminate human error.
- Control storage growth with automated identification and movement of low-activity or inactive data to a hierarchy of lower-cost storage.
- Manage the cost associated with capturing point-in-time copies of important data for regulatory or bookkeeping requirements by maintaining this inactive data in a hierarchy of lower-cost storage.
- Ensure recoverability through the automated creation, tracking, and vaulting of reliable recovery points for all enterprise data.
- The ultimate goal is to eliminate human errors by preparing for Infrastructure Orchestration software that can be used to automate workflows.

No matter which steps you take to an On Demand environment, there will be results: improved application availability, optimized storage resource utilization, and enhanced storage personnel productivity.
Chapter 2.
Key concepts
This chapter gives you an understanding of the basic concepts that you must know in order to use TotalStorage Productivity Center. These concepts include standards for storage management, Service Location Protocol (SLP), Common Information Model (CIM) agent, and Common Information Model Object Manager (CIMOM).
Key standards for storage management are:
- Distributed Management Task Force (DMTF) Common Information Model (CIM) standards. This includes the CIM Device Model for Storage which, at the time of writing, was Version 2.7.2 of the CIM schema.
- Storage Networking Industry Association (SNIA) Storage Management Initiative Specification (SMI-S).
CIM/WBEM technology uses a powerful human and machine readable language called the managed object format (MOF) to precisely specify object models. Compilers can be developed to read MOF files and automatically generate data type definitions, interface stubs, and GUI constructs to be inserted into management applications.
IBM is committed to delivering best-of-breed products in all aspects of the SNIA storage model, including:
Block aggregation

The block layer in the SNIA model is responsible for providing low-level storage to higher levels. Ultimately, data is stored on native storage devices such as disk drives, solid-state disks, and tape drives. These devices can be used directly, or the storage they provide can be aggregated into one or more block vectors to increase or decrease their size, or to provide redundancy. Block aggregation, or block-level virtualization, delivers a powerful set of techniques that, when used individually or in combination, serve many purposes, such as:
- Space management, through combining or splitting native storage into new, aggregated block storage
- Striping, through spreading the aggregated block storage across several native storage devices
- Redundancy, through point-in-time copy and both local and remote mirroring

File aggregation

The file/record layer in the SNIA model is responsible for packing items such as files and databases into larger entities such as block-level volumes and storage devices. File aggregation, or file-level virtualization, delivers a powerful set of techniques that, when used individually or in combination, serve many purposes, such as:
- Allow data sharing and collaboration across heterogeneous servers with high performance and full locking support
- Enhance productivity by providing centralized and simplified management through policy-based storage management automation
- Increase storage utilization by reducing the amount of duplicate data and by sharing free and temporary space across servers

In the area of block aggregation, IBM offers the IBM TotalStorage SAN Volume Controller (SVC), implemented in an in-band model. In the area of file aggregation in a SAN, IBM offers IBM TotalStorage SAN File System, a SAN-wide file system implemented in an out-of-band model. Both of these solutions adhere to open industry standards.
For more information about SMI-S/CIM/WBEM, see the SNIA and DMTF Web sites:
http://www.snia.org http://www.dmtf.org
- Consistent use of durable names: As a storage network configuration evolves and is reconfigured, key long-lived resources like disk volumes must be uniquely and consistently identified over time.
- Rigorously documented client implementation considerations: SMI-S provides client developers with vital information for traversing CIM classes within a device/subsystem and between devices/subsystems, such that complex storage networking topologies can be successfully mapped and reliably controlled.
- An automated discovery system: SMI-S compliant products, when introduced in a SAN environment, will automatically announce their presence and capabilities to other constituents.
- Resource locking: SMI-S compliant management applications from multiple vendors can exist in the same storage device or SAN and cooperatively share resources via a lock manager.

The models and protocols in the SMI-S implementation are platform-independent, enabling application development for any platform and enabling applications to run on different platforms. The SNIA will also provide interoperability tests that help vendors verify that their applications and devices conform to the standard.
In the future, more and more devices will be native CIM compliant, and will therefore have a built-in Agent as shown in the Embedded Model in Figure 2-3 on page 29. When widely adopted, SMI-S will streamline the way that the entire storage industry deals with management. Management application developers will no longer have to integrate incompatible feature-poor interfaces into their products. Component developers will no longer have to push their unique interface functionality to application developers. Instead, both will be better able to concentrate on developing features and functions that have value to end-users. Ultimately, faced with reduced costs for management, end-users will be able to adopt storage-networking technology faster and build larger, more powerful networks.
The CIM agent includes a CIM Object Manager (CIMOM), which adapts various devices using a plug-in called a provider. The CIM agent can work as a proxy or can be embedded in storage devices. When the CIM agent is installed as a proxy, the IBM CIM agent can be installed on the same server that supports the device user interface.
This enables storage management applications, regardless of vendor, to query status and perform command and control using XML-based CIM interactions. Figure 2-5 shows the CIM enablement model.
Service Location Protocol (SLP): A directory service that the client application calls to locate the CIMOM.
SLP enables the discovery and selection of generic services, which could range in function from hardware services such as those for printers or fax machines, to software services such as those for file servers, e-mail servers, Web servers, databases, or any other possible services that are accessible through an IP network. Traditionally, to use a particular service, an end-user or client application needs to supply the host name or network IP address of that service. With SLP, however, the user or client no longer needs to know individual host names or IP addresses (for the most part). Instead, the user or client can search the network for the desired service type and an optional set of qualifying attributes. For example, a user could specify to search for all available printers that support Postscript. Based on the given service type (printers), and the given attributes (Postscript), SLP searches the user's network for any matching services, and returns the discovered list to the user.
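The search-by-type-and-attributes idea can be sketched in a few lines. The registry below is an in-memory stand-in for what SAs and DAs answer over the network, and the service URLs are made up for the example.

```python
# Conceptual sketch of SLP-style lookup: clients ask for a service type
# plus qualifying attributes instead of a host name or IP address.
registry = [
    {"type": "printer", "url": "service:printer://p1.example.com",
     "attrs": {"language": "Postscript"}},
    {"type": "printer", "url": "service:printer://p2.example.com",
     "attrs": {"language": "PCL"}},
    {"type": "wbem", "url": "service:wbem://cimom.example.com:5989",
     "attrs": {}},
]

def find_services(service_type, **required_attrs):
    """Return URLs of registered services matching the type and attributes."""
    return [s["url"] for s in registry
            if s["type"] == service_type
            and all(s["attrs"].get(k) == v for k, v in required_attrs.items())]
```

The Postscript-printer example from the text becomes `find_services("printer", language="Postscript")`, which returns only the matching service URL.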
If a service becomes inactive without removing the registration for itself, that old registration is removed automatically when its life span expires. The maximum life span of a registration is 65,535 seconds (about 18 hours).
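This life-span behavior amounts to a simple lease purge, sketched below. The registration structure is illustrative; only the 65,535-second maximum comes from the protocol.

```python
# Sketch of SLP registration lifetime handling: stale registrations are
# dropped once their lifetime elapses.
MAX_LIFETIME = 65535  # seconds, about 18 hours (SLP's maximum)

def purge_expired(registrations, now):
    """Keep only registrations whose (capped) lifetime has not yet elapsed.

    registrations maps a service URL to (registered_at, lifetime_seconds).
    """
    return {url: (t0, life) for url, (t0, life) in registrations.items()
            if now < t0 + min(life, MAX_LIFETIME)}
```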
An SLP UA is not required to discover all matching services that exist on the network, but only enough of them to provide useful results. This restriction is mainly due to the transmission size limits for UDP packets, which could be exceeded when there are many registered services or when the registered services have lengthy URLs or a large number of attributes. However, in most modern SLP implementations, the UAs are able to recognize truncated service replies and establish TCP connections to retrieve all of the information of the registered services. With this type of UA and SA implementation, the only exposure that remains is when there are too many SAs within the multicast range, which could cut short the multicast convergence mechanism. This exposure can be mitigated by the SLP administrator by setting up one or more SLP DAs.
Figure 2-8 SLP UA, SA and DA interaction (CIMOMs with SLP SAs and DAs across subnets A, B, and C)
When SLP DAs are present, the behavior of both SAs and UAs changes significantly. When an SA is first initializing, it performs a DA discovery using a multicast service request and specifies the special, reserved service type service:directory-agent. This process is also called active DA discovery, and it is achieved through the same mechanism as any other discovery using SLP. Similarly, in most cases, an SLP UA also performs active DA discovery using multicasting when it first starts up. However, if the SLP UA is statically configured with one or more DA addresses, it uses those addresses instead. If it is aware of one or more DAs, either through static configuration or active discovery, it sends unicast service requests to those DAs instead of multicasting to SAs. The DA replies with unicast service replies, providing the requested service Uniform Resource Locators (URLs) and attributes. Figure 2-9 on page 37 shows the interactions of UAs and SAs with DAs, during active DA discovery.
Figure 2-9 Service Location Protocol DA functions (SLP UA and SA exchanging DA advertisements with SLP DAs)
The SLP DA functions very similarly to an SLP SA, receiving registration and deregistration requests, and responding to service requests with unicast service replies. There are a couple of differences, however, where DAs provide more functionality than SAs. One area, mentioned previously, is that DAs respond to service requests of the service:directory-agent service type with a DA advertisement response message, passing back a service URL containing the DA's IP address. This allows SAs and UAs to perform active discovery on DAs. One other difference is that when a DA first initializes, it sends out a multicast DA advertisement message to advertise its services to any existing SAs (and UAs) that might already be active on the network. UAs can optionally listen for, and SAs are required to listen for, such advertisement messages. This listening process is also sometimes called passive DA discovery. When the SA finds a new DA through passive DA discovery, it sends registration requests for all its currently registered services to that new DA. Figure 2-10 on page 38 shows the interactions of DAs with SAs and UAs, during passive DA discovery.
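The UA, SA, and DA interactions described above can be modeled as a small simulation. Class and method names are invented for illustration, and messages are plain function calls rather than UDP packets.

```python
# Toy simulation of SLP roles: DAs hold registrations; SAs additionally
# re-register with newly discovered DAs (passive DA discovery); UAs prefer
# unicast requests to known DAs, else multicast to SAs in range.
class DirectoryAgent:
    def __init__(self):
        self.services = {}

    def register(self, url):
        self.services[url] = True

    def service_request(self, service_type):
        return [u for u in self.services if u.startswith(f"service:{service_type}")]

class ServiceAgent(DirectoryAgent):
    def on_da_advertisement(self, da):
        # passive DA discovery: forward all current registrations to the new DA
        for url in self.services:
            da.register(url)

class UserAgent:
    def __init__(self, known_das=None):
        self.known_das = known_das or []   # from static config or active discovery

    def discover(self, service_type, sas_in_multicast_range):
        if self.known_das:                 # unicast service requests to DAs
            return [u for da in self.known_das
                    for u in da.service_request(service_type)]
        # no DA known: fall back to multicasting to the SAs
        return [u for sa in sas_in_multicast_range
                for u in sa.service_request(service_type)]
```

Note how a UA with a configured DA never touches the SAs directly, which is exactly the traffic reduction the text attributes to deploying DAs.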
Figure 2-10 Service Location Protocol passive DA discovery (SLP DA multicasting DA advertisements to SLP UAs and SAs)
Router configuration
Configure the routers in the network to enable general multicasting or to allow multicasting for the SLP multicast address and port, 239.255.255.253, port 427. The routers of interest are those that are associated with subnets that contain one or more storage devices that are to be discovered and managed by TotalStorage Productivity Center. To configure your router hardware and software, refer to your router reference and configuration documentation.
Environment configuration
It might be advantageous to configure SLP DAs in the following environments: In environments where there are other non-TotalStorage Productivity Center SLP UAs that frequently perform discovery on the available services, an SLP DA should be configured. This ensures that the existing SAs are not overwhelmed by too many service requests. In environments where there are many SLP SAs, a DA helps decrease network traffic that is generated by the multitude of service replies. It also ensures that all registered services can be discovered by a given UA. The configuration of an SLP DA is particularly recommended when there are more than 60 SAs that need to respond to any given multicast service request.
Result
When a discovery task is started (either manually or scheduled), TotalStorage Productivity Center will discover all devices on the subnet on which TotalStorage Productivity Center resides, and it will discover all devices with affinity to the SLP DAs that were configured.
In the SLP Directory Agent Configuration section, type a valid Internet host name or an IP address (in dotted decimal format). Click Add. The host and scope information that you entered is displayed in the SLP Directory Agents Table. Click Change to change the host name or IP address for a selected item in the SLP Directory Agents Table.
Click Remove to delete a selected item from the SLP Directory Agents Table. Click OK to add or change the directory agent information. Click Cancel to cancel adding or changing the directory agent information.
(Figure: TotalStorage Productivity Center Console, with the Device Manager, Replication Manager, and Performance Manager consoles, connecting to the Device Manager, Performance Manager, and Replication Manager co-servers running in WebSphere Application Server, which access the database through JDBC and manage ESS devices.)
Chapter 3.
Considerations
If you want the ESS, SAN Volume Controller, or FAStT storage subsystems to be managed using IBM TotalStorage Productivity Center for Disk, you must install the prerequisite I/O Subsystem Licensed Internal Code and CIM Agent for the devices. See Chapter 4, CIMOM installation and configuration on page 119 for more information. If you are installing the CIM agent for the ESS, you must install it on a separate machine from the Productivity Center for Disk and Productivity Center for Replication code. Note that IBM TotalStorage Productivity Center does not support zLinux on S/390 and does not support Windows domains.
3.1.1 Configurations
The storage management components of IBM TotalStorage Productivity Center can be installed on a variety of platforms. However, for the IBM TotalStorage Productivity Center suite, when all four manager components are installed on the same system, the only common platforms for the managers are:
- Windows 2000 Server with Service Pack 4
- Windows 2000 Advanced Server
- Windows 2003 Enterprise Server Edition

Note: Refer to the following Web sites for the updated support summaries, including specific software, hardware, and firmware levels supported.
http://www.storage.ibm.com/software/index.html
http://www.ibm.com/software/support/
If you are using the storage provisioning workflows, you must install IBM TotalStorage Productivity Center for Disk, IBM TotalStorage Productivity Center for Replication, and IBM TotalStorage Productivity Center for Fabric on the same machine. Because of processing requirements, we recommend you install IBM Tivoli Provisioning Manager on a separate Windows machine.
Hardware
- Dual Pentium 4 or Xeon 2.4 GHz or faster processors
- 4 GB of DRAM
- Network connectivity
- Subsystem Device Driver (SDD), for IBM TotalStorage Productivity Center for Fabric (optional)
- 80 GB available disk space
Database
The installation of DB2 Version 8.2 is part of the suite installer and is required by all the managers.
Table 3-2 TCP/IP ports for agent manager

Port value: 9511
Usage: Registering agents and resource managers; providing configuration updates

Port value: 9512
Usage: Renewing and revoking certificates; querying the registry for agent information; requesting ID resets; requesting updates to the certificate revocation list

Port value: 9513
Usage: Requesting agent manager information; downloading the truststore file

Port value: 80
Usage: Agent recovery service
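As a quick sanity check, the listed ports can be probed from a client machine. This sketch assumes a plain TCP connect test is acceptable; the host name you pass in stands for your own agent manager server.

```python
# Probe the agent manager's TCP ports with a simple connect test.
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_agent_manager(host):
    """Map each agent manager port from Table 3-2 to its reachability."""
    return {port: port_open(host, port) for port in (9511, 9512, 9513, 80)}
```

A `False` result for 9511 would, for example, explain agents failing to register with the agent manager.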
Usage:
- Tivoli NetView Object Collection facility socket
- Tivoli NetView Web server socket
- Tivoli NetView SnmpServer
4. Install and configure SNMP (Fabric requirement) 5. Identify any firewalls and obtain required authorization 6. Obtain the static IP addresses that will be used for the TotalStorage Productivity Center servers
Table 3-8 shows the user IDs used in our TotalStorage Productivity Center environment.

Table 3-8 User IDs used in an IBM TotalStorage Productivity Center environment

Suite Installer: user ID Administrator (a); new user: no; type: Windows; group(s): DirAdmin or DirSuper
DB2: user ID db2admin (a); new user: yes, will be created; type: Windows; usage: DB2 management and Windows Service Account
Agent Manager: user ID manager (b); new user: no; type: n/a - internal user; usage: used during the registration of a Resource Manager to the Agent Manager
Agent Manager: user ID AgentMgr (b); new user: no, default user; type: n/a - internal user; usage: used to authenticate agents and lock the certificate key files
Common agent: user ID itcauser (b); new user: no; type: Windows; usage: Windows Service Account
Productivity Center for Disk: user ID TPCSUID (a); new user: no; type: Windows; group(s): DirAdmin; usage: this ID is used to accomplish connectivity with the managed devices, that is, this ID has to be set up on the CIM agents
Fabric Manager: user ID (c); new user: no; type: Windows; usage: see Fabric Manager User IDs on page 51

a. This account can have whatever name you like.
b. This account name cannot be changed during the installation.
c. The DB2 administrator user ID and password are used here; see Fabric Manager User IDs on page 51.
Granting privileges
Grant privileges to the user ID used to install the IBM TotalStorage Productivity Center for Disk and Replication Base, IBM TotalStorage Productivity Center for Disk, and IBM TotalStorage Productivity Center for Replication. It is recommended that this user ID be the superuser ID. These user rights are governed by the local security policy and are not initially set as the defaults for administrators. They might not be in effect when you log on as the local administrator. If the IBM TotalStorage Productivity Center installation program does not detect the required user rights for the logged-on user name, the program can, optionally, set them. The program can set the local security policy settings to assign these user rights. Alternatively, you can manually set them prior to performing the installation. To manually set these privileges, select the following path and select the appropriate user:
1. Click Start -> Settings -> Control Panel.
2. Double-click Administrative Tools.
3. Double-click Local Security Policy; the Local Security Settings window opens.
4. Expand Local Policies.
5. Double-click User Rights Assignments to see the policies in effect on your system.

For each policy added to the user, perform the following steps:
1. Highlight the policy to be checked.
2. Double-click the policy and look for the user's name in the Assigned To column of the Local Security Policy Setting window to verify the policy setting.
3. Ensure that the Local Policy Setting and the Effective Policy Setting options are checked.

If the user name does not appear in the list for the policy, you must add the policy to the user. Perform the following steps to add the user to the list:
a) Click Add on the Local Security Policy Setting window.
b) In the Select Users or Groups window, highlight the user or group under the Name column.
c) Click Add to put the name in the lower window.
d) Click OK to add the policy to the user or group.
After these user rights are set (either by the installation program or manually), log off the system, and then log on again in order for the user rights to take effect. You can then restart the installation program to continue with the install of the IBM TotalStorage Productivity Center for Disk and Replication Base.
IBM Director
With Version 4.1, you no longer need to create an internal user account. All user IDs must be operating system accounts and members of one of the following:
- DirAdmin or DirSuper groups (Windows), diradmin or dirsuper groups (Linux)
- Administrator or Domain Administrator groups (Windows), root (Linux)
In addition to the above, there is a host authentication password that is used to allow managed hosts and remote consoles to communicate with IBM Director.
Click Add/Remove Windows Components. Clear the check box for Internet Information Services (IIS).
After setting the public community name, restart the SNMP service.
The DNS Suffix and NetBIOS Computer Name panel is displayed. Verify that the Primary DNS suffix field displays a domain name. The fully qualified host name must match the HOSTS file name (including case-sensitive characters).
Important: Also ensure that you do not have the Windows 2000 Terminal Services running. Go to the Services panel and check for Terminal Services.
Cloudscape database
If you install IBM TotalStorage Productivity Center for Fabric and specify the Cloudscape database, you will need the following user IDs and passwords:
- Agent manager name or IP address and password
- Common agent password to register with the agent manager
- Resource manager user ID and password to register with the agent manager
- WebSphere administrative user ID and password
- Host authentication password
- Tivoli NetView password only
DB2 database
If you install IBM TotalStorage Productivity Center for Fabric and specify the DB2 database, you will need the user IDs and passwords listed below:
- Agent manager name or IP address and password
- Common agent password to register with the agent manager
- Resource manager user ID and password to register with the agent manager
- DB2 administrator user ID and password
- DB2 user ID and password
- WebSphere administrative user ID and password
- Host authentication password only
- Tivoli NetView password only

Note: If you are running under Windows 2000, when the IBM TotalStorage Productivity Center for Fabric installation program asks for an existing user ID for WebSphere, that user ID must have the Act as part of the operating system user privilege.
WebSphere
To change the WebSphere user ID and password, follow this procedure:
1. Open the file: <install_location>\apps\was\properties\soap.client.props
2. Modify the following entries:
   com.ibm.SOAP.loginUserid=<user_ID> (enter a value for user_ID)
   com.ibm.SOAP.loginPassword=<password> (enter a value for password)
3. Save the file.
4. Run the following script:
   ChangeWASAdminPass.bat <user_ID> <password> <install_dir>
   Where <user_ID> is the WebSphere user ID and <password> is the password. <install_dir> is the directory where the manager is installed and is optional. For example, <install_dir> is c:\Program Files\IBM\TPC\Fabric\manager\bin\W32-ix86.
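The two property edits can also be scripted. This is a hedged sketch: the property names follow the procedure above, but the helper function name is invented, and you would point it at your own soap.client.props path.

```python
# Rewrite the com.ibm.SOAP.loginUserid / loginPassword entries in a
# soap.client.props file, leaving every other line untouched.
def set_was_credentials(props_path, user_id, password):
    updates = {
        "com.ibm.SOAP.loginUserid": user_id,
        "com.ibm.SOAP.loginPassword": password,
    }
    out = []
    with open(props_path) as f:
        for line in f:
            key = line.split("=", 1)[0].strip()
            if key in updates:
                out.append(f"{key}={updates[key]}\n")   # replace the value
            else:
                out.append(line)                        # keep the line as-is
    with open(props_path, "w") as f:
        f.writelines(out)
```

After running this against `<install_location>\apps\was\properties\soap.client.props`, you would still run ChangeWASAdminPass.bat as described in step 4.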
Security Considerations
Setting up security by using the demonstration certificates or by generating new certificates was an option that you specified when you installed the agent manager, as shown in Figure 3-49 on page 83. If you used the demonstration certificates, carry on with the installation. If you generated new certificates, follow this procedure:
1. Copy the manager CD image to your computer.
2. Copy the agentTrust.jks file from the agent manager (AgentManager/certs directory) to the /certs directory of the manager CD image. This will overwrite the existing agentTrust.jks file.
3. You can write a new CD image with the new file, or keep this image on your computer and point the suite installer to the directory when requested.
Note: Host names are case-sensitive. This is a WebSphere limitation. Check your host name.
The suite installer then launches the installation wizard for each manager you have chosen to install. If you are running the Fabric Manager install under Windows 2000, the Fabric Manager installation requires that the user ID have the Act as part of the operating system and Log on as a service user rights. Insert the IBM TotalStorage Productivity Center suite installer CD into the CD-ROM drive. If Windows autorun is enabled, the installation program should start automatically. If it does not, open Windows Explorer and go to the IBM TotalStorage Productivity Center CD-ROM drive. Double-click setup.exe. Note: It may take a few moments for the installer program to initialize. Be patient until the language selection panel in Figure 3-2 appears. The language panel is displayed. Select a language from the drop-down list. This is the language that is used for installing this product. Click OK as shown in Figure 3-2.
The Welcome to the InstallShield Wizard for The IBM TotalStorage Productivity Center panel is displayed. Click Next as shown in Figure 3-3 on page 59.
The Software License Agreement panel is displayed. Read the terms of the license agreement. If you agree with the terms of the license agreement, select the I accept the terms of the license agreement radio button. Click Next to continue as shown in Figure 3-4. If you do not accept the terms of the license agreement, the installation program will end without installing IBM TotalStorage Productivity Center.
The Select Type of Installation panel is displayed. Select Manager installations of Data, Disk, Fabric, and Replication and click Next to continue as shown in Figure 3-5.
The Select the Components panel is displayed. Select the components you want to install. Click Next to continue as shown in Figure 3-6.
Figure 3-6 IBM TotalStorage Productivity Center components

WinMgmt is a Windows service that needs to be stopped before proceeding with the install. If the service is running, you will see the panel in Figure 3-7 on page 61. Click Next to stop the services.
The window in Figure 3-8 will open. Click Next once again to stop WinMgmt. Note: You should stop this service prior to beginning the install of TotalStorage Productivity Center to prevent these windows from appearing.
The Prerequisite Software panel is displayed. The products will be installed in the order listed. Click Next to continue as shown in Figure 3-9 on page 62. In this example, the first prerequisites to be installed are DB2 and WebSphere.
Note: The installer will interrogate the server to determine what prerequisites are installed on the server and list what remains to be installed.
The DB2 User ID and Password panel is displayed. Accept the default user name or enter a new user ID and password. Click Next to continue as shown in Figure 3-11.
The Confirm Target Directories for DB2 panel is displayed. Accept the default directory or enter a target directory. Click Next to continue as shown in Figure 3-12 on page 64.
You will be prompted for the location of the DB2 installation image. Browse to the installation image or installer CD, select the required information, and click Install as shown in Figure 3-13.
Note: If you use the DB2 CD for this step, the Welcome to DB2 panel is displayed. Click Exit to exit the DB2 installation wizard; the suite installer will guide you through the DB2 installation. The Installing Prerequisites (DB2) panel is displayed with the word Installing on the right side of the panel. When the component is installed, a green arrow appears next to the component name (see Figure 3-14 on page 65). Wait for all the prerequisite programs to install, then click Next. Note: Depending on the speed of your machine, this can take 30 to 40 minutes.
After DB2 has installed a green check mark will appear next to the text DB2 Universal Database Enterprise Server Edition. The installer will start the install of WebSphere as shown in Figure 3-15.
After WebSphere has installed a green check mark will appear next to the text WebSphere Application Server. The installer will start the install of WebSphere Fixpack as shown in Figure 3-16 on page 66.
After DB2, WebSphere, and the WebSphere fix pack are installed, the DB2 Server installation was successful window opens (see Figure 3-18 on page 67). Click Next to continue.
The WebSphere Application Server installation was successful window opens (see Figure 3-19). Click Next to continue.
The location of the IBM Director install package panel is displayed. Enter the installation source or insert the CD-ROM and enter the CD drive location. Click Next as shown in Figure 3-21.
The next panel provides information about the IBM Director post-install reboot option. Note that you should choose the option to reboot later when prompted (see Figure 3-22 on page 69). Click Next to continue.
The IBM Director Server - InstallShield Wizard panel is displayed indicating that the IBM Director installation wizard will be launched. Click Next to continue (see Figure 3-23).
The License Agreement window opens next. Read the license agreement. Click I accept the terms in the license agreement radio button as shown in Figure 3-24 on page 70. Click Next to continue.
The next window is the advertisement for Enhance IBM Director with the new Server Plus Pack window (see Figure 3-25). Click Next to continue.
The Feature and installation directory window opens (see Figure 3-26 on page 71). Accept the default settings and click Next to continue.
The IBM Director service account information window opens (see Figure 3-27). Type the domain for the IBM Director system administrator. Alternatively, if there is no domain, then type the local host name (this is the recommended setup). Type a user name and password for IBM Director. The IBM Director will run under this user name and you will log on to the IBM Director console using this user name. Click Next to continue.
The Encryption settings window opens as shown in Figure 3-28 on page 72. Accept the default settings in the Encryption settings window. Click Next to continue.
In the Software Distribution settings window, accept the default values and click Next as shown in Figure 3-29. Note: The TotalStorage Productivity Center components do not use the software-distribution packages function of IBM Director.
The Ready to Install the Program window opens (see Figure 3-30 on page 73). Click Install to continue.
The Installing IBM Director server window reports the status of the installation as shown in Figure 3-31.
The Network driver configuration window opens. Accept the default settings and click OK to continue.
The secondary window closes and the installation wizard performs additional actions which are tracked in the status window. The Select the database to be configured window opens (see Figure 3-33). Select IBM DB2 Universal Database in the Select the database to be configured window. Click Next to continue.
The IBM Director DB2 Universal Database configuration window will open (see Figure 3-34). It might be behind the status window, and you must click it to bring it to the foreground.
In the Database name field, type a new database name for the IBM Director database table or type an existing database name. In the User ID and Password fields, type the DB2 user ID and password that you created during the DB2 installation. Click Next to continue.
Accept the default DB2 node name LOCAL - DB2 in the IBM Director DB2 Universal Database configuration secondary window as shown in Figure 3-35. Click OK to continue.
The Database configuration in progress window is displayed at the bottom of the IBM Director DB2 Universal Database configuration window. Wait for the configuration to complete and the secondary window to close. Click Finish as shown in Figure 3-36 on page 76 when the Install Shield Wizard Completed window opens.
Important: Do not reboot the machine at the end of the IBM Director installation. The suite installer will reboot the machine. Click No as shown in Figure 3-37.
Click Next to reboot the machine as shown in Figure 3-38 on page 77. Important: If the server does not reboot at this point, cancel the installer and reboot the server.
After the machine reboots, the installer will initialize. The Select the installation language to be used for this wizard window opens. Select the language and click OK to continue (see Figure 3-39).
Figure 3-39 IBM TotalStorage Productivity Center installation wizard language selection
The installation confirmation panel is displayed. Click Next as shown in Figure 3-40 on page 78.
The Package Location panel is displayed (see Figure 3-41). Select the installation source or CD-ROM drive and click Next. Note: If you specify the path for the installation source you must specify the path at the \win directory level.
The Tivoli Agent Manager Installer window opens (see Figure 3-42 on page 79). Click Next to continue.
The Install Shield wizard will start. Then you see the language installation option window in Figure 3-43. Select the required language and click OK.
The Software License Agreement window opens. Click I accept the terms of the license agreement to continue.
The confirm target directories panel is displayed. Enter the directory path or accept the default directory and click Next to continue as shown in Figure 3-45.
The DB2 information panel is displayed (see Figure 3-46). If you do not want to accept the defaults, enter the DB2 user name, DB2 port, and DB2 password, and click Next to continue.
The WebSphere Application Server Information panel is displayed. This panel lets you specify the host name or IP address, and the cell and node names on which to install the agent manager. If you specify a host name, use the fully qualified host name, for example, x330f03.almaden.ibm.com. If you use the IP address, use a static IP address. This value is used in the URLs for all agent manager services. Typically the cell and node names are both the same as the host name of the computer. If WebSphere was installed before you started the agent manager installation wizard, you can look up the cell and node name values in the %WAS_INSTALL_ROOT%\bin\setupCmdLine.bat file. You can also specify the ports used by the agent manager: registration (the default is 9511, for server-side SSL), secure communications (the default is 9512, for client authentication, two-way SSL), and public communication (the default is 9513). If you are using WebSphere Network Deployment or a customized deployment, make sure that the cell and node names are correct. For more information about WebSphere deployment, see your WebSphere documentation. Click Next as shown in Figure 3-47 on page 82.
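If WebSphere is already installed, the cell and node values described above can be read out of setupCmdLine.bat instead of guessed. The following sketch (not part of the product) parses that file; the WAS_CELL and WAS_NODE variable names are an assumption that should be verified against your WebSphere release:

```python
import re

def was_cell_and_node(setup_cmdline_text):
    """Extract the cell and node names from WebSphere's setupCmdLine.bat.

    Assumes the file defines them as 'SET WAS_CELL=...' and
    'SET WAS_NODE=...' (variable names may differ between WebSphere
    releases -- verify against your installation).
    """
    values = {}
    for match in re.finditer(r"(?im)^\s*SET\s+(WAS_CELL|WAS_NODE)=(\S+)",
                             setup_cmdline_text):
        values[match.group(1)] = match.group(2)
    return values.get("WAS_CELL"), values.get("WAS_NODE")

# On a single-server install, cell and node usually match the host name.
sample = "SET WAS_CELL=x330f03\r\nSET WAS_NODE=x330f03\r\n"
print(was_cell_and_node(sample))
```

In practice you would read the real file contents from %WAS_INSTALL_ROOT%\bin\setupCmdLine.bat and pass them to the function.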
The Security Certificates panel is displayed in Figure 3-49 on page 83. Specify whether to create new certificates or to use the demonstration certificates. In a typical production environment, create new certificates. The ability to use demonstration certificates is provided as a convenience for testing and demonstration purposes. Make a selection and click Next to continue.
The security certificate settings panel is displayed. Specify the certificate authority name, security domain, and agent registration password. The agent registration password is the password used to register the agents. You must provide this password when you install the agents. This password also sets the agent manager key store and trust store files. The domain name is used in the right-hand portion of the distinguished name (DN) of every certificate issued by the agent manager. It is the name of the security domain defined by the agent manager. Typically, this value is the registered domain name or contains the registered domain name. For example, for the computer system myserver.ibm.com, the domain name is ibm.com. This value must be unique in your environment. If you have multiple agent managers installed, this value must be different on each agent manager. The default agent registration password is changeMe; click Next as shown in Figure 3-50 on page 84.
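The domain-name rule of thumb above (myserver.ibm.com gives ibm.com) can be expressed as a small helper. This is only an illustrative sketch: it simply drops the leftmost label, which is not correct for every registered domain (for example, hosts under example.co.uk):

```python
def default_security_domain(fqdn):
    """Derive a candidate security-domain value from a host's FQDN
    by dropping the leftmost (host) label: myserver.ibm.com -> ibm.com.
    Heuristic only; the registered domain may span more labels.
    """
    labels = fqdn.strip(".").split(".")
    if len(labels) < 2:
        raise ValueError("expected a fully qualified host name")
    return ".".join(labels[1:])

print(default_security_domain("myserver.ibm.com"))  # ibm.com
```

Remember that whatever value you choose must be unique across all agent managers in your environment.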
The Preview Prerequisite Software Information panel is displayed. Click Next as shown in Figure 3-51.
The Summary Information for Agent Manager panel is displayed. Click Next as shown in Figure 3-52 on page 85.
The Installation of Agent Manager Completed panel is displayed. Click Finish as shown in Figure 3-53.
The Installation of Agent Manager Successful panel is displayed. Click Next to continue.
Important: There are three configuration tasks left to do:
- Start the Agent Manager service
- Set the service to start automatically
- Add a DNS entry for the Agent Recovery Service with the unqualified host name TivoliAgentRecovery and port 80
Tip: The Database created for the IBM Agent Manager is IBMCDB.
3.5.4 IBM TotalStorage Productivity Center for Disk and Replication Base
There are three separate installs:
- Install the IBM TotalStorage Productivity Center for Disk and Replication Base code
- Install IBM TotalStorage Productivity Center for Disk
- Install IBM TotalStorage Productivity Center for Replication
IBM TotalStorage Productivity Center for Disk and Replication Base must be installed by a user who is logged on as a local administrator (for example, as the administrator user) on the system where it will be installed. If you intend to install IBM TotalStorage Productivity Center for Disk and Replication Base as a server, you need the following system privileges, called user rights, to successfully complete the installation as described in User IDs and security on page 48:
- Act as part of the operating system
- Create a token object
- Increase quotas
- Replace a process level token
- Debug programs
The Package Location for Disk and Replication Manager window (Figure 3-54 on page 86) is displayed. Enter the appropriate information and click Next to continue.
Figure 3-55 Package location for Productivity Center Disk and Replication
The Information for Disk and Replication Manager panel is displayed. Click Next to continue as shown in Figure 3-56.
The Launch Disk and Replication Manager Base panel is displayed indicating that the Disk and Replication Manager installation wizard will be launched. Click Next to continue as shown in Figure 3-57 on page 88.
Figure 3-57 IBM TotalStorage Productivity Center for Disk and Replication Base welcome information
The confirm target directories panel is displayed. Enter the directory path or accept the default directory and click Next to continue as shown in Figure 3-58.
Figure 3-58 IBM TotalStorage Productivity Center for Disk and Replication Base Installation directory
The IBM WebSphere selection panel is displayed. Click Next to continue as shown in Figure 3-59.
If the installation user ID privileges were not set, an information panel stating that the privileges need to be set is displayed; click Yes to continue. At this point the installation terminates. Close the installer, log off and log back on, then restart the installer. Select the Typical radio button. Click Next to continue as shown in Figure 3-60 on page 90.
Figure 3-60 IBM TotalStorage Productivity Center for Disk and Replication Base type of installation
If the IBM Director Support Program and IBM Director Server services are still running, an information panel is displayed stating that the services will be stopped. Click Next to stop the running services as shown in Figure 3-61.
You must enter the name and password for the IBM TotalStorage Productivity Center for Disk and Replication Base super user ID in the IBM TotalStorage Productivity Center for Disk and Replication Base installation window. This user name must be defined to the operating system. Click Next to continue as shown in Figure 3-62.
Figure 3-62 IBM TotalStorage Productivity Center for Disk and Replication Base Superuser information
Enter the user name and password for the IBM DB2 Universal Database server, and click Next to continue as shown in Figure 3-63 on page 92.
Figure 3-63 IBM TotalStorage Productivity Center for Disk and Replication Base DB2 user information
If you selected IBM TotalStorage Productivity Center for Disk and Replication Base Server, then in the SSL Configuration window you must enter the fully qualified names of the two server key files that were generated previously, or that will be generated during or after the IBM TotalStorage Productivity Center for Disk and Replication Base installation. The information you enter is used later.
- Generate a self-signed certificate: select this option if you want the installer to automatically generate these certificate files (used for this installation).
- Defer the generation of the certificate as a manual post-installation task: select this option if you want to manually generate these certificate files after the installation, using the WebSphere Application Server ikeyman utility. In this case the next step, Generate Self-Signed Certificate, is skipped.
Fill in the key file and trust file passwords.
If you chose to have the installation program generate the certificate for you, the Generate Self-Signed Certificate window opens. After completing all the fields, click Next as shown in Figure 3-65.
Figure 3-65 IBM TotalStorage Productivity Center for Disk and Replication Base Certificate information
You are presented with the Create Local Database window. Enter the database name and click Next to continue as shown in Figure 3-66.
Note: The database name must be unique to IBM TotalStorage Productivity Center for Disk and Replication Base. You cannot share the IBM TotalStorage Productivity Center for Disk and Replication Base database with any other applications.
Figure 3-66 IBM TotalStorage Productivity Center for Disk and Replication Base Database name
The Preview window displays a summary of all of the choices that were made during the customizing phase of the installation. Click Install to complete the installation as shown in Figure 3-67 on page 95.
Figure 3-67 IBM TotalStorage Productivity Center for Disk and Replication Base Installer information
The Finish window opens. You can view the log file for any possible error messages. The log file is located in (installed directory)\logs\dmlog.txt and contains a trace of the installation actions. Click Finish to complete the installation. The post-install tasks information opens in Notepad. You should read the information and complete any required tasks.
The Package Location for IBM TotalStorage Productivity Center for Disk panel is displayed. Enter the appropriate information and click Next to continue as shown in Figure 3-70 on page 97.
The Launch IBM TotalStorage Productivity Center for Disk panel is displayed indicating that the IBM TotalStorage Productivity Center for Disk installation wizard will be launched (see Figure 3-70 on page 97). Click Next to continue.
The Productivity Center for Disk Installer - Welcome panel is displayed (see Figure 3-71). Click Next to continue.
Figure 3-71 IBM TotalStorage Productivity Center for Disk Installer Welcome
The confirm target directories panel is displayed. Enter the directory path or accept the default directory (see Figure 3-72 on page 98) and click Next to continue.
Chapter 3. TotalStorage Productivity Center suite installation
The IBM TotalStorage Productivity Center for Disk Installer - Installation Type panel opens (see Figure 3-73). Select the Typical install radio button and click Next to continue.
The database configuration panel opens. Accept the database name or enter a new database name, and click Next to continue as shown in Figure 3-74 on page 99.
Figure 3-74 IBM TotalStorage Productivity Center for Disk database name
Review the information on the IBM TotalStorage Productivity Center for Disk preview panel and click Install as shown in Figure 3-75.
Figure 3-75 IBM TotalStorage Productivity Center for Disk installation preview
The installer will create the required database (see Figure 3-76) and install the product. You will see a progress bar for the Productivity Center for Disk install status.
When the install is complete you will see the panel in Figure 3-77. You should review the post installation tasks. Click Finish to continue.
The Package Location for Replication Manager panel is displayed. Enter the appropriate information and click Next to continue. The Welcome window opens with suggestions about what documentation to review prior to installation. Click Next to continue as shown in Figure 3-79, or click Cancel to exit the installation.
The confirm target directories panel is displayed. Enter the directory path or accept the default directory and click Next to continue as shown in Figure 3-80.
Figure 3-80 IBM TotalStorage Productivity Center for Replication installation directory
The next panel (see Figure 3-81) asks you to select the install type. Select the Typical radio button and click Next to continue.
Enter a name for the new DB2 Hardware subcomponent database or accept the default; we recommend you accept the default. Click Next to continue as shown in Figure 3-82. Note: The database name must be unique to the Replication Manager subcomponent. You cannot share the Replication Manager subcomponent database with any other applications or with other Replication Manager subcomponents.
Figure 3-82 IBM TotalStorage Productivity Center for Replication hardware database name
Enter a name for the new Element Catalog subcomponent database or accept the default, and click Next to continue as shown in Figure 3-83 on page 104. Note: The database name must be unique to the Replication Manager subcomponent. You cannot share the Replication Manager subcomponent database with any other applications or with other Replication Manager subcomponents.
Figure 3-83 IBM TotalStorage Productivity Center for Replication element catalog database name
Enter a name for the new Replication Manager subcomponent database or accept the default, and click Next to continue as shown in Figure 3-84 on page 105. Note: The database name must be unique to the Replication Manager subcomponent. You cannot share the Replication Manager subcomponent database with any other applications or with other Replication Manager subcomponents.
Figure 3-84 IBM TotalStorage Productivity Center for Replication, Replication Manager database name
Select the required database tuning cycle in hours and click Next to continue as shown in Figure 3-85.
Figure 3-85 IBM TotalStorage Productivity Center for Replication database tuning cycle
Review the information on the IBM TotalStorage Productivity Center for Replication preview panel and click Install as shown in Figure 3-86.
Figure 3-86 IBM TotalStorage Productivity Center for Replication installation information
The Productivity Center for Replication Installer - Finish panel in Figure 3-87 will be displayed upon successful installation. Read the post installation tasks. Click Finish to complete the installation.
The InstallShield panel is displayed; read the information and click Next to continue. The Package Location for Productivity Center for Fabric Manager panel is displayed (see Figure 3-89 on page 108). Enter the appropriate information and click Next to continue. Important: The package location at this point is very important. If you used the demonstration certificates, point to the CD-ROM drive. If you generated new certificates, point to the manager CD image with the new agentTrust.jks file.
The language installation option panel is displayed. Select the required language and click OK as shown in Figure 3-90.
Figure 3-90 IBM TotalStorage Productivity Center for Fabric install wizard
The Welcome panel is displayed. Click Next to continue as shown in Figure 3-91 on page 109.
Figure 3-91 IBM TotalStorage Productivity Center for Fabric welcome information
Select the type of installation you want to perform (see Figure 3-92 on page 110). In this case we are installing the IBM TotalStorage Productivity Center for Fabric code. You can also use the suite installer to perform a remote deployment of the Fabric agent. This operation can be performed only if you have previously installed the common agent on a machine. For example, you might have installed the Data agent on the machines and want to add the Fabric agent to the same machines. You must have installed the Fabric Manager before you can deploy the Fabric agent. You cannot select both Fabric Manager Installation and Remote Fabric Agent Deployment at the same time; you can only select one option. Click Next to continue.
The confirm target directories panel is displayed. Enter the directory path or accept the default directory and click Next to continue as shown in Figure 3-93.
Figure 3-93 IBM TotalStorage Productivity Center for Fabric installation directory
The Port Number panel is displayed. This is a range of eight port numbers for use by IBM TotalStorage Productivity Center for Fabric. The first port number you specify is considered the primary port number; you only need to enter the primary port number. The primary port number and the next seven numbers will be reserved for use by IBM TotalStorage Productivity Center for Fabric. For example, if you specify port number 9550, IBM TotalStorage Productivity Center for Fabric will use port numbers 9550-9557. Ensure that the port numbers you use are not used by other applications at the same time. To determine which port numbers are in use on a particular computer, type either of the following commands from a command prompt (we recommend the first):
netstat -a
netstat -an
The port numbers in use on the system are listed in the Local Address column of the output. This field has the format host:port. Enter the primary port number as shown in Figure 3-94 and click Next to continue.
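As an alternative to scanning netstat output by hand, a short script can list the eight-port range and probe whether each port is currently bindable. This is an illustrative sketch, not part of the installer; a successful bind only shows that the port is free at that moment:

```python
import socket

def reserved_ports(primary):
    """The primary port plus the next seven, e.g. 9550 -> 9550-9557."""
    return list(range(primary, primary + 8))

def port_is_free(port, host="127.0.0.1"):
    """Try to bind the port; failure suggests another application owns it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

for port in reserved_ports(9550):
    print(port, "free" if port_is_free(port) else "IN USE")
```

Run this on the Fabric Manager server before the install; if any port in the range is taken, choose a different primary port number.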
Figure 3-94 IBM TotalStorage Productivity Center for Fabric port number
The Database choice panel is displayed. You can select DB2 or Cloudscape. If you select DB2, you must have previously installed DB2 on the server; DB2 is the recommended installation option. Click Next to continue as shown in Figure 3-95 on page 112.
Figure 3-95 IBM TotalStorage Productivity Center for Fabric database selection type
The next panel allows you to select the WebSphere Application Server to use in the install. In this installation we used the Embedded WebSphere Application Server. Click Next to continue as shown in Figure 3-96.
Figure 3-96 Productivity Center for Fabric WebSphere Application Server type selection
The Single or Multiple User ID and Password panel (using DB2) is displayed (see Figure 3-97 on page 113). If you selected DB2 as your database, you will see this panel. This panel allows you to use the DB2 administrative user ID and password for the DB2 user and WebSphere user. You can also use the DB2 administrative password for the host authentication and NetView password.
For example, if you selected all the choices in the panel, you will use the DB2 administrative user ID and password for the DB2 and WebSphere user ID and password. You will also use the DB2 administrative password for the host authentication and NetView password. If you select a choice, you will not be prompted for the user ID or password for each item you select. Note: If you selected Cloudscape as your database, this panel is not displayed. Click Next to continue.
Figure 3-97 IBM TotalStorage Productivity Center for Fabric user and password options
The User ID and Password panel (using DB2) is displayed. If you selected DB2 as your database, you will see this panel, which allows you to use the DB2 administrative user ID and password for DB2. Enter the required user ID and password, and click Next to continue as shown in Figure 3-98 on page 114.
Figure 3-98 IBM TotalStorage Productivity Center for Fabric database user information
Enter a name for the new database or accept the default, and click Next to continue as shown in Figure 3-99. Note: The database name must be unique. You cannot share the IBM TotalStorage Productivity Center for Fabric database with any other applications.
Figure 3-99 IBM TotalStorage Productivity Center for Fabric database name
Enter the database drive, and click Next to continue as shown in Figure 3-100 on page 115.
Figure 3-100 IBM TotalStorage Productivity Center for Fabric database drive information
The Agent Manager Information panel is displayed. You must provide the following information:
- Agent manager name or IP address: the name or IP address of your agent manager.
- Agent manager registration port: the port number of your agent manager.
- Agent registration password (twice): the password used to register the common agent with the agent manager, as shown in Figure 3-50 on page 84. If the password was not set and the default was accepted, the password is changeMe.
- Resource manager registration user ID: the user ID used to register the resource manager with the agent manager (the default is manager).
- Resource manager registration password (twice): the password used to register the resource manager with the agent manager (the default is password).
Fill in the information and click Next to continue as shown in Figure 3-101 on page 116.
Figure 3-101 IBM TotalStorage Productivity Center for Fabric agent manager information
The IBM TotalStorage Productivity Center for Fabric Install panel is displayed. This panel provides information about the location and size of the Fabric Manager. Click Next to continue as shown in Figure 3-102.
Figure 3-102 IBM TotalStorage Productivity Center for Fabric installation information
The Status panel is displayed. The installation can take about 15 to 20 minutes to complete. When the installation has completed, the Successfully Installed panel is displayed. Click Next to continue as shown in Figure 3-103 on page 117.
Figure 3-103 IBM TotalStorage Productivity Center for Fabric installation status
The install wizard Complete Installation panel is displayed. Do not restart your computer; select No, I will restart my computer later. Click Finish to complete the installation as shown in Figure 3-104.
Figure 3-104 IBM TotalStorage Productivity Center for Fabric restart options
The Install Status panel will be displayed indicating the Productivity Center for Fabric installation was successful. Click Next to continue as shown in Figure 3-105 on page 118.
Chapter 4.
4.1 Introduction
After you have completed the installation of TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication, you will need to install and configure the Common Information Model Object Manager (CIMOM) and Service Location Protocol (SLP) agents. Note: For the remainder of this chapter, we refer to TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication simply as TotalStorage Productivity Center. TotalStorage Productivity Center for Disk uses SLP as the method for CIM clients to locate managed objects. The CIM clients may have built-in or external CIM agents. When a CIM agent implementation is available for a supported device, the device may be accessed and configured by management applications using industry-standard XML-over-HTTP transactions. In this chapter we describe the steps for:
- Planning considerations for Service Location Protocol (SLP)
- SLP configuration recommendations
- General performance guidelines
- Planning considerations for CIMOM
- Installing and configuring the CIM agent for Enterprise Storage Server
- Verifying the connection to ESS
- Setting up the Service Location Protocol Directory Agent (SLP DA)
- Installing and configuring the CIM agent for the DS4000 family
- Configuring the CIM agent for SAN Volume Controller
SAs register automatically with any DAs they discover within a set of common scopes. Consequently, DAs within the UAs' scopes reduce multicast. By eliminating multicast for normal UA requests, delays and time-outs are eliminated. DAs act as a focal point for SA and UA activity. Deploying one or several DAs for a collection of scopes provides a centralized point for monitoring SLP activity. You may consider using DAs in your enterprise if any of the following conditions are true:
- Multicast SLP traffic exceeds 1% of the bandwidth on your network, as measured by snoop.
- UA clients experience long delays or time-outs during multicast service requests.
- You want to centralize monitoring of SLP service advertisements for particular scopes on one or several hosts. You can deploy any number of DAs for a particular scope or scopes, depending on the need to balance the load.
- Your network does not have multicast enabled and consists of multiple subnets that must share services.
The configuration of an SLP DA is particularly recommended when there are more than 60 SAs that need to respond to any given multicast service request.
Router configuration
Configure the routers in the network to enable general multicasting or to allow multicasting for the SLP multicast address and port, 239.255.255.253, port 427. The routers of interest are those associated with subnets that contain one or more storage devices that are to be discovered and managed by TotalStorage Productivity Center for Disk. To configure your router hardware and software, refer to your router reference and configuration documentation. Attention: Routers are sometimes configured to prevent passing of multicast packets between subnets; routers configured this way prevent discovery of systems between subnets using multicasting. Routers can also be configured to restrict the minimum multicast TTL (time-to-live) for packets they pass between subnets, which can result in the need to set the multicast TTL higher to discover systems on the other subnets of the router. The multicast TTL controls the time-to-live for the multicast discovery packets. This value typically corresponds to the number of times a packet is forwarded between subnets, allowing control of the scope of subnets discovered. Multicast discovery does not discover Director V1.x systems or systems using TCP/IP protocol stacks that do not support multicasting (for example, some older Windows 3.x and Novell 3.x TCP/IP implementations).
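For reference, the SLP address above lies in the administratively scoped multicast block (239.0.0.0/8, RFC 2365), which routers often do not forward by default. A quick sketch using Python's standard ipaddress module confirms the classification:

```python
import ipaddress

# SLP well-known multicast address and port, as used above.
SLP_MULTICAST = ipaddress.ip_address("239.255.255.253")
SLP_PORT = 427
# RFC 2365 administratively scoped block; routers commonly confine it.
ADMIN_SCOPED = ipaddress.ip_network("239.0.0.0/8")

# SLP's address is multicast and administratively scoped, so routers must
# forward it (with a sufficient TTL) for cross-subnet discovery to work.
print(SLP_MULTICAST.is_multicast)     # True
print(SLP_MULTICAST in ADMIN_SCOPED)  # True
print(f"allow port {SLP_PORT} for SLP between the relevant subnets")
```

This does not replace router configuration; it only documents why the 239.255.255.253/427 rule is needed on every router between TotalStorage Productivity Center for Disk and the storage devices.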
You can install the CIM agent code on the same server that hosts the device management interface, or on a separate server. Attention: At this time only a few devices come with an integrated CIM Agent; most devices need an external CIMOM for CIM-enabled management applications (CIM Clients) to be able to communicate with the device. For ease of installation, IBM provides an ICAT (short for Integrated Configuration Agent Technology), a bundle that mainly includes the CIMOM, the device provider, and an SLP SA.
You must have a CIMOM-supported firmware level on the storage devices. If you have an incorrect firmware version, you may not be able to discover and manage the storage devices. The data traffic between the CIMOM agent and the device can be very high, especially during performance data collection, so a dedicated server for the CIMOM agent is recommended, although you may configure the same CIMOM agent for multiple devices of the same type. You should also plan to locate this server in the same data center as the storage devices, in consideration of firewall port requirements: it is best practice to minimize firewall port openings between the data center and the external network. If you consolidate the CIMOM servers within the data center, you may need to open firewall ports only for TotalStorage Productivity Center for Disk communication with the CIMOM.
Co-locating CIM agent instances of differing types on the same server is not recommended because of resource contention. It is strongly recommended to have separate, dedicated servers for the CIMOM agents and the TotalStorage Productivity Center server, because of resource contention, TCP/IP port requirements, and system services co-existence.
This section provides an overview of the installation and configuration of the ESS CIM Agent on a Windows 2000 Advanced Server operating system.
Figure 4-3 ESS CLI install requirement for ESS CIM Agent
Attention: If you are upgrading from a previous version of the ESS CIM Agent, you must uninstall the ESS CLI software that was required by the previous CIM Agent and reinstall the latest ESS CLI software; the minimum required ESS CLI level is 2.4.0.236. Perform the following steps to install the ESS CLI for Windows: Insert the CD for the ESS CLI in the CD-ROM drive, run the setup, and follow the instructions as shown in Figure 4-4 on page 126 through Figure 4-7 on page 127. Note: The ESS CLI installation wizard detects whether you have an earlier level of the ESS CLI software installed on your system and uninstalls the earlier level. After you uninstall the previous version, you must restart the ESS CLI installation program to install the current level of the ESS CLI.
Reboot your system before proceeding with the ESS CIM Agent installation. You must do this because the ESS CLI depends on environment variable settings that do not take effect for the ESS CIM Agent until the system is restarted, since the CIM Agent runs as a service.
Verify that the ESS CLI is installed: Click Start > Settings > Control Panel. Double-click the Add/Remove Programs icon. Verify that there is an IBM ESS CLI entry. Verify that the ESS CLI is operational and can connect to the ESS. For example, from a command prompt window, issue the following command: esscli -u itso -p itso13sj -s 9.43.226.43 list server Where: 9.43.226.43 represents the IP address of the Enterprise Storage Server itso represents the Enterprise Storage Server Specialist user name itso13sj represents the Enterprise Storage Server Specialist password for the user name Figure 4-8 shows the response from the esscli command.
Use a Command Prompt or Windows Explorer to change to the Windows directory on the CD. If you are using a Command Prompt window, run setup.exe. If you are using Windows Explorer, double-click the setup.exe file.
Note: If you are using CIMOM code downloaded from the IBM Web site rather than the distribution CD, make sure you use a short Windows directory pathname. Executing setup.exe from a long pathname may fail. An example of a short pathname is C:\CIMOM\setup.exe.
The Welcome window opens suggesting what documentation you should review prior to installation. Click Next to continue (see Figure 4-10 on page 130).
The License Agreement window opens. Read the license agreement information. Select I accept the terms of the license agreement, then click Next to accept the license agreement (see Figure 4-11 on page 131).
The Destination Directory window opens. Accept the default directory and click Next (see Figure 4-12 on page 132).
The Updating CIMOM Port window opens (see Figure 4-13 on page 133). Click Next to accept the default port if it is available and free in your environment. For our ITSO setup we used the default port, 5989. Note: If the default port is already in use by another application, modify the default port and click Next. Use the following command to check which ports are in use: netstat -a
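If you prefer a scriptable alternative to scanning the netstat -a output by eye, a quick bind test shows whether the default CIMOM port is free. This is a hypothetical helper sketch, not part of the product:

```python
import socket

def port_is_free(port, host="127.0.0.1"):
    """Return True if we can bind to the port, that is, nothing is listening on it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            # bind fails with "address in use" when another service owns the port
            return False

# The ITSO setup used the default secure CIMOM port:
if not port_is_free(5989):
    print("Port 5989 is in use - choose another port in the wizard")
```

Run this on the server before the wizard's port panel; if it reports the port in use, pick a different value as the Note above describes.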
The Installation Confirmation window opens (see Figure 4-14 on page 134). Click Install to confirm the installation location and file size.
The Installation Progress window opens (see Figure 4-15 on page 135) indicating how much of the installation has completed.
When the Installation Progress window closes, the Finish window opens (see Figure 4-16 on page 136). Check the View post installation tasks check box if you want to continue with post installation tasks when the wizard closes. We recommend you review the post installation tasks. Note: Before proceeding, you might want to review the log file for any error messages. The log file is located in xxx\logs\install.log, where xxx is the destination directory where the ESS CIM Agent for Windows is installed.
Click Finish to exit the installation wizard (see Figure 4-17 on page 137).
If SLP is not started, right-click the SLP and select Start from the pop-up menu. Wait for the Status column to be changed to Started.
If the IBM CIM Object Manager is not started, right-click the IBM CIM Object Manager ESS and select Start from the pop-up menu. Wait for the Status column to change to Started. If you are able to perform all of the verification tasks successfully, the ESS CIM Agent has been successfully installed on your Windows system. Next, perform the configuration tasks.
Type the command addess <ip> <user> <password> for each ESS (as shown in Figure 4-21 on page 141), where: <ip> represents the IP address of a cluster of the Enterprise Storage Server <user> represents the Enterprise Storage Server Specialist user name
<password> represents the Enterprise Storage Server Specialist password for the user name
Important: The ESS CIM agent relies on ESS CLI connectivity from the ESS CIMOM server to the ESS devices. Make sure that the ESS devices you are registering are reachable and available at this point. It is recommended to verify this by launching the ESS Specialist browser from the ESS CIMOM server; log on to both ESS clusters for each ESS to make sure you are authenticated with the correct ESS passwords and IP addresses. If the ESSs are on a different subnet than the ESS CIMOM server and behind a firewall, you must authenticate through the firewall before registering the ESS with the CIMOM. If you have a bi-directional firewall between the ESS devices and the CIMOM server, verify the connection using the rsTestConnection command of the ESS CLI code. If the ESS CLI connection is not successful, you must authenticate through the firewall in both directions, that is, from the ESS to the CIMOM server and also from the CIMOM server to the ESS. Once you are satisfied that you can authenticate and receive an ESS CLI heartbeat from all the ESSs successfully, you may proceed with entering the ESS IP addresses. If the CIMOM agent fails to authenticate with the ESSs, it will not start up properly and may be very slow, because it retries the authentication.
Repeat the previous step for each additional ESS device that you want to configure.
Close the setdevice interactive session by typing exit. Once you have defined all the ESS servers, you must stop and restart the CIMOM so that it initializes the information for the ESS servers. Note: Because the CIMOM collects and caches the information from the defined ESS servers at startup time, starting the CIMOM might take longer the next time you start it.
Attention: If the user name and password entered are incorrect, or the ESS CIM agent cannot connect to the ESS, an error occurs and the ESS CIM Agent will not start and stop correctly. Use the following command to remove the ESS entry that is causing the problem, then reboot the server: rmess <ip> Whenever you add or remove an ESS from the CIMOM registration, you must restart the CIMOM to pick up the updated ESS device list.
Restart the CIMOM by selecting Start > Programs > IBM TotalStorage CIM Agent for ESS > Start CIMOM service. A Command Prompt window opens to track the progress of the starting of the CIMOM. If the CIMOM has started successfully, the message shown in Figure 4-23 on page 143 is displayed:
Note: The restarting of the CIMOM may take a while because it is connecting to the defined ESS servers and is caching that information for future use.
Note: The users that you configure to have authority to use the CIMOM are uniquely defined to the CIMOM software and have no required relationship to operating system user names, ESS Specialist user names, or ESS Copy Services user names. Open a Command Prompt window and change directory to the ESS CIM Agent directory, for example C:\Program Files\IBM\cimagent. Type the command setuser -u superuser -p passw0rd at the command prompt to start the setuser interactive session to identify users to the CIMOM. Type the command adduser cimuser cimpass in the setuser interactive session to define new users. Where cimuser represents the new user name to access the ESS CIM Agent CIMOM
Chapter 4. CIMOM installation and configuration
cimpass represents the password for the new user name to access the ESS CIM Agent CIMOM
Close the setuser interactive session by typing exit. For our ITSO Lab setup we used TPCSUID as the superuser and ITSOSJ as the password.
Verify that the CIMOM has a dependency on SLP; this is configured automatically when you install the CIM agent software. Verify it by selecting Start > Settings > Control Panel. Double-click the Administrative Tools icon. Double-click the Services icon, then select Service Location Protocol as shown in Figure 4-25.
Click Properties and select the Dependencies tab as shown in Figure 4-26 on page 146. Ensure that IBM CIM Object Manager has a dependency on Service Location Protocol (this should be the case by default).
Verify the CIMOM registration with SLP by selecting Start > Programs > TotalStorage CIM Agent for ESS > Check CIMOM Registration. A window opens displaying the wbem services as shown in Figure 4-27. These services have either registered themselves with SLP or you have explicitly registered them with SLP using slptool. If you changed the default ports for a CIMOM during installation, the port number should be listed here correctly. It may take some time for a CIM Agent to register with SLP.
Note: If the verification of the CIMOM registration is not successful, stop and restart the SLP and CIMOM services. Note that the ESS CIMOM attempts to contact each ESS registered to it; the startup may therefore take some time, especially if it is not able to connect and authenticate to any of the registered ESSs. Use the verifyconfig -u superuser -p passw0rd command, where superuser is the user name and passw0rd is the password for the user name that you configured to manage the CIMOM, to locate all WBEM services in the local network. You need to define the TotalStorage Productivity Center for Disk superuser name and password for TotalStorage Productivity Center for Disk to have the authority to manage the CIMOM.
The verifyconfig command checks the registration for the ESS CIM Agent and checks that it can connect to the ESSs. In the ITSO Lab we had configured two ESSs (as shown in Figure 4-28).
If you still have problems, refer to the IBM TotalStorage Enterprise Storage Server Application Programming Interface Reference for an explanation and resolution of the error messages. You can find this guide in the doc directory at the root of the CIM Agent CD. Figure 4-30 shows the location of the installation guide in the doc directory of the CD.
Another method to verify that ESS CIMOM is up and running is to use the CIM Browser interface. For Windows machines change the working directory to c:\Program Files\ibm\cimagent and run startcimbrowser. The WBEM browser in Figure 4-32 on page 149 will appear. The default user name is superuser and the default password is passw0rd. If you have already changed it, using the setuser command, the new userid and
password must be provided. This should be set to the TotalStorage Productivity Center for Disk userid and password.
When login is successful, you should see a panel like the one in Figure 4-33.
Note: The CIMOM process might not start automatically when you restart the SLP daemon. After you execute the stopcimom and startcimom commands shown below, you should get a response that it has stopped and started successfully. CIMOM startup takes considerable time if you have configured many ESSs. To ensure that it has started and is listening, you can check the cimom.log file as shown in Figure 4-29 on page 147. You should see a message such as CMMOMxxxx server waiting for connections...
service:wbem:https://9.43.226.237:5989,en,65535
# use default scopes: scopes=test1,test2
description=SVC CIMOM Open Systems Lab, Cottle Road
authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
creation_date=04/02/20

service:wbem:https://9.11.209.188:5989,en,65535
# use default scopes: scopes=test1,test2
description=SVC CIMOM Tucson L2 Lab
authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
creation_date=04/02/20

#service:wbem:https://9.42.164.175:5989,en,65535
# use default scopes: scopes=test1,test2
#description=SVC CIMOM Raleigh SAN Central
#authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
#creation_date=04/02/20
#----------------------------------------------------------------------------
# Register Service - SANFS CIMOMS
#----------------------------------------------------------------------------
#service:wbem:https://9.82.24.66:5989,en,65535
#Additional parameters for setting the appropriate namespace values
#CIM_InteropSchemaNamespace=root/cimv2
#Namespace=root/cimv2
# use default scopes: scopes=test1,test2
#description=SANFS CIMOM Gaithersburg ATS Lab
#authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
#creation_date=04/02/20

#service:wbem:https://9.11.209.148:5989,en,65535
#Additional parameters for setting the appropriate namespace values
#CIM_InteropSchemaNamespace=root/cimv2
#Namespace=root/cimv2
# use default scopes: scopes=test1,test2
#description=SANFS CIMOM Tucson L2 Lab
#authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
#creation_date=04/02/20
#----------------------------------------------------------------------------
# Register Service - FAStT CIMOM
#----------------------------------------------------------------------------
#service:wbem:https://9.1.39.65:5989,en,65535
#CIM_InteropSchemaNamespace=root/lsissi
#ProtocolVersion=0
#Namespace=root/lsissi
# use default scopes: scopes=test1,test2
#description=FAStT700 CIMOM ITSO Lab, Almaden
#authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
#creation_date=04/02/20
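The listing above follows the slp.reg file format: each registration starts with a service: URL line, lines beginning with # are comments or disabled entries, and the attribute lines that follow belong to the preceding URL. A small hypothetical helper to pull out just the active service URLs from such a file might look like this:

```python
def active_slp_services(reg_text):
    """Return the service: URLs of uncommented registrations in slp.reg content."""
    services = []
    for line in reg_text.splitlines():
        line = line.strip()
        # Lines starting with '#service:' are registrations that have been
        # disabled by commenting them out, so they are skipped here.
        if line.startswith("service:"):
            services.append(line)
    return services

# A fragment of the listing above (one active and one disabled SVC CIMOM entry):
sample = """\
service:wbem:https://9.43.226.237:5989,en,65535
# use default scopes: scopes=test1,test2
description=SVC CIMOM Open Systems Lab, Cottle Road
#service:wbem:https://9.42.164.175:5989,en,65535
#description=SVC CIMOM Raleigh SAN Central
"""
print(active_slp_services(sample))
# ['service:wbem:https://9.43.226.237:5989,en,65535']
```

Only the uncommented registration is returned, which mirrors what SLP itself advertises from this file.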
Attention: Whenever you update the SLP configuration as shown above, you may have to stop and start the slpd daemon so that SLP registers and listens on the newly configured ports. Also, whenever you restart the SLP daemon, ensure that the IBM ESS CIMOM agent has also restarted; otherwise, issue the startcimom.bat command as shown in the previous steps. Another alternative is to reboot the CIMOM server. Note that the ESS CIMOM startup takes a longer time.
Note: The panel shows the connection status of all the connections attempted earlier, either successful or failed. You can delete failed connections and clean up this panel manually. To verify and re-confirm a connection, select the respective connection status and click Properties. Figure 4-37 on page 155 shows the properties panel, where you can verify the user name and password information. The namespace, user name, and password are picked up automatically, so they do not need to be entered manually. This is the same user name and password you configured in the earlier steps with the setuser command, and it is the user name TotalStorage Productivity Center for Disk uses to log on to the CIMOM. If you have problems getting a successful connection, you can manually enter the namespace as /root/ibm along with your CIMOM user name and password.
Click the Test Connection button. You should see a panel similar to Figure 4-38, showing that the connection is successful.
At this point TotalStorage Productivity Center for Disk has registered the ESS CIMOM and is ready for device discovery.
Scrolling down the same Web page, we found the link for the DS 4000 CIMOM code shown in Figure 4-40 on page 157. This link leads to the Engenio provider Web site. The current supported code level is 1.0.59, as indicated on the Web page.
From the Web site select the operating system used for the server on which the IBM DS family CIM Agent will be installed. You will download a setup.exe file. Save it to a directory on the server you will be installing the DS 4000 CIM Agent on (see Figure 4-41 on page 158).
Launch the setup.exe file to begin the DS 4000 family CIM agent installation. The InstallShield Wizard for LSI SMI-S Provider window opens (see Figure 4-42). Click Next to continue.
The LSI License Agreement window opens next. If you agree with the terms of the license agreement, click Yes to accept the terms and continue the installation (see Figure 4-43 on page 159).
The LSI System Info window opens. The minimum requirements are listed along with the install system disk free space and memory attributes as shown in Figure 4-44. If the install system fails the minimum requirements evaluation, then a notification window will appear and the installation will fail. Click Next to continue.
The Choose Destination Location window appears. Click Browse to choose another location or click Next to begin the installation of the FAStT CIM agent (see Figure 4-45 on page 160).
The InstallShield Wizard will prepare and copy the files into the destination directory. See Figure 4-46.
The README will appear after the files have been installed. Read through it to become familiar with the most current information (see Figure 4-47 on page 161). Click Next when ready to continue.
In the Enter IPs and/or Hostnames window enter the IP addresses and hostnames of the FAStT devices this FAStT CIM agent will manage as shown in Figure 4-48.
Use the Add New Entry button to add the IP addresses or hostnames of the FAStT devices that this FAStT CIM agent will communicate with. Enter one IP address or hostname at a time until all the FAStT devices have been entered and click Next (see Figure 4-49 on page 162).
Do not enter the IP address of a FAStT device in multiple FAStT CIM Agents within the same subnet. This may cause unpredictable results on the TotalStorage Productivity Center for Disk server and could cause a loss of communication with the FAStT devices. If the list of hostnames or IP addresses has been previously written to a file, use the Add File Contents button which will open the Windows Explorer. Locate and select the file and then click Open to import the file contents. When all the FAStT device hostnames and IP addresses have been entered, click Next to start the SMI-S Provider Service (see Figure 4-50).
When the Service has started, the installation of the FAStT CIM agent is complete (see Figure 4-51 on page 163).
Arrayhosts File
The installer creates a file called %installroot%\SMI-SProvider\wbemservices\cimom\bin\arrayhosts.txt, as shown in Figure 4-52. In this file the IP addresses of installed DS 4000 units can be reviewed, added, or edited.
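Because the same DS 4000 device must not be registered with more than one FAStT CIM agent on the same subnet (see the warning earlier in this section), a quick cross-check of the arrayhosts.txt files from each agent can catch accidental duplicates. This is a hypothetical sketch assuming one address or hostname per line, as the file shown above uses:

```python
def duplicate_hosts(agent_files):
    """agent_files maps a CIM agent name to the lines of its arrayhosts.txt.
    Returns the hosts that appear under more than one agent."""
    seen = {}
    for agent, lines in agent_files.items():
        for host in (line.strip() for line in lines):
            if host:
                seen.setdefault(host, set()).add(agent)
    # Keep only hosts registered with two or more agents.
    return {h: sorted(a) for h, a in seen.items() if len(a) > 1}

# Illustrative data (the agent names and addresses are made up):
agents = {
    "cimagent1": ["9.1.39.65", "9.1.39.66"],
    "cimagent2": ["9.1.39.66"],  # same FAStT registered twice - a problem
}
print(duplicate_hosts(agents))
# {'9.1.39.66': ['cimagent1', 'cimagent2']}
```

Any host reported here should be removed from all but one agent's arrayhosts.txt before restarting the SMI-S Provider Service.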
Important: You cannot have the FAStT management password set if you are using IBM TotalStorage Productivity Center.
At this point you can run the following command on the SLP DA server to verify that the DS 4000 family FAStT CIM agent is registered with the SLP DA: slptool findsrvs wbem The response from this command shows the available services, which you can verify.
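The service URLs returned by slptool findsrvs wbem have the form service:wbem:https://host:port followed by a lifetime value. To check programmatically that an expected CIMOM host and port appears in that output, the URLs can be picked apart like this (an illustrative sketch, not part of the product):

```python
import re

def parse_wbem_url(url):
    """Extract (scheme, host, port) from an SLP service:wbem URL such as
    'service:wbem:https://9.1.39.65:5989,65535'. Returns None if it
    does not look like a wbem service URL."""
    m = re.match(r"service:wbem:(https?)://([^:/]+):(\d+)", url)
    if not m:
        return None
    scheme, host, port = m.groups()
    return scheme, host, int(port)

print(parse_wbem_url("service:wbem:https://9.1.39.65:5989,65535"))
# ('https', '9.1.39.65', 5989)
```

Feeding each line of the slptool output through this function gives a host/port list that can be compared against the CIM agents you expect to be registered.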
The connection status is indicated in the first line, with IP address 9.1.38.79, port 5988, and a status of Success.
Note: The panel shows the connection status of all the connections attempted earlier, either successful or failed. You can delete failed connections and clean up this panel manually. To verify and re-confirm a connection, select the respective connection status and click Properties. Figure 4-55 shows the properties panel, where you can verify the user name and password information. The namespace, user name, and password are picked up automatically, so they do not need to be entered manually. If you have problems getting a successful connection, you can manually enter the namespace as /root/lsissi along with your CIMOM user name and password.
Click the Test Connection button. You should see a panel similar to Figure 4-56 on page 166, showing that the connection is successful.
At this point TotalStorage Productivity Center for Disk has registered the DS 4000 CIMOM and is ready for device discovery.
Figure 4-57 TotalStorage Productivity Center for Disk and SVC communication
For additional details on how to configure the SAN Volume Controller Console refer to the redbook IBM TotalStorage Introducing the SAN Volume Controller, SG24-6423. To discover and manage the SAN Volume Controller, we need to ensure that our TotalStorage Productivity Center for Disk superuser name and password (the account we
specify in the TotalStorage Productivity Center for Disk configuration panel as shown in Figure 4-58) match an account defined on the SAN Volume Controller console; in our case we implemented the user name TPCSUID and password ITSOSJ. You may want to adopt a similar nomenclature and set up the user name and password on each SAN Volume Controller CIMOM to be monitored with TotalStorage Productivity Center for Disk.
4.9.1 Adding the SVC TotalStorage Productivity Center for Disk user account
As stated previously, you should implement a unique userid to manage the SAN Volume Controller devices in TotalStorage Productivity Center for Disk. This can be achieved at the SAN Volume Controller console using the following steps: 1. Log in to the SAN Volume Controller console with a superuser account. 2. Click Users under My Work on the left side of the panel (see Figure 4-59 on page 168).
3. Select Add a user in the drop-down under Users panel and click Go (see Figure 4-60).
5. Enter the User Name and Password and click Next (see Figure 4-62 on page 170).
6. Select your candidate cluster and move it to the right under Administrator Clusters (see Figure 4-63). Click Next to continue.
7. Click Next after you Assign service roles (see Figure 4-64 on page 171).
8. Click Finish after you Verify user roles (see Figure 4-65 on page 172).
9. After you click Finish, the Viewing users panel opens (see Figure 4-66).
4.10 Configuring CIMOM for TotalStorage Productivity Center for Disk summary
TotalStorage Productivity Center for Disk discovers both IBM storage devices that comply with the Storage Management Initiative Specification (SMI-S) and SAN devices such as switches, ports, and hosts. SMI-S-compliant storage devices are discovered using the Service Location Protocol (SLP). The TotalStorage Productivity Center for Disk server software performs SLP discovery on the network: the User Agent looks for all registered services with a service type of service:wbem. TotalStorage Productivity Center for Disk performs the following discovery tasks:
- Locates individual storage devices
- Retrieves vital characteristics for those storage devices
- Populates the TotalStorage Productivity Center for Disk internal databases with the discovered information
TotalStorage Productivity Center for Disk can also access storage devices through the CIM Agent software. Each CIM Agent can control one or more storage devices. After the CIMOM services have been discovered through SLP, TotalStorage Productivity Center for Disk contacts each of the CIMOMs directly to retrieve the list of storage devices controlled by each CIMOM and gathers the vital characteristics of each of these devices. For TotalStorage Productivity Center for Disk to communicate successfully with the CIMOMs, the following conditions must be met:
- A common user name and password must be configured for all the CIM Agent instances that are associated with storage devices that are discoverable by TotalStorage Productivity Center for Disk (use adduser as described in 4.6.4, CIMOM User Authentication on page 143). That same user name and password must also be configured for TotalStorage Productivity Center for Disk using the Configure MDM task in the TotalStorage Productivity Center for Disk interface. If a CIMOM is not configured with the matching user name and password, it is impossible to determine which devices the CIMOM supports; as a result, no devices for that CIMOM appear in the IBM Director Group Content pane.
- The CIMOM service must be accessible through the IP network.
- The TCP/IP network configuration on the host where TotalStorage Productivity Center for Disk is installed must include in its list of domain names all the domains that contain storage devices that are discoverable by TotalStorage Productivity Center for Disk.
It is important to verify that the CIMOM is up and running.
To do that, use the following command from the TotalStorage Productivity Center for Disk server: telnet CIMip port Where CIMip is the IP address where the CIM Agent runs and port is the port value used for the communication (5989 for a secure connection, 5988 for an unsecure connection).
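The same telnet check can be scripted. This hedged Python equivalent simply attempts a TCP connection to the CIM Agent port and reports whether it is accepting connections (the address shown is a placeholder, not a real CIMOM):

```python
import socket

def cimom_reachable(host, port=5989, timeout=5.0):
    """Return True if a TCP connection to the CIM Agent port succeeds.
    Port 5989 is the secure default, 5988 the unsecure one."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, unreachable host, or timeout all mean the
        # CIMOM is not accepting connections on this port.
        return False

# Example with a placeholder address - substitute your CIM Agent host:
# cimom_reachable("9.43.226.237", 5989)
```

As with telnet, a successful connection only proves the port is open; it does not verify the CIMOM user name and password, which the verifyconfig command covers.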
#----------------------------------------------------------------------------
# Register Service - SVC CIMOMS
#----------------------------------------------------------------------------
service:wbem:https://9.43.226.237:5989,en,65535
# use default scopes: scopes=test1,test2
description=SVC CIMOM Open Systems Lab, Cottle Road
authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
creation_date=04/02/20

service:wbem:https://9.11.209.188:5989,en,65535
# use default scopes: scopes=test1,test2
description=SVC CIMOM Tucson L2 Lab
authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
creation_date=04/02/20

#service:wbem:https://9.42.164.175:5989,en,65535
# use default scopes: scopes=test1,test2
#description=SVC CIMOM Raleigh SAN Central
#authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
#creation_date=04/02/20
#----------------------------------------------------------------------------
# Register Service - SANFS CIMOMS
#----------------------------------------------------------------------------
#service:wbem:https://9.82.24.66:5989,en,65535
#Additional parameters for setting the appropriate namespace values
#CIM_InteropSchemaNamespace=root/cimv2
#Namespace=root/cimv2
# use default scopes: scopes=test1,test2
#description=SANFS CIMOM Gaithersburg ATS Lab
#authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
#creation_date=04/02/20

#service:wbem:https://9.11.209.148:5989,en,65535
#Additional parameters for setting the appropriate namespace values
#CIM_InteropSchemaNamespace=root/cimv2
#Namespace=root/cimv2
# use default scopes: scopes=test1,test2
#description=SANFS CIMOM Tucson L2 Lab
#authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
#creation_date=04/02/20
#----------------------------------------------------------------------------
# Register Service - FAStT CIMOM
#----------------------------------------------------------------------------
#service:wbem:https://9.1.39.65:5989,en,65535
#CIM_InteropSchemaNamespace=root/lsissi
#ProtocolVersion=0
#Namespace=root/lsissi
# use default scopes: scopes=test1,test2
#description=FAStT700 CIMOM ITSO Lab, Almaden
#authors=Aliprandi,Andrews,Cooper,Eggli,Lovelace,Zerbini
#creation_date=04/02/20
Chapter 5.
Alternatively, access IBM Director from Windows: Start > Programs > IBM Director > IBM Director Console. Log on to IBM Director using the superuser ID and password defined at installation. Note that passwords are case sensitive. Login values are: IBM Director Server: the host name of the machine where IBM Director is installed User ID: the user name to log on with. This is the superuser ID. Enter it in the form <hostname>\<username>
Password: the case-sensitive superuser ID password. Figure 5-2 shows the IBM Director Login panel you will see after launching IBM Director.
Note: The Manage Performance and Manage Replication tasks that you see in Figure 5-3 on page 180 become visible when TotalStorage Productivity Center for Disk or TotalStorage Productivity Center for Replication is installed. Although this chapter covers the Productivity Center common base, you will have installed TotalStorage Productivity Center for Disk, TotalStorage Productivity Center for Replication, or both.
Figure 5-3 IBM Director Console with Productivity Center common base
See Chapter 7, TotalStorage Productivity Center for Fabric use on page 331 for more details on using and configuring TotalStorage Productivity Center for Fabric.
specify will have a performance impact on the CIMOMs and Productivity Center common base servers, so do not set these values too low.
2. Turn off automatic inventory on discovery. Important: Because of the time and CIMOM resources needed to perform an inventory of storage devices, it is undesirable and unnecessary to do this each time Productivity Center common base performs a device discovery. Turn off automatic inventory by selecting Options > Server Preferences as shown in Figure 5-6 on page 183.
Now uncheck the Collect On Discovery tick box as shown in Figure 5-7; all other options can remain unchanged. Select OK when done.
3. You can click Discover all Systems in the top left corner of the IBM Director Console to initiate an immediate discovery task (see Figure 5-8 on page 184).
4. You can also use the IBM Director Scheduler to create a scheduled job for new device discovery. Either click the scheduler icon in the IBM Director tool bar or use the menu: Tasks > Scheduler (see Figure 5-9 on page 185).
Establish parameters for the new job under the Date/Time tab: include the date and time to perform the job, and whether the job is to be repeated (see Figure 5-11 on page 186).
From the Task tab (see Figure 5-12), select Discover MDM storage devices/SAN Elements, then click Select.
Click File > Save as, or use the Save as icon. Provide a descriptive job name in the Save Job panel (see Figure 5-13 on page 187) and click OK.
To view or change the details of a CIMOM or perform a connection test, select the CIMOM as seen in Figure 5-14 and then click the Properties button on the right of the panel. Figure 5-15 on page 188 shows the properties for a DS4000 or FAStT CIMOM.
Important: Namespace must be set to \root\lsissi for DS4000 and FAStT CIMOMs. It should be discovered automatically but if your connection fails, please verify. Also DS4000 and FAStT CIMOMs do not need a User name or Password set. Entering them has no effect on the success of a Test Connection.
Figure 5-16 shows the CIMOM properties for a SAN Volume Controller. Important: Namespace must be set to \root\ibm for SAN Volume Controller CIMOMs. It should be discovered automatically but if you experience connection failures, please verify it has been set correctly. For more detailed information about configuring CIMOMs, refer to Chapter 4, CIMOM installation and configuration on page 119.
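The two namespace requirements above can be captured in a small lookup table for quick troubleshooting. A minimal Python sketch, where the device-type keys and helper name are illustrative and not part of the product:

```python
# Expected CIM namespaces per device type, as stated in the notes above.
# Keys are illustrative labels, not product identifiers.
EXPECTED_NAMESPACES = {
    "DS4000": r"\root\lsissi",
    "FAStT": r"\root\lsissi",
    "SAN Volume Controller": r"\root\ibm",
}

def check_namespace(device_type, configured):
    """Return True if the configured namespace matches the expected one."""
    expected = EXPECTED_NAMESPACES.get(device_type)
    if expected is None:
        raise ValueError(f"unknown device type: {device_type}")
    return configured == expected
```

When a Test Connection fails, a mismatch here is one of the first things worth ruling out.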
Tip: If you move or delete CIMOMs in your environment, the old CIMOM entries are not automatically updated, and entries with a Failure status will be shown, as in Figure 5-14. These invalid entries can slow down discovery performance because TotalStorage Productivity Center tries to contact them each time it performs a discovery. You cannot delete CIMOM entries directly from the Productivity Center common base interface. Delete them using the DB2 Control Center tool as described in 5.3.5, Manually removing old CIMOM entries on page 189.
Launch DB2 Control Center (Figure 5-18 on page 191). This is a general administration tool for managing DB2 databases and tables. Attention: DB2 Control Center is a database administration tool. It gives you direct and complete access to the data stored in all the TotalStorage Productivity Center databases. Altering data through this tool can damage the TotalStorage Productivity Center environment, so be careful not to alter data unnecessarily.
Navigate down the structure in the left-hand panel to open the DMCOSERV database, then click the Tables option. A list of tables for this database appears in the upper right-hand panel, as seen in Figure 5-19. Locate the DMCIMOM table as shown and double-click it to open a new window (Figure 5-20 on page 192) showing the data rows.
Identify the CIMOM rows to be deleted by their IP address, as shown in Figure 5-20. Click once on a row to select it, then click the Delete Row button to remove it from the table. When you have made your changes, you must click the Commit button for the changes to take effect. Then click Close to finish with this table. If you make a mistake before you have clicked Commit, you can click the Roll Back button to undo the changes. Now locate the BASEENTITY table in the Control Center panel as seen in Figure 5-19 on page 191 and open it with a double-click. This table contains many rows of data, so filter the data to show only entries that relate to CIMOMs: click the Filter button to open the filter panel as seen in Figure 5-22 on page 193.
Enter DMCIMOM in the values field as shown in Figure 5-22 and click OK. The table data is now filtered to show only CIMOM entries as seen in Figure 5-23.
Use a single click to select the entries, by IP address, that relate to the non-existent CIMOMs. Click Delete Row to remove them. Click Commit to make the changes effective, then Close. You can use Roll Back to undo any mistakes made before a Commit.
Now locate the DMREFERENCE table in the Control Center panel as seen in Figure 5-19 on page 191 and open it with a double-click. Note: The DMREFERENCE table may contain more than one entry for each non-existent CIMOM, or it may contain no rows for them at all. Delete all relevant rows for the non-existent CIMOM(s) if they exist. If there are no rows in this table for the CIMOMs you are deleting, they are not linked to any devices, and this is OK.
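The manual cleanup above touches the same three tables in the DMCOSERV database for each stale CIMOM entry. As a sketch only, the following Python helper generates illustrative DELETE statements for one entry; the IP_ADDRESS column name is a hypothetical placeholder, so verify the actual column names in DB2 Control Center before using anything like this:

```python
# Tables in the DMCOSERV database that may hold rows for a removed CIMOM,
# as described in the procedure above. The IP_ADDRESS column name is
# hypothetical; check the real column names in DB2 Control Center first.
CLEANUP_TABLES = ["DMCIMOM", "BASEENTITY", "DMREFERENCE"]

def cleanup_statements(ip_address):
    """Generate illustrative DELETE statements for one stale CIMOM entry."""
    return [
        f"DELETE FROM {table} WHERE IP_ADDRESS = '{ip_address}'"
        for table in CLEANUP_TABLES
    ]

for stmt in cleanup_statements("192.0.2.1"):
    print(stmt)
```

As in the GUI procedure, any such statements would need an explicit COMMIT to take effect, and a ROLLBACK undoes them beforehand.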
To start inventory collection, right-click the chosen device and select Perform Inventory Collection as shown in Figure 5-25. A new panel will appear (Figure 5-26) as a progress indication that the inventory process is running. At this stage Productivity Center common base is talking to the relevant CIMOM to collect volume information from the storage device. After a short while, the information panel will indicate that the collection has been successful. You can now close this window.
Attention: When the panel in Figure 5-26 indicates that the collection has completed successfully, it does not necessarily mean that the volume information has been fully processed by Productivity Center common base at this point. To track the detailed processing status, launch the Inventory Status task seen in Figure 5-27.
To see the processing status of an inventory collection, launch the Inventory Status task as seen in Figure 5-27.
The example Inventory Status panel seen in Figure 5-28 shows the progress of the processing for a SAN Volume Controller. Use the refresh button in the bottom left of the panel to update it with the latest progress. You can also launch the Inventory Status panel before starting an inventory collection to watch the process end to end. In our test lab, the inventory process for an SVC took around two minutes end to end.
Enter a more meaningful device name as in Figure 5-31 and click OK.
In either case, in the bottom left corner, the status will change from Ready to Starting Task and will remain this way until the volume inventory appears. Figure 5-33 shows the Volumes panel that will appear for the selected ESS device.
When you click OK, TotalStorage Productivity Center for Fabric will be called to assist with zoning this volume to the host. If TotalStorage Productivity Center for Fabric is not installed, you will see a message panel as in Figure 5-35 on page 201. When the volume has been successfully assigned to the selected host port, the Assign host ports panel will disappear and the ESS Volumes panel will be displayed again, now reflecting the updated count in the Number of host ports column on the far right side of the panel. Note: If TotalStorage Productivity Center for Fabric (formerly known as TSANM) is installed, refer to Chapter 7, TotalStorage Productivity Center for Fabric use on page 331 for complete details of its operation. Also note that TotalStorage Productivity Center for Fabric is only invoked for zoning when assigning hosts to ports. It is not invoked to remove zones when hosts are unassigned.
Use the drop-down fields to select the Storage type and choose from the Available arrays on the ESS. Then enter the Volume quantity and the Requested size for the new volumes. Finally, select the host ports you want to have access to the new volumes from the Defined host ports scrolling list. You can select multiple hosts by holding down the control key <Ctrl> while clicking hosts. On clicking OK, TotalStorage Productivity Center for Fabric will be called to assist with zoning the new volumes to the host(s). If TotalStorage Productivity Center for Fabric (formerly known as TSANM) is not installed, you will see a message panel as seen in Figure 5-37 on page 202. If TotalStorage Productivity Center for Fabric is installed, refer to Chapter 7, TotalStorage Productivity Center for Fabric use on page 331 for complete details of its operation.
Enter a meaningful name for the device and click OK as in Figure 5-40.
Tip: Before managed disk (mdisk) properties can be displayed for a SAN Volume Controller, an initial inventory must be completed, as with other storage devices managed by Productivity Center common base. If you try to use the Managed Disk function on a SAN Volume Controller that has not been inventoried, you will receive a notification that this needs to be done. Refer to 5.4, Performing volume inventory on page 194 for details.
Figure 5-41 The mdisk properties panel for SAN Volume Controller
Figure 5-41 shows candidate, or unmanaged, mdisks that are available for inclusion in an existing mdisk group. To add one or more unmanaged disks to an existing mdisk group:
1. Select the mdisk group from the pull-down.
2. Select one mdisk from the list of candidate mdisks, or use the <Ctrl> key to select multiple disks.
3. Click the OK button at the bottom of the window; the selected mdisk(s) will be added to the mdisk group.
Figure 5-42 Create volumes to be added as Mdisks
Productivity Center common base will now request the specified amount of storage from the specified backend storage device.
Viewing vdisks
Figure 5-44 on page 208 shows the Vdisk inventory and volume attributes for the selected SAN Volume Controller.
Creating a vdisk
To create a new Vdisk, use the Create button as shown in Figure 5-44. You need to provide a suitable Vdisk name and select the Mdisk group from which you want to create the Vdisk. Specify the number of Vdisks to be created and the size, in megabytes or gigabytes, that each Vdisk should be. Figure 5-45 on page 209 shows example input in these fields.
The Host ports section of the Vdisk properties panel allows you to use TotalStorage Productivity Center for Fabric (formerly TSANM) functionality to perform zoning actions that provide Vdisk access to specific host WWPNs. If TSANM is not installed, you will receive the warning shown in Figure 5-46. If TotalStorage Productivity Center for Fabric is installed, refer to Chapter 7, TotalStorage Productivity Center for Fabric use on page 331 for details on how to configure and use it.
Tasks access: In the right-hand task panel there are a number of available tasks under the Manage Storage Units section. These management functions can be invoked by dragging them onto the chosen device. However, not all functions are appropriate to all supported devices. Right-click access: To access all functions available for the selected device, right-click it to see a drop-down menu of options (Figure 5-47). Figure 5-47 shows the functions of TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication; although this chapter covers only the Productivity Center common base functions, these additional functions appear when TotalStorage Productivity Center for Disk or TotalStorage Productivity Center for Replication is installed.
Figure 5-48 Entering a user-defined display name for a DS4000 or FAStT
Enter a meaningful name for the device and click OK as in Figure 5-48 on page 210.
Figure 5-50 shows the volume inventory for the selected device. From this panel you can Create and Delete volumes or assign and unassign volumes to hosts.
Select the desired Storage Type and array from Available arrays using the drop-downs. Then enter the Volume quantity and Requested volume size of the new volumes. Finally, select the host ports you want to assign to the new volumes from the Defined host ports scroll box, holding the <Ctrl> key to select multiple ports. The Defined host ports section of the panel allows you to use TotalStorage Productivity Center for Fabric (formerly TSANM) functionality to perform zoning actions that provide volume access to specific host WWPNs. If TSANM is not installed, you will receive the warning shown in Figure 5-52 on page 213. If TotalStorage Productivity Center for Fabric is installed, refer to Chapter 7, TotalStorage Productivity Center for Fabric use on page 331 for details on how to configure and use it.
The Defined host ports section of the panel allows you to use TotalStorage Productivity Center for Fabric (formerly TSANM) functionality to perform zoning actions that provide volume access to specific host WWPNs. If TSANM is not installed, you will receive the warning shown in Figure 5-54 on page 214. If TotalStorage Productivity Center for Fabric is installed, refer to Chapter 7, TotalStorage Productivity Center for Fabric use on page 331 for details on how to configure and use it.
TotalStorage Productivity Center for Fabric is not called to perform zoning clean up in Version 2.1. This functionality is planned in a future release.
Here are the tasks to create an Event Action Plan.
1. To begin, do one of the following:
- Right-click Event Actions Plan in the Event Action Plans pane to access the context menu, then select New.
- Select File → New → Event Action Plan from the menu bar.
- Double-click the Event Action Plan folder in the Event Action Plans pane (see Figure 5-58).
2. Enter the name you want to assign to the plan and click OK to save the new plan. The new plan entry with the name you assigned is displayed in the Event Action Plans pane. The plan is also added to the Event Action Plans task as a child entry in the Director Console (see Figure 5-59 on page 217). Now that you have defined an event action plan, you can assign one or more filters and actions to the plan.
Note: You can create a plan without having defined any filters or actions. The order in which you build a filter, action, and Event Action Plan does not matter.
3. Assign at least one filter to the Event Action Plan using one of the following methods:
- Drag the event filter from the Event Filters pane to the Event Action Plan in the Event Action Plans pane.
- Highlight the Event Action Plan, then right-click the event filter to display the context menu and select Add to Event Action Plan.
- Highlight the event filter, then right-click the Event Action Plan to display the context menu and select Add Event Filter (see Figure 5-60 on page 218).
The filter is now displayed as a child entry under the plan (see Figure 5-61).
4. Assign at least one action to at least one filter in the Event Action Plan using one of the following methods:
- Drag the action from the Actions pane to the target event filter under the desired Event Action Plan in the Event Action Plans pane.
- Highlight the target filter, then right-click the desired action to display the context menu and select Add to Event Action Plan.
- Highlight the desired action, then right-click the target filter to display the context menu and select Add Action.
The action is now displayed as a child entry under the filter (see Figure 5-62 on page 219).
5. Repeat the previous two steps for as many filter and action pairings as you want to add to the plan. You can assign multiple actions to a single filter and multiple filters to a single plan. Note: The plan you have just created is not active because it has not been applied to a managed system or a group. In the next section we explain how to apply an Event Action Plan to a managed system or group. For information about editing or deleting a plan, refer to Appendix C, Event management on page 511.
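The plan, filter, and action relationships built in the steps above form a simple hierarchy: a plan holds one or more filters, and each filter holds one or more actions. This can be sketched as a data structure; the class and instance names below are illustrative, not IBM Director APIs:

```python
from dataclasses import dataclass, field

@dataclass
class EventFilter:
    """One filter in a plan; holds the names of its assigned actions."""
    name: str
    actions: list = field(default_factory=list)

@dataclass
class EventActionPlan:
    """An Event Action Plan holding one or more filters."""
    name: str
    filters: list = field(default_factory=list)

    def add_filter(self, event_filter):
        self.filters.append(event_filter)

# Build the hierarchy: plan -> filter -> actions.
plan = EventActionPlan("Storage alerts")
f = EventFilter("Critical storage events")
f.actions.append("Send e-mail to operator")  # at least one action per filter
f.actions.append("Log to event history")     # multiple actions are allowed
plan.add_filter(f)
```

As the steps note, a plan built this way is inert until it is applied to a managed system or group.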
Repeat this step for all associations you want to make. You can activate the same Event Action Plan for multiple systems (see Figure 5-64).
Once applied, the plan is activated and displayed as a child entry of the managed system or group to which it is applied when the Associations - Event Action Plans item is checked.
Message Browser
When an event occurs, the Message Browser (see Figure 5-65 on page 221) pops up on the server console.
If the message has not yet been viewed, the Status for that message will be blank. When viewed, a checked envelope icon will appear under the Status column next to the message. To see greater detail on a particular message, select the message in the left pane and click the Event Details button (see Figure 5-66).
Export
Event Action Plans can be exported to three types of files:
- Archive: Backs up the selected action plan to a file that can be imported into any IBM Director Server.
Chapter 5. TotalStorage Productivity Center common base use
- HTML: Creates a detailed listing of the selected action plan, including its filters and actions, in HTML file format.
- XML: Creates a detailed listing of the selected action plan, including its filters and actions, in XML file format.
To export an Event Action Plan, do the following:
1. Open the Event Action Plan Builder.
2. Select an Event Action Plan from those available under the Event Action Plan folder.
3. Select File → Export, then click the type of file you want to export to (see Figure 5-67). If this Event Action Plan will be imported by an IBM Director Server, select Archive.
4. Name the archive and set a location to save it in the Select Archive File for Export window, as shown in Figure 5-68 on page 223.
Tip: When you export an action plan, regardless of the type, the file is created on a local drive on the IBM Director Server. If an IBM Director Console is used to access the IBM Director Server, then the file could be saved to either the Server or the Console by selecting Server or Local from the Destinations pull-down. It cannot be saved to a network drive. Use the File Transfer task if you want to copy the file elsewhere.
Import
Event Action Plans can be imported from a file. The file must be an Archive export of an action plan from another IBM Director Server. The steps to import an Event Action Plan are as follows:
1. Transfer the archive file to be imported to a drive on the IBM Director Server.
2. Open the Event Action Plan Builder from the main Console window.
3. Click File → Import → Archive (see Figure 5-69 on page 224).
4. From the Select File for Import window (see Figure 5-70), select the archive file and location. The file must be located on the IBM Director Server. If using the Console, you must transfer the file to the IBM Director Server before it can be imported.
5. Click OK to begin the import process. The Import Action Plan window opens, displaying the action plan to import (see Figure 5-71 on page 225). If the action plan had been assigned previously to systems or groups, you will be given the option to preserve associations during the import. Select Import to complete the import process.
Chapter 6.
You can use the Performance Manager to set performance thresholds for each device type. Setting thresholds for certain criteria enables Performance Manager to notify you when a threshold has been crossed, so that you can take action before a critical event occurs.

Viewing performance data
You can view performance data from the Performance Manager database using the gauge application programming interfaces (APIs). These gauges present performance data in graphical and tabular forms.

Using Volume Performance Advisor (VPA)
The Volume Performance Advisor is an automated tool that helps you select the best possible placement of a new LUN from a performance perspective. This function is integrated with Device Manager so that, when the VPA has recommended locations for requested LUNs, the LUNs can be allocated and assigned to the appropriate host without going back to Device Manager.

Managing Workload Profile
You can use Performance Manager to select a predefined workload profile or to create a new workload profile based on historical performance data or on an existing workload profile. Performance Manager uses these profiles to create a performance recommendation for volume allocation on an IBM storage server.

The installation of the Performance Manager component onto an existing TotalStorage Productivity Center for Disk server provides a new Manage Performance task tree (Figure 6-2) on the right-hand side of the TotalStorage Productivity Center for Disk host. This task tree includes:
2. Or, right-click a storage device in the center column, and select the Performance Data Collection Panel menu option as shown in Figure 6-3.
Either operation results in a new window named Create Performance Data Collection Task (Figure 6-4). In this window you specify:
- A task name
- A brief description of the task
- The sample frequency in minutes
- The duration of the data collection task (in hours)
In our example, we set up a data collection task on an ESS with Device ID 2105.16603, creating a task named Cottle_ESS with a sample frequency of 5 minutes and a duration of 1 hour. It is possible to add more ESSs to the same data collection task by clicking the Add button on the right-hand side. You can click individual devices, or select multiple devices using the Ctrl key. See Figure 6-5 for an example of this panel. In our example, we added the ESS with device ID 2105.22513.
Once we have established the scope of our data collection task and have clicked the OK button, we see the new data collection task in the right-hand task column (see Figure 6-6 on page 232). We created the task Cottle_ESS in this example. Tip: When providing a description for a new data collection task, you may elect to include information about the duration and frequency of the task.
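As a sanity check on the example task above, the sample frequency and duration determine how many samples each device produces: 5-minute samples over 1 hour yield 12 samples. A small illustrative helper (the function name is ours, not part of the product):

```python
def samples_per_task(duration_hours, frequency_minutes):
    """Number of performance samples one device produces for a collection task."""
    return (duration_hours * 60) // frequency_minutes

# The Cottle_ESS example above: 5-minute samples for 1 hour.
print(samples_per_task(1, 5))  # 12
```

Keeping this arithmetic in mind helps when sizing the Performance Manager database, since every sampled device adds rows at this rate.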
In order to schedule it, right-click the selected task (see Figure 6-7 on page 233).
You have the option to use the job scheduling facility of TotalStorage Productivity Center for Disk, or to execute the task immediately. If you select Execute Now, you will see a panel similar to the one in Figure 6-9 on page 234, providing information about the task name and task status, including the time the task was initialized.
If you would rather schedule the task to occur at a future time, or specify additional parameters for the job schedule, use the panel in Figure 6-10. You may provide a description for the scheduled job. In our example, we created a job named 24March Cottle ESS.
Once you have finished customizing the job options, save the job using either the File → Save as menu or the diskette icon in the top left corner of the advanced panel. When you save with advanced job options, you may provide a descriptive name for the job, as shown in Figure 6-12 on page 236.
You should receive a confirmation that your job has been saved as shown in Figure 6-13.
Double-clicking Task Status launches the panel shown in Figure 6-15 on page 238.
To review the task status, click the task shown under the Task name column. For example, we selected the task FCA18P, which was aborted, as shown in Figure 6-16 on page 239. The panel then shows the details, with the Device ID, Device status, and Error Message ID in the Device status box. Clicking an entry in the Device status box displays the error message in the Error message box.
You can use the Performance database panel to specify properties for a performance database purge task. The sizing function on this panel shows used space and free space in the database. You can choose to purge performance data based on the age of the data, the type of the data, and the storage devices associated with the data.
The Performance database properties panel shows the following:
- Database name: The name of the database.
- Database location: The file system on which the database resides.
- Total file system capacity: The total capacity available to the file system, in gigabytes.
- Space currently used on file system: Shown in gigabytes and also as a percentage.
- Performance Manager database full: The amount of space used by Performance Manager. The percentage shown is the percentage of available space (total space - currently used space) used by the Performance Manager database. The following values are used to derive the percentage of disk space full in the Performance Manager database:
  a = total capacity of the file system
  b = total allocated space for the Performance Manager database on the file system
  c = the portion of the allocated space that is used by the Performance Manager database
For any decimal amount over a particular number, the percentage is rounded up to the next largest integer. For example, 5.1% is rounded to and displayed as 6%.

Space status advisor
The Space status advisor monitors the amount of space used by the Performance Manager database and advises you as to whether you should purge data. The advisor levels are:
- Low: You do not need to purge data now.
- High: You should purge data soon.
- Critical: You need to purge data now.
The disk space thresholds for these categories are: low if utilization < 0.8, high if 0.8 <= utilization < 0.9, and critical otherwise. That is, the boundaries between low, high, and critical are 80% and 90% full.

Purge database options
Groups the database purge information.
- Name: Type a name for the performance database purge task. The name can be from 1 to 250 characters long.
- Description (optional): Type a description for the performance database purge task. The description can be from 1 to 250 characters long.
- Device type: Select one or more storage device types for the performance database purge. Options are SVC, ESS, or All. (Default is All.)
- Purge performance data older than: Select the maximum age for data to be retained when the purge task is run. You can specify this value in days (1-365) or years (1-10). For example, if you select the Days button and a value of 10, the purge task will purge all data older than 10 days when it is run. Therefore, if it has been more than 10 days since the task was run, all performance data would be purged. Defaults are 365 days or 10 years.
- Purge data containing threshold exception information: Deselecting this option preserves performance data that contains information about threshold exceptions. This information is required to display exception gauges. This option is selected by default.
- Save as task button: When you click Save as task, the information you specified is saved and the panel closes.
The newly created task is saved to the IBM Director Task pane under the Performance Manager Database. Once it is saved, the task can be scheduled using the IBM Director scheduler function.
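The advisor thresholds and the round-up display rule described above can be expressed directly in code. A minimal Python sketch (function names are ours, for illustration):

```python
import math

def percent_full_display(fraction):
    """Display percentage, rounded up to the next integer as described above
    (for example, 5.1% is displayed as 6%)."""
    return math.ceil(fraction * 100)

def advisor_level(utilization):
    """Space status advisor levels: Low below 80%, High from 80% to
    below 90%, Critical at 90% and above."""
    if utilization < 0.8:
        return "Low"
    if utilization < 0.9:
        return "High"
    return "Critical"
```

For example, a database at 85% utilization reports High, signaling that a purge task should be scheduled soon.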
Creating a gauge
Open the IBM Director and do one of the following tasks: Right-click the storage device in the center pane and select Gauges (see Figure 6-19).
You can click Gauges on the panel shown, and the Job Status window shown in Figure 6-21 on page 244 will be displayed. It is also possible to launch gauge creation by expanding Multiple Device Manager → Manage Performance in the rightmost column. Drag the Gauges item onto the desired storage device and drop it to open the gauges for that device (see Figure 6-20 on page 244).
This will produce the Job status window (see Figure 6-21) while the Performance gauges window opens. You will see the Job status window while other selected windows are opening.
The Performance gauges window will be empty until a gauge is created for use. We have created three gauges (see Figure 6-22).
Clicking on the Create button to the left brings up the Job status window while the Create performance gauge window opens.
The Create performance gauge window changes values depending on whether the cluster, array, or volume items are selected in the left pane. Clicking on the cluster item in the left pane produces a window as seen in Figure 6-23.
Performance
Cluster Performance gauges provide details on the average cache holding time in seconds, as well as the percentage of I/O requests that were delayed due to NVS memory shortages. Two Cluster Performance gauges are required per ESS to view the available historical data for each cluster. Additional gauges can be created to view live performance data.
- Device: Select the storage device and time period from which to build the performance gauge. The time period can be changed for this device within the gauge window, allowing an overall or detailed view of the data.
- Name: Enter a name that is descriptive of both the type of gauge and the detail provided by the gauge. The name must not contain white space or special characters, and must not exceed 100 characters in length. Also, the name must be unique on the TotalStorage Productivity Center for Disk Performance Manager server. If test were used as a gauge name, it could not be used for another gauge - even if another storage device were selected - as it would not be unique in the database. Example names: 28019P_C1H would represent the ESS serial number (28019), the performance gauge type (P), the cluster (C1), and historical (H), while 28019E would represent the exception (E) gauge for the same ESS. Gauges for the clusters and arrays would build on that nomenclature to group the gauges by ESS on the Gauges window.
- Description: Use this space to enter a detailed description of the gauge that will appear on the gauge and in the Gauges window.
- Metric(s): Click the metric(s) that will be displayed by default when the gauge is opened for viewing. Metrics with the same value under the Units column in the Metrics table can be selected together using either Shift-click or Ctrl-click. The metrics in this field can be changed on a historical gauge after the gauge has been opened for viewing; in other words, a separate historical gauge for each metric or group of metrics is not necessary. However, these metrics cannot be changed for live gauges: a new gauge is required for each metric or group of metrics desired.
- Component: Select a single device from the Component table. This field cannot be changed when the gauge is opened for viewing.
- Data points: Selecting this radio button enables the gauge to display the most recent data obtained from performance collectors currently running against the storage device. One most-recent-data gauge is required per cluster and per metric to view live collection data. The Device pull-down displays text informing you whether a performance collection task is running against this device. You can select the number of data points as required to display the last x data points from the date of the last collection; the data collection can be the one currently running or the most recent one.
- Date Range: Selecting this radio button presents data over a range of dates/times. Enter the range of dates this gauge will use as a default. The date and time values may be adjusted within the gauge to any value before or after the defaults, and the gauge will display any relevant data for the updated time period.
Display gauge: Checking this box will display the newly created gauge after you click the OK button. Otherwise, if left unchecked, the gauge will be saved without being displayed. Click the OK button when ready to save the performance gauge (see Figure 6-24 on page 247). In the example shown in Figure 6-24 on page 247, we created a gauge named 22513C1H with the description average cache holding time. We selected 11 March 2005 as both the starting and ending date. This corresponds with our data collection task schedule.
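The gauge-name rules above (no white space, no special characters, at most 100 characters, unique on the server) can be checked with a short validator. A Python sketch for illustration only; note that we treat the underscore as allowed, since the book's own example name 28019P_C1H contains one:

```python
def valid_gauge_name(name, existing_names=()):
    """Check a gauge name against the rules described above.

    Assumes underscores are permitted (the example name 28019P_C1H
    uses one); all other non-alphanumeric characters are rejected.
    """
    if not name or len(name) > 100:
        return False
    if not all(c.isalnum() or c == "_" for c in name):
        return False  # white space or special character
    return name not in existing_names  # must be unique on the server
```

For example, reusing an existing name such as 22513C1H for a second gauge would fail the uniqueness check, even for a different storage device.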
The gauge appears after you click the OK button with the Display gauge box checked, or when you click the Display button after selecting the appropriate gauge in the Performance gauges window (see Figure 6-26 on page 248). If you decide to save the gauge without displaying it, you will see the panel shown in Figure 6-25.
The top of the gauge contains the following labels:
- Graph Name: The name of the gauge
- Description: The description of the gauge
- Device: The storage device selected for the gauge
- Component level: Cluster, Array, or Volume
- Component ID: The ID # of the component (Cluster, Array, or Volume)
- Threshold: The thresholds that were applied to the metrics
- Time of last data collection: The date and time of the last data collection
The center of the gauge contains the only fields that may be altered, in the Display Properties section. The Metrics may be selected either individually or in groups, as long as the data types are the same (for example, seconds with seconds, milliseconds with milliseconds, or percent with percent). Click the Apply button to force a Performance Gauge section update with the new y-axis data. The Start Date, End Date, Start Time, and End Time fields may be varied to either expand the scope of the gauge or narrow it for a more granular view of the data. Click the Apply button to force a Performance Gauge section update with the new x-axis data. For example, we applied the Total I/O Rate metric to the saved gauge; the resulting graph is shown in Figure 6-27 on page 249. The Performance Gauge section of the gauge graphically displays the information over the time selected by the gauge and the options in the Display Properties section (see Figure 6-27 on page 249).
Figure 6-27 Cluster performance gauge with applied I/O rate metric
Click the Refresh button in the Performance Gauge section to update the graph with the original metrics and date/time criteria. The date and time of the last refresh appear to the right of the Refresh button. The date and time displayed update first, followed by the contents of the graph, which can update up to several minutes later. Finally, the data used to generate the graph is displayed at the bottom of the window (see Figure 6-28 on page 250). Each of the columns in the data section can be sorted up or down by clicking the column heading (see Figure 6-32 on page 253). The sort reads the data from left to right, so the results may not be as expected. The gauges for the array and volume components function in the same manner as the cluster gauge created above.
Exception
Exception gauges display data only for those active thresholds that were crossed during the reporting period. One Exception gauge displays threshold exceptions for the entire storage device based on the thresholds active at the time of collection. To create an exception gauge, select Exception from the Type pull-down menu (see Figure 6-29 on page 251).
By default, Cluster is highlighted in the left pane, and the metrics and component sections are not available.

Device: Select the storage device and time period from which to build the performance gauge. The time period can be changed for this device within the gauge window, allowing an overall or detailed view of the data.
Name: Enter a name that describes both the type of gauge and the detail it provides. The name must not contain white space or special characters, must not exceed 100 characters in length, and must be unique on the TotalStorage Productivity Center for Disk Performance Manager server.
Description: Use this space to enter a detailed description of the gauge that will appear on the gauge and in the Gauges window.
Date Range: Selecting this radio button presents data over a range of dates and times. Enter the range of dates this gauge will use as a default. The date and time values may be adjusted within the gauge to any value before or after the defaults, and the gauge will display any relevant data for the updated time period.
Display gauge: Checking this box displays the newly created gauge after you click the OK button. If left clear, the gauge is saved without being displayed.

Click the OK button when ready to save the performance gauge. We created an exception gauge as shown in Figure 6-30 on page 252.
The top of the gauge contains the following labels:

Name - The name of the gauge
Description - The description of the gauge
Device - The storage device selected for the gauge
Threshold - The thresholds that were applied to the metrics
Time of last data collection - Date and time of the last data collection

The center of the gauge, the Display Properties section, contains the only fields that may be altered. The Start Date: and End Date: fields may be varied to either expand the scope of the gauge or narrow it for a more granular view of the data. Click the Apply button to force an Exceptions Gauge section update with the new x-axis data. The Exceptions Gauge section displays graphically the information over time selected by the gauge and the options in the Display Properties section (see Figure 6-31 on page 253).
Click the Refresh button in the Exceptions Gauge section to update the graph with the original date criteria. The date and time of the last refresh appear to the right of the Refresh button. The displayed date and time update first, followed by the contents of the graph, which can take up to several minutes. Finally, the data used to generate the graph are displayed at the bottom of the window. Each of the columns in the data section can be sorted ascending or descending by clicking the column heading (see Figure 6-32).
Display Gauges
To display previously created gauges, either right-click the storage device and select gauges (see Figure 6-19 on page 243) or drag and drop the Gauges item on the storage device (see Figure 6-20 on page 244) to open the Performance gauges window (see Figure 6-33).
Gauge Properties
The Properties button allows the following fields and choices to be modified:
Performance
Description
Metrics
Component
Data points
Date range - date and time ranges

You can change the data displayed in the gauge from Data points with an active data collection to Date range (see Figure 6-34 on page 255). Selecting Date range allows you to choose the Start date and End Date using the performance data stored in the DB2 database.
Exception
You can change the Type property of the gauge definition from Performance to Exception. For a gauge type of Exception, you can only choose to view data for a Date range (see Figure 6-35 on page 256).
Delete a gauge
To delete a previously created gauge, either right-click the storage device and select gauges (see Figure 6-19 on page 243) or drag and drop the Gauges item on the storage device (see Figure 6-20 on page 244) to open the Performance gauges window (see Figure 6-33 on page 254). Select the gauge to remove and click Delete. A pop-up window will prompt for confirmation to remove the gauge (see Figure 6-36).
To confirm, click Yes and the gauge will be removed. The gauge name may now be reused, if desired.
Upon opening the thresholds submenu, you will see the following display, which shows the default thresholds in place for ESS as shown in Figure 6-38 on page 258.
On the right-hand side, there are buttons for Enable, Disable, Copy Threshold Properties, Filters, and Properties. If the selected task is already enabled, the Enable button appears greyed out, as in our case. If we attempt to disable a threshold that is currently enabled by clicking the Disable button, the message shown in Figure 6-39 is displayed.
You may elect to continue and disable the selected threshold, or to cancel the operation by clicking Don't disable threshold. The Copy Threshold Properties button allows you to copy existing thresholds to other devices of similar type (ESS, in our case). The window in Figure 6-40 on page 259 is displayed.
Note: As shown in Figure 6-40, the copy threshold panel is aware that we have registered both clusters of our model 800 ESS on our ESS CIM agent host, as indicated by the semicolon-delimited IP address field for device ID 2105.22219.

The Filters window is another available thresholds option. From this panel, you can enable, disable, and modify existing filter values against selected thresholds, as shown in Figure 6-41.
Finally, you can open the properties panel for a selected threshold, shown in Figure 6-42 on page 260. You have options to acknowledge the values at their current settings, modify the warning or error levels, or select the alert level (none, warning only, and warning or error are the available options).
As long as at least one data collection task has been completed, you are able to proceed with the steps to create a gauge to view your performance data.
SVC has the following thresholds with their default properties:

VDisk I/O rate - Total number of virtual disk I/Os for each I/O group. SAN Volume Controller defaults: Status: Disabled, Warning: None, Error: None.
VDisk bytes per second - Virtual disk bytes per second for each I/O group. SAN Volume Controller defaults: Status: Disabled, Warning: None, Error: None.
MDisk I/O rate - Total number of managed disk I/Os for each managed disk group. SAN Volume Controller defaults: Status: Disabled, Warning: None, Error: None.
MDisk bytes per second - Managed disk bytes per second for each managed disk group. SAN Volume Controller defaults: Status: Disabled, Warning: None, Error: None.

You may only enable a particular threshold once minimum values for warning and error levels have been defined. If you attempt to select a threshold and enable it without first modifying these values, you will see a notification like the one in Figure 6-45 on page 262.
Tip: In TotalStorage Productivity Center for Disk, default threshold warning or error values of -1.0 indicate that there is no recommended minimum value for the threshold; the values are therefore entirely user defined. You may provide any reasonable value for these thresholds, keeping in mind the workload in your environment.

To modify the warning and error values for a given threshold, select the threshold and click the Properties button. The panel in Figure 6-46 will be shown. You can modify the threshold as appropriate and accept the new values by clicking the OK button.
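The tip above can be expressed as a small guard: a threshold whose warning or error level is still the -1.0 sentinel is not yet ready to be enabled. This is a conceptual sketch only; the function and field names are ours, not the product's API:

```python
# Sketch: -1.0 means "no recommended minimum value defined" for a
# threshold, per the tip above. Names here are illustrative, not TPC's.
UNDEFINED = -1.0

def can_enable(warning_level, error_level):
    """A threshold may be enabled only once both levels are user defined."""
    return warning_level != UNDEFINED and error_level != UNDEFINED

print(can_enable(UNDEFINED, UNDEFINED))  # defaults: must be set first
print(can_enable(500.0, 800.0))          # user-defined levels: OK
```

This mirrors the notification behavior: attempting to enable a threshold whose levels were never modified is rejected.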
Click the Create button to create a new gauge. You will see a panel similar to Figure 6-48.
We selected Cluster in the top left corner, the Total I/O Rate metric in the metrics box, and Cluster 1 in the component box. We also entered the following parameters:

Name: 22219P_drilldown_analysis
Description: Drilldown analysis for 22219 ESS

For the Date range, we selected our historical data collection sampling period and checked Display gauge. Upon clicking the OK button, the panel shown in Figure 6-49 appeared.
Figure 6-50 Zooming in on a specific time period for the Total I/O Rate metric
The subsequent panel is shown in Figure 6-52 on page 267. We selected the array-level metric Avg. Response Time for Cluster 1, Device Adapter 1, Loop A, disk group 2, as circled in the figure.
In the chart, Writes and Total I/O overlap, and Reads are shown as zero.

Tip: If you select multiple metrics that do not have the same y-axis units, the error shown in Figure 6-55 on page 269 is displayed.
are also executables that support viewing and management of task filters, alert thresholds, and gauges. There is detailed help available at the command line, with information about syntax and specific examples of usage.
startesscollection/startsvccollection: These commands are used to build and run data collection against the ESS or SAN Volume Controller, respectively.
lscollection: This command is used to list the running, aborted, or finished data collection tasks on the Performance Manager server.
stopcollection: This command may be used to stop data collection against a specified task name.
lsgauge: You can use the lsgauge command to display a list of existing gauge names, types, device types, device IDs, modified dates, and description information.
rmgauge: Use this command to remove existing gauges.
showgauge: This command is used to display performance data output using an existing defined gauge.
setessthresh/setsvcthresh: These two commands are used to set ESS and SAN Volume Controller performance thresholds, respectively.
cpthresh: You can use the cpthresh command to copy threshold properties from one selected device to one or more other devices.
setfilter: You can use setfilter to set or change the existing threshold filters.
lsfilter: This command may be used to display the threshold filter settings for all devices specified.
setoutput: This command may be used to view or modify the existing data collection output formats, including settings for paging, row printing, format (default, XML, or character delimited), header printing, and output verbosity.
lsdev: This command can be used to list the storage devices that are used by TotalStorage Productivity Center for Disk.
lslun: This command can be used to list the LUNs or Performance Manager volumes associated with storage devices.
lsthreshold: This command can be used to list the threshold status associated with storage devices.
lsgauge: This command can be used to list the existing gauge names, gauge type, device name, device ID, date modified, and optionally device information.
showgauge: Use this command to display performance output by triggering an existing gauge.
showcapacity: This command displays managed capacity, the sum of managed capacity by device type, and the total of all ESS and SAN Volume Controller managed storage.
showdbinfo: This command displays the percent full, used space, and free space of the Performance Manager database.
lsprofile: Use this command to display Volume Performance Advisor profiles.
cpprofile: Use this command to copy Volume Performance Advisor profiles.
mkprofile: Use this command to create a Workload Profile that you can use later with the mkrecom command to create a performance recommendation for ESS volume allocation.
mkrecom: Use this command to generate and, optionally, apply a performance LUN advisor recommendation for ESS volumes.
lsdbpurge: This command can be used to display the status of database purge tasks running in TotalStorage Productivity Center for Disk.
tracklun: This command can be used to obtain historical performance statistics used to create a profile.
startdbpurge: Use this command to start a database purge task.
showdev: Use this command to display device properties.
setoutput: This command sets the output format for the administrative command-line interface.
cpthresh: This command can be used to copy threshold properties from one device to other devices that are of the same type.
rmprofile: Use this command to remove performance LUN advisor profiles.
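Because setoutput can switch the CLI to character-delimited output, list commands such as lsgauge can be post-processed by a script. The delimiter and column names below are assumptions for illustration only; check setoutput and the command-line help for the actual format:

```python
# Sketch: parsing hypothetical character-delimited lsgauge output into
# dictionaries. The comma delimiter and the column names are assumptions,
# not the documented output format of the CLI.
import csv
import io

SAMPLE = """Name,Type,DeviceType,DeviceID
22219P_drilldown_analysis,Performance,ESS,2105.22219
exception_22219,Exception,ESS,2105.22219
"""

def parse_gauges(text):
    """Read delimited rows (first line is the header) into dicts."""
    return list(csv.DictReader(io.StringIO(text)))

gauges = parse_gauges(SAMPLE)
print(gauges[0]["Name"])  # first gauge name
```

A parser like this is one way to feed gauge inventories into reporting or cleanup scripts built around rmgauge.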
Figure 6-57 Sample perfcli command from Windows command line interface
Figure 6-58 on page 272 and Figure 6-59 on page 272 show sample perfcli commands within the perfcli tool.
expert storage analyst given the time and sufficient information. The goal is to give very good advice by allowing VPA to consider the same factors that an administrator would in deciding where to best allocate storage.

Note: At this time, the VPA tool is available for the IBM ESS only.
Capturing existing workloads by observing storage access patterns in the environment. The VPA allows the user to point to a grouping of volumes and a particular window of time, and create a workload profile based on the observed behavior of those volumes.
Creation of hypothetical workloads that are similar to existing profiles, but differ in some specific metrics. The VPA has tools to manage a library of predefined and custom workloads, to create new workload profiles, and to modify profiles for specific purposes.
access density in the range of 0.1 to 3.0. If it is significantly outside this range, then this might not be an appropriate sample.

The VPA will tend to utilize resources that can best accommodate a particular type of workload. For example, high write content will make RAID 5 arrays busier than RAID 10, and VPA will therefore bias to RAID 10. Faster devices will be less busy, so VPA biases allocations to the faster devices. VPA also analyzes the historical data to determine how busy the internal ESS components (arrays, disk adapters, clusters) are due to other workloads. In this way, VPA tries to avoid allocating on already busy ESS components.

If VPA has a choice among several places to allocate volumes, and they appear to be about equal, it is designed to apply a randomizing factor. This keeps the advisor from always giving the same advice, which might cause certain resources to be overloaded if everyone followed that advice. This also means that several usages of VPA by the same user may not necessarily get the same advice, even if the workload profiles are identical.

Note: VPA tries to allocate the fewest possible volumes, as long as it can allocate on low-utilization components. If the components look too busy, it will allocate more (smaller) volumes as a way of spreading the workload. It will not recommend more volumes than the maximum specified by the user.

VPA may, however, be required to recommend allocation on very busy components. A utilization indicator in the user panels will indicate whether allocations would cause components to become heavily utilized. The I/O demand specified in the workload profile for the new storage being allocated is not a Service Level Agreement (SLA). In other words, there is no guarantee that the new storage, once allocated, will perform at or above the specified access density. The VPA will make recommendations unless the available space on the target devices is exhausted.
An invocation of VPA can be used for multiple recommendations. To handle a situation where multiple sets of volumes are to be allocated with different workload profiles, it is important that the same VPA wizard be used for all sets of recommendations. Select Make additional recommendations on the View Recommendations page, as opposed to starting a completely new sequence for each separate set of volumes to be allocated. VPA is designed to remember each additional (hypothetical) workload when making additional recommendations.

There are, of course, limitations to the use of an expert advisor such as VPA. There may well be other constraints (like source and target FlashCopy requirements) which must be considered. Sometimes these constraints can be accommodated with careful use of the tool, and sometimes they are so severe that the tool must be used very carefully. That is why VPA is designed as an advisor.

In summary, the Volume Performance Advisor (VPA) provides a tool to help automate the complex decisions involved in data placement and provisioning. In short, it represents a future direction of storage management software: computers should monitor their resources and make autonomic adjustments based on that information. The VPA is an expert advisor that provides a step in that direction.
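The randomizing factor described above can be sketched in a few lines: when several candidate components score about equally, pick one at random so that repeated advice does not pile every allocation onto the same resource. The component names, utilization figures, and tolerance value are invented for illustration; this is not VPA's actual algorithm:

```python
# Sketch of the tie-breaking behavior described in the text: prefer the
# least busy component, but randomize among near-equal candidates.
# Names, utilizations, and the 0.05 tolerance are invented examples.
import random

def recommend(candidates, tolerance=0.05):
    """candidates: {component_name: utilization in 0..1}. Return one of
    the components within `tolerance` of the least busy."""
    best = min(candidates.values())
    near_best = [c for c, u in candidates.items() if u - best <= tolerance]
    return random.choice(near_best)

arrays = {"array-1": 0.22, "array-2": 0.24, "array-3": 0.71}
print(recommend(arrays))  # array-1 or array-2, never the busy array-3
```

Run twice with the same input, a scheme like this may legitimately return different answers, which matches the behavior noted above: identical workload profiles do not guarantee identical advice.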
On the Windows platform, follow these steps:
1. Select Start, then Run, and enter regedit.exe.
2. Open the HKEY_LOCAL_MACHINE\SOFTWARE\Tivoli\Director\CurrentVersion key.
3. Modify LogOutput. Set the value equal to 1.
4. Reboot the server.

The output log location from the instructions above is X:\program files\ibm\director\log (where X is the drive where the Director application was installed). The log file for the Director is com.tivoli.console.ConsoleLauncher.stderr.

On the Linux platform, TWGRas.properties turns output logging on. You need to remove the comment from the last line in the file (twg.sysout=1) and ensure that you have set TWG_DEBUG_CONSOLE as an environment variable. For example, in bash:
$ export TWG_DEBUG_CONSOLE=true
To express the workload represented by the new volumes, they are assigned a workload profile. A workload profile contains various performance attributes:

I/O demand, in I/O operations per second per GB of volume size
Average transfer size, in KB
Percentage mix of I/O - sequential or random, and read or write
Cache utilization - percent of cache hits for random reads and cache misses (destages) for random writes
Peak activity time - the time period when the workload is most active

You can create your own workload profile definitions in two ways:

By copying existing profiles and editing their attributes
By performing an analysis of existing volumes in the environment

The second option is known as a Workload Analysis. You may select one or more existing volumes, and the historical performance data for these volumes is retrieved to determine their (average) performance behavior over time.
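The attributes listed above can be pictured as a small data structure with a sanity check on the read/write mix. The field names and the example values are ours, chosen only to illustrate the shape of a profile, not to reproduce any predefined template:

```python
# Sketch: a workload profile as a data structure, following the attribute
# list above. Field names and example values are illustrative only.
from dataclasses import dataclass

@dataclass
class WorkloadProfile:
    name: str
    io_per_sec_per_gb: float      # I/O demand (access density)
    avg_transfer_kb: float        # average transfer size, KB
    seq_read_pct: int             # percentage mix of I/O...
    seq_write_pct: int
    rand_read_pct: int
    rand_write_pct: int
    rand_read_cache_hit_pct: int  # cache utilization
    rand_write_destage_pct: int

    def __post_init__(self):
        mix = (self.seq_read_pct + self.seq_write_pct +
               self.rand_read_pct + self.rand_write_pct)
        if mix != 100:
            raise ValueError("read/write mix must total 100%")

p = WorkloadProfile("example-oltp", 1.0, 8.0, 15, 20, 35, 30, 40, 33)
print(p.name)
```

Copying an existing profile and editing its attributes, the first creation method above, amounts to cloning such a structure and changing a few fields.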
b. Select the storage device, right-click the device, and select Volume Performance Advisor (see Figure 6-62 on page 280).
Figure 6-61 Drag and Drop the VPA icon to the storage device
If a storage device that is not in the scope of the VPA is selected for the drag and drop step, the message in Figure 6-63 opens. Devices such as a CIMOM or an SNMP device will generate this error. Only the ESS is supported at this time.
In the ESS User Validation panel, specify the user name, password, and port for each of the IBM TotalStorage Enterprise Storage Servers (ESSs) that you want to examine. During the initial setup of the VPA, on the ESS User Validation window, first select the ESS (as shown in Figure 6-65 on page 282) and then enter the correct user name, password, and password verification.
You must click Set after you have entered the user name, password, and password verification in the appropriate fields (see the circled portion of Figure 6-66 on page 282). When you click Set, the application populates the data you entered (masked) into the correct fields in the Device Information box (see Figure 6-67 on page 282).
If you do not click Set before selecting OK, the following errors will appear, depending on what data needs to be entered:

BWN005921E (ESS Specialist username has not been entered correctly or applied)
BWN005922E (ESS Specialist password has not been entered correctly or applied)

If you encounter these errors, ensure you have correctly entered the values in the input fields in the lower part of the ESS user validation window, and then retry by clicking OK. The ESS user validation window contains the following fields:

Devices table - Select an ESS from this table. It includes device IDs and device IP addresses of the ESS devices on which this task was dropped.
ESS Specialist username - Type a valid ESS Specialist user name for the selected ESS. Subsequent displays of the same information for this ESS show the user name and password that were entered. You can change the user name by entering a new user name in this field.
ESS Specialist password - Type a valid ESS Specialist password for the selected ESS. Any existing password entries are removed when you change the ESS user name.
Confirm password - Type the valid ESS Specialist password again, exactly as you typed it in the password field.
ESS Specialist port - Type a valid ESS port number. The default is 80.
Remove button - Click to remove the selected information.
Set button - Click to set names, passwords, and ports without closing the panel.
Add button - Click to invoke the Add devices panel.
OK button - Click to save the changes and close the panel.
Click the OK button to save the changes and close the panel. The application will attempt to access the ESS storage device. The error message in Figure 6-68 can indicate use of an incorrect username or password for authentication. The error may also appear if a firewall prevents you from authenticating to the storage device. If this occurs, check that you are using the correct username and password, and that you have firewall access to establish storage device connectivity.
You use the Volume performance advisor - Settings window to identify your requirements for host attachment and the total amount of space that you need. You can also use this panel to specify volume number and size constraints, if any. We will begin with our example as shown in Figure 6-70.
The following are the fields in this window: Total space required (GB) - Type the total space required in gigabytes. The smallest allowed value is 0.1 GB. We requested 3 GB for our example.
Note: You cannot exceed the volume space available for examination on the server(s) you select. To show the error, in this example we selected host Zombie and a total required space of 400 GB. We got the error shown in Figure 6-71 on page 285.

Action: Retry with different values and look at the server log for details.

Solutions:
Select a smaller maximum Total (volume) Space required (GB) and retry this step.
Select more hosts, which will include adequate volume space for this task. You may want to select the box entitled Consider volumes that have already been allocated but not assigned in the performance recommendation.

Enabling the Director log file will generate logs for troubleshooting Director GUI components, including those for the Performance Manager. In this example, the file we reference is com.tivoli.console.ConsoleLauncher.stderr (com.tivoli.console.ConsoleLauncher.stdout is also useful). The sample log is shown in Figure 6-72 on page 285.
Specify a volume size range button - Click the button to activate the field, then use the Minimum size (GB) spinner and the Maximum size (GB) spinner to specify the range. In this example, we selected 1 GB as the minimum and 3 GB as the maximum.
Specify a volume quantity range button - Click the button to activate the field, then use the Minimum number spinner and the Maximum number spinner to specify the range.
Consider volumes that have already been allocated but not assigned to hosts in the performance recommendation - If you check this box, VPA will use these types of volumes in the volume performance examination process.
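Given a total space requirement and a volume size range, the number of volumes that can satisfy the request is bounded by simple arithmetic. The formula below is our own illustration of that relationship, inferred from the field descriptions above, not taken from the product:

```python
# Sketch: bounds on how many volumes can cover a total space requirement
# when each volume must fall within [min_size, max_size]. Our own
# arithmetic for illustration, not VPA's implementation.
import math

def volume_count_bounds(total_gb, min_size_gb, max_size_gb):
    """Fewest volumes uses the largest size; most volumes the smallest."""
    fewest = math.ceil(total_gb / max_size_gb)
    most = math.floor(total_gb / min_size_gb)
    return fewest, most

# The example above: 3 GB total, volumes between 1 GB and 3 GB.
print(volume_count_bounds(3, 1, 3))  # (1, 3)
```

This also shows why a volume quantity range outside these bounds, or above a server's volume limit, cannot be satisfied.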
Chapter 6. TotalStorage Productivity Center for Disk use
When this box (Consider volumes...) is checked and you click Next, the VPA wizard will open the following warning window (see Figure 6-73).
Note: The BWN005996W message is a warning (W). You have selected to reuse unassigned existing volumes, which could potentially cause data loss. Go "Back" to the VPA Settings window by clicking OK if you do not want to consider unassigned volumes. Press the "Help" button for more information.

Explanation: The Volume Performance Advisor will assume that all currently unassigned volumes are not in use, and may recommend the reuse of these volumes. If any of these unassigned volumes are in use, for example as replication targets or for other Data Replication purposes, and these volumes are recommended for reuse, the result could be potential data loss.

Action: Go back to the Settings window and clear "Consider volumes that have already been allocated but not assigned to hosts in the performance recommendation" if you do not want to consider volumes which may potentially be used for other purposes. If you want to continue to consider unassigned volumes in your recommendations, then continue.

Host Attachments table - Select one or more hosts from this table. This table lists all hosts (by device ID) known to the ESS that you selected for this task. It is important to only choose hosts for volume consideration that are the same server type. It is also important to note that the VPA takes into consideration the maximum volume limitations of the server type, such as Windows (256 volumes maximum) and AIX (approximately 4000 volumes). If you select a volume range above the server limit, VPA will display an error. In our example, we used the host Zombie.

Next button - Click to invoke the Choose workload profile window. You use this window to select a workload profile from a list of existing profile templates.

5. Click Next after entering your preferred parameters, and the Choose workload profile window will display (see Figure 6-74 on page 287).
Note: You cannot modify the properties of the workload profile from this panel; the panel options are greyed out (inactive). You can make changes to a workload profile from the Manage Profile Create like panel.
Next button - Click to invoke the Choose candidate locations window. You can use this panel to select volume locations for the VPA to consider.
6. After reviewing the properties for the predefined workload profiles, you may select a workload profile from the table which closely resembles your workload profile requirements. For our scenario, we selected the OLTP Standard workload name from the Choose workload profile window. We are going to use this workload profile for the LUN placement recommendations.

Name - Shows the default profile name. The following restrictions apply to the profile name:
The workload profile name must be between 1 and 64 characters.
Legal characters are A-Z, a-z, 0-9, "-", "_", ".", and ":".
The first character cannot be "-" or "_". Spaces are not acceptable characters.
Description - Shows the description of the workload profile.
Total I/O per second per GB - Shows the Total I/O per second rate for the selected workload profile.
Average transfer size (KB) - Shows the value for the selected workload profile.
Caching information box - Shows the cache hits and destage percentages:
  Random read cache hits - Range from 1 - 100%. The default is 40%.
  Random write destage - Range from 1 - 100%. The default is 33%.
Read/Write information box - Shows the read and write values. The percentages for the four fields must equal 100%:
  Sequential reads - The default is 14%.
  Sequential writes - The default is 23%.
  Random reads - The default is 36%.
  Random writes - The default is 32%.
Peak activity information box - Since we are currently only viewing the properties of an existing profile, the parameters in this box are not selectable, but you may review them for later reference. After you review this box, click the Close button. When creating a new profile, this box allows you to enter the following parameters:

Use all available performance data radio button - Select this option if you want all available, previously collected performance data to be considered for this workload profile.
Use the specified peak activity period radio button - Select this button as an alternative (instead of the Use all available performance data option) for consideration in this workload profile definition.
Time setting drop-down menu - Select the time setting you want to use for this workload profile: Device time, Client time, Server time, or GMT.
Past days to analyze spinner - Use this (or manually enter the number) to select the number of days of historical information you want to consider for this workload profile analysis.
Time Range drop-down lists - Select the Start time and End time to consider using the appropriate fields.
Close button - Click to close the panel. You will be returned to the Choose workload profile window.
You can use the Choose candidate locations page to select volume locations for the performance advisor to consider. You can choose to either include or exclude the selected locations from the advisor's consideration. The VPA uses historical performance information to advise you about where volumes should be created. The Choose candidate locations page is one of the panels the performance advisor uses to collect and evaluate this information.

Device list - Displays device IDs or names for each ESS on which the task was activated (each ESS on which you dropped the Volume advisor icon).
Component Type tree - When you select a device from the Device list, the selection tree opens on the left side of the panel. The ESS component levels are shown in the tree. The following objects might be included: ESS Cluster
The component level names are followed by information about the capacity and the disk utilization of the component level. For example, we used the System component level. It shows Component ID 2105-F20-16603, Type System, Description 2105-F20-16603-IBM, Available capacity 311 GB, Utilization Low (see Figure 6-76 on page 290).

Tip: You can select the different ESS component types, and the VPA will reconsider the volume placement advice based on that particular selection. To familiarize yourself with the options, select each component in turn to determine which component-type-centric advice you prefer before proceeding to the next step.

Select a component type from the tree to display a list of the available volumes for that component in the Candidates table (see Figure 6-76 on page 290). We chose System for this example; it represents the entire ESS system in this case. Click the Add button to add the component selected in the Candidates table to the Selected candidates table. See Figure 6-77. It shows the selected candidate as 2105-F20-16603.
Figure 6-77 VPA Choose candidate locations Component Type tree example (System)
You can use the Verify settings panel to verify the volume settings that you specified in the previous panels of the VPA.
You use the Recommendations window to first view the recommendations from the VPA and then to create new volumes based on those recommendations. In this example, VPA recommends the location of the volume as 16603:2:4:1:1700 in the Component ID column. This means the recommended volume location is the ESS with ID 16603, Cluster 2, Device Adapter 4, Array 1, and volume ID 1700. With this information, it is also possible to create the volume manually via the ESS Specialist browser interface, or to use VPA to create it. In the Recommendations window of the wizard, you can choose whether the recommendations are to be implemented, and whether to loop around for another set of recommendations. At this time, you have two options (other than to cancel the operation). Make your final selection to Finish, or return to the VPA for further recommendations.
a. If you do not want to assign the volumes using the current VPA advice, or want the VPA to make another recommendation, check only the Make Additional Recommendations box.
b. If you want to use the current VPA recommendation and make additional volume assignments at this time, select both the Implement Recommendations and Make Additional Recommendations check boxes. If you choose both options, you must
Chapter 6. TotalStorage Productivity Center for Disk use
first wait until the current set of volume recommendations is created, or created and assigned, before continuing. If you make this type of selection, a secondary window will appear which runs synchronously within the VPA. Tip: Stay in the same VPA session if you are going to implement volumes and add new volumes. This enables the VPA to provide advice for your current selections, check for previous assignments, and verify that no other VPA session is processing the same volumes.
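As an aside, the recommended location string (such as 16603:2:4:1:1700 in this example) packs the full physical placement into one colon-separated value. The following sketch is purely illustrative — the field names are our own, not part of the product — but it shows how such a string decomposes:

```python
# Illustrative sketch (not part of the product): splitting a VPA-style
# ESS location string into its named fields.

def parse_vpa_location(component_id):
    """Split an ESS location string like '16603:2:4:1:1700' into named parts."""
    ess_id, cluster, adapter, array, volume_id = component_id.split(":")
    return {
        "ess": ess_id,                    # ESS serial/ID
        "cluster": int(cluster),          # cluster number
        "device_adapter": int(adapter),   # device adapter
        "array": int(array),              # array within the adapter
        "volume": volume_id,              # volume ID (hex-style string)
    }

loc = parse_vpa_location("16603:2:4:1:1700")
print(loc["cluster"], loc["device_adapter"], loc["volume"])  # → 2 4 1700
```

Reading the string this way makes it straightforward to cross-check the VPA recommendation against the ESS Specialist view of the same volume.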
2. Click the Continue button to proceed with the VPA advice (see Figure 6-80).
3. In Figure 6-81, we see that the volumes are being created on the server we selected previously. This process takes a little time, so be patient. 4. Figure 6-82 indicates that the volume creation and assignment to the ESS has completed. Momentarily, the VPA loopback sequence will continue.
5. After the volume creation step has successfully completed, the Settings window will open again so that you can add more volumes (see Figure 6-83 on page 296).
For the additional recommendations, we decided to use the same server, but we specified the Volume quantity range instead of the Volume size range for the requested space of 2 GB. See Figure 6-84 on page 297.
After clicking Next, the Choose profile panel opens. We selected the same profile as before, OLTP Standard. See Figure 6-85 on page 298.
After clicking Next, the Choose candidate locations panel opens. We selected Cluster from the Component Type drop-down list. See Figure 6-86 on page 299.
The Cluster component type shows Component ID 2105-F20-16603:2, Type Cluster, Descriptor 2, Available capacity 308 GB, and Utilization Low. This indicates that the VPA plans to provision the additional capacity on Cluster 2 of this ESS. After clicking the Add button, Cluster 2 becomes a selected candidate for the new volume. See Figure 6-87 on page 300.
Upon clicking Next, the Verify settings panel opens, as shown in Figure 6-88 on page 301.
After verifying the settings and clicking Next, the VPA Recommendations window opens. See Figure 6-89 on page 302.
Since the purpose of this example is only to show the VPA looping, we decided to clear both the Implement Recommendations and Make additional recommendations check boxes. Clicking Finish completed the VPA example (Figure 6-90 on page 303).
[Diagram: Managing profiles — choose Create profile or Create like, review the analysis results, and if the results are accepted, save the profile]
Before using the VPA for any additional disk space requirement for an application, you need to: Determine the typical I/O workload type of that application. Have performance data collected which covers the peak load time periods. You need to determine the broad category into which the selected I/O workload fits, for example, whether it is OLTP high, OLTP standard, Data warehouse, Batch sequential, or Document archival. This is shown as the highlighted box in the diagram. TotalStorage Productivity Center for Disk provides predefined profiles for these workload types, and it allows you to create additional similar profiles by choosing Create like. If you do not find a match with the predefined profiles, you may prefer to create a new profile. While choosing Create like or Create profile, you will also need to specify historical performance data samples covering the peak load activity time period. Optionally, you may specify additional I/O parameters. Upon submitting the Create or Create like profile task, the performance analysis will be performed and the results will be displayed. Depending on the outcome of the results, you may need to re-validate the parameters for the data collection task and ensure that peak load samples are taken correctly. If the results are acceptable, you may save the profile. This profile can then be referenced by the VPA in the future. In 6.7.1, Choosing Workload Profiles on page 304, we cover the step-by-step tasks using an example.
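The "broad category" decision above amounts to mapping coarse I/O characteristics onto one of the predefined profile names. The sketch below is a hypothetical illustration of that kind of matching — the threshold values are our own assumptions, not the product's actual rules:

```python
# Hypothetical sketch of matching I/O characteristics to a predefined
# workload profile name. All thresholds are illustrative assumptions.

def broad_workload_category(avg_transfer_kb, read_pct, sequential_pct):
    """Map coarse I/O characteristics to one of the predefined profile names."""
    if sequential_pct >= 80 and avg_transfer_kb >= 64:
        # Large, mostly sequential transfers: warehouse scans vs. batch writes.
        return "Data Warehouse" if read_pct >= 80 else "Batch Sequential"
    if avg_transfer_kb >= 256:
        # Very large transfers with little sequentiality: archival-style I/O.
        return "Document Archival"
    # Small, mostly random transfers look like transaction processing.
    return "OLTP High" if read_pct >= 70 else "OLTP Standard"

print(broad_workload_category(4, 80, 10))    # small random, read-heavy
print(broad_workload_category(128, 90, 90))  # large sequential reads
```

In practice you would derive the three inputs from the historical performance data collected for the application's peak periods.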
Performance Manager uses these profiles to create a performance recommendation for volume allocation on an IBM storage server. You can also use a set of Performance Manager panels to create and manage the workload profiles. There are three methods to choose a workload profile, as shown in Figure 6-92.
Note: Using a predefined profile does not require pre-existing performance data, but the other two methods require historical performance data from the target storage device. You can launch the workload profiles management tool using the drag and drop method from the IBM Director console GUI. Drag the Manage Workload Profile task to the target storage device as shown in Figure 6-93.
If you are using the Manage Workload Profile or VPA tool for the first time on the selected ESS device, you will need to complete the ESS user validation. This is described in detail in 6.6.4, ESS User Validation on page 280. The ESS user validation is the same for the VPA and Manage Workload Profile tools. After successful ESS user validation, the Manage Workload Profile panel opens as shown in Figure 6-94.
You can create or manage a workload profile using the following three methods: 1. Selecting a predefined workload profile Several predefined workloads are shipped with Performance Manager. You can use the Choose workload profile panel to select the predefined workload profile that most closely matches your storage allocation needs. The default profiles shipped with Performance Manager are shown in Figure 6-95.
You can open the properties panel of a predefined profile to verify the profile details. A sample profile for OLTP Standard is shown in Figure 6-75 on page 288. 2. Creating a workload profile similar to another profile You can use the Create like panel to modify the details of a selected workload profile. You can then save the changes and assign a new name to create a new workload profile from the existing profile. To create a like profile, the following tasks are involved: a. Create a performance data collection task for the target storage device - You may need to include multiple storage devices, based on your profile requirements for the application.
b. Schedule the data collection task - You need to ensure the data collection task runs over a sufficient period of time that truly represents the typical I/O load of the respective application. The key is to have sufficient historical data. Tip: A best practice is to schedule the frequency of the performance data collection task in such a way that it covers the peak load periods of I/O activity and captures at least a few samples of peak load. The number of samples needed depends on the I/O characteristics of the application.
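The tip above — make sure the schedule yields a few samples inside the peak window — can be sanity-checked with a small calculation. This sketch is illustrative only; the peak window and minimum sample count are assumptions for the example:

```python
# Illustrative sketch: verify that a collection schedule yields at least
# a few samples inside the known peak-load window. The window (09:00-17:00)
# and the minimum count are assumed values for this example.

def covers_peak(sample_hours, peak_start, peak_end, min_samples=3):
    """True if at least min_samples fall inside the [peak_start, peak_end) hours."""
    in_peak = [h for h in sample_hours if peak_start <= h < peak_end]
    return len(in_peak) >= min_samples

# A sample every 2 hours comfortably covers a 09:00-17:00 peak window.
print(covers_peak(range(0, 24, 2), 9, 17))  # → True
```

If the check fails, the collection task frequency or time span should be adjusted before the profile is created.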
c. Determine the closest workload profile match - Determine which existing or predefined profile the new workload most closely matches. Note that it may not be an exact fit, but it should be of a somewhat similar type. d. Create the new similar profile - Using the Manage Workload Profile task, create the new profile. You will need to select the appropriate time period for the historical data which you collected earlier. In our example, we created a similar profile using the Batch Sequential predefined profile. First, we selected the Batch Sequential profile and clicked the Create like button as shown in Figure 6-96.
This opens the properties panel for Batch Sequential as shown in Figure 6-97 on page 308.
We changed the following values for our new profile: a. Name: ITSO_Batch_Daily b. Description: For ITSO batch applications c. Average transfer size: 20 KB d. Sequential reads: 65% e. Random reads: 10% f. Peak Activity information: We used a time period of the past 24 days, from 12 AM to 11 PM. We then saved our new profile (see Figure 6-98 on page 309).
This new profile, ITSO_Batch_Daily, is now available in the Manage workload profile panel as shown in Figure 6-99 on page 310, and can now be used for VPA analysis. This completes our example.
3. Creating a new workload profile from historical data You can use the Manage workload profile panel to create a workload profile based on historical data about existing volumes. You can select one or more volumes as the base for the new workload profile. You can then assign a name to the workload profile, optionally provide a description, and finally create the new profile. To create a new workload profile, click the Create button as shown in Figure 6-100.
This launches a new panel for creating the workload profile, as shown in Figure 6-101 on page 311. At this stage, you need to specify the volumes for performance data analysis. In our example, we selected all volumes. To select multiple volumes, but not all, click the first volume, hold the Shift key, and click the last volume in the list. After all the required volumes are selected (shown in dark blue), click the Add button. See Figure 6-101 on page 311.
Note: The ESS volumes you specify should be representative of the I/O behavior of the application for which you are planning to allocate space using the VPA tool.
Upon clicking the Add button, all the selected volumes are moved to the Selected volumes box as shown in Figure 6-102 on page 312.
Figure 6-102 Selected volumes and performance period for new workload profile
In the Peak activity information box, you need to specify the activity sample period for the volume performance analysis. You can select the option Use all available performance data or Use the specified peak activity period. Based on your application's peak I/O behavior, you may specify the sample period with a start date, duration in days, and start/end time. For the time setting, you can choose from the drop-down box: Device time, Client time, Server time, or GMT.
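Conceptually, specifying a peak activity period restricts the analysis to samples whose time of day falls inside the chosen window. This illustrative sketch (the timestamps, values, and window are hypothetical) shows that filtering:

```python
# Illustrative sketch of restricting performance samples to a specified
# peak activity period. Sample data and the 09:00-17:00 window are
# hypothetical values for this example.
from datetime import datetime, time

def in_peak_period(samples, start, end):
    """Keep (timestamp, value) samples whose time of day falls in [start, end)."""
    return [(ts, v) for ts, v in samples if start <= ts.time() < end]

samples = [
    (datetime(2005, 9, 1, 2, 30), 120),   # overnight, outside the window
    (datetime(2005, 9, 1, 10, 15), 950),  # mid-morning peak
    (datetime(2005, 9, 1, 14, 0), 870),   # afternoon peak
]
peak = in_peak_period(samples, time(9, 0), time(17, 0))
print(len(peak))  # → 2
```

Choosing the wrong window (or the wrong clock — device, client, server, or GMT) silently filters out the very samples you wanted, which is why the time-setting drop-down matters.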
After you have entered all the fields, click Next. You will see the Review panel as shown in Figure 6-103 on page 313.
You can specify a name and description for the new workload profile. We advise that you provide a detailed description which covers: the name of the application for which the profile is being created; the application I/O activity that the peak activity sample represents; when it was created; optionally, who created it; and any other relevant information required by your organization.
In our example, we created a profile named New_ITSO_app1_profile. At this point, you may click Finish. TotalStorage Productivity Center for Disk then begins the volume performance analysis based on the parameters you have provided. This process may take some time depending on the number of volumes and the sampling time period, so be patient. Finally, it shows the outcome of the analysis. In our example, we got the results notification message shown in Figure 6-104 on page 314. The analysis yielded results that are not statistically significant, as shown by the message BWN005965E: Analysis results are not significant. This may indicate that:
Chapter 6. TotalStorage Productivity Center for Disk use
a. There is not enough I/O activity on the selected volumes b. The time period chosen for sampling is not correct c. The correct volumes were not chosen You have the option to save or discard the profile. We decided to save the profile.
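A rejection like BWN005965E typically boils down to the collected samples being too few or too idle to characterize the workload. The sketch below is a hypothetical illustration of that kind of check — the thresholds are invented for the example and are not the product's actual criteria:

```python
# Hypothetical sketch of a "not significant" check: too few samples, or
# almost no I/O activity in them. Thresholds are illustrative assumptions.

def analysis_significant(io_rates, min_samples=10, min_mean_iops=1.0):
    """Reject the analysis if there are too few samples or almost no I/O."""
    if len(io_rates) < min_samples:
        return False  # sampling period too short
    mean = sum(io_rates) / len(io_rates)
    return mean >= min_mean_iops  # idle volumes yield no useful profile

print(analysis_significant([0.0] * 20))    # idle volumes
print(analysis_significant([150.0] * 20))  # busy volumes
```

This mirrors the three causes listed above: idle volumes, a wrong sampling period, or the wrong volumes all produce data that fails such a test.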
Upon saving, the profile is now listed in the Manage workload profile panel as shown in Figure 6-105.
The new profile can now be referenced by VPA for future usage.
6.8 Remote Console installation for TotalStorage Productivity Center for Disk - Performance Manager
It is possible to install a TotalStorage Productivity Center for Disk console on a server other than the one on which the TotalStorage Productivity Center for Disk code is installed. This allows you to manage TotalStorage Productivity Center for Disk from a secondary location. Having a secondary TotalStorage Productivity Center for Disk console also offloads work from the TotalStorage Productivity Center for Disk server.
Note: You are only installing the IBM Director and TotalStorage Productivity Center for Disk console code. You do not need to install any other code for the remote console.
In our lab we installed the remote console on a dedicated Windows 2000 server with 2 GB
RAM. You must install all the consoles and clients on the same server. The steps are:
1. Install the IBM Director console. 2. Install the TotalStorage Productivity Center for Disk console. 3. Install the Performance Manager client if the Performance Manager component is installed.
Next, you will see a panel similar to Figure 6-107 on page 316.
Choose IBM Director Console Installation. You will see a panel similar to Figure 6-108. Click Next, then accept the terms of the License Agreement as shown in Figure 6-109 on page 317.
Click Next, and you will see a panel similar to Figure 6-110. Click Next to choose the default program features and program file location as shown in Figure 6-111 on page 318.
The installation completes as shown in Figure 6-113 on page 319. Now you can proceed to the base remote console installation of TotalStorage Productivity Center for Disk.
6.8.2 Installing TotalStorage Productivity Center for Disk Base Remote Console
After installing the IBM Director console, you need to install the TotalStorage Productivity Center for Disk common base package. Insert the CD-ROM which contains the package, or choose the directory if you have downloaded the code. We show a window of our download directory in Figure 6-114 on page 320. Click Setup.exe to begin the install process.
Next, you will see a panel similar to Figure 6-115. Click Next to install the TotalStorage Productivity Center for Disk base package.
Figure 6-115
Next, you will see the Software License Agreement window as shown in Figure 6-116. Select to accept the terms of the license agreement and click Next.
Next, choose the default destination directory as shown in Figure 6-117 and click Next.
You will see a panel similar to Figure 6-118 on page 322. Select Install a Console and click Next.
Next, you will see a panel similar to Figure 6-120 on page 323. Click Finish to complete the installation process.
Figure 6-120
This completes the common base installation of the TotalStorage Productivity Center for Disk console. Next, you need to install the console for the Performance Manager function of TotalStorage Productivity Center for Disk.
Next, you will see a Welcome panel similar to Figure 6-122 on page 325. Click Next.
Figure 6-122 Welcome panel from TotalStorage Productivity Center for Disk installer
Next, select to accept the terms of the license agreement and click Next.
Figure 6-123 Accept the terms of the license agreement
Next, choose the default destination directory as shown in Figure 6-124 and click Next.
Next, choose to install the Productivity Center for Disk Client and click Next as shown in Figure 6-125 on page 327.
Next, select both product check boxes if you want to install both the console and the command line client for the Performance Manager function. See Figure 6-126. Click Next.
You can click Manage Disk Performance and Replication, as highlighted in the figure. This launches the IBM Director remote console. You can log on to the Director server and start using the remote console functions, except for Replication Manager.
Note: At this point, you have installed the remote console for the Performance Manager function only, and not for Replication Manager. You may install the remote console for Replication Manager if you require it.
Chapter 7.
Figure 7-1 TotalStorage Productivity Center for Fabric device compatibility Web page
Either scroll down this page to the Switches section or jump directly to it as shown in Figure 7-1. If the switch you plan to use is not listed, and is not a re-branded equivalent of one listed, then it is not supported by TotalStorage Productivity Center for Fabric. Important: Do not assume that a switch will support zoning changes just because it appears on the device compatibility Web page. Not all switches support all functions that TotalStorage Productivity Center for Fabric can provide, so it is important to look at the specific details of the device you plan to use. Locate the switch you plan to use and click it to view the detailed breakdown of its functional support. Figure 7-2 on page 335 shows a device support details page. If you see a Yes in Zone Control Supported (either In-band or Out-of-band), then TotalStorage Productivity Center for Fabric will be able to work with TotalStorage Productivity Center for Disk to perform zone administration at LUN allocation and assignment time.
There are two methods by which TotalStorage Productivity Center for Fabric can communicate with a switch to effect zone changes. The method used is determined by the switch vendor. In-band method: The switch accepts zone change control information through instructions sent to it over the Fibre Channel network (in-band). Out-of-band method: The switch accepts zone change information through instructions sent to its IP network interface, known as out-of-band. Important: For switches that use the in-band zone control method, you need to deploy at least one TotalStorage Productivity Center for Fabric agent on a server that is fibre-connected to the SAN fabric you want to control. For switches that use the out-of-band zone control method, the TotalStorage Productivity Center for Fabric manager machine talks directly to the switch over TCP/IP, and no agents are required to implement this function. The out-of-band method requires SNMP network access between the TotalStorage Productivity Center for Fabric manager machine and the SAN switch. It may be necessary for an organization to make firewall or network changes to allow this. Out-of-band is simpler than in-band to set up because it does not require an agent on a SAN-connected host. If you establish that an in-band agent will be required to perform zoning changes with your switch and you currently don't have in-band agents, there are a number of things to consider. A TotalStorage Productivity Center for Fabric agent needs more IP ports than SNMP to communicate with the Fabric server, and requires CPU, RAM, and disk resources on the SAN-connected server. To learn how to deploy TotalStorage Productivity Center for Fabric agents, refer to the redbook IBM TotalStorage Productivity Center - Getting Started, SG24-6490.
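The decision rule above can be summarized as a small lookup. This sketch is purely illustrative (the dictionary keys are our own labels, not product terminology), but it captures what must be in place for each zone-control method:

```python
# Illustrative summary of the deployment rule: in-band zone control needs a
# fibre-attached Fabric agent; out-of-band needs SNMP access from the Fabric
# manager machine to the switch. Key names are our own labels.

def zoning_requirements(method):
    """Return what must be in place for the given zone-control method."""
    if method == "in-band":
        return {"agent_on_san_host": True, "snmp_to_switch": False}
    if method == "out-of-band":
        return {"agent_on_san_host": False, "snmp_to_switch": True}
    raise ValueError("unknown zone-control method: %s" % method)

print(zoning_requirements("out-of-band"))
```

Checking the device support page for your switch tells you which method applies, and therefore which prerequisite (agent deployment or SNMP/firewall access) you need to plan for.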
Figure 7-2 Device support Web page for IBM 2109-F08 and 2109-F16 switch
7.1.3 Deployment
TotalStorage Productivity Center for Fabric can run as a stand-alone fabric manager or integrate with TotalStorage Productivity Center for Disk. It runs on the same common infrastructure of DB2 and WebSphere as TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication. You can choose to have TotalStorage Productivity Center for Fabric installed on the same server as your TotalStorage Productivity Center for Disk installation, or on a separate one. A key consideration for this choice is the amount of RAM installed in the systems hosting the service.
Tip: To install TotalStorage Productivity Center for Disk, TotalStorage Productivity Center for Replication, and TotalStorage Productivity Center for Fabric on a single server machine, you will need at least 2.5 GB of RAM. Consider installing 4 GB. If TotalStorage Productivity Center for Fabric is installed on a separate machine from your TotalStorage Productivity Center for Disk system, you will need to install the TotalStorage Productivity Center for Fabric remote console on the TotalStorage Productivity Center for Disk system to allow it to be launched from the Director console and used on the same machine. Note: The remote console is not needed for the zoning API function to operate. The API function calls are made from TotalStorage Productivity Center for Disk directly to the TotalStorage Productivity Center for Fabric manager machine once configured. It is therefore not necessary to install the TotalStorage Productivity Center for Fabric remote console if you do not plan to use it. Installing the remote console on the same machine allows fabric and disk management from a single console.
Fill in the following entries to enable communications with TotalStorage Productivity Center for Fabric. TSANM host - Enter the IP address or hostname of the server where the TotalStorage Productivity Center for Fabric manager is installed. Note: If TotalStorage Productivity Center for Fabric is installed on the same machine as TotalStorage Productivity Center for Disk, you still need to enter its hostname or IP address in the box. TSANM port - The port number that TotalStorage Productivity Center for Fabric is using for communications. The default is 9550. It is only necessary to change this value if the default port was changed when the TotalStorage Productivity Center for Fabric manager was installed. TSANM password - Enter the password used to communicate with TotalStorage Productivity Center for Fabric. This password was set when TotalStorage Productivity Center for Fabric was installed.
Note to Linux users: If you have difficulty using the administrative console on Linux, try the Netscape Communicator 7.1 browser, which is based on Mozilla 1.0. This browser release is not officially supported by the WebSphere Application Server product, but users have been able to access the console successfully with it. Otherwise, try running the WebSphere Administrative Console remotely from Internet Explorer on a Windows system. Log in to the WebSphere Administrative Console using the WebSphere username and password. In the WebSphere Administrative Console, expand the Applications menu and choose the Enterprise Applications link. In the Enterprise Applications table, select the checkbox for only the DMCoserver application and click the Update button, opening the Preparing for the application update panel. Enter the full pathname of the ear file, "<efix directory>\DMCoserver.ear", into the Path text field and click Next. Note: If you click the Cancel button, you will not be able to complete the install. For the second panel in Preparing for the application update, accept the defaults and click Next (you may have to scroll the wizard panel down to see the Next button). For each of the Install New Application wizard panels, Steps 1 through 13, accept the defaults and click Next (again, you may have to scroll down). In Step 14, accept the defaults and click Finish. The Installing panel should open. Once the install is complete, the panel displays something similar to the following: If there are EJBs in the application, the EJB Deploy process may take several minutes. Please do not save the configuration until the process is complete. Check the SystemOut.log on the Deployment Manager or Server where the application is deployed for specific information about the EJB Deploy process as it occurs. ADMA5106I: Application DMCoserver uninstalled successfully.
ADMA5016I: Installation of DMCoserver started. ADMA5005I: Application DMCoserver configured in WebSphere repository ... ADMA5013I: Application DMCoserver installed successfully. Application DMCoserver installed successfully. If you want to start the application, you must first save changes to the master configuration (Save to Master Configuration). If you want to work with installed applications, click Manage Applications. Choose the Save to Master Configuration link, opening the Save panel.
In the Save panel, click the Save button. When the save operation is complete (the Web browser logo in the top right corner stops moving), the home page of the Administrative Console opens. Click the Logout button to log out of the Administrative Console. Close the IBM Director Console if it is running. Stop the IBM Director Server: on Windows, stop the IBM Director Support Program service or run the command net stop twgipc; on Linux, run the command twgstop. This should also stop WebSphere. Copy the four TWGExt files into the folder <IBM Director root>\classes\extensions (for example, on Windows, C:\Program Files\IBM\Director\classes\extensions or, on Linux, /opt/IBM/director/classes/extensions). Confirm that the existing files will be overwritten. Start the IBM Director Server: on Windows, start the IBM Director Support Program service or run the command net start twgipc; on Linux, run the command twgstart. Since twgstart returns asynchronously, wait until twgstat returns Active status. Start the IBM Director Console. The efix has now been applied.
Figure 7-5 Launch TotalStorage Productivity Center for Fabric remote console install
The InstallShield Wizard panel will display for a few moments while the installer loads (see Figure 7-6).
Figure 7-8 confirms that you are installing the TotalStorage Productivity Center for Fabric Console V2.1.0. Click Next.
If you accept the license terms select the radio button and click Next as shown in Figure 7-9 on page 342.
Select the preferred install directory or accept the default as in Figure 7-10 and click Next.
Enter the fully qualified Host Name or IP address of the server where TotalStorage Productivity Center for Fabric server is installed as seen in Figure 7-11 on page 343. The Port Number will default to 9550. Only change this if the default was changed on the TotalStorage Productivity Center for Fabric server at install time.
Figure 7-12 shows the default base Port Number, 9560. The TotalStorage Productivity Center for Fabric console requires 25 additional ports starting from this number. We recommend that you do not change this value.
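Because the console needs the base port plus 25 more consecutive ports, the full range for the default base of 9560 is 9560 through 9585. A quick sketch of that calculation, useful when checking firewall rules (illustrative only):

```python
# Illustrative sketch: the Fabric console uses a base port plus 25 further
# consecutive ports, so the default range is 9560-9585 (26 ports in total).

def fabric_console_ports(base=9560, additional=25):
    """Return every TCP port the Fabric console is expected to use."""
    return list(range(base, base + additional + 1))

ports = fabric_console_ports()
print(ports[0], ports[-1], len(ports))  # → 9560 9585 26
```

If you do change the base port, the whole contiguous range shifts with it, so any firewall openings must move as well.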
Enter the password that the remote console will use to authenticate with the TotalStorage Productivity Center for Fabric manager server (Figure 7-13 on page 344). This password would have been set when the TotalStorage Productivity Center for Fabric manager server was installed and it cannot be changed in this panel. Click Next to continue.
Figure 7-14 shows the next panel, used to select the drive where NetView will be installed. TotalStorage Productivity Center for Fabric uses NetView as its primary interface, and NetView is installed in the background as part of this install process. Choose a local drive only.
Enter a password for NetView as shown in Figure 7-15 on page 345. The installer will create a NetView user for the NetView service to run under. This is a new password and does not need to match any others previously entered.
The panel shown in Figure 7-16 confirms the location for the TotalStorage Productivity Center for Fabric code install and the disk space required. Click Next to start the installation process.
Figure 7-17 on page 346 shows the installation progress. This can take around 15 minutes to complete. Attention: A reboot is required to complete the installation process.
Note: Before DS4000 or FAStT volume properties can be displayed, as with other storage devices managed by Productivity Center common base, an initial inventory must be completed. Refer to Performing volume inventory on page 194 for details.
Figure 7-19 on page 348 shows the Volumes management panel for all current assignments. Select the Create button to start the volume creation and assignment process for a new LUN.
Specify the volume characteristics as seen in Figure 7-20. From the list of Defined host ports select the host(s) that will use this LUN. You can select multiple hosts using the <Ctrl> key. For more detailed information about using the panel refer to 5.7.3, Creating DS4000 or FAStT volumes on page 212.
Important: Only hosts previously defined to the DS4000 (FAStT) subsystem will be visible in the Create volume panel. Use the IBM DS4000 Storage Manager to define host World Wide Names (WWNs) and names before starting this process. You will need to run Perform Inventory Collection for TotalStorage Productivity Center for Disk to recognize new WWNs created in the IBM DS4000 Storage Manager. See 5.4, Performing volume inventory on page 194 for more details.
Figure 7-21 appears for a few seconds while TotalStorage Productivity Center for Disk communicates with TotalStorage Productivity Center for Fabric to retrieve current zone information from the SAN. The appearance of this panel indicates that the configuration of TotalStorage Productivity Center for Fabric and the efix have been successful.
Attention: If you see the message panel as seen in Figure 7-22 the TotalStorage Productivity Center for Fabric server is communicating but not managing the SAN in which the host and/or disk subsystem reside. You can continue to perform the LUN creation process but the zoning action will not take place. Click OK if you want to continue creating the LUN or Cancel if you want to stop the process.
Now specify the zone properties you want to create, as seen in Figure 7-23 on page 350. This panel shows: Active ZoneSet - This is the ZoneSet that is currently running on the switch fabric you are working with. This value is for information only and cannot be changed on this panel. ZoneSet to verify zoning - This is the ZoneSet that TotalStorage Productivity Center for Disk checks against to see whether a valid zone for this host/disk combination exists. Select the ZoneSet you want to work with using the drop-down arrow. If this SAN fabric has only one ZoneSet defined, it will appear here by default. Host ports - This box lists the WWN of the host port(s) selected to be assigned to the new LUN. You cannot change this value on the panel. If it is not showing the intended WWN, click Cancel to return to the Create volume panel and reselect the WWN (Figure 7-20 on page 348). Storage device ports - This box lists the WWN ports that the disk subsystem is presenting to the SAN. Select the ports to be zoned to the host. Select multiple ports using the <Ctrl> key. Click OK to continue. TotalStorage Productivity Center for Disk will now check for an existing zone in the specified ZoneSet that meets the requirement.
If a valid zone already exists, then no additional zoning changes are needed and you will see an information panel as in Figure 7-24. This is not an error message, even though it might look like one at first glance. You will see it if you have existing LUNs from the selected disk subsystem already defined to this host. Click OK to continue.
If a new host zone needs to be created for the host/disk combination the Create a new zone for the selected volume panel will appear (Figure 7-25 on page 351). The majority of this panel is to confirm the zoning action that is about to be executed. Provide the following information: Zone name - Enter the name that you want to assign to this zone. Zone set actions - Choose to make this zoning change effective immediately or only update the ZoneSet for future activation.
Click OK to continue. The Creating zone panel (Figure 7-26) will display while the action takes place.
A panel will appear to show the zone has been created as in Figure 7-27. Click OK to continue and perform volume creation.
Figure 7-28 on page 352 will appear and show the volume creation results as they happen.
The final panel to appear will be the results of the zone creation as in Figure 7-29.
Note: For this function to work you either need to have installed TotalStorage Productivity Center for Fabric manager or TotalStorage Productivity Center for Fabric remote console on the same machine as the TotalStorage Productivity Center for Disk installation. See 7.2, Installing Fabric remote console on page 340 for details of installing TotalStorage Productivity Center for Fabric remote console. Figure 7-31 on page 354 shows an example of the TotalStorage Productivity Center for Fabric console that will appear when launched. The top left of the four panels shows the four SAN islands that it is managing. The bottom two panels show switch to hosts connects for two of the switches.
From this interface you can display detailed information about SAN elements such as switches and hosts. You can view zoning information and host-to-device relationships. The GUI uses color-coded icons to indicate which SAN elements are OK or in error. You can also launch the fabric zoning tool to view and change fabric zones using the standards-based zoning functions (on a supported switch).
Chapter 8. TotalStorage Productivity Center for Replication use
then select Storage software in the Product family field, select TPC for Replication, and select the Install and use tab. The ESS Copy Services supported with TotalStorage Productivity Center for Replication V2.1 include:
ESS PPRC Synchronous remote copy
Add / delete volume pairs
Full background copy
Freeze / Run
Suspend / resume
Query status of the session, paths, and pairs
PPRC
PPRC is a function of a storage server that constantly updates a secondary copy of a volume to match changes made to a primary volume. The primary and the secondary volumes can be on the same storage server or on separate storage servers. PPRC differs from FlashCopy in two essential ways. First, as the name implies, the primary and secondary volumes can be located at some distance from each other. Second, and more significantly, PPRC is not aimed at capturing the state of the source at some point in time, but rather aims at reflecting all changes made to the source data at the target. PPRC is application independent. Because the copying function occurs at the disk subsystem level, the host's operating system or application has no knowledge of its existence. In contrast, host-based mirroring is controlled by software at the operating system or file system level: the storage subsystem does not know about it. Table 8-1 summarizes the characteristics of both approaches.
Table 8-1 Comparison of PPRC and host-based mirroring
Peer-to-Peer Remote Copy:
- Operation is performed by the storage subsystem, transparent to the host operating system.
- The functionality is the same for all operating systems and applications.
- Read and write operations are sent to the primary volume only.
- There is a unidirectional relationship from the primary to the secondary volume.
- Failure recovery is different for the primary and secondary volume.
Host-based mirroring:
- Operation is performed by host software or a host bus adapter, transparent to the storage subsystem.
- The functionality depends on the capabilities of the operating system or host bus adapter.
- Write operations are sent to both volumes. Read operations are sent to either volume, depending on the read policy.
- The relationship between the volumes is symmetric.
- Failure recovery is identical for both volumes.
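The write-path difference in Table 8-1 can be modeled in a few lines. This is a conceptual sketch only (the class and function names are assumptions, not any ESS interface):

```python
# Minimal model of the distinction in Table 8-1: with PPRC-style remote
# copy the host writes to the primary only and the storage subsystem
# forwards the update; with host-based mirroring the host itself writes
# to both volumes.

class Volume:
    def __init__(self):
        self.blocks = {}

    def write(self, addr, data):
        self.blocks[addr] = data

def pprc_write(primary, secondary, addr, data):
    primary.write(addr, data)    # host I/O goes to the primary only
    secondary.write(addr, data)  # subsystem replicates synchronously
                                 # before acknowledging the host

def host_mirror_write(vol_a, vol_b, addr, data):
    vol_a.write(addr, data)      # host software issues the write twice,
    vol_b.write(addr, data)      # once to each side of the mirror

p, s = Volume(), Volume()
pprc_write(p, s, 0, b"payload")
print(p.blocks == s.blocks)  # True: secondary matches primary
```

Either path leaves the two volumes identical; the difference is which component does the duplication, which is exactly what drives the recovery and transparency rows of the table.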
FlashCopy
FlashCopy makes a single point-in-time copy of a LUN, also known as a time-zero copy. The target copy is available once the FlashCopy command has been processed. FlashCopy provides an instant, point-in-time copy of an ESS logical volume: an instantaneous copy, or view, of what the original data looked like at a specific point in time. The point-in-time copy created by FlashCopy is typically used where you need a copy of production data produced with minimal application downtime. It can be used for backup, testing of new applications, or for copying a database for data mining purposes. The copy looks exactly like the original source volume and is instantly available. TotalStorage Productivity Center for Replication provides a user interface for creating, maintaining, and using volume groups and for scheduling copy tasks. The user interface populates lists of volumes using the Device Manager interface. TotalStorage Productivity Center for Replication uses different names for copy services than ESS:
Point-in-Time Copy is equivalent to FlashCopy on ESS.
Continuous Synchronous Remote Copy is equivalent to Peer-to-Peer Remote Copy on ESS.
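One common way a time-zero copy can be made instantly available is copy-on-write: the target is logically complete at time zero, and a source block is physically preserved only just before it is first overwritten. The sketch below illustrates that idea only; it is not the ESS implementation, and all names in it are hypothetical:

```python
# Conceptual copy-on-write model of a point-in-time (time-zero) copy.

class PointInTimeCopy:
    def __init__(self, source):
        self.source = source      # live dict of block -> data
        self.saved = {}           # blocks preserved as of time zero

    def write_source(self, addr, data):
        if addr not in self.saved:                    # first overwrite
            self.saved[addr] = self.source.get(addr)  # preserve T0 data
        self.source[addr] = data

    def read_target(self, addr):
        # the target always shows the time-zero view
        return self.saved.get(addr, self.source.get(addr))

src = {0: "A", 1: "B"}
flash = PointInTimeCopy(src)         # instant "copy" -- nothing moved yet
flash.write_source(0, "A2")          # source changes after time zero
print(flash.read_target(0), src[0])  # A A2
```

The source keeps changing while the target continues to show the original data, which is why the copy can be used immediately for backup or data mining with minimal application downtime.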
Figure 8-1 illustrates the tasks you can perform from the Manage Replication group, which represents TotalStorage Productivity Center for Replication:
Create and manage groups, which are collections of volumes grouped together so that they can be managed concurrently.
Check the status of paths between storage subsystems, which are required for remote copy functionality.
Create and manage pools, which are collections of target volumes.
Run the wizard for creating a session: select the copy type, select the source group, select the target pool, then save the session or start a replication session.
Monitor, terminate, or suspend running sessions.
A user can also perform these tasks with the TotalStorage Productivity Center for Replication command-line interface, which is described in 8.3, Using Command Line Interface (CLI) for replication on page 407.
A Replication Manager copy session manages the relationships between source and target volume pairs or source volume groups, and among target pools. The Replication Manager Sessions panel shows sessions and their associated status. The status indicates whether the volume is a source, target, or both, and shows the copy mode of the volume. You can also use this panel to assess whether current replication activities are proceeding normally or abnormally. When you are creating a replication session, you can select source and target volume pairs or volume groups, then establish a continuous synchronous remote copy (remote copy) or point-in-time copy (flash copy) relationship between them. The Sessions panel includes the following options:
Create - Invokes the Create Session wizard, which you can use to create copy relationships for a new session.
Delete - Deletes an existing session.
Flash - Starts a created or terminated session (for Point-in-Time Copy only).
Start - Starts a created, suspended or terminated session (for Remote Copy only).
Properties - Displays the Session Properties panel for an existing session.
Suspend (consistent) - Suspends an existing session, which results in a consistent target copy if there are no errors.
Suspend (immediate) - Stops an existing session with no guarantee of consistency.
Terminate - Stops an existing session and withdraws the relationships.
Figure 8-2 Relationship of a group, pool and session
Our example session shows that S1 is associated with T1, and similarly S2 with T2. The T1 and T2 volumes are now persistently bound to the relationship, whereas T3 and T4 are still available for use. TotalStorage Productivity Center for Replication can automatically create the source to target relationship on your behalf. Once created, these volumes are part of a session, or consistency group. This means that any error on any of the volumes in this session could trigger a suspend across all the volumes to ensure data consistency. Events such as loss of access to a source subsystem or the loss of the PPRC links are examples of conditions that could trigger a freeze event.
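The consistency-group behavior just described, where an error on any volume suspends every pair, can be sketched as follows. This is a hypothetical model for illustration, not product code:

```python
# An error on any pair in a session (for example, loss of a PPRC link)
# suspends every pair in the session, so the targets stay mutually
# consistent instead of drifting apart.

class Pair:
    def __init__(self, source, target):
        self.source, self.target = source, target
        self.state = "shadowing"

class Session:
    def __init__(self, pairs):
        self.pairs = pairs

    def report_error(self, failed_pair):
        # freeze: suspend ALL pairs, not only the failing one
        for p in self.pairs:
            p.state = "suspended"

sess = Session([Pair("S1", "T1"), Pair("S2", "T2")])
sess.report_error(sess.pairs[0])
print([(p.source, p.state) for p in sess.pairs])
# [('S1', 'suspended'), ('S2', 'suspended')]
```

Suspending everything at once is what makes the set of target volumes a usable recovery point: no target is ahead of any other.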
Sequences will be further utilized in subsequent releases of TotalStorage Productivity Center as more complex copy types are supported.
Create a Session
Manage Replication
The ESS Copy Services servers are defined to the CIMOM using the addserver command. Each ESS cluster which acts as a copy services server must be defined to the ESS CIMOM. Refer to Register ESS server for Copy services on page 141. Verify that the ESSs you will use are at the required LIC level. TotalStorage Productivity Center for Replication V2.1 requires LIC level 2.4.1 or above for the ESS 750 and ESS 800; ESS models F10 and F20 require LIC 2.3.256 or above. The paths between the ESSs you want to replicate are defined using the ESS Specialist.
Start a Session
Create a Group
4. Click Create. The Create Group wizard opens (see Figure 8-6 on page 364).
5. Click the device shown in the Device Components pane and select a logical storage subsystem (LSS). Note: Device Components shown in the group window do not use the same names defined in the Group Contents pane of the IBM Director console (see Figure 8-1 on page 358). The Device Components pane uses the format device_type.serial_number. In our example, Device Component ESS.2105-16603 in Figure 8-6 indicates ESS 2105 F20 16603 in Figure 8-1 on page 358.
6. Select one or more volumes (press Ctrl and click for multiple volume selection) from the Available Volumes pane of the Create group pane, then click Add (see Figure 8-6 on page 364). You can also click Select all if you want to add all available volumes to a group. In our example we chose two volumes from the ESS F20 (16603) and two volumes from the ESS 800 (22513).
Note: Although you can only select volumes from one LSS at a time, you can select different LSSes within the same Create Group session. As you select each LSS, the Available volumes pane updates the list of volumes that are available for the selected device.
7. If you want to remove a volume from the Selected volumes panel, select it, and then click Remove.
8. Click Next. The Save group window opens (see Figure 8-7 on page 365).
9. Enter a name for the new group in the Name field. The name is required, must not exceed 250 characters, and may not contain special characters such as spaces. 10.Enter a description for the new group in the Description field. The description is optional and can be 0 - 250 characters. 11.Click Finish to save the new group and close the wizard.
Result
The new group appears in the Groups window (see Figure 8-8). In our example we created two groups which will be used for Point-in-time copy and Remote Copy.
6. To change volumes which belong to the group, click Update. The Group properties window with volumes opens (similar to the one shown in Figure 8-10 on page 367).
7. To add volumes to the group: select one or more volumes (using Ctrl) in the Available volumes panel, then click Add. 8. To remove volumes from the group: select one or more volumes (using Ctrl) in the Selected volumes panel, then click Remove. Attention: Check whether existing defined sessions use volumes which you want to remove from the group you are updating.
2. Click Manage Replication. 3. Click Groups. The Groups panel opens. 4. In the Groups table, select the group that you want to view (see Figure 8-8 on page 365). 5. Click Properties. The Properties panel opens for the selected group. You can view the following information: the group name; the description of the group; and the table of the volumes that are managed by the group, which shows the volume ID, device (for example ESS.2105-16603), volume location (logical storage subsystem), volume type (FB for open systems), and volume size.
4. Click Delete. A window opens asking to verify the delete request (see Figure 8-12).
5. Click Yes to delete the group. Alternatively, click No to cancel the delete.
4. Click Create. The Create Pool Wizard opens (see Figure 8-14 on page 370).
5. Click the device shown in the Device Components pane and select a logical storage subsystem (LSS). Note: Device Components shown in the Group window do not use the same names defined in the Group Contents panel in the IBM Director Console (see Figure 8-1 on page 358). The Device Components pane uses the format device_type.serial_number. In our example, Device Component ESS.2105-16603 in Figure 8-6 indicates ESS 2105 F20 16603 in Figure 8-1 on page 358.
6. Select one or more volumes (press Ctrl and click for multiple selection) in the Available volumes pane and click Add. You can also click Select all if you want to add all available volumes to a pool. In our example we chose two volumes from the ESS F20 (16603) and two volumes from the ESS 800 (22513). Important: The size of the source and target volume of a copy relationship must be equal.
7. If you want to remove a volume from the Selected volumes panel, select it, and then click Remove.
8. Click Next. The Save pool window opens (see Figure 8-15 on page 371). 9. Enter a name (required), description (optional) and location (optional). Note: We recommend you enter a Location name, which helps in the automatic allocation of target volumes when creating a session. 10.Click Finish to save the new pool.
Managing Disk Subsystems using IBM TotalStorage Productivity Center
Note: You do not have to use all volumes of a pool when you create a session. Additionally, a pool, and even the same volume from a pool, can be defined as a target for multiple sessions.
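The automatic allocation mentioned in the notes above pairs each source volume with an unused pool volume of equal size (per the Important note in the create-pool steps). The following is a hypothetical sketch of that matching logic; the function and data shapes are assumptions for illustration, not the Replication Manager algorithm:

```python
# Each source volume is paired with an unused pool volume of equal
# size. Sources that cannot be matched are left unapproved so they
# can be resolved manually in the Copyset panel.

def allocate_targets(sources, pool):
    """sources: {volume_id: size}; pool: list of (volume_id, size)."""
    free = list(pool)
    mapping = {}
    for src_id, size in sources.items():
        match = next((t for t in free if t[1] == size), None)
        if match:
            free.remove(match)       # a target can back only one source
            mapping[src_id] = match[0]
        else:
            mapping[src_id] = None   # invalid copyset: approve manually
    return mapping

print(allocate_targets({"1300": 8, "1301": 16},
                       [("1304", 8), ("1305", 16)]))
# {'1300': '1304', '1301': '1305'}
```

A None entry corresponds to the "non approved copyset" situation covered later in the verification section.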
5. You can change the text in the Description panel and the Location. Attention: Changing the Location name can destroy a session which uses the pool you are modifying. 6. To change volumes which belong to the pool, click Update. The Pool properties window with volumes opens (similar to the one shown in Figure 8-14 on page 370). 7. To add volumes to the pool: select one or more volumes (using Ctrl) in the Available volumes panel.
Click Add. 8. To remove volumes from the pool: select one or more volumes (using Ctrl) in the Selected volumes panel, then click Remove. Attention: Check whether any defined sessions use volumes which you want to remove from the pool you are modifying. 9. Click OK to commit the changes and close the window, or click Cancel if you want to cancel the modifications.
4. Click Delete. A window with the message Are you sure you want to delete pool pool_name? opens as shown in Figure 8-19 on page 374.
3. Double-click Groups. The Groups window opens (see Figure 8-8 on page 365). 4. Select the group which you want to copy and click Replicate. The Create Session wizard opens for the group you chose (see Figure 8-22).
Or:
1. In the IBM Director Console Tasks panel, expand the Multiple Device Manager tab. 2. Click Manage Replication. 3. Double-click Sessions. The Session window opens (see Figure 8-21).
4. Select Create session action. The Create session window opens (see Figure 8-22). Choose Point-in-Time Copy and click Next.
Note: You can define another session which uses the same group. 5. The Choose source group window opens (see Figure 8-23 on page 377). Choose the Group name which you want to copy and click Next. If you ran the wizard from the Groups window, you see only the Group which you selected before.
7. In the Location filter field enter the name of the location of the target pool. You can enter an asterisk (*) as a wildcard for the first or last character of the filter. 8. Click Apply to see volumes of all locations which meet the criteria. 9. Select the All listed locations radio button if you want to use volumes from more than one location, or select the Select single location radio button, then select the location from the Location pane and click Next. Note: We recommend you enter the entire location name in the Location filter field instead of using wildcards. Remember, the location name is case sensitive.
10.Enter the session Name and Description in the Create session - Set session settings panel (see Figure 8-25 on page 378).
Select one of the following options in the Session approval pane:
Automatic - indicates that you allow Replication Manager to automatically create the relationship between source and target volumes.
Manual - indicates that you want to select volumes and approve relationships yourself.
11.Click Next. The Review session properties window opens. Verify your input and click Finish to submit (see Figure 8-26).
12.The session will be created and a new window opens with a message that the command completed successfully. If you get a message as shown in Figure 8-29 on page 380 refer to 8.2.12, Creating a session: Verifying source-target relationship on page 379. 13.In the Sessions pane you can see the newly created session (see Figure 8-27 on page 379).
If the session was created successfully, select Flash from the Session actions pull-down to run a Point-in-time copy session (see Figure 8-28). We recommend you verify the source-target volumes before running a session. To verify relationships refer to 8.2.12, Creating a session: Verifying source-target relationship on page 379.
14.Now you can see in the ESS Specialist interface that FlashCopy is running as shown in Figure 8-38 on page 384. In our example there are two pairs of FlashCopy on two different ESS devices running in the same session.
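The Location filter used when choosing a target pool in the wizard above accepts an asterisk only as the first or last character, and matching is case sensitive. A hypothetical helper capturing those semantics (not the product's code):

```python
# Location filter semantics as described in the wizard steps: a
# leading or trailing asterisk is a wildcard; otherwise the match is
# an exact, case-sensitive comparison.

def location_matches(filter_text, location):
    if filter_text.startswith("*"):
        return location.endswith(filter_text[1:])
    if filter_text.endswith("*"):
        return location.startswith(filter_text[:-1])
    return location == filter_text   # no wildcard: exact match

print(location_matches("Pough*", "Poughkeepsie"))  # True
print(location_matches("pough*", "Poughkeepsie"))  # False: case sensitive
```

The case-sensitive comparison is why the text recommends typing the full location name exactly rather than relying on wildcards.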
Perform the following steps to verify the source-target volume pairs. 1. If you got a message that the create command completed with errors, click Details (see Figure 8-29). The window with messages opens, and you can see the detailed messages (see Figure 8-30).
Close both windows. You can see the created session in the Sessions pane. In our example in Figure 8-31 on page 381, we created a session named FC_F20_800.
2. Click the session you want to verify in the Sessions panel. 3. Click the Please select drop-down and choose Properties. The Session properties window opens. Click the Copyset tab (see Figure 8-32). 4. The number under Non approved copysets indicates how many relationships could not be created automatically when the session was created. In our example, we chose the Automatic session approval method; two pairs were set automatically, but the next two were not approved (see Figure 8-32). Click Copyset details. The Copyset window opens as shown in Figure 8-33 on page 382.
5. Select the Invalid Copyset to see details of the last result and click Modify copyset target. In our example two pairs are approved and two are not valid and should be modified as shown in Figure 8-33 on page 382.
Tip: The Copyset ID is related to the source volume of the copy pair. 6. The Choose Target window opens. Select the target volume to create a copy pair with the source volume and click Next. In our example (see Figure 8-34) the source volume is 1300 and we have two available targets, 1304 and 1305.
7. The Choose Target Verify window opens. If it shows the correct target volume for modifying the copyset, click Finish to approve. 8. Perform Steps 5 - 7 for all copysets which are invalid, meaning that the source-target pairs were not set and approved. 9. If all copysets are correct you will see the status as shown in Figure 8-35 on page 383. Select the modified copyset to verify that the last result says the relationship was successfully created.
10.Go back to the Session properties window, Copyset tab (see Figure 8-36), and click Refresh. If you modified all copysets correctly you should get the result shown in Figure 8-36.
11.Go back to the main Sessions window. Select Session pull-down actions and click Flash to run a Point-in-time copy session as shown in Figure 8-37 on page 384. The Confirmation window opens, click Yes to run or No to cancel.
12.You can see in the ESS Specialist interface that FlashCopy is running as shown in Figure 8-38. In our example there are two pairs of FlashCopy on two different ESS devices running in the same session.
Figure 8-38 FlashCopy pairs created and run by TotalStorage Productivity Center for Replication
Figure 8-39 Create session window with Continuous Synchronous Remote Copy selection
Or:
1. In the IBM Director Task panel, click Multiple Device Manager. 2. Click Manage Replication. 3. Double-click Sessions. The Session window opens (see Figure 8-40). 4. Select the Create session action. The Create session window opens (see Figure 8-39).
5. Choose Continuous Synchronous Remote Copy and click Next. The Choose source group window opens. Choose the Group name which you want to copy and click Next (see Figure 8-41 on page 386). If you ran the wizard from the Groups window, you see only the Group which you selected before. The Choose a target pool window opens as shown in Figure 8-42 on page 386.
Chapter 8. TotalStorage Productivity Center for Replication use
6. In the Location filter field enter the name of the location of the target pool. You can enter an asterisk (*) as a wildcard for the first or last character of the filter. 7. Click Apply to see volumes of all locations which meet the criteria. 8. Select All listed locations if you want to use volumes from more than one location, or select a single location, then select the correct location and click Next. Note: Remember, the location name is case sensitive.
9. The Set session settings window opens. Enter the name and description (see Figure 8-43 on page 387). 10.Select one of the following options in the Session approval panel: Automatic - indicates that you allow Replication Manager to automatically create a relationship between the source and target volume.
Manual - indicates that you want to select volumes and approve the relationship.
11.Click Next, the Review session window opens. Validate the information and click Finish to submit (see Figure 8-44).
12.The session will be created and a new window opens with a message that the command completed successfully as shown in Figure 8-45 on page 388. If you get a message as shown in Figure 8-29 on page 380, read 8.2.12, Creating a session: Verifying source-target relationship on page 379.
13.In the Sessions window you can see the newly created session (see Figure 8-46).
Figure 8-46 Sessions window with created Continuous Synchronous Remote Copy session.
14.If the session was created successfully, select the session you want to run, select Session actions, and click Start to run a Remote Copy session (see Figure 8-47). However, we recommend you verify the source-target volumes before running a session. To verify relationships, read 8.2.12, Creating a session: Verifying source-target relationship on page 379.
15.You can see in the ESS Specialist interface that a Remote Copy is running as shown in Figure 8-48 on page 389. In our example there are two pairs of Remote Copy between volumes on two different ESSs running in the same session.
Figure 8-48 Remote copy pairs created and run by TotalStorage Productivity Center for Replication
Sessions window
When you create, verify and run a session you can monitor its status in the main Session window, which gives you basic information about a given session. Each session can include many pairs of volumes which are in copy relationships and create a consistent group. If the status of a given session is not optimal, you need to review the properties for a given session to check if there is a general problem or if it is related to a certain pair of volumes.
Perform the following steps to check the basic status of a session: 1. Click Multiple Device Manager in the IBM Director Task panel. 2. Click Manage Replication. 3. Click Sessions. The Sessions window opens (also called the main Session window). There are eight fields in the Sessions window:
a. Name - the name of the session.
b. Status - can be one of the following: Normal (green icon): Point-in-Time Copy was invoked successfully. Medium (yellow icon): the session is not started or was terminated. Severe (red icon): an error occurred.
c. State - Defined: the session is created and not started, or was terminated. Active: the session is running.
d. Group - the name of the Group of volumes which are the sources of the copy pairs.
e. Copy Type - Point-in-Time Copy or Continuous Synchronous Remote Copy.
f. Recoverable - indicates if any sequences in the session are considered recoverable.
g. Shadowing - indicates if any part of the session is shadowing data.
h. Volume Exceptions - shows the total number of volumes which are in an exception state.
Before starting a created session, you should see the following field values as shown in Figure 8-49:
Status - Medium
State - Defined
Recoverable - No
Shadowing - No
Volume Exceptions - No
When you have successfully flashed a new or terminated session you will see the values for the following parameters shown in Figure 8-58 on page 397:
Status - Normal (green) State - Active (changed from Defined) Recoverable - Yes
Properties window
The Sessions window shows the status of a session as a group of volume pairs. If you want to see details, perform the following steps to use the Properties window: 1. In the IBM Director Task panel, click Multiple Device Manager. 2. Click Manage Replication. 3. Double-click Sessions. The Session window opens. 4. Select the session which you want to manage, then select the Properties session action. The Properties window opens. 5. There are three tabs in the Properties window:
a. General - shows general information about a session. Compared to the Session window you get additional information such as the Description, the number of volumes, and the approval status. The only parameter you can change is the approval status, which can be automatic or manual.
b. Copyset - lets you check whether all pairs are valid and approved. This panel is mostly used during verification of a session; see Creating a session: Verifying source-target relationship on page 379.
c. Sequence - this tab is mostly used to see detailed information about the status of a session, especially when used together with the Pairs window.
The General tab shows basic information, like the Session window. For example, when you have flashed a session, you should see the following values (see Figure 8-51 on page 392):
Copy type - Point-in-Time Copy
State - Active
Status - Normal
Group - the name of the group used for this session
Source Volumes - the number of volumes in the group
Approval status - Automatic or Manual
The Copyset tab generally does not change while managing a session unless some error occurs. Figure 8-52 shows the status in our environment.
Figure 8-52 Copyset tab for correctly defined session with 4 pairs of volumes
To see more information about copysets, especially if some of them are invalid, click Copyset details. The Copyset window opens, displaying the table of copysets in the session. You can check for problems in the following tables: the Copyset table indicates whether the copyset is invalid; the Last Result column displays the latest message issued for a copyset and indicates why it is invalid.
The Last Result column of the Copyset Relationships table displays the last message issued for a copyset pair. If a message ends in E or W, the pair is considered an exception pair. For more details refer to Creating a session: Verifying source-target relationship on page 379. The Sequence tab is the most useful when you manage replication sessions, especially during synchronization: you can see which volume pairs are synchronized and the status of the others. In the Sequence panel the following columns are available:
Recoverable - true or false. Indicates if all pairs in a sequence are recoverable.
Exception - yes or no. Indicates if at least one pair is in an exception state.
Shadowing - yes or no. Indicates if all pairs are in a shadowing state.
Exception volumes - shows the number of volumes which are in an exception state.
Recoverable pairs - shows the number of volume pairs which are recoverable.
Shadowing pairs - shows the number of volume pairs which are in a shadowing state.
Total pairs - shows the total number of pairs in a sequence.
Recoverable timestamp - shows the time when a session was suspended.
The following is an example from our environment of the different states of a replication session. After you have created or terminated a session, you will see the Sequence tab as shown in Figure 8-53.
Figure 8-53 Sequence tab in Session properties window for defined Point-in-Time Copy session
When the session is created or terminated, it is in the defined state. You can see in the Sequences pane:
Name - Local point in time copy sequence
Recoverable - false - it is not recoverable
Exception - No - there are no exceptions
Shadowing - No - the sequence is not shadowing
Exception volumes - 0 - no volume is in an exception state
Recoverable pairs - 0 - no pair is recoverable
Shadowing pairs - 0 - no pair is shadowing
Total pairs - 4 (in our example) - the total number of pairs is four
Recoverable timestamp - n/a - not available
In the Sequence states panel you see that four pairs are in the Defined state. To see more details, select the sequence in the Sequences panel and click Pairs. A new window opens as shown in Figure 8-54.
The Sequence Flashed Target pairs window contains the following information:
Source Volume - the source volume of a pair; includes the type and number of the ESS and the volume number
Target Volume - the target volume of a pair
State - Defined - means the session is created or terminated but not running
Recoverable - No - indicates whether a pair is flashed
Shadowing - No
New - Yes - indicates it is a new session
Timestamp
Last result - the code of the last result; you can see a description in the Last result panel if you click a pair in the Pairs panel
When you flash a new or terminated session you will see the Sequence tab as shown in Figure 8-65 on page 402.
Notice the values of the following columns: Recoverable - true; Recoverable timestamp - the time when the Point-in-Time Copy session was successfully flashed. The Sequence Flashed Target pairs window shown in Figure 8-56 shows the successfully flashed volumes.
delete a defined session
properties (view and change the properties of a session)
start a session
suspend an already started and synchronized session
terminate a started session
Using any copy services requires that you create an accurate plan before running them and a detailed plan for managing the copy services. Any mistake can cause loss of data, for example when you use the wrong volume as the target of a copy session. Therefore we recommend you verify all pairs in a session before starting the copy process. Refer to Creating a session: Verifying source-target relationship on page 379.
Sessions window
When you create, verify and run a session you can monitor its status in the main Session window, which gives you basic information about a given session. Each session can include many pairs of volumes which are in copy relationships and create a consistent group. If the status of a given session is not optimal, you need to review the properties for the given session to check whether there is a general problem or whether it is related to a certain pair of volumes. Perform the following steps to check the basic status of a session: 1. Click Multiple Device Manager in the IBM Director Task panel. 2. Click Manage Replication. 3. Click Sessions. The Sessions window opens (also called the main Session window). 4. There are eight fields in the Sessions window:
a. Name - the name of the session.
b. Status - can be one of the following: Normal (green icon): all source volumes are replicating in both directions and the copy is active; all volumes were established successfully and are synchronized. Medium (yellow icon): the session is not started, was terminated, or is synchronizing but at least one volume is not yet synchronized with its source. Severe (red icon): an error caused a hardware device to respond at multiple addresses or, for a fibre-channel connection, a volume failed to be established.
c. State - Defined: the session is created and not started, or was terminated. Active: the session is running.
d. Group - the name of the Group of volumes which are the sources of the copy pairs.
e. Copy Type - can be Point-in-Time Copy or Continuous Synchronous Remote Copy (as described in this chapter).
f. Recoverable - indicates if any sequences in the session are considered recoverable.
g. Shadowing - indicates if any part of the session is shadowing data.
h. Volume Exceptions - shows the total number of volumes which are in an exception state.
After you have created a session, before starting it you should see the following values for several fields (see Figure 8-57 on page 397):
Status - Medium
When you start a new session or resume a suspended session you will see the following values (see Figure 8-58):
Status - Medium (still not optimal)
State - Active (changed from Defined)
Recoverable - No
Shadowing - Yes (changed)
Volume Exceptions - No
If all pairs in a session are synchronized you should see the following values (see Figure 8-59 on page 398):
Status - Normal (changed; now it is the optimal state)
State - Active
Recoverable - Yes (changed; now you can recover data in case of a disaster)
If a session is suspended you should see the following values (see Figure 8-60):
Status - Normal
State - Active
Recoverable - Yes
Shadowing - No
Volume Exceptions - No
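The Status/State progression walked through above (Defined, started and shadowing, synchronized, then suspended) can be summarized as a small state machine. The field values follow Figures 8-57 through 8-60 in the text; the class itself is only an illustration, not product code:

```python
# State machine mirroring the described lifecycle of a Continuous
# Synchronous Remote Copy session.

class RemoteCopySession:
    def __init__(self):                       # created: Defined/Medium
        self.state, self.status = "Defined", "Medium"
        self.recoverable, self.shadowing = False, False

    def start(self):                          # Start: copying begins
        self.state, self.shadowing = "Active", True

    def synchronized(self):                   # all pairs are in sync
        self.status, self.recoverable = "Normal", True

    def suspend(self):                        # consistent, frozen copy
        self.shadowing = False

s = RemoteCopySession()
s.start()
s.synchronized()
s.suspend()
print(s.state, s.status, s.recoverable, s.shadowing)
# Active Normal True False
```

Reading the Sessions window columns against such a lifecycle makes it easier to tell whether a yellow (Medium) status means "still synchronizing" or "never started".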
Properties window
The Session window shows the status of a session as a group of volume pairs. If you want to see details, perform the following steps to use the Properties window:
1. In the IBM Director Task panel, click Multiple Device Manager.
2. Click Manage Replication.
3. Double-click Sessions. The Session window opens.
4. Select the session which you want to manage and select the Properties session action. The Properties window opens.
Managing Disk Subsystems using IBM TotalStorage Productivity Center
5. There are three tabs in the Properties window:
General - shows general information about a session. Compared to the Session window, you get additional information such as the description, the number of volumes, and the approval status. The only parameter you can change is the approval status, which can be automatic or manual.
Copyset - lets you check if all pairs are valid and approved. This panel is mostly used during verification of a session; see Creating a session: Verifying source-target relationship on page 379.
Sequence - this tab is mostly used to see detailed information about the status of a session, especially together with the Pairs window.
The General tab shows basic information, like the Session window. For example, when you create a session, before starting it you should see the values in Figure 8-61:
Copy type - Continuous Synchronous Remote Copy
State - Defined
Status - Medium
Group - name of the group used for this session
Source Volumes - number of volumes in the group
Approval status - Automatic or Manual
The Copyset tab information generally does not change while managing a session unless some error occurs. You should see the following status as shown in Figure 8-62 on page 400.
Figure 8-62 Copyset tab in Properties window for correctly defined session
To see more information about copysets, especially if some of them are invalid, click Copyset details. The Copyset window opens, displaying the table of copy sets in the session. You can check for problems in the following tables:
The Copyset table indicates if a copy set is invalid. The Last Result column displays the latest message issued for a copyset and indicates why it is invalid.
The Last Result column of the Copyset Relationships table displays the last message issued for a copyset pair. If a message ends in E or W, the pair is considered an exception pair.
For additional details refer to Creating a session: Verifying source-target relationship on page 379.
The Sequence tab is most useful when you manage replication sessions; especially during synchronization, you can see which volume pairs are synchronized and the status of the others. The following columns are available in the Sequence panel:
Recoverable - true or false. Indicates if all pairs in a sequence are recoverable.
Exception - yes or no. Indicates if at least one pair is in an exception state.
Shadowing - yes or no. Indicates if all pairs are in a shadowing state.
Exception volumes - shows the number of volumes which are in an exception state.
Recoverable pairs - shows the number of volume pairs which are recoverable.
Shadowing pairs - shows the number of volume pairs which are in a shadowing state.
Total pairs - shows the total number of pairs in a sequence.
Recoverable timestamp - shows the time when a session was suspended.
The following is an example from our environment showing different states of a replication session. If you created a session or terminated a running session, you will see the Sequence tab as shown in Figure 8-63 on page 401.
Figure 8-63 Sequence tab in Session properties window for defined session
When the session is created or terminated, it is in a defined state. In the Sequences panel you can see the following values:
Recoverable - false - it is not recoverable
Exception - No - there are no exceptions
Shadowing - No - the sequence is not shadowing
Exception volumes - 0 - no volume is in an exception state
Recoverable pairs - 0 - no pair is recoverable
Shadowing pairs - 0 - no pair is shadowing
Total pairs - 2 - the total number of pairs is two
Recoverable timestamp - n/a - not available
In the Sequence states panel you see that two pairs are in the Defined state. For more details, select the sequence in the Sequences panel and click Pairs. The Sequence Remote Target pairs window opens as shown in Figure 8-64.
The Sequence Remote Target pairs window contains the following columns:
Source Volume - the source volume of a pair; includes the type and number of the ESS and the volume number.
Target Volume - the target volume of a pair.
State - Defined means a session is created or terminated but not running.
Recoverable - indicates if a pair is synchronized (No in this case).
Shadowing - No.
New - Yes indicates it is a new session.
Timestamp
Last result - the return code of the last result; you can see a description in the Last result panel if you click one pair in the Pairs panel.
When you start a new session or resume a suspended session, you will see the Sequence tab as shown in Figure 8-65.
Figure 8-65 Sequence tab in Session properties window for just started session
The following columns have changed their state:
Shadowing - yes
Shadowing pairs - 2
The Sequence Remote Target pairs window changes similarly, as shown in Figure 8-66 on page 403.
If one volume pair is synchronized but another is still synchronizing, you will see the status as shown in Figure 8-67.
Figure 8-67 Sequence tab in a Session properties window for partially synchronized session
One pair is in the Duplex state, which means it is synchronized, while the other pair is still in the Synchronizing state. Notice that the Recoverable state is still false, because not all pairs are synchronized. To see which pair is in the full duplex state, click Pairs (see Figure 8-68 on page 404).
In our example, one pair is in Duplex state (volume 1703 on ESS F20 16603 is synchronized with volume 1301 on ESS 800 22513) while the second pair is still synchronizing. When all pairs in a session are synchronized, you will see the status as shown in Figure 8-69.
Notice that the Recoverable status is true, which means that all pairs are in the Duplex (synchronized) state. The same status is shown for each pair separately in the Sequence Remote Target pairs window, as shown in Figure 8-70 on page 405.
When a session is fully synchronized, you can suspend it to have a consistent state of the data on the remote site. If you successfully suspended a session, you will see the Sequence tab information shown in Figure 8-71.
Figure 8-71 Sequence tab in Session properties window in successfully suspended state
In the Sequence tab panel you can see:
Recoverable - true - it is recoverable
Exception - No - there are no exceptions
Shadowing - No - the sequence is not shadowing
Exception volumes - 0 - no volume is in an exception state
Recoverable pairs - 2 - two pairs are recoverable
Shadowing pairs - 0 - no pair is shadowing
Total pairs - 2 - the total number of pairs is two
Recoverable timestamp - the time when the session was successfully suspended
In the Sequence states panel you see that two pairs are in the Suspended state. To see more details, select the sequence in the Sequences panel and click Pairs. A new window opens as shown in Figure 8-72 on page 406.
For a successfully suspended session, you should see the following values in the Pairs window:
State - Suspended
Recoverable - Yes
Shadowing - No
New - No
Important: Check that a session is successfully synchronized before you invoke the suspend command. Otherwise you will get invalid and inconsistent data on the remote site.
If you suspended a session which was not synchronized, you will see the information in the Sequence tab shown in Figure 8-73.
In the Sequences panel you can see:
Recoverable - false - it is not recoverable
Exception - No - there are no exceptions
Shadowing - No - the sequence is not shadowing
Exception volumes - 0 - no volume is in an exception state
Recoverable pairs - 0 - no pair is recoverable
Shadowing pairs - 0 - no pair is shadowing
Total pairs - 2 - the total number of pairs is two
Recoverable timestamp - n/a - recovery is not possible, so there is no time information
Notice that the state is Suspended but the session is not recoverable. In the Sequence states panel you see that two pairs are in the Suspended state but the Recoverable value is false. To see more details, select the sequence in the Sequences panel and click Pairs. A new window opens as shown in Figure 8-74.
When a session is suspended in an inconsistent state, you will see the following values in the Pairs panel:
State - Suspended
Recoverable - No
Shadowing - No
New - Yes
repcli utility
To use the CLI, you have to run the repcli utility. The default folder location of the CLI for Replication Manager is c:\Program Files\IBM\mdm\rm\rmcli. The utility can run in interactive mode, run a single command, or run a set of commands from a script.
Syntax of the repcli command:
repcli [ { -ver|-overview|-script file_name|command | - } ] [ { -help|h|-? } ]
where
-ver Displays the current version.
-overview Displays overview information about the repcli utility, including command modes, standard command and listing parameters, syntax diagram conventions, and user assistance.
-script file_name Runs the set of command strings in the specified file outside of a repcli session. You must specify a file name. The format options specified using the setoutput command apply to all commands in the script. Output from successful commands routes to stdout; output from unsuccessful commands routes to stderr. If an error occurs while one of the commands in the script is running, the script exits at the point of failure and returns to the system prompt.
Example:
repcli -script start_backup.scr
command_string Runs the specified command string outside of a repcli session. Example:
repcli lssess
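The three invocation modes above can be summarized in a small helper. This is an illustrative sketch only; the build_repcli_args function and the script name are hypothetical, not part of the product:

```python
# Hypothetical helper that builds repcli argument lists for the three
# documented modes: script file, single command, and interactive.

def build_repcli_args(command=None, script=None):
    """Return the argument list for a repcli invocation."""
    args = ["repcli"]
    if script is not None:
        # -script runs a set of command strings from a file
        args += ["-script", script]
    elif command is not None:
        # a single command string runs outside a repcli session
        args += command.split()
    # with no arguments, repcli starts in interactive mode
    return args

print(build_repcli_args(script="start_backup.scr"))
print(build_repcli_args(command="lssess"))
print(build_repcli_args())
```

Such a helper could feed, for example, subprocess-based automation around the CLI.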
Other commands provided by the repcli utility include lscpset, lsdev, mkgrp, mkpath, setattribute, setoutput, stopsess, and suspendsess.
In this section we focus on the commands most often used for managing replication sessions:
flashsess - starts a Point-in-Time Copy session
lspair - shows information about a copy pair for a session
lsseq - shows information about a sequence for a session
lssess - shows details about all or filtered sessions
setoutput - changes the default format for output
showsess - shows details about a certain session
startsess - starts a Continuous Synchronous Remote Copy session
stopflashsess - terminates a Point-in-Time Copy session
stopsess - terminates a Continuous Synchronous Remote Copy session
suspendsess - suspends a Continuous Synchronous Remote Copy session
lssess command
lssess [ { -help|-h|-? } ] [ { -l (long)|-s (short) } ] [-fmt default|xml|delim|stanza] [-p on|off] [-delim char] [-hdr on|off] [-r #] [-v on|off] [-cptype flash|pprc] [-state defined|active] [-status norm|warn|sev|unknown] [-recov yes|no] [-shadow yes|no] [-err yes|no] [session_name ... | -]
-s An optional parameter that displays only the session name.
-l Displays more details: the default output plus approval type, pool criteria, copysets, non-approved, invalid, and description.
-cptype copytype An optional parameter that displays only the sessions with the copy type specified.
-state defined | active Displays only the sessions that are in the state specified.
-status norm | warn | sev Displays only the sessions that have the status specified.
-recov yes | no
An optional parameter that is set to yes or no to indicate whether the session can be considered recoverable, based on whether any sequences in the session can be considered recoverable.
-shadow yes | no An optional parameter that indicates whether any part of the session is shadowing data.
-err yes | no An optional parameter that shows sessions that have errors or no errors.
session_name [,...] | - An optional parameter that displays only the sessions with the session name specified. Separate multiple session names with a comma between each name. If no session name is specified, all sessions are displayed unless another filter is used.
In our example you should see the following results for the created sessions using the lssess command, as shown in Example 8-1.
Example 8-1 lssess - defined sessions
repcli> lssess
Name Status State Group Type Recover Shadow Err
=================================================================
PPRC_800_to_F20 warning Defined PPRC_src pprc No No No
FC_F20_800 warning Defined FC_src flash No No No
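Because the default output is a whitespace-delimited table, a batch program can parse it line by line. A minimal sketch (the parse_lssess function is hypothetical), assuming no column value contains embedded blanks, as in Example 8-1:

```python
# Parse the default tabular lssess output into a list of dictionaries.
# The sample text is the output shown in Example 8-1.

OUTPUT = """\
Name Status State Group Type Recover Shadow Err
=================================================================
PPRC_800_to_F20 warning Defined PPRC_src pprc No No No
FC_F20_800 warning Defined FC_src flash No No No"""

def parse_lssess(text):
    lines = text.splitlines()
    header = lines[0].split()
    rows = []
    for line in lines[1:]:
        if line.startswith("="):      # skip the separator line
            continue
        rows.append(dict(zip(header, line.split())))
    return rows

sessions = parse_lssess(OUTPUT)
print(sessions[0]["Status"])   # warning
```

A script could loop over such rows and, for example, alert on any session whose Status is not normal.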
showsess command
You can see details about certain sessions using lssess with the -l parameter or the showsess command:
showsess session_name
This command shows the following information (like lssess -l):
Name - session name.
Copy type - Point-in-Time Copy or Continuous Synchronous Remote Copy.
State - Defined or Active.
Status - Unknown, Normal, Low, Medium, Severe, or Fatal.
Group - name of the group of source volumes.
Source volumes - shows the number of volumes in the group being replicated by this session.
Approval status - Automatic or Manual.
Copysets - shows the number of copysets that the session is managing.
Non-approved - indicates the number of copysets that have yet to be verified.
Invalid copysets - indicates the number of copysets that were determined to be invalid.
Seq - valid sequence names are Remote Target for remote copy and Flashed Target for point-in-time copy. Use quotes around the entire flag, for example "Flashed Target:location=RTP".
Pool Criteria - location exact name or filter.
Shadow - yes or no. Indicates if a session is shadowing data.
Recov - yes or no. Indicates if all pairs in a session are recoverable.
Approve - yes or no. Indicates if all copysets are approved.
Description - user-defined session description.
In Example 8-2 you can see the result of the showsess command for the defined sessions. You can compare the parameters to the information available in the graphical interface, as described in Managing a Continuous Synchronous Remote Copy on page 395.
Example 8-2 showsess - defined sessions
repcli> showsess PPRC_800_to_F20
Name PPRC_800_to_F20
Type pprc
State Defined
Status warning
Group PPRC_src
Source Volumes 2
Approval Status Automatic
Copysets 2
Non-approved 0
Invalid 0
Seq "Remote Target"
Pool Criteria F20
Shadow No
Recover No
Err No
Approve Yes
Description Remote copy of 2 volumes from ESS 800 to F20
AWN007080I Command completed successfully.
repcli> showsess FC_F20_800
Name FC_F20_800
Type flash
State Defined
Status warning
Group FC_src
Source Volumes 4
Approval Status Automatic
Copysets 4
Non-approved 0
Invalid 0
Seq "Flashed Target"
Pool Criteria P%
Shadow No
Recover No
Err No
Approve Yes
Description Point in time copy of 4 volumes, 2 on ESS F20 and 2 on ESS 800
AWN007080I Command completed successfully.
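The stanza output of showsess pairs a field name with its value on each row. A hypothetical parsing sketch; the field names come from the showsess description above, and multi-word names are matched before their shorter prefixes:

```python
# Parse stanza-style showsess output into a dictionary.
# Multi-word keys ("Approval Status", "Source Volumes") must be
# tried before single words so that prefixes do not misfire.

FIELDS = ["Approval Status", "Source Volumes", "Pool Criteria",
          "Non-approved", "Description", "Copysets", "Invalid",
          "Approve", "Recover", "Shadow", "Status", "Group",
          "State", "Name", "Type", "Seq", "Err"]

def parse_stanza(text):
    info = {}
    for line in text.splitlines():
        for field in FIELDS:
            if line.startswith(field):
                info[field] = line[len(field):].strip()
                break
    return info

# a fragment of the Example 8-2 output
sample = """Name PPRC_800_to_F20
Type pprc
State Defined
Status warning
Group PPRC_src
Source Volumes 2"""
print(parse_stanza(sample)["State"])   # Defined
```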
flashsess command
To run a created or terminated Point-in-Time Copy session, invoke the flashsess command:
flashsess [-quiet] session_name [. . .]
session_name Specifies the session name to be activated. Separate multiple session names with a white space between each name. Alternatively, use the dash (-) to specify that input for this parameter comes from an input stream (STDIN).
-quiet An optional parameter that turns off the confirmation prompt for this command.
Note: In a batch program, use the -quiet parameter where available; otherwise the program will wait for your confirmation.
Example 8-3 shows an example of the flashsess command.
Example 8-3 flashsess command
repcli> flashsess -quiet FC_F20_800
AWN007110I Command completed successfully.
To start a created, terminated, or suspended Continuous Synchronous Remote Copy session, invoke the startsess command as shown in Example 8-4:
Example 8-4 startsess command
repcli> startsess PPRC_800_to_F20
AWN007100I Command completed successfully.
Example 8-5 shows the status of the started sessions. The Point-in-Time Copy completed successfully, which is confirmed by the normal status and the yes value of the Recover parameter. However, the Continuous Synchronous Remote Copy session is running (Active state) but not yet synchronized, which shows in the Recover and Status parameters.
Example 8-5 lssess - started sessions
repcli> lssess
Name Status State Group Type Recover Shadow Err
================================================================
PPRC_800_to_F20 warning Active PPRC_src pprc No Yes No
FC_F20_800 normal Active FC_src flash Yes Yes No
lsseq command
You can use two additional commands, lsseq and lspair, to get more details about the current state of sessions.
lsseq [ { -l |-s } ] [-recov yes|no] [-shadow yes|no] [-err yes|no] session_name
-s An optional parameter that displays volumes only.
-l An optional parameter that displays all valid output. This is the default.
-recov yes | no An optional parameter that indicates whether any sequences in the session can be considered recoverable.
-shadow yes | no
An optional parameter that indicates whether or not the sequence is shadowing (copying) the data.
-err yes | no An optional parameter that shows sessions that have errors or no errors.
session_name Specifies the session name to query.
In Example 8-6 you can find the time when the Point-in-Time Copy was run in the Timestamp column. For the Continuous Synchronous Remote Copy session, one pair is synchronized, which shows in the Recov Pairs parameter. The Recov parameter will change to yes when all pairs are synchronized.
Example 8-6 lsseq - started sessions
repcli> lsseq FC_F20_800
Name Recov Err Shadow Err Vols Recov Pairs Shadow Pairs Total Pairs Recov Timestamp
=====================================================================================================
Flashed Target Yes No Yes 0 4 4 4 2005/04/12 16:34:00 PDT
repcli> lsseq PPRC_800_to_F20
Name Recov Err Shadow Err Vols Recov Pairs Shadow Pairs Total Pairs Recov Timestamp
============================================================================================
Remote Target No No Yes 0 1 2 2 n/a
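The rule described above — the Recov flag turns to yes only when every pair is recoverable — can be stated as a one-line predicate. A sketch with a hypothetical function name, using the pair counts from Example 8-6:

```python
# A sequence becomes recoverable only when every pair in it is
# recoverable (synchronized); an empty sequence is not recoverable.

def sequence_recoverable(recov_pairs, total_pairs):
    return total_pairs > 0 and recov_pairs == total_pairs

# Values from Example 8-6: Flashed Target has 4 of 4 recoverable
# pairs, Remote Target only 1 of 2.
print(sequence_recoverable(4, 4))  # True
print(sequence_recoverable(1, 2))  # False
```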
lspair command
You can use the lspair command to list the source and target of the copy service pairs and their status.
lspair [ { -l |-s } ] { -seq sequence_name|-cpset source_vol_id } [-state defined|active|duplex|suspended|synch|flashed] [-recov yes|no] [-shadow yes|no] [-new yes|no] [-err yes|no] session_name | -
-s An optional parameter that displays information about pairs only.
-l An optional parameter that displays the default output, including pairs.
-seq sequence_name Displays only pairs of the sequence name specified. Mutually exclusive with -cpset.
-cpset source_vol_id Specifies the source volume ID of the copy set on which you want a list of pairs. Mutually exclusive with -seq.
-state defined | active | duplex | suspended | synch | flashed An optional parameter that displays only pairs in the state specified.
-recov yes | no An optional parameter that displays only pairs in the corresponding recoverable state.
-shadow yes | no An optional parameter that displays only pairs that are in the shadowing state specified.
-new yes | no An optional parameter that displays only pairs that are in the new state specified.
-err yes | no An optional parameter that displays only pairs that are in the error state.
session_name The session name by which the pairs are identified.
In Example 8-7 you can see details about the volume pairs. For the Continuous Synchronous Remote Copy session, one pair of volumes is synchronized, which shows in the Duplex state, but the second one is still synchronizing.
Example 8-7 lspair - started sessions
repcli> lspair -seq 'Flashed Target' FC_F20_800
Source Target State Recov Shadow New Copyset Timestamp Last result
====================================================================================================================================
ESS:2105.16603:VOL:1702 ESS:2105.16603:VOL:1706 Flashed Yes Yes No ESS:2105.16603:VOL:1702 2005/04/12 16:34:00 PDT IWNR2016I
ESS:2105.16603:VOL:1703 ESS:2105.16603:VOL:1705 Flashed Yes Yes No ESS:2105.16603:VOL:1703 2005/04/12 16:34:00 PDT IWNR2016I
ESS:2105.22513:VOL:1300 ESS:2105.22513:VOL:1305 Flashed Yes Yes No ESS:2105.22513:VOL:1300 2005/04/12 16:34:00 PDT IWNR2016I
ESS:2105.22513:VOL:1301 ESS:2105.22513:VOL:1304 Flashed Yes Yes No ESS:2105.22513:VOL:1301 2005/04/12 16:34:00 PDT IWNR2016I
repcli> lspair -seq 'Remote Target' PPRC_800_to_F20
Source Target State Recov Shadow New Copyset Timestamp Last result
============================================================================================================================
ESS:2105.22513:VOL:1302 ESS:2105.16603:VOL:1707 Duplex Yes Yes No ESS:2105.22513:VOL:1302 n/a IWNR2011I
ESS:2105.22513:VOL:1303 ESS:2105.16603:VOL:1708 SYNCHRONIZING No Yes Yes ESS:2105.22513:VOL:1303 n/a IWNR2011I
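Each lspair row can be split on whitespace: the first seven columns never contain blanks, the timestamp may span several tokens ("2005/04/12 16:34:00 PDT") or be "n/a", and the final token is the message ID. A hypothetical parsing sketch using a row from Example 8-7:

```python
# Parse one lspair data row as printed in Example 8-7.

def parse_lspair_row(line):
    parts = line.split()
    return {
        "source": parts[0], "target": parts[1], "state": parts[2],
        "recov": parts[3], "shadow": parts[4], "new": parts[5],
        "copyset": parts[6],
        # everything between the copyset and the final message ID
        "timestamp": " ".join(parts[7:-1]),
        "last_result": parts[-1],
    }

row = parse_lspair_row(
    "ESS:2105.22513:VOL:1302 ESS:2105.16603:VOL:1707 Duplex Yes Yes No "
    "ESS:2105.22513:VOL:1302 n/a IWNR2011I")
print(row["state"])   # Duplex
```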
When all volume pairs of the Continuous Synchronous Remote Copy session are synchronized, you should get the results shown in Example 8-8.
Example 8-8 Duplex state of Continuous Synchronous Remote Copy session
repcli> showsess PPRC_800_to_F20
Name PPRC_800_to_F20
Type pprc
State Active
Status normal
Group PPRC_src
Source Volumes 2
Approval Status Automatic
Copysets 2
Non-approved 0
Invalid 0
Seq "Remote Target"
Pool Criteria F20
Shadow Yes
Recover Yes
Err No
Approve Yes
Description Remote copy of 2 volumes from ESS 800 to F20
AWN007080I Command completed successfully.
repcli> lsseq PPRC_800_to_F20
Name Recov Err Shadow Err Vols Recov Pairs Shadow Pairs Total Pairs Recov Timestamp
============================================================================================
Remote Target Yes No Yes 0 2 2 2 n/a
repcli> lspair -seq 'Remote Target' PPRC_800_to_F20
Source Target State Recov Shadow New Copyset Timestamp Last result
=====================================================================================================================
ESS:2105.22513:VOL:1302 ESS:2105.16603:VOL:1707 Duplex Yes Yes No ESS:2105.22513:VOL:1302 n/a IWNR2011I
ESS:2105.22513:VOL:1303 ESS:2105.16603:VOL:1708 Duplex Yes Yes No ESS:2105.22513:VOL:1303 n/a IWNR2011I
While a session is suspended, the consistent data on the target volumes can be used, for example, to do a backup. All changes are registered, and when you start a suspended session again, only modified data is copied to the remote volumes to regain the synchronized state.
suspendsess command
You can use suspendsess to suspend a Continuous Synchronous Remote Copy session. To restart a session, invoke the startsess command.
Note: To keep data consistency, use the -type consist parameter for the suspendsess command.
suspendsess [ { -help|-h|-? } ] [-quiet] -type consist|immed session_name ... | -
-quiet An optional parameter that turns off the confirmation prompt for this command.
-type consist | immed Specifies the type of suspend. Specify consist to freeze a PPRC session, or specify immed (for immediately) to stop a session.
session_name [...] | - Specifies the session name to be suspended. Separate multiple session names with a white space between each name. Alternatively, use the dash (-) to specify that input for this parameter comes from an input stream (STDIN).
Example 8-9 shows the suspendsess command.
Example 8-9 suspendsess
repcli> suspendsess -quiet -type consist PPRC_800_to_F20
AWN007140I Command completed successfully.
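As noted earlier in this chapter, you should verify that a session is synchronized before suspending it, or the data on the remote site will be inconsistent. A sketch of such a guard (the function name is hypothetical; the field values are those reported by lssess):

```python
# Only suspend a session that is running and fully synchronized:
# lssess reports State Active and Recover Yes in that case.

def safe_to_suspend(state, recover):
    return state == "Active" and recover == "Yes"

print(safe_to_suspend("Active", "Yes"))   # True
print(safe_to_suspend("Active", "No"))    # False
```

A batch script could run this check against parsed lssess output and skip the suspendsess call when it returns False.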
When a Continuous Synchronous Remote Copy session is suspended, you should see the results shown in Example 8-10 from the lssess command. Notice that the session is recoverable and is not shadowing.
Example 8-10 lssess - suspended session
repcli> lssess PPRC_800_to_F20
Name Status State Group Type Recover Shadow Err
==============================================================
PPRC_800_to_F20 normal Active PPRC_src pprc Yes No No
lspair command
Invoke the lspair command to see that all volume pairs are suspended, and the time when the session was frozen, as shown in Example 8-11.
Example 8-11 lspair - suspended session
repcli> lspair -seq 'Remote Target' PPRC_800_to_F20
Source Target State Recov Shadow New Copyset Timestamp Last result
======================================================================================================================================
ESS:2105.22513:VOL:1302 ESS:2105.16603:VOL:1707 Suspended Yes No No ESS:2105.22513:VOL:1302 2005/04/12 19:43:25 PDT IWNR2015I
ESS:2105.22513:VOL:1303 ESS:2105.16603:VOL:1708 Suspended Yes No No ESS:2105.22513:VOL:1303 2005/04/12 19:43:25 PDT IWNR2015I
stopflashsess command
You can use the stopflashsess command at any point during the life of a Point-in-Time Copy session once that session is in the active state. This command withdraws all relationships between volumes on the storage subsystem. Example 8-12 shows an example of the stopflashsess command.
Example 8-12 stopflashsess
repcli> stopflashsess -quiet FC_F20_800
AWN007150I Command completed successfully.
repcli> lssess FC_F20_800
Name Status State Group Type Recover Shadow Err
==========================================================
FC_F20_800 warning Defined FC_src flash No No No
repcli> showsess FC_F20_800
Name FC_F20_800
Type flash
State Defined
Status warning
Group FC_src
Source Volumes 4
Approval Status Automatic
Copysets 4
Non-approved 0
Invalid 0
Seq "Flashed Target"
Pool Criteria P%
Shadow No
Recover No
Err No
Approve Yes
Description Point in time copy of 4 volumes, 2 on ESS F20 and 2 on ESS 800
AWN007080I Command completed successfully.
stopsess command
To stop a Continuous Synchronous Remote Copy session, you can use the stopsess command at any point during the life of a session once that session is in the active state. This command withdraws the relationship on the hardware.
stopsess [-quiet] session_name [. . .]
-quiet An optional parameter that turns off the confirmation prompt for this command.
session_name [...] | - Specifies the session name to be stopped. Separate multiple session names with a white space between each name. Alternatively, use the dash (-) to specify that input for this parameter comes from an input stream (STDIN).
Example 8-13 shows an example of the stopsess command.
Example 8-13 stopsess
repcli> stopsess -quiet PPRC_800_to_F20
AWN007120I Command completed successfully.
repcli> lssess PPRC_800_to_F20
Name Status State Group Type Recover Shadow Err
================================================================
PPRC_800_to_F20 warning Defined PPRC_src pprc No No No
repcli> showsess PPRC_800_to_F20
Name PPRC_800_to_F20
Type pprc
State Defined
Status warning
Group PPRC_src
Source Volumes 2
Approval Status Automatic
Copysets 2
Non-approved 0
Invalid 0
Seq "Remote Target"
Pool Criteria F20
Shadow No
Recover No
Err No
Approve Yes
Description Remote copy of 2 volumes from ESS 800 to F20
AWN007080I Command completed successfully.
Output format
This section describes how to control the output format of repcli commands.
setoutput command
You can use the setoutput command to set the output format for repcli commands; invoked without parameters, it displays the current output settings. The output format set by this command remains in effect for the duration of a command session or until the options are reset.
setoutput [ { -help|-h|-? } ] [-p on|off] [-r #] [-fmt default|xml|delim|stanza] [-delim character] [-hdr on|off] [-v on|off]
-? | -h | -help Displays a detailed description of this command, including syntax, parameter descriptions, and examples. If you specify a help option, all other command options are ignored.
-fmt Specifies the format of the output. You can specify one of the following values:
default Specifies that output is displayed in a tabular format using spaces as the delimiter between the columns. This is the default value.
delim Specifies that output is displayed in a tabular format using the specified character to separate the columns. If you use a shell metacharacter (for example, * or \t) as the delimiting character, enclose the character in single or double quotation marks. A blank space is not a valid character.
xml Specifies that output is displayed using XML format.
stanza Specifies that output is displayed in rows.
-delim character Specifies the character to separate the columns when the -fmt delim parameter is used.
-p Specifies whether to display one page of text at a time or all text at once.
off Displays all text at one time. This is the default value when the repcli command is run in single-shot mode.
on Displays one page of text at a time. Pressing any key displays the next page. This is the default value when the repcli command is run in interactive mode.
-hdr Specifies whether to display the table header.
on Displays the table header. This is the default value.
off Does not display the table header.
-r number Specifies the number of rows per page to display when the -p parameter is on. The default is 24 rows. You can specify a value from 1 to 100.
-v Specifies whether to enable verbose mode.
off Disables verbose mode. This is the default value.
on Enables verbose mode.
Example 8-14 shows the different formats of output.
Example 8-14 Default output settings
repcli> setoutput
Paging Rows Format Headers Verbose Banner
==========================================
On 22 Default On Off Off
If you want to use an output format other than the default for a single command, rather than for the whole repcli session, use the output parameters that are available on these commands:
lssess
lspair
lsseq
The output parameters for these commands are:
[-fmt default|xml|delim|stanza] [-delim character] [-hdr on|off] [-v on|off]
The syntax is the same as for the setoutput command. See the different formats of output for the lssess command in Example 8-15, Example 8-16, Example 8-17, and Example 8-18 on page 420.
Example 8-15 default output format
repcli> lssess PPRC_800_to_F20
Name Status State Group Type Recover Shadow Err
==============================================================
PPRC_800_to_F20 normal Active PPRC_src pprc Yes Yes No
Example 8-16 XML output format repcli> lssess -fmt xml PPRC_800_to_F20 <IRETURNVALUE> <INSTANCE CLASSNAME="RM_Session"><PROPERTY NAME="session_name" TYPE="string"><VALUE TYPE="string">PPRC_800_to_F20</VALUE></PROPERTY><PROPERTY NAME="cptype" TYPE="string"><VALUE TYPE="string">pprc</VALUE></PROPERTY><PROPERTY NAME="state" TYPE="string"><VALUE TYPE="string">Active</VALUE></PROPERTY><PROPERTY NAME="status" TYPE="string"><VALUE TYPE="string">normal</VALUE></PROPERTY><PROPERTY NAME="srcgrp" TYPE="string"><VALUE TYPE="string">PPRC_src</VALUE></PROPERTY><PROPERTY NAME="shadow" TYPE="string"><VALUE TYPE="string">Yes</VALUE></PROPERTY><PROPERTY NAME="recov" TYPE="string"><VALUE TYPE="string">Yes</VALUE></PROPERTY><PROPERTY NAME="err" TYPE="string"><VALUE TYPE="string">No</VALUE></PROPERTY></INSTANCE> </IRETURNVALUE>
Example 8-17 stanza output format
repcli> lssess -fmt stanza PPRC_800_to_F20
Name PPRC_800_to_F20
Status normal
State Active
Group PPRC_src
Type pprc
Recover Yes
Shadow Yes
Err No
repcli> lssess -l -fmt stanza PPRC_800_to_F20
Name PPRC_800_to_F20
Status normal
State Active
Group PPRC_src
Type pprc
Recover Yes
Shadow Yes
Err No
Approval Status Automatic
Pool Criteria F20
Copysets 2
Non-approved 0
Invalid 0
Description Remote copy of 2 volumes from ESS 800 to F20
Seq "Remote Target"
Source Volumes 2
Approve Yes
Example 8-18 delim output format
repcli> lssess -fmt delim -delim ',' PPRC_800_to_F20
Name,Status,State,Group,Type,Recover,Shadow,Err
===============================================
PPRC_800_to_F20,normal,Active,PPRC_src,pprc,Yes,Yes,No
Note: Use the lssess -l command instead of showsess in batch programs, because showsess shows results in only one format, stanza. See Example 8-19.
Example 8-19 shows a sample lssess -l command which can easily be used in a batch program.
Example 8-19 Using lssess with -l (long) parameter in delim format
repcli> lssess -l -fmt delim PPRC_800_to_F20
Name,Status,State,Group,Type,Recover,Shadow,Err,Approval Status,Pool Criteria,Copysets,Non-approved,Invalid,Description,Seq,Source Volumes,Approve
==================================================================================================================================================
PPRC_800_to_F20,normal,Active,PPRC_src,pprc,Yes,Yes,No,Automatic,F20,2,0,0,Remote copy of 2 volumes from ESS 800 to F20,"Remote Target",2,Yes
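Because the delim format quotes values and uses a single separator character, standard CSV tooling can consume it directly. A sketch (assuming no field value itself contains the chosen delimiter), using the short output from Example 8-18:

```python
# Read delim-formatted lssess output with Python's csv module,
# skipping the "=" separator line between the header and the data.
import csv
import io

OUTPUT = """\
Name,Status,State,Group,Type,Recover,Shadow,Err
===============================================
PPRC_800_to_F20,normal,Active,PPRC_src,pprc,Yes,Yes,No"""

lines = [l for l in OUTPUT.splitlines() if not l.startswith("=")]
reader = csv.DictReader(io.StringIO("\n".join(lines)))
sessions = list(reader)
print(sessions[0]["Recover"])   # Yes
```

The same approach works for the long output of Example 8-19, where the csv module also handles the quoted "Remote Target" field correctly.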
Chapter 9. Problem determination
This chapter provides information that will aid in troubleshooting TotalStorage Productivity Center installation and configuration issues. In this chapter, we describe:
Checking the TotalStorage Productivity Center host, including:
IBM Director logfiles
IBM DB2 database logfiles, health monitoring, and table content checking
IBM WebSphere Administrative console message browser usage, and how to enable tracing
Checking the CIMOM server (SLP host)
Tips and hints for validating that the CIM agents are running on your storage devices
Locations for relevant logfiles on the SLP host/CIMOM server
In our case, note the large number of User ID/Password Incorrect from Server on CIMOM messages. For us, these messages indicated that other CIM agents residing in the local subnet were not configured to accept our TotalStorage Productivity Center for Disk administrative username and password (superuser/password). The Director console reports that it has been denied access to a CIM agent.
For these types of messages, we create an action to broadcast a message to our team that a new CIMOM has been detected by TotalStorage Productivity Center for Disk. See Event Action Plan Builder on page 215 for detailed information.
Figure 9-2 is an example of raswatch output captured during a trace of TotalStorage Productivity Center discovery.
The raswatch output can be very verbose and scrolls off the screen quickly. Consider logging the output to a file using raswatch -dev_mgr -high > c:/testlog.txt. You can then open the raswatch file in the Notepad editor and search for IP address or hostname strings to validate the TotalStorage Productivity Center discovery process.
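Searching the captured file can also be scripted rather than done by hand in Notepad. A hypothetical Python sketch (the file name, log text, and IP address below are invented for illustration):

```python
import os
import re
import tempfile

def find_discovery_lines(log_path, pattern):
    """Return (line number, text) pairs from a captured raswatch log
    whose lines match the given IP address or hostname pattern."""
    rx = re.compile(pattern, re.IGNORECASE)
    matches = []
    with open(log_path, encoding="utf-8", errors="replace") as f:
        for lineno, line in enumerate(f, 1):
            if rx.search(line):
                matches.append((lineno, line.rstrip()))
    return matches

# Demo with a tiny invented sample standing in for c:/testlog.txt
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("CIMOM discovered at 9.1.38.48\nauthentication ok\n")
    path = f.name
hits = find_discovery_lines(path, r"9\.1\.38\.48")
os.remove(path)
print(hits)  # [(1, 'CIMOM discovered at 9.1.38.48')]
```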
Additional information about viewing and managing events logged in the DB2 journal is available from the help menus. By default, DB2 instance health checking is disabled; it may be advisable to enable the health monitor. Even the default alert thresholds that take effect when the health monitor is enabled give the TotalStorage Productivity Center administrator at least some indication of issues with the DB2 instance. You can open the DB2 Health Center by clicking Start → Programs → IBM DB2 → Monitoring Tools → Health Center.
Remember that by default, Health Center monitoring is disabled. The green circle on our DB2 instance and associated databases shows that we have enabled monitoring; at present, no issues have generated alerts. Figure 9-5 on page 426 shows the typical default threshold settings for each of the TotalStorage Productivity Center for Disk databases.
Aside from viewing the DB2 events, or ensuring that event monitoring is enabled, we can also review the contents of specific database tables to ensure that we are receiving data and that the appropriate tablespaces are being populated. For example, if we have just performed an ESS data collection task, we should have entries in the following three tables in the PMDATA database:
- VPCCH: Volume data
- VPCRK: Array data
- VPCLUS: Cluster data
We can review the contents of these tables from within the DB2 Control Center by navigating through the tree on the left-hand side, from system, to instance, to the database, and tables, as shown in Figure 9-6 on page 427.
To view the contents of a specific table, right-click the table you want to view and select Sample Contents. You should see a table like the one in Figure 9-7 on page 428.
The presence of these rows in this table tells us that we have successfully performed a data collection task against the ESS with serial number 22219. We should also see data in the other tables cited above.
A considerable amount of application level information can be obtained from within the IBM WebSphere framework, using the Administrative Console.
5. Follow the TotalStorage Productivity Center for Disk discovery process using raswatch (Figure 9-2 on page 423) for evidence that the new CIM agent:
a. Has been detected
b. Has allowed TotalStorage Productivity Center for Disk to authenticate correctly
The same activities may be traced in the IBM Director logfiles (Figure 9-1 on page 422).
#----------------------------------------------------------------------------
# Tracing and Logging
#----------------------------------------------------------------------------
# A boolean controlling printing of messages about traffic with DAs.
# Default is false.
;net.slp.traceDATraffic = true

# A boolean controlling printing of details on SLP messages. The fields in
# all incoming messages and outgoing replies are printed. Default is false.
;net.slp.traceMsg = true

# A boolean controlling printing of details when a SLP message is dropped
# for any reason. Default is false.
;net.slp.traceDrop = true

# A boolean controlling dumps of all registered services upon registration
# and deregistration. If true, the contents of the DA or SA server are
# dumped after a registration or deregistration occurs. Default is false.
;net.slp.traceReg = true
The more detailed output from SLP tracing is in the SLP logfile, located at c:/WINNT/slpd.log. Important: After the required tracing information has been gathered, disable SLP tracing; the logfile can become very large.
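Since each trace property is activated simply by removing the leading semicolon (and deactivated by restoring it), enabling and disabling SLP tracing can be scripted. A hypothetical sketch; only the ';' comment convention of OpenSLP-style slp.conf files is assumed:

```python
def set_slp_trace(conf_text, prop, enable):
    """Comment or uncomment one net.slp.trace* property in an
    slp.conf fragment. A leading ';' disables a property."""
    out = []
    for line in conf_text.splitlines():
        body = line.lstrip(";")
        if body.startswith(prop):
            out.append(body if enable else ";" + body)
        else:
            out.append(line)
    return "\n".join(out)

conf = ";net.slp.traceMsg = true\n;net.slp.traceDATraffic = true"
enabled = set_slp_trace(conf, "net.slp.traceMsg", True)
print(enabled.splitlines()[0])  # net.slp.traceMsg = true
```

Running it again with enable=False would restore the semicolon, which is the step the Important note above recommends once tracing data has been gathered.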
Seeing trace entries of this type shows that Replication Manager is receiving indications properly. If Replication Manager is not receiving indications, then no entries of this type will be seen surrounding the event in question. If Replication Manager is not seeing indications, then usually one or more layers of the software stack need to be restarted.
For TotalStorage Productivity Center, the corresponding logging and trace settings have to be configured with the WebSphere Administrative Console. Tracing is disabled by default. Use the following steps to change the logging state: 1. Launch the WebSphere Application Server Administrative Console at the following URL:
http://servername:9090/admin
This redirects the browser to the secure login page; after logging in, you arrive at the WebSphere Application Server Administrative root page (see Figure 9-14):
Figure 9-15 WebSphere Application Server trace tool - select servers example
Figure 9-16 WebSphere Application Server trace tool - select application servers example
Figure 9-17 WebSphere Application Server trace tool - select server 1 example
Figure 9-18 WebSphere Application Server trace tool - select logging and tracing example
Figure 9-19 WebSphere Application Server trace tool - select diagnostic trace example
7. Check the Enable trace box. Enter the required trace entries into the trace specification box, separated by colons. Insert all of the trace specifications in Table 9-1 that might be used by TotalStorage Productivity Center. The table provides the default TotalStorage Productivity Center for Replication specifications (see Figure 9-1 on page 442) for the trace.
Table 9-1  MDM default trace specifications

General format: Comp=level=state, where:
- Comp is the component to trace
- level is the amount of trace
- state* is enabled or disabled

Component: Default
- Replication Manager Element Catalog: ELEMCAT=all=enabled
- Replication Manager Hardware layer: HWLAYER=all=enabled
- Replication Manager Session Manager: REPMGR=all=enabled
- Replication Manager integration with Device Manager: DMINT=all=enabled

*This is the value which should be set all the time, unless otherwise specified.

For TotalStorage Productivity Center for Replication, the full setting is:

REPMGR=all=enabled:HWLAYER=all=enabled:DMINT=all=enabled:ELEMCAT=all=enabled

The remaining settings on this page control how much trace is captured before it is overwritten. The actual settings to use depend on the server configuration, but here are some guidelines:
- Always choose the setting which sends the trace to a file.
- 20 MB is a good size for each trace file.
- Enable at least one historical file. The more history available, the better, since many TotalStorage Productivity Center tasks are long-running and may produce a lot of trace data.
- Be sure to leave sufficient free space. The total trace will take up the number of historical files plus 1, multiplied by the size of each file.

Recommended settings:
- 20 MB per file
- 10 historical files (unless there is not enough disk space on the server)

Tip: The default file name is ${SERVER_LOG_ROOT}/trace.log, and it is best to keep this default whenever possible. This ensures that the automated tools that collect log and trace information can find the files. If the log files need to be written to a different location, for example to a different disk to manage free disk space, it is better to change the environment variable SERVER_LOG_ROOT. Refer to the WebSphere Application Server documentation for information about how to change this environment variable.

Figure 9-20 on page 443 is a sample window showing several changed values.
Figure 9-20 WebSphere Application Server trace tool - several trace values changed
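The free-space guideline reduces to simple arithmetic: the trace occupies the active file plus each historical file. A quick sketch of the calculation:

```python
def trace_disk_usage_mb(file_size_mb, historical_files):
    """Worst-case disk space for WebSphere diagnostic trace:
    the active trace.log plus every rolled-over historical file."""
    return (historical_files + 1) * file_size_mb

# The recommended settings: 20 MB files with 10 historical files
print(trace_disk_usage_mb(20, 10))  # 220
```

So the recommended settings reserve about 220 MB of disk space on the server.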
8. After making all changes, click OK, and then click Save to save the changes. Tip: To change trace settings immediately without restarting WebSphere Application Server, make the equivalent change on the Runtime tab instead of the Configuration tab. When Apply or OK is clicked from the Runtime tab, the change takes effect immediately.
5. You may need to wait some time until IBM Director has restarted.
9.4.2 SVC Data collection task failure due to previous running task
The Performance Manager data collection task may fail because of a previously running task, since SVC data collection allows only one such task to run at a time. You may need to stop the previous data collection task, either with the Performance Manager command line interface (perfcli) tool or from the SVC Console. To stop the task from the CLI tool, go to C:\Program Files\IBM\mdm\pm\pmcli and run the command:

stopsvcollection -devtype svc <task_name>

You are then asked to confirm whether to stop the task; respond Y (yes). Alternatively, launch the SVC Console Web browser interface. After logging into the SVC console, choose Clusters under the My Work column, click the check box for the respective SVC cluster in the Clusters column, and click Go. In the next panel, select Manage Cluster. You will see a panel similar to Figure 9-22 on page 446.
Choose Stop Statistics Collection as shown in the figure. The next panel is shown in Figure 9-23 on page 447.
Click Yes. This will stop all the performance data collection for SVC.
Chapter 10.
After you specify the database purge information, it is saved as a noninteractive IBM Director task. You schedule all performance data-collection tasks using the IBM Director scheduler function.
The current database information is shown. Use this panel to specify the properties for a new performance database purge task. The fields are:
- Name: Type a name for the performance database purge task, from 1 to 250 characters.
- Description (optional): Type a description for the performance database purge task, from 1 to 250 characters.
- Device type: Select one or more storage device types for the performance database purge.
- Purge performance data older than: Select the maximum number of days or years that you want the performance data to reside in the database before it is purged.
- Purge data containing threshold exception information: Select this check box to purge exception data as well.
- Save as task: When you click Save as task, the information you specified is saved and the panel closes.
The newly created task is saved as a noninteractive task in the IBM Director Task pane under Performance Manager Database. All performance database tasks can be scheduled using the IBM Director scheduler function, as seen in Figure 10-3 on page 453.
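The "purge performance data older than" setting amounts to computing a cutoff date; rows collected before that date are candidates for removal. A small illustrative sketch (the dates are invented, and this is not the product's implementation):

```python
from datetime import date, timedelta

def purge_cutoff(today, older_than_days):
    """Rows with a collection date strictly before this cutoff would
    be removed by a purge task set to 'older than N days'."""
    return today - timedelta(days=older_than_days)

print(purge_cutoff(date(2005, 9, 30), 90))  # 2005-07-02
```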
Right-click the newly created database purge task (Figure 10-3) to schedule it for execution. Execution is either immediate or scheduled as seen in Figure 10-4.
data gathered from the storage devices you are monitoring. A component of the tool suite is an interface to the online DB2 Support Web site resources. In this section we briefly describe the tools and provide examples of their use with TotalStorage Productivity Center. Tip: For detailed information and usage examples of the GUI tools for DB2 UDB Express, see An Introduction to DB2 UDB Express GUI tools (Part 1):
http://www-106.ibm.com/developerworks/db2/library/techarticle/0307chong/0307chong.html
To access the DB2 tool suite, use the path Start → Programs → IBM DB2. The following main menu options are available for use with the TotalStorage Productivity Center databases, or any other DB2 database instance you may have on your TotalStorage Productivity Center server. We put the most emphasis on the Command Line Tools in a TotalStorage Productivity Center reporting framework.
- Command Line Tools
- Development Tools
- General Administration Tools
- Information
- Monitoring Tools
- Set-up Tools
Tip: For detailed information, use the DB2 tool suite help screens or the DB2 online information at the following URL:
http://publib.boulder.ibm.com/infocenter/db2help/index.jsp
If a command exceeds the character limit allowed at the command prompt, a backslash (\) can be used as the line continuation character. When the command line processor encounters the line continuation character, it reads the next line and concatenates the characters contained on both lines. Alternatively, the -t option can be used to set a line termination character. In that case, the line continuation character is invalid, and all statements and commands must end with the line termination character. For more information, use the DB2 UDB online help. In current releases of DB2 UDB, the CLP starts in interactive mode, indicated by a DOS-style command prompt, db2 =>. In this mode, you can enter one DB2 UDB command or one SQL statement by typing it at the prompt and pressing Enter. Figure 10-5 shows an example query in an IBM DB2 Command Line Processor window.
In this example, a CONNECT DB2 UDB command was executed to connect to the TotalStorage Productivity Center Performance Manager database named PMDATA (the TotalStorage Productivity Center performance database alias). After this command executes, you can enter a SELECT SQL statement against any of the tables in the PMDATA database. The commands are not case sensitive, but the user ID (MDMSUID) and password (MDMSPW) are case sensitive, based on how they were defined during database setup at installation or thereafter. Exit the interactive mode by typing QUIT and pressing Enter. The DB2 UDB tool suite also has another CLP that operates in a non-interactive mode: the Command Window. It can be opened from the path Start → IBM DB2 → Command Line Tools → Command Window. SQL queries are invoked by starting each SQL statement with the characters db2, for example db2 connect to pmdata. This CLP has the same case sensitivity requirements as the Command Line Processor. For additional examples of the Command Line Processor, refer to Data Extraction using DB2 Command Line Processor Interface on page 487.
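The backslash continuation rule described above can be mimicked in a few lines, which is handy when a script assembles long statements before handing them to the CLP. A hypothetical sketch (the helper and the sample statement are ours):

```python
def join_continuations(lines):
    """Join lines the way the CLP does: a trailing backslash
    concatenates a line with the one that follows it."""
    out, buf = [], ""
    for line in lines:
        if line.endswith("\\"):
            buf += line[:-1]
        else:
            out.append(buf + line)
            buf = ""
    if buf:
        out.append(buf)
    return out

cmd = ["select M_MACH_SN, M_MODEL_N \\", "from VPVPD"]
print(join_continuations(cmd))  # ['select M_MACH_SN, M_MODEL_N from VPVPD']
```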
Start → Programs → IBM DB2 → Development Tools

The Development Tools options are:
- Development Center
- Project Deployment Tools

DB2 Development Center provides an easy-to-use development environment for creating, installing, and testing stored procedures. It allows you to focus on creating your stored procedure logic rather than the details of registering, building, and installing stored procedures on a DB2 server. Additionally, with Development Center, you can develop stored procedures on one operating system and build them on other server operating systems. Development Center is a graphical application that supports rapid development. Using Development Center, you can perform the following tasks:
- Create new stored procedures.
- Build stored procedures on local and remote DB2 servers.
- Modify and rebuild existing stored procedures.
- Test and debug the execution of installed stored procedures.
Control Center
A GUI for snapshot and event monitoring. For snapshots, it allows you to define performance variables in terms of the metrics returned by the database system monitor and graph them over time. For example, you can request that it take a snapshot and graph the progression of a performance variable over the last eight hours. Alerts can be set to notify the DBA when certain thresholds are reached. For event monitors, it allows you to create, activate, start, stop, and delete event monitors. See the online help for the Control Center for more information (also see Control Center on page 482).
Journal
You can start the Journal by selecting its icon from the Control Center toolbar. The Journal allows you to monitor pending jobs, running jobs, and job histories; review results; and display the recovery history, alert messages, and the log of DB2 messages.
Replication Center
The Replication Center stores the initial information about registered sources, subscription sets, and alert conditions in the control tables. The Capture program, the Apply program, and the Capture triggers update the control tables to indicate the progress of replication and to coordinate the processing of changes. The Replication Alert Monitor reads the control tables that have been updated by the Capture program, Apply program, and the Capture triggers to understand the problems and progress at a server.
Task Center
Use the Task Center to create, schedule, and run tasks. You can create the following types of tasks:
- DB2 scripts that contain DB2 commands
- OS scripts, which contain operating system commands
- MVS shell scripts to run on OS/390 and z/OS operating systems
- JCL scripts to run in a host environment
- Grouping tasks, which contain other tasks
Task schedules are managed by a scheduler, while the tasks are run on one or more systems, called run systems. You define the conditions for a task to fail or succeed with a success code set. Based on the success or failure of a task, or group of tasks, you can run additional tasks, disable scheduled tasks, and take other actions. Tip: You can also define notifications to send after a task completes. You can send an e-mail notification to people in your contacts list, or you can send a notification to the Journal.
Event Analyzer
The Event Analyzer GUI is used for viewing file event monitor traces. Information collected on connections, deadlocks, overflows, transactions, statements, and subsections is organized and displayed in a tabular format. See the online help for the Event Analyzer for more information.
Health Center
Use the Health Center GUI tool to set up thresholds that, when exceeded, will prompt alert notifications, or even actions to relieve the situation. In other words, you can have the database manage itself!
Working within the DB2 UDB Command Center, you can run SQL statements, DB2 UDB commands, and operating system commands in an interactive mode. As with most database GUI tools, you first connect to the database that you want to run your queries against. From there, Command Center can display a list of tables to which you have access. Command Center can also assist in writing the query by allowing you to pick table names, column names, filters, conditions, predicates, and other table specifics from its windows. You can also execute a stack of SQL statements within the Script tab portion of the window. Multiple SQL statements can be executed as a unit of work (UOW), which means that each statement must complete successfully for the others to take effect. If any statement fails, the work done by all previously completed statements is rolled back. In addition to the Command Center, you may want to use the IBM DB2 Control Center. The two tools share much of the same functionality, but each has specific capabilities. Which tool you use will depend upon what type of information you want to extract, and what your needs are regarding output of the data: screen output, file output, or both.
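The unit-of-work behavior described above can be demonstrated with any transactional database. The sketch below uses SQLite from Python's standard library as a stand-in for DB2; the table and values are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table t (id integer primary key)")
conn.commit()
try:
    # Two statements in one unit of work; the second one fails.
    conn.execute("insert into t values (1)")
    conn.execute("insert into t values (1)")  # duplicate key -> error
    conn.commit()
except sqlite3.IntegrityError:
    conn.rollback()  # the first insert is undone as well

remaining = conn.execute("select count(*) from t").fetchone()[0]
print(remaining)  # 0 - neither row was kept
```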
Tip: Alternatively, if the Control Center is open, click the Command Center icon on the toolbar and the Command Center opens.
You can use the toolbar icons (see Figure 10-7) to open DB2 tools, view the legend for Command Center objects, and view DB2 information.
Execute Executes the SQL statements, DB2 CLP commands, scripts, or MVS system commands that you enter on the Interactive or Script page. The results are displayed on the Query Results and the Access Plan pages.
Control Center Opens the Control Center so that you can display all of your systems, databases, and database objects and perform administration tasks on them.
Replication Center Opens the Replication Center so that you can design and set up your replication environment.
Satellite Administration Center Opens the Satellite Administration Center so that you can set up and administer satellites and the information that is maintained in the satellite control tables.
Data Warehouse Center Opens the Data Warehouse Center so that you can manage Data Warehouse objects.
Task Center Opens the Task Center so that you can create, schedule, and execute tasks.
Information Catalog Center Opens the Information Catalog Center so that you can manage your business metadata.
Health Center Opens the Health Center so that you can work with alerts generated while using DB2.
Journal Opens the Journal so that you can schedule jobs that are to run unattended and view notification log entries.
License Center Opens the License Center so that you can display license status and usage information for the DB2 products installed on your system and use the License Center to configure your system for license monitoring.
Development Center Opens the Development Center so that you can develop stored procedures, user-defined functions, and structured types.
Contacts Opens the Contacts window where you can specify contact information for individual names or groups.
Tools Settings Opens the Tools Settings notebook so that you can customize settings and properties for the administration tools and for replication tasks.
Legend Opens the Legend window that displays all of the object icons available in the Command Center by icon and name.
Retrieve Table Data Retrieves the data for the table you have executed SQL statements against and displays it on the Query Results page.
Create Access Plan Creates the access plan for the current SQL statement and displays it on the Access Plan page.
Information Center Opens the Information Center so that you can search for help on tasks, commands, and information in the DB2 library.
Help Displays help for getting started with the Command Center. Tip: We suggest you use this extremely useful Help feature to navigate the DB2 Tool Suite until you are comfortable with the function provided in the DB2 Express Tool Suite.
Next, display three columns, with Date over the left column, Cluster 1 over the middle column, and Cluster 2 over the right column. Sort by date/time in the left column (VPCLUS:PC_DEV_DATE_E/PC_DEV_TIME_E). The left column is keyed by:
- VPCLUS:P_TASK
- VPCLUS:M_MACH_SN
- VPCLUS:PC_DEV_DATE_E/PC_DEV_TIME_E

Under each cluster column, display sub-column headers of I/O Rate, Avg Cache Hold Time, and NVS % Full. The center and right columns are keyed by:
- VPCLUS:PC_DEV_DATE_E/PC_DEV_TIME_E
- VPCLUS:M_CLUSTER_N

Under the center and right columns, display rows with the following:
- I/O rate (VPCLUS:Q_CL_IO_RATE)
- Average cache hold time (VPCLUS:Q_CL_AVG_HOLD_TIME)
- NVS % full (VPCLUS:Q_CL_NVS_FULL_PRCT)
2. Select a cluster from step 1 to investigate further.
3. Then, build a report broken out by Logical SubSystem (LSS), displaying the following information in the header to reflect the row data:
- Type, M_MACH_TY
- Model, M_MODEL_N
- Serial, M_MACH_SN
- Cluster, M_CLUSTER_N
- from-date/time, PC_DATE_B/PC_TIME_B
- to-date/time, PC_DATE_E/PC_TIME_E
Sort by date/time and sub-sort by DA:
- PC_DEV_DATE_B/PC_DEV_TIME_B
- PC_DEV_DATE_E/PC_DEV_TIME_E
- M_CARD_NUM
Display the following row data:
- DA ID #, M_CARD_NUM
- Loop A or B, M_LOOP_ID
- Array ID, M_ARRAY_ID
- Array Type, M_STOR_TYPE
- Average ms to satisfy all requests to this array, PC_IOR_AVG
- % Time Array Busy, Q_SAMP_DEV_UTIL
- Total I/O read/writes to this array, Q_IO_TOTAL
- Total sequential read/writes to this array, Q_IO_SEQ

4. Select an LSS from step 2 to investigate further.
5. Then, build a report broken out by loop, displaying the following row information with the corresponding column headers:
- Type, M_MACH_TY
- Model, M_MODEL_N
- Serial, M_MACH_SN
- Cluster, M_CLUSTER_N
- LSS, M_LSS_LA
- DA #, M_CARD_NUM
- Loop, M_LOOP_ID
- from-date/time, PC_DATE_B/PC_TIME_B
- to-date/time, PC_DATE_E/PC_TIME_E
Sort by date/time and sub-sort by array:
- PC_DEV_DATE_B/PC_DEV_TIME_B
- PC_DEV_DATE_E/PC_DEV_TIME_E
- M_ARRAY_ID
Display the following row data:
- Array ID, M_ARRAY_ID
- Array Type, M_STORE_TYPE
- # of write requests issued to this array, PC_IO_WRITE
- # of ms to satisfy reads to this array, PC_RT_READ
- # of ms to satisfy writes to this array, PC_RT_WRITE
- Avg I/O rate for all requests, PC_IOR_AVG
- # of ms avg to satisfy all requests to this array, PC_MSR_AVG
- Bytes read per second from this array, PC_RBT_AVG
- Bytes written per second to this array, PC_WBT_AVG
- % time array busy, Q_SAMP_DEV_UTIL
- Total I/Os issued to this array, Q_IO_TOTAL
- Total sequential read/write requests to this array, Q_IO_SEQ

6. Select an array from step 3 to investigate further.
7. Build a report broken out by volume, displaying the following row data with the corresponding column names:
- Type, M_MACH_TY
- Model, M_MODEL_N
- Serial, M_MACH_SN
- Cluster, M_CLUSTER_N
- LSS, M_LSS_LA
- DA #, M_CARD_NUM
- Loop, M_LOOP_ID
- Array, M_ARRAY_ID
- from-date/time, PC_DATE_B/PC_TIME_B
- to-date/time, PC_DATE_E/PC_TIME_E
Sort by date/time and sub-sort by volume:
- PC_DEV_DATE_B/PC_DEV_TIME_B
- PC_DEV_DATE_E/PC_DEV_TIME_E
- M_VOL_NUM
Display the following row data:
- # of the logical volume, M_VOL_NUM
- F / C, M_VOL_TY
- LUN Serial or SSID+base device address, M_VOL_ADDR
Tip: The preceding global-to-granular reporting sequence can be achieved through the DB2 Tool Suite Command Center, for instance. The individual component reports can be scheduled in the DB2 Tool Suite and the output parsed and compiled into a spreadsheet format (table data output exported in .WKS format, for example) such as Lotus 1-2-3 or Excel. Once the data is formatted into the worksheet, the native macro functionality of the spreadsheet can be used to process the data further into graphical reports, summary reports, problem analysis documents for root cause analysis, and performance analyses of SAN components, to meet the particular needs of your organization and SAN environment.
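As the tip suggests, report output is often staged into a delimited file for spreadsheet import. A minimal sketch using Python's csv module, with invented sample values whose column names echo the cluster report described earlier:

```python
import csv
import io

# Hypothetical rows as they might come back from a VPCLUS query
rows = [
    ("2005-09-01 10:00", 1, 812.4, 105.2, 12.1),
    ("2005-09-01 10:00", 2, 790.8, 98.7, 11.4),
]
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Date", "Cluster", "IO Rate", "Avg Cache Hold", "NVS % Full"])
writer.writerows(rows)
csv_text = buf.getvalue()
print(csv_text.splitlines()[0])  # Date,Cluster,IO Rate,Avg Cache Hold,NVS % Full
```

The resulting file opens directly in Lotus 1-2-3 or Excel, where macros can take over the charting and summarization described above.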
Figure 10-8 Open Command Center window select Interactive button example
3. Utilize the performance data collected from your storage servers using the PM database tables (PMDATA). With the Command Center window open, click the Database Connection button to the right of the Database Connection bar. The Select Database window opens. Figure 10-9 on page 467 shows an example of selecting the PM database tables (PMDATA) in the Command Center.
4. Once you have selected the database you want to work with, you can use the previously described functions to manage information within the database, extract data, or set up SQL queries using the Interactive or SQL Assist options. For our example, we use only the PMDATA database information. You could just as easily use the DM (DMCOSERV) database to retrieve related information (such as asset data), or the IBMDIR database to manage information in the IBM Director tables. We will proceed to run a base report against a specific Model 800 ESS. Note: We could also run this report against all similar storage server types for which we have previously collected performance data stored in the PMDATA database. To do so, we would include specific, or all, M_MACH_SN values (and their associated data) available in the database. 5. We have connected to the PMDATA database and will use the SQL Assist function within the Command Center for our query example. Click the SQL Assist button to begin the SQL query script definition (see Figure 10-10 on page 468).
Note: Use SQL Assist to create SQL statements. With SQL Assist and some knowledge of SQL, you can create SQL SELECT statements. In some environments, you can also use SQL Assist to create INSERT, UPDATE, or DELETE statements. SQL Assist is a tool that uses an outline and details panels to help you organize the information that you need to create an SQL statement. SQL Assist is available in the Control Center, the Command Center, the Replication Center, and the Development Center. See the Online Help for more information. SQL Assist and other functions within the DB2 Tool Suite incorporate button sensitive (mouse over) help pop-up windows to aid you in navigating and making your menu selections within the tools.
6. The SQL Assist window will now open (see Figure 10-11 on page 469). Click the Select radio button, in the middle right area of the window, since we are only going to retrieve data from the tables with our database queries in this example. The radio button options available in this window are the SQL query options: Select, Insert, Update, and Delete. It is not recommended to issue any SQL commands other than SELECT statements against your production database; with SELECT, you are only reading data from the database. If you want to manipulate the database further, it is recommended to make a copy of the database and work with the copy, not the production database.
7. Notice that the lower pane of the SQL Assist window shows the initial syntax of a SELECT statement. We are now going to go through the steps to complete a comprehensive SQL SELECT query statement to view (or extract) data from our PMDATA database. In the upper left pane, called Outline, double-click the FROM (Source Tables) icon; the Details pane opens in the center of the SQL Assist window, listing the Available Tables tree. Select DB2ADMIN to find the tables that we want to use in our query (see Figure 10-12 on page 470).
Figure 10-12 Selecting the DB2ADMIN table button pull-down listing of available tables
8. We will now select the VPVPD table, which contains a storage server configuration snapshot. Use the slider on the Available Tables pane until you can click the VPVPD table. Then click the > button; the table name is added to Selected Source Tables in the upper right corner of the SQL Assist window (see Figure 10-13 on page 471). Notice how the VPVPD table selection has been entered automatically into the rudimentary SELECT statement in the lower pane of the window. This pane shows the validated SQL statement you have created thus far and keeps track as you proceed through the SQL Assist function.
9. Now that you have selected the table you want to query for data, click the SELECT (Result Columns) icon in the Outline pane on the left of the window. The DB2ADMIN.VPVPD (Instance.Table) icon appears in the pane. Click the + button on this icon and the VPVPD columns are listed in the table tree (see Figure 10-14 on page 472).
10. Now select the VPVPD columns we want in our view with our SQL statement. You can select the columns in several ways: one at a time, by clicking a column name; multiple columns, by clicking column names while holding the Shift key down; or all columns, by clicking the >> button. In our example, we selected the columns we wanted while holding down the Shift key and clicking the column names. Now click the > button to populate the Result Columns pane (which is greyed out until you make your selections). After a moment, the column names appear in the right-hand pane and in the validated SQL query statement that is building in the lower SQL Assist pane (see Figure 10-15 on page 473). User-defined field variables appear in the validated SQL statement.
11. Next, click the WHERE (Row filter) icon in the Outline pane. This presents a table list in the Available Columns pane, from which you can select where, and how, to filter the query (see Figure 10-16 on page 474).
Figure 10-16 Where (Row filter) for column M_MACH_SN values (note mouse over help)
12.Now define the statement to return results where the M_MACH_SN value (ESS serial number) equals 2105.22219. Place the cursor in the Value field. You can either type a specific value or use the pull-down arrow to bring up other options. One of the options lets you see field values already in the table; it opens a subsequent screen showing current column values. You can select from that screen how many results to display; the default is to show 25 rows, and you can increase or decrease this value through the menu. After you have made your selection for the Value, click the > button to enter the value into the Search Condition pane and into the validated SQL pane (see Figure 10-17 on page 475).
13.You will not be using the Group By or Having SQL query functions for this simple query example; see the help screens for further information on those query options. Now click Order By (Sort Criteria) in the Outline pane. The Available columns pane shows the VPVPD table content tree (column names). Click P_CDATE (performance collection date), then click the > button; the column name appears in the Sort Columns pane. You have the option to select ASC (ascending) or DESC (descending) sort order. ASC is set as the default sort order; leave it as-is (see Figure 10-18 on page 476).
14.Now that you have completed building a query statement, click the Run button to view the results (see Figure 10-19).
15.After reviewing the results of the query, click the OK button to return the SQL code to the main Command Center, Interactive window. Notice the mouse-over pop-up window (see Figure 10-20 on page 477). The SQL code from this example is shown in Example 10-1 on page 477.
SELECT VPVPD.M_MACH_SN, VPVPD.M_MACH_TY, VPVPD.M_MODEL_N, VPVPD.M_CLUSTER_N,
       VPVPD.M_RAM, VPVPD.M_NVS, VPVPD.P_CDATE, VPVPD.P_CTIME
FROM DB2ADMIN.VPVPD AS VPVPD
WHERE VPVPD.M_MACH_SN = '2105.22219 '
ORDER BY VPVPD.P_CDATE ASC
Figure 10-20 Return SQL code we created to the Command Center, Interactive window
16.You can now save your SQL code for future use, either to schedule as a recurring task or to run ad hoc. Click the Interactive tab on the menu bar at the top of the window and then click Save Command As... (see Figure 10-21 on page 478).
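Once saved, the statement can also be run outside the Command Center from the DB2 command line processor. This is a minimal sketch; the saved file path is an illustrative assumption:

```
db2 connect to PMDATA
db2 -tvf C:\scripts\vpvpd_query.sql
db2 connect reset
```

The -tvf flags tell the CLP to treat the semicolon as the statement terminator, echo the command, and read the statements from the named file.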
Figure 10-21 Save Command As... an ASCII file for later use
2. Drill down with the next query for the suspect time period to see which arrays were involved. You could do this with any of the tables you want to examine that have data associated with them. The SQL query is shown in Example 10-4. The query result is in Figure 10-24 on page 481.
Example 10-4 VPCRK SQL query for a specific time period

SELECT VPCRK.M_MACH_SN, VPCRK.PC_DEV_DATE_E, VPCRK.PC_DEV_TIME_E, VPCRK.M_LSS_LA,
       VPCRK.M_ARRAY_ID, VPCRK.M_DDM_NUM, VPCRK.M_CARD_NUM, VPCCH.M_VOL_NUM,
       VPCCH.M_VOL_ADDR, VPCCH.M_VOL_TY, VPCRK.PC_MSR_AVG
FROM DB2ADMIN.VPCRK AS VPCRK, DB2ADMIN.VPCCH AS VPCCH
WHERE VPCRK.M_MACH_SN LIKE '%2105%'
  AND VPCCH.M_MACH_SN LIKE '%2105%'
  AND VPCRK.PC_DEV_DATE_E = '2004-06-08'
ORDER BY VPCRK.PC_DEV_DATE_E ASC, VPCRK.Q_SAMP_DEV_UTIL DESC, VPCRK.PC_MSR_AVG DESC
From the information above, you can determine the date, time of day, rank number, volume number, and volume address for the time period examined in the previous query. All the hits in this report indicate that the volumes are OS/390 assigned storage (value C in the M_VOL_TY column).

Tip: You could save this query and run it as a scheduled task from the DB2 Tool Suite. You could also export the data for further manipulation and presentation in a spreadsheet application, or set up SQL query tasks to run on a schedule, using the TotalStorage Productivity Center gauge reports to determine which areas need further investigation.

The information derived from these queries of the PM database tables correlates with the information you can derive from the ESS Specialist, so you can determine which hosts and associated applications are causing performance concerns. For further information and examples of SQL queries, refer to the redbook IBM TotalStorage Expert Reporting: How to Produce Built-In and Customized Reports, SG24-7016.
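As a sketch of the kind of follow-on drill-down described above, the query in Example 10-4 could be narrowed to a suspect time window. The BETWEEN bounds below are illustrative values, not from the original example:

```sql
SELECT VPCRK.M_MACH_SN, VPCRK.PC_DEV_DATE_E, VPCRK.PC_DEV_TIME_E,
       VPCRK.M_ARRAY_ID, VPCRK.PC_MSR_AVG
FROM DB2ADMIN.VPCRK AS VPCRK
WHERE VPCRK.M_MACH_SN LIKE '%2105%'
  AND VPCRK.PC_DEV_DATE_E = '2004-06-08'
  AND VPCRK.PC_DEV_TIME_E BETWEEN '14:00:00' AND '15:00:00'
ORDER BY VPCRK.PC_MSR_AVG DESC
```

Sorting on PC_MSR_AVG descending brings the busiest arrays in the window to the top of the result set.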
Comma Separated Value (CSV) format. You can do this by opening the PM Performance Data collection task Execution History window. This can be accomplished several ways. Following is an example:
1. Open the Scheduler (either the Month, Week, Day, or Job calendar view).
2. Right-click the specific task you want to export.
3. Click the Open Execution History... option.
4. Right-click the Export option of the specific task that was scheduled. The Spreadsheet (.csv) pull-down menu appears.
5. Right-click the Spreadsheet (.csv) option; the Export Comma Separated Value Format window opens.
6. In the Export Comma Separated Value Format window, enter the File Name and Drive, and determine where you want to save the file (see Figure 10-25).
Important: Never make any modifications to your existing TotalStorage Productivity Center database. If you want to learn and experiment, create a new database or export your production database and perform operations on the exported database only. Once you have opened the DB2 UDB Control Center, you can drill down to your TotalStorage Productivity Center database (PMDATA in this example) by using the explorer window on the left-hand side of the window (Figure 10-26).
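One way to produce such a working copy is sketched below with the db2move tool; the PMCOPY database name is an illustrative assumption:

```
db2 create database PMCOPY
db2move PMDATA export
db2move PMCOPY import
```

db2move reads and writes its files in the current directory, so run the export and the import from the same working directory. Remember that db2move moves only the tables; related objects such as views and triggers need the db2look DDL extract discussed later in this chapter.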
On the right-hand side of the Control Center main window you can view the tables of the PMDATA database (since the Tables folder is highlighted on the left-hand side). In the following graphic we explore the VTHRESHOLD table further by viewing its columns (Figure 10-27 on page 484). This is done by double-clicking the table you want to view details for. We have selected the Columns tab at the top left side of the window; the column attributes are listed under the window column headers. You can also view the table's Keys, Check Constraints, or general table attributes.
From this window, you can further explore the table by selecting the tabs in the upper portion of the window. We will now view the Primary Key(s) window for the CNODE table (Figure 10-28). This information is very useful when you are creating your own query statements; it reduces the time spent digging through hardcopy documentation.
You can also show any current SQL statement you are generating within the Control Center, estimate the size of the table, and add unique or foreign key associations from this window. Additional database information and useful help are available from within the Control Center as needed.
Reporting tools
In this section we outline some of the processes for getting the most out of the TotalStorage Productivity Center database using applications and tools outside of the TotalStorage Productivity Center product. This is not an exhaustive list, but these are things to keep in mind when you are making decisions about exporting data and creating and disseminating custom reports:
- IBM DB2 Express (or any Version 8 full-featured IBM DB2 product) is required on the query system (laptop, mobile computer, desktop)
- Portable programming languages and other tools for data extraction/parsing:
  - REXX
  - C and C++ (can be compiled and disseminated in an AIX environment)
  - ESSCLI (asset/capacity data only)
  - CLI
  - Python
  - QMF
- Spreadsheet applications:
  - Microsoft Excel
  - IBM Lotus 1-2-3
- Quick print-to-screen reports: parsed and formatted SQL query output
- Data output types: data to files (compressed and uncompressed); binary, ASCII, and so on
- Integrated replication administration
- DDL statements to easily create, drop, and alter data source mappings, users, data types, and functions (user-defined and built-in)
- Excellent performance and intelligent use of pushdown and remote query caching
Refer to the following Web site for more information about IBM DataJoiner:
http://www.ibm.com/software/data/datajoiner/
You can download the free QMF for Windows Try and Buy version from the following Web site:
http://www-3.ibm.com/software/data/qmf/reporter/june98/downloads.html
There are numerous other tools and applications, such as IBM DB2 Intelligent Miner, IBM Object REXX, and LotusScript, which contain powerful scripting and/or report formatting capabilities and can access DB2 UDB on UNIX or Intel platforms, IBM eServer iSeries, and z/OS, as well as any database manager connected to DataJoiner. Refer to the following Web sites for more information about these tools: DB2 Intelligent Miner:
http://www.ibm.com/software/data/iminer
Object REXX:
http://www.ibm.com/software/awdtools/obj-rexx/
LotusScript:
http://www.ibm.com/software/data/db2/db2lotus/db2lscpt.htm
There is no direct way to print the built-in reports or save the report files directly from TotalStorage Productivity Center. However, you can issue standard SQL statements to extract the data; all asset, capacity, and performance data is available in the form of DB2 tables. The DB2 UDB management tools will help you use your table data efficiently.
Usage Notes: The db2move tool exports, imports, or loads user-created tables. If a database is to be duplicated from one operating system to another, db2move only helps you to move the tables; you also need to move all other objects associated with the tables, such as aliases, views, triggers, user-defined functions, and so on. db2look is another DB2 UDB tool that helps you easily move some of these objects by extracting the Data Definition Language (DDL) statements from the database. When the export, import, or load APIs are called by db2move, the FileTypeMod parameter is set to lobsinfile; that is, LOB data is kept in files separate from the PC/IXF files. There are 26 000 file names available for LOB files. The LOAD action must be run locally on the machine where the database and the data file reside. When the load API is called by db2move, the CopyTargetList parameter is set to NULL; that is, no copying is done. If logretain is on, the load operation cannot be rolled forward later: the tablespace where the loaded tables reside is placed in backup pending state and is not accessible, and a full database backup, or a tablespace backup, is required to take the tablespace out of backup pending state.
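A minimal sketch of extracting a database's tables with db2move and its DDL with db2look follows; the output file name is an illustrative assumption:

```
db2move PMDATA export
db2look -d PMDATA -e -o pmdata_ddl.sql
```

The -e flag tells db2look to extract the DDL statements needed to reproduce the database objects, and -o directs them to the named file, which can then be replayed against a test database before importing the db2move output.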
server-level configuration data, generated at the start of Performance Data Collection), replacing mmdd with the date of the extract. Export the VPVPD table data to a file:

export to c:\ibmout\vpvpdmmdd.txt of del select * from vpvpd

6. Issue the following command to extract a specific day's worth of data from the VPCRK table (logical array-level performance data), substituting the date to be extracted and making the same substitution in the file name. VPCRK discrete table output directed to a file:

export to c:\ibmout\vpcrkmmdd.txt of del select * from vpcrk where pc_date_b = 'mm/dd/yyyy'

Note: Be patient while this process takes place; the prompt returns when the process is complete. The complexity of your SQL statement, the amount of data to be extracted, and the background host processor load all affect how quickly the command completes.
Tip: You can use the FETCH clause in your SQL statements when testing your queries and scripts. It limits large script output to however many rows you define in the FETCH clause. Remember to remove the FETCH clause when your testing is complete so that your scripts return the full output. The clause is placed as the last line of your SQL statement (note the semicolon, which indicates to the CLP the end of the SQL statement):

fetch first 10 rows only;
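For example, appended to the VPCRK extract above, the clause caps the output at ten rows while you verify the statement:

```sql
select * from vpcrk
where pc_date_b = '2004-06-08'
fetch first 10 rows only;
```

Once the query returns the expected columns and values, drop the last line and rerun it to produce the full extract.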
Planning considerations
Planning is one of the most important areas to consider before beginning database backups. We cover the factors that should be weighed against one another in planning for recovery, for example, the type of database, backup windows, and the relative speed of backup and recovery methods. We also introduce various backup methods.
In general terms, DB2 offers a number of options for backup and recovery management to meet the needs of a wide range of applications. The simpler backup and recovery options provide data protection with minimal administrator skill or effort. Other, more powerful options give greater levels of data protection but require more administrator skill and more effort to maintain. If your organization already has a high level of skill with DB2 or other relational databases, you may already have standard operating procedures for protecting databases. If your organization is less skilled in this area, you may want to choose a simple backup and recovery process that doesn't require a lot of new administrator skill or effort.
Speed of recovery
If you ask users how quickly they want you to be able to recover lost data, they usually answer immediately. In practice, however, recovery takes time. The actual time taken depends on a number of factors, some of which are outside your control (for example, hardware may need to be repaired or replaced). Nevertheless, there are certain things that you can control and that will help to ensure that recovery time is acceptable:
- Develop a strategy that strikes the right balance between the cost of backup and the speed of recovery.
- Document the procedures necessary to recover from the loss of different groups or types of data files.
- Estimate the time required to execute these procedures (and do not forget the time involved in identifying the problem and the solution).
- Set user expectations realistically, for example, by publishing service levels that you are confident you can achieve.
Database Logging
In DB2 UDB databases, log files are used to keep records of all data changes; they are specific to DB2 UDB activity. Logs record the actions of transactions. If there is a crash, logs are used to replay and redo committed transactions during recovery. Logging is always on for regular tables in DB2 UDB:
- It is possible to mark some tables or columns as NOT LOGGED.
- It is possible to declare and use USER temporary tables.
There are two kinds of logging:
Chapter 10. Database management and reporting
- Circular logging (default). This is the TotalStorage Productivity Center database default logging type.
- Archive logging.
In addition, capture logging is available for replication purposes. Each type of logging corresponds to the method of recovery you want to perform. Circular logging is used if the maximum recovery you want to perform is crash or restore recovery. Archive logging is used if you want to be able to perform rollforward recovery.
Note: IBM does not recommend or support the use of archival logging with the TotalStorage Productivity Center product database.
Circular logging
Circular logging is the default behavior when a new database is created (the logretain database configuration parameter is set to NO). With this type of logging, only full, offline backups of the database are valid. As the name suggests, circular logging uses a ring of online logs to provide recovery from transaction failures and system crashes. The logs are used in a round-robin fashion and retained only to the point of ensuring the integrity of current transactions. Circular logging does not allow you to roll a database forward through transactions performed after the last full backup operation; all changes occurring since the last backup operation are lost. Only crash recovery and restore recovery can be performed with this type of logging. Active logs are used during crash recovery to prevent a failure (system power or application error) from leaving a database in an inconsistent state. The data changes are recorded in the log files, and when all the units of work in a particular log file are committed or rolled back, the file can be reused. The number of log files used by circular logging is defined by the logprimary and logsecond database configuration parameters. If there are units of work running in a database using all the primary log files without reaching a point of consistency, secondary log files are allocated one at a time. Figure 10-29 shows the circular logging log path.
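These parameters can be inspected from the DB2 command line processor, and adjusted on a scratch copy of the database if you want to experiment (per the earlier caution, not on the production TotalStorage Productivity Center database). The values below are illustrative, not recommendations:

```
db2 get db cfg for PMDATA
db2 update db cfg for PMCOPY using LOGPRIMARY 6 LOGSECOND 4
```

The GET DB CFG output lists the current logging parameters, including the log path, logprimary, and logsecond values discussed above.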
Archive logging
Archive logging is used specifically for rollforward recovery. You can configure this logging mode by setting the logretain database configuration parameter to RECOVERY. Rollforward recovery can use both archived logs and active logs to rebuild a database or a tablespace either to the end of the logs or to a specific point in time. The rollforward utility achieves this by reapplying committed changes found in the following three types of log files:
- Active logs: Crash recovery also uses active logs, to place the database into a consistent state. They contain records of transactions that have not been committed, as well as committed transaction information that has not yet been written to the database on disk. You can locate active log files in the LOGPATH directory.
- Online archived logs: When the changes in an active log are no longer needed for normal processing, the log is closed and becomes an archived log. An archived log is said to be online when it is stored in the database log path directory (see Figure 10-30).
- Offline archived logs: An archived log is said to be offline when it is no longer found in the database log path directory.
When you want to use archive logging (see Figure 10-31 on page 494), you must make provision for the logs to be stored away from the database. This is done in DB2 UDB by specifying a userexit parameter and interfacing to a suitable archive manager. Full documentation is supplied in the DB2 manuals (see Online resources on page 528).
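For a database other than the TotalStorage Productivity Center database (where archival logging is not supported), enabling this mode might look like the following sketch; MYDB is a placeholder name. Note that after logretain is enabled, the database is placed in backup pending state, so a full backup is required before it can be used:

```
db2 update db cfg for MYDB using LOGRETAIN RECOVERY USEREXIT ON
db2 backup database MYDB to C:\db2_backups
```

From that point on, closed log files are retained (and passed to the user exit) rather than reused, making rollforward recovery possible.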
Database recovery
A database restore recreates the database from a backup; the database will exist as it did at the time the backup completed. If archival logging was used before the database crash, it is then possible to roll forward through the log files to reapply any changes made since the backup was taken, either to the end of the logs or to a specific point in time. The granularity available for recovering the last transactions needs to be weighed against database performance. Important: Log files are just as important as the backup files. It is not possible to restore the database without the log files.
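From the DB2 command line processor, a restore followed by a rollforward might look like the following sketch; the database name is taken from this chapter's examples, and the rollforward step applies only if archive logging was in effect:

```
db2 restore database PMDATA from C:\db2_backups
db2 rollforward database PMDATA to end of logs and stop
```

With circular logging (the TotalStorage Productivity Center default), only the RESTORE step applies; the database is returned to the state of the last full offline backup.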
File: database_list. This file contains a line for each database to back up. Depending on the TotalStorage Productivity Center components you have installed, this list may vary; you can use the DB2 Control Center to establish the full list of databases in your installation. The backup data will reside in C:\db2_backups in this example; you need to create this directory before using this process.
Example 10-5
backup database DIRECTOR to C:\db2_backups without prompting;
backup database DMCOSERV to C:\db2_backups without prompting;
backup database ELEMCAT to C:\db2_backups without prompting;
backup database ESSHWL to C:\db2_backups without prompting;
backup database PMDATA to C:\db2_backups without prompting;
backup database REPMGR to C:\db2_backups without prompting;
backup database TOOLSDB to C:\db2_backups without prompting;
File: TPC_backup.bat. This is the script you run. It stops the IBM Director service, which closes all connections to the DB2 databases and allows DB2 to take an offline backup. The script then restarts the IBM Director service.
Example 10-6

@ECHO ON
@REM This is a sample backup script
@REM to backup TotalStorage Productivity Center
@REM for Disk and Replication
@REM ------------------------------------------
net stop "IBM Director Support Program"
@REM Starting backup of DB2 databases
@REM --------------------------------
C:\PROGRA~1\IBM\SQLLIB\BIN\db2cmd.exe /c /w /i db2 -tvf C:\scripts\database_list
@REM Restarting TotalStorage Productivity Center
@REM -------------------------------------------
net start "IBM Director Support Program"
Appendix A.
The VPVPD table columns include M_MACH_SN, M_MACH_TY, and P_CTIME (TIME).

The logical array table begins with M_MACH_SN and includes the following columns:
- The cluster number for this logical array (SMALLINT NOT NULL).
- M_CARD_NUM (SMALLINT NOT NULL): card number of the adapter associated with this logical array.
- M_LSS_LA (INTEGER NOT NULL): an ESS internally generated logical subsystem identifier.
- M_ARRAY_ID (CHAR(8) NOT NULL): an ESS internally generated logical array identifier.
- The SSA loop identifier (CHAR(1); for example, A or B) associated with the disk group containing this logical array. If this ESS has the AAL feature enabled, this field contains the value X.
- M_GRP_NUM (SMALLINT): identifying number of the disk group containing this logical array.
- M_DISK_NUM (SMALLINT): disk number of the disk group (and final identifier of the logical array) if an independent disk, 0 otherwise.
- M_DBL_WIDE (DEFAULT 'S'): attribute of an array; S if single wide strip size (32K), D if double wide strip size (64K).
- M_DDM_NUM (SMALLINT NOT NULL DEFAULT -1): number of physical disks being used by the logical array, excluding spares.
- M_STOR_TYPE: storage type of the logical array; 0 - JBOD, 1 - RAID5, 2 - RAID10.
- M_DDM_SIZE (INTEGER NOT NULL DEFAULT -1): smallest capacity, in units of GB*10, among physical disks used by the logical array.
- M_DDM_SPEED (INTEGER NOT NULL DEFAULT -1): slowest speed, in RPM, among physical disks used by the logical array.

The logical volume table begins with M_MACH_SN and includes the following columns:
- An internally generated logical subsystem identifier (INTEGER NOT NULL).
- M_VOL_NUM (INTEGER NOT NULL): identifying number of this logical volume (and lowest level identifier of the logical volume).
- M_VOL_TY (CHAR(1)): character F if an open systems (fixed block) volume, C if an S/390 volume.
- M_VOL_ADDR (CHAR(8)): LUN serial number if the logical volume is an open systems (fixed block) volume, SSID plus base device address if an S/390 volume.
- PC_INDEX.
- PC_DATE_B (DATE NOT NULL) and PC_TIME_B (TIME NOT NULL): the date and time of day that this sample time period began.
- PC_DATE_E (DATE): the date that this sample time period ended.
Columns      Type      Description
PC_TIME_E    TIME      The time of day that this sample time period ended (that is, performance counters were collected again).
PC_N_IO_R    INTEGER   Number of normal (non-sequential) I/O read requests (command chains that contained at least one search or read command but no write command) in this time period for this logical volume.
PC_N_IO_W    INTEGER   Number of normal (non-sequential) I/O write requests (command chains that contained at least one write command).
PC_N_CH_R    INTEGER   Number of cache hits for normal (non-sequential) I/O read requests (normal read command chains that were completed without requiring access to any DASD).
PC_N_CH_W    INTEGER   Number of cache hits for normal (non-sequential) I/O write requests (normal write command chains that were completed without requiring access to any DASD).
PC_S_IO_R    INTEGER   Number of sequential I/O read requests (sequential mode command chains that contain at least one search or read command but no write commands).
PC_S_IO_W    INTEGER   Number of sequential I/O write requests (sequential mode command chains that contain at least one write command).
PC_S_CH_R    INTEGER   Number of cache hits for sequential I/O read requests (sequential mode read command chains that were completed without requiring access to any DASD).
PC_S_CH_W    INTEGER   Number of cache hits for sequential I/O write requests (sequential mode write command chains that were completed without requiring access to any DASD).
PC_D2C       INTEGER   Number of disk to cache track transfers for non-sequential I/O requests (number of tracks transferred successfully from DASD to cache, excluding sequential mode next track promotions).
PC_SEQ_D2C   INTEGER   Number of disk to cache track transfers for sequential I/O requests (number of tracks transferred successfully from DASD to cache due to sequential mode next track promotions).
PC_C2D       INTEGER   Number of cache to disk track transfers (number of tracks transferred from cache to DASD asynchronous to transfers from the channel).
Columns       Type      Description
PC_RHR_AVG    SMALLINT  Cache hit ratio for read I/Os (total number of cache hits for read requests / total number of read requests).
PC_WHR_AVG    SMALLINT  Cache hit ratio for write I/Os (total number of cache hits for write requests / total number of write requests).
PC_THR_AVG    SMALLINT  Overall cache hit ratio (total number of cache hits for all requests / total number of requests).
PC_SHR_AVG    SMALLINT  Cache hit ratio for sequential I/Os (total number of cache hits for sequential requests / total number of sequential requests).
PC_NHR_AVG    SMALLINT  Cache hit ratio for normal (non-sequential) I/Os (total number of cache hits for non-sequential requests / total number of non-sequential requests).
PC_RMR_IO     INTEGER   Number of record mode read I/O requests (number of command chains associated with a record access mode read operation, where the chain contains no write commands).
PC_RMR_CH     INTEGER   Number of record mode read cache hits (number of record mode read requests which were completed without requiring any access to DASD).
PC_RMRHR_AVG  SMALLINT  Cache hit ratio for record mode reads (number of record mode read cache hits / number of record mode read requests).
PC_DFW_IO     INTEGER   Number of DASD fast write I/O requests (same as normal write I/O requests).
PC_DFW_DELAY  INTEGER   Number of DASD fast write-delayed requests (requests of this type delayed due to NVS space constraints).
The remaining columns (types SMALLINT, SMALLINT, SMALLINT NOT NULL, INTEGER, SMALLINT) record: (DASD fast write-delayed requests / total I/O requests) * 100; the number of seconds in this time period; the percent (0 - 100) of the time period of this sample (in the hour of the start time); the internally generated identifier of the creator of this record; and zero if normal, a negative value if the location of this logical volume cannot be identified using the VPCFG and VPVOL tables.
Additional columns include:
- The number of quick-write promote operations (INTEGER NOT NULL).
- M_CARD_NUM (SMALLINT NOT NULL): card number of the adapter associated with this logical volume.
- The SSA loop identifier (CHAR(1) NOT NULL; for example, A or B) associated with the disk group containing the logical volume.
- M_GRP_NUM (SMALLINT NOT NULL): identifying number of the disk group containing this logical volume.
- M_DISK_NUM (SMALLINT NOT NULL): disk number of the disk group, 0 if not an independent disk.
- M_VOL_TY (CHAR(1)): character F if an open systems (fixed block) volume, C if an S/390 volume.
- M_VOL_ADDR (CHAR(8)): LUN serial number if the logical volume is an open systems (fixed block) volume, SSID plus base device address if an S/390 volume.
- A time-level flag (CHAR(1) NOT NULL): S for sample or H for hourly statistics.
- The device date (DATE NOT NULL) and device time of day (TIME NOT NULL) that this sample time period began.
- PC_DEV_DATE_E (DATE) and PC_DEV_TIME_E (TIME): the device date and time of day that this sample time period ended.
Appendix B.
Worksheets
This appendix contains worksheets to be used during the planning and installation of the TotalStorage Productivity Center. The worksheets are meant as examples, so you can decide not to use them if, for example, you already have all or most of the information collected somewhere. If the tables are too small for your handwriting, or you want to store the information in an electronic format, use a word processor or spreadsheet application and our examples as a guide to create your own installation worksheets. In this appendix you will find the following worksheets:
- User IDs and passwords
- Storage device information:
  - IBM Enterprise Storage Server
  - IBM FAStT
  - IBM SAN Volume Controller
In Table B-2, simply mark whether each manager or component is going to be installed on this machine.

Table B-2 Managers/Components installed

Manager/Component                      Installed (y/n)?
Productivity Center for Disk
Productivity Center for Replication
Productivity Center for Fabric
Productivity Center for Data
Tivoli Agent Manager
DB2
WebSphere
Record the password for each key file used during the installation:

Key file                 Password
MDMServerKeyFile.jks
MDServerTrusFile.jks
agentTrust.jks
Enter the user IDs and passwords that you used during the installation in Table B-4 below. Depending on the selected managers and components, some of the lines will not be used for this machine.

Table B-4 User IDs used on this machine (for each element, enter the user ID and password you used)

Element            Default/recommended user ID
Suite Installer    Administrator
DB2                db2admin (1)

The remaining default/recommended user IDs in the table, Administrator (1), manager (2), AgentMgr (2), itcauser (2), and tpcsuid (1), apply to the IBM WebSphere and Host Authentication elements.

1. This account can have whatever name you like.
2. This account name cannot be changed.
3. The DB2 administrator user ID and password are used here; see Fabric Manager User IDs on page 51.
The storage device worksheet also records both IP addresses and the LIC level.
Appendix C.
Event management
This appendix contains additional information about the IBM Director options that can be used to build Event Action Plans. This information complements the information in Event Action Plan Builder on page 215.
command. Actions that can be taken in response to an event are created using the predefined action templates described in Event Actions on page 519.
Creating an Event Action Plan on page 215 describes how to associate a predefined event filter with an event action plan.
Any
By default, Any is selected for all filtering categories, indicating that all filtering criteria apply. You must deselect Any before you can select or enter filtering criteria for a specific filtering category.
Event Type
The Event Type tab is the most important tab as it is here that you select the events upon which you want the event action plan to be activated. It is used to specify the source or sources of the events that are to be processed by this filter.
Severity
Identifies the urgency of the event. Severity is typically used in action plans because it identifies potentially urgent problems requiring immediate attention. You can select multiple levels of severity as filtering criteria; a logical OR applies to multiple selections. For example, if you select Fatal and Critical, the filtering criteria match if the originator of the event classifies the event as Fatal or as Critical. Severity levels, in order from most severe to least severe, are:
- Fatal: The application that issued the event has assigned a severity level indicating that the source of the event has already caused the program to fail and should be resolved before the program is restarted.
- Critical: The application that issued the event has assigned a severity level indicating that the source of the event may cause program failure and should be resolved immediately.
- Minor: The application that issued the event has assigned a severity level indicating that the source of the event should not cause immediate program failure, but should be resolved.
- Warning: The application that issued the event has assigned a severity level indicating that the source of the event is not necessarily problematic, but may warrant investigation.
- Harmless: The application that issued the event has assigned a severity level indicating that the event is for information only; no potential problems should occur.
- Unknown: The application that generated the event did not assign a severity level.
Day/Time
Enables you to specify day and time ranges for a filter. Specifying a day and time range in a filter adds control over when actions are run and, therefore, when they are not run. Use the pull-down menus to select values in each category, then click the Add button when you finish the selections. Your settings are added to the selections pane. You can create as many day/time range entries as you like; each time you create an entry, click Add to add it to the list in the selections pane. To remove an entry from the selections pane, click the entry, then click the Delete button.
The time zone that applies to the day/time filtering entries is the time zone in which the IBM Director Server is located. If your console is not in the same time zone as the server, the difference in time zones is shown above the selections pane. For example, if the IBM Director Server is located in New York and your console is located in California, the time zone displayed and used is Eastern Standard Time (EST), and the following is displayed above the selections pane: Server Time - Local Time = 3 Hours.
- Day of the Week: Use the pull-down menu to select the day of the week to which this filter is to apply. Weekday (Monday - Friday) and weekend (Saturday and Sunday) selections are available.
- Starting Time: Use the pull-down menu to select the starting time of an interval within which this filter is active.
- Ending Time: Use the pull-down menu to select the ending time of an interval within which this filter is active.
- Add: Adds your day and time selections to the list in the selections pane. You can add multiple day/time entries to the list.
- Delete: Deletes a day/time entry from the list of entries in the selections pane. To delete an entry, select it, then click this button.
- Block queued events: Select this check box to avoid filtering on events that had to be queued for transmission to the IBM Director Server.
Multiple events can be queued for transmission to the IBM Director Server if the managed system for which the event was generated cannot send the event at the time of its occurrence. This option can be useful if the timing of the event is important, or if you want to avoid filtering on multiple queued events that are sent all at once when the IBM Director Server becomes accessible.
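A sketch of how such a day/time window could be evaluated, using the server-time rule and the New York/California example above. The function and names are illustrative assumptions, not IBM Director code:

```python
from datetime import datetime, time, timedelta

# Hypothetical sketch of day/time filtering. The real evaluation happens
# inside the IBM Director Server, in the server's own time zone.

WEEKDAYS = {0, 1, 2, 3, 4}          # Monday-Friday
WEEKEND = {5, 6}                    # Saturday, Sunday

def in_window(event_local: datetime, server_offset_hours: int,
              days: set, start: time, end: time) -> bool:
    """Return True if the event falls inside one day/time range entry.

    event_local         -- timestamp as seen at the console
    server_offset_hours -- (server time - console time), e.g. 3 for a
                           New York server and a California console
    """
    # Convert to the server's clock, since filtering uses server time.
    server_dt = event_local + timedelta(hours=server_offset_hours)
    return (server_dt.weekday() in days
            and start <= server_dt.time() <= end)

# An event at 07:00 console time (California) is 10:00 server time
# (New York), inside a weekday 09:00-17:00 window.
evt = datetime(2005, 9, 5, 7, 0)     # a Monday
print(in_window(evt, 3, WEEKDAYS, time(9, 0), time(17, 0)))   # True
```

The same entry evaluated for a Saturday event returns False, because the day-of-week check fails before the time range is even considered.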
Category
The category specifies the resolution status of the event as a filtering criterion.
Alert
Signifies a problem.
Resolution
Signifies that the problem has been resolved and is no longer a problem.
Extended Attributes
Enables you to qualify the filtering criteria using additional keywords and keyword values that can be associated with some categories of events, such as SNMP. These additional keywords and corresponding values are referred to as the event's extended attributes. This category can be particularly useful for narrowing the filtering criteria to a lower level of detail, for example, to isolate one or more values originating from a specific system.

You can also view the extended attributes of a specific event by opening the Event Log task in the Tasks pane of the Director Console and selecting an appropriate event from the list. The event's extended attributes, if present, are displayed at the bottom of the Event Details panel, below the Sender Name category.

Because event types are hierarchical, an event with a particular event type has its associated extended attributes as well as the extended attributes of its parent event types. For example, the event type Director.Topology.Offline has extended attributes for Director.Topology.Offline and Director.Topology.

You can specify keywords and values in Extended Attributes only if one event type is selected. If the current event type is set to Any, Extended Attributes is disabled. Extended Attributes is also disabled if multiple event types are selected. If the Extended Attributes panel is enabled for a specific event type but no keywords are listed, the IBM Director Server is not aware of any keywords that can be used for filtering.

An event meets the filtering criteria as follows: if you select multiple keywords, the event must match every selected keyword (Boolean AND); if you specify multiple values for a single keyword, the value received must match at least one of the values specified for that keyword (Boolean OR).

Any
By default, this check box is selected, indicating to filter on all extended attributes. Deselect Any to select specific keyword/value pairs.
Keywords
Select the keywords on which you want to filter. If no keywords are listed, the IBM Director Server has not been made aware of, or has not published, the keywords for the selected event category. You can select multiple keywords.
Values
Specifies a value for the keyword on which you want to filter. You can specify multiple values, but you cannot specify a range of values. To enter multiple values for a single keyword, click Add each time you want to add a value. Boolean OR is used to determine whether an event's extended attributes meet the filtering criteria for multiple values of a single keyword.
Appendix C. Event management
If you enter more than one keyword/value pair, Boolean AND is used to determine whether an event's extended attributes meet the filtering criteria (all keyword values must be true).
Case Sensitive
Select this option if the specified keyword value should be filtered as case sensitive.
Update
Allows you to change the value of a selected keyword/value pair. Select a keyword/value pair, select Values to change the corresponding value, then select Update to make the change take effect.
Delete
Deletes a selected keyword/value pair as a selection criterion.
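The matching rules just described — Boolean OR across the values of one keyword, Boolean AND across keywords — amount to the following check. This is an illustrative sketch with hypothetical names, not IBM Director code:

```python
# Hypothetical sketch of extended-attribute matching: Boolean OR across
# the values of a single keyword, Boolean AND across keywords.

def matches(event_attrs: dict, criteria: dict, case_sensitive=False) -> bool:
    """criteria maps each selected keyword to the list of accepted values."""
    def norm(s):
        return s if case_sensitive else s.lower()
    for keyword, accepted in criteria.items():
        received = event_attrs.get(keyword)
        if received is None:
            return False                      # AND: every keyword must match
        if norm(received) not in {norm(v) for v in accepted}:
            return False                      # OR: any one value suffices
    return True

attrs = {"System_Name": "srv01", "Community": "public"}
# OR within a keyword: either srv01 or srv02 passes.
print(matches(attrs, {"System_Name": ["srv01", "srv02"]}))        # True
# AND across keywords: Community must also match.
print(matches(attrs, {"System_Name": ["srv01"],
                      "Community": ["private"]}))                  # False
```

The Case Sensitive option corresponds to the `case_sensitive` flag: when it is off, values compare equal regardless of letter case.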
Frequency
This page appears only for Duplication and Threshold Event Filters.
Interval
For Duplication Event Filters, the Interval field can be used without the Count field (Count = 0). Interval specifies a window of time that begins when an event meets the filtering criteria. The first occurrence of an event that meets the criteria triggers the associated actions and starts a countdown of the units that define the interval. For example, if you enter 10 and select seconds, a 10-second timer starts when an event meets the filtering criteria. If Count is set to 0, all other instances of an event meeting the criteria do not trigger associated actions during the interval.

If both Interval and Count are set to values greater than 0, then after the first occurrence of an event meets the filtering criteria, the value entered in Count (n) specifies the number of times an event must meet the criteria within the interval before associated actions can be triggered again. If an event meets the criteria for the nth time within the interval, the next time (n+1) an event meets the criteria, associated actions are triggered, the count is reset, and the interval is reset.

For Threshold Event Filters, the Interval field must be used in conjunction with the Count field. Interval specifies a window of time that begins when an event meets the filtering criteria. The first occurrence of an event that meets the criteria does not trigger associated actions, but starts a countdown of the units that define the interval. For example, if you enter 10 and select minutes, a 10-minute timer starts when an event meets the filtering criteria. The value entered in Count specifies the number of times (n-1) an event has to meet the criteria before associated actions are triggered. The first n-1 events that occur within the interval do not cause associated actions to trigger.
The nth time an event meets the criteria within the interval, associated actions are triggered, the count is reset, and the interval is reset.
Count
For both Duplication and Threshold Event Filters, the Count field can be used without the Interval field (value = 0 for the selected type of interval).

For Duplication Event Filters, Count must be an integer from 0 to 100 and specifies the number of duplicate events to ignore after the first occurrence of an event meets the filtering criteria. For example, if you enter 5 in Count, an event must meet the criteria six times after the first event meets the criteria to trigger associated actions again. If you specify an interval and Count is set to 0, the first time the criteria are met the associated actions trigger, the interval countdown begins, and no actions are triggered during the interval.
For Threshold Event Filters, Count must be an integer from 1 to 100. Count specifies the number (n-1) of events that must meet the filtering criteria before associated actions are triggered. The first n-1 events are ignored. For example, if you enter the value 5 in Count, the first 4 duplicate events are ignored and the fifth event triggers associated actions.
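The Threshold filter's Count/Interval behavior described above can be simulated with a short sketch. The class and names are hypothetical, not part of the product:

```python
# Hypothetical simulation of a Threshold Event Filter's Count/Interval
# behavior: with Count = n, the first n-1 matching events inside the
# interval are ignored and the nth one triggers the actions.

class ThresholdFilter:
    def __init__(self, count, interval_seconds):
        self.count = count
        self.interval = interval_seconds
        self.window_start = None
        self.seen = 0

    def event(self, t):
        """Feed one matching event at time t (seconds); True if actions fire."""
        if self.window_start is None or t - self.window_start > self.interval:
            self.window_start = t            # start (or restart) the interval
            self.seen = 0
        self.seen += 1
        if self.seen == self.count:
            self.window_start = None         # trigger: reset count and interval
            self.seen = 0
            return True
        return False

f = ThresholdFilter(count=5, interval_seconds=600)
results = [f.event(t) for t in (0, 60, 120, 180, 240)]
print(results)   # first 4 ignored, 5th triggers: [False, False, False, False, True]
```

With `count=5`, five matching events within the 10-minute interval produce one trigger on the fifth event, after which both the count and the interval start over.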
System Variables
This page is enabled only if one or more system variables exist. You can create a system variable using the Set Event System Variable event action. System variables are user-defined keyword/value pairs that are known only to the local IBM Director Server. You can further qualify the filtering criteria by specifying a system variable. These user-defined system variables are not related to operating-system environment variables in any way. Refer to Understanding System Variables in the IBM Director Help for more information about how to use system variables.
Menu Function
Customize
Enables the creation of custom actions.
Add to Event Action Plan
Adds the action to the currently selected action plan.
Show Implementations
Displays the systems or groups to which this action has been applied.
Rename
Allows a new name to be assigned to this action.
Update
Makes it possible to modify the tasks performed by the action.
Delete
Removes the action. If the action is in use on a group or system, the software notifies the user and prompts for a second verification before removal.
Test
Executes the task(s) associated with the action.
Creating an action
Following are the steps to create an action.
1. From the Action pane, right-click Send an Event Message to a Console User, and click Customize. IBM Director sorts actions alphabetically and executes them in that order.
2. Fill in the fields using event data substitution variables (see Figure C-4 on page 521). For more information about event data substitution variables, see Event Data Substitution on page 521.
3. Select File → Save As to save the action, and enter the name of the action. In the example, the name System_text Event Message was used.
4. The new action now appears as a subentry under Send an Event Message to a Console User (see Figure C-5).
See the help associated with a specific event action template for information about where event data substitution can be used. The text of an event message can include keywords that are replaced with event data when the message is generated. When used in a message, a keyword must be preceded by the ampersand symbol (&). The keywords are:
&date
Specifies the date the event occurred.
&text
Specifies the text of the event, if available.
&severity
Specifies the severity of the event.
&system
Specifies the name of the system for which the event was generated.
&sender
Specifies the name of the system from which the event was sent. This keyword returns null if unavailable.
&group
Specifies the group to which the target system belongs and is being monitored. This keyword returns null if unavailable.
&category
Specifies a dotted representation of the event type using internal type strings.
&timestamp
Specifies the coordinated time of the event (milliseconds since 1/1/1970 12:00 AM GMT).
&rawsev
Specifies the non-localized string of event severity (FATAL, CRITICAL, MINOR, WARNING, HARMLESS, UNKNOWN).
&rawcat
Specifies the non-localized string of the event category (alert or resolution).
&corr
Specifies the correlator string of the event. Related events, such as those from the same monitor threshold activation, will match this.
&snduid
Specifies the unique ID of the event sender.
&prop:filename!propname
Specifies the value of the property string propname from property file filename (relative to \tivoliWg\classes).
&sysvar:varname
Specifies the event system variable varname. This keyword returns null if a value is unavailable.
&slotid:slot-id
Specifies the value of the event detail slot with the non-localized ID slot-id.
&md5hash
Specifies the MD5 hash code of the event data (a good event-specific unique ID).
&hashtxt
Specifies a full replacement for the field with an MD5 hashcode (32-character hex code) of the event text.
&hashtxt16
Specifies a full replacement for the field with a short MD5 hashcode (16-character hex code) of the event text.
&otherstring
Specifies the value of the detail slot with the localized label that matches otherstring. This keyword returns OTHERSTRING if unavailable.

Note: When you specify an event data substitution keyword containing more than one word, substitute the underscore character ("_") for each space between words. For example, to use the keyword "User Logon" you must enter "User_Logon" in the text of the event message. A sample entry containing this keyword might be: "User &User_Logon just logged on to the system."

Example of message text with event data substitutions:
Please respond to the event generated for &system, which occurred &date. The text of the event was &text with a severity of &severity.
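The substitution itself amounts to replacing each &keyword with the corresponding event detail. A minimal sketch, assuming a hypothetical details dictionary holding the event data:

```python
import re

# Hypothetical sketch of event data substitution: each &keyword in the
# message text is replaced with the corresponding event detail. Keyword
# names containing spaces are written with underscores (&User_Logon).

def substitute(template: str, details: dict) -> str:
    def repl(m):
        key = m.group(1).replace("_", " ")           # &User_Logon -> "User Logon"
        return str(details.get(key, m.group(0)))     # leave unknown keywords as-is
    return re.sub(r"&(\w+)", repl, template)

details = {"system": "srv01", "date": "9/15/2005",
           "text": "Disk threshold exceeded", "severity": "Warning"}
msg = substitute("Please respond to the event generated for &system, "
                 "which occurred &date. The text of the event was &text "
                 "with a severity of &severity.", details)
print(msg)
```

Unknown keywords are left in place here, whereas the product returns null or the literal string for some keywords; the sketch only illustrates the substitution mechanism.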
Related publications
The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this redbook.
IBM Redbooks
For information on ordering these publications, see How to get IBM Redbooks on page 528. Note that some of the documents referenced here may be available in softcopy only.

IBM TotalStorage Productivity Center: Getting Started, SG24-6490
IBM TotalStorage SAN Volume Controller, SG24-6423
IBM Tivoli Storage Resource Manager: A Practical Introduction, SG24-6886
IBM Tivoli Storage Area Network Manager: A Practical Introduction, SG24-6848
IBM TotalStorage Enterprise Storage Server Implementing ESS Copy Services with IBM Eserver zSeries, SG24-5680
IBM TotalStorage Expert Reporting: How to Produce Built-In and Customized Reports, SG24-7016
IBM TotalStorage Enterprise Storage Server Implementing ESS Copy Services in Open Environments, SG24-5757
DB2 Warehouse Management: High Availability and Problem Determination Guide, SG24-6544
DB2 UDB/WebSphere Performance Tuning Guide, SG24-6417
Up and Running with DB2 for Linux, SG24-6899
IBM TotalStorage Business Continuity Solutions Guide, SG24-6547
IBM TotalStorage Enterprise Storage Server PPRC Extended Distance, SG24-6568
Other Publications
DFSMS Advanced Copy Services, SC35-0428
z/OS DFSMSdfp Advanced Services, SC26-7400
IBM TotalStorage Enterprise Storage Server Web Interface Users Guide, SC26-7448
IBM TotalStorage Enterprise Storage Server Command-Line Interface Users Guide, SC26-7494
IBM TotalStorage SAN Multiple Device Manager Command-Line Interface Guide, SC26-7585
IBM TotalStorage SAN Multiple Device Manager Configuration Guide, SC26-7586
IBM TotalStorage SAN Multiple Device Manager CIM Agent Developers Reference, SC26-7587
527
Online resources
These Web sites and URLs are also relevant as further information sources: Storage Networking Industry Association Web site
http://www.snia.org/
The following Web site is useful for reference material concerning IBM TotalStorage Productivity Center and the products mentioned in this redbook.
http://www.storage.ibm.com/servers/storage/software/index.html
DB2 Universal Database for Linux, UNIX and Windows Technical Support
http://www-306.ibm.com/software/data/db2/udb/support.html
DB2 Technical Support, Version 8 Information Center and PDF product manuals
http://www-306.ibm.com/cgi-bin/db2www/data/db2/udb/winos2unix/support/v8pubs.d2w/en_main
Index
A
addess command 140 addessserver command 141 adduser command 143 agent manager 45 agentTrust.jks file 57 common agent 115 resource manager 115 TCP/IP ports 45 agentTrust.jks file 51, 57 archive logging 492493 associated CIM Agent device support matrix 508 data collection SVC,SVC data collection 260 data collection task 229, 231 Data Manager 44 database backup 490 database name 48, 75, 103 new database 114 new Element Catalog subcomponent database 103 new Replication Manager subcomponent database 104 database purge function 20 database purge task 453 database recovery time 491 database restore 494 DataJoiner 487 DB2 database purging 450 view a table 427 DB2 Command Center 457 DB2 Control Center 482 DB2 Cube Views 486 DB2 database connect to 454 DB2 database health 423 DB2 database-size monitoring 450 DB2 DataJoiner 485 DB2 Development Center 456 DB2 Development Tools 455 DB2 Event Analyzer 457 DB2 General Administration Tools 456 Control Center 456 Journal 456 Replication Center 456 Task Center 456 DB2 Health Center 424 DB2 host 180 DB2 Intelligent Miner 487 DB2 journal 424 DB2 logging 491 DB2 Monitoring Tools 457 DB2 report customized example 462 DB2 Tool Suite 453 command line processor 454 Command Line Tools 454 DB2 UDB Journal utility 423 DB2 user Id 56, 63 name 55, 81 DB2 Utilities Command Center 457 db2move tool 488 DDL 486 default directory 51, 63, 80 delete a gauge 256 Device Manager
B
block aggregation 28 BWN005921E message 281 BWN005922E message 281 BWN005996W message 286
C
certificates 51 CIM Agent 2931, 44, 49 agent code 31 client application 31 device 31 device provider 31 ESS CLI 125 ESS configuration 124 overview 32 Service Location Protocol 32 CIM Browser interface 148 CIM management model 26 CIM Object Manager 2931, 123 CIM-compliant 31 CIMOM 30 CIMOM communication 174 CIMOM SVC console 173 circular logging 492 Command Center 458, 486 Command Line Processor (CLP) 455, 486 command-line interface 407 communication protocols 42 Control Center 482 cpthresh 270 creating gauges 243 CSV format report 482 customized reports 482
D
DA benefit 38 Data agent 109
device discovery 181 discovery 181 LUN mapping 200 mdisk display 198, 205, 207 overview 17 Director trace logging 444 directory agent 33, 35 discovery 423 device 181 disk subject matter expert 10 display gauges 247, 254 distinguished name right-hand portion 83 distinguished name (DN) 83
F
Fabric agent 47, 109 Fabric Manager 44, 46, 56 fabric zoning 332 FAStT CIM Agent defining devices 161 SLP registration 164 FAStT device 509 FETCH clause 490 file aggregation 28 FlashCopy 357 flashsess command 411
G
gauge definition 12 gauge properties 254 gauges 20, 242 creating 243 delete 256 display 247, 254 exception 250 performance 243 properties 254
E
enable logging 435 enabling WAS trace 435 ESS CIM Agent 124 addess command 140 addessserver command 141 configuring 139 install 128 log files 135 post install 137 setuser interactive tool 143 verify ESS connectivity 144 verify install 138 ESS CIM agent SLP registration 153 ESS CIMOM CIM Browser interface 148 restart 142 SLP registration 146 telnet command 148 ESS CIMOM verification 144 ESS CLI 124 install 125 verification 128 verifyconfig 146 ESS data collection 229 ESS thresholds 257 ESS user authentication 444 esscli command 128 Event Action Plan 21 event filter 513 export and import action plans 221 Message Browser 220 Event Action Planner 512 actions 520 creating an action 215 event 512 event filter builder 514 Event Action Plans 422 Event Filter Builder 433 Event Filters 22 Event Log 431 Event Services 21 exception gauges 250
H
hard zoning 332 health monitoring 18 host name 54, 57, 71
I
IBM DB2 Universal Database Server 91 IBM Director 16 Event Action Plans 422 Event Services 22 IBM Director (ID) 46, 507 IBM Director event logs 422 IBM Director Scheduler device discovery 184 IBM FAStT 505 Use 505 IBM Object REXX 487 IBM Tivoli Common Agent 52 IBM Tivoli SAN Manager 18 IBM TotalStorage Open Software Family 1, 3 IBM WebSphere selection panel 89 IDs and passwords (IP) 45 in-band 334 inband discovery 10 IP address 48, 81, 506
J
JDBC 485 job scheduling facility 233
K
key files 51
L
Launch Device Manager 181 License Agreement 59 lscollection 270 lsfilter 270 lsgauge 270 lspair command 412 lsseq command 412 lssess command 409 LUN to host port mapping 200
M
manage replication sessions 400 mdisk group 205 mdisks display 198, 205, 207 multicast messages 34 multicast request 35 multicast traffic 38
Package Location 87 Productivity Center for Data 5 architecture 7 features 6 Productivity Center for Disk architecture 42 functions 11 gauge definition 12 Volume Performance Advisor 12 Productivity Center for Fabric 7 benefits 10 overview 7 Productivity Center for Replication 21 architecture 42 overview 12 provider component 30 Python 485
Q
QMF 485 QMF for Windows 486 Query Management Facility (QMF) 486
N
netstat command 132 next panel 68
R
raswatch 423 Redbooks Web site 528 Contact us xiii relational database management system 450 remote console 314 repcli command syntax 408 repcli utility 408 Replication Manager 21 CLI 407 Continuous Synchronous Remote Copy 385, 395 copyset 361 Copyset details 392 create a group 362 create a storage group 362 define storage group 368 delete a storage pool 373 freeze operations 434 groups 359 managing a storage pool 372 modify a storage group 366 overview 356 Point-in-Time Copy session 375 replica sessions 21, 356 restarting 435 sequence 361 Session Properties window 398 Sessions window 396 setting up 362 storage paths 375 storage pool 359 storage pool create 369 suspend a session 405 suspended status 398 synchronized state 404 tasks 358
O
offline archived logs 493 On Demand environment 23 out of band 334 outband discovery 9
P
Package Location 78 perfcli tool 445 performance database purge task 450 performance gauge 243 Performance Manager 19 command line interface 269 customized reports 482 Enable threshold button 258 ESS data collection 229 ESS data collection task 229 exporting data 482 function 228 gauges 20, 242 threshold filters 259 thresholds task 257 Volume Performance Advisor 21 perftool 269 ping command 144 pmcli 270 PMDATA table 483 Port Number 110 PPRC 357 Productivity Center 14, 43, 45 components 14
troubleshooting 434 verifying source-target relationship 379 view group properties 367 view storage pool properties 374 Replication Manager (RM) 45 Replication Manager problem determination 434 Replication Manager subcomponent 103 replication session 358 Replication subject matter expert 12 Resource Manager 46, 56, 115, 507 REXX 485 rmgauge 270
S
same time 109 other applications 111 Remote Fabric Agent Deployment 109 SAN islands 332, 353 SAN Manager 18 SAN Volume Controller thresholds 261 SAN Volume Controller (SVC) 44, 510 service agent 33, 35 Service Location Protocol 32 multicast 37 service agent 33 user agent 33 setdevice command 429 setessthresh/setsvcthresh 270 setfilter 270 setoutput 270 setoutput command 417 setuser command 143 setuser interactive tool 143 showgauge 270 showsess command 409 Simple Network Management Protocol (SNMP) 48 SLP active DA discovery 36 CIM Agent 31 configuration recommendation 39, 121 DA configuration 39 DA discovery 36 DA functions 37 directory agent configuration 39, 122 discovery requirements 174 environment configuration 39 ESS CIM agent 153 multicast messages 34 multicast request 35 multicast service request 36 passive DA discovery 37 registration 33 registration persistency 175 router configuration 39, 121 service agent 33 service attributes 34 service type 34 setting up DA 40 slp.conf 40, 150
starting 138 unicast 38, 120 user agent 34 User Datagram Protocol message 34 verify install 137 verifyconfig command 153 when to use DA 38, 121 SLP considerations 429 SLP discovery summary 174 slp logfile 430 SLP tracing 430 slp.conf file 40, 150, 430 slp.reg file 175 SMI-S 28 SNIA 27 soft zoning 332 SQL Assist 468 SQL command example 488 SQL scripts considerations 462 standards organizations 2, 26 startcimbrowser command 148 startesscollection 270 startsvccollection 270 stopcollection 270 stopsess command 416 storage device 508 important information 508 storage orchestration 3 Structured Query Language (SQL) 450, 486 Subsystem Device Driver (SDD) 45 suite installer 43, 45, 507 superuser account name 180 suspendsess command 415 SVC mdisk group 205 SVC CIMOM 166 console 166 Multiple Device Manager console account 167 register to SLP DA 173 SVC console account 167 verification 173 SVC data collection error 445 SWDATA 455
T
TCP/IP port 45 telnet command 148, 173 threshold checking 19 threshold properties 262 thresholds task 257 Tivoli Agent Manager 49, 77 Tivoli Event 55 Tivoli NetView 10, 46, 507 7.1.3 55 General Topology map service 46 Object Collection facility socket 47 Object Database 46 Object Database event socket 46 OVs_PMD management service 46
OVs_PMD request service 46 Pager 46 password 56 password only DB2 database 55 PMD service 46 release 55 SAN menu 55 Service 52 SnmpServer 47 Topology Manager 46 Topology Manager socket 46 trapd socket 46 Web server socket 47 Tivoli NetView 7.1.1 55 Tivoli NetView 7.1.3 55 Tivoli NetView password only 56 TotalStorage Productivity Center 4 database maintenance 483 database query 482 DB2 logging 492 Device Manager 18 Event Action Plan 21 export PM data 485 performance considerations 122 Performance Manager 19 remote console 314 report tools 485 SAN Manager 18 server 508 SQL commands 488 universal user 49 vdisk display 198, 205, 207 TotalStorage Productivity Center (TPC) 43, 45, 505 TotalStorage Productivity Center for Fabric enabling communications 337 install considerations 335 launching 352 overview 332 remote console 336 zoning API 336 TSANM 209, 212213
workload characteristics 273 workload profile 277 VPA overview 272 VPCCH table 426, 478 VPCLUS table 426 VPCRK table 426, 479, 489 VPVOL table 499 VPVPD table 488 VTHRESHOLD table 483
W
WAS trace 434 WAS trace control 442 WBEM browser 148 WEBM management model 26 WebSphere Application Server 51, 65 ikeyman utility 92 Information panel 81 WebSphere logfile 428 WebSphere startServer.log 428
X
XML 30
U
unicode 486 user agent 34 User Datagram Protocol message 34 user Id 49, 507 user name 50, 63, 71 Users IDs 505
V
vdisks display 198, 205, 207 verifyconfig command 146, 153 Volume Performance Advisor 12, 21 authentication 283 getting started 277 multiple recommendations 276 predefined workload profiles 278 recommendation process 275
Back cover
Install and customize Productivity Center for Disk Install and customize Productivity Center for Replication Use Productivity Center to manage your storage
IBM TotalStorage Productivity Center is designed to provide a single point of control for managing networked storage devices that implement the Storage Management Initiative Specification (SMI-S), including the IBM TotalStorage SAN Volume Controller, Enterprise Storage Server, and FAStT. TotalStorage Productivity Center includes the IBM Tivoli Bonus Pack for SAN Management, bringing together device management with fabric management, to help enable the storage administrator to manage the Storage Area Network from a central point. The storage administrator has the ability to configure storage devices, manage the devices, and view the Storage Area Network from a single point. This software offering is intended to complement other members of the IBM TotalStorage Virtualization family by simplifying and consolidating storage management activities. This IBM Redbook includes an introduction to the TotalStorage Productivity Center and its components. It provides detailed information about the installation and configuration of TotalStorage Productivity Center for Disk and TotalStorage Productivity Center for Replication and how to use them. It is intended for anyone wanting to learn about TotalStorage Productivity Center and how it complements an on demand environment and for those planning to install and use the product.