IBM TotalStorage SAN File System

SG24-7057-03

Charlotte Brooks, Huang Dachuan, Derek Jackson, Matthew A. Miller, Massimo Rosichini

International Technical Support Organization
January 2006

ibm.com/redbooks
Note: Before using this information and the product it supports, read the information in Notices on page xix.
Fourth Edition (January 2006)

This edition applies to Version 2, Release 2, Modification 2 of IBM TotalStorage SAN File System (product number 5765-FS2), as announced in October 2005. Note that pre-release code was used for the screen captures and command output; some minor details may vary from the generally available product.
© Copyright International Business Machines Corporation 2003, 2004, 2006. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents
Figures
Notices
  Trademarks
Preface
  The team that wrote this redbook
  Become a published author
  Comments welcome
Summary of changes
  December 2004, Third Edition
  January 2006, Fourth Edition

Part 1. Introduction to IBM TotalStorage SAN File System

Chapter 1. Introduction
  1.1 Introduction: Growth of SANs
  1.2 Storage networking technology: Industry trends
    1.2.1 Standards organizations and standards
    1.2.2 Storage Networking Industry Association
    1.2.3 The IBM approach
  1.3 Rise of storage virtualization
    1.3.1 What is virtualization?
    1.3.2 Types of storage virtualization
    1.3.3 Storage virtualization models
  1.4 SAN data sharing issues
  1.5 IBM TotalStorage Open Software Family
    1.5.1 IBM TotalStorage SAN Volume Controller
    1.5.2 IBM TotalStorage SAN File System
    1.5.3 Comparison of SAN Volume Controller and SAN File System
    1.5.4 IBM TotalStorage Productivity Center
    1.5.5 TotalStorage Productivity Center for Fabric
    1.5.6 TotalStorage Productivity Center for Data
    1.5.7 TotalStorage Productivity Center for Disk
    1.5.8 TotalStorage Productivity Center for Replication
  1.6 File system general terminology
    1.6.1 What is a file system?
    1.6.2 File system types
    1.6.3 Selecting a file system
  1.7 Filesets and the global namespace
  1.8 Value statement of IBM TotalStorage SAN File System

Chapter 2. SAN File System overview
  2.1 SAN File System product overview
  2.2 SAN File System V2.2 enhancements overview
  2.3 SAN File System V2.2.1 and V2.2.2 enhancements overview
  2.4 SAN File System architecture
  2.5 SAN File System hardware and software prerequisites
    2.5.1 Metadata server
    2.5.2 Master Console hardware and software
    2.5.3 SAN File System software
    2.5.4 Supported storage for SAN File System
    2.5.5 SAN File System engines
    2.5.6 Master Console
    2.5.7 Global namespace
    2.5.8 Filesets
    2.5.9 Storage pools
    2.5.10 Policy based storage and data management
    2.5.11 Clients
    2.5.12 FlashCopy
    2.5.13 Reliability and availability
    2.5.14 Summary of major features
Part 2. Planning, installing, and upgrading

Chapter 3. MDS system design, architecture, and planning issues
  3.1 Site infrastructure
  3.2 Fabric needs and storage partitioning
  3.3 SAN File System volume visibility
    3.3.1 Uniform SAN File System configuration
    3.3.2 Non-uniform SAN File System configuration
  3.4 Network infrastructure
  3.5 Security
    3.5.1 Local authentication
    3.5.2 LDAP
  3.6 File sharing
    3.6.1 Advanced heterogeneous file sharing
    3.6.2 File sharing with Samba
  3.7 Planning the SAN File System configuration
    3.7.1 Storage pools and filesets
    3.7.2 File placement policies
    3.7.3 FlashCopy considerations
  3.8 Planning for high availability
    3.8.1 Cluster availability
    3.8.2 Autorestart service
    3.8.3 MDS fencing
    3.8.4 Fileset and workload distribution
    3.8.5 Network planning
    3.8.6 SAN planning
  3.9 Client needs and application support
    3.9.1 Client needs
    3.9.2 Privileged clients
    3.9.3 Client application support
    3.9.4 Clustering support
    3.9.5 Linux for zSeries
  3.10 Data migration
    3.10.1 Offline data migration
    3.10.2 Online data migration
  3.11 Implementation services for SAN File System
  3.12 SAN File System sizing guide
    3.12.1 Assumptions
    3.12.2 IP network sizing
    3.12.3 Storage sizing
    3.12.4 SAN File System sizing
  3.13 Planning worksheets
  3.14 Deploying SAN File System into an existing SAN
  3.15 Additional materials
Chapter 4. Pre-installation configuration
  4.1 Security considerations
    4.1.1 Local authentication configuration
    4.1.2 LDAP and SAN File System considerations
  4.2 Target Machine Validation Tool (TMVT)
  4.3 SAN and zoning considerations
  4.4 Subsystem Device Driver
    4.4.1 Install and verify SDD on Windows 2000 client
    4.4.2 Install and verify SDD on an AIX client
    4.4.3 Install and verify SDD on MDS
  4.5 Redundant Disk Array Controller (RDAC)
    4.5.1 RDAC on Windows 2000 client
    4.5.2 RDAC on AIX client
    4.5.3 RDAC on MDS and Linux client

Chapter 5. Installation and basic setup for SAN File System
  5.1 Installation process overview
  5.2 SAN File System MDS installation
    5.2.1 Pre-installation settings and configurations on each MDS
    5.2.2 Install software on each MDS engine
    5.2.3 SUSE Linux 8 installation
    5.2.4 Upgrade MDS BIOS and RSA II firmware
    5.2.5 Install prerequisite software on the MDS
    5.2.6 Install SAN File System cluster
    5.2.7 SAN File System cluster configuration
  5.3 SAN File System clients
    5.3.1 SAN File System Windows 2000/2003 client
    5.3.2 SAN File System Linux client installation
    5.3.3 SAN File System Solaris installation
    5.3.4 SAN File System AIX client installation
    5.3.5 SAN File System zSeries Linux client installation
  5.4 UNIX device candidate list
  5.5 Local administrator authentication option
  5.6 Installing the Master Console
    5.6.1 Prerequisites
    5.6.2 Installing Master Console software
  5.7 SAN File System MDS remote access setup (PuTTY / ssh)
    5.7.1 Secure shell overview

Chapter 6. Upgrading SAN File System to Version 2.2.2
  6.1 Introduction
  6.2 Preparing to upgrade the cluster
  6.3 Upgrade each MDS
    6.3.1 Stop SAN File System processes on the MDS
    6.3.2 Upgrade MDS BIOS and RSA II firmware
    6.3.3 Upgrade the disk subsystem software
    6.3.4 Upgrade the Linux operating system
    6.3.5 Upgrade the MDS software
  6.4 Special case: upgrading the master MDS
  6.5 Commit the cluster upgrade
  6.6 Upgrading the SAN File System clients
    6.6.1 Upgrade SAN File System AIX clients
    6.6.2 Upgrade Solaris/Linux clients
    6.6.3 Upgrade SAN File System Windows clients
  6.7 Switching from LDAP to local authentication
Part 3. Configuration, operation, maintenance, and problem determination

Chapter 7. Basic operations and configuration
  7.1 Administrative interfaces to SAN File System
    7.1.1 Accessing the CLI
    7.1.2 Accessing the GUI
  7.2 Basic navigation and verifying the cluster setup
    7.2.1 Verify servers
    7.2.2 Verify system volume
    7.2.3 Verify pools
    7.2.4 Verify LUNs
    7.2.5 Verify administrators
    7.2.6 Basic commands using CLI
  7.3 Adding and removing volumes
    7.3.1 Adding a new volume to SAN File System
    7.3.2 Changing volume settings
    7.3.3 Removing a volume
  7.4 Storage pools
    7.4.1 Creating a storage pool
    7.4.2 Adding a volume to a user storage pool
    7.4.3 Adding a volume to the System Pool
    7.4.4 Changing a storage pool
    7.4.5 Removing a storage pool
    7.4.6 Expanding a user storage pool volume
    7.4.7 Expanding a volume in the system storage pool
  7.5 Filesets
    7.5.1 Relationship of filesets to storage pools
    7.5.2 Nested filesets
    7.5.3 Creating filesets
    7.5.4 Moving filesets
    7.5.5 Changing fileset characteristics
    7.5.6 Additional fileset commands
    7.5.7 NLS support with filesets
  7.6 Client operations
    7.6.1 Fileset permissions
    7.6.2 Privileged clients
    7.6.3 Take ownership of filesets
  7.7 Non-uniform SAN File System configurations
    7.7.1 Display a list of clients with access to a particular volume or LUN
    7.7.2 List fileset to storage pool relationship
  7.8 File placement policy
    7.8.1 Policies and rules
    7.8.2 Rules syntax
    7.8.3 Create a policy and rules with CLI
    7.8.4 Creating a policy and rules with GUI
    7.8.5 More examples of policy rules
    7.8.6 NLS support with policies
    7.8.7 File storage preallocation
    7.8.8 Policy management considerations
    7.8.9 Best practices for managing policies
Chapter 8. File sharing
  8.1 File sharing overview
  8.2 Basic heterogeneous file sharing
    8.2.1 Implementation: Basic heterogeneous file sharing
  8.3 Advanced heterogeneous file sharing
    8.3.1 Software components
    8.3.2 Administrative commands
    8.3.3 Configuration overview
    8.3.4 Directory server configuration
    8.3.5 MDS configuration
    8.3.6 Implementation of advanced heterogeneous file sharing

Chapter 9. Advanced operations
  9.1 SAN File System FlashCopy
    9.1.1 How FlashCopy works
    9.1.2 Creating, managing, and using the FlashCopy images
  9.2 Data migration
    9.2.1 Planning migration with the migratedata command
    9.2.2 Perform migration
    9.2.3 Post-migration steps
  9.3 Adding and removing Metadata servers
    9.3.1 Adding a new MDS
    9.3.2 Removing an MDS
    9.3.3 Adding an MDS after previous removal
  9.4 Monitoring and gathering performance statistics
    9.4.1 Gathering and analyzing performance statistics
  9.5 MDS automated failover
    9.5.1 Failure detection
    9.5.2 Fileset redistribution
    9.5.3 Master MDS failover
    9.5.4 Failover monitoring
    9.5.5 General recommendations for minimizing recovery time
  9.6 How SAN File System clients access data
  9.7 Non-uniform configuration client validation
    9.7.1 Client validation sample script details
    9.7.2 Using the client validation sample script

Chapter 10. File movement and lifecycle management
  10.1 Manually move and defragment files
    10.1.1 Move a single file using the mvfile command
    10.1.2 Move multiple files using the mvfile command
    10.1.3 Defragmenting files using the mvfile command
  10.2 Lifecycle management with file management policy
    10.2.1 File management policy syntax
    10.2.2 Creating a file management policy
    10.2.3 Executing the file management policy
    10.2.4 Lifecycle management recommendations and considerations
Chapter 11. Clustering the SAN File System Microsoft Windows client . . . 447
11.1 Configuration overview . . . 448
11.2 Cluster configuration . . . 449
11.2.1 MSCS configuration . . . 449
11.2.2 SAN File System configuration . . . 450
11.3 Installing the SAN File System MSCS Enablement package . . . 455
11.4 Configuring SAN File System for MSCS . . . 458
11.4.1 Creating additional cluster groups . . . 468
11.5 Setting up cluster-managed CIFS share . . . 468
Chapter 12. Protecting the SAN File System environment . . . 477
12.1 Introduction . . . 478
12.1.1 Types of backup . . . 478
12.2 Disaster recovery: backup and restore . . . 479
12.2.1 LUN-based backup . . . 479
12.2.2 Setting up a LUN-based backup . . . 480
12.2.3 Restore from a LUN-based backup . . . 482
12.3 Backing up and restoring system metadata . . . 484
12.3.1 Backing up system metadata . . . 484
12.3.2 Restoring the system metadata . . . 488
12.4 File recovery using SAN File System FlashCopy function . . . 493
12.4.1 Creating FlashCopy image . . . 494
12.4.2 Reverting FlashCopy images . . . 498
12.5 Back up and restore using IBM Tivoli Storage Manager . . . 502
12.5.1 Benefits of Tivoli Storage Manager with SAN File System . . . 502
12.6 Backup/restore scenarios with Tivoli Storage Manager . . . 503
12.6.1 Back up Windows data using Tivoli Storage Manager Windows client . . . 504
12.6.2 Back up user data in UNIX filesets with TSM client for AIX . . . 507
12.6.3 Backing up FlashCopy images with the snapshotroot option . . . 510
Chapter 13. Problem determination and troubleshooting . . . 519
13.1 Overview . . . 520
13.2 Remote access support . . . 520
13.3 Logging and tracing . . . 521
13.3.1 SAN File System Message convention . . . 522
13.3.2 Metadata server logs . . . 525
13.3.3 Administrative and security logs . . . 528
13.3.4 Consolidated server message logs . . . 530
13.3.5 Client logs and traces . . . 530
13.4 SAN File System data collection . . . 534
13.5 Remote Supervisor Adapter II . . . 537
13.5.1 Validating the RSA configuration . . . 538
13.5.2 RSA II management . . . 538
13.6 Simple Network Management Protocol . . . 543
13.6.1 SNMP and SAN File System . . . 543
13.7 Hints and tips . . . 546
13.8 SAN File System Message conventions . . . 547
Part 4. Exploiting the SAN File System . . . 551
Chapter 14. DB2 with SAN File System
14.1 Introduction to DB2
14.2 Policy placement
14.2.1 SMS tablespaces
14.2.2 DMS tablespaces
14.2.3 Other data
14.2.4 Sample SAN File System policy rules
14.3 Storage management
14.4 Load balancing
14.5 Direct I/O support
14.6 High availability clustering
14.7 FlashCopy
14.8 Database path considerations
Part 5. Appendixes . . . 563
Appendix A. Installing IBM Directory Server and configuring for SAN File System . . . 565
Installing IBM Tivoli Directory Server V5.1 . . . 566
Creating the LDAP database . . . 570
Configuring IBM Directory Server for SAN File System . . . 574
Starting the LDAP Server and configuring Admin Server . . . 577
Verifying LDAP entries . . . 585
Sample LDIF file used . . . 587
Appendix B. Installing OpenLDAP and configuring for SAN File System . . . 589
Introduction to OpenLDAP 2.0.x on Red Hat Linux . . . 590
Installation of OpenLDAP packages . . . 590
Configuration of OpenLDAP client . . . 591
Configuration of OpenLDAP server . . . 592
Configure OpenLDAP for SAN File System . . . 594
Appendix C. Client configuration validation script . . . 597
Sample script listing . . . 598
Appendix D. Additional material . . . 603
Locating the Web material . . . 603
Using the Web material . . . 603
System requirements for downloading the Web material . . . 603
How to use the Web material . . . 604
Abbreviations and acronyms . . . 605
Related publications . . . 607
IBM Redbooks . . . 607
Other publications . . . 607
Online resources . . . 608
How to get IBM Redbooks . . . 611
Help from IBM . . . 611
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 613
Figures
1-1 SAN Management standards bodies . . . 6
1-2 CIMOM proxy model . . . 8
1-3 SNIA storage model . . . 9
1-4 Intelligence moving to the network . . . 12
1-5 In-band and out-of-band models . . . 13
1-6 Block level virtualization . . . 15
1-7 IBM TotalStorage SAN Volume Controller . . . 16
1-8 File level virtualization . . . 17
1-9 IBM TotalStorage SAN File System architecture . . . 18
1-10 Summary of SAN Volume Controller and SAN File System benefits . . . 19
1-11 TPC for Fabric . . . 21
1-12 TPC for Data . . . 22
1-13 TPC for Disk functions . . . 24
1-14 TPC for Replication . . . 25
1-15 Windows system hierarchical view . . . 26
1-16 Windows file system security and permissions . . . 27
1-17 File system types . . . 28
1-18 Global namespace . . . 30
2-1 SAN File System architecture . . . 36
2-2 SAN File System administrative structure . . . 43
2-3 SAN File System GUI browser interface . . . 44
2-4 Global namespace . . . 47
2-5 Filesets and nested filesets . . . 48
2-6 SAN File System storage pools . . . 49
2-7 File placement policy execution . . . 50
2-8 Windows 2000 client view of SAN File System . . . 55
2-9 Exploring the SAN File System from a Windows 2000 client . . . 55
2-10 FlashCopy images . . . 59
3-1 Mapping of Metadata and User data to MDS and clients . . . 68
3-2 Illustrating network setup . . . 72
3-3 Data classification example . . . 79
3-4 SAN File System design . . . 84
3-5 SAN File System data migration process . . . 89
3-6 SAN File System data flow . . . 93
3-7 Typical data and metadata flow for a generic application with SAN File System . . . 94
3-8 SAN File System changes the way we look at the Storage in today's SANs . . . 97
4-1 LDAP tree . . . 102
4-2 Example of setup . . . 108
4-3 Verify disks are seen as 2145 disk devices . . . 111
5-1 SAN File System Console GUI sign-on window . . . 147
5-2 Select language for installation . . . 150
5-3 SAN File System Windows 2000 Client Welcome window . . . 150
5-4 Security Warning . . . 151
5-5 Configuration parameters . . . 152
5-6 Review installation settings . . . 153
5-7 Security alert warning . . . 153
5-8 Driver IBM SANFS Cluster Bus Enumerator . . . 154
5-9 Driver IBM SAN Volume Manager . . . 154
5-10 Start SAN File System client immediately . . . 155
5-11 Windows client explorer . . . 155
5-12 Windows 2000 client SAN File System drivers . . . 156
5-13 Windows 2003 client SAN File System drivers . . . 156
5-14 SAN File System helper service . . . 157
5-15 Launch MMC . . . 158
5-16 Add the Snap-in for SAN File System . . . 158
5-17 Add Snap-in . . . 159
5-18 Add the IBM TotalStorage System Snap-in . . . 159
5-19 Add/Remove Snap-in . . . 160
5-20 Save MMC console . . . 160
5-21 Save MMC console to the Windows desktop . . . 161
5-22 IBM TotalStorage File System Snap-in Properties . . . 161
5-23 DisableShortNames . . . 162
5-24 Verify value for DisableShortNames . . . 162
5-25 Trace Properties . . . 163
5-26 Volume Properties . . . 163
5-27 Modify Volume Properties . . . 164
5-28 J2RE Setup Type . . . 188
5-29 J2RE verify the install . . . 189
5-30 SNMP Service Window . . . 190
5-31 SNMP Service Properties . . . 191
5-32 Verifying SNMP and SNMP Trap Service . . . 192
5-33 Master Console installation wizard initial window . . . 194
5-34 Set user account privileges . . . 194
5-35 Adobe Installer Window . . . 195
5-36 Master Console installation wizard information . . . 196
5-37 Select optional products to install . . . 197
5-38 Viewing the Products List . . . 198
5-39 PuTTY installation complete . . . 199
5-40 DB2 Setup wizard . . . 200
5-41 DB2 select installation type . . . 201
5-42 DB2 select installation action . . . 202
5-43 DB2 Username and Password menu . . . 203
5-44 DB2 administration contact . . . 204
5-45 DB2 instance . . . 205
5-46 DB2 tools catalog . . . 206
5-47 DB2 administration contact . . . 207
5-48 DB2 confirm installation settings . . . 208
5-49 DB2 confirm installation settings . . . 209
5-50 Verify DB2 install . . . 210
5-51 Verify SVC console install . . . 211
5-52 Select database repository . . . 212
5-53 Specify single DB2 user ID . . . 212
5-54 Enter DB2 user ID . . . 213
5-55 Set trapdSharePort162 . . . 214
5-56 Define trapdTrapReceptionPort . . . 215
5-57 Enter TSANM Manager name and port . . . 216
5-58 IBM Director Installation Directory window . . . 217
5-59 IBM Director Service Account Information . . . 217
5-60 IBM Director network drivers . . . 218
5-61 IBM Director database configuration . . . 218
5-62 IBM Director superuser . . . 220
5-63 Disk Management . . . 222
5-64 Upgrade to dynamic disk . . . 223
5-65 Verify both disks are set to type Dynamic . . . 223
5-66 Add Mirror . . . 224
5-67 Select mirrored disk . . . 225
5-68 Mirroring process . . . 225
5-69 Mirror Process completed . . . 226
5-70 Setting Folder Options . . . 226
6-1 SAN File System console . . . 245
7-1 Create PuTTY ssh session . . . 253
7-2 SAN File System GUI login window . . . 256
7-3 GUI welcome window . . . 257
7-4 Information Center . . . 258
7-5 Basic SAN File System configuration . . . 264
7-6 Select expand vdisk . . . 279
7-7 vdisk expansion window . . . 280
7-8 Data LUN display . . . 281
7-9 Disk before expansion . . . 283
7-10 Disk after expansion . . . 284
7-11 Relationship of fileset to storage pool . . . 288
7-12 Filesets from the MDS and client perspective . . . 289
7-13 Nested filesets . . . 289
7-14 Nested filesets . . . 292
7-15 Windows Explorer shows cluster name sanfs as the drive label . . . 293
7-16 List nested filesets . . . 294
7-17 MBCS characters in fileset attachment directory . . . 296
7-18 Select properties of fileset . . . 301
7-19 ACL for the fileset . . . 302
7-20 Verify change of ownership . . . 302
7-21 Windows security tab . . . 303
7-22 Policy rules based file placement . . . 306
7-23 Policies in SAN File System Console (GUI) . . . 312
7-24 Create a New Policy . . . 313
7-25 New Policy: High Level Settings sample input . . . 314
7-26 Add Rules to Policy . . . 315
7-27 New rule created . . . 316
7-28 Edit Rules for Policy . . . 317
7-29 List of defined policies . . . 318
7-30 Activate Policy . . . 318
7-31 Verify Activate Policy . . . 319
7-32 New Policy activated . . . 319
7-33 Delete a Policy . . . 320
7-34 Verify - Delete Policy Window . . . 321
7-35 List Policies . . . 321
7-36 MBCS characters in policy rule . . . 323
7-37 Generated SQL for MBCS characters in policy rule . . . 324
7-38 Select a policy . . . 326
7-39 Rules for selected policy . . . 326
7-40 Edited rule for Preallocation . . . 327
7-41 Activate new policy . . . 327
7-42 Disable default pool with GUI . . . 331
7-43 Display policy statistics . . . 333
8-1 View Windows permissions on newly created fileset . . . 341
8-2 Set permissions for Everyone group . . . 341
8-3 Advanced permissions for Everyone . . . 342
8-4 Set permissions on Administrator group to allow Full control . . . 343
8-5 View Windows permissions on winfiles fileset . . . 343
8-6 View Windows permissions on fileset . . . 345
8-7 Read permission for Everyone group . . . 346
8-8 SAN File System user mapping . . . 347
8-9 Sample configuration for advanced heterogeneous file sharing . . . 350
8-10 Created Active Directory Domain Controller and Domain: sanfsdom.net . . . 351
8-11 User Creation Verification in Active Directory . . . 351
8-12 SAN File System Windows client added to Active Directory domain . . . 352
8-13 Sample heterogeneous file sharing LDAP diagram . . . 352
8-14 Log on as sanfsuser . . . 368
8-15 Contents of svcfileset6 . . . 369
8-16 unixfile.txt permissions . . . 369
8-17 Edit the file in Windows as sanfsuser and save it . . . 370
8-18 Create the file on the Windows client as sanfsuser . . . 371
8-19 Show file contents in Windows as sanfsuser . . . 371
8-20 winfile.txt permissions from Windows . . . 373
9-1 Make FlashCopy . . . 377
9-2 Copy on write . . . 377
9-3 The .flashcopy directory view . . . 379
9-4 Create FlashCopy image GUI . . . 381
9-5 Create FlashCopy wizard . . . 381
9-6 Fileset selection . . . 382
9-7 Set Flashcopy image properties . . . 382
9-8 Verify FlashCopy image properties . . . 383
9-9 FlashCopy image created . . . 383
9-10 List of FlashCopy images using GUI . . . 384
9-11 List of FlashCopy images before and after a revert operation . . . 386
9-12 Select image to revert . . . 387
9-13 Delete Image selection . . . 388
9-14 Delete Image verification . . . 388
9-15 Delete image complete . . . 389
9-16 Data migration to SAN File System: data flow . . . 390
9-17 SAN File System overview . . . 403
9-18 View statistics: client sessions . . . 404
9-19 Statistics: Storage Pools . . . 404
9-20 Console Statistics . . . 405
9-21 Create report . . . 405
9-22 View report . . . 406
9-23 SAN File System failures and actions . . . 414
9-24 List of MDS in the cluster . . . 416
9-25 List of filesets . . . 416
9-26 Metadata server mds3 missing . . . 417
9-27 Filesets list after failover . . . 417
9-28 Metadata server mds3 not started automatically . . . 418
9-29 Failback warning . . . 418
9-30 Graceful stop of the master Metadata server . . . 420
9-31 Metadata server mds2 as new master . . . 420
9-32 Configuring SANFS for SNMP . . . 422
9-33 Selecting the event severity level that will trigger traps . . . 422
9-34 Log into IBM Director Console . . . 423
9-35 9-36 9-37 9-38 9-39 9-40 9-41 9-42 10-1 10-2 11-1 11-2 11-3 11-4 11-5 11-6 11-7 11-8 11-9 11-10 11-11 11-12 11-13 11-14 11-15 11-16 11-17 11-18 11-19 11-20 11-21 11-22 11-23 11-24 11-25 11-26 11-27 11-28 11-29 11-30 11-31 11-32 11-33 11-34 11-35 11-36 11-37 11-38 11-39 11-40 11-41 11-42 11-43
Discover SNMP devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Compile a new MIB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Select the MIB to compile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . MIB compilation status windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Viewing all events in IBM Director . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Viewing the test trap in IBM Director . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Trap sent when an MDS is shutdown . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Example of required client access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Windows-based client accessing homefiles fileset . . . . . . . . . . . . . . . . . . . . . . . . . Verify file sizes in homefiles fileset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . MSCS lab setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Basic cluster resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Network Interfaces in the cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Cluster Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . SAN File System client view of the global namespace . . . . . . . . . . . . . . . . . . . . . . Fileset directory accessible . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Show permissions and ownership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Create a file on the fileset. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
Choose the installation language. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . License Agreement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Complete the client information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Choose where to install the enablement software . . . . . . . . . . . . . . . . . . . . . . . . . . Confirm the installation parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . New SANFS resource is created . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Create a new cluster group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Name and description for the group. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Specify preferred owners for group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Group created successfully . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ITSOSFSGroup displays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Create new resource . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . New resource name and description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Select all nodes as possible owners . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Enter resource dependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . SAN File System resource parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Display filesets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Fileset for cluster resource selected . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
Cluster resource created successfully . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . New resource in Resource list . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Bring group online . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Group and resource are online . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Resource moves ownership on failures. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Resource stays with current owner after rebooting the original owner . . . . . . . . . . Create IP Address resource. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . IP address resource: General properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . IP address resource: Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Network Name resource: General properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Network Name resource: Dependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Network Name resource: Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . File Share resource: General properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . File Share resource: dependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . File Share resource: parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . All file share resources online . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Designate a drive for the CIFS share. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Figures
423 424 424 425 425 426 426 430 437 443 448 449 450 450 451 454 454 455 456 456 457 457 458 459 459 460 460 461 461 462 462 463 463 464 464 465 465 466 466 467 467 468 469 469 470 471 471 472 472 473 473 474 474 xv
11-44 11-45 11-46 12-1 12-2 12-3 12-4 12-5 12-6 12-7 12-8 12-9 12-10 12-11 12-12 12-13 12-14 12-15 12-16 12-17 12-18 12-19 12-20 12-21 12-22 12-23 12-24 13-1 13-2 13-3 13-4 13-5 13-6 13-7 13-8 13-9 13-10 13-11 13-12 13-13 14-1 14-2 14-3 14-4 A-1 A-2 A-3 A-4 A-5 A-6 A-7 A-8 A-9 xvi
CIFS client access SAN File System via clustered SAN File System client . . . . . . Copy lots of files onto the share. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Drive not accessible. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . SVC FlashCopy relationships and consistency group . . . . . . . . . . . . . . . . . . . . . . . Metadata dump file creation start. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Metadata dump file name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . DR file creation final step . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Delete/remove the metadata dump file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Verify deletion of the metadata dump file. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . FlashCopy option window GUI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . FlashCopy Start GUI window. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Select Filesets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Set Properties of FlashCopy images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Verify FlashCopy settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . FlashCopy images created . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Windows client view of the FlashCopy images . . . . . . . . . . . . . . . . . . . . . . . . . . . . Client file delete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . FlashCopy image revert selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
Image restore / revert verification and restore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Remaining FlashCopy images after revert. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Client data restored . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Exploitation of SAN File System with Tivoli Storage Manager. . . . . . . . . . . . . . . . . User files selection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Restore selective file selection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Select destination of restore file(s). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Restore files selection for FlashCopy image backup . . . . . . . . . . . . . . . . . . . . . . . . Restore files destination path selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . IBM Connection Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Steps for remote access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . SAN File System message format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Event viewer on Windows 2000. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . OBDC from GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Remote Supervisor Adapter II . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . RSAII interface using Internet Explorer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Accessing remote power using RSAII . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Access BIOS log using RSAII . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Java Security Warning . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . RSA II: Remote control buttons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ASM Remote control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . SNMP configuration on RSA II. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Example storage pool layout for DB2 objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Workload distribution of filesets for DB2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Default data caching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Directory structure information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Select location where to install. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Language selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Setup type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Features to install. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . User ID for DB2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Installation summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . GSKit pop-up . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Installation complete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Configuration tool. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
475 475 476 481 485 486 486 487 487 494 495 495 496 497 497 498 499 499 500 501 501 502 504 505 506 506 507 520 521 522 531 534 537 539 540 541 542 542 543 544 556 558 559 561 566 567 567 568 568 569 569 570 570
A-10 A-11 A-12 A-13 A-14 A-15 A-16 A-17 A-18 A-19 A-20 A-21 A-22 A-23 A-24 A-25 A-26 A-27 A-28 A-29
User ID pop-up. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Enter LDAP database user ID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Enter the name of the database. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Select database codepage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Database location . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Verify database configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Database created. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Add organizational attribute . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Browse for LDIF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Start the import . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . IBM Directory Server login . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . IBM Directory Server Web Administration Tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . Change admin password . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Add host. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Enter host details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Verify that host has been added . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Login to local host name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Admin console . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Manage entries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Expand ou=Users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
571 571 572 572 573 573 574 575 576 577 578 579 580 581 582 583 584 585 586 586
Figures
xvii
xviii
Notices
This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.
You may copy, modify, and distribute these sample programs in any form without payment to IBM for the purposes of developing, using, marketing, or distributing application programs conforming to IBM's application programming interfaces.
Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:
AFS, AIX 5L, AIX, DB2 Universal Database, DB2, DFS, Enterprise Storage Server, Eserver, FlashCopy, HACMP, IBM, NetView, PowerPC, POWER, POWER5, pSeries, Redbooks, Redbooks (logo), SecureWay, Storage Tank, System Storage, Tivoli, TotalStorage, WebSphere, xSeries, z/VM, zSeries
The following terms are trademarks of other companies: Java, J2SE, Solaris, Sun, Sun Java, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Microsoft, Windows NT, Windows, Win32, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. i386, Intel, Pentium, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States, other countries, or both. UNIX is a registered trademark of The Open Group in the United States and other countries. Linux is a trademark of Linus Torvalds in the United States, other countries, or both. Other company, product, or service names may be trademarks or service marks of others.
Preface
This IBM Redbook is a detailed technical guide to the IBM TotalStorage SAN File System. SAN File System is a robust, scalable, and secure network-based file system designed to provide near-local file system performance, file aggregation, and data sharing services in an open environment. SAN File System helps lower the cost of storage management and enhance productivity by providing centralized management, higher storage utilization, and shared access by clients to large amounts of storage. We describe the design and features of SAN File System, as well as how to plan for, install, upgrade, configure, administer, and protect it. This redbook is for all who want to understand, install, configure, and administer SAN File System. It is assumed the reader has basic knowledge of storage and SAN technologies.
Charlotte Brooks is an IBM Certified IT Specialist and Project Leader for Storage Solutions at the International Technical Support Organization, San Jose Center. She has 14 years of experience with IBM in the fields of IBM TotalStorage hardware and software, IBM Eserver pSeries servers, and AIX. She has written 15 redbooks, and has developed and taught IBM classes in all areas of storage and storage management. Before joining the ITSO in 2000, she was the Technical Support Manager for Tivoli Storage Manager in the Asia Pacific Region.

Huang Dachuan is an Advisory IT Specialist in the Advanced Technical Support team of IBM China in Beijing. He has nine years of experience in networking and storage support. He is CCIE certified, and his expertise includes Storage Area Networks, IBM TotalStorage SAN Volume Controller, SAN File System, ESS, DS6000, DS8000, copy services, and networking products from IBM and Cisco.

Derek Jackson is a Senior IT Specialist working for the Advanced Technical Support Storage Solutions Benchmark Center in Gaithersburg, Maryland. He primarily supports SAN File System, IBM TotalStorage Productivity Center, and the ATS lab infrastructure. Derek has worked for IBM for 22 years, and has been employed in the IT field for 30 years. Before joining ATS, Derek worked for IBM's Business Continuity and Recovery Services and was responsible for delivering networking solutions for its clients.

Matthew A. Miller is an IBM Certified IT Specialist and Systems Engineer with IBM in Phoenix, AZ. He has worked extensively with IBM Tivoli Storage software products as both a field systems engineer and a software sales representative, and currently works with Tivoli Techline. Prior to joining IBM in 2000, Matt worked for 16 years in the client community in both technical and managerial positions.

Massimo Rosichini is an IBM Certified Product Services and Country Specialist in the ITS Technical Support Group in Rome, Italy.
He has extensive experience in IT support for TotalStorage solutions in the EMEA South Region. He is an ESS/DS Top Gun Specialist and an IBM Certified Specialist for Enterprise Disk Solutions and Storage Area Network Solutions. He was an author of previous editions of the redbooks IBM TotalStorage Enterprise Storage Server: Implementing ESS Copy Services in Open Environments, SG24-5757, and IBM TotalStorage SAN File System, SG24-7057.

Thanks to the following people for their contributions to this project:

The authors of previous editions of this redbook:
Jorge Daniel Acuña, Asad Ansari, Chrisilia Davis, Ravi Khattar, Michael Newman, Massimo Rosichini, Leos Stehlik, Satoshi Suzuki, Mats Wahlstrom, Eric Wong

Cathy Warrick and Wade Wallace
International Technical Support Organization, San Jose Center

Todd Bates, Ashish Chaurasia, Steve Correl, Vinh Dang, John George, Jeanne Gordon, Matthew Krill, Joseph Morabito, Doug Rosser, Ajay Srivastava, Jason Young
SAN File System Development, IBM Beaverton

Rick Taliaferro, Ida Wood
IBM Raleigh

Herb Ahmuty, John Amann, Kevin Cummings, Gonzalo Fuentes, Craig Gordon, Rosemary McCutchen
IBM Gaithersburg

Todd DeSantis
IBM Pittsburgh
Bill Cochran, Ron Henkhaus
IBM Illinois

Drew Davis
IBM Phoenix

Michael Klein
IBM Germany

John Bynum
IBM San Jose
Comments welcome
Your comments are important to us! We want our redbooks to be as helpful as possible. Send us your comments about this or other redbooks in one of the following ways:

- Use the online "Contact us" review redbook form found at:
ibm.com/redbooks
- Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. QXXE Building 80-E2
650 Harry Road
San Jose, California 95120-6099
Summary of changes
This section describes the technical changes made in this edition of the book and in previous editions. This edition may also include minor corrections and editorial changes that are not identified. Summary of Changes for SG24-7057-03 for IBM TotalStorage SAN File System as created or updated on January 27, 2006.
December 2004, Third Edition

New information
Advanced heterogeneous file sharing File movement and lifecycle management File sharing with Samba
Changed information
Client support
January 2006, Fourth Edition

New information
New centralized installation procedure Preallocation policy for large files Local authentication option Microsoft clustering support
Changed information
New MDS server and client platform (including zSeries support) New RSA connectivity and high availability details
Part 1. Introduction to IBM TotalStorage SAN File System
Chapter 1.
Introduction
In this chapter, we provide background information for SAN File System, including these topics:

- Growth in SANs and current challenges
- Storage networking technology: industry trends
- Rise of storage virtualization and growth of SAN data
- Data sharing with SANs: issues
- IBM TotalStorage products overview
- Introduction to file systems and key concepts
- Value statement for SAN File System
Physical consolidation
Data from disparate storage subsystems can be combined onto large, enterprise-class shared disk arrays, which may be located at some distance from the servers. The capacity of these disk arrays can be shared by multiple servers, and users may also benefit from the advanced functions typically offered with such subsystems, including RAID capabilities, remote mirroring, and instantaneous data replication, which might not be available with smaller, integrated disks. The array capacity may be partitioned so that each server has an appropriate portion of the available gigabytes. Available capacity can be dynamically allocated to any server requiring additional space, and capacity not required by one server application can be re-allocated to other servers. This avoids the inefficiency of free disk capacity attached to one server being unusable by other servers. Extra capacity may be added non-disruptively. However, physical consolidation does not mean that all wasted-space concerns are addressed.
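The dynamic allocation and re-allocation described above can be illustrated with a small model. This is a hypothetical sketch (the class and method names are ours, not a SAN File System or disk-array API): a shared pool tracks per-server assignments, and capacity released by one server becomes available to any other.

```python
class SharedDiskArray:
    """Toy model of an enterprise-class shared disk array whose
    capacity is partitioned among servers and re-allocated on demand."""

    def __init__(self, total_gb):
        self.total_gb = total_gb
        self.allocations = {}            # server name -> GB assigned

    def free_gb(self):
        return self.total_gb - sum(self.allocations.values())

    def allocate(self, server, gb):
        if gb > self.free_gb():
            raise ValueError("insufficient free capacity in the pool")
        self.allocations[server] = self.allocations.get(server, 0) + gb

    def release(self, server, gb):
        # Capacity one server no longer needs returns to the shared
        # pool, where any other server can claim it.
        self.allocations[server] -= gb


array = SharedDiskArray(total_gb=1000)
array.allocate("app1", 600)
array.allocate("app2", 300)
array.release("app1", 200)               # app1 shrinks ...
array.allocate("app2", 250)              # ... app2 grows into freed space
print(array.free_gb())                   # 50 GB remain unassigned
```

The point of the model is the last step: the 200 GB that app1 gave back did not have to be physically moved to another server; it was simply claimed by app2 out of the common pool.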
Logical consolidation
It is possible to achieve shared resource benefits from the SAN, but without moving existing equipment. A SAN relationship can be established between a client and a group of storage devices that are not physically co-located (excluding devices that are internally attached to servers). A logical view of the combined disk resources may allow available capacity to be allocated and re-allocated between different applications running on distributed servers, to achieve better utilization.
- Storage Networking Industry Association (SNIA): the umbrella organization for SAN standards. IBM participation: founding member; board, Technical Council, and project chair positions.
- Fibre Channel Industry Association (FCIA): sponsors customer events. IBM participation: board.
- American National Standards Institute (ANSI): X3T11 for Fibre Channel/FICON standards; X3T10 for SCSI standards. IBM participation in both committees.
- International Organization for Standardization (ISO): international standardization; IBM software development is ISO certified.
Key standards for storage management are:

- Distributed Management Task Force (DMTF) Common Information Model (CIM) standards, including the CIM Device Model for Storage
- Storage Networking Industry Association (SNIA) Storage Management Initiative (SMI) Specification
developed to read MOF files and automatically generate data type definitions, interface stubs, and GUI constructs to be inserted into management applications.
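As a rough illustration of such MOF-driven tooling, the sketch below extracts data type definitions from a deliberately simplified MOF class. The MOF fragment and the parser are hypothetical; real CIM MOF also carries qualifiers, inheritance, and methods that a toy regular expression cannot handle.

```python
import re

# A deliberately simplified MOF class definition (real CIM MOF is
# far richer: qualifiers, superclasses, methods, and so on).
MOF = """
class CIM_StorageVolume {
    string DeviceID;
    uint64 BlockSize;
    uint64 NumberOfBlocks;
};
"""

def parse_mof_class(text):
    """Return the class name and its (type, name) property pairs."""
    name = re.search(r"class\s+(\w+)", text).group(1)
    props = re.findall(r"(\w+)\s+(\w+)\s*;", text)
    return name, props

cls, props = parse_mof_class(MOF)
print(cls)                               # CIM_StorageVolume
for ptype, pname in props:
    print(f"{pname}: {ptype}")
```

A real tool would feed the extracted (type, name) pairs into code generators that emit data type definitions, interface stubs, or GUI constructs for a management application.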
SMI Specification
SNIA has fully adopted and enhanced the CIM standard for storage management in its SMI Specification. The SMI Specification (SMIS) was launched in mid-2002 to create and develop a universal open interface for managing storage devices, including storage networks. The idea behind SMIS is to standardize the management interfaces so that management applications can utilize them and provide cross-device management. This means that a newly introduced device can be managed immediately, because it conforms to the standards. SMIS extends CIM/WBEM with the following features:
- A single management transport: Within the WBEM architecture, the CIM-XML over HTTP protocol was selected for this transport in SMIS.
- A complete, unified, and rigidly specified object model: SMIS defines profiles and recipes within the CIM that enable a management client to reliably utilize a component vendor's implementation of the standard, such as the control of LUNs and zones in the context of a SAN.
- Consistent use of durable names: As a storage network configuration evolves and is reconfigured, key long-lived resources, such as disk volumes, must be uniquely and consistently identified over time.
- Rigorously documented client implementation considerations: SMIS provides client developers with vital information for traversing CIM classes within a device or subsystem and between devices and subsystems, so that complex storage networking topologies can be successfully mapped and reliably controlled.
- An automated discovery system: SMIS-compliant products, when introduced into a SAN environment, automatically announce their presence and capabilities to other constituents.
- Resource locking: SMIS-compliant management applications from multiple vendors can coexist in the same storage device or SAN and cooperatively share resources via a lock manager.

The models and protocols in the SMIS implementation are platform-independent, enabling application development for any platform and enabling applications to run on different platforms. SNIA also provides interoperability tests that help vendors verify that their applications and devices conform to the standard.
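As an illustration of the single management transport, the sketch below builds a minimal CIM-XML EnumerateInstances request of the kind a SMIS client would POST to a CIM server over HTTP. The element layout follows the DMTF CIM-XML specification (DSP0201), but the payload here is only constructed, not sent, and details such as the message ID and namespace are arbitrary examples.

```python
import xml.etree.ElementTree as ET

def enumerate_instances_request(class_name, namespace=("root", "cimv2")):
    """Build a CIM-XML EnumerateInstances request body.

    Only the XML payload is constructed; a real client would POST it
    to the CIM server with the CIMOperation/CIMMethod HTTP headers
    that the CIM-over-HTTP binding requires."""
    cim = ET.Element("CIM", CIMVERSION="2.0", DTDVERSION="2.0")
    msg = ET.SubElement(cim, "MESSAGE", ID="1001", PROTOCOLVERSION="1.0")
    req = ET.SubElement(msg, "SIMPLEREQ")
    call = ET.SubElement(req, "IMETHODCALL", NAME="EnumerateInstances")
    path = ET.SubElement(call, "LOCALNAMESPACEPATH")
    for part in namespace:
        ET.SubElement(path, "NAMESPACE", NAME=part)
    param = ET.SubElement(call, "IPARAMVALUE", NAME="ClassName")
    ET.SubElement(param, "CLASSNAME", NAME=class_name)
    return ET.tostring(cim, encoding="unicode")

payload = enumerate_instances_request("CIM_StorageVolume")
print(payload)
```

Because every SMIS-compliant device answers this same request shape, a management application can enumerate storage volumes on hardware from any vendor without a device-specific driver.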
[Figure 1-2: CIM agent implementation models. In the proxy model, the agent runs outside the device or subsystem and translates standard CIM requests to and from the device's proprietary interface; in the embedded model, the agent is built into the device or subsystem itself.]
In the future, more and more devices will be native CIM compliant, and will therefore have a built-in Agent, as shown in the Embedded Model in Figure 1-2. When widely adopted, SMIS will streamline the way that the entire storage industry deals with management. Management application developers will no longer have to integrate incompatible, feature-poor interfaces into their products. Component developers will no longer have to push their unique interface functionality to applications developers. Instead, both will be better able to concentrate on developing features and functions that have value to end users. Ultimately, faced with reduced costs for management, end users will be able to adopt storage-networking technology faster and build larger, more powerful networks.
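The two agent models in Figure 1-2 can be sketched as follows. This is a hypothetical illustration (the class names are ours): a management client speaks the same standard interface to both agents, while only the proxy agent has to translate requests into a vendor's proprietary API.

```python
class EmbeddedAgent:
    """Embedded model: the CIM agent is built into the device and
    reads device state directly."""
    def __init__(self, device_state):
        self.state = device_state

    def get_property(self, name):
        return self.state[name]


class LegacyArrayAPI:
    """Stand-in for a vendor's proprietary, non-CIM interface."""
    def __init__(self, data):
        self.data = data

    def query(self, key):
        return self.data[key]


class ProxyAgent:
    """Proxy model: the agent runs outside the device and translates
    standard CIM property requests into proprietary calls."""
    def __init__(self, proprietary_api):
        self.api = proprietary_api

    def get_property(self, name):
        # Translate the standard property name to the vendor's key.
        return self.api.query(name.lower())


# A management client treats both models identically.
embedded = EmbeddedAgent({"Capacity": 1000})
proxy = ProxyAgent(LegacyArrayAPI({"capacity": 2000}))
for agent in (embedded, proxy):
    print(agent.get_property("Capacity"))
```

This uniformity is the payoff described above: once devices ship with embedded agents, the translation layer (and the per-vendor integration work it represents) disappears, while management clients are unchanged.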
(Figure 1-3: the SNIA shared storage model — a storage domain with file/record and block subsystems, block aggregation, and a services subsystem for discovery and monitoring. © 2000, Storage Networking Industry Association.)
IBM is committed to delivering best-of-breed products in all aspects of the SNIA storage model, including:
- Block aggregation
- File/record subsystems
- Storage devices/block subsystems
- Services subsystems
In the area of block aggregation, IBM offers the IBM TotalStorage SAN Volume Controller (SVC), implemented in an in-band model. In the area of file aggregation in a SAN, IBM offers IBM TotalStorage SAN File System, a SAN-wide file system implemented in an out-of-band model. Both of these solutions adhere to open industry standards. For more information about SMIS/CIM/WBEM, see the SNIA and DMTF Web sites:
http://www.snia.org
http://www.dmtf.org
- Better solutions at a lower price: By harnessing the resources of multiple companies, more development resources are brought to bear on common client requirements, such as ease of management.
- Improved interoperability: Without open standards, every vendor needs to work with every other vendor to develop interfaces for interoperability. The result is a range of very complex products whose interdependencies make them difficult for clients to install, configure, and maintain.
- Client choice: By complying with jointly developed standards, products interoperate seamlessly with each other, preventing vendors from locking clients into their proprietary platforms. As client needs and vendor choices change, products that interoperate seamlessly give clients more flexibility and improve cooperation among vendors.
More significantly, given the industry-wide focus on business efficiency, the use of fully integrated solutions developed to open industry standards will ultimately drive down the TCO of storage.
The IBM strategy is to move the intelligence out of the server, eliminating the dependency on specialized software at the server level. Removing intelligence from the storage level likewise decreases the dependency on RAID subsystems, so that alternative disks can be used. By implementing at the fabric level, storage control moves into the network, which opens up virtualization to all attached servers while reducing complexity by providing a single view of storage. The storage network can then be used to deliver all kinds of services across multiple storage devices, including virtualization. A high-level view of this is shown in Figure 1-4.
(Figure 1-4: moving intelligence into the storage network — servers with device drivers attach to a SAN whose storage network layer fronts intelligent storage controllers, RAID controllers, and disks.)
The effective management of resources from the data center across the network increases productivity and lowers TCO. In Figure 1-4, you can see how IBM accomplishes this effective management by moving the intelligence from the storage subsystems into the storage network using the SAN Volume Controller, and moving the intelligence of the file system into the storage network using SAN File System. The IBM storage management software, represented in Figure 1-4 as hardware element management and Tivoli Storage Management (a suite of SAN and storage products), addresses administrative costs, downtime, backup and recovery, and hardware management. The SNIA model (see Figure 1-3 on page 9) distinguishes between aggregation at the block and file level.
be aggregated into one or more block vectors to increase or decrease their size, or to provide redundancy. Block aggregation, or block-level virtualization, delivers a powerful set of techniques that, used individually or in combination, serve many purposes, such as:
- Space management, through combining or slicing-and-dicing native storage into new, aggregated block storage
- Striping, through spreading the aggregated block storage across several native storage devices
- Redundancy, through point-in-time copy and both local and remote mirroring
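The striping idea can be made concrete with a small sketch. This is a generic round-robin striping calculation assuming a fixed stripe size; it is illustrative only, not the SAN Volume Controller's actual mapping algorithm (`stripe_map` and its parameters are invented names):

```python
# Hypothetical sketch of block-level striping: map a logical block number
# of a virtual volume to a (device, physical block) pair.

def stripe_map(logical_block: int, num_devices: int, stripe_blocks: int):
    """Return (device_index, physical_block) for a striped virtual volume."""
    stripe = logical_block // stripe_blocks        # which stripe unit
    offset = logical_block % stripe_blocks         # offset inside the unit
    device = stripe % num_devices                  # round-robin across devices
    physical = (stripe // num_devices) * stripe_blocks + offset
    return device, physical

# Consecutive logical blocks rotate across the three backing devices:
print([stripe_map(b, num_devices=3, stripe_blocks=2) for b in range(6)])
# [(0, 0), (0, 1), (1, 0), (1, 1), (2, 0), (2, 1)]
```

Because the mapping lives in the aggregation layer, the devices underneath can be replaced or resized without the servers above noticing.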
In-band
In an in-band storage virtualization implementation, both data and control information flow over the same path. The IBM TotalStorage SAN Volume Controller (SVC) engine is an in-band implementation, which does not require any special software in the servers and provides caching in the network, allowing support of cheaper disk systems. See the redbook IBM TotalStorage SAN Volume Controller, SG24-6423 for further information.
Out-of-band
In an out-of-band storage virtualization implementation, the data flow is separated from the control flow. This is achieved by storing data and metadata (data about the data) in different places: all mapping and locking tables are moved to a separate server (the Metadata server) that holds the metadata for the files. IBM TotalStorage SAN File System is an out-of-band implementation.
In an out-of-band solution, the servers (which are clients of the Metadata server) request authorization to data from the Metadata server, which grants it, handles locking, and so on. The servers can then access the data directly without further Metadata server intervention. Separating the flow of control and data in this manner allows the data I/O to use the full bandwidth that a SAN provides, while control I/O goes over a separate network such as TCP/IP. For many operations, the metadata controller does not intervene at all: once a client has obtained access to a file, all I/O goes directly over the SAN to the storage devices.
Metadata, often described as data about the data, describes the characteristics of stored user data. In SAN File System, a Metadata server is a server that offloads the metadata processing from the data-storage environment to improve SAN performance. An instance of the SAN File System Metadata server runs on each engine, and together the Metadata servers form a cluster.
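The division of labor just described can be sketched in a few lines. Everything here — the class names, the extent format, the `open_file` call — is invented for illustration; the point is only that the control path (metadata lookup over IP) is separate from the data path (block reads over the SAN):

```python
# Hypothetical sketch of the out-of-band access pattern: the client asks the
# Metadata server where a file's blocks live, then reads them directly.
# This is not the SAN File System wire protocol.

class MetadataServer:
    def __init__(self):
        # file path -> list of (device, block) extents; kept only on the MDS
        self.locations = {"/sanfs/hr/report.txt": [("lun3", 100), ("lun3", 101)]}

    def open_file(self, path):
        """Control path: authorize the client and return block locations."""
        return self.locations[path]

class SanClient:
    def __init__(self, mds, san):
        self.mds, self.san = mds, san

    def read_file(self, path):
        extents = self.mds.open_file(path)                 # control I/O (IP)
        return b"".join(self.san[ext] for ext in extents)  # data I/O (SAN)

san = {("lun3", 100): b"hello ", ("lun3", 101): b"world"}
client = SanClient(MetadataServer(), san)
print(client.read_file("/sanfs/hr/report.txt"))  # b'hello world'
```

Note that after `open_file` returns, the Metadata server is no longer involved: repeated reads touch only the SAN.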
interfaces for registering with management software and communication connection and configuration information. As the second step, IBM offers automated management software components that integrate with these interfaces to collect, organize, and present information about the storage environment. The IBM TotalStorage Open Software Family includes the IBM TotalStorage SAN Volume Controller, IBM TotalStorage SAN File System, and the IBM TotalStorage Productivity Center.
(Figure: SANs today versus block virtualization.)
The IBM TotalStorage SAN Volume Controller is designed to provide a redundant, modular, scalable, and complete solution, as shown in Figure 1-7.
(Figure 1-7: the SAN Volume Controller and its pool of managed disks.)
Each SAN Volume Controller consists of one or more pairs of engines, each pair operating as a single controller with fail-over redundancy. A large read/write cache is mirrored across the pair, and virtual volumes are shared between a pair of nodes. The pool of managed disks is controlled by a cluster of paired nodes. The SAN Volume Controller is designed to provide complete copy services for data migration and business continuity. Since these copy services operate on the virtual volumes, dramatically simpler replication configurations can be created using the SAN Volume Controller, rather than replicating each physical volume in the managed storage pool. The SAN Volume Controller improves storage administrator productivity, provides a common base for advanced functions, and provides for more efficient use of storage. The SAN Volume Controller consists of software and hardware components delivered as a packaged appliance solution in a variety of form factors. The IBM SAN Volume Controller solution can be preconfigured to the client's specification, and will be installed by an IBM customer engineer.
SAN File System is a common file system specifically designed for storage networks. By managing file details (via the metadata controller) on the storage network instead of in individual servers, the SAN File System design moves the file system intelligence into the storage network where it can be available to all application servers. Figure 1-8 shows the file level virtualization aggregation, which provides immediate benefits: a single global namespace and a single point of management. This eliminates the need to manage files on a server by server basis. A global namespace is the ability to access any file from any client system using the same name.
(Figure 1-8: file-level virtualization — servers are mapped to a virtual disk, easing the administration of the physical assets, and server file systems are enhanced through a common file system and single namespace.)
IBM TotalStorage SAN File System automates routine and error-prone tasks, such as file placement, and monitors out of space conditions. IBM TotalStorage SAN File System will allow true heterogeneous file sharing, where reads and writes on the same data can be done by different operating systems. The SAN File System Metadata server (MDS) is a server cluster attached to a SAN that communicates with the application servers to serve the metadata. Other than installing the SAN File System client on the application servers, no changes are required to applications to use SAN File System, since it emulates the syntax and behavior of local file systems.
(Figure: external clients reach SAN File System data through NFS/CIFS over the IP network; SAN File System clients attach to storage over FC on the SAN, or over the LAN via iSCSI through an FC/iSCSI gateway.)
In summary, IBM TotalStorage SAN File System is a common SAN-wide file system that permits centralization of management and improved storage utilization at the file level. IBM TotalStorage SAN File System is configured in a high availability configuration with clustering for the Metadata servers, providing redundancy and fault tolerance. IBM TotalStorage SAN File System is designed to provide policy-based storage automation capabilities for provisioning and data placement, nondisruptive data migration, and a single point of management for files on a storage network.
Optimize storage performance
The IBM SAN File System addresses file-related tasks that impact these same requirements, for example:
- Extend or truncate file system
- Format file system
- De-fragmentation
- File-level replication
- Data sharing
- Global namespace
- Data lifecycle management
A summary of SAN Volume Controller and SAN File System benefits is shown in Figure 1-10.
SAN Volume Controller and SAN File System provide complementary benefits that address volume-level and file-level issues: creating a single pool of storage from multiple disparate storage devices; file and data sharing across heterogeneous servers and operating systems; centralized management (a single SAN-wide file system and global namespace, a single interface for the storage pool, a single view of file space across heterogeneous servers); nondisruptive change (no downtime to manage LUNs, migrate volumes, or add storage; nondisruptive additions and changes to file space, with fewer out-of-space conditions); a single, cost-effective set of advanced copy services (volume-based Peer-to-Peer Remote Copy and FlashCopy, and file-based space-efficient FlashCopy); and policy-based automation with quality-of-service-based pooling.
Figure 1-10 Summary of SAN Volume Controller and SAN File System benefits
Before TPC for Data, it was difficult to get advance warning of out-of-space conditions on critical application servers. If an application ran out of storage on a server, it would typically just stop, and the revenue or service it generated stopped with it; fixing the resulting unplanned outage is usually expensive. With TPC for Data, you will know when applications need more storage and can add it at a reasonable cost before an outage occurs, avoiding the loss of revenue and services, plus the additional costs associated with unplanned outages.
Figure 1-13 shows the TPC main window with the performance management functions expanded.
Generally, it appears as a hierarchical structure in which files and folders (or directories) can be stored. The top of the hierarchy of each file system is usually called root. Figure 1-15 shows an example of a Windows system hierarchical view, also commonly known as the tree or directory.
A file system specifies naming conventions for the actual files and folders (for example, which characters are allowed in file and directory names, and whether spaces are permitted) and defines a path that represents the location where a specific file is stored. Without a file system, files would not even have names and would appear as nameless blocks of data randomly stored on a disk. However, a file system is more than just a directory tree or naming convention. Most file systems provide security features, such as privileges and access control for:
- Access to files based on user/group permissions
- Access Control Lists (ACLs) to allow or deny specific actions on specific files to specific users
Figure 1-16 on page 27 and Example 1-1 on page 27 show Windows and UNIX system security and file permissions, respectively.
Figure 1-16 Windows file system security and permissions

Example 1-1 UNIX file system security and permissions
# ls -l
total 2659
-rw-------   1 root   system  31119 Sep 15 16:11 .TTauthority
-rw-------   1 root   system    196 Sep 15 16:11 .Xauthority
drwxr-xr-x  10 root   system    512 Sep 15 16:11 .dt
-rwxr-xr-x   1 root   system   3970 Apr 17 11:36 .dtprofile
-rw-------   1 root   system   3440 Sep 16 08:16 .sh_history
-rw-r--r--   1 root   system    115 May 13 14:12 .xerrors
drwxr-xr-x   2 root   system    512 Apr 17 11:36 TT_DB
-rw-r--r--   1 root   system   3802 Sep 04 09:51 WebSM.pref
-rwxrwxrwx   1 root   system   6600 May 14 08:01 aix_sdd_data_gatherer
drwxr-x---   2 root   audit     512 Apr 16 2001  audit
lrwxrwxrwx   1 bin    bin         8 Apr 17 09:35 bin -> /usr/bin
drwxr-xr-x   2 root   system    512 Apr 18 08:30 cdrom
drwxrwxr-x   5 root   system   3072 Sep 15 15:00 dev
-rw-r--r--   1 root   system    108 Sep 15 09:16 dposerv.lock
drwxr-xr-x   2 root   system    512 May 13 15:12 drom
drwxr-xr-x   2 root   system    512 May 29 13:40 essdisk1fs
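The permission strings in Example 1-1 encode the owner, group, and other access classes. As a sketch of how a file system might evaluate them (simplified: no ACLs and no superuser override; the function name and signature are ours, not any real kernel API):

```python
# Sketch of classic UNIX permission evaluation over the owner/group/other
# bit classes of a file's mode.

def may_access(mode, file_uid, file_gid, uid, gid, want):
    """want is a 3-bit mask: 4 = read, 2 = write, 1 = execute."""
    if uid == file_uid:
        bits = (mode >> 6) & 0o7       # owner class
    elif gid == file_gid:
        bits = (mode >> 3) & 0o7       # group class
    else:
        bits = mode & 0o7              # other class
    return bits & want == want

# A -rw-r--r-- file owned by uid 0 / gid 0 (like WebSM.pref above):
mode = 0o644
print(may_access(mode, 0, 0, uid=0, gid=0, want=2))    # True: owner may write
print(may_access(mode, 0, 0, uid=501, gid=0, want=2))  # False: group may not
```

Only the first matching class is consulted, which is why a `drwxr-x---` directory such as `audit` is invisible to users outside the `audit` group.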
(Figure: LAN file systems versus SAN file systems — LAN file systems (NFS, AFS, DFS, CIFS) use the LAN for both data and metadata, with each file server owning its users' files; SAN file systems such as SAN FS use the SAN for data and the LAN for metadata, with a Metadata server and all files shared by all users.)
LAN file systems are designed to provide data access over the IP network. Two of the most common protocols are Network File System (NFS) and Common Internet File System (CIFS). Typically, NFS is used for UNIX servers and CIFS for Windows servers. Tools exist that allow Windows servers to support NFS access and UNIX/Linux servers to support CIFS access, enabling these different operating systems to work with each other's files.
Local file system limitations surface when business requirements mandate a rapid increase in data storage or sharing of data among servers. Issues include:
- Separate islands of storage on each host. Because local file systems are integrated with the server's operating system, each file system must be managed and configured separately. Where two or more file system types are in use (for example, Windows and Sun servers), operators require training and skills in each operating system to complete even common tasks such as adding storage capacity.
- No file sharing between hosts.
- Inherently difficult to manage.
LAN file systems can address some of the limitations of local file systems by adding the ability to share among homogeneous systems. In addition, some distributed file systems can take advantage of both network-attached and SAN-attached disk. Restrictions of LAN file systems include:
- In-band cluster architectures are inherently more difficult to scale than out-of-band SAN file system architectures; performance is impacted as these solutions grow.
- Homogeneous file sharing only: there is no (or limited) ability to provide file locking and security between mixed operating systems.
- Each new cluster creates an island of storage to manage; as the number of islands grows, issues similar to those of local file systems tend to increase.
- File-level policy-based placement is inherently more difficult.
- Clients still use NFS/CIFS protocols, with the inherent limitations of those protocols (security, locking, and so on).
- File system and storage resources are not scalable beyond a single NAS appliance.
- A NAS appliance must handle blocks for non-SAN-attached clients.
SAN file systems address the limitations of local and network file systems. They enable 7x24 availability, increasing rates of change to the environment, and reduction of management cost. The IBM SAN File System offers these advantages:
- A single global view of the file system. This enables tremendous flexibility to increase or decrease the amount of storage available to any particular server, as well as full file sharing (including locking) between heterogeneous servers.
- The Metadata server processes only metadata operations; all data I/O occurs at SAN speeds.
- Linear scalability of the global file system can be achieved by adding Metadata server nodes.
- Advanced, centralized, file-granular, and policy-based management.
- Automated lifecycle management of data can take full advantage of tiered storage.
- Nondisruptive management of physical assets provides the ability to add, delete, and change the disk subsystem without disruption to the application servers.
Figure 1-18 Global namespace (filesets 1 through 6)
Filesets are subsets of the global namespace. To the clients, the filesets appear as normal directories, where they can create their own subdirectories, place files, and so on. But from the SAN File System server perspective, the fileset is the building-block of the global namespace structure, which can only be created and deleted by SAN File System administrators. Filesets represent units of workload for metadata; therefore, by dividing the files into filesets, you can split the task of serving the metadata for the files across multiple servers. There are other implications of filesets; we will discuss them further in Chapter 2, SAN File System overview on page 33.
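The workload-splitting role of filesets can be sketched as follows. The assignment policy shown (hand each new fileset to the least-loaded server) is a hypothetical stand-in, not SAN File System's actual placement logic, and the names are invented:

```python
# Illustrative sketch: each fileset is hosted by exactly one MDS, and any
# MDS can tell a client which server that is.

class MdsCluster:
    def __init__(self, servers):
        self.servers = list(servers)
        self.hosting = {}                      # fileset -> hosting server

    def add_fileset(self, fileset):
        # Naive balancing: give the new fileset to the least-loaded server.
        loads = {s: 0 for s in self.servers}
        for s in self.hosting.values():
            loads[s] += 1
        self.hosting[fileset] = min(self.servers, key=lambda s: loads[s])

    def locate(self, fileset):
        """Any MDS can direct a client to the fileset's hosting MDS."""
        return self.hosting[fileset]

cluster = MdsCluster(["mds1", "mds2"])
for fs in ["ROOT", "HR", "Finance", "CRM"]:
    cluster.add_fileset(fs)
print(cluster.locate("HR"), cluster.locate("Finance"))  # mds2 mds1
```

Because the unit of assignment is the fileset rather than the file, adding an MDS redistributes whole filesets, and clients always have a single server to ask for any given part of the namespace.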
concepts found in mainframe computers and makes them available in the open systems environment. IBM TotalStorage SAN File System can provide an effective solution for clients with a small number of computers and small amounts of data, and it can scale up to support clients with thousands of computers and petabytes of data. IBM TotalStorage SAN File System is a member of the IBM TotalStorage Virtualization Family of solutions.
The SAN File System has been designed as a network-based heterogeneous file system for file aggregation and data sharing in an open environment. As such, it provides:
- High-performance data sharing for heterogeneous servers accessing SAN-attached storage in an open environment.
- A common file system for UNIX and Windows servers, with a single global namespace to facilitate data sharing across servers.
- A highly scalable out-of-band solution (see 1.3.3, Storage virtualization models on page 13) supporting both very large files and very large numbers of files, without the limitations normally associated with NFS or CIFS implementations.
IBM TotalStorage SAN File System is a leading-edge solution that is designed to:
- Lower the cost of storage management
- Enhance productivity by providing centralized and simplified management through policy-based storage management automation
- Improve storage utilization by reducing the amount of duplicate data and by sharing free and temporary space across servers
- Improve application availability
- Simplify and lower the cost of data backups through application-server-free backup and built-in file-based FlashCopy images
- Allow data sharing and collaboration across servers with high performance and full locking support
- Eliminate data migration during application server consolidation
- Provide a scalable and secure infrastructure for storage and data on demand
The IBM TotalStorage SAN File System solution includes a Common Information Model (CIM) Agent, supporting storage management by products based on open standards for units that comply with the open standards of the Storage Networking Industry Association (SNIA) Common Information Model.
Chapter 2. SAN File System overview
(Figure 2-1: SAN File System configuration — clients and MDS engines on an IP network; storage attached over FC on the SAN, or over the LAN via iSCSI through an FC/iSCSI gateway.)
In Figure 2-1, we show five such clients, each running a currently supported SAN File System client operating system. The SAN File System client software enables them to access the global namespace through a virtual file system (VFS) on UNIX/Linux systems and an installable file system (IFS) on Windows systems. This layer (VFS/IFS) is built by the OS vendors specifically for use by special-purpose or newer file systems.
There are also special computers, called Metadata server (MDS) engines, that run the Metadata server software, as shown on the left side of the figure. The MDSs manage file system metadata (including file creation time, file security information, file location information, and so on), but the user data accessed over the SAN by the clients does not pass through an MDS. This eliminates the performance bottleneck from which many existing shared file system approaches suffer, giving near-local file system performance.
MDSs are clustered for scalability and availability of metadata operations, and are often referred to as the MDS cluster. In a SAN File System server cluster, there is one master MDS and one or more subordinate MDSs, each running on a separate physical engine. Additional MDSs can be added as the workload grows, providing solution scalability. Storage volumes that store the SAN File System clients' user data (User Pools) are separated from storage volumes that store metadata (System Pool), as shown in Figure 2-1.
The Administrative server allows SAN File System to be remotely monitored and controlled through a Web-based user interface called the SAN File System console. The Administrative server also processes requests issued from an administrative command line interface (CLI), which can also be accessed remotely. This means the SAN File System can be administered from almost any system with suitable TCP/IP connectivity. The Administrative server can use local authentication (standard Linux user IDs and groups) to look up authentication and authorization information about the administrative users. Alternatively, an LDAP server (client supplied) can be used for authentication. The primary Administrative server runs on the same engine as the master MDS. It receives all requests issued by administrators and also communicates with Administrative servers that run on each additional server in the cluster to perform routine requests.
- Remote Supervisory Adapter II (RSA II) card. This must be compatible with the SUSE operating system. Suggested cards: IBM part number 59P2984 for the x345; 73P9341, IBM Remote Supervisor Adapter II SlimLine, for the x346. Certified for SUSE Linux Enterprise Server 8 with Service Pack 4 (kernel level 2.4.21-278), or SUSE Linux Enterprise Server 9, Service Pack 1, with kernel level 2.6.5-7.151.
Each MDS must have the following software installed:
- SUSE Linux Enterprise Server 8, Service Pack 4 (kernel level 2.4.21-278), or SUSE Linux Enterprise Server 9, Service Pack 1 (kernel level 2.6.5-7.151).
- A multipathing driver for the storage device used for the metadata LUNs. At the time of writing, if using DS4x000 storage for metadata LUNs, either RDAC V9.00.A5.09 (SLES8) or RDAC V9.00.B5.04 (SLES9) is required; if using other IBM storage for metadata LUNs (ESS, SVC, DS6000, or DS8000), SDD V1.6.0.1-6 is required. These levels will change over time: always check the release notes distributed with the product CD, as well as the SAN File System support site, for the latest supported device driver level. More information about multipathing drivers can be found in 4.4, Subsystem Device Driver on page 109 and 4.5, Redundant Disk Array Controller (RDAC) on page 119.
Antivirus software is recommended. Additional software for the Master Console is shipped with the SAN File System software package, as described in 2.5.6, Master Console on page 45.
- DS6000
- DS8000
- SVC (SLES8 only)
- SVC for Cisco MDS 9000
Note: this information can change at any time; the latest information about specific supported storage, including device driver levels and microcode, is at the following Web site. Check it before starting your SAN File System installation:
http://www.ibm.com/storage/support/sanfs
User volumes
SAN File System can be configured with any SAN storage device for user data storage, provided it is supported by the operating systems running the SAN File System client (including having a compatible HBA) and conforms to the SCSI standard for unique device identification. SAN File System also supports storage devices for user data attached through iSCSI; these must likewise conform to the SCSI standard for unique device identification and be supported by the SAN File System client operating systems. Consult your storage system's documentation or the vendor to verify that it meets these requirements.
Note: Only IBM storage subsystems are supported for the system (metadata) storage pool.
SAN File System supports an unlimited number of LUNs for user data storage. The amount of user data storage that you can have in your environment is determined by the amount of storage supported by the storage subsystems and the client operating systems. In the following sections, the SAN File System hardware and logical components are described in detail.
Metadata server
A Metadata server (MDS) is a software server that runs on a SAN File System engine and performs metadata, administrative, and storage management services. In a SAN File System server cluster, there is one master MDS and one or more subordinate MDSs, each running on a separate engine in the cluster. Together, these MDSs provide clients with shared, coherent access to the SAN File System global namespace.
All of the servers, including the master MDS, share the workload of the SAN File System global namespace. Each is responsible for providing metadata and locks to clients for the filesets that it hosts. Each MDS knows which filesets are hosted by each particular MDS and, when contacted by a client, can direct the client to the appropriate MDS. The MDSs manage distributed locks to ensure the integrity of all of the data within the global namespace.
Note: Filesets are subsets of the entire global namespace and serve to organize the namespace for all the clients. A fileset serves as the unit of workload for the MDS; each MDS is assigned some of the filesets. From a client perspective, a fileset appears as a regular directory or folder, in which clients can create their own regular directories and files. Clients, however, cannot delete or rename the directories at which filesets are attached.
In addition to providing metadata to clients and managing locks, MDSs perform a wide variety of other tasks. They process requests issued by administrators to create and manage filesets, storage pools, volumes, and policies; they enforce the policies defined by administrators to place files in appropriate storage pools; and they send alerts when any thresholds established for filesets and storage pools are exceeded.
- File metadata: information needed by the clients in order to access files directly from storage devices on a Storage Area Network. File metadata includes permissions, owner and group, access time, creation time, and other file characteristics, as well as the location of the file on the storage.
- System metadata: metadata used by the system itself. System metadata includes information about filesets, storage pools, volumes, and policies. The MDSs perform the reads and writes required to create, distribute, and manage this information.
The metadata is stored and managed in a separate system storage pool that is accessible only by the MDSs in a server cluster. Distributing locks to clients involves the following operations:
- Issuing leases that determine the length of time that a server guarantees the locks it grants to clients.
- Granting locks to clients that allow them shared or exclusive access to files or parts of files. These locks are semi-preemptible: if a client does not contact the server within the lease period, the server can steal the client's locks and grant them to other clients if requested; otherwise, the client can reassert its locks (get its locks back) when it next makes contact.
- Providing a grace period during which a client can reassert its locks, before other clients can obtain new locks, if the server itself goes down and then comes back online.
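The lease and lock-stealing behavior described above can be sketched as follows. The timing constant, class, and method names are invented for illustration; this is not the SAN File System locking protocol:

```python
# Hedged sketch of lease-based, semi-preemptible locks: a lock is valid
# only while its lease is current; an expired lock may be stolen and
# granted to another client.

class LeaseLockServer:
    LEASE = 30.0                                  # seconds, illustrative

    def __init__(self, clock):
        self.clock = clock                        # injected time source
        self.locks = {}                           # file -> (client, expiry)

    def acquire(self, client, file):
        holder = self.locks.get(file)
        if holder is None or holder[1] < self.clock():
            # Free, or the holder's lease lapsed: grant (possibly stealing).
            self.locks[file] = (client, self.clock() + self.LEASE)
            return True
        return holder[0] == client                # renewal by the same client

now = [0.0]
server = LeaseLockServer(lambda: now[0])
assert server.acquire("clientA", "f1")            # granted
assert not server.acquire("clientB", "f1")        # A's lease still current
now[0] = 31.0                                     # A failed to renew in time
assert server.acquire("clientB", "f1")            # lock stolen and regranted
```

A real implementation would also honor the grace period after a server restart before handing stolen locks to new requesters; that refinement is omitted here for brevity.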
Administrative server
Figure 2-2 on page 43 shows the overall administrative interface structure of SAN File System.
(Figure 2-2: SAN File System administrative structure — SAN File System clients with installable/virtual file systems on the customer network; the SAN File System server cluster with the GUI Web server, the CLI client (sfscli, formerly tanktool), the Administrative Agent (CIM), the Metadata server, Linux, and the RSA card; authentication against an LDAP server or local authentication.)
The SAN File System Administrative server, which is based on a Web server software platform, is made up of two parts: the GUI Web server and the Administrative Agent.
The GUI Web server is the part of the administrative infrastructure that interacts with the SAN File System MDSs and renders the Web pages that make up the SAN File System Console. The Console is a Web-based user interface accessed through a browser (Internet Explorer or Netscape). Figure 2-3 shows the GUI browser interface for the SAN File System.
The Administrative Agent implements all of the management logic for the GUI, CLI, and CIM interfaces, as well as performing administrative authorization/authentication against the LDAP server. The Administrative Agent processes all management requests initiated by an administrator from the SAN File System console, as well as requests initiated from the SAN File System administrative CLI, which is called sfscli. The Agent communicates with the MDS, the operating system, the Remote Supervisor Adapter (RSA II) card in the engine, the LDAP, and Administrative Agents on other engines in the cluster when processing requests. Example 2-1 shows all the commands available with sfscli.
Example 2-1 The sfscli commands for V2.2.2
itso3@tank-mds3:/usr/tank/admin/bin> ./sfscli
sfscli> help
activatevol         expandvol           mkimage             rmpool
addprivclient       help                mkpolicy            rmprivclient
addserver           lsadmuser           mkpool              rmsnmpmgr
addsnmpmgr          lsautorestart       mkusermap           rmusermap
attachfileset       lsclient            mkvol               rmvol
autofilesetserver   lsdomain            mvfile              setdefaultpool
builddrscript       lsdrfile            quiescecluster      setfilesetserver
catlog              lsfileset           quit                setoutput
catpolicy           lsimage             rediscoverluns      settrap
chclusterconfig     lslun               refreshusermap      startautorestart
chdomain            lspolicy            reportclient        startcluster
chfileset           lspool              reportfilesetuse    startmetadatacheck
chldapconfig        lsproc              reportvolfiles      startserver
chpool              lsserver            resetadmuser        statcluster
chvol               lssnmpmgr           resumecluster       statfile
clearlog            lstrapsetting       reverttoimage       statfileset
collectdiag         lsusermap           rmdomain            statldap
detachfileset       lsvol               rmdrfile            statpolicy
disabledefaultpool  mkdomain            rmfileset           statserver
dropserver          mkdrfile            rmimage
exit                mkfileset           rmpolicy
sfscli>
itso3@tank-mds3:/usr/tank/admin/bin>
An Administrative server interacts with a SAN File System MDS through an intermediary, called the Common Information Model (CIM) agent. When a user issues a request, the CIM agent checks with an LDAP server, which must be installed in the environment, to authenticate the user ID and password and to verify whether the user has the authority (is assigned the appropriate role) to issue a particular request. After authenticating the user, the CIM agent interacts with the MDS on behalf of that user to process the request. This same system of authentication and interaction is also available to third-party CIM clients to manage SAN File System.
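The authenticate-then-authorize flow just described can be sketched as follows. The user registry, role names, command sets, and return strings here are all hypothetical; SAN File System's actual roles and LDAP schema differ:

```python
# Illustrative sketch of a CIM-agent-style request path: authenticate the
# administrator against a user registry, check the assigned role, and only
# then act on the MDS on the user's behalf.

USERS = {"itso3": {"password": "secret", "role": "Administrator"}}

ALLOWED = {"Administrator": {"mkfileset", "rmfileset", "lsfileset"},
           "Monitor": {"lsfileset"}}

def handle_request(user, password, command):
    entry = USERS.get(user)
    if entry is None or entry["password"] != password:
        return "authentication failed"            # registry lookup failed
    if command not in ALLOWED.get(entry["role"], set()):
        return "not authorized"                   # role lacks this action
    return f"forwarded {command} to MDS"          # agent acts for the user

print(handle_request("itso3", "secret", "mkfileset"))  # forwarded mkfileset to MDS
print(handle_request("itso3", "secret", "shutdown"))   # not authorized
```

The same two checks gate every entry point — Console, sfscli, and third-party CIM clients — which is why they all see a consistent view of each administrator's authority.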
The user can also access the RSA II card for any of the engines in the SAN File System cluster through a Web browser. In addition, the user can use the RSA II Web interface to establish a remote console to the engine, allowing the user to view the engine's desktop from the Master Console. Any of the SAN File System clients can likewise be reached through an SSH session, a telnet session, or a remote display emulation package, depending on the configuration of the client.
Remote access
Remote Access support is the ability for IBM support personnel who are not located on a user's premises to assist an administrator or a local field engineer in diagnosing and repairing failures on a SAN File System engine. Remote Access support can greatly reduce service costs and shorten repair times, which in turn reduces the impact of any SAN File System failures on the business. Remote Access provides a support engineer with full access to the SAN File System console, after a request initiated by the customer. The access is via a secure VPN connection, using IBM VPN Connection Manager. This allows the support engineer to query and control the SAN File System MDS and to access metadata, log, dump, and configuration data, using the CLI. While the support engineer is accessing the SAN File System, the customer can monitor the engineer's progress on the Master Console display.
Clients see the filesets in the global namespace as ordinary directories and can navigate into those folders (permissions permitting). From the MDS perspective, the filesets allow the metadata workload to be split between all the servers in the cluster. Note: Filesets can be organized in any way desired, to reflect enterprise needs.
[Figure: the root fileset ROOT (the default fileset) attached at the top of the namespace, with additional filesets /HR, /Finance, /CRM, and /Manufacturing attached below it]
In the figure, the root fileset (here named ROOT) is attached at the root level of the namespace hierarchy (for example, sanfs), and the other filesets (HR, Finance, CRM, and Manufacturing) are attached below it. The client simply sees four subdirectories under the root directory of the SAN File System. By defining the path of a fileset's attach point, the administrator also automatically defines its nesting level in relation to the other filesets.
2.5.8 Filesets
A fileset is a subset of the entire SAN File System global namespace. It serves as the unit of workload for each MDS, and also dictates the overall organizational structure for the global namespace. It is also a mechanism for controlling the amount of space occupied by SAN File System clients. Filesets can be created based on workflow patterns, security, or backup considerations, for example. You might want to create a fileset for all the files used by a specific application, or associated with a specific client.

The fileset is used not only for managing storage space, but also as the unit for creating FlashCopy images (see 2.5.12, FlashCopy on page 58). Correctly defined filesets mean that you can take a FlashCopy image of all the files in a fileset together in a single operation, thus providing a consistent image for all of those files. A key part of SAN File System design is organizing the global namespace into filesets that match the data management model of the enterprise. Filesets can also be used as a criterion in placement of individual files within the SAN File System (see 2.5.10, Policy based storage and data management on page 49).

Tip: Filesets are assigned to an MDS either statically (that is, by specifying an MDS to serve the fileset when it is created) or dynamically. If dynamic assignment is chosen, automatic simple load balancing is done. If using static fileset assignment, consider the overall I/O loads on the SAN File System cluster. Since each fileset is assigned to one (and only one) MDS at a time for serving the metadata, you will want to balance the load across all MDSs in the cluster by assigning filesets appropriately. More information about filesets is given in 7.5, Filesets on page 286.
Chapter 2. SAN File System overview
An administrator creates filesets and attaches them at specific locations below the global fileset. An administrator can also attach a fileset to another fileset. When a fileset is attached to another fileset, it is called a nested fileset. In Figure 2-5, fileset1 and fileset2 are the nested filesets of parent fileset Winfiles. Note: In general, we do not recommend creating nested filesets; see 7.5.2, Nested filesets on page 289 for the reasons why.
[Figure 2-5: the ROOT fileset at /, with filesets /HR, /UNIXfiles, /Winfiles, and /Manufacturing attached below it; fileset1 and fileset2 are nested filesets attached under /Winfiles]
Here we have shown several filesets, including filesets called UNIXfiles and Winfiles. We recommend separating filesets by their primary operating system allegiance. This facilitates file sharing (see Sharing files on page 54 for more information). Separation of filesets also facilitates backup: if you are using file-based backup methods (for example, tar, Windows Backup, vendor products like VERITAS NetBackup, or IBM Tivoli Storage Manager), full metadata attributes of Windows files can only be backed up from a Windows backup client, and full metadata attributes of UNIX files can only be backed up from a UNIX backup client. See Chapter 12, Protecting the SAN File System environment on page 477 for more information.

When creating a fileset, an administrator can specify a maximum size for the fileset (called a quota) and specify whether SAN File System should generate an alert if the size of the fileset reaches or exceeds a specified percentage of the maximum size (called a threshold). For example, if the quota on the fileset was set at 100 GB and the threshold was 80%, an alert would be raised once the fileset contained 80 GB of data. The action taken when the fileset reaches its quota size (100 GB in this instance) depends on whether the quota is defined as hard or soft. If a hard quota is used, once the quota is reached, any further requests from a client to add more space to the fileset (by creating or extending files) will be denied. If a soft quota is used, which is the default, more space can be allocated, but alerts will continue to be sent. Of course, once the amount of physical storage available to SAN File System is exceeded, no more space can be used. The quota limit, threshold, and quota type can be set individually for each fileset.
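The quota and threshold behavior described above can be modeled with a short sketch. This is illustrative Python only, not product code; the class and method names are hypothetical:

```python
class Fileset:
    """Toy model of a fileset with a quota, a threshold, and hard/soft behavior."""

    def __init__(self, name, quota_gb, threshold_pct, hard=False):
        self.name = name
        self.quota_gb = quota_gb            # maximum size (the quota)
        self.threshold_pct = threshold_pct  # alert threshold, for example 80 (%)
        self.hard = hard                    # soft (the default) only alerts
        self.used_gb = 0.0
        self.alerts = []

    def allocate(self, gb):
        """Grow the fileset by gb; return False if a hard quota denies the request."""
        if self.hard and self.used_gb + gb > self.quota_gb:
            return False                    # hard quota: more space is denied
        self.used_gb += gb
        if self.used_gb >= self.quota_gb * self.threshold_pct / 100:
            self.alerts.append(f"{self.name}: {self.used_gb:.0f} GB used")
        return True

fs = Fileset("HR", quota_gb=100, threshold_pct=80, hard=True)
fs.allocate(80)        # reaches the 80% threshold, so an alert is raised
ok = fs.allocate(30)   # would exceed the 100 GB hard quota, so it is denied
```

With a soft quota (the default), the second request would succeed, but alerts would continue to be raised.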
SAN File System has two types of storage pools (System and User), as shown in Figure 2-6.
[Figure 2-6: SAN File System storage pools: a single System Pool plus User Pools (for example, User Pool 2 and User Pool 3)]
The System Pool contains the system metadata (system attributes, configuration information, and MDS state) and file metadata (file attributes and locations) that is accessible to all MDSs in the server cluster. There is only one System Pool, which is created automatically when SAN File System is installed with one or more volumes specified as a parameter to the install process. The System Pool contains the most critical data for SAN File System. It is very important to use highly reliable and available LUNs as volumes (for example, using mirroring, RAID, and hot spares in the back-end storage system) so that the MDS cluster always has a robust copy of this critical data. For the greatest protection and highest availability in a local configuration, mirrored RAID-5 volumes are recommended. The RAID configuration should have a low ratio of data to parity disks, and hot spares should also be available, to minimize the amount of time to recover from a single disk failure. Remote mirroring solutions, such as MetroMirror, available on the IBM TotalStorage SAN Volume Controller, DS6000, and DS8000, are also possible.
User Pools
User Pools contain the blocks of data that make up user files. Administrators can create one or more User Pools, and then create policies containing rules that cause the MDSs to store data for specific files in the appropriate storage pools. A special User Pool, the default User Pool, is used to store the data for a file if the file is not assigned to a specific storage pool by a rule in the active file placement policy. One User Pool, which is automatically designated the default User Pool, is created when SAN File System is installed. This can be changed by creating another User Pool and designating it as the default. The default pool can also be disabled if required.
A storage pool is a named set of storage volumes that can be specified as the destination for files in rules. Only User Pools are used to store file data. The rules in a file placement policy are processed in order until the condition in one of the rules is met; the data for the file is then stored in the specified storage pool. If none of the conditions specified in the rules of the policy is met, the data for the file is stored in the default storage pool. Figure 2-7 shows an example of how file placement policies work: it shows a sequence of rules defined in the policy, and underneath each storage pool is a list of some files that will be placed in it according to the policy. For example, the file /HR/dsn1.bak matches the first rule (put all files in the fileset /HR into User Pool 1) and is therefore put into User Pool 1. The fact that it also matches the second rule is irrelevant, because only the first matching rule is applied. See 7.8, File placement policy on page 304 for more information.
[Figure 2-7: file placement policy example. User Pool 1: /HR/dsn1.txt, /HR/DB2.pgm, /HR/dsn1.bak. User Pool 2: /CRM/DB2.pgm, /Finance/DB2.tmp. User Pool 3: /CRM/dsn3.tmp. User Pool 4: /CRM/dsn2.bak, /Finance/dsn4.bak]
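The first-match evaluation described above can be sketched as follows. This is an illustrative Python model only; the rule set is inferred from the Figure 2-7 example and is not the actual SAN File System policy rule syntax:

```python
import fnmatch

# Rules are evaluated in order; only the first matching rule applies.
rules = [
    (lambda p: p.startswith("/HR/"), "User Pool 1"),         # all files in fileset /HR
    (lambda p: fnmatch.fnmatch(p, "*.bak"), "User Pool 4"),  # any .bak file
    (lambda p: p.startswith("/CRM/") and p.endswith(".tmp"), "User Pool 3"),
]

DEFAULT_POOL = "User Pool 2"  # used when no rule matches

def place(path):
    for condition, pool in rules:
        if condition(path):
            return pool       # first match wins; later matches are irrelevant
    return DEFAULT_POOL

pool = place("/HR/dsn1.bak")  # matches rule 1 before the *.bak rule
```

With these rules, /HR/dsn1.bak lands in User Pool 1 even though it also matches the *.bak rule, and /CRM/DB2.pgm falls through to the default pool.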
The file placement policy can also optionally contain preallocation rules. These rules, available with SAN File System V2.2.2, allow a system administrator to automatically preallocate space for designated files, which can improve performance. See 7.8.7, File storage preallocation on page 324 for more information about preallocation.
Lifecycle management allows files to be moved from expensive storage to cheaper storage, or vice versa for more critical files. Lifecycle management reduces the manual intervention necessary in managing space utilization and therefore also reduces the cost of management.

Lifecycle management is set up via file management policies. A file management policy is a set of rules controlling the movement of files among different storage pools. Rules are of two types: migration and deletion. A migration rule causes matching files to be moved from one storage pool to another. A deletion rule causes matching files to be deleted from the SAN File System global namespace. Migration and deletion rules can be specified based on pool, fileset, last access date, or size criteria. The system administrator defines these rules in a file management policy, then runs a special script to act on the rules. The script can be run in a planning mode to determine in advance which files would be migrated or deleted. The plan can optionally be edited by the administrator, and then passed back for execution by the script so that the selected files are actually migrated or deleted. For more information, see Chapter 10, File movement and lifecycle management on page 435.
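The plan-then-execute flow can be sketched in miniature. This is a hypothetical Python model of planning mode, simplified to pool and last-access-age criteria only (the real policies also support fileset and size criteria):

```python
import time

SECONDS_PER_DAY = 86400

def build_plan(files, rules, now):
    """files: {path: {"pool": pool name, "last_access": epoch seconds}}
    Rules are evaluated per file in order; the first match wins."""
    plan = []
    for path, meta in files.items():
        age_days = (now - meta["last_access"]) / SECONDS_PER_DAY
        for rule in rules:
            if meta["pool"] == rule["from_pool"] and age_days >= rule["min_age_days"]:
                # a "migrate" rule names a destination pool; a "delete" rule does not
                plan.append((rule["action"], path, rule.get("to_pool")))
                break
    return plan

now = time.time()
files = {
    "/HR/old.dat": {"pool": "gold", "last_access": now - 200 * SECONDS_PER_DAY},
    "/HR/new.dat": {"pool": "gold", "last_access": now - 5 * SECONDS_PER_DAY},
}
rules = [{"action": "migrate", "from_pool": "gold", "to_pool": "bronze",
          "min_age_days": 90}]
plan = build_plan(files, rules, now)
# The administrator could review and edit this plan before it is executed.
```

Only the file not accessed for 90 days or more appears in the plan; the recently used file is left where it is.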
2.5.11 Clients
SAN File System is based on a client-server design. A SAN File System client is a computer that accesses and creates data stored in the SAN File System global namespace. SAN File System is designed to support the local file system interfaces on UNIX, Linux, and Windows servers, which means it can be used without requiring any changes to the applications or databases that use a file system to store data.

The SAN File System clients for AIX, Sun Solaris, Red Hat Linux, and SUSE Linux use the virtual file system interface within the local operating system to provide file system interfaces to the applications running on those platforms. The SAN File System client for Microsoft Windows (supported Windows 2000 and 2003 editions) uses the installable file system interface within the local operating system to provide file system interfaces to the applications.

Clients access metadata (such as a file's location on a storage device) only through an MDS, and then access data directly from storage devices attached to the SAN. This method of data access eliminates server bottlenecks and provides read and write performance comparable to that of file systems built on bus-attached, high-performance storage.

SAN File System currently supports clients that run these operating systems:
- AIX 5L Version 5.1 (32-bit uniprocessor or multiprocessor). The bos.up or bos.mp packages must be at level 5.1.0.58, plus APAR IY50330 or higher.
- AIX 5L Version 5.2 (32-bit and 64-bit). The bos.up and bos.mp packages must be at level 5.2.0.18 or later. APAR IY50331 or higher is required.
- AIX 5L Version 5.3 (32-bit or 64-bit).
- Windows 2000 Server and Windows 2000 Advanced Server with Service Pack 4 or later.
- Windows 2003 Server Standard and Enterprise Editions with Service Pack 1 or later.
- VMware ESX 2.0.1 running Windows only.
- Red Hat Enterprise Linux 3.0 AS, ES, and WS, with U2 kernel 2.4.21-15.0.3 (hugemem or smp) or U4 kernel 2.4.21-27 (hugemem or smp) on x86 systems.
- SUSE Linux Enterprise Server 8.0 at kernel level 2.4.21-231 (Service Pack 3) or kernel level 2.4.21-278 (Service Pack 4) on x86 servers (32-bit).
- SUSE Linux Enterprise Server 8.0 SP3 kernel 2.4.21-251 on pSeries (64-bit).
- SUSE Linux Enterprise Server 8.0 SP3 kernel 2.4.21-251 on zSeries (31-bit).
- Sun Solaris 9 (64-bit) on SPARC servers.

Note: The AIX client is supported on pSeries systems with a maximum of eight processors. The Red Hat client is supported on either the SMP or Hugemem kernel, with a maximum of 4 GB of main memory. The zSeries SUSE 8 SAN File System client uses the zFCP driver and supports access to ESS, DS6000, and DS8000 for user LUNs.

SAN File System client software must be installed on each AIX, Windows, Solaris, SUSE, or Red Hat client. On AIX, Linux, and Solaris clients, the software is a virtual file system (VFS); on a Windows client, it is an installable file system (IFS). The VFS and IFS provide clients with local access to the global namespace on the SAN. Note that clients can also act as servers to a broader clientele: they can provide NFS or CIFS access to the global namespace to LAN-attached clients and can host applications such as database servers.

A VFS is a subsystem of an AIX, Linux, or Solaris client's virtual file system layer, and an IFS is a subsystem of a Windows client's file system. The SAN File System VFS or IFS directs all metadata operations to an MDS and all data operations to storage devices attached to the SAN. The SAN File System VFS or IFS presents the metadata to the client's operating system and any applications running on the client. The metadata looks identical to metadata read from a native, locally attached file system; that is, it emulates local file system semantics, so no change to the client applications' access methods is necessary to use SAN File System. When the global namespace is mounted on an AIX, Linux, or Solaris client, it looks like a local file system.
When the global namespace is mounted on a Windows client, it appears as another drive letter and looks like an NTFS file system. Files can therefore be shared between Windows and UNIX clients (permissions and suitable applications permitting).
Clustering
SAN File System V2.2.2 supports clustering software running on AIX, Solaris, and Microsoft clients.
AIX clients
HACMP is supported on SAN File System clients running AIX 5L V5.1, V5.2, and V5.3, when the appropriate maintenance levels are installed.
Solaris clients
Solaris client clustering is supported when used with Sun Cluster V3.1. Sun clustered applications can use SAN File System provided that the SAN File System is declared to the cluster manager as a Global File System. Likewise, non-clustered applications are supported when Sun Cluster is present on the client. Sun Clusters can also be used as an NFS server, as the NFS service will fail over using local IP connectivity.
Microsoft clients
Microsoft client clustering is supported for Windows 2000 and Windows 2003 clients with MSCS (Microsoft Cluster Server), using a maximum of two client nodes per cluster.
Virtual I/O
The SAN File System 2.2.2 client for AIX 5L V5.3 will interoperate with Virtual I/O (VIO) devices. VIO enables virtualization of storage across LPARs in a single POWER5 system. SAN File System support for VIO enables SAN File System clients to use data volumes that can be accessed through VIO. In addition, all other SAN File System clients will interoperate correctly with volumes that are accessed through VIO by one or more AIX 5L V5.3 clients. Version 1.2.0.0 of VIO is supported by SAN File System. Restriction: SAN File System does not support the use of Physical Volume Identifier (PVID) in order to export a LUN/physical volume (for example, hdisk4) on a VIO Server. To list devices with a PVID, type lspv. If the second column has a value of none, the physical volume does not have a PVID. For a description of driver configurations that require the creation of a volume label, see What are some of the restrictions and limitations in the VIOS environment? on the VIOS Web site at:
http://www.software.ibm.com/webapp/set2/sas/f/vios/documentation/faq.html
Sharing files
In a homogeneous environment (either all UNIX or all Windows clients), SAN File System provides access and semantics that are customized for the operating system running on the clients. When files are created and accessed only from Windows clients, all the security features of Windows are available and enforced. When files are created and accessed only from UNIX clients, all the security features of UNIX are available and enforced.

In Version 2.2 of SAN File System (and beyond), the heterogeneous file sharing feature improves the flexibility and security involved in sharing files between Windows and UNIX based environments. The administrator defines and manages a set of user map entries, using the CLI or GUI, each of which specifies a UNIX domain-qualified user and a Windows domain-qualified user that are to be treated as equivalent for the purpose of validating file access permissions. Once these mappings are defined, SAN File System automatically accesses the Active Directory Server (Windows) and either LDAP or Network Information Service (NIS) on UNIX to cross-reference the user ID and group membership. See 8.3, Advanced heterogeneous file sharing on page 347 for more information about heterogeneous file sharing.

If no user mappings are defined, then heterogeneous file sharing (where there are both UNIX and Windows clients) is handled in a restricted manner. When files created on a UNIX client are accessed by a non-mapped user on a Windows client, the access available is the same as that granted by the Other permission bits in UNIX. Similarly, when files created on a Windows client are accessed by a non-mapped user on a UNIX client, the access available is the same as that granted to the Everyone user group in Windows. If the improved heterogeneous file sharing capability (user mappings) is not implemented by the administrator, then file sharing is positioned primarily for homogeneous environments.
Sharing files heterogeneously without user mappings is recommended for read-only use; that is, create files on one platform, and provide read-only access on the other platform. To this end, filesets should be established so that they have a primary allegiance: certain filesets have files created in them only by Windows clients, and other filesets have files created in them only by UNIX clients.
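The fallback behavior for non-mapped users can be modeled as follows. This is an illustrative Python sketch, not product code; the user map, domain names, and permission strings are all hypothetical:

```python
# Hypothetical mapping: Windows domain user -> equivalent UNIX domain user.
user_map = {("WINDOM", "alice"): ("unixdom", "alice")}

def effective_unix_perms(win_user, file_owner, owner_perms, other_perms):
    """Access for a Windows user to a file created on a UNIX client.

    A mapped user is treated as the equivalent UNIX identity; a non-mapped
    user receives only what the UNIX 'Other' permission bits grant.
    (The reverse direction uses the Windows 'Everyone' ACL entry instead.)
    """
    mapped = user_map.get(win_user)
    if mapped and mapped[1] == file_owner:
        return owner_perms      # mapped identity: owner permissions apply
    return other_perms          # non-mapped: only the 'Other' bits apply

mapped_access = effective_unix_perms(("WINDOM", "alice"), "alice", "rw", "r")
unmapped_access = effective_unix_perms(("WINDOM", "bob"), "alice", "rw", "r")
```

A mapped user gets the owner's read/write access; a non-mapped user is limited to read-only in this example, which is why read-only sharing is the recommended mode when no mappings are defined.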
If we expand the S: drive in Windows Explorer, we can see the directories underneath (Figure 2-9 shows this view). There are a number of filesets available, including the root fileset (top level) and two filesets under the root (USERS and userhomes). However, the client is not aware of this; it simply sees the filesets as regular folders. The hidden directory, .flashcopy, is part of the fileset and is used to store FlashCopy images of the fileset. More information about FlashCopy is given in 2.5.12, FlashCopy on page 58 and 9.1, SAN File System FlashCopy on page 376.
Figure 2-9 Exploring the SAN File System from a Windows 2000 client
Example 2-2 shows the AIX mount point for the SAN File System, namely SANFS. It is mounted on the directory /sfs. Other UNIX-based clients see a similar output from the df command. A listing of the SAN File System namespace base directory shows the same directory or folder names as in the Windows output. The key thing here is that all SAN File System clients, whether Windows or UNIX, will see essentially the same view of the global namespace.
Example 2-2 AIX/UNIX mount point of the SAN File System

Rome:/ >df
Filesystem    512-blocks      Free %Used    Iused %Iused Mounted on
/dev/hd4           65536     46680   29%     1433     9% /
/dev/hd2         1310720     73752   95%    21281    13% /usr
/dev/hd9var        65536     52720   20%      455     6% /var
/dev/hd3          131072    103728   21%       59     1% /tmp
/dev/hd1           65536     63368    4%       18     1% /home
/proc                  -         -     -        -     -  /proc
/dev/hd10opt       65536     53312   19%      291     4% /opt
/dev/lv00        4063232   1648688   60%      657     1% /usr/sys/inst.images
SANFS          603095040 591331328    2%        1     1% /sfs
Rome:/ > cd /sfs/sanfs
Rome:/ > ls
.flashcopy  aix51  aixfiles  axi51  files  lixfiles  lost+found  smallwin
testdir  tmp  userhomes  USERS  winfiles  winhome
Use of MBCS
Multi-byte characters (MBCS) can now be used (from V2.2 onwards) in pattern matching in file placement policies and for fileset attach point directories. MBCS are not supported in the names of storage pools and filesets. Likewise, MBCS cannot be used in the SAN File System cluster name, which appears in the namespace as the root fileset attach point directory name (for example, /sanfs), or in the fileset administrative object name (as opposed to the fileset directory attach point).
When files whose names differ only in case are created on a UNIX-based client, a Windows client's inability to distinguish between the two files may lead to undesirable results. For this reason, UNIX clients should not create case-differentiated files in filesets that will be accessed by Windows clients.

The following features of NTFS are not currently supported by SAN File System:
- File compression on either individual files or all files within a folder.
- Extended attributes.
- Reparse points.
- Built-in file encryption on files and directories.
- Quotas (however, quotas are provided by SAN File System filesets).
- Defragmentation and error-checking tools (including CHKDSK).
- Alternate data streams.
- Assigning an access control list (ACL) for the entire drive.
- NTFS change journal.
- Scanning all files/directories owned by a particular SID (FSCTL_FIND_FILES_BY_SID).
- Security auditing or SACLs.
- Windows sparse files.
- Windows Directory Change Notification.

Applications that use the Directory Change Notification feature may stop running when a file system does not support this feature, while other applications will continue running. The following applications stop running when Directory Change Notification is not supported by the file system:
- Microsoft applications: ASP.net, Internet Information Server (IIS), and the SMTP Service component of Microsoft Exchange.
- Non-Microsoft application: Apache Web server.

Windows Explorer continues to run when Directory Change Notification is not supported. Note, however, that when changes to files are made by other processes, the changes are not automatically reflected until a manual refresh is done or the file folder is reopened.

In addition to the above limitations, note these differences:
- Programs that open files using the 64-bit file ID (the FILE_OPEN_BY_FILE_ID option) will fail. This applies to the NFS server bundled with Microsoft Services for UNIX.
- Symbolic links created on UNIX-based clients are handled specially by SAN File System on Windows-based clients; they appear as regular files with a size of 0, and their contents cannot be accessed or deleted.
- Batch oplocks are not supported; LEVEL_1, LEVEL_2, and Filter oplock types are supported.
Differences between SAN File System and NTFS

SAN File System differs from Microsoft Windows NT File System (NTFS) in its degree of integration into the Windows administrative environment. The differences are:
- Disk management within the Microsoft Management Console shows SAN File System disks as unallocated.
- SAN File System does not support reparse points or extended attributes.
- SAN File System does not support the use of the standard Windows write signature on its disks.
- Disks used for the global namespace cannot sleep or hibernate.

SAN File System also differs from NTFS in its degree of integration into Windows Explorer and the desktop. The differences are:
- Manual refreshes are required when updates to the SAN File System global namespace are initiated on the metadata server (such as attaching a new fileset).
- The recycle bin is not supported.
- You cannot use distributed link tracking. This is a technique through which shell shortcuts and OLE links continue to work after the target file is renamed or moved. Distributed link tracking can help a user locate link sources when the link source is renamed or moved to another folder on the same or a different volume on the same PC, or moved to a folder on any PC in the same domain.
- You cannot use NTFS sparse-file APIs or change journaling. This means that SAN File System does not provide efficient support for the indexing services accessible through the Windows Search for files or folders function. However, SAN File System does support implicitly sparse files.
2.5.12 FlashCopy
A FlashCopy image is a space-efficient, read-only copy of the contents of a fileset in the SAN File System global namespace at a particular point in time. A FlashCopy image can be used with standard backup tools available in a user's environment to create backup copies of files onto tape. A FlashCopy image can also be quickly reverted; that is, the current fileset contents can be rolled back to an available FlashCopy image.

When creating FlashCopy images, an administrator specifies which fileset to create the FlashCopy image for; the operation is performed individually for each fileset. A FlashCopy image is simply an image of an entire fileset (and just that fileset, not any nested filesets) as it exists at a specific point in time. An important benefit is that during creation of a FlashCopy image, all data remains online and available to users and applications. The space used to keep the FlashCopy image is included in its overall fileset space; however, a space-efficient algorithm is used to minimize the space requirement. You can create and maintain a maximum of 32 FlashCopy images of any fileset. See 9.1, SAN File System FlashCopy on page 376 for more information about SAN File System FlashCopy.

Figure 2-10 on page 59 shows how a FlashCopy image appears on a Windows client. In this case, a FlashCopy image was made of the fileset container_A and specified to be created in the directory 062403image. The fileset has two top-level directories, DRIVERS and Adobe. After the FlashCopy image is made, a subdirectory called 062403image appears in the special directory .flashcopy (which is hidden by default) underneath the root of the fileset. This directory contains the same folders as the actual fileset (DRIVERS and Adobe) and all the file/folder structure underneath, simply frozen at the time the image was taken.
Therefore, clients have file-level access to these images, to access older versions of files, or to copy individual files back to the real fileset if required, and if permissions on the flashcopy folder are set appropriately.
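Assuming a fileset attach point of /sfs/sanfs/container_A (a hypothetical path; the actual attach point depends on the installation), the frozen tree described above could be reached at paths like these:

```python
# Illustrative sketch of the client-visible FlashCopy layout: an image named
# 062403image of fileset container_A appears under the hidden .flashcopy
# directory and mirrors the fileset's folder structure.

fileset_root = "/sfs/sanfs/container_A"      # hypothetical attach point
live_tree = ["DRIVERS", "Adobe"]             # top-level folders in the fileset
image_name = "062403image"

# Paths a client could use to reach the frozen copies of those folders:
image_paths = [f"{fileset_root}/.flashcopy/{image_name}/{d}" for d in live_tree]
```

A client with appropriate permissions can browse these paths like any other folders, for example to copy an older version of a file back into the live fileset.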
Remote Supervisor Adapter II (RSA II). The RSA II provides remote access to the engine's desktop, monitoring of environmental factors, and engine restart capability. The RSA card communicates with the service processors on the MDS engines in the cluster to collect hardware information and statistics. The RSA cards also communicate with the service processors to enable remote management of the servers in the cluster, including automatic reboot if a server hang is detected. More information about the RSA card can be found in 13.5, Remote Supervisor Adapter II on page 537.

To improve availability, the MDS hardware also needs the following dual redundant features:
- Dual power supplies.
- Dual fans.
- Dual Ethernet connections with network bonding enabled.

Bonding network interfaces together provides interface failover in high availability configurations. Beginning with V2.2.2, SAN File System supports network bonding with SLES8 SP4 and SLES9 SP1. Redundant Ethernet support on each MDS enables full redundancy of the IP network between the MDSs in the cluster, as well as between the SAN File System clients and the MDSs. The dual network interfaces in each MDS redundantly service a single IP address; each MDS still uses only one IP address. One interface carries IP traffic unless it fails, in which case IP service fails over to the other interface. Failing over the IP service takes on the order of a second or two; the change is transparent to SAN File System, and no change to client configuration is needed.

We also strongly recommend UPS systems to protect the SAN File System engines.
Software faults
Software faults are server errors or failures for which recovery is possible via a restart of the server process without manual administrative intervention. SAN File System detects and recovers from software faults via a number of mechanisms. An administrative watchdog process on each server monitors the health of the server and restarts the MDS processes in the event of failure, typically within about 20 seconds of the failure. If the operating system of an MDS hangs, it will be ejected from the cluster once the MDS stops responding to other cluster members. A surviving cluster member will raise an event and SNMP trap, and will use the RSA card to restart the MDS that was hung.
Hardware faults
Hardware faults are server failures for which recovery requires administrative intervention. They have a greater impact than software faults and require at least a machine reboot, and possibly physical maintenance, for recovery. SAN File System detects hardware faults by way of a heartbeat mechanism between the servers in a cluster. A server engine that experiences a hardware fault stops responding to heartbeat messages from its peers. Failure of a server to respond for a long enough period of time causes the other servers to mark it as being down and to send administrative SNMP alerts.
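The heartbeat-based detection described above can be sketched in a few lines. This is a simplified Python model with assumed names and timings, not the MDS implementation:

```python
def check_peers(last_heartbeat, now, timeout):
    """Mark peers that have been silent longer than the timeout as down.

    last_heartbeat: {server name: timestamp of its last heartbeat}
    Returns ([down servers], [alert messages]).
    """
    down, alerts = [], []
    for server, ts in last_heartbeat.items():
        if now - ts > timeout:
            down.append(server)
            alerts.append(f"SNMP trap: {server} marked down")
    return down, alerts

# mds1 heartbeated 10 s ago (alive); mds2 has been silent for 18 s (down).
heartbeats = {"mds1": 100.0, "mds2": 92.0}
down, alerts = check_peers(heartbeats, now=110.0, timeout=15.0)
```

In the product, a surviving cluster member would follow this detection by raising an event and using the RSA card to restart the hung server.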
Global namespace
SAN File System presents a single global namespace view of all files in the system to all of the clients, without manual, client-by-client configuration by the administrator. A file can be identified using the same path and file name, regardless of the system from which it is being accessed. The single global namespace shared directly by clients also reduces the requirement of data replication. As a result, the productivity of the administrator as well as the users accessing the data is improved. It is possible to restrict access to the global namespace by using a non-uniform SAN File System configuration. In this way, only certain SAN File System volumes and therefore filesets will be available to each client. See 3.3.2, Non-uniform SAN File System configuration on page 69 for more information.
File sharing
SAN File System is specifically designed to be easy to implement in virtually any operating system environment. All systems running this file system, regardless of operating system or hardware platform, potentially have uniform access to the data stored (under the global namespace) in the system. File metadata, such as last modification time, is presented to users and applications in a form compatible with the native file system interface of the platform. SAN File System is also designed to allow heterogeneous file sharing among the UNIX and Windows client platforms with full locking and security capabilities, which increases the performance and flexibility of heterogeneous file sharing.
Lifecycle management
SAN File System provides the administrator with policy-based data management that automates the management of data stored on storage resources. Policy-based movement of files between storage pools, and policy-based deletion of files, reduce the effort needed to manage the location of files or sets of files. Free space within storage pools is reclaimed as older files are moved or removed. The overall cost of storage can be reduced by using this tool to place data on higher- or lower-performing storage according to the importance of the data.
Part 2
Chapter 3.
copper or optional fibre), preferably on a separate switch for maximum redundancy. With Ethernet bonding configured, three network ports are required per MDS. To perform a rolling upgrade to SAN File System V2.2.2, you must leave the USB/RS-485 serial network interface in place for the RSA cards; once the upgrade is committed, you can remove the RS-485 interface, since it is no longer used and is replaced by the TCP/IP interface for the RSA cards.
Power outlets (one or two per server engine; dual power supplies for the engine are recommended but not required). You need two wall outlets or two rack PDU outlets per server engine; for availability, these should be on separate power circuits. The Master Console, if deployed, requires one wall outlet or one PDU outlet.
SAN clients with supported client operating systems, and supported Fibre Channel adapters for the disk system being used. Supported SAN File System clients at the time of writing are listed in 2.5.11, Clients on page 51, and are current at the following Web site:
http://www.ibm.com/servers/storage/software/virtualization/sfs/interop.html
Figure 3-1 Mapping of Metadata and User data to MDS and clients (AIX and Windows 2000 clients and the Metadata server, each with HBAs, connect through FC switches to the System Pool and User Pool)
Each of the SAN File System clients should be zoned separately (hard zoning is recommended) so that each HBA can detect all the LUNs containing that client's data in the User Pools. Multiple clients with the same HBA type (manufacturer and model) may share a zone; however, putting different HBA types in the same zone is not supported, because of potential incompatibilities. LUN masking must be used, where supported by the storage device, to reserve the metadata storage LUNs for exclusive use by the Metadata servers. Here are some guidelines for LUN masking:
Specify the Linux mode for the Metadata LUNs (if the back-end storage has OS-specific operating modes).
For User Pool LUNs on ESS, set the correct host type according to the client/server you are configuring (on SVC, there is no host type setting). The host type is set on a per-host basis, not per LUN; therefore, LUNs in User Pools may be mapped to multiple hosts, for example, Windows and AIX. You can ignore any warning messages about unlike hosts.
Tip: For ESS, if you have microcode level 2.2.0.488 or above, there will be a host type entry of IBM SAN File System (Lnx MDS). If this is available, choose it for the Metadata LUNs. If running an earlier microcode version, choose Linux.
For greatest security, SAN File System fabrics should preferably be isolated from non-SAN File System fabrics on which administrative activities could occur. No hosts other than the MDSs and the SAN File System clients can have access to the LUNs used by SAN File System. This can be achieved by appropriate zoning and LUN masking or, for greatest security, by using separate fabrics for SAN File System and non-SAN File System activities.
The Master Console hardware, if deployed, requires two fibre ports for connection to the SAN. This enables it to perform SAN discovery for use with IBM TotalStorage Productivity Center for Fabric. We strongly recommend installing and configuring IBM TotalStorage Productivity Center for Fabric on the Master Console, because an accurate picture of the SAN configuration is important for a successful SAN File System installation. Multi-pathing device drivers are required on the MDS: IBM Subsystem Device Driver (SDD) when using IBM TotalStorage Enterprise Storage Server, DS8000, DS6000, or SAN Volume Controller, and RDAC for SANs using IBM TotalStorage DS4x00 series disk systems. Multi-pathing device drivers are also recommended on the SAN File System clients for availability reasons, if provided by the storage system vendor.
Note for SAN File System V2.1 clients: SAN configurations for SAN File System V2.1 are still supported by V2.2 and above, so no changes are required in the existing SAN infrastructure when upgrading.
A non-uniform SAN File System configuration provides the following benefits:
Flexibility
Scalability
Security
Wider range of mixed environment support
Flexibility
SAN File System can adapt to the specific SAN zoning requirements of each environment. Instead of enforcing a single-zone environment, multiple zones, and therefore multiple spans of access to SAN File System user data, are possible. This makes it easier to deploy SAN File System into an existing SAN environment. To help make SAN File System configurations more manageable, a set of new functions and commands was introduced with SAN File System V2.1:
A SAN File System volume can now be increased in size without interrupting file system processing or moving the content of the volume. This function is supported on systems where the device driver allows LUN expansion (for example, current models of SVC or the DS4000 series) and the host operating system also supports it.
Data volume drain functionality (rmvol) uses a transaction-based approach to manage the movement of data blocks to other volumes in the particular storage pool. From the client perspective, this is a serialized operation, where only one I/O at a time occurs to volumes within the storage pool. The goal of this mechanism is to reduce the client's CPU cycles.
Some commands for managing client data (for example, mkvol and rmvol) now require a client name as a mandatory parameter. This ensures that the administrative command will be executed only on that particular client.
We cover the basic usage of the most common SAN File System commands in Chapter 7, Basic operations and configuration on page 251.
Scalability
The MDS can host up to 126 dual-path LUNs for the system pool. The maximum number of LUNs for client data depends on platform-specific capabilities of that particular client. Very large LUN configurations are now possible if the data LUNs are divided between different clients.
Security
By easing the zoning requirements in SAN File System, better storage and data security is possible in the SAN environment, as all hosts (SAN File System clients) have access only to their own data LUNs. You can see an example of a SAN File System zoning scenario in Figure 3-1 on page 68.
Note that LUNs within a DS4000 partition can only be used by one operating system type; this is a restriction of the DS4x00 partition. Other disk systems, for example, SVC, allow multi-operating system access to the same LUNs.
An example of how the network can be set up is shown in Figure 3-2. Note there are two physical connections on the right of each MDS, indicating the redundant Ethernet configuration. However, these share the one TCP/IP address.
Figure 3-2 Example network setup (Master Console with VPN for remote access, an existing IP network, two Metadata servers with RSA cards, AIX and Windows 2000 clients, FC Switch 1 and FC Switch 2, and the System Pool and User Pool)
3.5 Security
Authentication to the SAN File System administration interface can be accomplished in one of two ways: using LDAP, or using a newer procedure called local authentication, which uses the Linux operating system login process (/etc/passwd and /etc/group). You must choose, as part of the planning process, whether you will use LDAP or local authentication. If an LDAP environment already exists, and you plan to implement SAN File System heterogeneous file sharing, there is an advantage to using that LDAP server; however, for environments not already using LDAP, SAN File System implementation can be simplified by using local authentication. Local authentication also eliminates one potential point of failure, since it does not depend on access to an external LDAP server to perform administrative functions.
with the appropriate groups according to the privileges required. For a new SAN File System installation, this is part of the pre-installation/planning process. An existing SAN File System cluster that has previously been using LDAP authentication can migrate to the local authentication method at any time, except during a SAN File System software upgrade. We show detailed steps for defining the required groups and user IDs in 4.1.1, Local authentication configuration on page 100 (for new SAN File System installations) and 6.7, Switching from LDAP to local authentication on page 246 (for existing SAN File System installations that want to change methods). When using local authentication, whenever a user ID/password combination is entered to start the SAN File System CLI or GUI, the authentication method checks that the user ID exists as a UNIX user account in /etc/passwd and that the correct password was supplied. It then checks that the user ID is a member of one of the four required groups (Administrator, Operator, Backup, or Monitor). Finally, based on the group of which the user ID is a member, it determines whether that group is authorized to perform the requested function.
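The account and group checks described above can be sketched in a few lines. This is an illustrative model (the function name and return convention are ours, and the password verification step is omitted), not the actual SAN File System code:

```python
import grp
import pwd

# The four required administrative groups, in no particular order.
SFS_ROLES = ("Administrator", "Operator", "Backup", "Monitor")

def resolve_role(user_id, roles=SFS_ROLES):
    """Return the role group of user_id, mirroring the local
    authentication checks: the ID must exist as a UNIX account and
    belong to one of the required groups; otherwise return None."""
    try:
        account = pwd.getpwnam(user_id)   # step 1: does the UNIX account exist?
    except KeyError:
        return None
    for role in roles:
        try:
            g = grp.getgrnam(role)        # step 2: is the required group defined?
        except KeyError:
            continue
        # step 3: primary-group or supplementary-group membership
        if account.pw_gid == g.gr_gid or user_id in g.gr_mem:
            return role
    return None
```

The caller would then authorize the requested function based on the role returned, which is the final step of the sequence described above.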
3.5.2 LDAP
A Lightweight Directory Access Protocol (LDAP) server is the other alternative for authentication with the SAN File System administration interface. This LDAP server can be any compliant implementation, running on any supported operating system. It is not supported to install the LDAP server on any MDS or the Master Console at this time.
Although any standards-compliant LDAP implementation should work with SAN File System, at the time of writing, tested combinations included:
IBM Directory Server V5.1 for Windows
IBM Directory Server V5.1 for Linux
OpenLDAP/Linux
Microsoft Active Directory
The LDAP server must be configured appropriately in order to use LDAP to authenticate SAN File System administrators. Examples of LDAP setup and configuration are provided in the following appendixes:
Appendix A, Installing IBM Directory Server and configuring for SAN File System on page 565
Appendix B, Installing OpenLDAP and configuring for SAN File System on page 589
LDAP users
A user, in SAN File System and LDAP terms, is an entry in the LDAP database that corresponds to an administrator of the SAN File System, that is, a person who will use the CLI (sfscli) or the GUI (console). While you can also use LDAP on your SAN File System clients to authenticate client users, this is not required and is not discussed further in this redbook. All SAN File System administrative users must have an entry in the LDAP database. These entries must all have the same parent DN and the same objectClass. Each must contain a user ID attribute, which will be the login name, and a userPassword attribute. A short description of the necessary fields and recommended values can be seen in Table 3-1, with space to fill in your own values.
LDAP roles
Every SAN File System administrator must have a role. The role of a SAN File System administrator determines the scope of commands they are allowed to execute; in increasing order of permission, the four roles are Monitor, Operator, Backup, and Administrator. Each of the four roles must have an entry in the LDAP database. All role entries must have the same parent DN (distinguished name) and the same objectClass. When a user logs in, SAN File System checks the LDAP server to determine the role to which the user belongs. Each role entry must have an attribute containing the string that describes its role (Administrator, Backup, Operator, or Monitor), and an attribute that can contain multiple values, one for each role occupant's DN. A short description of the necessary fields and recommended values can be seen in Table 3-1, with space to fill in your own values.
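To make this concrete, the Administrator user and role entries described above can be expressed in LDIF form as follows (values taken from the ITSO examples used in this chapter; your DNs and passwords will differ):

```ldif
# Administrator user entry (same parent DN and objectClass as all users)
dn: cn=ITSOAdmin Administrator,ou=Users,o=ITSO
objectClass: inetOrgPerson
cn: ITSOAdmin Administrator
sn: Administrator
uid: ITSOAdmin
userPassword: password

# Roles container
dn: ou=Roles,o=ITSO
objectClass: organizationalUnit
ou: Roles

# Administrator role entry; roleOccupant lists each occupant's DN
dn: cn=Administrator,ou=Roles,o=ITSO
objectClass: organizationalRole
cn: Administrator
roleOccupant: cn=ITSOAdmin Administrator,ou=Users,o=ITSO
```

The Monitor, Backup, and Operator users and roles follow the same pattern with their own cn, sn, uid, and roleOccupant values.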
Table 3-1 LDAP information for SAN File System planning

Description                          Example of value                               Your value
LDAP server network IP address       9.42.164.125
Port numbers                         389 (insecure), 636 (secure)
Authorized LDAP username             superadmin (default for IBM Directory Server)
Authorized LDAP password             secret (default for IBM Directory Server)

User entries (parent DN, objectClass, role name, login user ID, and login password attributes):
Administrator user   dn: cn=ITSOAdmin Administrator,ou=Users,o=ITSO
                     objectClass: inetOrgPerson
                     cn: ITSOAdmin Administrator
                     sn: Administrator
                     uid: ITSOAdmin
                     userPassword: password
Monitor user         dn: cn=ITSOMon Monitor,ou=Users,o=ITSO
                     objectClass: inetOrgPerson
                     cn: ITSOMon Monitor
                     sn: Monitor
                     uid: ITSOMon
                     userPassword: password
Backup user          dn: cn=ITSOBack Backup,ou=Users,o=ITSO
                     objectClass: inetOrgPerson
                     cn: ITSOBack Backup
                     sn: Backup
                     uid: ITSOBack
                     userPassword: password
Operator user        dn: cn=ITSOOper Operator,ou=Users,o=ITSO
                     objectClass: inetOrgPerson
                     cn: ITSOOper Operator
                     sn: Operator
                     uid: ITSOOper
                     userPassword: password

Role entries (parent DN, objectClass, role name, and role occupant attributes):
Roles container      dn: ou=Roles,o=ITSO
                     objectClass: organizationalUnit
                     ou: Roles
Administrator role   dn: cn=Administrator,ou=Roles,o=ITSO
                     objectClass: organizationalRole
                     cn: Administrator
                     roleOccupant: cn=ITSOAdmin Administrator,ou=Users,o=ITSO
Monitor role         same attributes, with cn: Monitor and the Monitor user as roleOccupant
Backup role          same attributes, with cn: Backup and the Backup user as roleOccupant
Operator role        same attributes, with cn: Operator and the Operator user as roleOccupant
One default User Pool is created at installation; additional pools may be created based on criteria chosen by the organization. Examples of the many possible criteria include:
Device capabilities
Performance
Availability
Location: secure or unsecure
Business owners
Application types
We strongly recommend separating workload across different types of storage LUNs. The System Pool size should start at approximately 2-5% of the total user data size, and volumes may be added to increase the pool size as user data grows. As part of the planning and design process, you should determine which storage pools are needed; a data classification analysis might be required to do this. For example, you might want to place database data, the shared work directories used by application developers, and the personal home directories of individuals into separate storage pools in order to use storage capacity more efficiently. With the data classified in this way, you can use an enterprise-class disk array such as IBM TotalStorage DS8000 for the databases, a mid-range disk array such as the DS4x00 series for the shared work directories, and low-cost storage (JBODs) for the personal home directories. The goal of storing the data in separate pools is to match the value of the data to the cost of the storage. Figure 3-3 shows three storage pools that have been defined for particular needs. The clients have also been mapped to particular storage pools according to their access requirements. This mapping information is used to determine which LUNs need to be made available to which clients (through a combination of zoning, LUN masking, or other methods available in the storage system).
Figure 3-3 Clients connected through the SAN to the virtualized file system and its storage pools
Data classification analysis will also help to implement policy-based file placement management into the SAN File System. Policy determines which pool or pools will be used to place files when they are created within SAN File System. If a non-uniform configuration is being used in SAN File System, then you need to make sure that for each client, all volumes in any storage pool that could be used by any fileset to which that client needs access are available to that client. We will show some methods for doing this in 9.7, Non-uniform configuration client validation on page 429.
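The planning arithmetic above, classifying data, mapping each class to a pool tier, and starting the System Pool at roughly 2-5% of user data, can be sketched in a small helper. The pool and class names below are invented for the example:

```python
# Hypothetical mapping from data class to storage pool tier, following
# the database / shared-work / home-directory example in the text.
POOL_FOR_CLASS = {
    "database": "ds8000_pool",     # enterprise-class disk array
    "shared_work": "ds4x00_pool",  # mid-range disk array
    "home_dirs": "jbod_pool",      # low-cost storage
}

def plan_pools(class_sizes_gb, system_ratio=0.05):
    """class_sizes_gb: {data class: size in GB}. Returns the user data
    size per pool plus a rule-of-thumb System Pool starting size
    (2-5% of total user data; 5% used here)."""
    pools = {}
    for cls, size in class_sizes_gb.items():
        pool = POOL_FOR_CLASS[cls]
        pools[pool] = pools.get(pool, 0) + size
    total = sum(class_sizes_gb.values())
    pools["system_pool"] = round(total * system_ratio, 1)
    return pools
```

For example, 1000 GB of database data, 400 GB of shared work, and 600 GB of home directories would suggest a System Pool starting at about 100 GB.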
For the best performance in SAN File System, all engines should be kept busy in a balanced manner. This is facilitated through the use of filesets. You should plan for at least N filesets for N MDSs; otherwise, some of the Metadata servers will be in standby mode. You could carve the workload into a multiple of N filesets, all expected to be similar in terms of workload, or use a more granular approach, where the filesets have different access characteristics (for example, where some generate more metadata traffic than others). SAN File System also supports basic load balancing for filesets: you can balance the fileset workload by dynamically assigning filesets to an MDS, depending on the number of filesets already being served by each MDS. See 7.5, Filesets on page 286 for more information about dynamic filesets and load balancing. Nested filesets are not recommended; see 7.5.2, Nested filesets on page 289 for the reasons why. Note: Remember that the performance of the SAN File System cluster itself depends on metadata traffic, not data traffic.
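The dynamic assignment idea, placing a new fileset on the MDS currently serving the fewest filesets, can be sketched as follows. This is an illustration only; the real placement is performed by the cluster itself:

```python
def assign_fileset(fileset, assignments):
    """assignments: {mds_name: [fileset, ...]}. Appends the new fileset
    to the least-loaded MDS (fewest filesets served) and returns the
    chosen MDS name."""
    target = min(assignments, key=lambda mds: len(assignments[mds]))
    assignments[target].append(fileset)
    return target
```

Repeating this for each new fileset keeps the fileset count per engine balanced, which is the goal stated above.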
You might assume that 10% of the total amount of data in the fileset will change during the lifetime of a FlashCopy image, that is, between when the image is taken and when it is deleted. Consider the following example: we have 500 GB of data in one fileset and want to keep three FlashCopy images. With a 10% changed-data ratio, we need 50 GB of additional space per FlashCopy image, or 150 GB of additional space in total for all three FlashCopy images. Note: Keep in mind that the space used by FlashCopy images counts against the quota of the particular fileset.
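The arithmetic in this example can be captured in a small helper (an illustrative sketch of the planning rule of thumb, not an official formula):

```python
def flashcopy_space_gb(fileset_gb, images, change_ratio=0.10):
    """Additional space to reserve for FlashCopy images, assuming
    change_ratio of the fileset changes over each image's lifetime
    (10% is the planning assumption used in the example above)."""
    return fileset_gb * change_ratio * images
```

For the example above: 500 GB with three images at a 10% change ratio requires 150 GB of extra space.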
The length of the failure detection window is set so that a crashing MDS process has time to be restarted automatically if possible, and rejoin the cluster before the ejection process is started. This means that filesets do not have to be relocated in the event of most software faults. The following section discusses the restart mechanism that makes this rejoin possible.
Simple Network Management Protocol on page 543). The RSA check interval can be changed or disabled using internal existing commands:
sfscli legacy setrsacheckinterval <interval_in_seconds>
sfscli legacy setrsacheckinterval DEFAULT
sfscli legacy disablersacheck
Each of these commands must be executed from the master MDS. If the RSA fault detection is disabled with the last command, a manual check can be performed on demand (or via a cron job) using the internal existing command sfscli legacy lsengine. This is shown in 13.5.1, Validating the RSA configuration on page 538.
Figure: Windows, AIX, and Linux SFS clients running SDD or RDAC multi-pathing drivers, attached through redundant fabrics (SAN 1/Storage system 1 and SAN 2/Storage subsystem 2), with MDSs using dual Fibre Channel ports (fc1, fc2) and Ethernet bonding to the IP network
Each SAN File System MDS has dual Gigabit Ethernet adapters (copper or optional fibre) and uses Ethernet bonding to provide redundant connections to the IP network. Bonding is a term used to describe combining multiple physical Ethernet links to form one virtual link, and is sometimes referred to as trunking, channel bonding, NIC teaming, IP multipathing, or grouping. Bonding is commonly implemented either in the kernel network stack (driver and device independent) or by Ethernet device drivers. There are multiple bonding modes, the most common being active-active (load balancing packets across all bonded members) and active-backup, in which only one NIC in a bonded group is active at a time; on failure of the active NIC, failover occurs to the inactive NIC. Active-backup mode works with any existing Ethernet infrastructure, while active-active (load balancing) mode requires participation of the network switches. In SAN File System V2.2.2, the dual Ethernet adapters on each MDS can be bonded into one virtual interface in active-backup mode, with MII monitoring for link failure detection. See Set up Ethernet bonding on page 131 and IBM TotalStorage SAN File System Installation and Configuration Guide, GA27-4316 for details on configuring Ethernet bonding on the MDS. Although not compulsory, we strongly recommend that you implement Ethernet bonding in your SAN File System cluster. Ethernet bonding allows a single NIC or cable to fail without downtime, but if both NICs are connected to a common switch, a switch failure can still cause significant downtime. Therefore, for highest availability, a fully redundant physical network layer is recommended, with each NIC connected to a separate switch.
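On Linux kernels of this era, active-backup bonding with MII link monitoring is typically configured through the bonding driver's module options. A minimal sketch follows; the interface name and the 100 ms polling interval are illustrative, and you should follow the referenced installation guide for the supported procedure:

```
# /etc/modules.conf (or modprobe.conf) fragment -- illustrative only
alias bond0 bonding
# mode=active-backup: one NIC active at a time, failover on link loss
# miimon=100: poll MII link state every 100 ms to detect failures
options bond0 mode=active-backup miimon=100
```

The two physical NICs are then enslaved to the bond0 interface, which carries the single TCP/IP address mentioned earlier.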
The combination of Ethernet bonding and a fully redundant physical network allows SAN File System to be transparent to many network faults and maintenance operations, such as cable faults, NIC faults, or switch replacements. If both NICs are isolated, failover occurs: the MDS is ejected from the cluster, and its filesets are transferred to the surviving MDSs. Ethernet bonding may also be implemented on the SAN File System clients; this is optional, and the methods for doing so depend on the client OS platform. Consult your OS documentation for details.
Volume managers, such as VERITAS Volume Manager or LVM in AIX, can be used only to manage virtual disks or LUNs that are not managed by SAN File System. This is because both SAN File System and other volume managers need to own their particular volumes.
The clients require HBAs that are compatible with the underlying storage systems used for data storage by SAN File System. See the following IBM Web sites for supported adapters: DS6x00 and DS8x00 series:
http://www.ibm.com/servers/storage/support/disk/ds6800/ http://www.ibm.com/servers/storage/support/disk/ds8100/ http://www.ibm.com/servers/storage/support/disk/ds8300/
ESS:
http://www.ibm.com/servers/storage/support/disk/2105.html
SVC:
http://www.ibm.com/servers/storage/support/virtual/2145.html
For non-IBM storage, consult your vendor for supported HBAs. Each client requires at least 20 MB of available space on the hard drive for the SAN File System client code. To remotely administer the SAN File System, you need a secure shell (SSH) client for the CLI and a Web browser for the GUI. Examples of SSH clients are PuTTY, Cygwin, or OpenSSH, which are downloadable at:
http://www.putty.nl http://www.cygwin.com http://www.openssh.com
The Web browsers currently supported are Internet Explorer 6.0 SP1 and above and Netscape 6.2 and above (Netscape 7.0 and above is recommended). To access the Web interface for the RSAII card, Java plug-in Version 1.4 is also required, which can be downloaded from:
http://www.java.sun.com/products/plugin
You can grant or revoke privileged client access dynamically. In this way, you can simply grant privileged client access only when you need to perform an action requiring root privileges on SAN File System objects, and revoke it once you complete the action.
Direct I/O
Some applications, such as database management systems, use their own sophisticated cache management systems. For such cases, SAN File System provides a direct I/O mode. In this mode, SAN File System performs direct writes to disk and bypasses local file system caching. Direct I/O makes files behave more like raw devices, giving database systems direct control over their I/O operations while still providing the advantages of SAN File System features, such as policy-based placement. The application must be written to use direct I/O; most enterprise applications know how to handle this. On AIX, the application sets the O_DIRECT flag when it calls open(); on Windows, the application sets the FILE_NO_INTERMEDIATE_BUFFERING flag at open time. It is not possible for the administrator to enable this mode (for example, at mount time). Direct I/O is already available for IBM DB2 UDB for Windows, and for IBM DB2 UDB for AIX at V8.1 FP4. Direct I/O is also available on SAN File System Intel clients running 32-bit Linux releases that support the POSIX direct I/O file system interface calls, such as SLES8 and RHEL3.
Figure: data migration into SAN File System -- policy rules drive metadata creation and writes to the System Pool, while the original data is migrated into the User Pools
Service description
The TotalStorage Services SAN File System Migration offering provides nondisruptive online data migration at the file or block level to ensure data integrity. The TotalStorage Services team provides architectural planning, along with byte-level execution, to migrate data from current application servers into a virtualized SAN File System environment. The service includes:
Architecture planning
Installation and hardware planning
Software installation
Initial file system comparisons and synchronizations
Implementing a calculated replication schedule
Contact your IBM service representative for more details of this offering or go to the following Web site:
http://www.storage.ibm.com/services/software.html
Service description
The IBM Implementation service offering for SAN File System provides planning, installation, configuration, and verification of SAN File System solutions. The service includes:
Pre-installation planning session
Skills transfer
SAN File System MDS installation
Assistance with LUN configuration on back-end storage for SAN File System
Storage pool and fileset configuration
Master Console installation and configuration
Client installation
Optional LDAP installation
Benefits
IBM has years of experience with providing Storage Virtualization solutions. The key benefit is skills transfer from IBM Specialists to client personnel during the installation and configuration phase. This offering also helps clients to manage and focus resources on day-to-day operations. Contact your service representative for more details of this offering or go to the following Web site:
http://www.storage.ibm.com/services/software.html
3.12.1 Assumptions
We do not specifically address sizing of the SAN and fabrics. Sufficient fibre bandwidth must be available to support the current application workload; SAN bandwidth and topology should be treated as a separate exercise using existing best practices. We assume that this exercise has been completed and that SAN performance is satisfactory. Since the SAN File System engines have 2 Gbps HBAs, we recommend using 2 Gbps connections in all the switches and clients. We also assume that the IP network connecting the clients and the Metadata servers has sufficient bandwidth and sufficiently low latency.
However, the minimum recommended size for a system volume is 2 GB. This is because SAN File System has been designed to work with large amounts of data, and therefore testing has been targeted on system volumes of at least this size. Using the 5% rule, this would give a minimum global namespace of 40 GB. Important: Do not allow the System Pool to fill up. Alerts are provided to monitor it. If spare LUNs are available to the MDS, the System Pool can be expanded without disruption.
Figure: clients and Metadata servers communicate over the TCP/IP LAN for metadata, while file data flows over the SAN fabric to the User Pools
Other parameters affecting the loading and sizing of the SAN File System include:
The number and mix of file system objects (for example, files, directories, and symbolic links) involved in the combined workload seen by the MDS cluster.
The number of filesets those file system objects are partitioned into, and the size and mix of the objects and filesets that each client would be expected to operate on with its applications. For workload distribution purposes, there should be at least one fileset assigned to each engine. This implies at least as many filesets as engines in the cluster, unless it is desired to have a spare idle MDS in the cluster, for example, for availability reasons. One subordinate MDS should have some spare capacity, so that it can take over other filesets in case of a hard failure of an engine.
The mix of metadata operations. For example, file create operations may take up to twice as long as a file open.
The typical file operations a client application generates. For example, is it primarily read-only, or does it write many new files?
The impact of multi-client sharing of SAN File System file system objects. Sharing generates more metadata traffic, particularly if a file is shared heterogeneously.
Collecting and analyzing this data is a difficult exercise, and it requires considerable expertise with the application under consideration. Performance analysis should be based on peak application workloads rather than average workloads. File operation profiles for many well-known and standard workload classifications can be used to estimate this information; your IBM representative can assist with the sizing of the SAN File System.
Figure 3-7 Typical data and metadata flow for a generic application with SAN File System (file system operations (FOPS) drive MDS operations (MDS OPS) over the metadata path to the MDS server(s), while application data moves along the data path through the data cache to the SAN)
Testing has shown very high client metadata cache hit ratios, depending on the application workload. Many application operations that would otherwise require metadata services can therefore be satisfied locally, without accessing the MDS. In other words, under normal working conditions, the volume of MDS operations per second (MDS OPS in Figure 3-7 on page 94) will be relatively small compared to the volume of file system operations per second (FOPS in Figure 3-7 on page 94) produced by a given workload of application operations. Please consult your IBM representative for support in sizing a SAN File System configuration.
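As a back-of-the-envelope illustration of this relationship (our own simplification for planning intuition, not an IBM sizing formula): if the client metadata cache absorbs a given hit ratio of operations, only the misses reach the MDS:

```python
def expected_mds_ops(fops, cache_hit_ratio):
    """Rough estimate of the MDS operation rate: the file system
    operation rate (FOPS) scaled by the client cache miss ratio.
    Illustrative simplification only."""
    return fops * (1.0 - cache_hit_ratio)
```

For example, 10,000 FOPS with a 95% client cache hit ratio would translate to roughly 500 MDS operations per second, which is why MDS OPS stay relatively small compared to FOPS.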
Table 3-5 Storage planning sheet

Pool type   Storage device   Accessible clients   Volume_Names
System
Table 3-5 will help you plan out the zoning or LUN access, by specifying which clients should have access to which storage pool(s). Remember a client needs access to all volumes in a storage pool.
Table 3-6 Client to fileset and fileset to storage pool relationships planning sheet

Client name   Fileset name   Storage Pool name
Use Table 3-6 to relate your filesets, storage pools, and policies. First, decide which fileset(s) each client should have access to. Then decide which storage pool(s) each fileset should be able to store files in. You will use this information to plan your policies as well as to confirm that each client has access to the required volumes in the pools to support the required fileset access.
Your existing SAN configuration will therefore be considerably affected, especially from the zoning and LUN management point of view. Consider initially deploying SAN File System in an isolated environment: do the basic setup, test your configuration, and once you are sure that SAN File System is running smoothly in isolation, start the rollout into the production environment. Tip: If you do not have the facilities for a stand-alone, isolated SAN environment for the initial SAN File System setup, you can zone out the necessary storage resources in your production environment and use that zoned-out part for your SAN File System setup. Another major step in the SAN File System deployment phase is preparation for data migration. We cover this topic in more detail in 3.10, Data migration on page 88.
Figure 3-8 SAN File System changes the way we look at storage in today's SANs
Chapter 4.
Pre-installation configuration
In this chapter, we discuss how to pre-configure your environment before installing SAN File System. We discuss the following topics:
- Security considerations
- Target Machine Validation Tool (TMVT)
- Back-end storage and zoning considerations
- SDD on clients and SAN File System MDS
- RDAC on clients and SAN File System MDS
Table 4-1 SAN File System administrative roles (Monitor, Operator, Backup, and Administrator; Administrator has full access)
After authenticating the user ID, the administrative server interacts with the MDS to process the request. The administrative agent caches all authenticated user roles for 600 seconds. You can clear the cache using the resetadmuser command.
1. Define the following four groups. These correspond to the four SAN File System command roles:
# groupadd Administrator
# groupadd Operator
# groupadd Backup
# groupadd Monitor
You must use these exact group names and define all of the groups.
2. Decide which IDs you will require to administer SAN File System, and which administrative privilege (group) each should have. At a minimum, you need one ID in the Administrator group, but you can create as many as required, and several IDs can share the same group. Define the user IDs and passwords that will log in to the SAN File System CLI or GUI, associating each user ID with the appropriate group. In this example, we define an ID itsoadm in the Administrator group, and an ID ITSOMon in the Monitor group:
# useradd -g Administrator itsoadm
# passwd itsoadm (specify a password when prompted)
# useradd -g Monitor ITSOMon
# passwd ITSOMon (specify a password when prompted)
UNIX user IDs, groups, and passwords are case sensitive. We recommend limiting UNIX user IDs to eight characters or fewer.
3. Once all UNIX groups and user IDs/passwords are defined on all MDSs, log in with each user ID to verify the ID/password and to make sure a /home/userid directory structure exists. Create home directories if required (use the mkdir command). You can also list the contents of the /etc/passwd and /etc/group files to verify that the intended UNIX groups and user IDs were added to the MDSs. You are now ready to use local authentication in the SAN File System cluster that you will install in Chapter 5, Installation and basic setup for SAN File System on page 125. You will specify the -noldap option when installing SAN File System, and select one local user ID/password combination in the Administrator group to supply as the CLI_USER/CLI_PASSWD parameters (see step 4 on page 138 in 5.2.6, Install SAN File System cluster on page 138).
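The group definitions above can be sanity-checked before proceeding. The sketch below runs against a sample copy of /etc/group so that it is self-contained; on a real MDS you would point the same loop at /etc/group itself:

```shell
# Check that all four required SAN File System groups are defined.
# /tmp/group.sample stands in for /etc/group in this illustration.
cat > /tmp/group.sample <<'EOF'
Administrator:x:1001:itsoadm
Operator:x:1002:
Backup:x:1003:
Monitor:x:1004:ITSOMon
EOF

missing=0
for g in Administrator Operator Backup Monitor; do
    grep -q "^${g}:" /tmp/group.sample || { echo "missing group: $g"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all four SAN File System groups present"
```

Running this on every MDS before the install catches a forgotten groupadd early, when it is still cheap to fix.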
Each role object must contain an attribute that supports multiple DNs. You must be able to create an object for each SAN File System administrative user. Each administrative user object must contain an attribute that can be used to log in to the SAN File System console or CLI, and a userPassword attribute. If you are accessing the LDAP server over Secure Sockets Layer (SSL), a public SSL authorization certificate (key) must be included when the truststore is created during installation. For our configuration, we used the LDAP configuration shown in Figure 4-1. This configuration is represented in an LDIF file and imported into the LDAP server. We show the LDIF file corresponding to this tree in Sample LDIF file used on page 587.
Users
A User, in SAN File System and LDAP terms, is an entry in the LDAP database that corresponds to an administrator of the SAN File System: a person who will use the CLI (sfscli) or the SAN File System Console (GUI interface) to administer the SAN File System. You can also use LDAP on your SAN File System clients to authenticate client users and to coordinate a common user ID/group ID environment. For more detailed information about LDAP, see the IBM Redbook Understanding LDAP: Design and Implementation, SG24-4986.
Roles
SAN File System administrators must each have a certain role, which determines the scope of commands they are allowed to execute. In increasing order of permission, the four roles are Monitor, Operator, Backup, and Administrator. Each of the four roles must have an entry in the LDAP database. The Roles are described in Table 4-1 on page 100. At least one user with the Administrator role is required. You can also choose to define other roles as appropriate for your organization.
All roles must have the parent DN (distinguished name), and all roles must have the same objectClass. Examples are given in Appendix A, Installing IBM Directory Server and configuring for SAN File System on page 565 and Appendix B, Installing OpenLDAP and configuring for SAN File System on page 589. Next, verify that the LDAP has been set up correctly and that each MDS can talk to the LDAP server. This procedure assumes that Linux is already installed with TCP/IP configured on the MDS, as described in 5.2.2, Install software on each MDS engine on page 127. The ldapsearch command is used to send LDAP queries from the MDS to the LDAP server. Start a login session with each MDS (using the default root/password) and enter ldapsearch at the Linux prompt, specifying the IP address of the LDAP server and the parent DN (ITSO in our case), as shown in Example 4-1.
Example 4-1 Verifying that an MDS can contact the LDAP server
NP28Node1:~ # ldapsearch -h 9.42.164.125 -x -b o=ITSO '(objectclass=*)'
version: 2
# filter: (objectclass=*)
# requesting: ALL

# ITSO
dn: o=ITSO
objectClass: organization
o: ITSO

# Manager, ITSO
dn: cn=Manager,o=ITSO
objectClass: organizationalRole
cn: Manager

# Users, ITSO
dn: ou=Users,o=ITSO
objectClass: organizationalUnit
ou: Users

# ITSOAdmin Administrator, Users, ITSO
dn: cn=ITSOAdmin Administrator,ou=Users,o=ITSO
objectClass: inetOrgPerson
cn: ITSOAdmin Administrator
sn: Administrator
uid: ITSOAdmin

# ITSOMon Monitor, Users, ITSO
dn: cn=ITSOMon Monitor,ou=Users,o=ITSO
objectClass: inetOrgPerson
cn: ITSOMon Monitor
sn: Monitor
uid: ITSOMon

# ITSOBack Backup, Users, ITSO
dn: cn=ITSOBack Backup,ou=Users,o=ITSO
objectClass: inetOrgPerson
cn: ITSOBack Backup
sn: Backup
uid: ITSOBack

# ITSOOper Operator, Users, ITSO
dn: cn=ITSOOper Operator,ou=Users,o=ITSO
objectClass: inetOrgPerson
cn: ITSOOper Operator
sn: Operator
uid: ITSOOper

# Roles, ITSO
dn: ou=Roles,o=ITSO
objectClass: organizationalUnit
ou: Roles

# Administrator, Roles, ITSO
dn: cn=Administrator,ou=Roles,o=ITSO
objectClass: organizationalRole
cn: Administrator
roleOccupant: cn=ITSOAdmin Administrator,ou=Users,o=ITSO

# Monitor, Roles, ITSO
dn: cn=Monitor,ou=Roles,o=ITSO
objectClass: organizationalRole
cn: Monitor
roleOccupant: cn=ITSOMon Monitor,ou=Users,o=ITSO

# Backup, Roles, ITSO
dn: cn=Backup,ou=Roles,o=ITSO
objectClass: organizationalRole
cn: Backup
roleOccupant: cn=ITSOBack Backup,ou=Users,o=ITSO

# Operator, Roles, ITSO
dn: cn=Operator,ou=Roles,o=ITSO
objectClass: organizationalRole
cn: Operator
roleOccupant: cn=ITSOOper Operator,ou=Users,o=ITSO

# search result
search: 2
result: 0 Success

# numResponses: 13
# numEntries: 12
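The entry count at the end of Example 4-1 can be checked mechanically. The sketch below works on a saved capture of the ldapsearch output (the file name is illustrative); on an MDS you could pipe ldapsearch straight into the same awk filter:

```shell
# Extract the numEntries count from captured ldapsearch output.
# /tmp/ldapsearch.out is an illustrative capture of the tail of Example 4-1.
cat > /tmp/ldapsearch.out <<'EOF'
# search result
search: 2
result: 0 Success

# numResponses: 13
# numEntries: 12
EOF

entries=$(awk '/numEntries/ { print $3 }' /tmp/ldapsearch.out)
echo "entries returned: $entries"
```

For the tree described in this chapter you would expect one organization entry, cn=Manager, two organizational units, four users, and four roles: twelve entries in all.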
You should perform ldapsearch on each MDS to ensure that they can all communicate with the LDAP server. To list the parameters that you can use with the ldapsearch command, use the -? option, as shown in Example 4-2.
Example 4-2 Using ldapsearch help
NP28Node1:~ # ldapsearch -?
ldapsearch: invalid option -- ?
ldapsearch: unrecognized option -?
usage: ldapsearch [options] [filter [attributes...]]
where:
  filter          RFC-2254 compliant LDAP search filter
  attributes      whitespace-separated list of attribute descriptions
    which may include:
      1.1   no attributes
      *     all user attributes
      +     all operational attributes
Search options:
  -a deref   one of never (default), always, search, or find
  -A         retrieve attribute names only (no values)
  -b basedn  base dn for search
  -F prefix  URL prefix for files (default: "file:///tmp/)
  -l limit   time limit (Max seconds) for search
  -L         print responses in LDIFv1 format
  -LL        print responses in LDIF format without comments
  -LLL       print responses in LDIF format without comments and version
  -s scope   one of base, one, or sub (search scope)
  -S attr    sort the results by attribute `attr'
  -t         write binary values to files in temporary directory
  -tt        write all values to files in temporary directory
  -T path    write files to directory specified by path (default: /tmp)
  -u         include User Friendly entry names in the output
  -z limit   size limit (in entries) for search
Common options:
  -d level   set LDAP debugging level to `level'
  -D binddn  bind DN
  -f file    read operations from `file'
  -h host    LDAP server
  -H URI     LDAP Uniform Resource Identifier(s)
  -I         use SASL Interactive mode
  -k         use Kerberos authentication
  -K         like -k, but do only step 1 of the Kerberos bind
  -M         enable Manage DSA IT control (-MM to make critical)
  -n         show what would be done but don't actually search
  -O props   SASL security properties
  -p port    port on LDAP server
  -P version protocol version (default: 3)
  -Q         use SASL Quiet mode
  -R realm   SASL realm
  -U authcid SASL authentication identity
  -v         run in verbose mode (diagnostics to standard output)
  -w passwd  bind passwd (for simple authentication)
  -W         prompt for bind passwd
  -x         Simple authentication
  -X authzid SASL authorization identity ("dn:<dn>" or "u:<user>")
  -Y mech    SASL mechanism
  -Z         Start TLS request (-ZZ to require successful response)
NP28Node1:~ #
Examine the results in report_file_name, paying particular attention to areas flagged as non-compliant. Resolve those prerequisites, and then rerun the tool until TMVT runs without errors. Example 4-3 shows a partial listing from the TMVT report file. In this case, we had to check and install the RSA firmware.
Example 4-3 TMVT report file
tank-mds1:~ # /usr/tank/server/bin/tmvt -r /usr/tank/admin/log/TMVT_MDS1_afterinstall -I=9.82.22.175 -U=USERID -P=PASW0RD
HSTPV0009E The Hardware Components group fails to comply with the requirements of the recipe.
HSTPV0007E Machine: tank-mds1 FAILS TO COMPLY with requirements of SAN File System release 2.2.2.91, build sv22_0001.
tank-mds1:~ # cat /usr/tank/admin/log/TMVT_MDS1_afterinstall
Hardware Components (14)
  Item Name                     Current   Recipe
Failed Hardware Component Checks (1)
  Remote Supervisor Adapter 2   MISSING   present
Passed Hardware Component Checks (13)
  Available RAM (Megabytes)        4039
  Disk space in /var (Megabytes)   16386
  TCP/IP                           enabled
  Ethernet controller              Broadcom Corporation NetX
  Ethernet controller              Broadcom Corporation NetX
  Ethernet controller              Intel Corp. 82546EB Gigab
  Machine BIOS Level               NA
  Machine BIOS Build               GEE163AUS
  Machine Type/Model               41461RX
  FC HBA Manufacturer              QLogic
  FC HBA Model                     QLA2342
  FC HBA BIOS/Firmware Version     3.03.06
  FC HBA Driver Version            7.03.00
Software Components (18)
  Item Name     Current         Recipe
Correct Software Packages (18)
  xshared       4.2.0-270       4.2.0-270
  perl          5.8.0-201       5.8.0-201
  pango         1.0.4-148       1.0.4-148
  ncurses       5.2-402         5.2-402
  lsb-runtime   1.2-105         1.2-105
  libusb        0.1.5-179       0.1.5-179
  libstdc++     3.2.2-54        3.2.2-54
  libgcc        3.2.2-54        3.2.2-54
  gtk2          2.0.6-154       2.0.6-154
  gtk           1.2.10-463      1.2.10-463
  glibc         2.2.5-233       2.2.5-233
  glib2         2.0.6-47        2.0.6-47
  glib          1.2.10-326      1.2.10-326
  expect        5.34-192        5.34-192
  ethtool       1.7cvs-26       1.7cvs-26
  bash          2.05b-50        2.05b-50
  atk           1.0.3-66        1.0.3-66
  aaa_base      2003.3.27-76    2003.3.27-76
Note: TMVT non-compliance does not strictly prevent the installation of the SAN File System. It identifies deviations from the recommended hardware and software platform.
SAN considerations
Set up your switch configuration to maximize the number of physical LUNs addressable by the MDSs and to minimize or preferably eliminate sharing of fabrics with other non-SAN File System users whose usage may be disruptive to the SAN File System. Verify that the storage devices that are used by SAN File System are set up so that the appropriate storage LUNs are available to the SAN File System.
Zoning considerations
Because of the restriction on the number of LUNs an MDS can access (currently 126), limit the number of paths created through the fabrics from each metadata server to the storage to two, one per host bus adapter (HBA) port. Some combination of zoning and physical fabric construction can be used to reduce or limit the number of physical paths. Each fabric should consist of one or more switches from the same vendor.
Keep in mind that no level of SAN zoning can totally protect SAN File System systems from SAN events caused by other, non-SAN File System systems connected to the same fabric. Therefore, your SAN File System fabric should be isolated from traffic and administrative contact with non-SAN File System systems. You can use VSANs to accomplish this fabric isolation.
When metadata and user storage reside on the same storage subsystem, you must ensure that the metadata storage is fully isolated and protected from access by client systems. With some subsystems, access to various LUNs is determined by connectivity to particular ports of the storage subsystem; for these, hard zoning of the attached switches may be sufficient to isolate the metadata storage from client systems. With other storage subsystems (such as ESS), LUN access is available from all ports, and LUN masking must be used to ensure that only the MDSs can access the metadata LUNs.
Important: SAN File System user and metadata LUNs should not share the same ESS 2105 Host Adapter ports.
SAN File System clients should be zoned or LUN masked so that each can see user storage only. Specify that the metadata storage or LUNs are to be configured in Linux mode (if the storage subsystem has operating system-specific operating modes).
For more information about planning to implement zoning, see the following manual and redbook:
- IBM TotalStorage SAN File System Planning Guide, GA27-4344
- IBM SAN Survival Guide, SG24-6143
The following is an example of a lab setup, shown in Figure 4-2 on page 108. There are two MDSs, two xSeries Windows clients, and two pSeries AIX clients. Each system (MDS and client) has two FC HBAs. The port names are:
- NP28Node1, two ports: MDS1_P1 and MDS1_P2
- NP28Node2, two ports: MDS2_P1 and MDS2_P2
- SVC: two nodes, four ports per node: svcn1_p1, svcn1_p2, svcn1_p3, svcn1_p4, svcn2_p1, svcn2_p2, svcn2_p3, and svcn2_p4
- AIX1, two ports: AIX1_P1 and AIX1_P2
- AIX2, two ports: AIX2_P1 and AIX2_P2
- WIN2kup, two ports: wink2up_p1 and wink2up_p2
- WIN2kdn, two ports: wink2dn_p1 and wink2dn_p2
There are two pairs of switches: the first pair consists of Switch 11 and Switch 31, and the second pair consists of Switch 12 and Switch 32.
Figure 4-2 Lab setup: AIX and Windows clients and the MDSs NP28Node1 (MDS1) and NP28Node2 (MDS2), connected through the switch pairs Switch 11/Switch 31 and Switch 12/Switch 32
The zoning was implemented as follows:
- Each client HBA is zoned to one port of each SVC node. Since there are four clients and two HBAs in each client, four client zones have been defined on each switch pair.
- One MDS zone is defined on the first switch pair, including one port from each MDS and one port from the first SVC node (three ports in total).
- One MDS zone is defined on the second switch pair, including one port from each MDS and one port from the second SVC node (three ports in total).
The switch zoning using the above rules is shown in Example 4-4. For simplicity, the zoning for the SVC to its back-end storage has been omitted.
Example 4-4 Using zoneShow
First switch pair:
cfg: Redbook
 zone: AIX1_SVC
  12,3; 12,4; 32,6
 zone: AIX2_SVC
  12,1; 12,2; 32,4
 zone: MDS_SVC
  32,9; 32,8 [MDS1_P2]; 12,3 [svcn1_p2]
 zone: win2kdn_SVC
  32,14 [win2kdn_p1]; 12,1 [SVCN1_P4]; 12,2 [SVCN2_P4]
 zone: win2kup_SVC
  12,4 [svcn2_p2]; 12,3 [svcn1-p2]; 32,13 [win2kup_p1]
Second switch pair:
cfg: Redbook
 zone: AIX1_SVC
  31,6 [AIX1_p2]; 11,3 [svcn1_p1]; 11,4 [svcn2_p1]
 zone: AIX2_SVC
  31,4 [AIX2_p2]; 11,1 [svcn1_p3]; 11,2 [svcn2_p3]
 zone: MDS_SVC
  31,9 [MDS1_P2]; 31,8 [MDS2_P2]; 11,4 [svcn2_p1]
 zone: win2kup_SVC
  31,13 [win2kup_p2]; 11,3 [svcn1_p1]; 11,4 [svcn2_p1]
 zone: wink2dn_SVC
  11,1 [svcn1_p3]; 11,2 [svcn2_p3]; 31,14 [win2kdn_p2]
LUN masking / storage partitioning was implemented as follows:
- One 3.5 GB LUN, mapped to both HBAs in both MDS nodes, to be used for the System Pool.
- Four LUNs, of size 4.5 GB, 4 GB, 3 GB, and 1 GB, assigned to all client host HBAs, to be used for User Pools.
The setup described here simply shows how the fabric and back-end storage were configured, and is an example only; there are many other ways to do this. Planning rules and considerations are explained in Chapter 3, MDS system design, architecture, and planning issues on page 65.
Attention: The examples shown here for installing and configuring SDD may not exactly match the current required version of SDD for SAN File System; however, the instructions are similar. Please refer to the SAN File System support Web site to confirm the required SDD version.
Windows 2000 operating system with Service Pack 2 or higher is required for SDD; however, SAN File System requires Service Pack 4. Approximately 1 MB of space is required on the Windows 2000 system drive. ESS devices are configured as IBM 2105xxx (where xxx is the ESS model number), SVC devices are configured as 2145, and DS6000/DS8000 devices are configured as IBM 2107.
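The device-type naming above can be summarized in a small helper. The function name is our own invention; the mapping itself comes straight from the text (2105xxx = ESS, 2145 = SVC, 2107 = DS6000/DS8000):

```shell
# Map the TYPE value reported by SDD's datapath commands to the storage product.
# sdd_product is an illustrative helper, not an SDD-supplied command.
sdd_product() {
    case "$1" in
        2105*) echo "ESS" ;;
        2145)  echo "SAN Volume Controller" ;;
        2107)  echo "DS6000/DS8000" ;;
        *)     echo "unknown type: $1" ;;
    esac
}

sdd_product 2145    # the TYPE shown for the SVC devices in the examples that follow
```

This kind of lookup is handy when scanning datapath query device output from several hosts, as in the verification steps later in this section.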
1. Run setup.exe from the download directory and accept the defaults during the installation.
Tip: If you have previously installed V1.3.1.1 (or earlier) of SDD, you will see an Upgrade? prompt. Answer Yes to continue the installation.
2. At the end of the process, you will be prompted to reboot now or later. A reboot is required to complete the installation.
3. After the reboot, the Start menu will include a Subsystem Device Driver entry containing the following selections:
- Subsystem Device Driver management
- SDD Technical Support Web site
- README
2. To verify that SDD can see the devices, use the datapath query device command, as shown in Example 4-5.
Example 4-5 Verifying SDD on Windows 2000
C:\Program Files\IBM\Subsystem Device Driver>datapath query device
Total Devices : 4

DEV#: 0  DEVICE NAME: Disk1 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 600507680185001B2000000000000002
============================================================================
Path#     Adapter/Hard Disk        State   Mode    Select  Errors
   0  Scsi Port2 Bus0/Disk1 Part0  OPEN    NORMAL       0       0
   1  Scsi Port2 Bus0/Disk1 Part0  OPEN    NORMAL      31       0
   2  Scsi Port3 Bus0/Disk1 Part0  OPEN    NORMAL      28       0
   3  Scsi Port3 Bus0/Disk1 Part0  OPEN    NORMAL       0       0

DEV#: 1  DEVICE NAME: Disk2 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 600507680185001B2000000000000003
============================================================================
Path#     Adapter/Hard Disk        State   Mode    Select  Errors
   0  Scsi Port2 Bus0/Disk2 Part0  OPEN    NORMAL       0       0
   1  Scsi Port2 Bus0/Disk2 Part0  OPEN    NORMAL      30       0
   2  Scsi Port3 Bus0/Disk2 Part0  OPEN    NORMAL      29       0
   3  Scsi Port3 Bus0/Disk2 Part0  OPEN    NORMAL       0       0
DEV#: 2  DEVICE NAME: Disk3 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 600507680185001B2000000000000001
============================================================================
Path#     Adapter/Hard Disk        State   Mode    Select  Errors
   0  Scsi Port2 Bus0/Disk3 Part0  OPEN    NORMAL      24       0
   1  Scsi Port2 Bus0/Disk3 Part0  OPEN    NORMAL       0       0
   2  Scsi Port3 Bus0/Disk3 Part0  OPEN    NORMAL       0       0
   3  Scsi Port3 Bus0/Disk3 Part0  OPEN    NORMAL      35       0

DEV#: 3  DEVICE NAME: Disk4 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 600507680185001B2000000000000006
============================================================================
Path#     Adapter/Hard Disk        State   Mode    Select  Errors
   0  Scsi Port2 Bus0/Disk4 Part0  OPEN    NORMAL       0       0
   1  Scsi Port2 Bus0/Disk4 Part0  OPEN    NORMAL      24       0
   2  Scsi Port3 Bus0/Disk4 Part0  OPEN    NORMAL      35       0
   3  Scsi Port3 Bus0/Disk4 Part0  OPEN    NORMAL       0       0
The actual devices are shown under the DEVICE NAME heading, in this case Disk1, Disk2, Disk3, and Disk4. Note that four paths are displayed for each disk, as the SVCs have been configured with four paths to each LUN. 3. Finally, check that both FC adapters have been correctly configured to use SDD. Use the datapath query adapter command, as shown in Example 4-6.
Example 4-6 Display information about HBAs currently configured for SDD
C:\Program Files\IBM\Subsystem Device Driver>datapath query adapter
Active Adapters : 2
Adpt#  Adapter Name     State    Mode     Select  Errors  Paths  Active
   0   Scsi Port2 Bus0  NORMAL   ACTIVE      109       0      8       8
   1   Scsi Port3 Bus0  NORMAL   ACTIVE      127       0      8       8
In this example, the two HBAs have been installed and successfully configured for SDD. You have now successfully installed and verified SDD on a Windows 2000 client.
The prerequisites for installing SDD on AIX are:
- You must have root access.
- The following procedures assume that SDD will be used to access all single-path and multipath devices.
- If installing an older version of SDD, first remove any previously installed newer version of SDD from your client.
- Make sure that your HBAs are installed by using lsdev -Cc adapter | grep fc. The output should be similar to Example 4-7 on page 113, which shows two HBAs: fcs0 and fcs1.
Example 4-7 Make sure FC adapters are installed
fcs0 Available 20-58 FC Adapter
fcs1 Available 20-60 FC Adapter
Note: In certain circumstances, when upgrading from a previous version of SDD, you may see the following error message during installation:
Error, volume group configuration may not be saved completely. Failure occurred during pre_rm. Failure occurred during rminstal. Finished processing all filesets. (Total time: 16 secs).
To correct this, unmount all file systems belonging to SDD volume groups and vary off those volume groups. See the SDD manual and README file for more information.
2. Select the required SDD level depending on the level of AIX that you are running (devices.sdd.51 in our case). The installation will complete. 3. If you are using SVC as a front-end to SAN File System user storage, you also need to install the 2145 component for SDD called AIX Attachment Scripts for SVC. This component can be found from the SVC Support site:
http://www.ibm.com/servers/storage/support/virtual/2145.html
Use smitty install_update, and select Install Software. In the INPUT device field, the included packages will be displayed, as in Example 4-9.
Example 4-9 Install 2145 component for SDD
Install and Update Software by Package Name (includes devices and printers)
Type or select a value for the entry field.
Press Enter AFTER making all desired changes.
  +-----------------------------------------------------------------------+
  |                     Select Software to Install                        |
  |                                                                       |
  | Move cursor to desired item and press F7. Use arrow keys to scroll.   |
  |   ONE OR MORE items can be selected.                                  |
  | Press Enter AFTER making all selections.                              |
  |                                                                       |
  |  #------------------------------------------------------------       |
  |  # KEY: @ = Already installed                                         |
  |  #------------------------------------------------------------       |
  |                                                                       |
  |  ibm2145.rte                                               ALL        |
  |    4.3.2002.1111 IBM 2145 TotalStorage SAN Volume Controller          |
  |                                                                       |
  |  F1=Help      F2=Refresh    F3=Cancel                                 |
  |  F7=Select    F8=Image      F10=Exit                                  |
  |  Enter=Do     /=Find        n=Find Next                               |
  +-----------------------------------------------------------------------+
4. Select ibm2145.rte and press Enter to install. 5. Verify that SDD has installed successfully using lslpp -l *sdd*, as in Example 4-10.
Example 4-10 Verify that SDD has been installed
root@aix2:/# lslpp -l '*sdd*'
  Fileset               Level    State      Description
  ----------------------------------------------------------------------------
Path: /usr/lib/objrepos
  devices.sdd.51.rte    1.5.1.0  COMMITTED  IBM Subsystem Device Driver for AIX V51
Path: /etc/objrepos
  devices.sdd.51.rte    1.5.1.0  COMMITTED  IBM Subsystem Device Driver for AIX V51
Note: You do not need to reboot the pSeries, even though the installation message indicates this. SDD on your AIX client platform has now been installed, and you are ready to configure SDD. Tip: For AIX 5L Version 5.1 and AIX 5L Version 5.2, the published limitation on one system is 10,000 devices. The combined number of hdisk and vpath devices should not exceed the number of devices that AIX supports. In a multipath environment, because each path to a disk creates an hdisk, the total number of disks being configured can be reduced by the number of paths to each disk.
2. Verify that you can see the vpaths using lsdev -Cc disk | grep vpath (Example 4-12). Here we see the consolidated devices, representing the four actual disks.
Example 4-12 Verify that you can see the vpaths
root@aix2:/# lsdev -Cc disk | grep "vpath*"
vpath0 Available Data Path Optimizer Pseudo Device Driver
vpath1 Available Data Path Optimizer Pseudo Device Driver
vpath2 Available Data Path Optimizer Pseudo Device Driver
vpath3 Available Data Path Optimizer Pseudo Device Driver
In our setup, four user data LUNs have been assigned to the clients. To verify that they have been correctly configured for SDD and correspond to the hdisk listing, use datapath query device (Example 4-13 shows how the command works).
Example 4-13 Verify that vpaths correlate to the hdisks
root@aix2:/# datapath query device
Total Devices : 4

DEV#: 0  DEVICE NAME: vpath0  TYPE: 2145  POLICY: Optimized
SERIAL: 600507680185001B2000000000000003
==========================================================================
Path#  Adapter/Hard Disk  State   Mode    Select  Errors
   0   fscsi0/hdisk2      CLOSE   NORMAL       0       0
   1   fscsi0/hdisk6      CLOSE   NORMAL       0       0
   2   fscsi1/hdisk10     CLOSE   NORMAL       0       0
   3   fscsi1/hdisk14     CLOSE   NORMAL       0       0

DEV#: 1  DEVICE NAME: vpath1  TYPE: 2145  POLICY: Optimized
SERIAL: 600507680185001B2000000000000001
==========================================================================
Path#  Adapter/Hard Disk  State   Mode    Select  Errors
   0   fscsi0/hdisk3      CLOSE   NORMAL       0       0
   1   fscsi0/hdisk7      CLOSE   NORMAL       0       0
   2   fscsi1/hdisk11     CLOSE   NORMAL       0       0
   3   fscsi1/hdisk15     CLOSE   NORMAL       0       0

DEV#: 2  DEVICE NAME: vpath2  TYPE: 2145  POLICY: Optimized
SERIAL: 600507680185001B2000000000000002
==========================================================================
Path#  Adapter/Hard Disk  State   Mode    Select  Errors
   0   fscsi0/hdisk4      CLOSE   NORMAL       0       0
   1   fscsi0/hdisk8      CLOSE   NORMAL       0       0
   2   fscsi1/hdisk12     CLOSE   NORMAL       0       0
   3   fscsi1/hdisk16     CLOSE   NORMAL       0       0

DEV#: 3  DEVICE NAME: vpath3  TYPE: 2145  POLICY: Optimized
SERIAL: 600507680185001B2000000000000006
==========================================================================
Path#  Adapter/Hard Disk  State   Mode    Select  Errors
   0   fscsi0/hdisk5      CLOSE   NORMAL       0       0
   1   fscsi0/hdisk9      CLOSE   NORMAL       0       0
   2   fscsi1/hdisk13     CLOSE   NORMAL       0       0
   3   fscsi1/hdisk17     CLOSE   NORMAL       0       0
In our setup, we assigned four SVC LUNs to the AIX client, using four paths to each SVC LUN. If your LUNs do not show up as expected, continue with the next steps to configure your disk devices to work with SDD. If the disk devices have been configured correctly, the SDD setup for AIX 5L Version 5.1 is complete. 3. If you have already created ESS or SVC volume groups, vary off (deactivate) all active ESS or SVC volume groups by using the varyoffvg AIX command. Attention: Before you vary off a volume group, unmount all file systems in that volume group. If supported storage devices (hdisks) are used as physical volumes of an active volume group and file systems of that volume group are mounted, you must unmount all file systems and vary off all active volume groups with supported storage device SDD disks in order to configure SDD vpath devices correctly.
4. Using smit devices, highlight Data Path Device and press Enter. The Data Path Device panel is displayed, as shown in Example 4-14.
Example 4-14 Data Path Device panel
Data Path Devices
Move cursor to desired item and press Enter.
  Display Data Path Device Configuration
  Display Data Path Device Status
  Display Data Path Device Adapter Status
  Define and Configure all Data Path Devices
  Add Paths to Available Data Path Devices
  Configure a Defined Data Path Device
  Remove a Data Path Device
5. Select Define and Configure All Data Path Devices. The configuration process begins. When complete, the output should look similar to Example 4-15.
Example 4-15 Devices configured
COMMAND STATUS
Command: OK    stdout: yes    stderr: no
Before command completion, additional instructions may appear below.
vpath0 Available Data Path Optimizer Pseudo Device Driver
vpath1 Available Data Path Optimizer Pseudo Device Driver
vpath2 Available Data Path Optimizer Pseudo Device Driver
vpath3 Available Data Path Optimizer Pseudo Device Driver
6. Exit smitty and then verify the SDD configuration, as described in steps 1 through 3 above.
7. Use the varyonvg command to vary on all deactivated supported storage device volume groups.
8. If you want to convert a supported storage device hdisk volume group to SDD vpath devices, you must run the hd2vp utility. SDD provides two conversion scripts, hd2vp and vp2hd: hd2vp converts a volume group from supported storage device hdisks to SDD vpaths, and vp2hd converts a volume group from SDD vpaths back to supported storage device hdisks. Use vp2hd if you want to configure the applications back to their original supported storage device hdisks, or if you want to remove SDD from your AIX client. For more information about these scripts, consult your SDD user guide. You have now successfully configured SDD for AIX 5L Version 5.1.
Note: It is important that you verify the SDD level at the SDD Web site:
http://www.ibm.com/servers/storage/support/software/sdd/
4. Start SDD:
# sdd start
The datapath query adapter output at this point (adapter names are not reproduced here) showed two adapters, each with five paths, all active:
Select  Errors  Paths  Active
  2778       0      5       5
    25       0      5       5
Verify that you can display information about the devices currently assigned to the MDS, using datapath query device, as shown in Example 4-17. We see the correct output: one SVC device is attached to the SCSI path. This will be used for the System Pool.
Example 4-17 Display information about devices that are currently configured for SDD
mds1:~ # datapath query device
DEV#: 0  DEVICE NAME: vpatha  TYPE: 2145  POLICY: Optimized
SERIAL: 600507680188801b2000000000000000
============================================================================
Path#  Adapter/Hard Disk   State   Mode    Select  Errors
   0   Host2Channel0/sdb   CLOSE   NORMAL       0       0
   1   Host2Channel0/sde   CLOSE   NORMAL       0       0
   2   Host3Channel0/sdh   CLOSE   NORMAL       0       0
   3   Host3Channel0/sdk   CLOSE   NORMAL       0       0
Once you have confirmed that SDD has correctly configured the HBAs and the disk devices, repeat these steps on the other MDS servers. Carefully note the serial number (SERIAL field) corresponding to each vpathx device on each MDS; the mapping may not be the same on every MDS. When installing SAN File System, we need to specify at least one device name (/dev/rvpathx) to be configured as the first volume in the System Pool. This is specified for each MDS, so it is vital that the device corresponding to the correct serial number is entered for each MDS (in this example, the device is vpatha, with serial number 600507680188801b2000000000000000).
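Because the vpath-to-serial mapping can differ between MDSs, it helps to extract the mapping from each server's datapath query device output and compare the results side by side. The capture file below mirrors Example 4-17; the file name and content are illustrative:

```shell
# Pull "device-name serial" pairs out of saved `datapath query device` output,
# so the mapping can be diffed across MDSs before installation.
cat > /tmp/mds1_datapath.txt <<'EOF'
DEV#: 0  DEVICE NAME: vpatha  TYPE: 2145  POLICY: Optimized
SERIAL: 600507680188801b2000000000000000
EOF

awk '/DEVICE NAME:/ { name = $5 }
     /^SERIAL:/     { print name, $2 }' /tmp/mds1_datapath.txt
```

Saving one such listing per MDS and comparing them (for example, with diff) makes it obvious when the same serial number maps to different vpath names on different servers.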
To install RDAC, follow the instructions in the manual IBM TotalStorage FAStT Storage Manager Version 9 Installation and Support Guide for AIX, HP-UX, Solaris, and Linux on POWER, GC26-7648. This manual also covers installation of RDAC on Solaris, which we do not cover here. Verify that the correct version of the software was successfully installed with the lslpp command:
# lslpp -ah devices.fcp.disk.array.rte
Configure the devices for the software changes to take effect by typing the following command:
# cfgmgr -v
This allows you to see whether the RDAC software recognizes the FAStT volumes, as shown in the following list:
- Each DS4300 (FAStT600) volume is recognized as a 1722 (600) Disk Array Device.
- Each DS4400 (FAStT700) volume is recognized as a 1742 (700) Disk Array Device.
- Each DS4500 (FAStT900) volume is recognized as a 1742-900 Disk Array Device.
Example 4-19 on page 121 shows the output of the lsdev command for a set of DS4500 (FAStT900) LUNs.
Example 4-19 Device listing for DS4500 LUNs
# lsdev -Cc disk
hdisk0  Available 10-88-00-8,0  16 Bit LVD SCSI Disk Drive
hdisk32 Available 31-08-01      1742-900 Disk Array Device
hdisk33 Available 91-08-01      1742-900 Disk Array Device
hdisk34 Available 31-08-01      1742-900 Disk Array Device
hdisk35 Available 91-08-01      1742-900 Disk Array Device
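As a quick sanity check, the lsdev output can be tallied by device type to confirm that every expected LUN surfaced. count_array_devices is our own illustrative helper (not an AIX command), run here against a saved copy of the Example 4-19 listing; adjust the type string (1722, 1742, or 1742-900) for your subsystem:

```shell
# Sketch: count how many disks of a given array type appear in saved
# `lsdev -Cc disk` output (the type is the fourth column).
count_array_devices() {
    awk '$4 == "1742-900" { n++ } END { print n + 0 }' "$1"
}

# Saved copy of the Example 4-19 listing
cat > /tmp/lsdev.txt <<'EOF'
hdisk0  Available 10-88-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk32 Available 31-08-01     1742-900 Disk Array Device
hdisk33 Available 91-08-01     1742-900 Disk Array Device
hdisk34 Available 31-08-01     1742-900 Disk Array Device
hdisk35 Available 91-08-01     1742-900 Disk Array Device
EOF

count_array_devices /tmp/lsdev.txt
```

If the count does not match the number of LUNs mapped to the host, re-run cfgmgr and re-check the zoning before continuing.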
The README file contains detailed installation instructions; we just summarize the procedure here. It is similar for both the MDS (SUSE Linux) and the SAN File System client (Red Hat Linux). You will also find more information in the manual IBM TotalStorage FAStT Storage Manager Version 9 Installation and Support Guide for Intel-based Operating System Environments, GC26-7649. The Linux RDAC driver is released as a gzip-compressed tar archive of source code. To unpack it, enter the following command at the Linux command prompt:
mds4:/tmp/rdac # tar -zxvf ibmrdac-linux-xx.xx.xx.xx.tar.gz
where xx.xx.xx.xx is the release version of the RDAC driver (09.00.a5.00 at the time of the writing of this redbook). The source files will uncompress to the linuxrdac directory. Attention: The Host server must have the non-fail-over Fibre Channel HBA device driver properly built and installed before the Linux RDAC driver installation. Refer to the FC HBA device driver README or the FC HBA User Guide for instructions on installing the non-fail-over version of the device driver. The driver source tree is included in the package if you need to build it from the source tree. To build and install the RDAC package, first perform the steps shown in Example 4-20. These will ensure synchronization between the RDAC driver and the running kernel. The output of these commands is omitted for brevity.
Example 4-20
# cd /usr/src/linux
# make mrproper
# make cloneconfig
# make dep
# make -j 8 modules
Next, change to the linuxrdac directory (for example, cd /tmp/rdac/linuxrdac) and remove the old driver modules in that directory by typing the following command:
make clean
The next step copies the driver modules to the kernel module tree and builds the new RAMdisk image (mpp.img) which includes the RDAC driver modules and all driver modules that are needed during boot time. Run the following command:
make install
After installing RDAC, verify that the driver has discovered the available physical LUNs and created virtual LUNs for them. Use the following command:
ls -lR /proc/mpp
(The top level of /proc/mpp contains the entries ., .., H3_FastT600_SISRack, and the mppVBusNode character device, major number 254.) The listing continues:
/proc/mpp/H3_FastT600_SISRack/controllerA:
total 0
dr-xr-xr-x 3 root root 0 May 14 15:42 .
dr-xr-xr-x 4 root root 0 May 14 15:42 ..
dr-xr-xr-x 2 root root 0 May 14 15:42 qla2300_h3c0t0

/proc/mpp/H3_FastT600_SISRack/controllerA/qla2300_h3c0t0:
total 0
dr-xr-xr-x 2 root root 0 May 14 15:42 .
dr-xr-xr-x 3 root root 0 May 14 15:42 ..
-rw-r--r-- 1 root root 0 May 14 15:42 LUN0
-rw-r--r-- 1 root root 0 May 14 15:42 UTM_LUN31

/proc/mpp/H3_FastT600_SISRack/controllerB:
total 0
dr-xr-xr-x 3 root root 0 May 14 15:42 .
dr-xr-xr-x 4 root root 0 May 14 15:42 ..
dr-xr-xr-x 2 root root 0 May 14 15:42 qla2300_h2c0t0

/proc/mpp/H3_FastT600_SISRack/controllerB/qla2300_h2c0t0:
total 0
dr-xr-xr-x 2 root root 0 May 14 15:42 .
dr-xr-xr-x 3 root root 0 May 14 15:42 ..
-rw-r--r-- 1 root root 0 May 14 15:42 LUN0
-rw-r--r-- 1 root root 0 May 14 15:42 UTM_LUN31
mds3:~ #
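Rather than reading the listing by eye, the virtual LUN entries can be counted programmatically. count_mpp_luns is our own helper name, and the mock directory tree below merely mirrors the listing above so the sketch is self-contained; on a real MDS you would run it against /proc/mpp itself:

```shell
# Sketch: count the LUNn entries RDAC exposes under /proc/mpp
# (UTM_LUN* access LUNs are deliberately excluded).
count_mpp_luns() {
    find "${1:-/proc/mpp}" -name 'LUN[0-9]*' | wc -l | tr -d ' '
}

# Mock tree mirroring the listing above, for rehearsal only
mkdir -p /tmp/mpp/H3_FastT600_SISRack/controllerA/qla2300_h3c0t0
mkdir -p /tmp/mpp/H3_FastT600_SISRack/controllerB/qla2300_h2c0t0
touch /tmp/mpp/H3_FastT600_SISRack/controllerA/qla2300_h3c0t0/LUN0
touch /tmp/mpp/H3_FastT600_SISRack/controllerB/qla2300_h2c0t0/LUN0

count_mpp_luns /tmp/mpp
```

The count should equal the number of mapped LUNs multiplied by the number of controller paths; a lower figure usually points to a zoning or HBA driver problem.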
For GRUB, edit the boot loader configuration file (on our SLES system, /boot/grub/menu.lst; on Red Hat systems, /etc/grub.conf) and copy the original configuration to a new entry at the beginning of the boot list, changing the new entry's initrd image to mpp.img. It should look something like Example 4-22 (note that it may vary with a different system configuration).
Example 4-22 Editing the GRUB configuration (menu.lst)
mds4:/tmp/rdac/linuxrdac # vi /boot/grub/menu.lst
gfxmenu (hd0,0)/boot/message
color white/blue black/light-gray
default 0
timeout 8
title linux with mpp support
    kernel (hd0,0)/boot/vmlinuz root=/dev/sda1 acpi=oldboot
    initrd (hd0,0)/boot/mpp.img
title linux
    kernel (hd0,0)/boot/vmlinuz root=/dev/sda1 acpi=oldboot
    initrd (hd0,0)/boot/initrd
title floppy
    root (fd0)
    chainloader +1
title failsafe
    kernel (hd0,0)/boot/vmlinuz.shipped root=/dev/sda1 ide=nodma apm=off acpi=off vga=normal nosmp disableapic maxcpus=0 3
    initrd (hd0,0)/boot/initrd.shipped
mds4:/tmp/rdac/linuxrdac #
If you make any changes to the MPP configuration file (/etc/mpp.conf) or persistent binding file (/var/mpp/devicemapping), run mppUpdate to re-build the RAMdisk image to include the new file so that the new configuration file (or persistent binding file) can be used on the next system reboot.
The fdisk -l command, shown in Example 4-23, displays two DS4500 LUNs (sdb and sdc) in addition to the operating system disk (sda). Note that if you install the Storage Manager runtime and Storage Manager utilities, you can also use commands such as SMdevices to list the RDAC devices. These packages are available at the following Web site:
http://www.ibm.com/pc/support/site.wss/document.do?lndocid=MIGR-60591

Example 4-23 fdisk -l command output
mds1:~ # fdisk -l

Disk /dev/sda: 255 heads, 63 sectors, 4420 cylinders
Units = cylinders of 16065 * 512 bytes

   Device Boot   Start     End     Blocks   Id  System
/dev/sda1   *        1    1267   10177146   83  Linux
/dev/sda2         1268    1529    2104515   82  Linux swap
/dev/sda3         1530    3618  16779892+   83  Linux

Disk /dev/sdb: 255 heads, 63 sectors, 30335 cylinders
Units = cylinders of 16065 * 512 bytes

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/sdc: 255 heads, 63 sectors, 30335 cylinders
Units = cylinders of 16065 * 512 bytes

Disk /dev/sdc doesn't contain a valid partition table
mds1:~ #
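Because the disks that fdisk reports without a valid partition table are exactly the candidate SAN File System volumes on a freshly zoned MDS, that check can be scripted. unpartitioned_disks is our own helper parsing saved fdisk -l output (abbreviated sample below):

```shell
# Sketch: list the disks that `fdisk -l` reports as having no valid
# partition table, from a saved copy of its output.
unpartitioned_disks() {
    # The device name is the second word of the warning line
    grep "valid partition table" "$1" | awk '{ print $2 }'
}

# Abbreviated sample from Example 4-23
cat > /tmp/fdisk.txt <<'EOF'
Disk /dev/sda: 255 heads, 63 sectors, 4420 cylinders
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdc doesn't contain a valid partition table
EOF

unpartitioned_disks /tmp/fdisk.txt
```

Comparing this list against the vpath or RDAC device inventory confirms that no stray, already-partitioned disk is about to be handed to SAN File System.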
Chapter 5.
The MDS requires one of the following operating system levels:
- SUSE Linux Enterprise Server (SLES) 8 with Service Pack 4, running the 2.4.21-278-smp kernel. (SLES 8 is also referred to as UnitedLinux.) Service Pack 3 must be installed before Service Pack 4; the required kernel level is included with the Service Pack 4 GA distribution.
- SUSE Linux Enterprise Server (SLES) 9 with Service Pack 1, running the 2.6.5-7.151-bigsmp kernel and kernel source. You can obtain the required kernel and source packages from your SUSE Maintenance Web service.
Note: Our examples use SLES 8. SAN File System is also supported on an MDS running SLES 9. Check the IBM TotalStorage SAN File System Installation and Configuration Guide, GA27-4316, for more details on the differences between installing the different SUSE versions.
Tips: If using SLES 9, you require United Linux Service Pack 1, the QLogic HBA device driver 8.00.00, and kernel version 2.6.5-7.151. See the IBM TotalStorage SAN File System Installation and Configuration Guide, GA27-4316, for SLES 9 installation instructions, since several of the steps, including Ethernet bonding, are different.
4. Select Option 1 - Update System to Service Pack 3 level.
5. After the updates have been applied, you are prompted to quit. Press Enter.
6. Unmount the CD-ROM and remove it from the CD-ROM drive:
umount /media/cdrom/
7. Reboot the engine (shutdown -r now). After rebooting, log in as root.
8. Repeat steps 1 to 7 for Service Pack 4.
9. Verify that the required kernel level is installed:
rpm -qa | grep -e k_smp -e kernel
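The kernel check in step 9 can be scripted so that the same verification runs unattended on every engine. kernel_ok is our own illustrative helper (in real use, pass "$(uname -r)" as the second argument); the required level comes from the operating system list earlier in this section:

```shell
# Sketch: compare the running kernel against the level required for the
# chosen SLES release, printing yes/no for easy use in scripts.
kernel_ok() {
    required="$1"
    actual="$2"   # in real use: "$(uname -r)"
    [ "$required" = "$actual" ] && echo yes || echo no
}

kernel_ok "2.4.21-278-smp" "2.4.21-278-smp"
kernel_ok "2.4.21-278-smp" "2.4.21-251-smp"
```

Run against each MDS in turn, this catches an engine that missed a service pack before the cluster installation fails part-way through.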
3. Set the time zone if you did not set it during installation. Choose the appropriate time zone setting from the listings in /usr/share/zoneinfo. Example 5-3 shows setting the time zone to US Eastern time.
Example 5-3 Time zone settings
# rm /etc/localtime
# ln -s /usr/share/zoneinfo/EST5EDT /etc/localtime
4. Set the system time from the hardware clock with the hwclock command (Example 5-4).
Example 5-4 Time setting from hardware clock
# hwclock --hctosys
Tip: Make sure each MDS in the cluster is set to the same date and time!
2. Change to the networking configuration directory /etc/sysconfig/network/. Look at the configuration file ifcfg-eth0 to check that the IPADDR and NETMASK values are appropriately set, as shown in Example 5-6.
Example 5-6 IPADDR and NETMASK settings
# cat /etc/sysconfig/network/ifcfg-eth0
IPADDR=9.82.22.171
NETMASK=255.255.255.0
BROADCAST=9.82.22.255
3. Check that the file /etc/resolv.conf includes correct DNS information. At a minimum, you need one nameserver and domain entry, as shown in Example 5-7.
Example 5-7 DNS settings example: /etc/resolv.conf
nameserver 192.168.254.100
nameserver 192.168.254.101
domain company.com
search company.net company.com
Note: If DNS is not being used, the IP addresses and host names of each SAN File System engine must be included in the /etc/hosts file on each SAN File System engine.
4. Check /etc/sysconfig/network/routes for the TCP/IP routing information, including, at a minimum, a default route (see Example 5-8 on page 131).
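The note above is easy to violate as engines are added, so the /etc/hosts entries are worth checking mechanically. check_hosts_file is our own helper; the host names and addresses below are illustrative only:

```shell
# Sketch: report any engine names missing from a hosts file.
check_hosts_file() {
    hosts_file="$1"; shift
    for engine in "$@"; do
        # -w matches the name as a whole word (bare or dotted form)
        grep -qw "$engine" "$hosts_file" || echo "missing: $engine"
    done
}

# Illustrative hosts file for a two-engine cluster
cat > /tmp/hosts <<'EOF'
9.82.22.171 tank-mds3 tank-mds3.company.com
9.82.22.172 tank-mds4 tank-mds4.company.com
EOF

check_hosts_file /tmp/hosts tank-mds3 tank-mds4 tank-mds5
```

Running it against /etc/hosts on every engine, with the full engine list as arguments, verifies the cross-resolution requirement in one pass.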
Example 5-8 IP routing - /etc/sysconfig/network/routes
224.0.0.0   0.0.0.0     240.0.0.0   eth0   multicast
default     9.82.22.1   0.0.0.0     eth0
5. If you made any changes to the network configuration, reboot the MDS (run shutdown -r now).
6. Use ifconfig to verify network operation, as shown in Example 5-9.
Example 5-9 ifconfig
# ifconfig
eth0  Link encap:Ethernet  HWaddr 00:10:18:00:47:29
      inet addr:9.82.22.171  Bcast:9.82.22.255  Mask:255.255.255.0
      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
      RX packets:1625358 errors:0 dropped:0 overruns:0 frame:0
      TX packets:263962 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:100
      RX bytes:128100235 (122.1 Mb)  TX bytes:24174608 (23.0 Mb)
      Interrupt:20 Memory:efff0000-f0000000

lo    Link encap:Local Loopback
      inet addr:127.0.0.1  Mask:255.0.0.0
      UP LOOPBACK RUNNING  MTU:16436  Metric:1
      RX packets:78270 errors:0 dropped:0 overruns:0 frame:0
      TX packets:78270 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:0
      RX bytes:51608135 (49.2 Mb)  TX bytes:51608135 (49.2 Mb)
7. Check that you can ping other host names in your network from the MDS, and that the MDS host name itself, in both short and fully qualified form (if used), is resolvable from other hosts before proceeding. If you have problems, re-check the network settings and Ethernet cabling, as well as the DNS configuration, if used.
8. You can now perform the rest of the installation from another system with an SSH connection to your MDS; that is, you do not need to be at the MDS console. If you do not have an SSH-enabled system, see 5.7, SAN File System MDS remote access setup (PuTTY / ssh) on page 228 and 7.1.1, Accessing the CLI on page 252 for details on how to do this task.
Redundant Ethernet support has several benefits for SAN File System clients:
- A single Ethernet component failure no longer needs to result in a metadata service outage or a failover. This makes a network partition, a particularly disruptive type of failure, much less likely.
- It reduces the chance that a failure will cause SAN File System to return a file system error to the application.
- It allows certain client network maintenance (for example, switch replacement) to be performed without impacting access to the SAN File System service.
The procedure for enabling Ethernet bonding differs slightly between SLES 8 and SLES 9. The IBM TotalStorage SAN File System Installation and Configuration Guide, GA27-4316, provides instructions for the SLES 9 environment. In our environment (SLES 8 Service Pack 4), each MDS has two Broadcom Gigabit Ethernet adapters. You could also enable bonding on the x345 using the built-in Fast Ethernet adapters. Perform the following steps on each MDS in turn.
Important: If you are configuring Ethernet bonding on an existing SAN File System cluster, run the steps on any subordinate(s) first, and finally on the master MDS.
1. Enter /etc/init.d/network stop to stop networking.
2. Check the configuration of the first Ethernet interface in the file /etc/sysconfig/network/ifcfg-eth0. The value of BOOTPROTO must be static, as in Example 5-10.
Example 5-10 BOOTPROTO value
tank-mds2:~ # cat /etc/sysconfig/network/ifcfg-eth0
BOOTPROTO='static'
BROADCAST='9.82.24.255'
IPADDR='9.82.24.96'
NETMASK='255.255.255.0'
NETWORK='9.82.24.0'
STARTMODE='onboot'
UNIQUE='QOEa.4zNNCpehEiC'
WIRELESS='no'
device='eth0'
3. Add the lines shown in Example 5-11 to /etc/init.d/boot.local so that bonding is configured on each system reboot. The options on the first modprobe command set the bonding mode and the link-monitoring interval; SAN File System requires mode=active-backup and miimon=100. In active-backup mode, only one of the interfaces is active, while the other waits in standby until needed. The second modprobe statement loads the NIC driver; since we are using the Broadcom Gigabit Ethernet NICs, we specify bcm5700 (if using the Intel adapter, enter modprobe e1000). The ifconfig and ifenslave commands create an adapter called bond0 with the same TCP/IP address as eth0 and tie eth0 and eth1 to the bond0 adapter. Unlike SLES 9, SLES 8 requires this enslaving step to be repeated after each boot, which is why these commands are placed in boot.local.
Example 5-11 Bonding options
# Here you should add things that should happen directly after booting
# before we're going to the first run level.
#
modprobe bonding mode=active-backup miimon=100
modprobe bcm5700
ifconfig bond0 9.82.24.96 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1
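Since the same boot.local fragment must be stamped onto every MDS with only the IP address changing, generating it from variables avoids copy-and-paste slips. bonding_stanza is our own helper; the mode=active-backup and miimon=100 settings and the bcm5700 module follow the text above, and the ifenslave line reflects the enslaving step the text describes:

```shell
# Sketch: emit the boot.local bonding stanza for one MDS from its
# address, netmask, and NIC driver module.
bonding_stanza() {
    ip="$1"; netmask="$2"; nic_module="$3"
    cat <<EOF
modprobe bonding mode=active-backup miimon=100
modprobe $nic_module
ifconfig bond0 $ip netmask $netmask up
ifenslave bond0 eth0 eth1
EOF
}

bonding_stanza 9.82.24.96 255.255.255.0 bcm5700
```

Redirecting the output with `>> /etc/init.d/boot.local` on each engine (with that engine's own address) keeps the configurations consistent across the cluster.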
4. Check /etc/sysconfig/network/routes and make sure that the default route is not tied to a specific adapter, such as eth0 or eth1, as in Example 5-12.
Example 5-12 Routes
tank-mds1:~ # more /etc/sysconfig/network/routes
#
default 9.82.24.1 0.0.0.0
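Step 4's requirement (a default route with no device column) can also be verified with a one-line parser of the SLES routes file, whose columns are destination, gateway, netmask, and device. default_route_device is our own helper; it prints "-" when the default route is not pinned to an adapter:

```shell
# Sketch: print the device column of the default route, or "-" if the
# route is not tied to a specific adapter.
default_route_device() {
    awk '$1 == "default" { print ($4 == "" ? "-" : $4) }' "$1"
}

# Sample routes file with an adapter-independent default route
cat > /tmp/routes <<'EOF'
# default route not tied to an adapter
default 9.82.24.1 0.0.0.0
EOF

default_route_device /tmp/routes
```

Any output other than "-" means the default route is still bound to eth0 or eth1 and bonding failover would not protect it.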
5. Now reboot the MDS to activate the changes. To verify that bonding is active, check the status of all three adapters (bond0, eth0, and eth1) using the ifconfig command, as in Example 5-13. In our example, with mode=active-backup (specified in Example 5-11 on page 132), eth0 is the adapter sending and receiving traffic, while eth1 sits idle in backup mode.
Example 5-13 Initial ifconfig output
tank-mds1:/etc/sysconfig/network # ifconfig
bond0   Link encap:Ethernet  HWaddr 00:10:18:00:99:5C
        inet addr:9.82.24.96  Bcast:9.255.255.255  Mask:255.255.255.0
        inet6 addr: fe80::210:18ff:fe00:995c/64 Scope:Link
        UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
        RX packets:291601 errors:0 dropped:0 overruns:0 frame:0
        TX packets:207016 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:0
        RX bytes:23762180 (22.6 Mb)  TX bytes:17686933 (16.8 Mb)

eth0    Link encap:Ethernet  HWaddr 00:10:18:00:99:5C
        inet addr:9.82.24.96  Bcast:9.255.255.255  Mask:255.255.255.0
        inet6 addr: fe80::210:18ff:fe00:995c/64 Scope:Link
        UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
        RX packets:276771 errors:0 dropped:0 overruns:0 frame:0
        TX packets:207013 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:100
        RX bytes:22159636 (21.1 Mb)  TX bytes:17686711 (16.8 Mb)
        Interrupt:20 Memory:efff0000-f0000000

eth1    Link encap:Ethernet  HWaddr 00:10:18:00:99:5C
        inet addr:9.82.24.96  Bcast:9.255.255.255  Mask:255.255.255.0
        inet6 addr: fe80::210:18ff:fe00:995c/64 Scope:Link
        UP BROADCAST RUNNING NOARP SLAVE MULTICAST  MTU:1500  Metric:1
        RX packets:14830 errors:0 dropped:0 overruns:0 frame:0
        TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:100
        RX bytes:1602544 (1.5 Mb)  TX bytes:222 (222.0 b)
        Interrupt:22 Memory:edff0000-ee000000
6. Now we can test whether NIC failover works. We disconnected the cable from eth0 while running a continuous ping from a workstation on a different subnet from the MDS. We see a momentary timeout in the ping responses from 9.82.24.96 as eth1 becomes the active adapter (see Example 5-14).
Example 5-14 Ping timeout
Reply from 9.82.24.96: bytes=32 time=299ms TTL=57
Reply from 9.82.24.96: bytes=32 time=46ms TTL=57
Reply from 9.82.24.96: bytes=32 time=10ms TTL=57
Reply from 9.82.24.96: bytes=32 time=10ms TTL=57
Reply from 9.82.24.96: bytes=32 time=7ms TTL=57
Request timed out.
Reply from 9.82.24.96: bytes=32 time=301ms TTL=57
Reply from 9.82.24.96: bytes=32 time=2ms TTL=57
Reply from 9.82.24.96: bytes=32 time=7ms TTL=57
Reply from 9.82.24.96: bytes=32 time=7ms TTL=57
Reply from 9.82.24.96: bytes=32 time=7ms TTL=57
Reply from 9.82.24.96: bytes=32 time=6ms TTL=57
7. We can verify that eth1 is now the active adapter and eth0 is the backup by issuing another ifconfig on tank-mds1, as shown in Example 5-15, and comparing it with the previous output in Example 5-13 on page 133.
Example 5-15 Ifconfig output after eth0 failover
ifconfig
bond0   Link encap:Ethernet  HWaddr 00:10:18:00:99:D5
        inet addr:9.82.24.96  Bcast:9.255.255.255  Mask:255.255.255.0
        inet6 addr: fe80::210:18ff:fe00:99d5/64 Scope:Link
        UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
        RX packets:10525 errors:0 dropped:0 overruns:0 frame:0
        TX packets:7516 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:0
        RX bytes:945831 (923.6 Kb)  TX bytes:715659 (698.8 Kb)

eth0    Link encap:Ethernet  HWaddr 00:10:18:00:99:D5
        inet addr:9.82.24.96  Bcast:9.255.255.255  Mask:255.255.255.0
        inet6 addr: fe80::210:18ff:fe00:99d5/64 Scope:Link
        UP BROADCAST NOARP SLAVE MULTICAST  MTU:1500  Metric:1
        RX packets:5822 errors:0 dropped:0 overruns:0 frame:0
        TX packets:4413 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:100
        RX bytes:540595 (527.9 Kb)  TX bytes:422500 (412.5 Kb)
        Interrupt:20 Memory:efff0000-f0000000

eth1    Link encap:Ethernet  HWaddr 00:10:18:00:99:D5
        inet addr:9.82.24.96  Bcast:9.255.255.255  Mask:255.255.255.0
        inet6 addr: fe80::210:18ff:fe00:99d5/64 Scope:Link
        UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
        RX packets:4703 errors:0 dropped:0 overruns:0 frame:0
        TX packets:3103 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:100
        RX bytes:405236 (395.7 Kb)  TX bytes:293159 (286.2 Kb)
        Interrupt:22 Memory:edff0000-ee000000

lo      Link encap:Local Loopback
        inet addr:127.0.0.1  Mask:255.0.0.0
        inet6 addr: ::1/128 Scope:Host
        UP LOOPBACK RUNNING  MTU:16436  Metric:1
        RX packets:3296 errors:0 dropped:0 overruns:0 frame:0
        TX packets:3296 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:0
        RX bytes:2442382 (2.3 Mb)  TX bytes:2442382 (2.3 Mb)
8. The test has succeeded; we can reconnect the eth0 interface.
9. Repeat these steps on the remaining MDSs.
For the IBM eServer xSeries 346 model, the BIOS is at the following Web site:
http://www.ibm.com/pc/support/site.wss/document.do?sitestyle=ibm&lndocid=MIGR-57356
For the IBM eServer xSeries 365 model, the BIOS is at the following Web site:
http://www.ibm.com/pc/support/site.wss/document.do?sitestyle=ibm&lndocid=MIGR-60101
Follow the README notes that come with the flash BIOS package for installation instructions. In our case, we dumped the BIOS to a diskette and rebooted the MDS with the diskette inserted in the drive. The MDS boots from the diskette, asks some elementary questions, and flashes the BIOS. Next, check the required RSA II card firmware level. You can download this firmware (for the IBM eServer xSeries 345) from the following Web site:
http://www.ibm.com/pc/support/site.wss/document.do?lndocid=MIGR-46489
For the IBM eServer xSeries 346 model, the RSA II firmware is at the following Web site:
http://www.ibm.com/pc/support/site.wss/document.do?sitestyle=ibm&lndocid=MIGR-56759
For the IBM eServer xSeries 365 model, the RSA II firmware is at the following Web site:
http://www.ibm.com/pc/support/site.wss/document.do?sitestyle=ibm&lndocid=MIGR-53861
Instructions for upgrading the BIOS and firmware are given in the manual IBM TotalStorage SAN File System Installation and Configuration Guide, GA27-4316.
2. IBM Subsystem Device Driver (SDD) V1.6.0.1-6 or Redundant Disk Array Controller (RDAC) V9.00.A5.09 or later, as appropriate for your storage subsystem. We provide detailed instructions for installing the device driver in 4.4, Subsystem Device Driver on page 109 and 4.5, Redundant Disk Array Controller (RDAC) on page 119. In our case, we use SDD:
a. Install and start IBMsdd to manage multiple Fibre Channel paths to IBM storage LUNs (the LUNs have already been created and mapped to this host). Go to http://www.ibm.com/servers/storage/support/software/sdd/downloading.html.
b. Click TotalStorage Multipath Subsystem Device Driver downloads, select Subsystem Device Driver for Linux, and start the SDD download for the storage subsystem and OS you are using.
c. Install the SDD driver by running, for example, rpm -U IBMsdd-1.6.0.1-4.i686.ul1.rpm.
d. Configure SDD to restart during boot by running chkconfig sdd 35 (see Example 5-16).
Example 5-16 SDD boot config
tank-mds2:~ # chkconfig sdd 35
sdd   0:off  1:off  2:off  3:on  4:off  5:on  6:off
tank-mds2:~ #
e. Start SDD by running sdd start. f. Verify that SDD devices were configured by running lsvpcfg (see Example 5-17).
Example 5-17 List vpaths
tank-mds2:~ # lsvpcfg
000 vpathc ( 254, 32) 600507680184001aa800000000000087 = /dev/sdc /dev/sde /dev/sdg /dev/sdi
001 vpathd ( 254, 48) 600507680184001aa800000000000088 = /dev/sdd /dev/sdf /dev/sdh /dev/sdj
tank-mds2:~ #
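Because each MDS must discover the same number of LUNs, the vpath totals from lsvpcfg are worth comparing across engines mechanically. vpath_count is our own helper name; the sample is abbreviated from Example 5-17:

```shell
# Sketch: count the vpath devices in saved `lsvpcfg` output.
vpath_count() {
    grep -c ' vpath' "$1"
}

# Abbreviated sample of Example 5-17 output
cat > /tmp/lsvpcfg.txt <<'EOF'
000 vpathc ( 254, 32) 600507680184001aa800000000000087 = /dev/sdc /dev/sde /dev/sdg /dev/sdi
001 vpathd ( 254, 48) 600507680184001aa800000000000088 = /dev/sdd /dev/sdf /dev/sdh /dev/sdj
EOF

vpath_count /tmp/lsvpcfg.txt
```

Capturing lsvpcfg on every MDS and comparing the counts (and serial numbers) turns the "same number of LUNs" check into a one-line diff.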
g. Verify that each MDS discovers the same number of LUNs, and verify that the multi-path device driver restarts after a reboot.
3. Install IBM Java Runtime Environment (provided on the SAN File System installation CD). Mount the CD-ROM (for example, at /media/cdrom) and run the command:
rpm -U /media/cdrom/common/IBMJava2-142-ia32-JRE-1.4.2-1.0.i386.rpm
4. Install heterogeneous security. This is required if you will use advanced heterogeneous security, as described in 8.3, Advanced heterogeneous file sharing on page 347.
5. Set up SSH keys as described in IBM TotalStorage SAN File System Installation and Configuration Guide, GA27-4316. Example 5-18 shows how we enabled ssh keys on our cluster (tank-mds3 and tank-mds4). With ssh keys in place, the installation procedure can run from the master MDS without continually prompting for the passwords of the subordinate MDSs.
Example 5-18 Setup ssh keys for cross-authentication on each MDS
***************************
First, create keys on both MDS. Start with tank-mds4
***************************
tank-mds4:/ # mkdir -p ~/.ssh
tank-mds4:/ # ssh-keygen -t rsa -N ""
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
5f:f5:a2:b5:db:0c:08:71:57:70:12:53:61:68:5e:52 root@tank-mds4
***************************
Now on tank-mds3
***************************
tank-mds3:~ # mkdir -p ~/.ssh
tank-mds3:~ # ssh-keygen -t rsa -N ""
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
8c:a9:74:3c:a2:ff:6f:07:1a:82:9a:4a:c1:13:21:1d root@tank-mds3
***************************
After creating the ssh keys, add the public key from each server to the
$HOME/.ssh/authorized_keys file on each of the other servers. On mds4
***************************
tank-mds4:/ # ssh root@tank-mds3 "cat >> ~/.ssh/authorized_keys" < ~/.ssh/id_rsa.pub
The authenticity of host 'tank-mds3 (9.82.22.171)' can't be established.
RSA key fingerprint is f6:59:1d:2b:0e:1b:0b:9d:28:ee:c9:f4:50:df:5b:af.
Are you sure you want to continue connecting (yes/no)? yes
4425: Warning: Permanently added 'tank-mds3,9.82.22.171' (RSA) to the list of known hosts.
root@tank-mds3's password:
tank-mds4:/ #
***************************
And on mds3
***************************
tank-mds3:~ # ssh root@tank-mds4 "cat >> ~/.ssh/authorized_keys" < ~/.ssh/id_rsa.pub
The authenticity of host 'tank-mds4 (9.82.22.172)' can't be established.
RSA key fingerprint is 38:30:e6:fe:46:0e:d1:31:95:d9:3a:56:ba:fd:5d:a0.
Are you sure you want to continue connecting (yes/no)? yes
400: Warning: Permanently added 'tank-mds4' (RSA) to the list of known hosts.
root@tank-mds4's password:
tank-mds3:~ #
***************************
Finally, verify that the root password is no longer required for ssh. On mds3
***************************
tank-mds3:~ # ssh root@tank-mds4
Last login: Fri Aug 26 04:09:24 2005 from sig-9-48-48-194.mts.ibm.com
tank-mds4:~ # who
root  pts/0  Aug 26 01:19 (0013108fb8cf.wma.ibm.com)
root  pts/1  Aug 26 04:09 (sig-9-48-48-194.mts.ibm.com)
root  pts/2  Aug 26 05:09 (tank-mds3.wsclab.washington.ibm.com)
***************************
On mds4
***************************
tank-mds4:/ # ssh root@tank-mds3
Last login: Fri Aug 26 04:22:11 2005 from 0013108fb8cf.wma.ibm.com
tank-mds3:~ # who
root  pts/0  Aug 26 04:06 (sig-9-48-48-194.mts.ibm.com)
root  pts/1  Aug 26 04:22 (0013108fb8cf.wma.ibm.com)
root  pts/2  Aug 26 05:10 (tank-mds4.wsclab.washington.ibm.com)
tank-mds3:~ #
3. Edit the generated file (/tmp/sfs.conf in our example) and change each entry to match your environment. See 5.2.7, SAN File System cluster configuration on page 147 for details of the parameters included in this file.
4. Run install_sfs-package-<version>.<platform>.sh to install, configure, and start the SAN File System cluster, specifying the configuration file created in the previous steps (for example, /tmp/sfs.conf):
/media/cdrom/SLESx/install_sfs-package-<version>.<platform>.sh --loadcluster --sfsargs "-f /tmp/sfs.conf -noldap"
Note: If you are using an LDAP server rather than local authentication to authenticate SAN File System Administration console users, omit the -noldap option. The command is then /media/cdrom/SLESx/install_sfs-package-<version>.<platform>.sh --loadcluster --sfsargs "-f /tmp/sfs.conf". We provide details of local authentication in 3.5.1, Local authentication on page 72; 4.1.1, Local authentication configuration on page 100; and 5.5, Local administrator authentication option on page 186.
Choose the installation language (we chose 2 for English), press Enter to display the license agreement, and enter 1 when prompted to accept it, as shown in Example 5-19 on page 139.
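Before launching the installer, it can save a failed run to confirm that the configuration file defines every key the prompts will read. missing_sfs_keys is our own helper; the KEY=value format and the abbreviated key list below are assumptions for illustration (check your generated sfs.conf for the authoritative format and full key set):

```shell
# Sketch: report configuration keys absent from an sfs.conf-style file.
missing_sfs_keys() {
    conf="$1"; shift
    for key in "$@"; do
        # Look for the key at the start of a line, assuming KEY=value
        grep -q "^$key=" "$conf" || echo "$key"
    done
}

# Illustrative, abbreviated configuration file
cat > /tmp/sfs.conf <<'EOF'
SERVER_NAME=tank-mds3
CLUSTER_NAME=ITSO_GBURG
IP=9.82.22.171
META_DISKS=/dev/rvpatha
EOF

missing_sfs_keys /tmp/sfs.conf SERVER_NAME CLUSTER_NAME IP META_DISKS SYS_MGMT_IP
```

Any key names printed are the ones the installer will fall back to prompting for interactively.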
Example 5-19 Cluster installation: language and license agreement
tank-mds3:/media/cdrom/SLES8 # ./install_sfs-package-2.2.2-132.i386.sh --loadcluster --sfsargs "-f /tmp/sfs.conf -noldap"
Software Licensing Agreement
 1. Czech
 2. English
 3. French
 4. German
 5. Italian
 6. Polish
 7. Portuguese
 8. Spanish
 9. Turkish
Please enter the number that corresponds to the language you prefer.
2
Software Licensing Agreement
Press Enter to display the license agreement on your screen. Please
read the agreement carefully before installing the Program. After
reading the agreement, you will be given the opportunity to accept it
or decline it. If you choose to decline the agreement, installation
will not be completed and you will not be able to use the Program.
International Program License Agreement
Part 1 - General Terms
BY DOWNLOADING, INSTALLING, COPYING, ACCESSING, OR USING THE PROGRAM
YOU AGREE TO THE TERMS OF THIS AGREEMENT. IF YOU ARE ACCEPTING THESE
TERMS ON BEHALF OF ANOTHER PERSON OR A COMPANY OR OTHER LEGAL ENTITY,
YOU REPRESENT AND WARRANT THAT YOU HAVE FULL AUTHORITY TO BIND THAT
PERSON, COMPANY, OR LEGAL ENTITY TO THESE TERMS. IF YOU DO NOT AGREE
TO THESE TERMS,
- DO NOT DOWNLOAD, INSTALL, COPY, ACCESS, OR USE THE PROGRAM; AND
- PROMPTLY RETURN THE PROGRAM AND PROOF OF ENTITLEMENT TO
Press Enter to continue viewing the license agreement, or,
Enter "1" to accept the agreement, "2" to decline it or "99" to go back
to the previous screen.
1
5. Now the packages are extracted, as in Example 5-20. You will be prompted to confirm the options that you configured in the configuration file /tmp/sfs.conf; accept each one by pressing Enter, or change it to another value.
Example 5-20 Cluster installation: unpack packages and check installation options
Installing /usr/tank/packages/sfs.server.verify.linux_SLES8-2.2.2-91.i386.rpm......
sfs.server.verify.linux_SLES8##################################################
sfs.server.verify.linux_SLES8-2.2.2-91
Installing /usr/tank/packages/sfs.server.config.linux_SLES8-2.2.2-91.i386.rpm......
sfs.server.config.linux_SLES8##################################################
sfs.server.config.linux_SLES8-2.2.2-91

IBM SAN File System metadata server setup
To use the default value that appears in [square brackets], press the ENTER key.
A dash [-] indicates no default is available.

SAN File System CD mount point (CD_MNT)
=======================================
setupsfs needs to access the SAN File System CD to verify the license key and
install required software. Enter the full path to the SAN File System CDs
mount point.
CDs mount point [/media/cdrom]: /media/cdrom

Server name (SERVER_NAME)
=========================
Every engine in the cluster must have a unique name. This name must be the
same as the unique name used to configure the RSA II adapter on each engine.
However, no checks are done by the metadata server to enforce this rule.
Server name [tank-mds3]: tank-mds3

Cluster name (CLUSTER_NAME)
===========================
Specifies the name given to the cluster. This cluster name becomes the global
name space root. For example, when a client mounts the namespace served by
cluster name sanfs on the path /mnt/, the SAN File System is accessed by
/mnt/sanfs/. If a name is not specified, a default cluster name will be
assigned. The cluster name can be a maximum of 30 ASCII bytes or the
equivalent in unicode characters.
Cluster name [ITSO_GBURG]: ITSO_GBURG

Server IP address (IP)
======================
This is the dotted decimal IPv4 address that the local metadata server engine
has bound to its network interface.
Server IP address [9.82.22.171]: 9.82.22.171

Language (LANG)
===============
The metadata server can be configured to use a custom locale. This release
supports only UTF8 locales.
Language [en_US.utf8]: en_US.utf8

System Managment IP (SYS_MGMT_IP)
=================================
Enter the System Managment IP address. This is the address assigned to your
RSA II card.
System Managment IP [9.82.22.173]: 9.82.22.173

Authorized RSA User (RSA_USER)
==============================
Enter the user name used to access the RSA II card.
Authorized RSA User [USERID]: USERID

RSA Password (RSA_PASSWD)
=========================
Enter the password used to access the RSA II card.
RSA Password [PASSWORD]: PASSW0RD

CLI User (CLI_USER)
===================
Enter the user name that will be used to access the administrative CLI. This
user must have an administrative role.
CLI User [itsoadm]: itsoadm

CLI Password (CLI_PASSWD)
=========================
Enter the password used to access the administrative CLI.
CLI Password [itso]: xxxxx

Truststore Password (TRUSTSTORE_PASSWD)
=======================================
Enter the password used to secure the truststore file. The password must be
at least six characters.
Truststore Password [password]: xxxx

LDAP SSL Certificate (LDAP_CERT)
================================
If your LDAP server only allows SSL connections, enter the full path to the
file containing the LDAP certificate. Otherwise, do not enter anything.
LDAP SSL Certificate []:

Metadata disk (META_DISKS)
==========================
A space-separated list of raw devices on which SAN File System metadata is stored.
Metadata disk [/dev/rvpatha]: /dev/rvpatha
Note: If you are using LDAP authentication, (that is, you did not use the -noldap option), you will also be prompted for additional options, as shown in Example 5-21. These will appear after the Language (LANG) option, and before the System Management IP (SYS_MGMT_IP) option shown in Example 5-20. The sample values here correspond to the LDAP configuration shown in 3.5.2, LDAP on page 73 and 4.1.2, LDAP and SAN File System considerations on page 101.
Example 5-21 Cluster installation: LDAP options
LDAP server (LDAP_SERVER)
=========================
An LDAP server is used to authenticate users who will administer the server.
LDAP server IP address [9.42.164.114]: 9.42.164.114
LDAP user (LDAP_USER)
=====================
Distinguished name of an authorized LDAP user.
LDAP user [cn=root]: cn=Manager,o=ITSO

LDAP user password (LDAP_PASSWD)
================================
Password of the authorized LDAP user. This password will need to match the credentials set in your LDAP server.
LDAP user password [atslock]: password

LDAP secured connection (LDAP_SECURED_CONNECTION)
=================================================
Set this value to true if your LDAP server requires SSL connections. If your LDAP server is not using SSL or you are not sure, set this value to false.
LDAP secured connection [false]: false

LDAP roles base distinguished name (LDAP_BASEDN_ROLES)
======================================================
Base distinguished name to search for roles. For example: ou=Roles,o=company,c=country
LDAP roles base distinguished name [ou=Roles,o=ITSO]: ou=Roles,o=ITSO

LDAP members attribute (LDAP_ROLE_MEM_ID_ATTR)
==============================================
When a SAN File System administration login is attempted, the SAN File System console searches all Role entries to get a list of users that have permission to access the SAN File System Console.
LDAP members attribute [roleOccupant]: roleOccupant

LDAP user id attribute (LDAP_USER_ID_ATTR)
==========================================
When a SAN File System administration login is attempted, the SAN File System Console searches all users which are associated with a SAN File System Role to see if the login attempt should be allowed.
LDAP user id attribute [uid]: uid

LDAP role name attribute (LDAP_ROLE_ID_ATTR)
============================================
The attribute that holds the name of the role.
LDAP role name attribute [cn]: cn
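Before accepting the LDAP values, you may want to confirm that they resolve against the directory. A sketch that assembles an OpenLDAP ldapsearch query from the sample values in Example 5-21; it only prints the command (the filter and requested attribute are our assumptions based on the roleOccupant schema described here), and you should quote the filter if you run it by hand:

```shell
#!/bin/sh
# Sketch: build an ldapsearch command to confirm the role entries exist
# before feeding the values to setupsfs. Server, DN, and password are the
# sample values from Example 5-21; the filter is an assumption.
LDAP_SERVER=9.42.164.114
LDAP_USER="cn=Manager,o=ITSO"
LDAP_PASSWD=password
LDAP_BASEDN_ROLES="ou=Roles,o=ITSO"

# Echo the command instead of running it here, since the ITSO directory
# server is not reachable outside that environment.
CMD="ldapsearch -x -H ldap://$LDAP_SERVER -D $LDAP_USER -w $LDAP_PASSWD -b $LDAP_BASEDN_ROLES (objectClass=organizationalRole) roleOccupant"
echo "$CMD"
```

If the search returns the expected role entries with roleOccupant values, the setupsfs prompts can be answered with confidence.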
6. You will be asked if there are any subordinate nodes in the cluster. Answer yes (the default). You will then be prompted for the host name, Ethernet TCP/IP address, and RSA TCP/IP address of each subordinate MDS, as shown in Example 5-22. Repeat for each subordinate MDS, then answer no to the question "Will this cluster have any subordinates (sic) nodes?" when all subordinates have been entered. We have one subordinate node, tank-mds4.
Example 5-22 Cluster installation: enter subordinate node details
Subordinate server setup
========================
setupsfs will now collect information about each subordinate node in the cluster.
- Enter No if this cluster will not have any subordinate nodes.
- Enter Yes to continue.
Will this cluster have any subordinates nodes? [Yes]: yes

Subordinate Server Name
=======================
Every engine in the cluster must have a unique name.
Subordinate Name. [-]: tank-mds4

Subordinate IP address
======================
The dotted decimal IPv4 address that the subordinate Metadata server engine has bound to its network interface.
Subordinate Server IP address [-]: 9.82.22.172

System Managment IP (SYS_MGMT_IP)
=================================
Enter the System Managment IP address. This is the address assigned to your RSAII card.
System Management IP [-]: 9.82.22.174

Subordinate server setup
========================
- Enter No if there are not any more subordinate nodes.
- Enter Yes to continue.
Is there another subordinates node? [Yes]: no
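Before typing addresses into these prompts, a quick format check can catch typos. An illustrative dotted-decimal IPv4 validator, not part of setupsfs:

```shell
#!/bin/sh
# Sketch: validate a dotted-decimal IPv4 answer before typing it into the
# subordinate-server prompts. Purely illustrative input checking.
valid_ipv4() {
    echo "$1" | awk -F. 'NF == 4 {
        for (i = 1; i <= NF; i++)
            if ($i !~ /^[0-9]+$/ || $i > 255) exit 1
        exit 0
    }
    NF != 4 { exit 1 }'
}

valid_ipv4 9.82.22.172 && echo "address ok"
```

A mistyped subordinate address is cheaper to catch here than after the cluster join fails.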
7. Now the installation proceeds: The unpacked software is installed on the master MDS and all subordinates, and the server processes are started on each MDS. Finally, the subordinates are joined to the SAN File System cluster (see Example 5-23). Note: If you did not set up the ssh keys correctly, as described in 5.2.5, Install prerequisite software on the MDS on page 135, you will be prompted many times to enter the root password for any subordinate node(s).
Example 5-23 Cluster installation: install each MDS and form SAN File System cluster
Run SAN File System server setup
================================
The configuration utility has not made any changes to your system configuration.
- Enter No to quit without configuring the metadata server on this system.
- Enter Yes to start the metadata server.
Run server setup [Yes]: yes
Gathering required files
Copying files to 9.82.22.172
HSTPV0035I Machine tank-mds3 complies with requirements of SAN File System version 2.2.2.91, build sv22_0001.
Installing:sfs.server.verify.linux_SLES8-2.2.2-91.i386.rpm on 9.82.22.172
.
sfs.server.verify.linux_SLES8-2.2.2-91
HSTPV0035I Machine tank-mds4 complies with requirements of SAN File System version 2.2.2.91, build sv22_0001.
.
Installing:wsexpress-5.1.2-1.i386.rpm on 9.82.22.172
.
wsexpress-5.1.2-1
.
Installing:sfs.server.config.linux_SLES8-2.2.2-91.i386.rpm on 9.82.22.172
.
sfs.server.config.linux_SLES8-2.2.2-91
.
Installing:sfs.admin.linux_SLES8-2.2.2-91.i386.rpm on 9.82.22.172
.
HSTWU0011I Installing the SAN File System console...
HSTWU0014I The SAN File System console has been installed successfully.
sfs.admin.linux_SLES8-2.2.2-91
.
Installing:sfs.server.linux_SLES8-2.2.2-91.i386.rpm on 9.82.22.172
.
sfs.server.linux_SLES8-2.2.2-91
Creating configuration for 9.82.22.172
.
Updating configuration file: /tmp/fileIFY1Lm/sfs.conf.9.82.22.172
Updating configuration file: /usr/tank/admin/config/cimom.properties
.
Installing:sfs.admin.linux_SLES8-2.2.2-91.i386.rpm on 9.82.22.171
.
HSTWU0011I Installing the SAN File System console...
HSTWU0014I The SAN File System console has been installed successfully.
sfs.admin.linux_SLES8-2.2.2-91
.
Installing:sfs.server.linux_SLES8-2.2.2-91.i386.rpm on 9.82.22.171
.
sfs.server.linux_SLES8-2.2.2-91
Creating configuration for 9.82.22.171
.
Updating configuration file: /tmp/fileIFY1Lm/sfs.conf.9.82.22.171
HSTAS0005I Creating truststore file.
HSTAS0006I The truststore was created successfully.
Starting the metadata server on 9.82.22.171 .
Starting the CIM agent on 9.82.22.171 . .
Starting the SAN File System Console on 9.82.22.171 . .
Name      State  Server Role Filesets Last Boot
=============================================================
tank-mds3 Online Master      1        Aug 26, 2005 7:17:03 AM
Starting the metadata server on 9.82.22.172 .
Starting the CIM agent on 9.82.22.172 .
Starting the SAN File System Console on 9.82.22.172 .
Name      State  Server Role Filesets Last Boot
=============================================================
tank-mds3 Online Master      1        Aug 26, 2005 7:17:03 AM
Name      State  Server Role Filesets Last Boot
=============================================================
tank-mds3 Online Master      1        Aug 26, 2005 7:17:03 AM
NODE: 0 9.82.22.171 1737 1700 1738 1800 5989 GR tank-mds3 2.2.2.91 9.82.22.173
CMMNP5205I Metadata server 9.82.22.172 on port 1737 was added to the cluster successfully.
Configuration complete.
#
8. You can verify the setup using the sfscli lsserver command, as shown in Example 5-24. It should show one master, with the rest as subordinates. All MDSs should have a state of Online.
Example 5-24 SAN File System installation complete
# sfscli lsserver
Name      State  Server Role  Filesets Last Boot
=============================================================
tank-mds4 Online Master       1        Aug 26, 2005 7:17:03 AM
tank-mds3 Online Subordinate  0        Aug 26, 2005 7:17:47 AM
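This health check can be scripted. A sketch, assuming the exact column layout shown in Example 5-24, that reads lsserver output on stdin and verifies there is exactly one Master and that every MDS is Online:

```shell
#!/bin/sh
# Sketch: verify cluster health from 'sfscli lsserver' output piped on
# stdin. Assumes the table layout of Example 5-24 (two header lines, then
# one row per MDS with State in column 2 and Role in column 3).
check_cluster() {
    awk 'NR > 2 {                       # skip the two header lines
             if ($2 != "Online") bad++
             if ($3 == "Master") masters++
         }
         END { exit (masters == 1 && bad == 0) ? 0 : 1 }'
}
```

Typical use would be `sfscli lsserver | check_cluster && echo "cluster healthy"`.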
9. The installation process stores the software packages for all SAN File System components, including the Metadata server, the administrative server, and all clients, in the directory /usr/tank/packages. Example 5-25 shows the SAN File System packages installed in the directory.
Example 5-25 SAN File System installation packages
# cd /usr/tank/packages
# ls
.
..
inst_list.cd
inst_list.no.cd
sfs-client-WIN2K3-opt-2.2.2.82.exe
sfs.admin.linux_SLES8-2.2.2-91.i386.rpm
sfs.client.aix51
sfs.client.aix52
sfs.client.aix53
sfs.client.linux_RHEL-2.2.2-82.i386.rpm
sfs.client.linux_SLES8-2.2.2-82.i386.rpm
sfs.client.linux_SLES8-2.2.2-82.ppc64.rpm
sfs.client.solaris9.2.2.2-82
sfs.locale.linux_SLES8-2.2.2-8.i386.rpm
sfs.server.config.linux_SLES8-2.2.2-91.i386.rpm
sfs.server.linux_SLES8-2.2.2-91.i386.rpm
sfs.server.verify.linux_SLES8-2.2.2-91.i386.rpm
sfs.server.linux-2.2.0-83.i386.rpm
mds1:/usr/tank/packages #
10.Run the Target Machine Validation Tool (TMVT) to verify that your hardware and software prerequisites have been met. We showed an example of using this tool in 4.2, Target Machine Validation Tool (TMVT) on page 105:
/usr/tank/server/bin/tmvt -r report_file_name
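If you run the TMVT regularly, a small wrapper can flag problems in the report. This is a sketch only; the failure wording it greps for is a guess, not the tool's documented message format, so adjust the pattern to match real reports on your system:

```shell
#!/bin/sh
# Sketch: scan a TMVT report file for failure indicators. The pattern is an
# assumption about typical wording, not the documented TMVT format.
report_ok() {
    ! grep -i -E 'fail|error|not found' "$1" > /dev/null
}
```

Typical use would be `/usr/tank/server/bin/tmvt -r /tmp/tmvt.rpt && report_ok /tmp/tmvt.rpt && echo "prerequisites met"`.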
11.To confirm this setup, you can access the MDS GUI from a browser, as indicated. Figure 5-1 on page 147 shows the SAN File System console login window. After you have signed in using the CLI_USER and password, you can run the GUI, as described in 7.1.2, Accessing the GUI on page 256.
Attribute      Example
==========================
CLUSTER_NAME   ITSO_GBURG
IP             9.82.22.171
LANG           en_US.utf8
Attribute: LDAP_SERVER
Meaning: The IP address or resolvable machine name of the LDAP server. You can select local authentication instead by specifying the -noldap option (see 5.5, Local administrator authentication option on page 186).
Example: 9.42.164.125

Attribute: LDAP_USER
Meaning: Distinguished name of an authorized LDAP user. This user must have read access to the directory where the Roles and Users are.
Example: cn=Manager,o=ITSO

Attribute: LDAP_PASSWD
Meaning: Password of LDAP_USER.
Example: password

Attribute: LDAP_SECURED_CONNECTION
Meaning: Set to true if using a secure LDAP connection (the LDAP certificate must be available). Set to false otherwise.
Example: false

Attribute: LDAP_BASEDN_ROLES
Meaning: The base DN that contains the role objects as leaf nodes.
Example: ou=Roles,o=ITSO

Attribute: LDAP_ROLE_MEM_ID_ATTR
Meaning: The attribute of the role object that points to the DN of a user that belongs to that role.
Example: roleOccupant

Attribute: LDAP_USER_ID_ATTR
Meaning: The attribute that holds the user ID.
Example: uid

Attribute: LDAP_ROLE_ID_ATTR
Meaning: The attribute that holds the name of the role.
Example: cn

Attribute: RSA_USER
Meaning: The user name to use to communicate with the RSA II card.
Example: USERID

Attribute: RSA_PASSWD
Meaning: The password to use to communicate with the RSA II card.
Example: PASSW0RD

Attribute: CLI_USER
Meaning: This is a login that will be used for accessing the SAN File System CLI and GUI. It must belong to a user object in the LDAP directory that is assigned to a specific role (if using LDAP) OR be a local OS user ID defined as in 4.1.1, Local authentication configuration on page 100.
Example: itsoadm

Attribute: CLI_PASSWD
Meaning: The password for the CLI_USER.
Example: xxxxx

Attribute: TRUSTSTORE_PASSWD
Meaning: Specify a password to be used when configuring the truststore.
Example: xxxxx

Attribute: LDAP_CERT
Meaning: Certificate if the LDAP server is using SSL. If LDAP_SECURED_CONNECTION is false, leave this blank.

Attribute: META_DISKS
Meaning: A space-separated list of the fully-qualified raw device names for at least one metadata disk.
Example: /dev/rvpatha
Attribute: NODE_LIST
Meaning: Information about subordinate nodes. We found that even if you complete this field in the format shown in the file, you will still be prompted to enter the values, as in Example 5-22 on page 143.

Attribute: SYS_MGMT_IP
Meaning: Set this value to the IP address of your RSA card.
Example: 9.82.22.173
Installation prerequisites
- Service Pack 4 or higher for Windows 2000 is required.
- One free drive letter is required to attach the SAN File System global namespace.
- The SAN File System cluster must be up and running.
2. Run the executable sfs-client-WIN2K3-version.exe on the client to be installed. Note that the same executable file is used for both Windows 2000 and Windows 2003 installations. Select a language for the installation, either English or Japanese (see Figure 5-2).
Figure 5-3 SAN File System Windows 2000 Client Welcome window
4. You will see a security warning. Click Run to continue (see Figure 5-4 on page 151).
5. In the next window, you are prompted to enter the configuration parameters, as shown in Figure 5-5 on page 152. Enter the appropriate information in the fields and click Next. The fields are:
- SAN File System server name: MDS IP address in dotted decimal. This can be any MDS in the cluster; you can specify the current master MDS if you have trouble choosing one.
- SAN File System server port: 1700 (default).
- SAN File System preferred drive letter: Enter any free drive letter; the default is T.
- SAN File System client name: Enter a name for the Windows client; we recommend using the short host name.
- Disable Disk Management Write Signature Dialogue Box: Make sure this box is checked.
- SAN File System network connection type: Select TCP.
- SAN File System client critical error handling policy: The default is Log.

Important: It is important to check the Disable Disk Management Write Signature dialogue box, as this prevents Windows from writing its own default signature on SAN File System owned volumes. The box is checked by default.
Tip: The installation option, SAN File System client critical error handling policy, determines how the client behaves if it gets critical errors when trying to access the SAN File System global namespace. It has three possible values:

- Log (default): SAN File System client errors are logged to the system log of the client machine.
- freezefs: The client does not attempt to write any more data to the SAN File System drive, and halts communication with the MDS cluster.
6. A confirmation/review window will appear (Figure 5-6 on page 153). Verify that the information is correctly entered, and click Next.
7. On a Windows 2000 client, the installation will now proceed. Skip to step 10 on page 154.
8. On Windows 2003 only, you will get a pop-up informing you that you will have to click twice (see Figure 5-7). Click OK.
9. On Windows 2003, you have to click twice to accept the installation of the IBM SANFS Cluster Bus Enumerator driver (Figure 5-8) and the IBM SANFS Cluster Volume Manager driver (Figure 5-9). These are required for Plug and Play integration of the SAN File System drive with Windows Explorer. Click Yes on each window.
10.After successful installation, you are prompted to start the SAN File System client immediately, as shown in Figure 5-10 on page 155. Click Yes.
11.A final window informs you that installation is complete. You should now be able to view the SAN File System namespace, attached at the drive specified. Open Windows Explorer, and the new drive letter T: should display, as in Figure 5-11. Notice the drive label; this will match the cluster name specified when installing the MDS cluster (CLUSTER_NAME parameter in 5.2.7, SAN File System cluster configuration on page 147). In this case, the Windows client is attached to a SAN File System cluster with the CLUSTER_NAME of ATS_GBURG.
12.Verify that the driver has started successfully. For Windows 2000, select Computer Management → System Tools → System Information → Software Environment → Drivers. You will see the SAN File System drivers in a running state, as in Figure 5-12.
For Windows 2003, select Computer Management → System Tools → Device Manager (see Figure 5-13).
13.You can also see the SAN File System Helper service in the Services applet, as shown in Figure 5-14 on page 157. This service is used for some internal functions, including tracing; it does not stop or start the SAN File System driver.
2. To add the Snap-in for SAN File System, select Console → Add/Remove Snap-in, as shown in Figure 5-16.
3. The Add/Remove Snap-in window opens. Click Add, as shown in Figure 5-17 on page 159.
4. Scroll down to select the IBM TotalStorage File System Snap-in and click Add, as shown in Figure 5-18.
5. Click OK or add other Snap-ins as desired. For example, the Computer Management Snap-in could be useful to monitor the client. When finished, click OK, as in Figure 5-19.
6. Select Console → Save As, as shown in Figure 5-20, to save the MMC console for future use.
7. Enter a location and file name for the MMC console and click Save. We called the MMC console SANFS and saved it to the Desktop, as shown in Figure 5-21. This creates an icon on our desktop, which we can click to launch the console in the future.
8. The MMC has now been configured for use with SAN File System.
The following global properties can be changed using MMC:
- DisableOplocks: Controls the Oplocks feature, which provides improved CIFS performance by caching file data. The default value is 0, which indicates that Oplocks are enabled.
- DisableShortNames: Controls the ShortNames feature, which enables the generation of the MS-DOS 8.3 name format. The default value is 0, which indicates that short names are enabled.
- LogInternalErrors: Enables or disables internal error logging. The default value is 0, which indicates that logging is disabled.
- WriteThrough: Forces all cached writes to be synchronously flushed to disk. The default value is 0, which indicates that this action is disabled.

2. To change any of the global properties, double-click it. In this example, we are changing the DisableShortNames property. The DisableShortNames property window opens. We change the value to 1 and click OK to save, as shown in Figure 5-23.
3. Verify that the value has been changed for the DisableShortNames property in the right hand column, as shown in Figure 5-24. For changes to global properties to take effect, reboot the Windows client.
4. Select Trace Properties, as shown in Figure 5-25. These Trace Properties can be changed:
- Categories: Lists the upper-driver trace classes enabled for tracing.
- CsmCategories: Lists the CSM trace classes enabled for tracing.
5. To change the Trace Properties:
a. Double-click Categories or CsmCategories.
b. Edit the list of trace classes.
c. Click OK to close the window.
Changes to trace properties take effect immediately.
6. The following Volume Properties can be modified:
- Preferred drive letter for the SAN File System namespace
- Windows client name
- The IP address of the MDS
- The TCP port number at which the MDS listens
7. To change a Volume Property, click Volume Property in the left-hand column, then right-click the volume that you want to modify and select Properties, as shown in Figure 5-26. The volume represents the SAN File System namespace.
8. The Volume Properties window will open. Modify any of the values and click OK, as shown in Figure 5-27.
9. For changes to Volume Properties to take effect, close MMC and reboot the Windows client.
Tip: If you are upgrading the SAN File System client from a previous version, you need to first remove the old package using rpm -e, and then install the new package. The output should be similar to that shown in Example 5-27 on page 165.
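The remove-then-reinstall sequence in the tip can be captured in a script. A dry-run sketch: it prints the commands rather than executing them, the package names are the samples from Example 5-25, and OLD/NEW are our own variable names:

```shell
#!/bin/sh
# Sketch of the client upgrade path from the tip above: remove the old
# package, then install the new one. Printed as a dry run; remove the
# 'echo' prefixes to execute. Package names are samples from Example 5-25.
OLD=sfs.client.linux_RHEL
NEW=sfs.client.linux_RHEL-2.2.2-82.i386.rpm

upgrade_client() {
    echo "rpm -e $OLD"        # remove the previous client package first
    echo "rpm -ihv $NEW"      # then install the new level
    echo "/usr/tank/client/bin/setupstclient -prompt"
}
upgrade_client
```

Printing the commands first lets you verify the package names against `rpm -qa` before committing to the removal.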
Example 5-27 Install SAN File System client package
[root@prague code]# rpm -ihv sfs.client.linux_RHEL-2.2.2-82.i386.rpm
Preparing...                ########################################### [100%]
   1:sfs.client.linux_RHEL  ########################################### [100%]
Run /usr/tank/client/bin/setupstclient -prompt to configure and start the SAN File System client.
3. Make sure that the master MDS is running, then configure and start the client with the setupstclient command:
/usr/tank/client/bin/setupstclient -prompt
4. You will be prompted to enter values for the client configuration, as in Example 5-28:
- SAN File System server name (no default)
- SAN File System server port (the default is 1700)
- SAN File System mount point (no default)
- SAN File System client name (the default is the short version of the host name)
- SAN File System network connection type (the default is TCP)
- SAN File System client critical error handling policy (the default is log)
- SAN File System candidate disks
Example 5-28 Linux setupstclient
[root@prague /]# /usr/tank/client/bin/setupstclient -prompt
IBM SAN File System client setup utility

The IBM SAN File System client setup utility performs the following functions:
1. Prompts you for information necessary to set up the SAN File System client.
2. (Optional) Saves the configuration you specify to the file:
   /usr/tank/client/config/stclient.conf
3. (Optional) Runs the setup process:
   a. Loads the SAN File System driver as a kernel module (using the insmod(1) command).
   b. Creates the SAN File System client (using the stfsclient(1) command).
   c. Mounts the SAN File System (using the stfsmount(1) command).

Because the utility does not make changes until the configuration file is saved and the setup process begins, you can press Ctrl-c to exit the utility without making changes at any time before that point.

To use the default value that appears in [square brackets], press Enter. A dash [-] indicates no default is available.

Device candidate list (devices)
===============================
The SAN File System client determines which disks to use as SAN File System user data volumes by searching a list of disks, called device candidates. The device candidate list consists of those devices that have device-special files in the directory you specify.
Device candidate list [pat=/dev/sd*[a-z]]: pat=/dev/sd*

Client name (clientname)
========================
You can set the name of this SAN File System client. The name can be any string, but must be unique. By default, the client setup utility uses the host name (output of the hostname command).
Client name [linux]: LIXPrague

Metadata server IP address (server_ip)
======================================
During setup, the SAN File System client must connect to one of the Metadata servers in the cluster. After the client establishes a connection to the server, the server notifies the client of any other servers in the cluster. Specify the IP address for any Metadata server in the cluster to establish the connection.
Metadata server connection IP address [-]: 9.82.22.172

Metadata server port number (server_port)
=========================================
The SAN File System client must connect to a specific port on the Metadata server. In most cases the Metadata server uses port 1700. Accept this default unless you know the Metadata server was configured to listen on a different port.
Metadata server port number [1700]:

SAN File System mount point (mount_point)
=========================================
The client setup utility mounts the SAN File System to a specified mount point (directory) and creates the file system image. If the specified mount point does not exist it will be created. Once mounted, the directory tree for the file system image appears at that mount point.
Mount point [/mnt/sanfs]: /sfs2

Read-only file system (readonly)
================================
If you mount the SAN File System as read-only, data and metadata in the file system can be viewed, but not modified. Accessing a file system object does not affect its access time attribute.
Mount file system read-only [No]:

NLS converter (convertertype)
=============================
The NLS converter tells the Metadata server how to convert strings from the SAN File System client into Unicode.
NLS converter [ISO-8859-1]:
Transport protocol (nettype)
============================
The transport protocol determines how the SAN File System client connects to the Metadata server. Specify either tcp or udp.
Transport protocol [tcp]:

Record mount in /etc/mtab (etc_mtab)
====================================
By default, if the file system mount succeeds, the client setup utility adds an entry for the file system image to /etc/mtab. You can choose to not record the mount in this file.
Record the mount [Yes]:

Show number of free blocks (always_empty)
=========================================
By default, the number of blocks reported as free blocks by statfs() is actually the number of blocks in partitions that are not assigned to a fileset. Some programs might mistakenly report that there is no free space left in partitions assigned to the fileset, when there is actually free space available. This option forces statfs() to report the number of free blocks as being one less than the number of blocks in the file system.
Always indicate blocks free [No]:

Display verbose messages (verbose)
==================================
By default, the client setup utility runs quietly, suppressing informational messages generated by the commands. You can choose to display these messages by specifying verbose.
Display verbose output [No]: yes

Configuration data collection complete.

Save configuration
==================
You can save the configuration that you just completed to a file. You can modify and use this file to set up additional SAN File System clients on other machines.
Save configuration [Yes]:
Creating configuration file: /usr/tank/client/config/stclient.conf

Run SAN File System client setup
================================
The configuration utility has not made any changes to your system configuration.
- Enter No to quit without configuring the SAN File System client on this system.
- Enter Yes to start the SAN File System client.
Run the SAN File System client setup utility [Yes]:
HSTCL0031I The client named LIXPrague was created with client identifier 159ec800 for SAN File System Metadata server at IP address 9.42.164.114, port 1700.
HSTCL0068I Establishing 256 candidate SAN File System user data disk devices.
HSTMO0015I Mounted SAN File System client LIXPrague of file-system type sanfs over directory /sfs2 in read-write mode.
SAN File System client setup complete.
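The saved file can then be edited and reused on other clients. A hypothetical sketch of its contents, built from the parenthesized option names in Example 5-28; the key=value layout is an assumption on our part, so compare it against the file the utility actually writes before reusing it:

```shell
# Hypothetical contents of /usr/tank/client/config/stclient.conf, assembled
# from the parenthesized option names in Example 5-28. The key=value layout
# is an assumption; verify against the generated file.
devices=pat=/dev/sd*
clientname=LIXPrague
server_ip=9.82.22.172
server_port=1700
mount_point=/sfs2
readonly=No
convertertype=ISO-8859-1
nettype=tcp
etc_mtab=Yes
always_empty=No
```

Only the client-specific values (clientname, possibly mount_point) need to change per machine; the server and protocol settings can stay common across clients.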
In most cases, you can accept the defaults.
5. To validate that the SAN File System was installed properly on the Linux client, use the cat command:
cat /usr/tank/client/VERSION
6. Use the mount command to verify that the SAN File System is mounted on the client. The mount point for the SAN File System should be displayed, /sfs2 in this case, as shown in Example 5-30.
Example 5-30 SAN File System is mounted
[root@prague root]# mount
/dev/sda1 on / type ext2 (rw)
none on /proc type proc (rw)
usbdevfs on /proc/bus/usb type usbdevfs (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sda7 on /home type ext2 (rw)
none on /dev/shm type tmpfs (rw)
/dev/sda2 on /tmp type ext2 (rw)
/dev/sda3 on /usr type ext2 (rw)
/dev/sda5 on /var type ext2 (rw)
LIXPrague on /sfs2 type sanfs (rw)
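The same check can be scripted for client health monitoring. A sketch that scans mount(8)-style output from stdin for a sanfs entry at the expected mount point; the function name is ours:

```shell
#!/bin/sh
# Sketch: confirm a 'type sanfs' entry exists at the given mount point in
# mount(8)-style output read from stdin.
sanfs_mounted() {
    grep -q " on $1 type sanfs "
}
```

Typical use would be `mount | sanfs_mounted /sfs2 && echo "SAN File System mounted"`.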
4. Enter All (the default) when prompted to select the packages to be installed.
5. Enter y when prompted to confirm the installation.
6. Configure and start the client with the setupstclient command:
/usr/tank/client/bin/setupstclient -prompt
You will be prompted to enter values for the client configuration:
- SAN File System server name (no default)
- SAN File System server port (the default is 1700)
- SAN File System mount point (no default)
- SAN File System client name (the default is the short version of the host name)
- SAN File System network connection type (the default is TCP)
- SAN File System client critical error handling policy (the default is log)
In most cases, you can accept the defaults. Make sure to enter the actual IP address of an MDS. The execution of the setupstclient command is similar to that shown in Example 5-28 on page 165.
7. To validate that the SAN File System was installed properly on the Solaris client, use the cat command:
cat /usr/tank/client/VERSION
8. Use the mount command to verify that the SAN File System is mounted on the client. The mount point for the SAN File System should be displayed.
2. Enable asynchronous input/output for the AIX client if you are running AIX 5L V5.2 or V5.3. Start SMIT and select Devices → Asynchronous I/O → Asynchronous I/O (Legacy) → Change/Show Characteristics of Asynchronous I/O. The screen in Example 5-32 should appear.
Example 5-32 Enable asynchronous I/O
                 Change / Show Characteristics of Asynchronous I/O

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                                        [Entry Fields]
  MINIMUM number of servers                            [1]                    #
  MAXIMUM number of servers per cpu                    [10]                   #
  Maximum number of REQUESTS                           [4096]                 #
  Server PRIORITY                                      [39]                   #
  STATE to be configured at system restart              available             +
  State of fast path                                    enable                +
3. Exit SMIT.
4. Run cfgmgr to apply the changes.
5. Copy the install package from an MDS (/usr/tank/packages/sfs.client.aix5x) to a local directory of the AIX client. Make sure to select the appropriate package for your installed version of AIX; the client packages are called sfs.client.aix51, sfs.client.aix52, and sfs.client.aix53 for AIX 5L V5.1, V5.2, and V5.3, respectively. You can use secure ftp from an MDS, or start the SAN File System console (select Download Client Software) and follow the prompts. We copied the install package to the directory /tmp/SANFS_Client.
6. Use the AIX installp command or SMIT to install. Run SMIT and select Software Installation and Maintenance → Install and Update Software → Install Software. Complete the parameters as shown in Example 5-33, and the install should complete, as shown in Example 5-34 on page 171.
Example 5-33 Installation directory and file selection
                 Install and Update from ALL Available Software

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                                        [Entry Fields]
* INPUT device / directory for software                 .
* SOFTWARE to install                                  [+ 2.2.2.82 SAN File >] +
  PREVIEW only? (install operation will NOT occur)      no                     +
  COMMIT software updates?                              yes                    +
  SAVE replaced files?                                  no                     +
  AUTOMATICALLY install requisite software?             yes                    +
  EXTEND file systems if space needed?                  yes                    +
  OVERWRITE same or newer versions?                     no                     +
  VERIFY install and check file sizes?                  no                     +
  DETAILED output?                                      no                     +
  Process multiple volumes?                             yes                    +
  ACCEPT new license agreements?                        no                     +
  Preview new LICENSE agreements?                       no                     +
Example 5-34 Installation output
                                 COMMAND STATUS

Command: OK            stdout: yes           stderr: no
Before command completion, additional instructions may appear below.

[TOP]
I:sfs.client.aix52 2.2.2.82
+-----------------------------------------------------------------------------+
                    Pre-installation Verification...
+-----------------------------------------------------------------------------+
Verifying selections...done
Verifying requisites...done
Results...

SUCCESSES
---------
  Filesets listed in this section passed pre-installation verification
  and will be installed.

  Selected Filesets
  -----------------
  sfs.client.aix5.2 2.2.2.82            # SAN File System client for A...

  << End of Success Section >>

FILESET STATISTICS
------------------
    1  Selected to be installed, of which:
        1  Passed pre-installation verification
  ----
    1  Total to be installed

+-----------------------------------------------------------------------------+
                         Installing Software...
+-----------------------------------------------------------------------------+

installp: APPLYING software for:
        sfs.client.aix52 2.2.2.82
. . . . . << Copyright notice for sfs.client.aix51-opt >> . . . . . . .
 Licensed Materials - Property of IBM
 5765-FS1 5765-FS2
 (C) Copyright International Business Machines Corp. 2003-2004
 All rights reserved.
 US Government Users Restricted Rights - Use, duplication or disclosure
 restricted by GSA ADP Schedule Contract with IBM Corp.
. . . . . << End of copyright notice for sfs.client.aix52 >>. . . .

Run /usr/tank/client/bin/setupstclient -prompt to configure and start the SAN File System client.

Finished processing all filesets.  (Total time:  10 secs).

+-----------------------------------------------------------------------------+
                                Summaries:
+-----------------------------------------------------------------------------+
Installation Summary
--------------------
Name                        Level           Part        Event       Result
-------------------------------------------------------------------------------
sfs.client.aix52            2.2.2.82        USR         APPLY       SUCCESS
sfs.client.aix52            2.2.2.82        ROOT        APPLY       SUCCESS
The client setup utility:
- Prompts you for information necessary to set up the SAN File System client.
- (Optional) Saves the configuration you specify to the configuration file /usr/tank/client/config/stclient.conf.
- (Optional) Runs the setup process, which:
  - Loads the SAN File System driver as a kernel extension (using the stfsdriver command).
  - Creates the SAN File System client (using the stfsclient command).
  - Mounts the SAN File System (using the stfsmount command).
- Path to the kernel extension: The default is /usr/tank/client/bin/stfs. The client setup utility loads the SAN File System driver as a kernel extension, named kernextname, and this requires the path to be defined as described above.
Device candidate list: List of disks to use as data volumes. The default is
pat=/dev/rhdisk*. Since we are using SVC, we change this to pat=/dev/rvpath*.

Client name: Enter a logical name for your client. We recommend using the host name;
in our case, AIXRome.
Metadata server connection IP address: We entered the IP address 9.82.22.172, our
master MDS. Whenever it starts, the SAN File System client must connect to one of the
MDSs in the cluster to initiate communication. We recommend setting this parameter to
the master MDS initially.

Metadata server port number: The default is 1700.

Mount point: The default is /mnt/sanfs. The client setup utility mounts the SAN File
System global namespace at the specified mount point (directory). Once mounted, the
directory tree for the global namespace file system image appears at that mount point.
This directory must exist. We already created the /mnt/sfs directory, so we will use it.

Mount file system read-only: The default is No. If you mount the SAN File System as
read-only, you will not be able to add or edit any files in the global namespace.
Disable automatic restart: By default, the SAN File System client restarts when the system starts. Enter Yes to enable automatic restart of the SAN File System client at startup and No to disable automatic restart.
SAN File System kernel extension major number: The default is 99. Specify a major
number that will be used to create the driver instance.

NLS converter: The default is ISO-8859-1. The NLS converter tells the MDS how to
convert strings from the SAN File System client into Unicode.

Transport protocol: The default is tcp.

Method of handling critical errors: The default is log.
Important: The installation option Method of handling critical errors determines how
the client behaves if it gets critical errors when trying to access the SAN File System
global namespace. It has three possible values:

log (default): SAN File System client errors are logged to the system log of the client
machine.

freezefs: The client does not attempt to write any more data to the SAN File System,
and halts communication with the MDS cluster.

systemhalt: The client system is forced to shut down abruptly.
Attention: If the mount point directory does not already exist, you will get the error
message Directory does not exist and the installation will stop. Create the directory
and restart the installation.
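The pre-check described in the Attention box can be scripted. The following is a
minimal sketch of ours (not from the product documentation): it makes sure the mount
point directory exists before you run setupstclient. The chapter's example uses
/mnt/sfs; here the default falls back to a temporary path so the sketch runs anywhere.

```shell
# Ensure the SAN File System mount point exists before running
# setupstclient, avoiding the "Directory does not exist" error.
# On a real client, set MOUNT_POINT=/mnt/sfs (or your chosen path);
# the temporary default below is only so this sketch is harmless
# to run on any machine.
MOUNT_POINT="${MOUNT_POINT:-${TMPDIR:-/tmp}/sfs}"

if [ ! -d "$MOUNT_POINT" ]; then
    mkdir -p "$MOUNT_POINT"
fi

echo "mount point ready: $MOUNT_POINT"
```

Running this before the client setup utility removes one common cause of a stopped
installation.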
Display verbose output: The default is No. By default, the client setup utility runs
quietly, suppressing informational messages generated by the commands. You can choose
to display these messages by specifying verbose.

Save configuration: The default is Yes. This creates the configuration file with the
name /usr/tank/client/config/stclient.conf. See Example 5-40 on page 175 for the sample
contents of the configuration file.

Run the SAN File System client setup utility: The default setting is Yes.
Now the setup utility runs and completes with the messages shown in Example 5-35.
Example 5-35 Client setup utility complete
HSTDR0029I The kernel extension was successfully loaded from file
/usr/tank/client/bin/stfs kernel module ID (kmid) = 5f944bc.
HSTDR0030I File system driver is initialized and ready to handle file-system type 20.
SAN File System client setup complete.
The SAN File System should now be mounted; this can be verified using the mount command
on the AIX system. The output should be similar to Example 5-36. Note the mounted file
system /mnt/sfs, of type sanfs.
Example 5-36 Mount verification
# mount
  node   mounted             mounted over         vfs    date         options
-------- ------------------- -------------------- ------ ------------ ---------------
         /dev/hd4            /                    jfs    Jun 03 10:54 rw,log=/dev/hd8
         /dev/hd2            /usr                 jfs    Jun 03 10:54 rw,log=/dev/hd8
         /dev/hd9var         /var                 jfs    Jun 03 10:54 rw,log=/dev/hd8
         /dev/hd3            /tmp                 jfs    Jun 03 10:54 rw,log=/dev/hd8
         /dev/hd1            /home                jfs    Jun 03 10:55 rw,log=/dev/hd8
         /proc               /proc                procfs Jun 03 10:55 rw
         /dev/hd10opt        /opt                 jfs    Jun 03 10:55 rw,log=/dev/hd8
         /dev/lv00           /usr/sys/inst.images jfs    Jun 03 10:55 rw,log=/dev/hd8
         SANFS               /mnt/sfs             sanfs  Jun 03 17:05 rw
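The check above can also be scripted. Here is a hedged sketch of ours (not a product
tool): it parses mount(1)-style output to decide whether a given mount point carries a
file system of vfs type sanfs. A captured transcript, abbreviated from Example 5-36, is
used so the logic stands alone; on a real client you would feed it the live output of
the mount command.

```shell
# Abbreviated copy of the mount output from Example 5-36:
# field 2 is the mount point, field 3 is the vfs type.
mount_output='/dev/hd4 / jfs Jun 03 10:54 rw,log=/dev/hd8
/proc /proc procfs Jun 03 10:55 rw
SANFS /mnt/sfs sanfs Jun 03 17:05 rw'

# Exit status 0 if the given mount point is mounted with vfs type sanfs.
is_sanfs_mounted() {
    printf '%s\n' "$mount_output" |
        awk -v mp="$1" '$2 == mp && $3 == "sanfs" { found = 1 } END { exit !found }'
}

is_sanfs_mounted /mnt/sfs && echo "/mnt/sfs is a SAN File System mount"
```

A check like this is handy in startup scripts that must not proceed until the global
namespace is available.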
3. Now issue the rmstclient command. This disconnects the client from the SAN File
System, as shown in Example 5-38. It also unmounts the SAN File System if it was not
already unmounted.
Example 5-38 Remove AIX SAN File System client
# /usr/tank/client/bin/rmstclient -noprompt
Using configuration file: /usr/tank/client/config/stclient.conf
HSTDR0033I SAN File System driver shut down successfully.
HSTDR0035I The kernel extension 62777f8 was unloaded successfully.
SAN File System client removal complete.
Example 5-39 Re-connecting the AIX SAN File System client
# ./setupstclient -noprompt
Using configuration file: /usr/tank/client/config/stclient.conf
HSTDR0029I The kernel extension was successfully loaded from file
/usr/tank/client/bin/stfs kernel module ID (kmid) = 62777f8.
HSTDR0030I File system driver is initialized and ready to handle file-system type 20.
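The rmstclient/setupstclient pair shown above can be wrapped in a small restart helper.
This is a sketch of ours, not a shipped utility: the DRY_RUN guard (our addition) only
echoes the commands by default, so the sketch can be exercised off a real client; set
DRY_RUN=no on an actual AIX SAN File System client.

```shell
# Restart the SAN File System client by removing it and re-running
# setup with the saved configuration (-noprompt reuses stclient.conf).
BIN=/usr/tank/client/bin
DRY_RUN="${DRY_RUN:-yes}"   # set to "no" on a real client

run() {
    if [ "$DRY_RUN" = yes ]; then
        echo "+ $*"         # dry run: show the command only
    else
        "$@"
    fi
}

restart_sfs_client() {
    run "$BIN/rmstclient" -noprompt &&
    run "$BIN/setupstclient" -noprompt
}

restart_sfs_client
```

Because -noprompt reuses /usr/tank/client/config/stclient.conf, the client comes back
with the same name, mount point, and MDS address it had before.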
Make sure to specify the actual installed SAN File System package for your system.
Example 5-40 Sample AIX client configuration file stclient.conf
kernextname=/usr/tank/client/bin/stfs
#
# Device candidate list (devices)
# ===============================
# The SAN File System client determines which disks to use as
# SAN File System user data volumes by searching a list of disks,
# called device candidates. The device candidate list consists of
# those devices that have device-special files in the directory
# you specify.
#
# Device candidate list [pat=/dev/rhdisk*]:
devices=pat=/dev/rvpath*
#
# Client name (clientname)
# ========================
# You can set the name of this SAN File System client. The name
# can be any string, but must be unique. By default, the client
# setup utility uses the host name (output of the hostname
# command).
#
# Client name [Rome]:
clientname=AIXRome
#
# Metadata server IP address (server_ip)
# ======================================
# During setup, the SAN File System client must connect to one of
# the Metadata servers in the cluster. After the client
# establishes a connection to the server, the server notifies the
# client of any other servers in the cluster. Specify the IP
# address for any Metadata server in the cluster to establish the
# connection.
#
# Metadata server connection IP address [-]:
server_ip=9.82.22.172
#
# Metadata server port number (server_port)
# =========================================
# The SAN File System client must connect to a specific port on
# the Metadata server. In most cases the Metadata server uses
# port 1700. Accept this default unless you know the Metadata
# server was configured to listen on a different port.
#
# Metadata server port number [1700]:
server_port=1700
#
# SAN File System mount point (mount_point)
# =========================================
# The client setup utility mounts the SAN File System to a
# specified mount point (directory) and creates the file system
# image. If the specified mount point does not exist it will be
# created. Once mounted, the directory tree for the file system
# image appears at that mount point.
#
# Mount point [/mnt/sanfs]:
mount_point=/mnt/sfs
#
# Read-only file system (readonly)
# ================================
# If you mount the SAN File System as read-only, data and
# metadata in the file system can be viewed, but not modified.
# Accessing a file system object does not affect its access time
# attribute.
#
# Mount file system read-only [No]:
#
# Disable automatic restart (autorestart)
# =======================================
# By default, the SAN File System client restarts when the system
# starts.
# - Enter Yes to enable automatic restart of the SAN File System
#   client at startup.
# - Enter No to disable automatic restart of the SAN File System
#   client at startup.
#
# Enable automatic restart at startup [Yes]:
autorestart=Yes
#
# SAN File System kernel extension major number (majornumber)
# ===========================================================
# The SAN File System driver requires a major number while
# creating a file system driver instance. Please specify a major
# number.
#
# SAN File System kernel extension major number [99]:
majornumber=99
#
# NLS converter (convertertype)
# =============================
# The NLS converter tells the Metadata server how to convert
# strings from the SAN File System client into Unicode.
#
# NLS converter [ISO-8859-1]:
convertertype=ISO-8859-1
#
# Transport protocol (nettype)
# ============================
# The transport protocol determines how the SAN File System
# client connects to the Metadata server. Specify either tcp or
# udp.
#
# Transport protocol [tcp]:
nettype=tcp
#
# Error handling (stfserror)
# ==========================
# All SAN File System client errors are logged to the system log
# of the client machine. There are some error conditions that may
# require additional measures, such as when an application exits
# and a subsequent hardware failure prevents data from being
# committed to disk. For these types of error conditions, you can
# select the freezefs or systemhalt options. The freezefs option
# prevents the SAN File System from writing additional data to
# disk and will halt communication with the Metadata servers. The
# systemhalt option forces the client system to abruptly shut
# down. Choose either log, freezefs, or systemhalt.
#
# Display verbose messages (verbose)
# ==================================
# By default, the client setup utility runs quietly, suppressing
# informational messages generated by the commands. You can
# choose to display these messages by specifying verbose.
#
# Display verbose output [No]:
verbose=Yes
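Because stclient.conf is a plain key=value file with '#' comments, individual settings
are easy to read from scripts. The following is a hedged sketch of ours (get_conf is a
hypothetical helper, not a product command); it writes a two-line sample locally so it
is self-contained, but on a real client you would point CONF at
/usr/tank/client/config/stclient.conf.

```shell
# Extract values from an stclient.conf-style file
# (key=value lines; lines starting with '#' are comments).
CONF="${CONF:-./stclient.conf.sample}"

# Self-contained sample with the values used in this chapter.
cat > "$CONF" <<'EOF'
# sample configuration
server_ip=9.82.22.172
server_port=1700
EOF

# get_conf KEY: print the first value assigned to KEY.
get_conf() {
    sed -n "s/^$1=//p" "$CONF" | head -n 1
}

echo "MDS address: $(get_conf server_ip):$(get_conf server_port)"
```

A helper like this is useful, for example, to ping the configured MDS address from a
monitoring script without hard-coding it twice.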
Installation prerequisites
The SAN File System client for Linux for IBM eServer zSeries supports the following
configurations:
- The 31-bit SLES8, Service Pack 3 distribution under z/VM 5.1 or later, or directly
  within an LPAR, on any generally available zSeries model that supports the
  co-required OS and software stack. Consult your system documentation or system
  administrator for details of setting up an LPAR or z/VM environment with SLES8 Linux.
- SCSI SAN with zSeries using the zFCP driver (fixed block SCSI), with IBM ESS, DS6000,
  and DS8000 storage LUNs. Correct configuration of the disks (LUNs) is very important.
  For detailed information about this task, see the Redpaper Getting Started with
  zSeries Fibre Channel Protocol, REDP-0205.
- The SAN File System cluster must be up and running.
2. Configure the zSeries SAN File System client by running the setupstclient command from the client installation directory, as shown in Example 5-42. Specify appropriate values for your environment, in particular, the MDS TCP/IP address.
Example 5-42 Configure zSeries client sanfs01:~ # cd /usr/tank/client/bin sanfs01:/usr/tank/client/bin: # setupstclient Using configuration file: /usr/tank/client/config/stclient.conf IBM SAN File System client setup utility
The IBM SAN File System client setup utility performs the following functions:
1. Prompts you for information necessary to set up the SAN File System client.
2. (Optional) Saves the configuration you specify to the file:
   /usr/tank/client/config/stclient.conf
3. (Optional) Runs the setup process:
   a. Loads the SAN File System driver as a kernel module (using the insmod(1) command).
   b. Creates the SAN File System client (using the stfsclient(1) command).
   c. Mounts the SAN File System (using the stfsmount(1) command).

Because the utility does not make changes until the configuration file is saved and
the setup process begins, you can press Ctrl-c to exit the utility without making
changes at any time prior to that point.

To use the default value that appears in [square brackets], press Enter. A dash [-]
indicates no default is available.

Device candidate list (devices)
===============================
The SAN File System client determines which disks to use as SAN
File System user data volumes by searching a list of disks called
device candidates. The device candidate list consists of those
devices that have device-special files in the directory you
specify.

Device candidate list [pat=/dev/sd*[a-z]]:

Client name (clientname)
========================
You can set the name of this SAN File System client. The name can
be any string, but must be unique. By default, the client setup
utility uses the host name (output of the hostname command).

Client name [sanfs01]:

Metadata server IP address (server_ip)
======================================
During setup, the SAN File System client must connect to one of
the metadata servers in the cluster. After the client establishes
a connection to the server, the server notifies the client of any
other servers in the cluster. Specify the IP address for any
metadata server in the cluster to establish the connection.

Metadata server connection IP address [192.168.71.75]:

Metadata server port number (server_port)
=========================================
The SAN File System client must connect to the client server port
on the metadata server. In most cases, the metadata server uses
port 1700. Accept this default unless you know that the metadata
server was configured to listen on a different port. The sfscli
command statserver -netconfig will print the client server port.

Metadata server port number [1700]:

SAN File System mount point (mount_point)
=========================================
The client setup utility mounts the SAN File System to a
specified mount point (directory) and creates the file system
image. If the specified mount point does not exist, it is
created. Once mounted, the directory tree for the file system
image appears at that mount point.

Mount point [/mnt/sanfs]:

Read-only file system (readonly)
================================
If you mount the SAN File System as read-only, data and metadata
in the file system can be viewed, but not modified. Accessing a
file system object does not affect its access time attribute.

Mount file system read-only [No]:

Disable automatic restart (autorestart)
=======================================
By default, the SAN File System client restarts when the system
starts.
- Enter Yes to enable automatic restart of the SAN File System
  client at startup.
- Enter No to disable automatic restart of the SAN File System
  client at startup.

Enable automatic restart at startup [No]:

NLS converter (convertertype)
=============================
The NLS converter tells the metadata server how to convert
strings from the SAN File System client into Unicode.

NLS converter [ISO-8859-1]:

Transport protocol (nettype)
============================
The transport protocol determines how the SAN File System client
connects to the metadata server. Specify either tcp or udp.

Transport protocol [tcp]:

Record mount in /etc/mtab (etc_mtab)
====================================
By default, if the file system mount succeeds, the client setup
utility adds an entry for the file system image to /etc/mtab. You
can choose not to record the mount in this file.

Record the mount [Yes]:

Show number of free blocks (always_empty)
=========================================
By default, the number of blocks reported as free blocks by
statfs() is actually the number of blocks in partitions that are
not assigned to a fileset. Some programs might mistakenly report
that there is no free space left in partitions assigned to the
fileset, when there is actually free space available. This option
forces statfs() to report the number of free blocks as being one
less than the number of blocks in the file system.

Always indicate blocks free [No]:

Display verbose messages (verbose)
==================================
By default, the client setup utility runs quietly, suppressing
informational messages generated by the commands. You can choose
to display these messages by specifying verbose.

Display verbose output [No]:

Run SAN File System client setup
================================
The configuration utility has not yet made changes to your system
configuration.
- Enter No to quit without configuring the SAN File System client
  on this system.
- Enter Yes to put these changes into effect and start the SAN
  File System client.

Run the SAN File System client setup utility [Yes]:
HSTCL0068I Establishing 256 candidate SAN File System user data disk devices.
SAN File System client setup complete.
3. The SAN File System should now be mounted and this can be verified using the df -k command at the zSeries Linux prompt. This should be similar to Example 5-43.
Example 5-43 SAN File System mount verification
sanfs01:/usr/tank/client/config # df -k
Filesystem      1K-blocks      Used  Available Use% Mounted on
/dev/dasda1       2365444   1075176    1170108  48% /
/dev/dasdb1       2365444   1878164     367120  84% /usr
/dev/dasdc1       2365444     68736    2176548   4% /tmp
shmfs              257372         0     257372   0% /dev/shm
sanfs01        8589934588         4 8589934584   1% /mnt/sanfs
Example 5-46 Configuration file for zSeries client
sanfs01:/usr/tank/client/config # cat stclient.conf
#
# SAN File System client configuration for Linux
#
# Device candidate list (devices)
# ===============================
# The SAN File System client determines which disks to use as SAN
# File System user data volumes by searching a list of disks
# called device candidates. The device candidate list consists of
# those devices that have device-special files in the directory
# you specify.
#
# Device candidate list [pat=/dev/sd*[a-z]]:
devices=pat=/dev/sd*[a-z]
#
# Client name (clientname)
# ========================
# You can set the name of this SAN File System client. The name
# can be any string, but must be unique. By default, the client
# setup utility uses the host name (output of the hostname
# command).
#
# Client name [sanfs01]:
clientname=sanfs01
#
# Metadata server IP address (server_ip)
# ======================================
# During setup, the SAN File System client must connect to one of
# the metadata servers in the cluster. After the client
# establishes a connection to the server, the server notifies the
# client of any other servers in the cluster. Specify the IP
# address for any metadata server in the cluster to establish the
# connection.
#
# Metadata server connection IP address [-]:
server_ip=192.168.71.75
#
# Metadata server port number (server_port)
# =========================================
# The SAN File System client must connect to the client server
# port on the metadata server. In most cases, the metadata server
# uses port 1700. Accept this default unless you know that the
# metadata server was configured to listen on a different port.
# The sfscli command statserver -netconfig will print the client
# server port.
#
# Metadata server port number [1700]:
server_port=1700
#
# SAN File System mount point (mount_point)
# =========================================
# The client setup utility mounts the SAN File System to a
# specified mount point (directory) and creates the file system
# image. If the specified mount point does not exist, it is
# created. Once mounted, the directory tree for the file system
# image appears at that mount point.
#
# Mount point [/mnt/sanfs]:
mount_point=/mnt/sanfs
#
# Read-only file system (readonly)
# ================================
# If you mount the SAN File System as read-only, data and
# metadata in the file system can be viewed, but not modified.
# Accessing a file system object does not affect its access time
# attribute.
#
# Mount file system read-only [No]:
readonly=No
#
# Disable automatic restart (autorestart)
# =======================================
# By default, the SAN File System client restarts when the system
# starts.
# - Enter Yes to enable automatic restart of the SAN File System
#   client at startup.
# - Enter No to disable automatic restart of the SAN File System
#   client at startup.
#
# Enable automatic restart at startup [Yes]:
autorestart=No
#
# NLS converter (convertertype)
# =============================
# The NLS converter tells the metadata server how to convert
# strings from the SAN File System client into Unicode.
#
# NLS converter [ISO-8859-1]:
convertertype=ISO-8859-1
#
# Transport protocol (nettype)
# ============================
# The transport protocol determines how the SAN File System
# client connects to the metadata server. Specify either tcp or
# udp.
#
# Transport protocol [tcp]:
nettype=tcp
#
# Record mount in /etc/mtab (etc_mtab)
# ====================================
# By default, if the file system mount succeeds, the client setup
# utility adds an entry for the file system image to /etc/mtab.
# You can choose not to record the mount in this file.
#
# Record the mount [Yes]:
etc_mtab=Yes
#
# Show number of free blocks (always_empty)
# =========================================
# By default, the number of blocks reported as free blocks by
# statfs() is actually the number of blocks in partitions that
# are not assigned to a fileset. Some programs might mistakenly
# report that there is no free space left in partitions assigned
# to the fileset, when there is actually free space available.
# This option forces statfs() to report the number of free blocks
# as being one less than the number of blocks in the file system.
#
# Always indicate blocks free [No]:
always_empty=No
#
# Display verbose messages (verbose)
# ==================================
# By default, the client setup utility runs quietly, suppressing
# informational messages generated by the commands. You can
# choose to display these messages by specifying verbose.
#
# Display verbose output [No]:
verbose=No
You can add additional disks to the list after installation using the stfsdisk command on AIX (in /usr/tank/client/bin), or the sanfs_ctl disk command on Solaris. An example of the stfsdisk command is shown in Example 5-47. For more details on the parameters for this command, see IBM TotalStorage SAN File System Installation and Configuration Guide, GA27-4316.
Example 5-47 Show AIX device candidate list
# cd /usr/tank/client/bin
# ./stfsdisk -query -kmname /usr/tank/client/bin/stfs
INACTIVE /dev/rvpath0
ACTIVE   /dev/rvpath1
INACTIVE /dev/rvpath2
ACTIVE   /dev/rvpath3
INACTIVE /dev/rvpath7
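The ACTIVE/INACTIVE listing lends itself to simple post-processing. Below is a hedged
sketch of ours (active_count is a hypothetical helper, not a product command) that
counts the active data devices; the transcript is copied from Example 5-47, and on a
real client you would pipe the stfsdisk command itself into the same awk filter.

```shell
# Captured stfsdisk -query output from Example 5-47:
# field 1 is the state, field 2 is the device-special file.
stfsdisk_output='INACTIVE /dev/rvpath0
ACTIVE /dev/rvpath1
INACTIVE /dev/rvpath2
ACTIVE /dev/rvpath3
INACTIVE /dev/rvpath7'

# Print the number of devices currently in the ACTIVE state.
active_count() {
    printf '%s\n' "$stfsdisk_output" |
        awk '$1 == "ACTIVE" { n++ } END { print n + 0 }'
}

echo "$(active_count) active SAN File System data device(s)"
```

A count like this gives a quick sanity check after adding disks to the device
candidate list.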
then checks that the user ID is a member of one of the four required groups (Administrator, Operator, Backup, or Monitor). Finally, based on the group of which the user ID is a member, the method determines whether this group is authorized to perform the requested function in order to decide access.
5.6.1 Prerequisites
You must provide a suitable Intel-based server, preloaded with the designated software,
before installing the Master Console software package that is shipped with SAN File
System. We listed the hardware and software prerequisites for the Master Console in
2.5.2, Master Console hardware and software on page 38, and the contents of the Master
Console software package in 2.5.6, Master Console on page 45.

Important: Using a Master Console is optional; it is no longer required for a SAN File
System configuration.

If you have an existing Master Console and want to install the SAN File System Master
Console package, we recommend that you start with a clean system, that is, disable the
existing mirroring and reinstall the operating system. We discovered some problems when
trying to upgrade an existing Master Console installation. If you are sharing a Master
Console with a SAN Volume Controller, we recommend maintaining it at the same level as
currently used for the SVC.
Do the following steps for the installation:
1. Install the selected Windows version, and configure the TCP/IP addresses and other
parameters for your environment. Refer to the IBM TotalStorage Master Console for SAN
File System and SAN Volume Controller Installation and User's Guide Version 3 Release
1, GC30-4090, for more details.
2. Install the Service Pack and the Windows Update (for Windows 2000, if required). You
can download the Service Pack and Windows Update from:
http://v4.windowsupdate.microsoft.com
To find the Windows Update, select Fixes for Windows 2000 Professional SP4 and
Recommended Updates, and search for 818043.
3. Install your antivirus package, according to the vendor's instructions.
4. Install the Java Runtime Environment. You can obtain JRE 1.4.2 from
http://www.sun.com: select Downloads, then Java & Technologies, then Java 2 Platform
Standard Edition 1.4, and then Download J2SE JRE. We used the current package at the
time of writing, which is V1.4.2. The Master Console installation wizard is written in
Java, so you need this runtime environment.
5. From the directory where you downloaded the Java package, run the J2RE-1_4_2.exe
file. The installation wizard initiates.
6. Accept the License Agreement and click Next.
7. On the Setup Type window, select Typical and click Next, as shown in Figure 5-28.
8. The installation will proceed. It may take some time, depending on your processor speed. 9. To verify that you have installed J2RE, check for the Java Web Start icon on your Desktop. It should be similar to Figure 5-29 on page 189.
Figure 5-30 shows the Services applet after installing the SNMP service.
10. Double-click SNMP Service.
11. On the General tab, check that the Startup Type is set to Automatic, as shown in
Figure 5-31 on page 191. Start the service if it is not already started (right-click
SNMP Service and select Start).
12. On the Security tab, ensure there is a public community name with a minimum of Read
rights, as shown in Figure 5-32 on page 192. Click OK.
13. Verify that the SNMP Trap Service startup type is set to Manual. To do this,
double-click its entry in the Services applet and check that Manual is selected in the
Startup type drop-down. Figure 5-32 shows the Services applet with the correct startup
types for the two SNMP services.
14. If you are installing on Windows 2003, select Accept SNMP packets from any host;
this is the default on Windows 2000.
Important: The Master Console has to be rebooted at certain stages during the
installation. If you are prompted to reboot the system, you should do so, EXCEPT WHERE
EXPLICITLY TOLD NOT TO IN THESE INSTRUCTIONS. After each reboot, the Master Console
installation wizard automatically continues the installation process from the point
where it was interrupted by the required reboot. Leave any CD in the drive across
reboots; you will be prompted when it is necessary to insert another CD.

The following software components are contained in the CD-ROM package and will be
installed by the wizard:
- Adobe Acrobat Reader
- PuTTY
- DB2
- SAN Volume Controller console
- DS4000 Storage Manager Client (formerly FAStT Storage Manager Client)
- Tivoli Storage Area Network Manager
- IBM Director
- IBM VPN Connection Manager
4. Select the language to be used for the installation wizard and click OK.
5. The Installation wizard window appears, as shown in Figure 5-33. Click Next to start the installation.
6. Accept the License Agreement on the next window and click Next. 7. Verify the privileges to be assigned to the account used for the installation by clicking Yes, as shown in Figure 5-34.
8. You will be prompted to log out, then re-login, and restart the installation. Do this now, logging in as the same user as previously (Administrator in our case). Restart the Master Console installation by rerunning setup.exe. 9. You may be prompted again to set the language and accept the license agreement.
10.The actual software installation begins now with the installation of Adobe Acrobat Reader.
2. When the installation is complete, an information window appears, describing the actual installation wizard, as shown in Figure 5-36. Click Next.
3. From this window, you can access the documentation by right-clicking the left side of the window. Click Next to continue.
If you do not select to install the DS4000 Storage Manager Client, a pop-up warning
window appears. If you do not have a DS4000 disk system, you can ignore this warning by
clicking OK.
In Figure 5-38, none of the products have been installed; therefore, all will be installed.
From this window, click Next to begin installing the first product on the list, that is, PuTTY.
Installing PuTTY
1. The PuTTY installation window appears. Click Next to continue.
2. Confirm that you want to install PuTTY by clicking Yes.
3. The PuTTY setup wizard launches. Click Next to continue.
4. For the next windows (destination directory, Start Menu folder, and Additional
Tasks), we recommend accepting the defaults. Click Next to advance.
5. Confirm the installation settings, and click Install.
6. The PuTTY Setup wizard completes. You can view the Readme.txt file or click Finish
to complete the installation.

Note: If you are using the SAN Volume Controller for your metadata storage, you should
create a public and private key pair using PuTTY. You will need these keys when you
install the SAN Volume Controller Console. Follow the instructions in the IBM
TotalStorage Master Console for SAN File System and SAN Volume Controller Installation
and User's Guide Version 3 Release 1, GC30-4090. This process is not invoked by the
Master Console installation wizard, so it must be done separately at this point.

You have now installed PuTTY. Click Next to continue the installation wizard, as shown
in Figure 5-39 on page 199.
Installing DB2
The wizard now continues with installing DB2. 1. You are prompted to begin installing DB2 UDB Enterprise Edition. Click Next to begin. 2. You will be prompted to put in the next CD. Click OK when this is done.
3. The DB2 setup wizard starts, as shown in Figure 5-40. Click Next.
4. Accept the license agreement and click Next. 5. On the Select installation type window (Figure 5-41 on page 201), accept the defaults. Click Next.
6. Accept the default to install DB2 Enterprise Server Edition on this computer, as shown in Figure 5-42. Click Next.
7. Accept the default directory, or specify a destination directory of your choice.
Click Next.
8. In the next window, you must specify a user name and password for DB2. The default
user name is db2admin, as shown in Figure 5-43 on page 203. You need to enter a
password for this user name.
Make sure the Use the same values for the remaining DB2 Username and Password settings
box is checked. If you do not check it, you will subsequently be prompted to enter a
user name and password at several points. Click Next when done. If you are prompted to
create the user, click Yes.
On the Set up administration contact list window, make sure Local - create a contact
list on this system is checked, as shown in Figure 5-44. Click Next.
Ignore the SMTP warning if it appears. Click OK. 9. On the DB2 instances window, select DB2, as shown in Figure 5-45 on page 205. Click Next.
10.On the next window, accept the default Do not prepare the DB2 tools catalog on this computer, as shown in Figure 5-46. Click Next.
11.Enter an appropriate administration contact name and e-mail address in your organization, as shown in Figure 5-47 on page 207. Click Next.
12.Confirm the installation settings and click Install to start the installation, as shown in Figure 5-48.
13.The installation proceeds to copy the files and configure the database instance. 14.Click Finish to complete the installation in the window shown in Figure 5-49 on page 209.
15.The installation will take some time. When the IBM DB2 Universal Database First Steps window appears, click Exit First Steps.
16.On the Master Console installation wizard, click Next to verify the DB2 installation, as shown in Figure 5-50.
17.You will be prompted to continue with the IBM TotalStorage SAN Volume Controller Console software installation. Click Next.
8. Select DB2 as the data repository and click Next, as shown in Figure 5-52. Note that DB2 is not the default selection on this window.
9. On the Single/Multiple User ID/Password Choice window, you can decide to use the DB2
Administrator user name and password that you specified during the DB2 installation in
step 8 on page 202 for all IDs and passwords on this window. We recommend using the
same ID and password for all options, as shown in Figure 5-53. Click Next.
10. Enter the DB2 ID and password that you defined in step 8 on page 202, as shown in Figure 5-54.
11. On the database name window, click Next to accept the default itsanmdb.
12. On the Tivoli NetView installation drive window, click Next to accept the default drive.
13. Click Next to confirm the installation. The installation proceeds; it may take some time.
14. Click Next to complete the installation of Tivoli SAN Manager.
15. You are prompted to reboot the computer; click Finish to do this.
16. After rebooting, the SAN Manager installation is validated. Then the wizard proceeds to install the Tivoli SAN Manager Agent.
2. Locate the key HKEY_LOCAL_MACHINE\SOFTWARE\Tivoli\NetView\CurrentVersion.
3. Change the value of trapdSharePort162 to 0, as shown in Figure 5-55.
4. Add a value (select Edit New DWord Value) and name it trapdTrapReceptionPort. Double-click the new value, set it to an available port number, such as 9950, and click the Decimal radio button, as shown in Figure 5-56 on page 215.
Remember the port number that you set here. You will refer to that number when you modify the IBM Director configuration later.
5. Exit the Windows registry.
6. Open a Command Prompt window. Change to the directory c:\usr\ov\bin. Remove the NetView service with the command nvservice -remove.
7. Reinstall the NetView service (which removes the dependency on the SNMP Trap Service) with the command nvservice -install -username .\NetView -password password, entering a password for the local NetView account. Close the Command Prompt window when done.
8. Return to the wizard to install the Tivoli SAN Manager Agent.
7. On the Manager name and port number window, enter localhost for the Tivoli Manager name (because both the Manager and the Agent are installed on the Master Console). Accept the default Port Number and click Next, as shown in Figure 5-57.
8. Accept the default base port number and click Next.
9. On the Host Authentication Password window, enter the password of the Host Authentication ID that you specified when you installed Tivoli SAN Manager. The default was to use the DB2 ID, db2admin.
10. Click Next to verify the installation settings and begin copying files.
11. Click Finish to complete the installation of the Tivoli SAN Manager Agent.
12. On the Master Console installation wizard, click Next to verify the installation of the Tivoli SAN Manager Agent.
13. The wizard then installs IBM Director.
6. On the IBM Director service account information window, fill in the following fields:
Domain: Enter the host name of the Master Console.
User name: Enter a Windows user account with administrative privileges, for example, Administrator.
Password: Enter and confirm the password for the specified Windows user account.
The window should look similar to Figure 5-59. Click Next.
7. On the Encryption Settings window, accept the defaults and click Next.
8. On the Software distribution settings window, accept the defaults and click Next.
9. Click Install to begin the installation. This may take some time.
10. On the Network driver configuration pop-up, select the first port and click Enable driver, as shown in Figure 5-60. Click OK.
11. On the IBM Director database configuration window (Figure 5-61), make sure that Microsoft Jet 4.0 is selected. Do not select DB2 here. Click Next.
12. On the next window, accept the defaults for ODBC data source and Database name; these cannot be changed. Click Next.
13. Click Finish to complete the installation.
14. When prompted to reboot the system, click No. DO NOT REBOOT until you have completed the next task.
15. From the Master Console installation wizard, click Next. The wizard validates the installation of IBM Director.
by deleting the # sign, and add the host name of the Master Console. For example:
snmp.trap.v1.forward.address.1=KCWC09K
by deleting the # sign, and add the port that you specified for the trapdTrapReceptionPort value in the Windows Registry key, in step 4 on page 214:
snmp.trap.v1.forward.port.1=9950
5. Save and close the file.
6. Click Next on the Master Console installation wizard. The wizard validates the installation of IBM Director.
7. Reboot the Master Console.
2. The preconfiguration tasks execute automatically to configure IBM Director.
3. After discovery is complete, click Next to continue.
(Oct 6, 2004 6:20:47 PM), Command to be executed : regedit /s "C:\Program Files\IBM\MasterConsole\Support Utils\Remote_Support\IPsecInstall.reg"
(Oct 6, 2004 6:20:47 PM), The reg file "C:\Program Files\IBM\MasterConsole\Support Utils\Remote_Support\IPsecInstall.reg" successfully loaded to the Windows registry
(Oct 6, 2004 6:21:02 PM), Successfully installed IBM VPN Client.
(Oct 6, 2004 6:21:06 PM), You need to reboot your system.
(Oct 6, 2004 6:21:06 PM), Master Console for IBM TotalStorage SAN File System successfully installed.
You will be prompted to reboot the system in order to complete the installation; select Yes, restart my computer, and click Finish. The final step is to set up mirroring of the Master Console boot drive for redundancy.
2. Right-click the Basic Unallocated disk (Disk 1 in our example) and select Upgrade to Dynamic Disk, as shown in Figure 5-64.
Note: Both disks must be set to Dynamic for mirroring to work with Windows. If the other disk is not set to Dynamic, do the following:
1. Right-click Disk0 and select Upgrade to Dynamic Disk.
2. Click Yes on the warning. The system will probably reboot.
3. After the reboot, restart Disk Management.
4. Right-click Disk0, and select the Add Mirror option, as shown in Figure 5-66.
5. The Add Mirror window is displayed, as in Figure 5-67. Select the other disk, Disk 1 in our example, and click Add Mirror.
6. This initiates the mirroring process (to synchronize the two drives). Figure 5-68 shows the progress; both Disk 0 and Disk 1 are Regenerating. This process takes about 20-25 minutes.
7. When the mirroring process completes, a warning is displayed, as shown in Figure 5-69. Click OK. It tells you that, to be able to boot from the new mirrored disk, you must add an entry to the boot.ini file.
8. The boot.ini file is, by default, a hidden system file. To display and edit this file, you must modify the folder options:
a. Open My Computer and click C:.
b. From the menu, select Tools -> Folder Options.
c. In the Folder Options window, select the View tab, and select Show hidden files and folders, as in Figure 5-70.
d. Click OK for the changes to take effect.
9. You can now see the hidden files. Edit the boot.ini file on the C: drive. The file should look similar to Example 5-50 on page 227. If necessary, add the second entry for Disk 1 by copying the existing operating system line and changing disk(0) to disk(1). Be very careful when editing this file; an error may prevent your system from booting.
Example 5-50 Boot.ini file
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINNT
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINNT="Microsoft Windows 2000 Advanced Server" /fastdetect
multi(0)disk(1)rdisk(0)partition(1)\WINNT="Microsoft Windows 2000 Advanced Server" /fastdetect
10. Save the file and reboot the Master Console.
11. After the system completes its POST sequence, the system should prompt you to select an operating system to boot from. Because we have mirrored the drive, the options listed show the same name and the same operating system. The screen should look similar to Example 5-51.
Example 5-51 Boot selection screen
Please select the operating system to start:
Microsoft Windows 2000 Advanced Server
Microsoft Windows 2000 Advanced Server
Use the arrow keys to select the operating system of your choice, and press Enter.
Because both disks have been assigned the same name, it is hard to distinguish the primary and the secondary drive.
12. To modify the names of the drives, edit the boot.ini file again to add identifiers to the disks. You can use Primary and Secondary to differentiate between the two disks. Example 5-52 shows the updated boot.ini file.
Example 5-52 Identify disks in boot.ini
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINNT
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINNT="Microsoft Windows 2000 Advanced Server Primary" /fastdetect
multi(0)disk(1)rdisk(0)partition(1)\WINNT="Microsoft Windows 2000 Advanced Server Secondary" /fastdetect
13. Reboot the system for the changes to take effect. Your boot screen should now show the distinct disk identifiers. Try booting from both drives to verify that the mirroring works. Your Master Console is now successfully installed.
5.7 SAN File System MDS remote access setup (PuTTY / ssh)
The CLI for SAN File System can only be accessed with a secure shell connection. Telnet is not available on the MDS.
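As a sketch, a Linux or UNIX workstation can reach the MDS CLI as shown below; the host name tank-mds1 and user itsoadm are examples from this book's lab setup, so substitute your own values. PuTTY users enter the same host name in the PuTTY session dialog and select SSH as the protocol.

```shell
# Example MDS host name and CLI user; adjust for your environment.
MDS_HOST=tank-mds1
CLI_USER=itsoadm

# Interactive login to the MDS:
#   ssh ${CLI_USER}@${MDS_HOST}
# One-off command over ssh, for example listing the cluster members:
#   ssh ${CLI_USER}@${MDS_HOST} /usr/tank/admin/bin/sfscli lsserver
# The remote calls above are commented out because they need a live MDS;
# print the login command we would run instead:
echo "ssh ${CLI_USER}@${MDS_HOST}"
```

After logging in, you run sfscli exactly as shown in the examples in this book.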
Chapter 6.
6.1 Introduction
This chapter details the steps needed to prepare and then perform a Rolling Upgrade on the SAN File System Metadata servers and Clients. The following procedures upgrade a 2-node MDS cluster consisting of tank-mds1, which is the current master MDS, and tank-mds2, which is the subordinate MDS. The cluster is currently at V2.2.1.32; we will upgrade to V2.2.2. The current configuration is shown in Example 6-1 using the lsserver and statcluster SAN File System commands. Note the Software Version and Committed Software Version.
Example 6-1 Show SAN File System cluster status with lsserver command
tank-mds1:~ # sfscli lsserver
Name      State  Server Role Filesets Last Boot
==========================================================
tank-mds1 Online Master      9        Aug 19, 2005 3:24:25 PM
tank-mds2 Online Subordinate 7        Aug 19, 2005 3:58:44 PM
sfscli> statcluster
Name                             ATS_GBURG
ID                               61306
State                            Online
Target State                     Online
Last State Change                Aug 19, 2005 12:29:49 PM
Last Target State Change
Servers                          2
Active Servers                   1
Software Version                 2.2.1.32
Committed Software Version       2.2.1.32
Last Software Commit             Aug 19, 2005 10:40:58 AM
Software Commit Status           Not In Progress
Metadata Check State             Idle
Metadata Check Percent Completed 0 %
SAN File System provides a rolling software upgrade so that the SAN File System clients do not experience access disruptions to the SAN File System namespace during most of the upgrade process. The detailed procedure is specific to each release, as the operating systems and hardware supported have changed with each release. However, the high-level process is as follows:
1. Preparation: Complete some prerequisite steps and save important configuration details, as described in 6.2, Preparing to upgrade the cluster on page 231.
2. Stop a single MDS, upgrade any BIOS/device driver and operating system prerequisites, upgrade its SAN File System binaries, then restart that MDS and allow it to rejoin the cluster. During this time, the cluster as a whole continues to operate at the previous software version, with some MDSs running the previous version and some running the new version. Repeat the process until every MDS is running the new software version binaries, while continuing to use the old cluster protocols and data formats. For the current release of SAN File System (upgrading from V2.2.1 to V2.2.2), you should upgrade each subordinate first and the master MDS last; however, check the rolling upgrade instructions for each specific release, as the recommended order may change.
3. Once all binaries have been updated on each MDS, and all MDSs have rejoined the cluster, issue the sfscli upgradecluster command to go through a coordinated cluster transition to the new protocols, shared data structures, and new functionality.
4. Stop individual SAN File System clients and upgrade the SAN File System client software. Restart the SAN File System client and any applications. Check the specific rolling upgrade instructions for each release.
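The MDS portion of the process can be summarized in a dry-run sketch. The loop only echoes the upgrade order; the sfscli subcommands in the comments are the ones used later in this chapter, and a real upgrade also involves the BIOS, driver, and operating system work described below.

```shell
# Dry-run of the rolling upgrade order: subordinates first, master last.
# tank-mds1 (master) and tank-mds2 are this chapter's example cluster.
for MDS in tank-mds2 tank-mds1; do
  echo "upgrade $MDS"
  # On each MDS, roughly:
  #   sfscli stopautorestart $MDS    (then stop the MDS)
  #   ...upgrade BIOS/drivers/OS, run install_sfs-package-<version>.sh...
  #   sfscli startserver $MDS
  #   sfscli startautorestart $MDS
done
# Only after every MDS is running the new binaries:
#   sfscli upgradecluster
```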
3. The resulting backup archive is located in /usr/tank/server/DR/ and has a name of the format DRfiles-<hostname>-<datestamp>.tar.gz. It should also be copied to a safe location other than the local file system of the MDS being upgraded. This is especially important if you will upgrade to SUSE 9, as that upgrade overwrites any data on the operating system disk.
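Picking the newest archive and copying it off the MDS might look like the sketch below. The directory and file names are simulated with temporary paths so the commands can be tried anywhere; in production you would copy the real file from /usr/tank/server/DR/ to another host (for example, with scp).

```shell
# Simulate /usr/tank/server/DR/ with a temp directory and two fake archives.
DRDIR=$(mktemp -d)
SAFE=$(mktemp -d)
touch -t 200508181200 "$DRDIR/DRfiles-tank-mds1-20050818120000.tar.gz"
touch -t 200508191232 "$DRDIR/DRfiles-tank-mds1-20050819123200.tar.gz"

# Pick the newest DR archive and copy it to a safe location off the MDS disk.
NEWEST=$(ls -t "$DRDIR"/DRfiles-*.tar.gz | head -1)
cp "$NEWEST" "$SAFE/"
ls "$SAFE"    # -> DRfiles-tank-mds1-20050819123200.tar.gz
```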
4. Gather and record the configuration information shown in Table 6-1 for each MDS.
Table 6-1 Configuration information to save (record the value for each item)
Mount point of CD: Path to the root of the mounted SAN File System CD. Default: /media/cdrom.
Truststore Password: grep TrustPassword /usr/tank/admin/config/cimom.properties
CIMOM port: grep ^Port /usr/tank/admin/config/cimom.properties
CLI User ID and password: Log in to the user account that has been set up to access the SAN File System CLI (sfscli) and run cat $HOME/.tank.passwd. Example: # cat $HOME/.tank.passwd returns itsoadm:password. The first value is the CLI User ID; the second value is the CLI User password.
System Management IP: Address assigned to the RSA II card.
Authentication method: Either LDAP or local authentication. Run grep ^AuthModule /usr/tank/admin/config/cimom.properties. If the value com.ibm.storage.storagetank.auth.SFSLocalAuthModule is present, then local authentication is used; otherwise, LDAP authentication is used.
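The grep commands from Table 6-1 can be exercised against a sample file anywhere; the properties content below is invented for illustration, while on a real MDS the file is /usr/tank/admin/config/cimom.properties.

```shell
# Build a sample cimom.properties so the Table 6-1 greps can be tried
# without an MDS; the values here are invented for illustration.
CONFIG=$(mktemp)
cat > "$CONFIG" <<'EOF'
TrustPassword=ibmstore
Port=5989
AuthModule=com.ibm.storage.storagetank.auth.SFSLocalAuthModule
EOF

grep TrustPassword "$CONFIG"    # truststore password
grep '^Port' "$CONFIG"          # CIMOM port
grep '^AuthModule' "$CONFIG"    # local vs. LDAP authentication
rm -f "$CONFIG"
```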
If you are using the LDAP configuration method, you must also record the LDAP parameters from /usr/tank/admin/config/tank.properties: LDAP_SERVER, LDAP_USER, LDAP_PASSWD, LDAP_SECURED_CONNECTION, LDAP_BASEDN_ROLES, LDAP_ROLE_MEM_ID_ATTR, LDAP_USER_ID_ATTR, and LDAP_ROLE_ID_ATTR.
5. Decide which version of SUSE Linux you will run on the upgraded cluster. SAN File System V2.2.2 requires either SLES 8 Service Pack 4 (kernel version 2.4.21-278) or SLES 9 with Service Pack 1 and kernel version 2.6.5-7.151. Run the command shown in Example 6-3 on page 233 on each MDS in the cluster to check the Linux kernel version. We are currently running SLES 8 with Service Pack 3, as required by SAN File System V2.2.1. Because we are using SVC metadata storage, which at the time of writing was not supported on SLES 9, we will upgrade to SLES 8 Service Pack 4. If you decide to move to SLES 9, you will need to first upgrade the operating system to SLES 9, then apply SLES 9 Service Pack 1, and then upgrade the kernel to the correct level.
Example 6-3 Show kernel version on MDS
tank-mds2:~ # rpm -qa |grep kernel
kernel-source-2.4.21-231
You can get the Linux kernel packages from your SUSE Maintenance Web service, or through a public Linux download site such as http://rpmfind.net. See the IBM TotalStorage SAN File System Installation and Configuration Guide, GA27-4316, and also Apply United Linux Service Pack 3 and 4 on page 129 for details on upgrading the SUSE operating system. We will perform the actual upgrade during the rolling upgrade process, but you should gather any CDs required before starting the upgrade. This minimizes the outage time on any given MDS.
6. Check the Release Notes, and download any BIOS or firmware upgrade images required; Web sites for these are given in 6.3.2, Upgrade MDS BIOS and RSA II firmware on page 234.
7. Check the Release Notes, and download any disk device driver upgrade images required; Web sites for these are given in 6.3.3, Upgrade the disk subsystem software on page 235.
8. If you are upgrading from V2.2.1, cable a second redundant Ethernet connection to each MDS's second Ethernet port before upgrading so that Ethernet bonding can be set up. To function in the most complete fashion, the V2.2.2 high-availability feature requires that Ethernet bonding be set up. Bonding enables you to configure multiple Ethernet connections with the same IP address, so that if one of the connections fails, the second connection takes over. To set up Ethernet bonding on SLES8, follow the procedure in Set up Ethernet bonding on page 131. To set up Ethernet bonding on SLES9, see the instructions in the SAN File System 2.2.2 Release Notes and the IBM TotalStorage SAN File System Installation and Configuration Guide, GA27-4316.
Important: If you need to configure Ethernet bonding on an existing SAN File System cluster, run the steps on any subordinate(s) first, then finally on the master MDS.
9. Make sure that SSH keys have been set up before proceeding to the next step.
This allows unchallenged root login among the MDSs and avoids repeated root password prompts during various SAN File System maintenance processes. If you have not previously configured the SSH keys, follow the procedure in step 5 on page 136 in 5.2.5, Install prerequisite software on the MDS on page 135.
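If keys still need to be generated, the usual OpenSSH sequence looks like the sketch below. The peer host name tank-mds2 is an example, and the key is created without a passphrase so that logins are unchallenged; a scratch directory stands in for root's real ~/.ssh directory so the commands can be tried safely.

```shell
# Generate a passphrase-less RSA key pair in a scratch directory
# (on a real MDS you would use root's ~/.ssh/id_rsa instead).
KEYDIR=$(mktemp -d)
ssh-keygen -t rsa -N "" -f "$KEYDIR/id_rsa" -q

# Distribute the public key to each peer MDS (example host name):
#   ssh-copy-id -i "$KEYDIR/id_rsa.pub" root@tank-mds2
ls "$KEYDIR"
```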
Attention: If the server being stopped is the master, wait until the new master takes over before proceeding. You can check this with the lsserver command. If the output shows the other server in a Joining state, you must wait. When the master takeover is complete, the output will show one MDS with the Master role and the other in the Not Running state.
In our case, after upgrading the current subordinate, tank-mds2, we would then shut down the master, tank-mds1. We would wait until the lsserver command output shows tank-mds2 in an Online state with the Master role and all filesets assigned, since it will have taken over tank-mds1's workload. The shut-down MDS, tank-mds1, will stay in the Not Running state. We could then proceed to upgrade tank-mds1.
2. On the MDS that was shut down, disable the automatic restart capability using the stopautorestart command, as shown in Example 6-5.
Example 6-5 Disable autorestart
tank-mds2:~ # sfscli stopautorestart tank-mds2
CMMNP5365I The automatic restart service for metadata server tank-mds2 successfully disabled
For the IBM eServer xSeries 346 model, the BIOS is at the following Web site:
http://www.ibm.com/pc/support/site.wss/document.do?sitestyle=ibm&lndocid=MIGR-57356
For the IBM eServer xSeries 365 model, the BIOS is at the following Web site:
http://www.ibm.com/pc/support/site.wss/document.do?sitestyle=ibm&lndocid=MIGR-60101
Follow the README notes that come with the FLASH BIOS package for installation instructions. In our case, we dumped the BIOS to a diskette and rebooted the MDS with the diskette inserted in the drive. The MDS reboots off the diskette, asks some elementary questions, and flashes the BIOS. Then we upgraded the RSA II card firmware to the latest level, which is 1.09. You can download this firmware (for the IBM eServer xSeries 345) from the following Web site:
http://www.ibm.com/pc/support/site.wss/document.do?lndocid=MIGR-46489
For the IBM eServer xSeries 346 model, the RSA II firmware is at the following Web site:
http://www.ibm.com/pc/support/site.wss/document.do?sitestyle=ibm&lndocid=MIGR-56759
For the IBM eServer xSeries 365 model, the RSA II firmware is at the following Web site:
http://www.ibm.com/pc/support/site.wss/document.do?sitestyle=ibm&lndocid=MIGR-53861
Instructions for upgrading the BIOS and firmware are given in the manual IBM TotalStorage SAN File System Installation and Configuration Guide, GA27-4316.
Attention: If a reboot is required for either of the upgrades above, after the machine reboots, make sure the MDS is in a Not Running state. Use:
# sfscli lsserver
The RDAC driver can be downloaded from the following Web site; make sure to choose the correct version for your Linux kernel:
http://www.ibm.com/servers/storage/support/disk/ds4500/stormgr1.html
If you upgraded to SLES9, copy the saved backup configuration archive file (from Example 6-2 on page 231) back to /usr/tank/server/DR/, and also restore any local scripts or utilities that were installed on top of SAN File System.
Name                             ATS_GBURG
ID                               61306
State                            Online
Target State                     Online
Last State Change                Aug 19, 2005 12:29:49 PM
Last Target State Change
Servers                          2
Active Servers                   1
Software Version                 2.2.1.32
Committed Software Version       2.2.1.32
Last Software Commit             Aug 19, 2005 10:40:58 AM
Software Commit Status           Not In Progress
Metadata Check State             Idle
Metadata Check Percent Completed 0 %
2. Mount the SAN File System CD in the CD-ROM drive, for example, at /media/cdrom. Install the 1.4.2-1.0 version of the IBM Java Runtime Environment, provided on the SAN File System installation CD, using the following command:
rpm -U /media/cdrom/common/IBMJava2-142-ia32-JRE-1.4.2-1.0.i386.rpm
Upgrade the MDS to V2.2.2 by running the install_sfs-package-<version>.sh script, as shown in Example 6-8 on page 238. Run the installation script that corresponds to the version of SUSE Linux Enterprise Server that is installed on your system. There are two install_sfs-package scripts on the SAN File System CD:
For SUSE Linux Enterprise Server Version 8, in a directory named SLES8
For SUSE Linux Enterprise Server Version 9, in a directory named SLES9
We are at SLES8, so we run the script from that directory:
cd /media/cdrom/SLES8
./install_sfs-package-<version>.sh --restore /usr/tank/server/DR/savedDRarchive --sfsargs "-noldap"
Use the --restore option and reference the archive file created in Example 6-2 on page 231. If your configuration is using local authentication, rather than LDAP, use the --sfsargs -noldap option as shown; otherwise, the command will be of the format:
./install_sfs-package-<version>.sh --restore /usr/tank/server/DR/savedDRarchive
that is, without the --sfsargs -noldap option. Do not attempt to migrate to local authentication during the rolling upgrade process; either migrate before upgrading (and test thoroughly) or after the upgrade is complete.
The install_sfs package is a self-extracting archive and shell script that contains the software packages for all SAN File System components, including the metadata server, the administrative server, and all clients. Note that the version string for the install_sfs-package might differ from the version strings of the individual packages, but this does not cause any problems with the installation.
Using Example 6-8 on page 238 as a reference, enter the number corresponding to the language to use for the installation (we entered 2 for English), press Enter to display the license agreement, and enter 1 to accept it. The process extracts the packages, then prompts you for the server configuration parameters. Accept the prompted entries if they are correct; otherwise, enter amended values. You should have saved this information in step 4 on page 232 of 6.2, Preparing to upgrade the cluster on page 231.
Note particularly that you have to enter the TCP/IP address of your RSA card where prompted (System Management IP). This is the address that you saved in step 1 on page 231 and recorded in step 4 on page 232 of 6.2, Preparing to upgrade the cluster on page 231.
If you are using LDAP authentication, you will also be prompted to enter these values: LDAP_SERVER, LDAP_USER, LDAP_PASSWD, LDAP_SECURED_CONNECTION, LDAP_BASEDN_ROLES, LDAP_ROLE_MEM_ID_ATTR, LDAP_USER_ID_ATTR, and LDAP_ROLE_ID_ATTR. You should also have saved these in step 4 on page 232 of 6.2, Preparing to upgrade the cluster on page 231.
If you are already using local authentication, make sure to enter a valid locally defined user ID/password combination that is a member of the Administrator group for the CLI_USER/CLI_PASSWD prompts; otherwise, enter the LDAP user ID with the Administrator role.
Example 6-8 Upgrade cluster: Install SAN File System package, part 1
tank-mds2:/media/cdrom/SLES8 # ./install_sfs-package-2.2.2-132.i386.sh --restore /usr/tank/server/DR/DRfiles-tank-mds1-20050819123200.tar.gz --sfsargs "-noldap"
Software Licensing Agreement
1. Czech
2. English
3. French
4. German
5. Italian
6. Polish
7. Portuguese
8. Spanish
9. Turkish
Please enter the number that corresponds to the language you prefer.
2
Software Licensing Agreement
Press Enter to display the license agreement on your screen. Please read the agreement carefully before installing the Program. After reading the agreement, you will be given the opportunity to accept it or decline it. If you choose to decline the agreement, installation will not be completed and you will not be able to use the Program.

International Program License Agreement
Part 1 - General Terms
BY DOWNLOADING, INSTALLING, COPYING, ACCESSING, OR USING THE PROGRAM YOU AGREE TO THE TERMS OF THIS AGREEMENT. IF YOU ARE ACCEPTING THESE TERMS ON BEHALF OF ANOTHER PERSON OR A COMPANY OR OTHER LEGAL ENTITY, YOU REPRESENT AND WARRANT THAT YOU HAVE FULL AUTHORITY TO BIND THAT PERSON, COMPANY, OR LEGAL ENTITY TO THESE TERMS. IF YOU DO NOT AGREE TO THESE TERMS,
- DO NOT DOWNLOAD, INSTALL, COPY, ACCESS, OR USE THE PROGRAM; AND
- PROMPTLY RETURN THE PROGRAM AND PROOF OF ENTITLEMENT TO
Press Enter to continue viewing the license agreement, or, Enter "1" to accept the agreement, "2" to decline it or "99" to go back to the previous screen.
1
Installing sfs-package-2.2.2-132.i386.rpm......
sfs-package ##################################################
sfs-package-2.2.2-132
Installing /usr/tank/packages/sfs.locale.linux_SLES8-2.2.2-8.i386.rpm......
sfs.locale.linux_SLES8 ##################################################
sfs.locale.linux_SLES8-2.2.2-8
Installing /usr/tank/packages/sfs.server.verify.linux_SLES8-2.2.2-91.i386.rpm......
sfs.server.verify.linux_SLES8 ##################################################
sfs.server.verify.linux_SLES8-2.2.2-91
Installing /usr/tank/packages/sfs.server.config.linux_SLES8-2.2.2-91.i386.rpm......
sfs.server.config.linux_SLES8 ##################################################
sfs.server.config.linux_SLES8-2.2.2-91
SAN File System CD mount point (CD_MNT)
=======================================
setupsfs needs to access the SAN File System CD to verify the license key and install required software. Enter the full path to the SAN File System CD's mount point.
CD's mount point [/media/cdrom]:
Truststore Password (TRUSTSTORE_PASSWD)
=======================================
Enter the password used to secure the truststore file. The password must be at least six characters.
Truststore Password [-]: ibmstore
CIMOM port (CIMOM_PORT)
=======================
The CIMOM port is the port used for secure administrative operations.
CIMOM port number [5989]:
CLI User (CLI_USER)
===================
Enter the user name that will be used to access the administrative CLI. This user must have an administrative role.
CLI User [-]: itsoadm
CLI Password (CLI_PASSWD)
=========================
Enter the password used to access the administrative CLI.
CLI Password [-]: xxxx
System Managment IP (SYS_MGMT_IP)
=================================
Enter the System Managment IP address. This is the address assigned to your RSAII card.
System Managment IP [-]: 9.82.22.176
3. The process continues installing the new packages on the MDS, as shown in Example 6-9.
Example 6-9 Upgrade cluster: Install SAN File System package, part 2
Gathering required files
HSTPV0035I Machine tank-mds2 complies with requirements of SAN File System version 2.2.2.91, build sv22_0001.
Installing:wsexpress-5.1.2-1.i386.rpm on 9.82.24.97
wsexpress-5.1.2-1
Installing:ibmusbasm-1.09-2.i386.rpm on 9.82.24.97
Found Product ID 4001 USB Service Processor. Installing the USB Service Processor driver.
ibmusbasm-1.09-2
Installing:sfs.admin.linux_SLES8-2.2.2-91.i386.rpm on 9.82.24.97
HSTWU0011I Installing the SAN File System console...
HSTWU0014I The SAN File System console has been installed successfully.
sfs.admin.linux_SLES8-2.2.2-91
Installing:sfs.server.linux_SLES8-2.2.2-91.i386.rpm on 9.82.24.97
sfs.server.linux_SLES8-2.2.2-91
Restoring configuration files on 9.82.24.97
Updating configuration file: /usr/tank/admin/config/cimom.properties
Starting the CIM agent on 9.82.24.97
Starting the SAN File System Console on 9.82.24.97
Configuration complete.
4. Check the status of the upgraded MDS with the lsserver command, as shown in Example 6-10.
Example 6-10 Check MDS status
tank-mds2:~ # sfscli lsserver
Name      State       Server Role Filesets Last Boot
==================================================================
tank-mds2 Not Running Subordinate 0        Jan 1, 1970 12:00:00 AM
5. Check to see what SAN File System packages have been installed (see Example 6-11). Compare it to Example 6-7 on page 236.
Example 6-11 Check new installed packages
tank-mds2:~ # rpm -qa|grep sfs
dosfstools-2.8-296
sfs-package-2.2.2-132
sfs.server.config.linux_SLES8-2.2.2-91
sfs.locale.linux_SLES8-2.2.2-8
sfs.server.verify.linux_SLES8-2.2.2-91
sfs.server.linux_SLES8-2.2.2-91
sfs.admin.linux_SLES8-2.2.2-91
6. Now we can start the upgraded MDS to run it at V2.2.2, as shown in Example 6-12.
Example 6-12 Start the upgraded server
tank-mds2:~ # /usr/tank/admin/bin/sfscli startserver tank-mds2
Are you sure you want to start the metadata server? Starting the metadata server might cause filesets to be reassigned to this metadata server in accordance with the fileset assignment algorithm. [y/n]:y
CMMNP5248I Metadata server tank-mds2 started successfully.
7. The MDS will rejoin the cluster. Check this on the master MDS, as shown in Example 6-13.
Example 6-13 Upgraded server rejoins the cluster
tank-mds1:~ # sfscli lsserver
Name      State  Server Role Filesets Last Boot
==============================================================
tank-mds1 Online Master      1        Aug 19, 2005 10:41:01 AM
tank-mds2 Online Subordinate 0        Aug 19, 2005 3:24:25 PM
8. Re-enable the automatic restart capability on the MDS that was just upgraded using the startautorestart command, as shown in Example 6-14.
Example 6-14 Re-enable autorestart
tank-mds2:~ # sfscli startautorestart tank-mds2
CMMNP5365I The automatic restart service for metadata server tank-mds2 successfully enabled
tank-mds1 Not Running Master
tank-mds2 Joining     Subordinate 0        Aug 19, 2005 3:24:25 PM
tank-mds1:~ # sfscli lsserver
Name      State       Server Role Filesets Last Boot
==================================================================
tank-mds1 Not Running Master
tank-mds2 Joining     Subordinate 0        Aug 19, 2005 3:24:25 PM
tank-mds1:~ # sfscli lsserver
Name      State       Server Role Filesets Last Boot
====================================================
tank-mds1 Not Running Subordinate -
2. Check on tank-mds2 that the master role is correctly assumed (see Example 6-16).
Example 6-16 Check master role failover
tank-mds2:~ # sfscli lsserver
Name      State       Server Role Filesets Last Boot
==================================================================
tank-mds1 Not Running Subordinate
tank-mds2 Online      Master      1        Aug 19, 2005 3:24:25 PM
3. On the MDS that was shut down, disable the automatic restart capability using the stopautorestart command, as shown in Example 6-17.
Example 6-17 Disable autorestart
tank-mds1:~ # sfscli stopautorestart tank-mds1
CMMNP5365I The automatic restart service for metadata server tank-mds1 successfully disabled
4. Now follow the same steps to upgrade the final MDS, as described in 6.3.2, Upgrade MDS BIOS and RSA II firmware on page 234, 6.3.3, Upgrade the disk subsystem software on page 235, 6.3.4, Upgrade the Linux operating system on page 236, and 6.3.5, Upgrade the MDS software on page 236.
5. After the upgrade is complete, check the status of tank-mds1 (see Example 6-18).
Example 6-18 Check upgraded MDS status
tank-mds1:~ # sfscli lsserver
Name      State       Server Role Filesets Last Boot
==================================================================
tank-mds1 Not Running Subordinate 0        Jan 1, 1970 12:00:00 AM
6. Check that the new SAN File System packages were installed (see Example 6-19).
Example 6-19 Check SAN File System packages upgraded
tank-mds1:~ # rpm -qa|grep sfs
dosfstools-2.8-296
sfs-package-2.2.2-132
sfs.server.config.linux_SLES8-2.2.2-91
sfs.locale.linux_SLES8-2.2.2-8
sfs.server.verify.linux_SLES8-2.2.2-91
sfs.server.linux_SLES8-2.2.2-91
sfs.admin.linux_SLES8-2.2.2-91
Example 6-20 Start upgraded MDS
tank-mds1:~ # /usr/tank/admin/bin/sfscli startserver tank-mds1
Are you sure you want to start the metadata server? Starting the metadata server might cause filesets to be reassigned to this metadata server in accordance with the fileset assignment algorithm. [y/n]:y
CMMNP5248I Metadata server tank-mds1 started successfully.
8. The master role will remain on tank-mds2. On tank-mds2, check the status of both servers to make sure the software on both has been upgraded, as in Example 6-21.
Example 6-21   Check all MDSs are upgraded
tank-mds2:~ # sfscli lsserver -l
Name      State  Last State Change       Target State Server Role Filesets Last Boot               Current Time            Most Current Software Version
=======================================================================================================================================================
tank-mds2 Online Aug 19, 2005 4:24:03 PM Online       Master      0        Aug 19, 2005 3:24:25 PM Aug 19, 2005 4:24:51 PM 2.2.2.91
tank-mds1 Online Aug 19, 2005 3:58:58 PM Online       Subordinate 1        Aug 19, 2005 3:58:44 PM Aug 19, 2005 3:59:47 PM 2.2.2.91
9. Congratulations! Your cluster is now upgraded to V2.2.2 of SAN File System.
10. You may now disconnect the USB/RS-485 serial network interface on the RSA cards. This is not required, but the interface and connection are no longer used by SAN File System.
3. If there are any outstanding processes accessing the mount point, terminate them.
4. Stop the SAN File System client using the rmstclient command with the -noprompt option, as shown in Example 5-38 on page 174. This command will fail if the SAN File System is being accessed by the client, so make sure to stop all use of SAN File System on the AIX client, as described in the previous step.
5. Copy the current stclient.conf configuration file to a temporary location on the AIX client:
cp /usr/tank/client/config/stclient.conf /tmp
Note: If you did not choose to save the setup configuration when the AIX client was first installed, you may not have this file.

6. Remove the current SAN File System software from the AIX system, as described in "Uninstalling the AIX SAN File System client" on page 175. Then install the new client package, as described in 5.3.4, "SAN File System AIX client installation" on page 169. Make sure to specify the location of the new version package file saved in the first step.
7. Copy your saved stclient.conf file back to the /usr/tank/client/config directory:
cp /tmp/stclient.conf /usr/tank/client/config
8. Reconfigure the SAN File System client, using the stored parameters in the stclient.conf file, as shown in "Configuring the AIX client to the SAN File System server" on
page 172. Use the -noprompt option to have the setup run silently, using the values in the configuration file.
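The save/restore portion of this upgrade (steps 5 to 7) can be sketched as a script. This is only an illustration: it uses a throwaway directory and invented config content instead of the real /usr/tank/client/config and /tmp paths, so it can run anywhere.

```shell
# Sketch of the stclient.conf save/restore flow, using stand-in paths.
work=$(mktemp -d)                       # throwaway stand-in for the real directories
mkdir -p "$work/config" "$work/tmp"
echo "server=tank-mds1" > "$work/config/stclient.conf"   # hypothetical config content
cp "$work/config/stclient.conf" "$work/tmp/"    # step 5: save the file before removal
rm "$work/config/stclient.conf"                 # uninstall/reinstall removes the config
cp "$work/tmp/stclient.conf" "$work/config/"    # step 7: restore the saved copy
restored=$(cat "$work/config/stclient.conf")
rm -rf "$work"
echo "$restored"
```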
2. Scroll down to the Windows 2000 or 2003 client section and save the executable file to a temporary directory. It will be called sfs-client-WIN2K3-2.2.2-x.exe (where x is the release number).
3. Determine the current configuration information for the client, including:
   - SAN File System Master Server IP address or host name
   - SAN File System server port
   - SAN File System preferred drive letter
   - SAN File System client name
   - SAN File System network connection type (TCP or UDP)
   - SAN File System client critical error handling policy
4. Stop all applications using SAN File System on the Windows client.
5. Uninstall the current client version, as shown in "Removing the SAN File System Windows client" on page 157. Make sure you reboot your client following a successful de-installation.
6. Install the new version, as shown in "Windows client installation steps" on page 149, being sure to specify the new client package you just obtained from the MDS. Enter the saved configuration parameters.
You must use these exact group names and define all of the groups.
2. For each LDAP user ID that was used for SAN File System, define a UNIX user ID, and specify the same password. When defining each user ID with the useradd command, specify the group that matches its LDAP role. You may decide to use different user IDs than were previously used in LDAP; if so, note the special steps required. In our case, we will preserve an existing ID, ITSOMon, but replace the previous ITSOAdmin ID with itsoadm (remember, user IDs, groups, and passwords are case sensitive).

   # useradd -g Administrator itsoadm
   # passwd itsoadm
   (Specify a password when prompted.)
   # useradd -g Monitor ITSOMon
   # passwd ITSOMon
   (Specify a password when prompted.)

   Repeat for each user ID that was defined in LDAP or for any new user ID required. We recommend limiting UNIX user IDs to eight characters or fewer.
3. After making these definitions identically on each MDS, log in to each MDS using each user ID to verify the ID/password, and to make sure a /home/userid directory structure exists. Create home directories if required (use the mkdir command). You can also list the contents of the /etc/passwd and /etc/group files to verify that the intended UNIX groups and user IDs were added to the MDSs.
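The user-creation commands in step 2 can be generated in a loop. The sketch below is a dry run: it only prints the useradd commands (which require root to execute), using the example IDs and groups from the text.

```shell
# Dry-run sketch of step 2: one UNIX ID per former LDAP user, each in the
# group matching its role. Commands are printed, not executed (useradd needs root).
cmds=$(for entry in "itsoadm:Administrator" "ITSOMon:Monitor"; do
  user=${entry%%:*}        # part before the colon: the user ID
  group=${entry##*:}       # part after the colon: the matching role group
  echo "useradd -g $group $user"
done)
echo "$cmds"
```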
4. After making these definitions on every MDS, reconfigure the cluster to use local authentication. On each MDS, stop the administrative agent:
   # /usr/tank/admin/bin/stopCimom
5. Edit /usr/tank/admin/config/cimom.properties and change the line beginning with AuthModule to:
AuthModule=com.ibm.storage.storagetank.auth.SFSLocalAuthModule
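The edit in step 5 can also be done non-interactively with sed. The sketch below runs against a mock copy of the file, since the real cimom.properties lives on the MDS; the Port line and the previous AuthModule value shown here are placeholders, not the real file contents.

```shell
# Sketch: rewrite the AuthModule line in a mock cimom.properties.
# On an MDS, point sed at /usr/tank/admin/config/cimom.properties instead.
f=$(mktemp)
printf '%s\n' "Port=5989" "AuthModule=PREVIOUS_LDAP_MODULE" > "$f"   # mock contents
sed -i 's|^AuthModule=.*|AuthModule=com.ibm.storage.storagetank.auth.SFSLocalAuthModule|' "$f"
newline=$(grep '^AuthModule=' "$f")
echo "$newline"
rm -f "$f"
```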
Repeat steps 4 to 6 on each MDS. You are now using local authentication and can delete the SAN File System definitions from the LDAP server, as they are no longer required.

You will now log in to the CLI and GUI using a local user ID and password combination. You can only use user IDs that are members of one of the SAN File System standard groups; an attempt to use the CLI or GUI with a user ID that is not a member of one of these groups will fail.

You do not need to log in to the operating system as a SAN File System user ID to run the CLI; the CLI is just an application that runs after logging in. Therefore, you can log in to the MDS as any user, and then run the CLI as a SAN File System user ID. The user ID used to run the CLI is specified in the .tank.passwd file in the home directory of the ID that logged in to the operating system. Check any existing .tank.passwd files to make sure that the user IDs and passwords specified in them have been configured locally.

If you have used different user IDs from the previous LDAP configuration (for example, in our case, to illustrate this, we had ITSOAdmin in LDAP but defined instead a user ID itsoadm; remember, these are case sensitive), you will need to update any .tank.passwd files to reflect the correct user ID and password combination. For example, in our case for root, our /root/.tank.passwd file previously contained:
ITSOAdmin:xxxxx
where xxxxx is the actual LDAP password. This indicates that when we log in to the MDS OS as root and run the CLI, it runs with the privileges of the ITSOAdmin user ID. Since this ID no longer exists, we need to replace it with a valid SAN File System local user ID.

Use the tankpasswd command, as shown in Example 6-24. Change to the home directory of the user that had logged in to the MDS (root in this example), then update the .tank.passwd file to set the user ID to be used when logging in to the CLI. Repeat this process while logged in as any other user IDs that have been accessing the CLI. You have to configure the .tank.passwd file even if logging in to the MDS OS as the same user ID that will run the CLI.
Example 6-24   Update the CLI password
# cd ~
# cat .tank.passwd
ITSOAdmin:password
# /usr/tank/admin/bin/tankpasswd -u itsoadm -p password
# cat .tank.passwd
itsoadm:password
tank-mds4:~ # sfscli lsadmuser
Name    User Role Authorization
===============================
ITSOMon Monitor   Not Current
itsoadm Admin     Current
The example also shows the lsadmuser command, which displays the currently defined SAN File System user IDs; it shows that our current session ran under the user ID itsoadm (Authorization is Current). If you subsequently upgrade the SAN File System software, make sure to enter a valid locally defined user ID/password combination that is a member of the Administrator group at the CLI_USER/CLI_PASSWD prompts (as in Example 6-8 on page 238).
Part 3
Chapter 7.
If using PuTTY, start the PuTTY interface, and create a session for your MDS, as shown in Figure 7-1.
If you want to run the CLI when logged in as a non-root user ID, you must manually create a password file specifying a valid SAN File System user ID/password combination in the home directory of the login user ID. Use the tankpasswd command with a SAN File System user ID and password combination, as shown in Example 7-2. In the example, the session is logged in as lxuser, but after the command is run, when lxuser runs sfscli, it will run with the privilege level of the user ID ITSOMon.
Example 7-2   Create .tank.passwd file for non-root users
lxuser@mds1:~> cd $HOME
lxuser@mds1:~> /usr/tank/admin/bin/tankpasswd -u ITSOMon -p password
lxuser@mds1:~> cat .tank.passwd
ITSOMon:password
lxuser@mds1:~>
If you have done this, you can now start an sfscli session by simply typing sfscli, as shown in Example 7-3. Most administrative tasks necessary to administer the cluster can be run using the CLI. A few tasks (for example, during installation) are executed outside of sfscli, that is, directly at the MDS operating system; for these, mostly standard Linux commands are used.
Example 7-3   Starting sfscli
mds1:~ # sfscli
sfscli>
Type help at the sfscli prompt to get a list of commands available (Example 7-4).
Example 7-4   Access help using sfscli
sfscli> help
activatevol        addprivclient      addserver          addsnmpmgr
attachfileset      autofilesetserver  builddrscript      catlog
catpolicy          chclusterconfig    chdomain           chfileset
chldapconfig       chpool             chvol              clearlog
collectdiag        lsadmuser          lsautorestart      lsclient
lsdomain           lsdrfile           lsfileset          lsimage
lslun              lspolicy           lspool             lsproc
lsserver           lssnmpmgr          lstrapsetting      lsusermap
lsvol              mkdomain           mkvol              mvfile
quiescecluster     quit               rediscoverluns     refreshusermap
reportclient       reportfilesetuse   reportvolfiles     resetadmuser
resumecluster      reverttoimage      rmdomain           rmdrfile
rmfileset          rmimage            rmpolicy           setfilesetserver
setoutput          settrap            startautorestart   startcluster
startmetadatacheck startserver        statcluster        statfile
statfileset        statldap           statpolicy         statserver
stopautorestart    stopcluster        stopmetadatacheck  stopserver
To get more information about a specific command, type help clicommand (see Example 7-5). This shows the full reference for the selected command, including syntax and examples. You can also use help -s clicommand to display just the short description of a command.
Example 7-5   Help on a specific CLI command
sfscli> help rmfileset

rmfileset
   Removes one or more empty, detached filesets and optionally the files in
   the filesets, including any FlashCopy(R) images.

>>-rmfileset--+--------+--+---------+--+-----+------------------>
              +- -?----+  '- -quiet-'  '- -f-'
              +- -h----+
              '- -help-'

   .--------------.
   V              |
>----+-fileset_name-+-+----------------------------------------><
     '- - ----------'

<< information deleted >>
You can run sfscli on any MDS; however, many commands are valid for execution only at the master MDS. Also, some commands execute differently on the master and on a subordinate MDS; for example, the lsserver command, when run on the master MDS, lists all MDSs in the cluster, but if issued from a subordinate MDS, it displays attributes only about the local MDS.

Tip: To display which of the cluster nodes is currently running as the master MDS, use the statcluster -netconfig command from the SAN File System command line interface.

More information about command restrictions and operations can be found in the IBM TotalStorage SAN File System Administrator's Guide and Reference, GA27-4317. This will also tell you the LDAP privileges (Administrator, Backup, Monitor, or Operator) required for the various commands.
Enter your administrator ID and password to display the main window (Figure 7-3 on page 257).
Figure 7-3 labels three areas of the main window: the Task Bar, the My work frame, and the My work area.
The My work frame area on the left hand side contains links to the SAN File System administrative functions, consisting of a series of embedded menus.
Several user assistance resources are also available, including an embedded Help Assistant for window help information, as well as a more comprehensive SAN File System Information Center. To open the embedded Help Assistant, click the Help Assistant Icon in the top right corner. To access the Information Center, select one of the topics under the SAN File System Assistance section in the work area. The Information Center will then open up a new window, as shown in Figure 7-4.
Example 7-6   List server
sfscli> lsserver
Name State  Server Role Filesets Last Boot
=========================================================
mds1 Online Master      2        May 14, 2004 2:47:31 AM
mds2 Online Subordinate 0        May 16, 2004 10:39:58 PM
In this output:
- Name: The name of the MDS.
- State: Indicates the state of the MDS. The possible states are: Failed Initialization, Fully Quiescent, Initializing, Joining, Not Added, Not Running, Offline, Online, Partly Quiescent, and Unknown.
- Server Role: Indicates whether the MDS is master or subordinate.
- Filesets: Indicates the number of filesets assigned to the MDS.
- Last Boot: Shows when the engine was last started.
In this output:
- Name: The name of the volume; in this case, MASTER is a system-defined name for the initial system volume.
- State: Indicates whether the volume is active or not (a volume can be activated using the activatevol command).
- Pool: The pool that the volume is assigned to (SYSTEM in this case, indicating the System Pool).
- Size (MB): Size of the volume in MB.
- Used (MB): Amount of space being used, in MB.
- Used (%): Percentage of the available size being used in the volume.
To verify that these two pools exist after install, use lspool, as shown in Example 7-8.
Example 7-8   Verify the system and default pool
sfscli> lspool
Name         Type         Size (MB) Used (MB) Used (%) Threshold (%) Volumes
============================================================================
SYSTEM       System       2032      240       11       80            1
DEFAULT_POOL User Default 0         0         0        80            0
In this output:
- Name: The name of the pool.
- Type: Indicates whether it is a System Pool or User Pool. The default pool is also indicated.
- Size (MB): Total size of the pool in MB.
- Used (MB): Amount of space being used in the pool, in MB.
- Used (%): Percentage of the available size being used in the pool.
- Threshold (%): Percentage of the storage pool's estimated capacity which, when reached or exceeded, causes the MDS to generate an alert.
- Volumes: Number of volumes defined in the pool.

All storage pools should be monitored to ensure they do not run out of space, but it is crucial, in particular, to monitor the System Pool. If the System Pool fills, then no metadata can be written and the cluster will be unavailable to the clients.
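The threshold alert condition described above can also be checked from a script. This sketch scans lspool-style rows and prints the name of any pool whose Used (%) has reached its Threshold (%); the first two rows come from Example 7-8, and FULL_POOL is a made-up row added so that the alert path fires.

```shell
# Sketch: flag pools at or over their alert threshold from lspool-style rows.
# Columns end with: ... Used (%), Threshold (%), Volumes, so we index from NF.
alerts=$(printf '%s\n' \
  "SYSTEM System 2032 240 11 80 1" \
  "DEFAULT_POOL User Default 0 0 0 80 0" \
  "FULL_POOL User 1000 900 90 80 2" |
  awk '{ if ($(NF-2) >= $(NF-1)) print $1 }')
echo "$alerts"
```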
In this output:
- Lun ID: The WWN of the LUN assigned from the back-end storage device.
- Vendor: Indicates the vendor of the back-end storage device.
- Product: The type (product ID in this case) of the back-end storage device.
- Size: The size of the LUN presented from the back-end storage device.
- Volume: Indicates if a volume name has been defined for the volume within SAN File System.
- State (wrapped): Indicates whether the LUN is assigned to a pool or not.
In the example, we can see there are three LUNs mapped: one of them has a state of Assigned (and is our System Pool volume), and the other two are in the Available state, ready to be assigned. To list the LUNs that are visible to a SAN File System client, use the lslun -client <client_name> command, as shown in Example 7-10. GUI: Manage Storage → Data LUNs.
Example 7-10   List LUNs for a particular client
sfscli> lslun -client LIXPrague
Lun ID                                     Vendor Product Size (MB) Volume         State
===========================================================================================
VPD83NAA6=600507680188801B200000000000001C IBM    2145    40959     vol_lixprague1 Assigned
In this output:
- Name: The user ID defined on the LDAP server.
- User Role: The LDAP role of the user ID.
- Authorization: Indicates if the user ID is currently authenticated with the MDS.
- clearlog: Clears the audit log and cluster log files. GUI: Follow the preceding menu selection and then Select Action → Clear Log.
- chclusterconfig: Modifies the cluster settings that do not require a restart when changed. GUI: Manage Servers and Clients → Cluster → Select Properties → Select Localization or Tuning.
- quiescecluster: Changes the state of all MDSs in the cluster to one of three quiescent states. GUI: Manage Servers and Clients → Cluster → Select Change State.
- resumecluster: Brings all MDSs in the cluster to the online state. GUI: Manage Servers and Clients → Cluster → Select Change State.
- startcluster: Starts all MDSs in the cluster and brings them to the full online state. GUI: Manage Servers and Clients → Cluster → Select Start Online.
- statcluster: Displays status, network, workload, and configuration information about the cluster. GUI: Manage Servers and Clients → Cluster → Select Properties.
- stopcluster: Stops all MDSs in the cluster gracefully. GUI: Manage Servers and Clients → Cluster → Select Stop.
- lsserver: Displays a list of all MDSs in the cluster and their attributes (if issued from the master MDS), or displays attributes about the local MDS if issued from a subordinate MDS. GUI: Manage Servers and Clients → Server.
- startserver: Starts the specified MDS. GUI: Manage Servers and Clients → Server → Select Server → Select Action → Start.
- statserver: Displays status, configuration, and workload information for a specific MDS in the cluster, if issued from the master MDS; displays the same information for the local MDS if issued from a subordinate. GUI: Manage Servers and Clients → Server → Select Server → Select Action → Properties.
- stopserver: Shuts down a subordinate MDS gracefully. GUI: Manage Servers and Clients → Server → Select Server → Select Action → Stop.
- setfilesetserver: Reassigns an existing fileset to be hosted by a different metadata server. GUI: Manage Filing → Click on desired Fileset → Properties → Select General Settings → Server Assignment Method.
- statfileset: Displays the number of started and completed transactions for the filesets being served by the local MDS. GUI: Monitor System → Filesets.
- startmetadatacheck: Starts the utility that performs a consistency check on the metadata for the entire system or a set of filesets, generates reports in the cluster log, and optionally repairs inconsistencies in the metadata. GUI: Maintain System → Check Metadata.
- stopmetadatacheck: Stops the metadata check utility that is currently in progress. GUI: Maintain System → Check Metadata → Select Stop.
- lsautorestart: Displays a list of MDSs and the automatic-restart settings for each. GUI: Maintain System → Restart Service.
- startautorestart: Enables the MDS to restart automatically if it is down. GUI: Maintain System → Restart Service → Select Server → Select Action → Enable Service.
- stopautorestart: Disables the MDS from restarting automatically if it is down. GUI: Maintain System → Restart Service → Select Server → Select Action → Disable Service.
- lsproc: Displays a list of long-running processes that are not yet complete and their attributes. GUI: Monitor System → Processes.
- setdefaultpool: Designates a User Pool to be the default storage pool, and changes the previous default pool to a regular, nondefault User Pool. GUI: Manage Storage → Storage Pools → General Properties → Select Enable (Select a User Pool).
- resetadmuser: Forces all administrative users to log in again. GUI: Administer Access → Users → Select Actions → Timeout All Authorizations.
- suspendvol: Suspends one or more volumes so that the MDS cannot allocate new data on the volumes. GUI: Manage Storage → Volumes → Select Volume → Select Action → Suspend.
GUI: Manage Storage → Data LUNs, select the client name from the drop-down, and refresh. Our lab setup is shown in Figure 7-5.
Figure 7-5 Lab setup: an AIX client, two Windows 2000 clients, and the Metadata Server, each attached through HBAs to FC Switch 1 and FC Switch 2, which connect to the System Pool and User Pool storage
To make sure that the LUNs are visible to SAN File System, use the lslun command, as shown in Example 7-9 on page 260. To list LUNs visible to a particular client, run the lslun command with the -client <client_name> parameter.

Once you have verified that each client and MDS can see all of the required LUNs, you can start defining volumes. Use the mkvol command, as shown in Example 7-13. When adding LUNs to a user pool (to the default pool in our example), you must use the -client parameter, and the client specified must be one that has access to the LUN being added. In our case, we specify client AIXRome to add this LUN. You can specify any client with this command, as long as that client can see the LUN being added. GUI: Manage Storage → Add Volumes.
Example 7-13   Add a volume to your default pool
sfscli> lslun -client AIXRome
Lun ID                                     Vendor Product Size (MB) Volume State
=====================================================================================
VPD83NAA6=600507680188801B2000000000000001 IBM    2145    102399           Available
VPD83NAA6=600507680188801B200000000000001C IBM    2145    40959            Available
sfscli> mkvol -lun VPD83NAA6=600507680188801B2000000000000001 -pool DEFAULT_POOL -client AIXRome -activate yes vol01
In this command, you specify the following parameters:
- -lun lun_identifier: Specifies the identifier of a LUN to make into a volume.
- -client client_name: Name of a client that has visibility to the LUN. In order to create a volume in the user pool, the client must be active (must appear in the client list when you run the lsclient command) and have access to that particular LUN (use the reportclient command to report active clients that can access the LUN). This parameter is only required for adding volumes to a User Pool; that is, it is not used when adding volumes to the System Pool.
- -pool pool_name: Name of the storage pool to which to add the new volumes. The storage pool is either a User Pool or the System Pool. If not specified, the volumes are added to the default User Pool.
- -activate yes/no: Specifies whether to activate the volume. Data is only stored on activated volumes. The default is yes.
- -f: Forces the MDS to add the volume and write a new label to it, even if the volume already has a valid SAN File System label. Note: You can use -f only if the volume is not assigned to another storage pool in the same cluster.
- volume_name: Name(s) assigned to the added volume(s). This name must be unique within the storage pool, and can be up to 256 characters in length.

In Example 7-13 on page 264, a volume called vol01 is added to the default pool. To verify that the volume has been successfully added, use the lspool command, as shown in Example 7-14. Compare with the listing before we added the volume (Example 7-8 on page 260). GUI: Manage Storage → Storage Pools.
Example 7-14   List the pools to verify that the volume has been added to the default pool
sfscli> lspool
Name         Type         Size (MB) Used (MB) Used (%) Threshold (%) Volumes
============================================================================
SYSTEM       System       2032      240       11       80            1
DEFAULT_POOL User Default 102384    0         0        80            1
We see that the size of the default pool has increased, and that there is one volume listed under the Volumes column. Checking lsvol, the new volume appears, as shown in Example 7-15. GUI: Manage Storage Volumes.
Example 7-15   List volumes
sfscli> lsvol
Name   State     Pool         Size (MB) Used (MB) Used (%)
==========================================================
MASTER Activated SYSTEM       2032      240       11
vol01  Activated DEFAULT_POOL 102384    0         0
Finally, with lslun, we can see that the state of the newly defined LUN has changed from Available to Assigned, as shown in Example 7-16. Please note the syntax of the lslun command: as shown in Example 7-13 on page 264, we created the volume for client AIXRome. Because the LUN used to create this volume is visible to the AIXRome client only, we need to specify the -client parameter on the lslun command.
Example 7-16   Display LUNs for a SAN File System client
sfscli> lslun -client AIXRome
Lun ID                                     Vendor Product Size (MB) Volume State
====================================================================================
VPD83NAA6=600507680188801B2000000000000001 IBM    2145    102399    vol01  Assigned
VPD83NAA6=600507680188801B200000000000001C IBM    2145    40959            Available
Note: The lsvol command does not have a -client option. When the lsvol command is issued, all defined volumes are displayed; from the SAN File System MDS point of view, volumes are logical units of particular storage pools. By contrast, when lslun is used, the -client parameter usually has to be specified (except when you want to list the system LUNs visible to an MDS node), because a LUN represents a physical object on the back-end storage that is visible to a particular client or MDS.
The change is reflected using the lsvol command, as shown in Example 7-18.
Example 7-18   List volumes to verify changes
sfscli> lsvol
Name    State     Pool         Size (MB) Used (MB) Used (%)
===========================================================
MASTER  Activated SYSTEM       2032      240       11
volume1 Activated DEFAULT_POOL 102384    0         0
If one or more files cannot be accessed during the removal (for example, if there are bad sectors on the volume being removed), the volume removal will fail unless you specify the -f option. With this parameter, all files on the volume are deleted, not copied to other volumes. Before removing a volume with the -f option, we recommend listing the files on the volume using the reportvolfiles command, as shown in Example 7-19. This command gives you the MDS perspective of what files are stored on that particular volume; it does not actually access the volume contents. Note that you cannot perform this operation from the GUI.
Example 7-19   reportvolfiles
sfscli> lsvol
Name    State     Pool         Size (MB) Used (MB) Used (%)
===========================================================
MASTER  Activated SYSTEM       2032      240       11
volume1 Activated DEFAULT_POOL 102384    16        0
sfscli> reportvolfiles volume1
ROOT:sanfs/test.txt
ROOT:sanfs/files/B4rFEm
ROOT:sanfs/files/B7rz7u
ROOT:sanfs/files/cfgvg.out
ROOT:sanfs/files/codcron
ROOT:sanfs/files/lslpplc.out
ROOT:sanfs/files/post_i.out
ROOT:sanfs/files/pre_rm.out
ROOT:sanfs/files/rc.net.out
ROOT:sanfs/files/rc.net.serial.out
ROOT:sanfs/files/rmTrace
ROOT:sanfs/files/rpcbind.file
ROOT:sanfs/files/sdd.temporary.file
ROOT:sanfs/files/sddsrv.out
ROOT:sanfs/files/sfs.client.aix51-opt
ROOT:sanfs/files/xlogfile
The reportvolfiles command tells you where the files are located on that volume within the global namespace. In the example, all the files on the volume volume1 are contained in the fileset called files.

Attention: Specifying the -f parameter with the rmvol command will remove the files from the volume, and these files will have to be restored from an existing backup. We recommend using RAID disk (for example, RAID 5 or RAID 10) for user volumes, to minimize the possibility of volume corruption.

If the -f parameter is not specified, the MDS will automatically move the data off the volume to another volume within the pool. If you want to assign a volume to another storage pool, you must first move all the files off it using the rmvol command.

The rmvol command requires a -client parameter in order to remove a volume from a user pool. The client specified must have access not only to the volume being removed, but to all other volumes in the storage pool. To verify this, use lsvol -pool <storage_pool> and lslun -client <name>, and crosscheck the results. To remove a system volume, use the rmvol command without the -client parameter.
GUI: Manage Storage → Volumes → Select volume → Select action → Remove. Example 7-20 shows how to remove a user volume, volume1.
Example 7-20   Removing volumes in SAN File System
sfscli> lsvol
Name    State     Pool         Size (MB) Used (MB) Used (%)
===========================================================
MASTER  Activated SYSTEM       2032      240       11
volume1 Suspended DEFAULT_POOL 102384    16        0
sfscli> rmvol -client AIXRome volume1
Are you sure you want to delete Volume volume1? [y/n]:y
CMMNP5449E There is not enough space on other volumes to move the volume contents.
sfscli> lsvol
Name    State     Pool         Size (MB) Used (MB) Used (%)
===========================================================
MASTER  Activated SYSTEM       2032      240       11
volume1 Suspended DEFAULT_POOL 102384    16        0
sfscli> rmvol -client AIXRome -f volume1
Are you sure you want to delete Volume volume1? [y/n]:y
CMMNP5442I Volume volume1 was removed successfully.
sfscli> lsvol
Name   State     Pool   Size (MB) Used (MB) Used (%)
====================================================
MASTER Activated SYSTEM 2032      240       11
If you suspect a faulty disk and want to delete it gracefully from the storage pool, do the following operations:
- Attempt rmvol on the volume (without the -f option). This moves all the data that is accessible on the volume.
- List the remaining contents of the volume (reportvolfiles). Keep this list.
- Force-remove the volume (rmvol -f). This deletes all traces of the remaining files and removes the volume from its storage pool.
- Add additional volume(s) to the storage pool if space is required to replace the failing volume.
- Restore the deleted files from a backup.
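The removal sequence above can be captured as a dry-run checklist. In this sketch the volume name badvol is hypothetical, and the sfscli commands are printed rather than executed, since they only exist on an MDS.

```shell
# Dry-run sketch of the graceful removal of a suspect volume ("badvol" is
# illustrative). Each command would be run in an sfscli session on the master MDS.
steps="sfscli rmvol -client AIXRome badvol
sfscli reportvolfiles badvol
sfscli rmvol -client AIXRome -f badvol"
# Print the plan: move accessible data off, list and keep the remainder,
# then force-remove; finish by restoring the listed files from backup.
printf '%s\n' "$steps"
```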
After you define new LUNs, run the rediscoverluns -client <client_name> command to make the new LUNs available to the clients, to be assigned as volumes. Only one System Pool can exist; it is created by default at system installation, so any new pools will be User Pools.
2. In Example 7-24 on page 271, we verify that SDD has been configured to use the three volumes that have been assigned to SAN File System. The output of the datapath query device command shows there are three devices configured by SDD. The serial numbers match the LUN IDs reported in the previous example.
Example 7-24   Datapath query device
# datapath query device

Total Devices : 3

DEV#: 0  DEVICE NAME: vpatha  TYPE: 2145  POLICY: Optimized
SERIAL: 600507680188801b2000000000000000
============================================================================
Path#    Adapter/Hard Disk   State  Mode    Select   Errors
    0    Host2Channel0/sdb   OPEN   NORMAL  154515   0
    1    Host2Channel0/sde   OPEN   NORMAL  0        0
    2    Host3Channel0/sdi   OPEN   NORMAL  155066   0
    3    Host3Channel0/sdl   OPEN   NORMAL  0        0

DEV#: 1  DEVICE NAME: vpathb  TYPE: 2145  POLICY: Optimized
SERIAL: 600507680188801b2000000000000001
============================================================================
Path#    Adapter/Hard Disk   State  Mode    Select   Errors
    0    Host2Channel0/sdc   CLOSE  NORMAL  0        0
    1    Host2Channel0/sdf   CLOSE  NORMAL  96       0
    2    Host3Channel0/sdh   CLOSE  NORMAL  0        0
    3    Host3Channel0/sdk   CLOSE  NORMAL  75       0

DEV#: 2  DEVICE NAME: vpathc  TYPE: 2145  POLICY: Optimized
SERIAL: 600507680188801b2000000000000002
============================================================================
Path#    Adapter/Hard Disk   State  Mode    Select   Errors
    0    Host2Channel0/sdd   CLOSE  NORMAL  0        0
    1    Host2Channel0/sdg   CLOSE  NORMAL  70       0
    2    Host3Channel0/sdj   CLOSE  NORMAL  0        0
    3    Host3Channel0/sdm   CLOSE  NORMAL  101      0
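If you script this verification, the device count can be derived from the output itself rather than read by eye. The sample lines below mirror Example 7-24; on a real MDS you would pipe the output of `datapath query device` instead of the hard-coded text.

```shell
# Sketch: count SDD vpath devices by counting DEV#: header lines.
# Sample lines are abbreviated from Example 7-24; pipe the real command on an MDS.
ndev=$(printf '%s\n' \
  "DEV#: 0  DEVICE NAME: vpatha  TYPE: 2145  POLICY: Optimized" \
  "DEV#: 1  DEVICE NAME: vpathb  TYPE: 2145  POLICY: Optimized" \
  "DEV#: 2  DEVICE NAME: vpathc  TYPE: 2145  POLICY: Optimized" |
  grep -c '^DEV#:')
echo "devices configured by SDD: $ndev"
```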
3. At the SVC, we added a new LUN (vdisk) and made it available to the MDS host ports. See your storage device documentation for detailed instructions on how to do this. The next steps show how to have the MDS Linux OS dynamically recognize the new LUN. You must perform the remaining steps in this section on every MDS before continuing to "Adding volumes to system storage pool" on page 274.
4. Force the HBA driver (QLogic for the MDS) to rescan the SAN fabric. Since we are using QLogic adapters, the commands are as shown in Example 7-25.
Example 7-25   Force a scan for new devices
# echo scsi-qlascan >/proc/scsi/qla2300/2
# echo scsi-qlascan >/proc/scsi/qla2300/3
5. This updates the two QLogic files in the /proc directory. View these files, as shown in Example 7-26 (we show just one of the files, /proc/scsi/qla2300/2, in the example, but you should check both of them). Scroll down to the SCSI LUN Information section. An asterisk (*) indicates a newly discovered LUN that has not yet been registered with the operating system. Note the SCSI ID and LUN number from the left-hand column of any entries marked with a *.
Example 7-26  View QLogic proc
# cat /proc/scsi/qla2300/2
QLogic PCI to Fibre Channel Host Adapter for QLA2342:
        Firmware version:  3.02.24, Driver version 6.06.64
Entry address = c5000060
HBA: QLA2312 , Serial# F97353
Request Queue = 0x50e8000, Response Queue = 0x50d0000
Request Queue count= 128, Response Queue count= 512
Total number of active commands = 0
Total number of interrupts = 155823
Total number of IOCBs (used/max) = (0/600)
Total number of queued commands = 0
    Device queue depth = 0x20
Number of free request entries = 27
Number of mailbox timeouts = 0
Number of ISP aborts = 0
Number of loop resyncs = 0
Number of retries for empty slots = 0
Number of reqs in pending_q= 0, retry_q= 0, done_q= 0, scsi_retry_q= 0
Host adapter:loop state= <READY>, flags= 0x8e0813
Dpc flags = 0x0
MBX flags = 0x0
SRB Free Count = 4096
Link down Timeout = 000
Port down retry = 030
Login retry count = 030
Commands retried with dropped frame(s) = 0

SCSI Device Information:
scsi-qla0-adapter-node=200000e08b09691d;
scsi-qla0-adapter-port=210000e08b09691d;
scsi-qla0-target-0=5005076801400364;
scsi-qla0-target-1=500507680140035a;

SCSI LUN Information:
(Id:Lun)  * - indicates lun is not registered with the OS.
( 0: 0): Total reqs 152325, Pending reqs 0, flags 0x0, 0:0:81,
( 0: 1): Total reqs 729, Pending reqs 0, flags 0x0, 0:0:81,
( 0: 2): Total reqs 0, Pending reqs 0, flags 0x0*, 0:0:81,
( 0: 3): Total reqs 763, Pending reqs 0, flags 0x0, 0:0:81,
( 1: 0): Total reqs 733, Pending reqs 0, flags 0x0, 0:0:82,
( 1: 1): Total reqs 833, Pending reqs 0, flags 0x0, 0:0:82,
( 1: 2): Total reqs 0, Pending reqs 0, flags 0x0*, 0:0:82,
( 1: 3): Total reqs 844, Pending reqs 0, flags 0x0, 0:0:82,
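Scanning a long listing for starred entries by eye is error-prone. As an illustrative sketch (the function name is ours, and the here-document stands in for the real /proc/scsi/qla2300/2 file), the Id and Lun numbers of unregistered LUNs can be pulled out with grep and sed:

```shell
# Extract the (Id, Lun) pairs of LUNs not yet registered with the OS,
# that is, the lines flagged with '*' in the SCSI LUN Information section.
extract_new_luns() {
  grep '\*' | sed -n 's/^( *\([0-9]*\): *\([0-9]*\)).*/\1 \2/p'
}

# Sample lines stand in for: cat /proc/scsi/qla2300/2
extract_new_luns <<'EOF'
( 0: 1): Total reqs 729, Pending reqs 0, flags 0x0, 0:0:81,
( 0: 2): Total reqs 0, Pending reqs 0, flags 0x0*, 0:0:81,
( 1: 2): Total reqs 0, Pending reqs 0, flags 0x0*, 0:0:82,
EOF
```

With the sample above this prints the two pairs used in the next step, 0 2 and 1 2, one per line.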
6. Add the collected SCSI ID and LUN numbers of the newly added LUNs to the /proc/scsi/scsi file. These are 0 2 and 1 2 in our example. To do this, for each controller number (2 and 3 are the controller numbers for the QLogic 2342 ports) and ID/LUN combination, enter echo "scsi add-single-device <controller> 0 <ID> <LUN>" >/proc/scsi/scsi at the system prompt, as shown in Example 7-27 on page 273.
Example 7-27  Add the new LUNs to /proc/scsi/scsi
# echo "scsi add-single-device 2 0 0 2" >/proc/scsi/scsi
# echo "scsi add-single-device 3 0 0 2" >/proc/scsi/scsi
# echo "scsi add-single-device 2 0 1 2" >/proc/scsi/scsi
# echo "scsi add-single-device 3 0 1 2" >/proc/scsi/scsi
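With more controllers or LUNs, typing each add-single-device line by hand gets tedious. A sketch (the function name is ours; the controller numbers and ID/LUN pairs are the ones from our example) that generates the commands as a dry run, so you can review them before actually redirecting into /proc/scsi/scsi:

```shell
# Print one add-single-device command per controller and ID/LUN pair.
# This is a dry run: it only prints the commands; on a live MDS you
# would execute the printed lines (as root).
gen_cmds() {
  for ctrl in 2 3; do
    for idlun in "0 2" "1 2"; do
      printf 'echo "scsi add-single-device %s 0 %s" >/proc/scsi/scsi\n' \
        "$ctrl" "$idlun"
    done
  done
}
gen_cmds
```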
Once the /proc directory has been updated, the LUNs should now be recognized by the operating system. Verify this by viewing the /proc/scsi/qla2300/2 and /proc/scsi/qla2300/3 files, as shown in Example 7-28. There are now no * entries in the list of LUNs, indicating the new LUN is recognized by the operating system.
Example 7-28  Verify that LUNs are now recognized by the OS
# cat /proc/scsi/qla2300/2
QLogic PCI to Fibre Channel Host Adapter for QLA2342:
        Firmware version:  3.02.24, Driver version 6.06.64
Entry address = c5000060
HBA: QLA2312 , Serial# F97353
Request Queue = 0x50e8000, Response Queue = 0x50d0000
Request Queue count= 128, Response Queue count= 512
Total number of active commands = 0
Total number of interrupts = 156227
Total number of IOCBs (used/max) = (0/600)
Total number of queued commands = 0
    Device queue depth = 0x20
Number of free request entries = 121
Number of mailbox timeouts = 0
Number of ISP aborts = 0
Number of loop resyncs = 0
Number of retries for empty slots = 0
Number of reqs in pending_q= 0, retry_q= 0, done_q= 0, scsi_retry_q= 0
Host adapter:loop state= <READY>, flags= 0x8e0813
Dpc flags = 0x0
MBX flags = 0x0
SRB Free Count = 4096
Link down Timeout = 000
Port down retry = 030
Login retry count = 030
Commands retried with dropped frame(s) = 0

SCSI Device Information:
scsi-qla0-adapter-node=200000e08b09691d;
scsi-qla0-adapter-port=210000e08b09691d;
scsi-qla0-target-0=5005076801400364;
scsi-qla0-target-1=500507680140035a;

SCSI LUN Information:
(Id:Lun)  * - indicates lun is not registered with the OS.
( 0: 0): Total reqs 152704, Pending reqs 0, flags 0x0, 0:0:81,
( 0: 1): Total reqs 731, Pending reqs 0, flags 0x0, 0:0:81,
( 0: 2): Total reqs 8, Pending reqs 0, flags 0x0, 0:0:81,
( 0: 3): Total reqs 765, Pending reqs 0, flags 0x0, 0:0:81,
( 1: 0): Total reqs 735, Pending reqs 0, flags 0x0, 0:0:82,
( 1: 1): Total reqs 835, Pending reqs 0, flags 0x0, 0:0:82,
( 1: 2): Total reqs 9, Pending reqs 0, flags 0x0, 0:0:82,
( 1: 3): Total reqs 846, Pending reqs 0, flags 0x0, 0:0:82,
7. Force the Subsystem Device Driver (SDD), or equivalent driver, to rescan and map the new devices. For SDD, enter the /usr/sbin/cfgvpath command at the system prompt, as shown in Example 7-29.
Example 7-29  Force SDD to rescan and map the new devices
# /usr/sbin/cfgvpath
crw-r--r--   1 root   root   253,  0 Sep 26 21:50 /dev/IBMsdd
major number 254 assigned to vpath
(dev: vpathe) Added vpathe 254 64
...
We can see that a new vpath, vpathe, was added for the new LUN.
8. Verify that SDD recognized the newly added LUN using the datapath query device command, as shown in Example 7-30.
Example 7-30  Verify using datapath query command
# datapath query device

Total Devices : 4

DEV#:   0  DEVICE NAME: vpatha  TYPE: 2145  POLICY: Optimized
SERIAL: 600507680188801b2000000000000000
============================================================================
Path#      Adapter/Hard Disk    State   Mode     Select   Errors
    0   Host2Channel0/sdb       OPEN    NORMAL   155601   0
    1   Host2Channel0/sde       OPEN    NORMAL   0        0
    2   Host3Channel0/sdi       OPEN    NORMAL   156238   0
    3   Host3Channel0/sdl       OPEN    NORMAL   0        0

DEV#:   1  DEVICE NAME: vpathb  TYPE: 2145  POLICY: Optimized
SERIAL: 600507680188801b2000000000000001
============================================================================
Path#      Adapter/Hard Disk    State   Mode     Select   Errors
    0   Host2Channel0/sdc       CLOSE   NORMAL   0        0
    1   Host2Channel0/sdf       CLOSE   NORMAL   96       0
    2   Host3Channel0/sdh       CLOSE   NORMAL   0        0
    3   Host3Channel0/sdk       CLOSE   NORMAL   75       0

DEV#:   2  DEVICE NAME: vpathc  TYPE: 2145  POLICY: Optimized
SERIAL: 600507680188801b2000000000000002
============================================================================
Path#      Adapter/Hard Disk    State   Mode     Select   Errors
    0   Host2Channel0/sdd       CLOSE   NORMAL   0        0
    1   Host2Channel0/sdg       CLOSE   NORMAL   70       0
    2   Host3Channel0/sdj       CLOSE   NORMAL   0        0
    3   Host3Channel0/sdm       CLOSE   NORMAL   101      0

DEV#:   3  DEVICE NAME: vpathe  TYPE: 2145  POLICY: Optimized
SERIAL: 600507680188801b200000000000002b
============================================================================
Path#      Adapter/Hard Disk    State   Mode     Select   Errors
    0   Host3Channel0/sdq       CLOSE   NORMAL   0        0
    1   Host2Channel0/sdn       CLOSE   NORMAL   0        0
    2   Host3Channel0/sdo       CLOSE   NORMAL   0        0
    3   Host2Channel0/sdp       CLOSE   NORMAL   0        0
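Beyond confirming the new vpath, it is worth checking that no path is accumulating errors. A sketch (function name ours; the sample lines stand in for the path table of the real datapath query device output, with a hypothetical nonzero error count for illustration, since our real output above shows none):

```shell
# Print any path whose last column (Errors) is nonzero in
# 'datapath query device' style output.
paths_with_errors() {
  awk '/Host[0-9]+Channel/ && $NF != 0 { print $2 }'
}

paths_with_errors <<'EOF'
    0   Host2Channel0/sdb       OPEN    NORMAL   155601   0
    1   Host2Channel0/sde       OPEN    NORMAL   0        3
EOF
```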
2. Use the lslun command to list LUNs that are available to SAN File System, as shown in Example 7-32. As you can see, there are two unallocated LUNs. This matches the output we saw in Example 7-23 on page 270. Therefore, SAN File System has not yet detected our newly added LUN, ID 600507680188801b200000000000002b. GUI: Manage Storage → Metadata LUNs.
Example 7-32  List available LUNs
# sfscli lslun
Lun ID                                     Vendor Product Size (MB) Volume State     Storage Device WWNN Port WWN
================================================================================
VPD83NAA6=600507680188801B2000000000000002 IBM    2145    102400    -      Available UNKNOWN             UNKNOWN,
VPD83NAA6=600507680188801B2000000000000001 IBM    2145    102400    -      Available UNKNOWN             UNKNOWN,
VPD83NAA6=600507680188801B2000000000000000 IBM    2145    10240     MASTER Assigned  UNKNOWN             UNKNOWN,
3. In order for SAN File System to discover new LUNs, run the rediscoverluns command, as shown in Example 7-33. This forces SAN File System to rescan for new LUNs that are available and recognized by the operating system. GUI: Manage Storage → Metadata LUNs → Select Action → Rediscover LUNs.
Example 7-33  Rediscover new LUNs
# sfscli rediscoverluns
CMMNP5410I The LUNs have been rediscovered. Tip: Run lslun to view the LUNs
4. Rerun the lslun command to verify that the new LUN has been recognized, as shown in Example 7-34.
Example 7-34  List LUNs
# sfscli lslun
Lun ID                                     Vendor Product Size (MB) Volume State     Storage Device WWNN Port WWN
================================================================================
VPD83NAA6=600507680188801B2000000000000002 IBM    2145    102400    -      Available UNKNOWN             UNKNOWN,
VPD83NAA6=600507680188801B2000000000000001 IBM    2145    102400    -      Available UNKNOWN             UNKNOWN,
VPD83NAA6=600507680188801B2000000000000000 IBM    2145    10240     MASTER Assigned  UNKNOWN             UNKNOWN,
VPD83NAA6=600507680188801B200000000000002B IBM    2145    999       -      Available UNKNOWN             UNKNOWN,
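When many LUNs are listed, it helps to filter the lslun output down to the ones still waiting for a volume. A sketch (function name ours; the column positions follow the examples above, and the sample lines stand in for the live sfscli lslun output):

```shell
# Print the LUN IDs that lslun reports in the Available state,
# that is, LUNs not yet assigned to a SAN File System volume.
# Field 5 is the Volume column ('-' when unassigned), field 6 the State.
available_luns() {
  awk '$6 == "Available" { print $1 }'
}

available_luns <<'EOF'
VPD83NAA6=600507680188801B200000000000002B IBM 2145 999 - Available UNKNOWN UNKNOWN,
VPD83NAA6=600507680188801B2000000000000000 IBM 2145 10240 MASTER Assigned UNKNOWN UNKNOWN,
EOF
```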
5. You can now add the new LUN to the System Pool, using the mkvol command, as shown in Example 7-35. GUI: Manage Storage Add Volumes.
Example 7-35  Add the new LUN to the SYSTEM pool
# sfscli mkvol -lun VPD83NAA6=600507680188801B200000000000002B -pool SYSTEM -desc "SYS VOLUME2" newsysvol
CMMNP5426I Volume newsysvol was created successfully.
6. Verify the new volume using the lsvol command, as shown in Example 7-36. GUI: Manage Storage Volumes.
Example 7-36  List volumes
# sfscli lsvol
Name      State     Pool   Size (MB) Used (MB) Used (%)
=======================================================
MASTER    Activated SYSTEM 10224     384       3
newsysvol Activated SYSTEM 992       48        4
avol1     Activated poola  102384    1280      1
avol2     Activated poola  40944     1264      3
bvol1     Activated poolb  51184     1600      3
bvol2     Activated poolb  46064     1584      3
7. Finally, verify that the System Pool now includes the new volume using the lspool command, as shown in Example 7-37. Compare this with the previous pool listing, Example 7-31 on page 275. GUI: Manage Storage Storage Pools
Example 7-37  Verify SYSTEM pool
# sfscli lspool SYSTEM
Name   Type   Size (MB) Used (MB) Used (%) Threshold (%) Volumes
================================================================
SYSTEM System 11216     432       3        80            2
You have now successfully added a new LUN to the System Pool.
Tip: It is a good practice to rename or even remove the pool DEFAULT_POOL, since you will typically either create and assign another pool as the default pool, or even disable the default user pool entirely (see Disabling the default User Pool on page 328). In this case, it would be confusing to have a pool called DEFAULT_POOL, which is not, in fact, the default storage pool.
When expanding a volume, you must make sure all systems with visibility to it have recognized the new capacity. When expanding a LUN in a user pool, validate the expansion on every client that has visibility to it. You can display which clients have visibility to a LUN using the reportclient command, as shown in 7.7.1, "Display a list of clients with access to a particular volume or LUN" on page 304.
We can see the vdisk is mapped to the port 10000000C92855E1, which corresponds to our AIX SAN File System client, agent47, and has the LUN ID 6005076801848008C80000000000000A.
2. We will verify the size of this LUN at the MDS using the lslun command, as shown in Example 7-41. This command shows the current capacity (about 7 GB) of the LUN to be expanded, which is visible from client agent47. GUI: Manage Storage → Data LUNs.
Example 7-41  LUN size before expansion
sfscli> lslun -client agent47 VPD83NAA6=6005076801848008C80000000000000A
Lun ID                                     Vendor Product Size (MB) Volume        State    Storage Device WWNN Port WWN
===========================================================================================
VPD83NAA6=6005076801848008C80000000000000A IBM    2145    6999      svc-svcpool-2 Assigned Unknown             -
3. Example 7-41 shows that this LUN is available to SAN File System as the volume svc-svcpool-2. We can display the size of this volume (it matches the size of the LUN) using the lsvol command, as shown in Example 7-42. GUI: Manage Storage Volumes.
Example 7-42  Volume size before expansion
sfscli> lsvol svc-svcpool-2
Name          State     Pool    Size (MB) Used (MB) Used (%)
===========================================================================
svc-svcpool-2 Activated svcpool 6999      0         0
4. This volume is assigned to the storage pool svcpool. The lspool command shows its current size of 9 GB (see Example 7-43 on page 279). GUI: Manage Storage Storage Pools.
Example 7-43  Pool size before expansion
sfscli> lspool
Name         Type         Size (MB) Used (MB) Used (%) Threshold (%) Volumes
============================================================================
SYSTEM       System       64832     768       1        80            3
DEFAULT_POOL User Default 5696      3184      55       80            4
PolicyPool   User         944       32        3        80            1
svcpool      User         9031      112       1        80            2
5. Now we will expand the volume. At the SAN Volume Controller browser interface, display the Virtual Disks (vdisks), as shown in Figure 7-6. Select the check box for the vdisk to be expanded (aix52_SanFS). Check the type of the vdisk you are attempting to expand: if it is of type image, it cannot be expanded; if it is of type sequential, it will become a striped vdisk when it is expanded; a vdisk of type striped remains striped when expanded. Select Expand a Vdisk from the drop-down menu and click Go.
6. Select the managed disks (mdisks) to be used for the vdisk expansion and also the size to expand the vdisks by. We will expand the disk by 500 MB from its current 7 GB size. Click OK (Figure 7-7).
7. Now you must verify the expansion on the client(s). You must do this on each SAN File System client that has visibility to the LUN.
Example 7-44  Validate new LUN size
sfscli> lslun -client agent47 VPD83NAA6=6005076801848008C80000000000000A
Lun ID                                     Vendor Product Size (MB) Volume        State    Storage Device WWNN Port WWN
===========================================================================================
VPD83NAA6=6005076801848008C80000000000000A IBM    2145    7499      svc-svcpool-2 Assigned Unknown             -
You can also do this from the GUI by selecting Manage Storage → Data LUNs in the left-hand window, selecting the client, and clicking Refresh. Figure 7-8 shows that our LUN, VPD83NAA6=6005076801848008C80000000000000A, is recognized at its new size.
2. Now we need to expand the size of the SAN File System volume on the MDS. We know from Example 7-44 that our LUN is actually the volume svc-svcpool-2. Use the expandvol command, specifying the client agent47, as shown in Example 7-45. GUI: Manage Storage → Volumes → Select Volume → Select action → Properties → Size → Select client with visibility to the volume → Expand volume.
Example 7-45  Expand the volume
sfscli> expandvol -client agent47 svc-svcpool-2
CMMNP5389I Volume svc-svcpool-2 was expanded successfully.
3. Verify the size of this LUN at the MDS using the lslun command. Example 7-46 on page 282 shows that the new capacity (about 7.5 GB) of the LUN is recognized and visible from client agent47. GUI: Manage Storage → Data LUNs.
Example 7-46  LUN size after expansion
sfscli> lslun -client agent47 VPD83NAA6=6005076801848008C80000000000000A
Lun ID                                     Vendor Product Size (MB) Volume        State    Storage Device WWNN Port WWN
===========================================================================================
VPD83NAA6=6005076801848008C80000000000000A IBM    2145    7499      svc-svcpool-2 Assigned Unknown             -
4. Verify that the volume has been expanded with the lsvol command. Compare the previous size (Example 7-42 on page 278) with the new size of 7488 MB (see Example 7-47). GUI: Manage Storage Volumes.
Example 7-47  Volume size after expansion
sfscli> lsvol svc-svcpool-2
Name          State     Pool    Size (MB) Used (MB) Used (%)
===========================================================================
svc-svcpool-2 Activated svcpool 7488      0         0
5. Verify that the pool now reports the correct expanded size using lspool. Compare the previous size in Example 7-43 on page 279 to the size shown in Example 7-48. GUI: Manage Storage Storage Pools.
Example 7-48  Pool size after expansion
sfscli> lspool
Name         Type         Size (MB) Used (MB) Used (%) Threshold (%) Volumes
============================================================================
SYSTEM       System       64832     768       1        80            3
DEFAULT_POOL User Default 5696      3184      55       80            4
PolicyPool   User         944       32        3        80            1
svcpool      User         9520      112       1        80            2
We expanded the vdisk by 500 MB in the SVC. Windows 2000 requires a reboot to detect the expanded volume. After the reboot, Disk Manager confirms that the disk has been expanded, as shown in Figure 7-10. It now has a capacity of 5.37 GB.
Example 7-49  List LUNs
# sfscli lslun
Lun ID                                     Vendor Product Size (MB) Volume    State     Storage Device WWNN Port WWN
================================================================================
VPD83NAA6=600507680188801B200000000000002B IBM    2145    999       newsysvol Assigned  UNKNOWN             UNKNOWN,
VPD83NAA6=600507680188801B2000000000000002 IBM    2145    102400    -         Available UNKNOWN             UNKNOWN,
VPD83NAA6=600507680188801B2000000000000001 IBM    2145    102400    -         Available UNKNOWN             UNKNOWN,
VPD83NAA6=600507680188801B2000000000000000 IBM    2145    10240     MASTER    Assigned  UNKNOWN             UNKNOWN,
1. Expand the LUN using the procedures for your back-end storage system. Note that, at the time of writing, the SVC is the only supported metadata storage device that can expand an existing LUN.
2. After the expansion on the SVC, reboot each MDS in the cluster, one at a time, in a rolling fashion. The reboot is necessary for each MDS to recognize the expanded LUN. Make sure that each MDS has rejoined the cluster (using the lsserver command) before initiating a reboot of the next MDS. By rebooting each MDS individually, you maintain availability of the filesets to the clients.
3. Once every MDS has been rebooted, verify that all LUNs are still visible to SAN File System, as shown in Example 7-50. As you can see, the LUN has been successfully expanded, and now shows the updated size of 1199 MB. GUI: Manage Storage → Metadata LUNs.
Example 7-50  List LUNs
# sfscli lslun
Lun ID                                     Vendor Product Size (MB) Volume    State     Storage Device WWNN Port WWN
================================================================================
VPD83NAA6=600507680188801B200000000000002B IBM    2145    1199      newsysvol Assigned  UNKNOWN             UNKNOWN,
VPD83NAA6=600507680188801B2000000000000002 IBM    2145    102400    -         Available UNKNOWN             UNKNOWN,
VPD83NAA6=600507680188801B2000000000000001 IBM    2145    102400    -         Available UNKNOWN             UNKNOWN,
VPD83NAA6=600507680188801B2000000000000000 IBM    2145    10240     MASTER    Assigned  UNKNOWN             UNKNOWN,
4. Even though the LUN has been expanded, SAN File System does not yet recognize the new capacity in the volume, as shown in Example 7-51. The associated volume, newsysvol, still shows the former capacity of 992 MB. GUI: Manage Storage Volumes.
Example 7-51  List volumes
# sfscli lsvol
Name      State     Pool   Size (MB) Used (MB) Used (%)
=======================================================
MASTER    Activated SYSTEM 10224     384       3
newsysvol Activated SYSTEM 992       48        4
avol1     Activated poola  102384    1280      1
avol2     Activated poola  40944     1264      3
bvol1     Activated poolb  51184     1600      3
bvol2     Activated poolb  46064     1584      3
5. Use the expandvol command to expand the volume, as in Example 7-52, specifying the volume that was expanded, that is, newsysvol. GUI: Manage Storage → Volumes → Select Volume → Select action → Properties → Size → Expand volume.
Example 7-52  Expand volume
# sfscli expandvol newsysvol
CMMNP5389I Volume newsysvol was expanded successfully.
6. Verify that the volume has been successfully expanded using the lsvol command, as shown in Example 7-53. This shows that the capacity has increased. GUI: Manage Storage Volumes.
Example 7-53  Show expanded volume size
# sfscli lsvol
Name      State     Pool   Size (MB) Used (MB) Used (%)
=======================================================
MASTER    Activated SYSTEM 10224     384       3
newsysvol Activated SYSTEM 1184      48        4
avol1     Activated poola  102384    1280      1
avol2     Activated poola  40944     1264      3
bvol1     Activated poolb  51184     1600      3
bvol2     Activated poolb  46064     1584      3
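The before/after comparison can be done programmatically by pulling the Size (MB) column out of saved lsvol listings. A sketch (function name ours; the sample listings are the newsysvol lines from the examples in this section):

```shell
# Print the Size (MB) field for a named volume from 'sfscli lsvol'
# style output (field 4 in the column layout shown above).
vol_size() {  # usage: vol_size <volume-name>  (listing on stdin)
  awk -v v="$1" '$1 == v { print $4 }'
}

before=$(echo "newsysvol Activated SYSTEM 992 48 4"  | vol_size newsysvol)
after=$(echo "newsysvol Activated SYSTEM 1184 48 4" | vol_size newsysvol)
echo "newsysvol grew by $((after - before)) MB"
```

With the sizes from our example, this reports a growth of 192 MB.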
7.5 Filesets
A fileset is a unit of workload, and is a subset of the SAN File System global namespace. Filesets are created by an administrator to divide the namespace into a logical organizational structure. The fileset is the unit for which FlashCopy images are created. We will differentiate between:
- Dynamic fileset assignment
- Static fileset assignment
When a fileset is created, it can be assigned to a specific MDS for management. This is known as a static fileset. You can also choose to allow the cluster to assign the fileset to a suitable MDS, using a simple load-balancing algorithm. This is known as a dynamic fileset. Filesets can be changed from static to dynamic, and from dynamic to static, and a static fileset can also be rebound statically to another MDS.

In a balanced environment, each MDS should host at least one fileset, unless you choose to keep an idle MDS, with no filesets assigned, available to provide failover functions. This is known as an N+1 configuration. We discuss SAN File System failover and its effect on fileset assignments in 9.5, "MDS automated failover" on page 413.

We recommend using either all dynamic or all static fileset assignments to avoid undesired excessive load on a specific MDS cluster node. Using all static filesets gives you more precise control of load balancing across the SAN File System cluster. Dynamic filesets are allocated to different MDSs to balance the load; however, the algorithm essentially considers only the number of filesets assigned to each MDS. It does not take into account that some filesets are busier than others. Therefore, if you know which filesets generate more transactions, you can use this knowledge to statically assign them in a balanced manner across the MDS cluster.

Tip: The ROOT fileset is treated like any other fileset. The only difference is that, since it is created by the system, it starts out as a static fileset assigned to the master MDS at creation time.
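A rough balance check along the lines of the built-in algorithm is simply to count the filesets each MDS hosts. A sketch (function name ours; like the algorithm described above, it counts filesets, not their activity, and the sample lines stand in for real sfscli lsfileset output, whose last column is the hosting server):

```shell
# Tally filesets per hosting MDS from lsfileset-style output.
filesets_per_mds() {
  awk '{ count[$NF]++ } END { for (s in count) print s, count[s] }' | sort
}

filesets_per_mds <<'EOF'
userhomes Attached Soft 0 0 0 80 - mds4
user1 Attached Soft 0 0 0 80 - mds1
projects Attached Soft 0 0 0 80 - mds1
EOF
```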
Once a fileset has been created and attached to a particular location within the SAN File System, it appears as a regular directory or folder to the SAN File System clients. The clients can create files and directories in the fileset, permissions permitting. From a client perspective, a fileset looks like a normal directory within the SAN File System; clients only mount the single global namespace, and thereby have access to all the filesets (within security constraints). A client cannot move, rename, or delete a directory that is the root of a fileset. A client cannot create hard links across fileset boundaries. Figure 7-12 on page 289 shows the MDS and client perspective of filesets. There are five filesets shown: the root, Images, Install, UNIXfiles, and Winfiles. Some of these have subdirectories (for example, the folder Backup is a subdirectory of the root fileset, and the fileset unixfiles has a subdirectory called data). The client, however, is not specifically aware which folders are filesets; they all appear as regular directories.
Figure 7-13 Nested filesets
You should be careful when creating nested filesets for the following reasons:
- You cannot access a child fileset if the MDS hosting the parent fileset is unavailable. In the example, if the MDS hosting the fileset Projects failed, then both the Projects fileset and the fileset Website, even if hosted by a different MDS, would be unavailable until the failed MDS workload was failed over.
- A FlashCopy image is created at the individual fileset level; it does not include any nested filesets. Also, you cannot make a FlashCopy image of a fileset and its nested filesets in a single operation. This may be of concern if you must have a consistent image of a fileset and its nested filesets; making FlashCopy images in multiple operations could potentially lead to ordering or consistency issues.
- A FlashCopy image cannot be reverted when nested filesets exist within the fileset. You must manually detach the nested filesets before reverting the image. In the example, if you wanted to revert a FlashCopy image of the fileset Projects, you would first need to detach the fileset Website. You could reattach it after the fileset Projects was reverted.
- If creating nested filesets, attach them only directly to other filesets. Do not attach filesets to client-created directories; doing so makes a large-scale restore more complex. In the example, Website is attached directly to the Projects fileset.
- To detach a fileset, you must first detach all its nested filesets. In the example, if you needed to detach the fileset Projects, you would first need to detach the fileset Website.
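The child-first detach rule amounts to ordering the attach paths by depth, deepest first. A sketch (function name ours; the two paths are the Projects/Website example from the text):

```shell
# Emit a safe detach order for nested filesets: prefix each attach
# path with its depth (slash count), sort deepest-first, strip the
# depth again.
detach_order() {
  awk '{ print gsub("/", "/"), $0 }' | sort -rn | cut -d' ' -f2-
}

printf '%s\n' sanfs/Projects sanfs/Projects/Website | detach_order
```

For the example, this lists sanfs/Projects/Website before sanfs/Projects, which is exactly the order in which they must be detached.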
fileset_name: The name for the fileset. This is a logical name, internal to the MDS cluster; it is not visible to the clients. It need not be the same as the dir parameter, although you might choose to make it the same for clarity.
Note: We strongly recommend attaching filesets either to the root or to other filesets, and not to directories, as this will make a restore easier if required. If you attach filesets to directories, you have to re-create the directory on the client itself before you can restore the fileset.
Newly created filesets have owner and permissions set as follows:
- File permissions 000 (no access), owned by user ID/group ID 1000000/1000000 (no access), when viewed from UNIX-based clients.
- No access, and owned by SID S-1-0-0, when viewed from Windows-based clients.
You need to set ownership and permissions to a suitable value once for each fileset on a privileged client, as described in 7.6, "Client operations" on page 296, before you can use the new filesets. An example of creating filesets is shown in Example 7-54. GUI: Manage Filing → Create a Fileset.
Example 7-54  Creating filesets using the CLI
sfscli> mkfileset -attach sanfs -dir userhomes -desc "user home directories" userhomes
CMMNP5147I Fileset userhomes was created successfully.
sfscli> mkfileset -server mds1 -attach /sanfs/userhomes -dir user1 user1
CMMNP5147I Fileset user1 was created successfully.
We created two filesets, the first called userhomes and the second called user1. We attached the fileset userhomes to the root (sanfs, which is the name of the cluster) and also named the directory userhomes. Since we did not specify the -server option, this fileset will be assigned dynamically to one of the MDS cluster nodes. The second fileset, user1, was attached to the newly created fileset /sanfs/userhomes, at the directory point user1, and was statically assigned to mds1.
To verify that the filesets were created, use the lsfileset command, as shown in Example 7-55 on page 292. The column Most Recent Image lists the date and time that the last FlashCopy image of the fileset was made; in this case, we have not made any FlashCopy images yet. The final column, Server, shows the server that is currently hosting the fileset. GUI: Manage Filing → Filesets.
Note: The directory point and the fileset name do not need to be the same, although they are in our example. The directory point is the directory that will be visible to the clients. The fileset name is the logical name of the fileset as displayed to the SAN File System administrator.
Example 7-55  Listing defined filesets
sfscli> lsfileset
Name      Fileset State Quota Type Quota (MB) Used (MB) Used (%) Threshold (%) Most Recent Image Server
===========================================================================================
userhomes Attached      Soft       0          0         0        80            -                 mds4
user1     Attached      Soft       0          0         0        80            -                 mds1
The -l flag on the lsfileset command shows more details of the fileset, including the hosting MDS, MDS state, number of FlashCopy images that exist for the fileset, attach point, directory name, and parent fileset. An example of this command is shown in Example 7-56. We can determine whether a fileset is static or dynamic by looking in the Assigned Server column (to the left of the Attach Point). Fileset userhomes has a - (dash) in this field, indicating it is a dynamic fileset. The Server field for this fileset has the value mds4, since this is the MDS currently hosting the fileset. For fileset user1, the MDS mds1 is listed in both the Assigned Server and the Server columns, indicating it is a static fileset that is being hosted by its assigned server. GUI: Manage Filing → Filesets → Click on the fileset.
Example 7-56  Long listing of filesets
sfscli> lsfileset -l userhomes user1
Name Fileset State Serving State Quota Type Quota (MB) Used (MB) Used (%) Threshold (%) Images Most Recent Image Server Assigned Server Attach Point Directory Name Directory Path Parent Children Description
===========================================================================================================
userhomes Attached Online Soft 0 0 0 80 1 Jun 07, 2004 10:24:42 AM mds4 - sanfs/userhomes userhomes sanfs ROOT 1 user home directories
user1 Attached Online Soft 0 16 0 80 1 Jun 07, 2004 3:56:44 AM mds1 mds1 sanfs/userhomes/user1 user1 sanfs/userhomes userhomes 0 -
Figure 7-14 Nested filesets
Now, to see the client's view of the new filesets, we show Windows Explorer from a Windows 2000 (or Windows 2003) client. For this client, drive S: was specified as the mount point for the SAN File System cluster. The userhomes and USERS filesets can be viewed on the client under S:, as shown in Figure 7-15.
Note: The CLUSTER_NAME, sanfs, is shown as the disk label of the S: drive. This is the same as the name specified when installing the SAN File System cluster.
Figure 7-15 Windows Explorer shows cluster name sanfs as the drive label
To view the nested fileset user1 that was attached under sanfs\userhomes, expand its tree on the left-hand side, as shown in Figure 7-16.
As you can see in Figure 7-16, the fileset user1 is attached to sanfs\userhomes and the name of the directory is user1.
While filesets are being moved, there will be a pause for clients that are accessing that fileset. Typically, this will simply be interpreted as an operation taking a little longer than usual to complete; the exact behavior depends on the application. After the move, the clients continue transparently; they do not need to restart the application to recognize the new fileset host.
To convert a static fileset to a dynamic fileset, use the autofilesetserver command. This is shown in Example 7-58. We change the previously static fileset user1 to a dynamic fileset. After the command is issued, the Assigned Server column has a dash (-) in it, indicating a dynamic fileset. GUI: Manage Filing → Filesets → Click on the fileset → Select action → Properties → General Settings → Server Assignment Method → Automatic.
Example 7-58  Change static fileset to a dynamic fileset
sfscli> autofilesetserver user1
CMMNP5402I Automatic Metadata server assignment for fileset user1 is enabled.
sfscli> lsfileset -l user1
Name Fileset State Serving State Quota Type Quota (MB) Used (MB) Used (%) Threshold (%) Images Most Recent Image Server Assigned Server Attach Point Directory Name Directory Path Parent Children Description
===========================================================================================================
user1 Attached Online Soft 0 16 0 80 1 Jun 07, 2004 3:56:44 AM mds1 - sanfs/userhomes/user1 user1 sanfs/userhomes userhomes 0 -
filesets using Windows and UNIX-based clients. These steps are needed before any client can start accessing a fileset. Note: These steps need to be performed only ONCE for each fileset.
Attention: chclusterconfig -privclient list replaces the entire list of current privileged clients. If you use this command to add an additional privileged client, you must specify both the current and the new clients in the list. The addprivclient command behaves differently (see Example 7-63).

Reissue the statcluster -config command to verify that the clients AIXRome and LIXPrague have been added to the privileged client list, as shown in Example 7-62 on page 299.
Example 7-62 Verify privileged client list
sfscli> statcluster -config
Name                       sanfs
ID                         60355
State                      Online
Target State               Online
Last State Change          Sep 27, 2004 4:52:46 AM
Last Target State Change
Servers                    2
Active Servers             2
Software Version           2.2.0.83
Committed Software Version 2.2.0.83
Last Software Commit       Sep 15, 2004 4:41:21 PM
Software Commit Status     Not In Progress
Installation Date          Oct 14, 2003 12:04:25 PM
===========User-Defined Configuration Settings============
Pool Space Reclamation Interval 60 minutes
Privileged Clients              AIXRome,LIXPrague
RSA User                        USERID
RSA Password                    ********
===========Service-Defined Tuning Configuration===========
Master Server Buffer            2048 pages
Subordinate Server Buffer       200000 pages
Admin Process Limit             4
Server Workload Process Limit   20
The other method to add a privileged client and preserve the existing privileged clients list is to use the addprivclient command. Example 7-63 shows how to use the addprivclient command and displays the output of the statcluster -config command again to show the modified list of privileged clients.
Example 7-63 Add new privileged client WINWashington using the addprivclient command
sfscli> statcluster -config
Name                       sanfs
ID                         60355
State                      Online
Target State               Online
Last State Change          Sep 27, 2004 4:52:46 AM
Last Target State Change
Servers                    2
Active Servers             2
Software Version           2.2.0.83
Committed Software Version 2.2.0.83
Last Software Commit       Sep 15, 2004 4:41:21 PM
Software Commit Status     Not In Progress
Installation Date          Oct 14, 2003 12:04:25 PM
===========User-Defined Configuration Settings============
Pool Space Reclamation Interval 60 minutes
Privileged Clients              AIXRome,LIXPrague,WINWashington
RSA User                        USERID
RSA Password                    ********
===========Service-Defined Tuning Configuration===========
Master Server Buffer            2048 pages
Subordinate Server Buffer       200000 pages
Admin Process Limit             4
Server Workload Process Limit   20
2. Try to change to the aixfiles directory; because you do not yet have the correct permissions, an error is displayed, as shown in Example 7-66.

Example 7-66 Verify no access to aixfiles directory
# cd aixfiles
ksh: aixfiles: Permission denied.
Because you are on a privileged client, you can change these permissions. Use the chown, chgrp, and chmod commands to set the user ID, group ID, and permissions, and then verify the changes, as shown in Example 7-67 on page 301. Now you can change to the directory and create files there.
Example 7-67 Take ownership and set permissions on the fileset
# chown root.system aixfiles
# chmod 755 aixfiles
# ls -la
total 6
drwxr-xr-x   6 root     system       144 ...
drwxrwxrwx   3 root     system        72 ...
dr-xr-xr-x   2 root     system        48 ...
drwxr-xr-x   3 root     system        72 ...
# cd aixfiles
# ls -la
total 3
drwxr-xr-x   3 root     system       ...
drwxr-xr-x   6 root     system       ...
d---------   2 1000000  1000000      ...
3. Open the Security tab, and click Advanced to display the access control settings window.
5. The owner defaults to S-1-0-0, which is the null security ID. Choose another owner, usually Administrator or Administrators. Make sure the Replace owner on subcontainers and objects box is checked.
6. Click Apply and then click OK. Acknowledge the warning given in Figure 7-20, and select Yes to activate the new settings.
7. Select the Security tab and set the permissions you want for the folder. Here we have given all privileges to the Administrators group (see Figure 7-21 on page 303). Set the Everyone permissions according to your security requirements; if a UNIX-based client accesses this folder, it will do so with the permissions assigned to Everyone. Click OK to activate the changes.
8. Verify that you can access the fileset by opening the USERS directory (in this example, S:\USERS). You can now create files in the fileset.
We first present general policy information, then show how to set up a policy with the CLI (7.8.3, Create a policy and rules with CLI on page 309) and with the GUI (7.8.4, Creating a policy and rules with GUI on page 311).
Rules
A rule is an SQL-like statement that tells a SAN File System MDS to place the data for a file in a specific storage pool if the file meets a particular condition. A rule can apply to any file being created or only to files being created within a specific fileset.
Policies
A policy is a set of rules that determines where specific files are placed. An administrator can define any number of policies, but only one policy can be active at a time. If an administrator activates another policy, or makes changes to a policy, that action has no effect on existing files in the SAN File System; the new policy is effective only for newly created files.

Restriction: You cannot change rules in an active policy. You must first deactivate the policy (by activating another policy), then edit the rules, and then activate the policy again. See 7.8.9, Best practices for managing policies on page 334 for our recommendations.

A policy can contain any number of rules; however, the entire policy cannot exceed a length of 1 MB (which includes any spaces used to delimit the rules and any comments). Rules in the active policy are effective on all the SAN File System MDSs (and therefore apply to all the clients). Rules can specify any of these conditions, which, when matched, cause that rule to be applied:
- Fileset
- File name or extension
- Date and time when the file is created
- User ID and group ID on UNIX clients

SAN File System evaluates rules in the order that they appear in the active policy. When a client creates a file, SAN File System scans the list of rules in the active policy to determine which rule applies to the file. When a rule applies, SAN File System stops processing the rules and assigns the file to the appropriate storage pool. If no rule applies, the file is assigned to the default storage pool.

Note: Rules in a policy are evaluated only when a file is being created. If an administrator switches from one policy to another, the rules in the new policy apply only to newly created files; activating a new policy does not change the storage pool assignments for existing files.
Moving or renaming a file does not cause the policy to be applied; however, restoring a file causes it to be created in the storage pool required by the current policy. At installation time, a default null policy is created, which remains active until a new policy is created and activated. The null policy assigns all files to the default storage pool. Therefore, when you create new User Pools, they will not be used until you create and activate a policy with rules that direct files to those pools.
Chapter 7. Basic operations and configuration
Figure 7-22 shows how the policy rules operate to control how SAN File System allocates new files to the desired storage pools.
Figure 7-22 File placement example (filesets /HR, /CRM, /Finance, and /MFG; rules for file placement: files in /HR go into User Pool A, *.bak files go into User Pool C, *DB2.* files go into User Pool B, all others go into the Default Storage Pool; User Pool B is built on RAID-10 volumes (LUNs) and User Pool C on JBOD volumes)
The example shows four different storage pools available, with volumes assigned to provide different qualities of service: User Pool A, User Pool B, User Pool C, and the Default Storage Pool. User Pool A and the Default Storage Pool both use RAID-5 volumes, User Pool B uses RAID-10 volumes, and User Pool C uses JBOD volumes. There are four filesets in the SAN File System: HR, CRM, Finance, and MFG. The figure shows how the active policy is applied to determine the placement of files, as they are created, in the available storage pools. The rules in the box specify the following actions:
- All files in the fileset HR go to User Pool A.
- Files with the suffix .bak go to User Pool C.
- Files containing the string DB2. in the file name go to User Pool B.
- Files that do not match any rule go to the default storage pool (this is the default rule that is implicit in any policy).

Note: The figure does not show the exact syntax for rules; it is for illustration only.

The order in which the rules are listed in a policy determines the results: when a file is created, the rules are evaluated in order from top to bottom, and the first rule that the file matches determines the file's placement. For example, although the file /HR/DB2.data matches both the first and third rules, the first rule takes precedence; therefore, the file is placed in User Pool A when it is created.
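Using the rule syntax shown later in this chapter (see Example 7-77 on page 329), the placement logic of Figure 7-22 could be written roughly as follows. This is a sketch only: the rule names and underscore-style pool names are illustrative, not taken from an actual configuration.

```
VERSION 1 /* Do not remove or change this line!*/
RULE 'hrRule'  SET STGPOOL User_Pool_A FOR FILESET HR
RULE 'bakRule' SET STGPOOL User_Pool_C WHERE NAME like '%.bak'
RULE 'db2Rule' SET STGPOOL User_Pool_B WHERE NAME like '%DB2.%'
/* Files matching no rule fall through to the default storage pool */
```

Because rules are evaluated top to bottom, /HR/dsn1.bak is caught by 'hrRule' before 'bakRule' is ever considered, which matches the precedence behavior described above.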
In this syntax, the parameters are defined as follows:

rule_name       Optional identifier (name) for the rule.
pool_name       Storage pool where the files matching the rule should be stored.
file_set_list   One or more (comma-separated) filesets to which this rule applies.
SQL_expression  Narrows the file selection to which the rule applies.

The SQL_expression can be any combination of standard SQL-syntax expressions, except that CASE expressions and compare-when clauses are not allowed. You can use many built-in functions in the SQL expression for date and time manipulation, numeric manipulation, and string manipulation; these are listed next. Each rule must include a FOR clause, a WHERE clause, or both. This determines whether a rule is restricted in operation to files in a particular fileset or filesets. This concept is illustrated in 7.8.5, More examples of policy rules on page 322.
Note for SAN File System V1.1 clients: SAN File System V2.1 and higher still supports the use of the FOR CONTAINER clause in the rules, so existing policies do not need to be changed at this time. However, we recommend all future policies be written using the FOR FILESET clause.
Attributes
You can use the following file attributes in the WHERE clause:

NAME  Name of the file. In the name, % represents zero or more characters (wildcard), and _ (underscore) represents any single character. You can specify only the file name here, not a directory path.

The WHERE clause can also test the date and time that the file is created, the numeric group ID (only valid for UNIX clients), and the numeric user ID (only valid for UNIX clients).
String functions
These string-manipulation functions are available for file names and literals. Strings must be enclosed in single quotation marks. A single quotation mark can be included in a string by using two single quotation marks (for example, 'it''s' represents the string it's).

CHAR(x)                    Converts an integer x to a string.
CHARACTER_LENGTH(x)        Determines the number of characters in string x.
CHAR_LENGTH(x)             Determines the number of characters in string x.
CONCAT(x,y)                Concatenates strings x and y.
HEX(x)                     Converts an integer x to hexadecimal format.
LCASE(x)                   Converts string x to lowercase.
LOWER(x)                   Converts string x to lowercase.
LEFT(x,y,z)                Left-justifies string x in a field of y characters, optionally padding with z.
LENGTH(x)                  Determines the length of the data type of string x.
LTRIM(x)                   Removes leading blanks from string x.
POSITION(x IN y)           Determines the position of string x in y.
POSSTR(x,y)                Determines the position of string x in y.
RIGHT(x,y,z)               Right-justifies string x in a field of y characters, optionally padding with z.
RTRIM(x)                   Removes the trailing blanks from string x.
SUBSTR(x FROM y FOR z)     Extracts a portion of string x, starting at position y, optionally for z characters.
SUBSTRING(x FROM y FOR z)  Extracts a portion of string x, starting at position y, optionally for z characters.
TRIM(x)                    Trims blanks from the beginning and end of string x.
TRIM(x FROM y)             Trims blanks that are x (LEADING, TRAILING, or BOTH) from string y.
TRIM(x y FROM z)           Trims character y that is x (LEADING, TRAILING, or BOTH) from string z.
UCASE(x)                   Converts string x to uppercase.
UPPER(x)                   Converts string x to uppercase.
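As a sketch of how these functions combine with the NAME attribute in a rule (the rule name and the pool name Media_Pool are illustrative, not from an actual configuration):

```
RULE 'mediaRule' SET STGPOOL Media_Pool WHERE UCASE(NAME) like '%.MP3'
```

Because UCASE normalizes the file name before the comparison, this single rule catches song.mp3, SONG.MP3, and Song.Mp3 alike.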
Numerical functions
These numeric-calculation functions are available for numerical parts of the file name, numeric parts of the current date, and UNIX-client user IDs or group IDs.

INT(x)      Converts number x to a whole number, rounding up fractions of .5 or greater.
INTEGER(x)  Converts number x to a whole number, rounding up fractions of .5 or greater.
MOD(x,y)    Determines x % y (the remainder of x divided by y).
Date and time functions

CURRENT_TIMESTAMP  Determines the current date and time on the MDS.
DATE(x)            Creates a date out of x.
DAY(x)             Determines the day of the month from x.
DAYOFWEEK(x)       Determines the day of the week from date x, as a number from 1 to 7 (Sunday=1).
DAYOFYEAR(x)       Determines the day of the year from date x, as a number from 1 to 366.
DAYS(x)            Determines the number of days since 0000-00-00.
DAYSINMONTH(x)     Determines the number of days in the month of date x.
DAYSINYEAR(x)      Determines the number of days in the year of date x.
HOUR(x)            Determines the hour of the day (a value from 0 to 23) of time or time stamp x.
MINUTE(x)          Determines the minutes from date x.
MONTH(x)           Determines the month of the year from date x.
QUARTER(x)         Determines the quarter of the year from date x, as a number from 1 to 4.
SECOND(x)          Returns the seconds portion of time x.
TIME(x)            Displays x in a time format.
TIMESTAMP(x,y)     Creates a time stamp (date and time) from a date x and, optionally, a time y.
WEEK(x)            Determines the week of the year from date x.
YEAR(x)            Determines the year from date x.
Save the file and note the name used. We saved our file with the name /home/admin/sample_policy.txt.
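The rule file content itself is not shown here. As an illustration, a minimal rules file might look like the following sketch; the rule names are hypothetical, while svcpoolA and DEFAULT_POOL are pool names used elsewhere in this chapter:

```
VERSION 1 /* Do not remove or change this line!*/
RULE 'mp3Rule' SET STGPOOL svcpoolA WHERE NAME like '%.mp3'
RULE 'db2Rule' SET STGPOOL DEFAULT_POOL WHERE NAME like '%DB2%'
```

Any file matching neither rule is placed in the default storage pool by the implicit default rule.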
Now that you have created the rule file, you need to create a policy for it within SAN File System. Your rule file will be checked for valid SQL syntax during this step.
Create a policy
Use the mkpolicy command to create a policy containing the rule file, as shown in Example 7-71. Notice that the -file parameter is used to specify the name of the file containing the rules, as created in the previous step. We also specify a name for the policy (Sample_Policy) and enter a description (optional).
Example 7-71 Create a policy
mds1:/usr/tank/admin/bin # sfscli
sfscli> mkpolicy -file /usr/tank/admin/bin/sample_policy.txt -desc "Sample Policy for Typical File Handling" sample_policy
CMMNP5193I Policy sample_policy was created successfully.
In this example, DEFAULT_POLICY is active as the default configuration, and the newly created policy sample_policy is inactive.
Rerun the lspolicy command to see that the new policy is now active, as shown in Example 7-75.
Example 7-75 List the policies
sfscli> lspolicy
Name            State     Last Active              Modified                 Description
===========================================================================================
DEFAULT_POLICY  inactive  May 14, 2004 3:16:24 AM  May 06, 2004 3:40:05 AM  Default policy set (assigns all files to default storage pool)
sample_policy   active    May 14, 2004 3:16:24 AM  May 14, 2004 3:14:22 AM  Sample Policy for Typical File Handling
sfscli>
When an administrator activates the policy, the master MDS checks all references to filesets and storage pools. If a rule in the policy references a non-existent storage pool, or a non-existent or unattached fileset, an error is returned and the policy is not activated.
Updating a policy
To update a policy, retrieve it with the catpolicy command, as shown in List policy contents on page 310. Capture this output to a file and make the necessary changes with a text editor. Then create a policy from the file with the mkpolicy command, and activate it with the usepolicy command.

Note: Simply editing your original rules file does not update the policy: once the policy is created, the rules in the file are imported into the SAN File System configuration, and there is no preserved link back to the original text file. Also, you cannot be sure that the original text file has not been tampered with. Therefore, you should always retrieve the stored version of the policy, as described in this section.

You can also modify policies using the GUI, as shown in the next section.
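The retrieve-edit-recreate cycle can be sketched as the following command sequence. The policy and file names are illustrative, and the sketch assumes that sfscli accepts a single command from the OS shell so that its output can be redirected to a file:

```
mds1:~ # sfscli catpolicy sample_policy > /tmp/sample_policy.rules
(edit /tmp/sample_policy.rules with a text editor)
mds1:~ # sfscli mkpolicy -file /tmp/sample_policy.rules sample_policy_v2
mds1:~ # sfscli usepolicy sample_policy_v2
```

Creating the update under a new name and then activating it sidesteps the restriction that the currently active policy cannot be edited in place.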
Here is a summary of the steps required to create a policy and rules with the SAN File System Console (GUI):
1. List currently defined policies, and determine the active policy.
2. Create a policy (with high-level settings).
3. Add rules to the policy.
4. Edit rules (if necessary).
5. Activate the policy.
List policies
Select Manage Filing -> Policies to display the Policies window (Figure 7-23), showing the currently defined policies (initially, only DEFAULT_POLICY, which is active and was created at installation).
Create a policy
To create a new policy, click Create a Policy, or select Create from the drop-down menu and click Go. The Introduction window (Figure 7-24) will be displayed.
This window shows the three major steps to create a new policy. Click Next to start Step 1. The High-Level Settings window displays. Enter a name for the policy and a description, as shown in Figure 7-25. You can also select to clone or copy an existing policy. We will create a new policy by selecting New Policy. Click Next to continue.
Add rules
The Add Rules to Policy window displays. Here you enter a description for the rule, select the destination storage pool in the Storage Pool Assignment, and specify one or more conditions to apply in the Conditions fields. In Figure 7-26 on page 315, the rule specifies that files ending in the extension .mp3 are to be stored in the storage pool svcpoolA. You can optionally limit the rule to apply only to files in a certain fileset by checking the Fileset box and making a selection from the drop-down menu. Notice that all the SQL functions described above are easy to select here by making the appropriate choice in the different pull-down menus. Also, you can use any individual condition, or any combination of conditions, to define the scope of the rule.
If you have more rules to specify, click New Rule at the bottom and repeat this step. We added another rule for DB2 files, to store them in DEFAULT_POOL, as shown in Figure 7-27.
When you have specified all the rules you want in the policy, click Next at the bottom of this window (button not shown).
Edit rules
The Edit Rules for Policy window (Figure 7-28 on page 317) will be displayed. This shows the SQL corresponding to the rules entered, and allows you to make further edits if required.
You can edit and modify the rules or add new ones as you wish. Notice that if you click Back at this stage, any editing changes will be lost.
When finished editing, click Finish. The Policies window (Figure 7-29) will be displayed, showing our newly created policy, Sample_Policy. It is inactive, and the DEFAULT_POLICY is still active.
You are prompted to confirm your choice to activate this policy, and thereby deactivate the current policy, as shown in Figure 7-31. If you are satisfied with this action, click OK.
The Policy window (Figure 7-32) will be displayed again, reflecting the new active policy.
Updating a policy
You can use the GUI to update the rules in a policy, but it must be inactive, that is, you cannot update the active policy. Therefore, to change the active policy, first activate another policy, then select the now inactive policy. Select Properties from the drop-down menu, then click the Rules entry on the left hand side. This will display the current rules in a text box which you can edit to add/remove/modify rules as required. After making all the changes you want, activate the edited policy. As a best practice, after activating a new policy, create an additional copy of the policy (using the Clone policy option). The policy copy created will have exactly the same rules as the currently active policy, but it will be inactive, and can therefore be edited, then activated, whenever you want to change the policy rules.
Deleting a policy
You can delete a policy from the GUI, but its status must be inactive. To make a policy inactive, activate another existing policy, then delete the required inactive policy. The Policies window lists all the active and inactive policies. As shown in Figure 7-33, select a policy, select Delete from the drop-down menu, and then click Go.
Click OK to confirm the deletion. Once you have confirmed the deletion, you are returned to the List Policy window (Figure 7-35).
Reduces the number of Metadata server transactions needed to write new files. Reduces the number of allocation messages flowing between client and server when new files are written. Important: At the time of the writing of this redbook, preallocation has no effect on write performance on files larger than 1 MB. This is expected to change in a future release of SAN File System. Preallocation rules can still be written without error, specifying a value of up to 128 MB; however, we recommend at this time not to create rules for files expected to grow larger than 1 MB. When writing files larger than 1 MB, the MDS will automatically allocate enough storage to cover the actual write size requested.
The rule_name parameter is optional. The FOR FILESET clause is optional; if used, it restricts the application of the rule to files in that fileset or filesets. The SQL_expression is a file-matching specification, the same as for the file placement policy. The PREALLOC value can be specified in bytes, kilobytes, or megabytes, using the units BYTE, BYTES, KB, KILOBYTE, KILOBYTES, MB, MEGABYTE, and MEGABYTES. Both uppercase and lowercase are valid, and white space is allowed between the number and the unit. For example, 1 mb and 1 MB are both valid preallocation values.
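As a sketch, a preallocation rule along these lines would implement the 128 MB example used in the GUI walkthrough later in this section. The rule name is illustrative, and the exact keyword placement here is an assumption based on the parameter description above rather than a verified example:

```
RULE 'preallocRule' PREALLOC 128 MB FOR FILESET aixfiles WHERE NAME like 'bigfile'
```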
In Figure 7-38, the currently active policy, DEFAULT_POLICY, has been copied to a new inactive policy called common_policy. Click the check box for the new policy, select Properties from the drop-down, and click Go.
Click Rules on the left hand side to display the contents of the policy, stgRule1 and stgRule2, as in Figure 7-39.
You can now edit in this text box to add a preallocation rule. We will add a rule to set a 128 MB preallocation for any file named bigfile in the fileset aixfiles, as shown in Figure 7-40. Click OK when you are finished editing the policy.
Returning to the list of policies, check the policy you just edited, and select Activate (see Figure 7-41).
Policy evaluation
The master MDS evaluates the policy as follows:
- When an administrator creates a new policy, the master MDS checks the basic syntax of all the rules in the policy.
- When an administrator activates the policy, the master MDS checks all references to filesets and storage pools. If a rule in the policy references a non-existent storage pool, or a non-existent or unattached fileset, an error is returned and the policy is not activated.
- After the policy is successfully activated, the rules in the policy are evaluated in order whenever a file is subsequently created in the SAN File System. If an error is detected in the policy at this stage, an entry is made in the SAN File System log file, and the file is stored in the default User Pool.
For example, suppose you have three filesets:
- Personnel
- Development
- Manufacturing
You have defined three User Pools with volumes:
- Personnel_Pool
- Development_Pool
- Manufacturing_Pool
You have three clients, and you have configured the LUNs for the volumes in the User Pools so that each client has access to only one of the pools. Therefore, you want to confine each fileset to using only one pool. A simple policy that ensures no files fall through the policy (that is, an explicit rule applies to every file) is shown in Example 7-77.
Example 7-77 Simple complete policy when no default User pool VERSION 1 /* Do not remove or change this line!*/ RULE 'stgRule1' SET STGPOOL Personnel_Pool FOR FILESET Personnel RULE 'stgRule2' SET STGPOOL Development_Pool FOR FILESET Development RULE 'stgRule3' SET STGPOOL Manufacturing_Pool FOR FILESET Manufacturing
If you added another fileset, for example, Test, and another pool Test_Pool, you could include another similar rule so that the policy would now be as shown in Example 7-78.
Example 7-78 Simple complete policy with extra fileset and pool when no default User pool
VERSION 1 /* Do not remove or change this line!*/
RULE 'stgRule1' SET STGPOOL Personnel_Pool FOR FILESET Personnel
RULE 'stgRule2' SET STGPOOL Development_Pool FOR FILESET Development
RULE 'stgRule3' SET STGPOOL Manufacturing_Pool FOR FILESET Manufacturing
RULE 'stgRule4' SET STGPOOL Test_Pool FOR FILESET Test
If you had additional pools, you could enhance the policy while still including the catch-all rules (so that an explicit rule applies to every file), as shown in Example 7-79 on page 330. Note that we qualify each rule with the FOR FILESET clause so that we know exactly which filesets will be using each rule. In this example, we assume four filesets: Personnel, Development, Manufacturing, and Test. There are six pools: Personnel_Pool, Development_Pool, Manufacturing_Pool, Test_Pool, DB2_Pool, and Notes_Pool. The LUNs/volumes in the pools are made visible to the three clients as follows:
- clientA: Personnel_Pool and Notes_Pool. We want this client to be able to access the fileset Personnel.
- clientB: Development_Pool, Test_Pool, and DB2_Pool. We want this client to be able to access the filesets Development and Test.
- clientC: Manufacturing_Pool, DB2_Pool, and Notes_Pool. We want this client to be able to access the fileset Manufacturing.
There are no common pools, so we have disabled the default User Pool. We need a policy that ensures that files in each fileset will only be stored in pools that are accessible by the client that we have declared needs access to that fileset. Example 7-79 shows one such sample policy that meets this requirement.
Example 7-79 Policy with additional pools and qualified rules when no default User pool
VERSION 1 /* Do not remove or change this line!*/
RULE 'stgRule1' SET STGPOOL DB2_Pool FOR FILESET Development WHERE NAME like '%DB2%'
RULE 'stgRule2' SET STGPOOL DB2_Pool FOR FILESET Manufacturing WHERE NAME like '%DB2%'
RULE 'stgRule3' SET STGPOOL Notes_Pool FOR FILESET Personnel WHERE NAME like '%.nsf'
RULE 'stgRule4' SET STGPOOL Notes_Pool FOR FILESET Manufacturing WHERE NAME like '%.nsf'
RULE 'stgRule5' SET STGPOOL Personnel_Pool FOR FILESET Personnel
RULE 'stgRule6' SET STGPOOL Development_Pool FOR FILESET Development
RULE 'stgRule7' SET STGPOOL Manufacturing_Pool FOR FILESET Manufacturing
RULE 'stgRule8' SET STGPOOL Test_Pool FOR FILESET Test
Of course, there are many possible ways to write your policy, but the important thing to remember is to walk through the policy to check that:
- The policy meets your requirements for file storage.
- The implicit default rule (assign any non-explicitly matched file to the default User Pool) will never be invoked.
Next, we try to create the file testDefaultPool.txt11 in the SAN File System on the client Rome. Because this file does not match any of the rules in our active policy, it would go to the default pool; however, we have disabled the default pool. The client cannot write the file to a non-existent pool, so the file creation fails, as shown in Example 7-81.
Example 7-81 Client creates a file
Rome:/usr/local >cp testDefaultPool.txt11 /sfs/sanfs
cp: /sfs/sanfs/testDefaultPool.txt11: There is not enough space in the file system.
Rome:/usr/local >
We can see what happened by looking at the SAN File System event log, /usr/tank/server/log/log.std. More details about the server error logs are in 13.3, Logging and tracing on page 521. Example 7-82 shows the relevant messages for the failed file-create operation.
Example 7-82 Extract from log file showing file creation failure
2004-05-21 05:31:25 WARNING HSTCM0935W N mds1 No storage pool has been assigned to file 'testDefaultPool.txt11' in fileset 3 (ROOT) since no policy rule applied and there is no default storage pool.
2004-05-21 05:31:25 ERROR   HSTSC0527E N mds1 Unable to create file 'testDefaultPool.txt11' in fileset 3 (ROOT) because no storage pool was assigned to it.
2004-05-21 05:31:25 WARNING HSTSC0551W E mds1 ALERT: No storage pool assigned during file creation in fileset 3 (ROOT), error occurred 1 time(s) since the last alert.
If you disable the default pool using the GUI, an extra warning is displayed before you commit the operation, as shown in Figure 7-42. To disable the default storage pool, or to set another pool as the default, select Manage Storage Pools, select the pool of interest, select General Properties from the drop-down menu, and click Go. From there, you can either disable the pool as default (if the previous default pool was selected) or select another storage pool to be the new default.
Policy statistics
Another aid in checking the execution of your policies is the statpolicy command, which displays policy statistics. These statistics are maintained for each fileset and are reset when any of the following actions occur:
- The SAN File System cluster is stopped and restarted.
- A new policy is activated.
- An MDS is stopped, started, added, or dropped.
- A fileset is moved to another MDS or detached.
You can manually reset the counters for the statistics by reactivating the current policy. There are two options for the statpolicy command: the -rule and -pool parameters. When the -rule parameter is used, the results display the following for each rule in the active policy:
- Rule Name and Position: Rule name and the ordinal position of the rule in the policy.
- Evaluation Errors: Number of times that a rule has caused an error while being evaluated, not including syntax errors.
- Evaluations Not Applied: Number of times the rule was evaluated but not applied.
- Applied Evaluations: Number of times the rule was evaluated and applied.
- Last Applied: Date and time the rule was last applied.
When the -pool parameter is used, the results display the following for each User Pool:
- Storage pool name
- Number of times a file was placed into this storage pool
- Last time a file was placed into this storage pool
Example 7-84 shows the results of the statpolicy command.
Example 7-84 Statpolicy results
sfscli> statpolicy -rule
Rule Name Position Evaluation Errors Evaluations Not Applied Applied Evaluations Last Applied
=========================================================================================================
stgRule1  1        0                 0                       0                   -
stgRule2  2        0                 0                       0                   -
stgRule3  3        0                 0                       12                  Jun 05, 2004 11:16:42 AM
Default   0        0                 0                       651                 Jun 04, 2004 11:39:19 PM
sfscli> statpolicy -pool
Pool Name     Files Placed Last File Placed
=================================================
DEFAULT_POOL  651          Jun 04, 2004 11:39:19 PM
aixrome       12           Jun 05, 2004 11:16:42 AM
lixprague     0            -
winwashington 0            -
You can also show policy statistics using the GUI. Select Manage Filing -> Policy Statistics, then pick either the rule or the pool option on the left-hand side, as shown in Figure 7-43.
Then, when you activate the policy non-unif, you know that the non-unif_standby policy is identical. If you later need to make any changes to your policy non-unif:
1. Activate the non-unif_standby policy with the usepolicy command.
2. Edit the file non-unif.txt to update the rules.
3. Run the mkpolicy command with the -f option to re-create the non-unif policy with the changes.
4. Re-activate the non-unif policy with the usepolicy command.
5. Propagate the updated policy to the standby policy, again using the mkpolicy command with -f.
In this way, you always have your actual policy in effect. Example 7-86 shows the whole procedure.
Example 7-86 Activate standby policy if you need to do any changes in the active policy
sfscli> usepolicy non-unif_standby
Are you sure you want to use this policy? New files should be allocated to a pool that is accessible to the clients where the file is needed. [y/n]:y
CMMNP5189I Policy non-unif_standby is now the active policy.
sfscli> quit
mds4:/usr/tank/admin/bin # vi non-unif.txt
mds4:/usr/tank/admin/bin # sfscli
sfscli> mkpolicy -file non-unif.txt -f non-unif
CMMNP5193I Policy non-unif was created successfully.
sfscli> usepolicy non-unif
Are you sure you want to use this policy? New files should be allocated to a pool that is accessible to the clients where the file is needed. [y/n]:y
CMMNP5189I Policy non-unif is now the active policy.
334
sfscli> mkpolicy -file non-unif.txt -f non-unif_standby CMMNP5193I Policy non-unif_standby was created successfully.
You can also use this procedure when updating policies from the SAN File System GUI; the only difference when working with the GUI is that you do not use a rules definition file, but modify the rules directly in the GUI interface. You can see examples of how to manage policies with the GUI in 7.8.4, Creating a policy and rules with GUI on page 311.
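The five-step refresh procedure lends itself to scripting. The sketch below is our own illustration, not part of the product: the function name refresh_policy is an assumption, and the sfscli commands are echoed for review rather than executed. Step 2, editing the rules file, remains a manual step before running the generated commands.

```shell
#!/bin/sh
# Hypothetical helper (not part of SAN File System): print the sfscli
# commands that refresh a policy and its standby copy. The commands are
# echoed, not executed, so an administrator can review them first.
refresh_policy() {
  pol=$1     # policy name, for example non-unif
  file=$2    # rules definition file, for example non-unif.txt
  echo "sfscli usepolicy ${pol}_standby"                # step 1: activate the standby
  echo "sfscli mkpolicy -file $file -f $pol"            # step 3: recreate from the edited file
  echo "sfscli usepolicy $pol"                          # step 4: re-activate the policy
  echo "sfscli mkpolicy -file $file -f ${pol}_standby"  # step 5: refresh the standby copy
}

refresh_policy non-unif non-unif.txt
```

The generated list can then be checked and pasted into an sfscli session.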
Chapter 8.
File sharing
This chapter describes the file sharing features of SAN File System. After reading this chapter, the SAN File System administrator should have a better understanding of file sharing and how it can be used within SAN File System. In this chapter, we discuss the file sharing capabilities of SAN File System, including these topics:
- Overview: Homogeneous and heterogeneous file sharing
- Basic heterogeneous file sharing
- Sample implementation
- Advanced heterogeneous file sharing
- Overview: Components and commands
- Configuration
- Sample implementation
between UNIX and Windows. For example, if a file created on UNIX has write permission for the Other entity, the Windows client will see permissions (for Everyone) of both Write data and Append data. Conversely, if a UNIX client is required to be able to write to a directory created by a Windows client, then the Everyone entity for that folder must have all three permissions: Create files, Create folders, and Delete sub-folders/files.

The permissions or ownership can only be changed on the client type (that is, Windows or UNIX) where the file or directory was created: a Windows client cannot change any security metadata on a UNIX-created file, and a UNIX client cannot change any security metadata on a Windows-created file. This is referred to as the primary allegiance for a fileset.

For files created on UNIX-based clients, SAN File System stores the actual UID/GID numbers and shares them across all UNIX-based clients, but they all appear as SID S-1-0-0 on Windows. For files created on Windows, SAN File System stores the actual SID and shares it across Windows clients, but they all appear as 999999/999999 on UNIX-based clients. UID/GID/SID are all mapped by the client to user/group/owner according to whatever scheme is in use on the client.

See 8.2, Basic heterogeneous file sharing on page 340 for detailed information about setting up this form of file sharing within SAN File System.
Table 8-1 Windows and UNIX permissions mapping

UNIX Permission  Windows File Permission     Windows Directory Permission
Read             Read data                   List folder
Write            Write data and Append data  Create files and Create folders and
                                             Delete sub-folders/files
Execute          Execute data                Traverse folder
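The directory-permission column of Table 8-1 can be expressed as a small lookup. The sketch below is purely illustrative (the function name is ours): given a UNIX rwx triplet, it prints the Windows directory permissions that triplet maps to.

```shell
#!/bin/sh
# Illustrative only: map a UNIX permission triplet (such as "r-x") to the
# Windows directory permissions listed in Table 8-1.
map_unix_to_windows_dir() {
  perms=""
  case $1 in *r*) perms="List folder";; esac
  case $1 in *w*) perms="${perms:+$perms, }Create files, Create folders, Delete sub-folders/files";; esac
  case $1 in *x*) perms="${perms:+$perms, }Traverse folder";; esac
  echo "${perms:-none}"
}

map_unix_to_windows_dir "r-x"   # prints: List folder, Traverse folder
```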
As with basic heterogeneous file sharing, the permissions or ownership can only be changed on the client type (that is, Windows or UNIX) where the file/directory was created. That is, a Windows client cannot change any security metadata on a UNIX-created file, and a UNIX client cannot change any security metadata on a Windows-created file.
When the filesets are first created, they appear to a UNIX client (in this case, AIX) with no access, and UID/GID 1000000/1000000, as shown in Example 8-2. Any attempt to change to either the winfiles or aixfiles directory would fail.
Example 8-2 View UNIX-type permissions on newly created fileset on an AIX client
# ls -la
total 6
drwxr-xr-x   6   144  07
drwxrwxrwx   3    72  04
dr-xr-xr-x   2    48  04
d---------   3    72  07
d---------   3    72  07
On the Windows clients, the owner of each fileset is SID S-1-0-0, as shown in Figure 8-1 on page 341.
In this example, we are going to share the winfiles fileset between Windows and UNIX. First, you need to take ownership of the winfiles fileset, as described in Take ownership of fileset: Windows on page 301. After this has been done, we will set the permissions so that the UNIX clients will be able to read and list the winfiles fileset and Windows Administrator users will get full access to the directory. To enable the UNIX permissions, you need to set the permissions on Windows for the Everyone group, as shown in Figure 8-2.
The permissions for the Everyone group have been set to allow Read & Execute, List Folder Contents, and Read. Click Advanced to display the Access Control Settings and click View/Edit. Now we can see, in Figure 8-3, that the required permissions to allow a UNIX user to read and execute in the directory are set. The permissions were given in Table 8-1 on page 339. We must have List folder and Traverse folder to translate to UNIX permissions of Read and Execute. Since none of the write permissions from the table are given, the UNIX client will not be able to write to the folder.
Next, verify that the Administrator group has the Windows permission set to Full control, by clicking the Administrators group, as shown in Figure 8-4 on page 343.
Figure 8-5 summarizes the current permissions for the winfiles fileset: Members of the Windows Administrator group have Full control to the fileset and members of the Everyone group (which will be used by UNIX clients) have only Read & Execute.
On the AIX client, we can see that the other group now has r-x permissions (Example 8-3), and the UID/GID is now set to 999999/999999. This indicates that a Windows client has taken ownership of the fileset, as it has changed from the original 1000000.
Example 8-3 List AIX permissions after changing Windows permissions
# ls -la
total 6
drwxr-xr-x   6   144  07
drwxrwxrwx   3    72  04
dr-xr-xr-x   2    48  04
d---------   3    72  07
d------r-x   3    72  07
The permissions set mean that the AIX client can now change to the winfiles directory and list its contents, as shown in Example 8-4. It could also view the PDF file. However, it cannot write to the directory because the appropriate permissions are not set for Everyone.
Example 8-4 Read winfiles fileset on AIX client
# cd winfiles
# pwd
/mnt/tank/SFS1/winfiles
# ls -la
total 20483
d------r-x   3 999999       96  07
drwxr-xr-x   6 root        144  07
d------r-x   2 999999       48  07
-------r-x   1 999999  9753683  25  guidesg245416.pdf
Next, we will share UNIX files to Windows clients in the fileset aixfiles. We assume the AIX client has already taken ownership of the fileset, as described in Take ownership of fileset: UNIX on page 300. To allow Windows clients to read and execute files within the aixfiles fileset, set the UNIX permissions to 755, which translates as shown in Example 8-5. The crucial thing is to set the permissions appropriately for Other; in this case, they are read and execute. This will translate into the correct Windows folder permissions, as shown in Table 8-1 on page 339.
Example 8-5 Set UNIX permission on aixfiles
# chmod 755 aixfiles
# ls -la
total 6
drwxr-xr-x   6 root      144  07
drwxrwxrwx   3 root       72  04
dr-xr-xr-x   2 root       48  04
drwxr-xr-x   3 root       72  07
d------r-x   3 999999     96  07
The UNIX permission 755 basically means that the owner (root) has read, write, and execute permissions, members of the system group have read and execute permissions, and everyone else (including Windows clients) will have read and execute permissions. On the Windows client, the owner SID shows as S-1-0-0, as shown in Figure 8-6 on page 345.
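As a reminder of how the numeric mode works, each octal digit of 755 is the sum of read (4), write (2), and execute (1). The sketch below is our own illustration of decoding one such digit:

```shell
#!/bin/sh
# Decode one octal permission digit into rwx form: 7 -> rwx, 5 -> r-x.
decode_digit() {
  r=-; w=-; x=-
  [ $(( $1 & 4 )) -ne 0 ] && r=r
  [ $(( $1 & 2 )) -ne 0 ] && w=w
  [ $(( $1 & 1 )) -ne 0 ] && x=x
  echo "$r$w$x"
}

# 755 = rwx for the owner, r-x for the group, r-x for everyone else
echo "$(decode_digit 7)$(decode_digit 5)$(decode_digit 5)"   # prints: rwxr-xr-x
```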
On the AIX client, we created a file in the fileset, aixfordummies.txt. It has been set to allow everyone to read the file, as shown in Example 8-6.
Example 8-6 View AIX permissions for text document within aixfiles fileset
# cd aixfiles
# pwd
/mnt/tank/SFS1/aixfiles
# ls -la
total 11
drwxr-xr-x   3 root        96  07  .
drwxr-xr-x   6 root       144  07  ..
d---------   2 1000000     48  07  .flashcopy
-rw-r--r--   1 root        17  07  aixfordummies.txt
We confirm on the Windows client that it can read the file (since it inherits the permissions for the other category on UNIX) in Figure 8-7.
On the Windows client, we confirm read access to the file by listing it in the directory and opening it in the Notepad application, as shown in Example 8-7. Note that any attempt to update that file would fail because we do not have write access.
Example 8-7 List aixfiles directory on Windows client
C:\Documents and Settings\Administrator>s:
S:\>dir
 Volume in drive S is SFS1
 Volume Serial Number is 0000-905A
 Directory of T:\
10/07/2003  05:47a    <DIR>          .
10/07/2003  05:29p    <DIR>          winfiles
10/07/2003  06:10p    <DIR>          aixfiles
               0 File(s)              0 bytes
               4 Dir(s)  12,079,595,520 bytes free
S:\>cd aixfiles
S:\aixfiles>dir
 Volume in drive S is SFS1
 Volume Serial Number is 0000-905A
 Directory of S:\aixfiles
10/07/2003  06:10p    <DIR>          .
10/07/2003  05:47a    <DIR>          ..
10/07/2003  06:11p                17 aixfordummies.txt
               1 File(s)             17 bytes
               2 Dir(s)  12,079,595,520 bytes free
S:\aixfiles>notepad aixfordummies.txt
You have now successfully shared files between UNIX and Windows clients using the basic heterogeneous file sharing capabilities of SAN File System.
[Figure: components of advanced heterogeneous file sharing — the SAN File System client and the master MDS read and write the user map table, held in the system pool, and look up user names and IDs in NIS/LDAP and Active Directory]
In order to enable advanced heterogeneous file sharing capabilities, additional components must be configured to work with the SAN File System, in addition to the basic heterogeneous file sharing configuration. If the additional components are not configured and if no user mappings are defined, the file sharing will occur according to the basic heterogeneous file sharing concept and configuration, as described in 8.2, Basic heterogeneous file sharing on page 340. In the sections below, we will summarize the configuration and setup procedure for advanced heterogeneous file sharing, then work through a sample setup.
Our lab setup is shown in Figure 8-9. We will use this diagram as a reference for the rest of the configuration and implementation information contained in this chapter. We used an LDAP server for the UNIX directory service.
[Figure 8-9: lab setup — directory servers (LDAP on enoch for UNIX IDs, Active Directory for the SANFSDom domain), SAN File System clients (AIX agent47 and Windows jacob), and the user map, connected through the SAN]
Active Directory
We installed the Windows 2000 system goku as the Active Directory Server. It was configured as the Active Directory Domain Controller and nameserver (DNS) for the domain sanfsdom.net. See your Windows documentation for detailed instructions for setting up Active Directory. The Active Directory confirmation window for the domain controller and the domain is shown in Figure 8-10 on page 351.
Figure 8-10 Created Active Directory Domain Controller and Domain: sanfsdom.net
We created a User called sanfsuser within the domain, as shown in Figure 8-11.
Figure 8-12 shows that we added our SAN File System Windows 2000 client (jacob) to the Active Directory Domain. This means that the Windows sanfsuser ID created can now be used to log into our SAN File System Windows client, jacob.
Figure 8-12 SAN File System Windows client added to Active Directory domain
LDAP
For our LDAP server, we used OpenLDAP running on a Red Hat Linux server called enoch. This LDAP server is also being used for our SAN File System administrator user authentication functionality. We added additional schema entries to enable advanced heterogeneous file sharing. You can use the same LDAP server as already used for SAN File System as we have done here, or use a different LDAP server. To extend the LDAP server, we added additional entries to our LDAP schema. Figure 8-13 shows the LDAP entries that were added below the top level, which in this case is SANFSBase.
o=SANFSBase
|-- (existing entries)
`-- ou=SANFSdom
    |-- ou=Users
    |   `-- cn=sanfsuser
    `-- ou=Groups
        `-- cn=sanfsgroup
We created a new branch in our existing LDAP tree for our new domain for UNIX user IDs. The domain is called SANFSdom. Within this domain, we created containers for Users and Groups to hold our UNIX user ID and group information, respectively. Within those containers, we created a user called sanfsuser and a group called sanfsgroup. The user is linked to the group it belongs to through the attributes set during its definition. This can be seen in the sample LDIF file used to create the additional LDAP structure, listed in Example 8-8.

To deploy a similar setup in your environment, attach the new container (SANFSdom, in our example) under your existing LDAP Directory base, then import the LDIF file to add the additional entries.

Note: The LDIF file below uses o=SANFSBase as the root of the LDAP Directory tree, which differs from the LDAP example in Figure 4-1 on page 102. You should use your appropriate organization name entry.
Example 8-8 Sample LDAP LDIF file for file sharing
# Information for the File Sharing Domain (SANFSdom)

# SANFSdom, SANFSBase
dn: ou=SANFSdom,o=SANFSBase
ou: SANFSdom
objectClass: organizationalunit

# Users, SANFSdom, SANFSBase
dn: ou=Users,ou=SANFSdom,o=SANFSBase
ou: Users
objectClass: organizationalunit

# Groups, SANFSdom, SANFSBase
dn: ou=Groups,ou=SANFSdom,o=SANFSBase
ou: Groups
objectClass: organizationalunit

# sanfsgroup, Groups, SANFSdom, SANFSBase
dn: cn=sanfsgroup,ou=Groups,ou=SANFSdom,o=SANFSBase
cn: sanfsgroup
objectClass: posixGroup
gidNumber: 1000
memberUid: 1

# sanfsuser, Users, SANFSdom, SANFSBase
dn: cn=sanfsuser,ou=Users,ou=SANFSdom,o=SANFSBase
uid: sanfsuser
gidNumber: 1000
objectClass: posixAccount
objectClass: account
cn: sanfsuser
userPassword:: ZG9udGdpdmVvdXQ=
homeDirectory: /tmp
uidNumber: 6000
# LDAP class definitions.
#userclasses:aixaccount,ibm-securityidentities
userclasses:account,posixaccount,shadowaccount
#groupclasses:aixaccessgroup
groupclasses:posixgroup
.
.
# LDAP server port. Default to 389 for non-SSL connection and
# 636 for SSL connection
#ldapport:389
ldapport:389
#ldapsslport:636
.
.
#
3. Use the mksecldap and secldapclntd commands to configure LDAP and start the LDAP daemons, as shown in Example 8-10. These allow the AIX client to recognize and access the LDAP server. The format of the mksecldap command is:

# mksecldap -c -a <ldapadmin base dn> -p <password> -h <LDAP server IP address>

Example 8-10 mksecldap and secldapclntd
# mksecldap -c -a cn=Manager,o=SANFSBase -p fakepwd -h enoch.tucson.ibm.com
# secldapclntd
4. You can now use the user IDs defined in your LDAP server (sanfsuser in our example) to log in to your AIX client.
NIS configuration
We do not show NIS configuration here; however, if you are using NIS rather than LDAP to serve your UNIX IDs, you would enter the User IDs and groups into the NIS server, then configure your UNIX clients to use NIS for user logins.
2. Change to the /usr/tank directory.
3. Extract the tar file using the tar xvf /tmp/hetsec_prereqs.tar command, as shown in Example 8-11.
Example 8-11 Extract hetsec_prereqs.tar output
# cd /tmp
/tmp # cp /media/cdrom/common/hetsec_prereqs.tar /tmp
/tmp # cd /usr/tank
/usr/tank # tar xvf /tmp/hetsec_prereqs.tar
./hetsec_prereqs/
./hetsec_prereqs/krb5.conf.template
./hetsec_prereqs/install-winbind.sh
./hetsec_prereqs/build-winbind.sh
./hetsec_prereqs/smb.conf.template
./hetsec_prereqs/INSTALL
./hetsec_prereqs/build-heimdal.sh
./hetsec_prereqs/sanfswinbind
/usr/tank #
3. Copy the downloaded heimdal-0.6.3.tar.gz package to the created directory /usr/local/heimdal.
4. From the /usr/local/heimdal directory, execute the following command to build and install heimdal:
bash /usr/tank/hetsec_prereqs/build-heimdal.sh
Successful completion is shown in Example 8-12 on page 357. As the script is executed, a lot of output will be produced. This process may take a few minutes.
Example 8-12 Build heimdal
/usr/local/heimdal # bash /usr/tank/hetsec_prereqs/build-heimdal.sh
.
.
=== HEIMDAL for SANFS install step complete. ===
=== HEIMDAL for SANFS ready for use ===
3. Copy the downloaded samba-3.0.7.tar.gz package to the created directory /usr/local/winbind.
4. From the /usr/local/winbind directory, execute the following command to build winbind:
bash /usr/tank/hetsec_prereqs/build-winbind.sh
Successful completion is shown in Example 8-13. As the script is executed, a lot of output will be produced. This process may take a few minutes.
Example 8-13 Build winbind
# bash /usr/tank/hetsec_prereqs/build-winbind.sh
.
.
=== SAMBA for SANFS install step complete. ===
=== SAMBA for SANFS ready for system installation ===
Configure Kerberos
The Kerberos configuration file, /etc/krb5.conf, is used to allow the MDS to authenticate to the Active Directory server.
1. Use the file /usr/tank/hetsec_prereqs/krb5.conf.template as a template, replacing the fields in bold in Example 8-14 on page 358 with the values corresponding to your Active Directory server and domain. Our domain is SANFSDOM.NET and the Active Directory server is goku.tucson.ibm.com.
2. Save the edited file as /etc/krb5.conf.

Attention: The krb5.conf file is case-sensitive. Make sure that the case of the updated entries matches the case of the krb5.conf.template file.
Example 8-14 Sample krb5.conf file
/etc # cat krb5.conf
#
# Edit this file to reflect your Windows domain:
#
# 1) Replace YOURDOMAIN.NET with your domain name.
# 2) Replace the DNS addresses with the ones for your server.
# 3) Remove the line at the top of the file containing "EDITED".
#
# THIS FILE MUST BE EDITED!
[libdefaults]
        default_realm = SANFSDOM.NET
        default_etypes = des-cbc-crc des-cbc-md5
        default_etypes_des = des-cbc-crc des-cbc-md5
[realms]
        SANFSDOM.NET = {
                # DNS address for your domain controller or ADS
                kdc = goku.tucson.ibm.com
                kpasswd_server = goku.tucson.ibm.com
        }
[domain_realm]
        yourdomain.net = SANFSDOM.NET
        .yourdomain.net = SANFSDOM.NET
Configure winbind
1. Use the file /usr/tank/hetsec_prereqs/smb.conf.template as a template, editing it to reflect your Active Directory domain name, as shown in Example 8-15.
2. Save the edited file as /usr/local/winbind/install/lib/smb.conf.

Tip: We recommend setting the security parameter to security=domain.
Example 8-15 Sample smb.conf file
/usr/local/winbind/install/lib # cat smb.conf
#
# 1) Change the lines containing "YOURDOMAIN" to reflect your
#    Windows domain name.
#
# 2) Uncomment one of the "security" lines.
#    If you are using a Windows NT domain controller, use
#    the "domain" form. If you are using an Active Directory
#    server, use the "ADS" form. If you are uncertain,
#    or you have trouble, try the "domain" form.
#
# 3) Remove the line at the top containing "EDITED".
#
# THIS FILE MUST BE EDITED!
[global]
# Your Windows domain name
workgroup = SANFSDOM
# The Kerberos "realm" for your domain.
# This should be the way it appears in your /etc/krb5.conf file.
realm = SANFSDOM.NET
# Which kind of directory server you have (choose one):
security = domain
# security = ADS
# How to find the Kerberos server (default)
password server = *
# What should winbind use between domain name and user name
# As shown here, users would be listed as YOURDOMAIN+username
winbind separator = +
# How long to cache material from the ADS
winbind cache time = 10
# How to create temporary "proxy" Unix users for Windows users.
# A user/group ID will be assigned to Windows users from
# this range by the winbind server, but SANFS does not use them.
idmap uid = 20000-400000
idmap gid = 20000-400000
template shell = /bin/bash
template homedir = /home/%D/%U
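The template homedir line uses winbind's %D (domain) and %U (user) substitutions. The following stand-in is our own illustration of that expansion (winbind performs it internally); the function name is an assumption:

```shell
#!/bin/sh
# Illustrative stand-in for winbind's %D/%U substitution in template homedir.
expand_template() {
  # $1 = template, $2 = domain, $3 = user
  echo "$1" | sed -e "s/%D/$2/g" -e "s/%U/$3/g"
}

expand_template "/home/%D/%U" SANFSDOM sanfsuser   # prints: /home/SANFSDOM/sanfsuser
```

This matches the home directories that later appear in the getent passwd output for the SANFSDOM+ proxy users.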
Path definitions
You must set path definitions to use the Heimdal packages. Execute the following command:
# export LD_LIBRARY_PATH=/usr/local/heimdal/install/lib:${LD_LIBRARY_PATH}
We also recommend adding the following PATH statements to simplify running the commands in the upcoming configuration steps:
# PATH=/usr/local/heimdal/install/bin:$PATH
# PATH=/usr/local/winbind/install/bin:$PATH
Running these commands from the command line will only set the PATH variables for the current session. To ensure that the path variables are set upon a subsequent reboot or logon to the machine, add these statements to the .bashrc file, as shown in Example 8-16.
Example 8-16 Sample .bashrc file
# cat .bashrc
....
PATH=$PATH:/usr/local/heimdal/install/bin:/usr/local/winbind/install/bin
export LD_LIBRARY_PATH=/usr/local/heimdal/install/lib:${LD_LIBRARY_PATH}
...
#
Tips:
- If you have not changed your Administrator password since it was created, you may need to change it to enable the use of encryption methods compatible with Heimdal. If you are unable initially to authenticate with kinit, change your password and try kinit again.
- If you receive the message kinit: krb5_get_init_creds: Clock skew too great, you must update the date and time on your MDS using the date command and then rerun kinit.
- If you set the PATH variable as described in Path definitions on page 359, you do not need to specify the full path name of these commands.
Example 8-17 Successful kinit output
# kinit administrator@SANFSDOM.NET
administrator@SANFSDOM.NET's Password:
manny: NOTICE: ticket renewable lifetime is 1 week
#
2. Verify that your login was successful with the klist command, as shown in Example 8-18.
Example 8-18 Login verification output using klist -v
# klist -v
Credentials cache: FILE:/tmp/krb5cc_0
        Principal: administrator@SANFSDOM.NET
        Cache version: 4
Server: krbtgt/SANFSDOM.NET@SANFSDOM.NET
Ticket etype: des-cbc-crc
Auth time:  Oct 12 16:13:31 2004
End time:   Oct 13 02:09:10 2004
Renew till: Oct 19 16:13:31 2004
Ticket flags: renewable, initial, pre-authenticated
Addresses: IPv4:9.11.209.148
manny:~ #
Example 8-19 MDS joins the Active Directory domain
# net ads join
Using short domain name -- SANFSDOM
Joined 'MANNY' to realm 'SANFSDOM.NET'
Once this script has completed, the winbind service will start automatically upon reboot; however, to start the service immediately, run the command shown in Example 8-21.
Example 8-21 Starting Winbind
# /etc/init.d/sanfswinbind start
Starting WINBIND                                                     done
#
to:

passwd: compat nis ldap winbind
group:  compat nis ldap winbind

Example 8-22 Sample nsswitch.conf file
/etc # cat nsswitch.conf
#
# /etc/nsswitch.conf
#
# An example Name Service Switch config file. This file should be
# sorted with the most-used services at the beginning.
#
# The entry '[NOTFOUND=return]' means that the search for an
# entry should stop if the search in the previous entry turned
# up nothing. Note that if the search failed due to some other reason
# (like no NIS server responding) then the search continues with the
# next entry.
#
# Legal entries are:
#
#       compat                  Use Libc5 compatibility setup
#       nisplus                 Use NIS+ (NIS version 3)
#       nis                     Use NIS (NIS version 2), also called YP
#       dns                     Use DNS (Domain Name Service) for IPv4 only
#       dns6                    Use DNS for IPv4 and IPv6
#       files                   Use the local files
#       db                      Use the /var/db databases
#       [NOTFOUND=return]       Stop searching if not found so far
#
# For more information, please read the nsswitch.conf.5 manual page.
#
# passwd: files nis
# shadow: files nis
# group:  files nis

passwd: compat nis ldap winbind
group:  compat nis ldap winbind

hosts:     files dns
networks:  files dns

services:  files
protocols: files
rpc:       files
ethers:    files
netmasks:  files
netgroup:  files
For a Secured LDAP configuration, set the BASE and URI entries as for an unsecured LDAP configuration. Also: 1. Ensure the ldap.cert file is in the /etc/openldap directory.
2. Specify the following additional line after the URI entry in the /etc/openldap/ldap.conf file:
TLS_CACERT /etc/openldap/ldap.cert
Note: If your LDAP server is an AIX SecureWay or IBM Directory Server LDAP server that was initiated using AIX's mksecldap command, or if it is being used on AIX 5L V5.1 or V5.2, please edit the settings in /usr/share/doc/packages/nss_ldap/ldap.conf in order to correctly map the attributes. The guidelines mentioned above still apply. If in doubt, edit both ldap.conf files with the same information.
2. Set the domain name in the /etc/defaultdomain file, as shown in Example 8-26.
Example 8-26 Sample /etc/defaultdomain file
/etc # cat defaultdomain
sanfsdom
3. Set the NIS domain name to allow the MDS to immediately recognize and access the NIS Directory Service, by running the following command:
# domainname sanfsdom
4. Run the following commands to allow the MDS to link to the NIS Domain upon reboot and to start it immediately:
# chkconfig ypbind on
# /etc/init.d/ypbind start
shows our Windows user. The Windows users will be prefixed with the Active Directory domain, while the UNIX users that are being served from LDAP will appear just as normal UNIX IDs in this output.
Example 8-27 Output of getent passwd
# getent passwd
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/bin/bash
.
.
.
sanfsuser:x:6000:1000:sanfsuser:/tmp:
SANFSDOM+Administrator:x:20000:20000::/home/SANFSDOM/Administrator:/bin/bash
SANFSDOM+Guest:x:20001:20000::/home/SANFSDOM/Guest:/bin/bash
SANFSDOM+krbtgt:x:20002:20000::/home/SANFSDOM/krbtgt:/bin/bash
SANFSDOM+sanfsuser:x:20003:20000:sanfsuser:/home/SANFSDOM/sanfsuser:/bin/bash
#
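Names like SANFSDOM+sanfsuser in this output follow the winbind separator = + setting from smb.conf. A small sketch of our own showing how such a name splits into its domain and user parts:

```shell
#!/bin/sh
# Split a winbind-style name (DOMAIN+user) on the "+" separator.
# Plain UNIX names, which contain no "+", are reported with domain "-".
split_winbind_name() {
  case $1 in
    *+*) echo "domain=${1%%+*} user=${1#*+}";;
    *)   echo "domain=- user=$1";;
  esac
}

split_winbind_name "SANFSDOM+sanfsuser"   # prints: domain=SANFSDOM user=sanfsuser
split_winbind_name "sanfsuser"            # prints: domain=- user=sanfsuser
```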
Example 8-28 Create the user domains
# sfscli mkdomain -type win_ad Windows
CMMNP5469I Domain Windows was created successfully.
# sfscli mkdomain -type unix_ldap Unix
CMMNP5469I Domain Unix was created successfully.
# sfscli lsdomain
Name    Type
==================
Unix    UNIX LDAP
Windows Windows AD
3. Now we can map the users from the different domains to each other. In our case, we are mapping the user sanfsuser from the Active Directory domain to the user sanfsuser from the LDAP domain using the mkusermap command, as shown in Example 8-29. The src and tgt parameters are in the form user@domain, where user is an existing user ID, and domain is the appropriate domain name created in the previous step.

Tip: In a typical environment, you will have many user mappings to create. You can automate this by scripting the mkusermap commands. You would need to extract the users to map from Active Directory or LDAP and NIS. You might choose an organizational standard for mapping IDs, for example, that the UNIX user IDs have the same name as the Windows IDs, as we have shown here in our simple example.
Example 8-29 Create user map
# sfscli mkusermap -src SANFSDOM+sanfsuser@Windows -tgt sanfsuser@Unix
Are you sure that you want to create the user map? [y/n]:y
CMMNP5490I The user mapping for SANFSDOM+sanfsuser@Windows with sanfsuser@Unix was created successfully.
4. Repeat the mkusermap command to map all appropriate users. When the user mapping is done, you can display the current mapping with the lsusermap command, as shown in Example 8-30.
Example 8-30 Display user map
# sfscli lsusermap
Unix      Windows
===================
sanfsuser SANFSDOM+sanfsuser
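The scripting approach suggested earlier for bulk mappings can be sketched as follows. The loop reads user IDs from standard input and prints one mkusermap command per user; the domain names Windows and Unix match those created in Example 8-28, while the function name and the fixed SANFSDOM prefix are our own illustrative assumptions. The commands are echoed rather than executed, so the list can be reviewed before being run:

```shell
#!/bin/sh
# Generate (but do not run) one sfscli mkusermap command per user ID read
# from stdin, following the convention that the Windows and UNIX IDs share
# the same name. Illustrative sketch only.
gen_usermap_cmds() {
  while read user; do
    [ -n "$user" ] || continue   # skip blank lines
    echo "sfscli mkusermap -src SANFSDOM+${user}@Windows -tgt ${user}@Unix"
  done
}

printf 'sanfsuser\n' | gen_usermap_cmds
```

In practice, the user list on stdin would come from an Active Directory or LDAP extract.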
Now that we have successfully mapped our users, we are ready to show an example of how advanced heterogeneous file sharing will work.
UNIX to Windows
1. Our AIX SAN File System client, agent47, has been made a privileged client, as shown in Example 8-31 on page 367. The AIX client is also configured to authenticate users with
our LDAP server, as we showed in Configure UNIX clients to use LDAP user IDs on page 354.
Example 8-31 List of clients, showing the privileged clients
emily:~ # sfscli lsclient
Client  Session ID State   Server Renewals Privilege
====================================================
jacob   1          Current emily  351      Root
agent47 4          Current emily  47609    Root
jacob   2          Current manny  350      Root
agent47 5          Current manny  47666    Root
2. To set up, we log into our AIX client, agent47, as root and take ownership (including setting permissions) of the fileset svcfileset6, as shown in Example 8-32. See Take ownership of fileset: UNIX on page 300 for more information about the take ownership operation. We will make the UNIX user ID sanfsuser the owner of the fileset and give full permissions for that fileset to sanfsuser and other members of the group sanfsgroup. Intrinsically, root also has full permissions. Everyone else will have read/execute permissions only. Currently, the only member of the group sanfsgroup is sanfsuser. The fileset svcfileset6 is attached at the directory /mnt/sanfs/sanfs/svcfileset6.

Note: UNIX only displays the first eight characters of user and group IDs in directory listings. This is why they display in our output as sanfsuse and sanfsgro, respectively.
Example 8-32 Show fileset permission and ownership change
# whoami
root
# lsuser sanfsuser
sanfsuser id=6000 pgrp=sanfsgroup groups=sanfsgroup home=/tmp login=true su=true
rlogin=true daemon=true admin=false sugroups=ALL admgroups= tpath=nosak ttys=ALL
expires=0 auth1=SYSTEM auth2=NONE umask=22 registry=LDAP SYSTEM=compat
logintimes= loginretries=0 pwdwarntime=0 account_locked=false minage=0 maxage=0
maxexpired=-1 minalpha=0 minother=0 mindiff=0 maxrepeats=8 minlen=0 histexpire=0
histsize=0 pwdchecks= dictionlist= fsize=2097151 cpu=-1 data=262144 stack=65536
core=2097151 rss=65536 nofiles=2000 roles=
# lsgroup sanfsgroup
sanfsgroup id=1000 users=1,sanfsuser registry=LDAP
# cd /mnt/sanfs/sanfs
# chown sanfsuser:sanfsgroup svcfileset6
# chmod 775 svcfileset6
# ls -ld svcfileset6
drwxrwxr-x   4 sanfsuse sanfsgro  144 Oct 04 16:29 svcfileset6
3. Now, still at the AIX client, agent47, we will create a file in the fileset using our LDAP user, sanfsuser. We su to sanfsuser and change to the directory svcfileset6. We create a new file called unixfile.txt, as shown in Example 8-33. Note the default permissions on this file: only sanfsuser has write permission.
Example 8-33 Show example of file creation with sanfsuser
# su - sanfsuser
$ cd svcfileset6
$ vi unixfile.txt
i created this file
with sanfsuser on unix sfs client
with
chown sanfsuser:sanfsgroup svcfileset6
and
chmod 775 svcfileset6
~
"unixfile.txt" 6 lines, 125 characters
# cat unixfile.txt
i created this file
with sanfsuser on unix sfs client
with
chown sanfsuser:sanfsgroup svcfileset6
and
chmod 775 svcfileset6
$ ls -l
total 17
-rw-r--r--   1 sanfsuse sanfsgro    6 Oct 04 16:14 junk.txt
d---------   2 1000000  1000000    48 Oct 04 16:01 lost+found
-rw-r--r--   1 sanfsuse sanfsgro  125 Oct 04 16:46 unixfile.txt
4. We will now test advanced heterogeneous file sharing by attempting to open and edit the same file as the Active Directory sanfsuser from a Windows SAN File System client. Since we mapped this user to the ID sanfsuser in UNIX (in Create domains and user maps in SAN File System on page 365), it should have the same file permissions. 5. First, we log onto the SANFSDOM domain at the Windows SAN File System client jacob, using the user ID sanfsuser (Figure 8-14).
6. We can explore the svcfileset6 directory and see the file just created - unixfile.txt, as shown in Figure 8-15. Since our Windows User ID is mapped to sanfsuser on UNIX, we have read and write permission, as shown in Figure 8-16. The permission boxes are grayed out, because the Windows client cannot change file security attributes, including permissions. File permissions and security attributes can only be altered by clients of the same type as the file creator, that is, only other UNIX-based clients can change the permissions of a file created on a UNIX-based system.
7. Because of the user mapping done at the MDS, we should be able to update and save this file. In Figure 8-17, we add some more lines of text to unixfile.txt and save it over the original file, because the mapped file permissions give us full read and write access.
8. After saving the file, we return to the AIX client, agent47, and verify the updated content, as shown in Example 8-34.
Example 8-34 Show unixfile.txt file on AIX SAN File System client
# cd /mnt/sanfs/sanfs
# su sanfsuser
$ cd svcfileset6
$ ls
junk.txt      lost+found    unixfile.txt
$ cat unixfile.txt
i created this file
with sanfsuser on unix sfs client
with
chown sanfsuser:sanfsgroup svcfileset6
and
chmod 775 svcfileset6

Now i logged into the
Windows SAN File System client with
sanfuser and I opened the file
for editing. It should save since I
am sanfsuser.
Windows to UNIX
1. Now we will show the mapping in reverse. We will create a file from the Windows SAN File System client as sanfsuser in the Active Directory domain, and verify read/write access to it from the AIX client, as the LDAP sanfsuser user ID. In Figure 8-18 on page 371 and Figure 8-19 on page 371, we have created the file winfile.txt.
2. Now we will open and try to edit the file on the AIX client as sanfsuser. We can do this, since sanfsuser on UNIX has been mapped to sanfsuser on Windows (see Example 8-35). Note that the owner of winfile.txt is displayed as sanfsuser (we display the mapped UNIX user). The group is not translated, since groups are not directly mapped by SAN File System, and therefore displays as the default 999999. Group membership is checked, however.
Example 8-35 Show attempt to edit file with AIX client as sanfsuser
# cd /mnt/sanfs/sanfs
# su sanfsuser
$ cd svcfileset6
$ ls -l
total 26
-rw-r--r--   1 sanfsuse         6  junk.txt
d---------   2 1000000         48  lost+found
-rw-r--r--   1 sanfsuse       269  unixfile.txt
-rwx--xr-x   1 sanfsuse        71  winfile.txt
$ vi winfile.txt
I created this^M
file on my ^M
Windows client ^M
logged on as ^M
sanfsuser. Now I am editing
the file on a AIX client machine logged on
as sanfsuser. Saving the file should be successfull.
~
"winfile.txt" 12 lines, 187 characters
$ cat winfile.txt
I created this
file on my
Windows client
logged on as
sanfsuser. Now I am editing
the file on a AIX client machine logged on
as sanfsuser. Saving the file should be successfull.
$ ls -l
total 26
-rw-r--r--   1 sanfsuse         6  junk.txt
d---------   2 1000000         48  lost+found
-rw-r--r--   1 sanfsuse       269  unixfile.txt
-rwx--xr-x   1 sanfsuse       187  winfile.txt
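Conceptually, the mapping demonstrated in this example behaves like a lookup from a Windows domain user to a UNIX user, with groups left untranslated. The sketch below is hypothetical illustration only (the real maps are those configured on the MDS, and the names here are the ones used in this example):

```python
# Hypothetical model of the MDS user map: Windows domain users are
# mapped to UNIX users; groups are NOT mapped, so a file's group
# shows as a default placeholder ID on the other platform.
USER_MAP = {("SANFSDOM", "sanfsuser"): "sanfsuser"}
DEFAULT_GROUP = "999999"

def unix_identity(domain, user):
    """Return the mapped UNIX owner and the untranslated group."""
    return USER_MAP.get((domain, user)), DEFAULT_GROUP

owner, group = unix_identity("SANFSDOM", "sanfsuser")
assert owner == "sanfsuser"   # owner is translated via the map
assert group == "999999"      # group is not translated
```

An unmapped user (such as jacuna later in this section) would get no translated owner, and access checks against the file's UNIX permissions would fail.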
3. We have shown the operation of the user mapping in SAN File System heterogeneous file sharing.
Figure 8-20 winfile.txt permissions from Windows

Example 8-36 Show how editing the file with user jacuna will fail
# su - jacuna
$ cd svcfileset6
$ echo "Attempt to update by a non-mapped user" >> winfile.txt
The file access permissions do not allow the specified action.
ksh: winfile.txt: 0403-005 Cannot create the specified file.
$ cat winfile.txt
I created this
file on my
Windows client
logged on as
sanfsuser. Now I am editing
the file on a AIX client machine logged on
as sanfsuser. Saving the file should be successfull.
Chapter 9.
Advanced operations
In this chapter, we cover the following topics:
- FlashCopy operations
- Data migration: planning and implementing
- Adding and removing an MDS from the cluster
- Monitoring and gathering performance statistics
- MDS failover
- Validating non-uniform SAN File System configurations
Copy on write
Immediately after the FlashCopy operation, the original fileset files (Source Data) and the FlashCopy images (Copy Data) of the files in the fileset share the same data blocks, that is, nothing is actually copied, making the operation space efficient, as shown in Figure 9-1 on page 377.
Figure 9-1 Make FlashCopy (after the operation, the FlashCopy image and the fileset point to the same Source Data blocks)
As soon as any updates are made to the actual fileset contents (for example, a client adds or deletes files, or updates contents of files), the fileset is updated by an operation called copy on write. This means that (only) the changed blocks in the fileset are written to a new location on disk. The FlashCopy image continues to point to the old blocks, while the actual fileset will be updated over time to point to the new blocks (see Figure 9-2).
Figure 9-2 Copy on write (S and E are modified, T is deleted, and P is added; modified data is written to a new location, representing the current fileset state, while the FlashCopy pointers still point to the original data)
In this case, two blocks were changed (S and E), one block was deleted (T), and a new block was written (P) in the actual fileset. The new blocks are written as shown, and the FlashCopy image continues to point to the original blocks, preserving the point-in-time copy. Therefore, any access to the FlashCopy image accesses the data blocks as they existed when the FlashCopy image was created, and any access to the fileset itself accesses the new data blocks.
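The copy-on-write bookkeeping just described can be sketched as follows. This is an illustrative model only, not SAN File System internals: block "locations" are modeled as strings, and a FlashCopy image is simply a snapshot of the block map.

```python
# Illustrative model of copy-on-write block sharing (not actual
# SAN File System code).

def make_flashcopy(fileset_map):
    """A FlashCopy image is a snapshot of the current block map:
    no data blocks are copied, so the operation is space efficient."""
    return dict(fileset_map)

def write_block(fileset_map, name, new_location):
    """Copy on write: only the changed block goes to a new location;
    any FlashCopy image keeps pointing at the old location."""
    fileset_map[name] = new_location

# Fileset with blocks S, E, T at their original locations.
fileset = {"S": "loc1", "E": "loc2", "T": "loc3"}
image = make_flashcopy(fileset)      # shares all block locations

write_block(fileset, "S", "loc4")    # modify S
write_block(fileset, "E", "loc5")    # modify E
del fileset["T"]                     # delete T
write_block(fileset, "P", "loc6")    # create new block P

# The image still sees the point-in-time state...
assert image == {"S": "loc1", "E": "loc2", "T": "loc3"}
# ...while the live fileset sees the new blocks.
assert fileset == {"S": "loc4", "E": "loc5", "P": "loc6"}
```

This mirrors the S/E/T/P example above: accessing the image reads the original blocks, accessing the fileset reads the new ones.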
A FlashCopy image is simply an image of an entire fileset as it exists at a specific point in time. While a FlashCopy image is being created, all data remains online and available to users and applications. The FlashCopy image operation is performed individually for each fileset, that is, you can create only one FlashCopy image at a time. FlashCopy images are full images; you cannot create incremental FlashCopy images. A fileset can have up to 32 read-only FlashCopy images. Once a FlashCopy image is created, its name cannot be changed. You can use a FlashCopy image for backing up files instead of the original Source Data. This will guarantee a consistent image of the files, since the files in a FlashCopy image are read-only. Clients have file-level access to FlashCopy images, to access older versions of files, or to copy individual files back to the real fileset, if required.
The next window (Figure 9-5) starts the 3-step wizard: Select Filesets, Set Properties, and Verify Setup.
Click Next to start the wizard. On the next window (Figure 9-6), select the fileset to make the image of. We chose fileset asad.
Now specify the properties of the image: the image name, image directory, and description. Figure 9-7 shows the default settings.
Tip: A maximum of 32 FlashCopy images can be maintained for any fileset. If 32 images already exist and you try to create a 33rd image, the operation will fail unless you check the Force Image Creation box (or specify the -f flag on the mkimage command). In that case, the oldest image is deleted to make room for the new image when it is created.

Finally, verify the properties, as shown in Figure 9-8 on page 383, and click Next.
This completes the process, and the new image (Image-115 of asad) is created, as shown in Figure 9-9.
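The 32-image limit and the Force Image Creation behavior described in the Tip above can be modeled as a bounded list. This is a conceptual sketch only; the mkimage function here is a stand-in for illustration, not the actual CLI:

```python
MAX_IMAGES = 32  # per-fileset limit stated in the Tip

def mkimage(images, name, force=False):
    """Append a FlashCopy image name. At the 32-image limit the
    operation fails unless force=True, which evicts the oldest."""
    if len(images) >= MAX_IMAGES:
        if not force:
            raise RuntimeError("maximum of 32 FlashCopy images reached")
        images.pop(0)          # oldest image is deleted to make room
    images.append(name)

images = [f"Image-{i}" for i in range(1, 33)]   # already at the limit
try:
    mkimage(images, "Image-33")                 # fails without force
except RuntimeError:
    pass
mkimage(images, "Image-33", force=True)         # oldest (Image-1) evicted
assert len(images) == 32
assert images[0] == "Image-2" and images[-1] == "Image-33"
```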
Tip: Because the specified FlashCopy image is deleted after you issue the reverttoimage command, we recommend keeping a secondary backup of the image for future use or disaster recovery before running the command.
Attention: If nested filesets exist within a fileset that you want to revert, you must manually detach all nested filesets before running the reverttoimage command. After the FlashCopy image of the parent fileset is reverted, reattach the nested filesets.

Depending on the age of the specified FlashCopy image and the amount of unique file data in the image tree, the revert operation could result in significant background activity to clean up the file system objects that are no longer referenced.

In Example 9-3, we revert the image Image-14 for the fileset asad. When we re-issue the lsimage command, we see that the reverted image, Image-14, has automatically been deleted, since its contents are now active in the fileset. Note too that images Image-114 and Image-115 have also been deleted, since they were created after Image-14 and are therefore invalid.
Example 9-3 reverttoimage command
sfscli> lsimage
Name      Fileset Directory Name Date
=========================================================
Image-12  asad    Image-12       May 13, 2004 12:25:09 AM
Image-13  asad    Image-13       May 13, 2004 12:26:02 AM
Image-14  asad    Image-14       May 13, 2004 12:26:21 AM
Image-114 asad    Image-114      May 13, 2004 12:26:52 AM
Image-115 asad    Image-115      May 13, 2004 12:54:42 AM
sfscli> reverttoimage -fileset asad Image-14
Are you sure you want to revert to FlashCopy image Image-14 for fileset asad? [y/n]:y
CMMNP5182I The FlashCopy image Image-14 successfully reverted.
sfscli> lsimage
Name      Fileset Directory Name Date
=========================================================
Image-12  asad    Image-12       May 13, 2004 12:25:09 AM
Image-13  asad    Image-13       May 13, 2004 12:26:02 AM
sfscli>
Note on reverting FlashCopy images: When you revert to a FlashCopy image that is not the most recently made image, any images made after the image being reverted are automatically deleted as part of the revert process. This is because of the way the images are maintained by SAN File System in order to keep the overhead to a minimum. Conceptually, you can think of the images as a set of sequential pointers at fixed points in time, terminating at the active (and therefore changing) data. In simple terms, once you have rolled back to an image, you cannot then roll forward to an intervening image, as it will have been removed. This is shown in Figure 9-11.

You should therefore be careful when reverting to an image because of this restriction. If in doubt, remember that you can always copy the data in an older FlashCopy image to a separate directory structure instead of reverting it to the primary image. If you do that, you still maintain all the images (within the restriction of a maximum of 32 images per fileset).
Figure 9-11 List of FlashCopy images before and after a revert operation (at 10 a.m., the chain runs from the current data back through Image-4 at 9:50 a.m., Image-3 at 9:30 a.m., Image-2 at 9:20 a.m., and Image-1 at 9:00 a.m.)
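The pruning behavior of a revert can be sketched as follows. This is a conceptual model of the image chain, not product code; images are kept oldest-first, matching the lsimage ordering in Example 9-3:

```python
def reverttoimage(images, name):
    """Revert a fileset to the named image: the reverted image becomes
    the active data, and it plus every later image is deleted.
    Only images older than the target survive."""
    idx = images.index(name)
    return images[:idx]

images = ["Image-12", "Image-13", "Image-14", "Image-114", "Image-115"]
# Reverting to Image-14 removes it and the two images made after it,
# matching the before/after lsimage output in Example 9-3.
assert reverttoimage(images, "Image-14") == ["Image-12", "Image-13"]
```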
You will be asked to confirm the revert and the operation will proceed exactly as described in the previous section.
Verify the action (Figure 9-14). If we had selected the Delete option (equivalent to the -f option on the rmimage command), then any open files in the image would also be deleted. This could cause application errors because of unexpected file removal; therefore, this option should be used with caution.
Image-12 is now removed from the list of images, as shown in Figure 9-15.
Figure 9-16 shows an overview of the data flow in the data migration from non-SAN File System to SAN File System. You will need an installed SAN File System client that has access to the source data, as well as, obviously, access to the SAN File System global namespace. As the data is migrated (or copied) to SAN File System, it is split: the data blocks go into User Pools and the metadata is generated and stored in the System Pool.
Figure 9-16 Data migration overview (the client sees both the original and destination directories; as the data is migrated, the file data is written to the User Pools while SAN File System creates the metadata in the System Pool)
Prerequisites check
Here are the basic prerequisites and factors that must be taken into account in general cases:
- Windows and UNIX data must be migrated separately, by the appropriate client.
- The SAN File System cluster, with storage pool, filesets, policies, and security, as well as clients, must be properly configured.
- The client that performs the data migration must be able to access all the source file systems.
- You must have superuser privileges (for UNIX clients) or administrator privileges (for Windows clients) to migrate data. The client must be a privileged client, as described in 7.6.1, Fileset permissions on page 297.
- All applications that modify the data being migrated must be stopped until the migration completes, to guarantee data integrity.
- At least twice the space of the data (2X) must be available during migration: the space occupied by the original data (X) plus the space occupied by the data once migrated to SAN File System (X). The data migration utility does not verify that there is enough space in the storage pool where data is being migrated.
- Files on NTFS compressed drives will be expanded, and sparse files will become dense (full), during data migration. Sufficient space must be available in the SAN File System to store the expanded files.
migratedata utility
The SAN File System data migration utility, migratedata, executes in three different phases, as specified by a parameter:

plan      Estimates the time that it will take to migrate data with the available resources. Estimation is done by copying sample files from the source directory to the destination.
migrate   Performs the data migration.
verify    Verifies the integrity of the migrated data and metadata (such as owner, permissions, and last modified time stamp).
The migratedata command is part of the SAN File System client, and is installed in /usr/tank/migration/bin/migratedata for UNIX, or <SYSTEM_DRIVE>:\Program Files\IBM\Storage Tank\Migration\migratedata.exe for Windows. The command syntax is as follows:
migratedata -log log_file (-f) -phase [ migrate | plan | verify ] -checkpoint blocks -resume -data -destdir dest_dir source_path
Where:

-log log_file
    Specifies the log file in which migration activities are logged. When used with -phase migrate -resume, this log file is used for resuming after the last completed block or file.
-f
    Specifies that the migration should continue even if there is an error with a file. If not specified, an error results in the entire migration being stopped at that point.
-phase
    Specifies the migration phase to run, selected from:
    plan      Gathers information about the available system resources (memory, CPUs, size of the source tree, and space available on the destination file system), copies some sample files from the source directory to estimate transfer rates, and provides an estimated time for the migration.
    migrate   Migrates the specified data in the source path to the destination directory. This is the default phase.
    verify    Verifies the integrity of the migrated data, as well as consistency of the metadata (such as owner, modification time stamp, and permissions).
-checkpoint blocks
    Number of blocks of file data migrated at which a checkpoint is written.
-resume
    Resumes the migration from the last completed block or file, as logged in the log file specified by the -log parameter.
-data
    Verifies every block of source data (file data and metadata) against the migrated data.
    Note: Verifying all data with this option is very time consuming, and can take as long as the migration itself.
-destdir dest_dir
    Specifies the name of the destination directory for the migrated data. The destination directory must already exist, with appropriate permissions set.
source_path
    Specifies one or more paths of directories or files to migrate.
While migrating, consider the following requirements:
- You can specify more than one phase; for example, to plan, migrate, and verify the data, specify -phase plan -phase migrate -phase verify. Although you can specify the phases in any order, the command always executes them in this order: plan, migrate, verify.
- This tool does not provide locking for the data being migrated. You have to stop applications that might modify the data during migration.
- This tool does not verify whether there is enough space. You should have at least twice the space of the source data available in the destination (as fileset quota and storage pool capacity).
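The -checkpoint and -resume behavior can be illustrated with a small sketch: a log records the last checkpointed block, and a resumed run skips everything already copied. This is a hypothetical model of the mechanism only; the real migratedata log format is different.

```python
def migrate(blocks, dest, log, checkpoint=100):
    """Copy blocks to dest, recording a checkpoint in `log` every
    `checkpoint` blocks; a resumed run starts from the checkpoint."""
    start = log.get("done", 0)                  # resume point
    for i in range(start, len(blocks)):
        dest.append(blocks[i])                  # "copy" one block
        if (i + 1) % checkpoint == 0:
            log["done"] = i + 1                 # write a checkpoint

source = list(range(250))
dest, log = [], {}
migrate(source[:150], dest, log)   # simulate an interruption after 150 blocks
assert log["done"] == 100          # last checkpoint covered 100 blocks
dest = dest[:log["done"]]          # on restart, trust only checkpointed data
migrate(source, dest, log)         # resume from the last checkpoint
assert dest == source              # migration completes without recopying all
```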
-phase plan
Example 9-5 shows migratedata -phase plan on an AIX system. The final line of output gives the estimated time to migrate the data. In this example, we will migrate data from the /home/testdata directory into the SAN File System at the location specified by -destdir. The plan phase works by copying some of the data into a temporary directory and calculating the I/O rate.
Example 9-5 migratedata -phase plan # /usr/tank/migration/bin/migratedata -log /var/tmp/migrate_plan.log -phase plan -destdir /sfs/sanfs/aixfiles/cb /home/testdata PLAN: Source directory: /home/testdata PLAN: Number of filesystem objects to migrate: 410 PLAN: Destination directory: /sfs/sanfs/aixfiles/cb/_tmp8867_ PLAN: On destination space required: 651.093750 MB, available: 339776 MB PLAN: Number of CPUs: 4, Available Memory: 1455 MB, IO Blocksize: 3 MB PLAN: Copy rate 5.264906 MB/sec, Estimated time: 0h:2m:3s #
-phase migrate
Example 9-6 shows the same command with -phase migrate. Now the data will be physically copied. A checkpoint will be taken after every 100 file blocks are written to allow the command to be re-started from the last checkpoint if the original command fails or is interrupted. Notice that the actual data rate is close to the estimated rate; however, we are only copying a small amount of data.
Example 9-6 migratedata -phase migrate
# /usr/tank/migration/bin/migratedata -log /var/tmp/migrate_do.log -phase migrate -checkpoint 100 -destdir /sfs/sanfs/aixfiles/cb /home/testdata
PLAN: Source directory: /home/testdata
PLAN: Number of filesystem objects to migrate: 410
PLAN: Destination directory: /sfs/sanfs/aixfiles/cb
PLAN: On destination space required: 651.093750 MB, available: 339648 MB
MIGRATE: Number of CPUs: 4, Available Memory: 1453 MB, IO Blocksize: 3 MB
MIGRATE: COPY STARTED
MIGRATE: Copy rate 6.174036 MB/sec, Estimated time: 0h:1m:45s
MIGRATE: COPY COMPLETE: 648.280576 MB copied at 5.857630 MB/sec
# ls -l /sfs/sanfs/aixfiles/cb
0 drwxr-xr-x 3 root system 72 Jun 05 11:32 _tmp21595_/
0 drwxr-xr-x 3 root system 72 Jun 05 11:27 testdata/
#
-phase verify
Example 9-7 shows the migratedata -phase verify execution. We give it the same log file as we specified in the migrate phase.
Example 9-7 migratedata -phase verify
# /usr/tank/migration/bin/migratedata -log /var/tmp/migrate_do.log -phase verify -destdir /sfs/sanfs/aixfiles/cb /home/testdata
PLAN: Source directory: /home/testdata
PLAN: Destination directory: /sfs/sanfs/aixfiles/cb
VERIFY: Comparing files started.
VERIFY: SUCCEEDED: Comparing files completed with 0 errors and 0 resets
#
Note: You have to specify the same log file that the migrate phase produced (/var/tmp/migrate_do.log, in this case).

The log file produced by the migration and verification phases looks like Example 9-8. In the migration phase, each object is logged with a time stamp, its attribute flags, and the result.
Example 9-8 migratedata log
SAN FILE SYSTEM DATA MIGRATION (Version 1.1): Sat Jun 5 11:41:02 2004
11:41:02 PLAN: Source directory: /home/testdata
11:41:02 PLAN: Number of filesystem objects to migrate: 410
11:41:02 PLAN: Destination directory: /sfs/sanfs/aixfiles/cb
11:41:02 PLAN: On destination space required: 651.093750 MB, available: 339648 MB
11:41:02|/home/testdata|d|0|0|00000000000000000000000000000000|00000000000000000000000000000000|DONE
11:41:02|/home/testdata/inst.images|d|0|0|00000000000000000000000000000000|00000000000000000000000000000000|DONE
11:41:02|/home/testdata/inst.images/lost+found|d|0|0|00000000000000000000000000000000|00000000000000000000000000000000|DONE
11:41:02|/home/testdata/inst.images/sdd|d|0|0|00000000000000000000000000000000|00000000000000000000000000000000|DONE
11:41:02|/home/testdata/inst.images/sfs|d|0|0|00000000000000000000000000000000|00000000000000000000000000000000|DONE
11:41:02|/home/testdata/inst.images/fixes|d|0|0|00000000000000000000000000000000|00000000000000000000000000000000|DONE
11:41:03|/home/testdata/inst.images/fixes/U497868.bff|f|0|0|2004-06-05 11:36:32.000000-05:00|00000000000000000000000000000000|DONE
11:41:03|/home/testdata/inst.images/tsmcli|d|0|0|00000000000000000000000000000000|00000000000000000000000000000000|DONE
11:41:03|/home/testdata/inst.images/tsmcli/IP22727.README.32bit|f|0|0|2004-06-05 11:35:02.000000-05:00|00000000000000000000000000000000|DONE
11:41:03|/home/testdata/inst.images/tsmcli/IP22727.README.FTP|f|0|0|2004-06-05 11:35:02.000000-05:00|00000000000000000000000000000000|DONE
***** lots of files deleted ****
11:41:03 MIGRATE: Number of CPUs: 4, Available Memory: 1453 MB, IO Blocksize: 3 MB
11:41:05 MIGRATE: COPY STARTED
***** lots of files deleted ****
11:41:24 MIGRATE: Copy rate 6.174036 MB/sec, Estimated time: 0h:1m:45s
11:42:54 MIGRATE: COPY COMPLETE: 648.280576 MB copied at 5.857630 MB/sec
SAN FILE SYSTEM DATA MIGRATION (Version 1.1): Sat Jun 5 11:48:59 2004
11:48:59 PLAN: Source directory: /home/testdata
11:48:59 PLAN: Destination directory: /sfs/sanfs/aixfiles/cb
11:48:59 VERIFY: Comparing files started.
11:48:59 VERIFY: SUCCEEDED: Comparing files completed with 0 errors and 0 resets
#
You need to install and configure the new MDS.
1. Install the correct version of SUSE (it must be the same as on the other nodes in the cluster), patches, and basic configuration, as in 5.2.1, Pre-installation setting and configurations on each MDS on page 127, 5.2.2, Install software on each MDS engine on page 127, 5.2.3, SUSE Linux 8 installation on page 128, 5.2.4, Upgrade MDS BIOS and RSA II firmware on page 135, and 5.2.5, Install prerequisite software on the MDS on page 135. This includes configuring the RSA card TCP/IP address and Ethernet bonding.
2. Make sure the SSH keys are set up between the new MDS and all existing MDSs (as in step 5 on page 136 of 5.2.5, Install prerequisite software on the MDS on page 135).
3. Mount the SAN File System CD in the CD-ROM drive (for example, at /mount/cdrom).
4. Remove any previously installed Java version (run rpm -qa | grep IBMJava | xargs rpm -e), and install the correct version of Java from the CD (run rpm -Uvh /media/cdrom/common/IBMJava2-142-ia32-JRE-1.4.2-1.0.i386.rpm).
5. Generate a configuration file by running the installation script with the --genconfig option:
/media/cdrom/SLES8/install_sfs-package-2.2.2-130.i386.sh --genconfig /tmp/sfs.conf
This creates a template file, /tmp/sfs.conf. Edit it to include the correct SAN File System configuration parameters for your environment, as listed in Table 5-1 on page 147.
6. Now run the actual installation, using the configuration file as input. Note that the option now is --loadserver. Also, include the -noldap option as shown if using local authentication in your cluster; if not, omit it. If using local authentication, you must have defined the SAN File System user IDs and groups identically to the existing MDSs, as shown in 4.1.1, Local authentication configuration on page 100. Run the following command:
/media/cdrom/SLES8/install_sfs-package-<version>.sh -loadserver -sfsargs -f /tmp/sfs.conf -noldap
7. The installation will proceed as shown in Example 5-19 on page 139 and following. The output will be slightly different since we are installing one MDS rather than the entire cluster.
8. After the installation script completes, run the CLI lsserver command on the newly added MDS to show the server state. It should be Not Added, Subordinate, as shown in Example 9-10.
Example 9-10 Check new server status
# sfscli lsserver
Name State     Server Role Filesets Last Boot
==========================================================
mds4 Not Added Subordinate 0        Sep 10 2005 6:14:07 AM
9. Now add this MDS to the existing cluster, as shown in Example 9-11, using the addserver command on the master MDS.
Example 9-11 Add a new node to the SAN File System cluster
sfscli> addserver 9.42.164.113
CMMNP5205I Metadata server 9.42.164.113 on port 1737 was added to the cluster successfully.
10. Now issue the lsserver command again (see Example 9-12). It shows that the new node, mds4, is added and started. You could now keep this as a spare MDS, or assign filesets to it. Note that in this configuration, all filesets were static, so none were automatically moved to the new MDS. If there were dynamic filesets, we would expect some to be moved to the new MDS after the cluster detected a new member, to balance the workload.
Example 9-12 New node mds4 is added to the cluster and started
sfscli> lsserver
Name State  Server Role Filesets Last Boot
=========================================================
mds1 Online Master      4        Sep 07, 2004 4:52:02 AM
mds3 Online Subordinate 4        Sep 07, 2004 9:23:11 AM
mds2 Online Subordinate 3        Sep 09, 2004 10:46:04 PM
mds4 Online Subordinate 0        Sep 10, 2004 6:14:07 AM
11. You should also verify the RSA connectivity to the new MDS, as described in 13.5.1, Validating the RSA configuration on page 538.
Example 9-13 shows how to remove a node from the cluster using the dropserver command. You can drop any server, except for the last remaining server in the cluster.
Example 9-13 Removing cluster node
sfscli> dropserver mds4
Are you sure you want to drop Metadata server mds4? Filesets automatically assigned to this Metadata server will be reassigned to the remaining Metadata servers. You must reassign any statically assigned filesets manually. [y/n]:y
CMMNP5214I Metadata server mds4 dropped from the cluster.
Attention: The addserver command takes the IP address of the node to be added, while the dropserver command takes the node name as its parameter.
SAN File System provides utilities for monitoring the cluster and metadata traffic. These utilities can be invoked either from the SAN File System console (GUI) or the CLI, and allow you to monitor SAN File System by displaying statistics, status, and logs. We recommend monitoring the cluster regularly so that you can anticipate potential bottlenecks.
The following CLI commands are available:

statserver -workstats servername   Shows workload statistics for a subordinate MDS.
statcluster -workstats             Shows workload statistics for the master MDS.
statfile                           Shows statistics about specified files.
lsclient -l                        Shows statistics per client on a server, or cluster-wide.
statfileset
For gathering statistics related to filesets, use statfileset at the CLI, as shown in Example 9-15. The output shows various statistics for the filesets, which might help you in balancing the fileset workload among the MDS servers. You can easily see which filesets are more active than others. You can choose to display the statistics for one or more specific filesets by using the -fileset parameter (for example, statfileset -fileset ROOT aixfiles). The output also shows which filesets are currently associated with each MDS.

Tip: If statfileset is executed at the master MDS, statistics for all filesets are displayed. If it is executed at a subordinate MDS, then only statistics for filesets associated with that MDS are displayed.
Example 9-15 statfileset
sfscli> statfileset
Name        Server Current Transactions Stopped Retried Started Completed
=========================================================================
ROOT        mds1   0                    725     12      39826   39101
testfileset mds1   0                    3       0       17020   17017
lixfiles    mds1   0                    12      0       43760   43748
user1       mds1   0                    5       0       16942   16937
USERS       mds4   0                    17      0       16431   16414
userhomes   mds4   0                    5       0       16457   16452
aixfiles    mds3   0                    3       0       48613   48610
asad        mds3   0                    4       0       51      47
dbdir       mds2   0                    3       0       40      37
winhome     mds2   0                    3       0       32886   32883
This command displays how many transactions have started and completed on each fileset hosted by the MDS where the command is executed. The counters are reset each time the MDS is rebooted.
statserver
To check the workload on a specific MDS, use the CLI command statserver -workstats servername. In Example 9-16, you can see statistics gathered from the MDS tank-mds1.
Example 9-16 statserver -workstats
tank-mds1:~ # sfscli statserver -workstats
Name                          tank-mds1
Server Role                   Master
Most Current Software Version 2.2.2.91
===========Workload Statistics===========
Updates            11889
Total Transactions 11889
Dirty Buffers      0
Clean Buffers      1422
Free Buffers       8578
Total Buffers      10000
Session Locks      2
Data Locks         5
Byte Range Locks   0
This command shows the locks, buffers, and transactions on the MDS, so you can see which servers are most active. The transaction counters are reset each time the MDS is started; therefore, to track them over a given time period, issue the command periodically and calculate the difference in values over successive iterations.

Tip: If statserver is executed at the master MDS, you can choose any MDS. If it is executed at a subordinate MDS, it only displays statistics for that MDS.
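Because the counters are cumulative and reset at MDS restart, the periodic-sampling calculation looks like the sketch below (an illustrative helper with a hypothetical name, not a SAN File System utility):

```python
def transactions_in_interval(prev, curr):
    """Transactions completed between two samples of a cumulative
    counter. A drop in the value means the MDS restarted between
    samples, so count only the activity since the restart."""
    return curr - prev if curr >= prev else curr

assert transactions_in_interval(11889, 12050) == 161   # normal interval
assert transactions_in_interval(12050, 40) == 40       # MDS restarted in between
```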
statcluster
To display cluster statistics, use the command statcluster -workstats, as shown in Example 9-17.
Example 9-17 statcluster -workstats
tank-mds1:~ # sfscli statcluster -workstats
Name                             ATS_GBURG
ID                               42999
State                            Online
Target State                     Online
Last State Change                Sep 8, 2005 7:34:54 AM
Last Target State Change
Servers                          2
Active Servers                   2
Software Version                 2.2.2.91
Committed Software Version       2.2.2.91
Last Software Commit             Sep 6, 2005 10:30:15 AM
Software Commit Status           Not In Progress
Metadata Check State             Active
Metadata Check Percent Completed 72 %
Installation Date                Sep 6, 2005 10:30:15 AM
============Master Server Workload Statistics=============
System Updates            5
Total System Transactions 6
Clean Buffers             26
Dirty Buffers             0
Free Buffers              486
Total Buffers             512
The statcluster command gives you additional information about the cluster, such as software version, cluster state, and buffer statistics. In addition, it includes the status of the Metadata Checker function. It also includes a section on the system metadata workload, so you can calculate what proportion of the total workload is comprised of the master MDS working with the System pool. There are several other options on the statcluster command; one of the most useful is the -netconfig parameter. This version can be executed on any MDS and will return (among other things) the IP address of the master MDS. Example 9-18 shows a typical output.
Example 9-18 statcluster -netconfig
mds2:~ # sfscli statcluster -netconfig
Name               sanfs
IP                 9.42.164.114
Cluster Port       1737
Heartbeat Port     1738
Client-Server Port 1700
Admin Port         1800
Command issued from subordinate server
statfile
The statfile command displays metadata information about specified file(s), including the storage pool and fileset where the file is stored, the MDS to which the fileset is associated, and its size. There is also a verbose mode (-v on), which includes information such as date and time of creation and last access. It can only be run from the master MDS. You can use this command to check if policies are being applied as you want by seeing which storage pool a particular file is stored in. Example 9-19 shows a typical output. Note that you cannot use wildcards in the file specification; each file must be named in full.
Example 9-19 statfile
sfscli> statfile sanfs/aixfiles/aixhome/cd/README.GUID
Name                                  Pool    Fileset  Server Size (B) File Modified
==============================================================================================
sanfs/aixfiles/aixhome/cd/README.GUID aixrome aixfiles mds3   571      Jun 05, 2004 1:47:32 PM
lsclient
To display the current client workload, use lsclient -l, as shown in Example 9-20. This command gives statistics per client per MDS. For brevity in the example, we have limited it to a specific client; however, if you do not specify a client, the results for all clients will be shown.
Example 9-20 lsclient
sfscli> lsclient -l
Client   Session ID State   Server    Renewals Last Renewal            Next Renewal (secs) Privilege Client IP   Port Client OS File System Version Transactions Started Transactions Complete Session Locks Data Locks Byte Range Locks
=======================================================================================================================================================================================================================================
sanbs1-8 18         Current tank-mds1 23829    Sep 9, 2005 10:03:19 AM 19                  Root      9.82.23.18  2235 Windows   2.2.1.54            5                    5                     1             2          0
san350-1 17         Current tank-mds1 23832    Sep 9, 2005 10:03:19 AM 19                  Root      9.82.22.137 2824 Windows   2.2.2.82            81                   81                    1             2          0
sanm80   16         Current tank-mds1 23846    Sep 9, 2005 10:03:20 AM 20                  Root      9.82.24.19  1021 AIX       2.2.2.82            1                    1                     0             1          0
sanm80   17         Current tank-mds2 23826    Sep 9, 2005 10:29:31 AM 17                  Root      9.82.24.19  1023 AIX       2.2.2.82            0                    0                     0             0          0
sanbs1-8 19         Current tank-mds2 23827    Sep 9, 2005 10:29:34 AM 20                  Root      9.82.23.18  2237 Windows   2.2.1.54            0                    0                     0             0          0
san350-1 18         Current tank-mds2 23826    Sep 9, 2005 10:29:32 AM 18                  Root      9.82.22.137 2833 Windows   2.2.2.82            1                    1                     0             0          0
You can view client-specific statistics, such as transactions, locks, and leases, for each client.

Note: SAN File System only keeps statistics for clients currently accessing the global namespace; it has no static view of all the clients, only the active clients.
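The per-client rows lend themselves to quick summaries. As a sketch, the following tallies active sessions per MDS from saved lsclient -l rows (abbreviated here to the first four columns, with field 4 being the serving MDS):

```shell
# Count active client sessions per MDS from saved "lsclient -l" rows.
# Rows are abbreviated to Client / Session ID / State / Server.
rows='sanbs1-8 18 Current tank-mds1
san350-1 17 Current tank-mds1
sanm80 16 Current tank-mds1
sanm80 17 Current tank-mds2
sanbs1-8 19 Current tank-mds2
san350-1 18 Current tank-mds2'

# Tally field 4 (the MDS serving each session) and print the counts.
summary=$(printf '%s\n' "$rows" | awk '{ n[$4]++ } END { for (s in n) print s, n[s] }' | sort)
printf '%s\n' "$summary"
```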
This shows a snapshot of overall system performance, including the state (online, offline, and so on) of each MDS, the number of filesets assigned to each MDS, and the number of transactions per minute. Recent error messages are shown at the bottom of the display, and you can use the filtering pull-downs to limit or expand the data displayed. You can tell which MDS is the current master by the small server stack to its left in the table. You can set the display to refresh automatically at a designated interval by selecting a time period from the Refresh Interval drop-down. Click the link for any MDS to show properties for that server, or click any link in the Filesets column to show the filesets currently assigned to that server. You can also view statistics on individual components, such as Servers, Client Sessions, Containers, Storage Pools, Volumes, LUNs, and Engines.

To view statistics for specific SAN File System components:
1. Select Monitor System → Statistics, as shown in Figure 9-20 on page 405.
2. Click the link on the left-hand side for the component for which you want to view statistics. You can select among the following components: Servers, Client Sessions, Containers, Storage Pools, Volumes, LUNs, and Engines. In this example, we have chosen to show statistics about client sessions and storage pools.
Figure 9-18 shows statistics for the active client sessions, including locks and sessions, both active and expired.
The storage pool report provides statistics, such as volume size and usage, as shown in Figure 9-19. It also shows how many storage pools have reached their alert threshold.
3. Select the components for which you want to include statistics in the report, then click Create Report (see Figure 9-22).
4. You can now view and print the report using the print function in your Web browser.
5. Click Close to close the report.
The above example shows statistics, including the number of reads and writes, for each HBA.
This command shows read and write counts for each LUN.
To get a list of commands available with datapath, simply type datapath at the Linux prompt, as shown in Example 9-23.
Example 9-23 Available parameters with datapath
mds1:~ # datapath
Invalid command
Usage: datapath query adapter [n]
       datapath query device [n]
       datapath set adapter <n> online/offline
       datapath set device <n> path <m> online/offline
       datapath set device [n]/([n] [n]) policy rr/fo/lb/df
       datapath query adaptstats [n]
       datapath query devstats [n]
       datapath open device <n> path <n>
Network metrics for server ServerNo=0, ipAddress=9.82.24.96, port=1700
Connection Time = 0
Network Send metrics
  MsgSent 181   AvgSize 176     MinSize 0   MaxSize 1055
  Unreliable 220136   Acks 152   NAcks 0   Attempts 0
  AvgRTT 0   MinRTT 0   MaxRTT 0
Network Recv metrics
  MsgRecv 152   AvgSize 247.34   MinSize 0   MaxSize 1450
  Dropped 0   NAcks 0

Network metrics for server ServerNo=1, ipAddress=9.82.24.97, port=1700
Connection Time = 0
Network Send metrics
  MsgSent 104   AvgSize 138     MinSize 0   MaxSize 244
  Unreliable 212132   Acks 73   NAcks 0   Attempts 0
  AvgRTT 0   MinRTT 0   MaxRTT 0
Network Recv metrics
  MsgRecv 73   AvgSize 184.03   MinSize 0   MaxSize 1094
  Dropped 0   NAcks 0   Unreliable 212119   Acks 104
--------------------------------------------------------------------------------
Example 9-25 shows Transaction Manager (TM) statistics per client related to TM data structures, including the number of messages sent and received (per message type), the maximum and average lengths of the transaction queues, and the number of transactions, messages, and leases lost. The statistics also include the number of transactions falling within certain time ranges, or buckets, for each transaction type.
Example 9-25 SAN File System transaction manager statistics
root@sanm80:/usr/tank/client/bin > ./stfsstat -tm -mount /mnt/sanfs
Date: 2005-10-05 21:19:41
STFS Client Version: 2.2.2.82
STFS stats since 2005-09-20 15:18:33
TM Metrics
ServNo 0
  Queues           MaxLen  AvgLen  Txns Enq  Txns Deq
  Xmit Queue       1       0.0000  181       181
  Std Proc Queue   1       1.0000  126       107
  DDL Proc Queue   1       1.0000  22        22
  RRL Proc Queue   0       0.0000  0         0
  Messages Sent 187   MsgSize 144.15   BatchSz 1.00   maxBatchSz 15
ServNo 1
  Queues           MaxLen  AvgLen  Txns Enq  Txns Deq
  Xmit Queue       1       0.0000  104       104
  Std Proc Queue   1       1.0000  44        44
  DDL Proc Queue   1       1.0000  27        27
  RRL Proc Queue   0       0.0000  0         0
  Messages Sent 107   MsgSize 105.58   BatchSz 1.00   maxBatchSz 15
Transactions 117   Retry 1   Outstand 0   Del/Fail 0   Abandon 0   Blind 0
Last Lease Thread Schedule 2005-10-05 21:19:41
Loss of Leases 3   IdentifyAttmpts 9   Reasserts 4   Avg #Obj/Reassert 0
Message Buffers
  Sent     294      AvgLen 130.87   MinLen 52   MaxLen 1023
  Received 432329   AvgLen 40.10    MinLen 40   MaxLen 1450
Messages Sent, by response-time bucket (<100mu, 100mu-1ms, 1-10ms, 10-100ms)
  CreateFile          1   5   0
  LookupName          30  0   0
  RemoveName          1   5   0
  SetAccessCtlAttr    3   0   0
  ReadDir             1   0   0
  ReadDirPlus         24  0   0
  AcquireSessionLock  18  2   0
  AcquireDataLock     9   6   0
  BlkDiskUpdate       0   6   0
Messages Received
  RenewLease                432113
  IdentifyResp              5
  ReportTxnStatus           11
  CreateFileResp            6
  LookupNameResp            25
  RemoveNameResp            6
  SetAccessCtlAttrResp      3
  ReadDirResp               1
  ReadDirPlusResp           26
  AcquireSessionLockResp    20
  DemandSessionLock         11
  InvalidateDirectory       11
  InvalidateObjAttr         1
  PublishBasicObjAttr       7
  AcquireDataLockResp       13
  DemandDataLock            49
  BlkDiskUpdateResp         6
  PublishClusterInfo        6
  PublishLoadUnitInfo       4
Report Txn Status Subcodes
  stpTxnRC_Success                0
  stpTxnRC_Name_Already_Exists    0
  stpTxnRC_Name_Not_Found         5
  stpTxnRC_Object_Not_Found       0
  stpTxnRC_Lock_Denied            1
  stpTxnRC_Wrong_Object_Type      0
  stpTxnRC_No_Space               0
  stpTxnRC_Object_Not_Empty       0
  stpTxnRC_Different_Container    0
  stpTxnRC_Invalid_Parameter      0
  stpTxnRC_Read_Only_Directory    0
  stpTxnRC_Change_Name_To_Self    0
  stpTxnRC_Range_Deadlock         0
  stpTxnRC_Range_Not_Available    0
  stpTxnRC_No_Conflicting_Range   0
  stpTxnRC_Retry_Required         1
  stpTxnRC_Internal_Error         0
  stpTxnRC_Others                 4
The RootClientFlag for this client: 1
--------------------------------------------------------------------------------
AVG 1.7381   MIN 1   MAX 3
Desired Metadata Cache Size          268435456
Metadata Cache Grace Period          1800
Object Hash Table Buckets            64003
Name Hash Table Buckets              64003
Total Client Memory                  2119234
Total Client Memory (MW Overhead)    1984
Memory Allocs                        727175
Memory Frees                         726927
Current Memory Waits                 0
Total Memory Waits                   0
Total File Objects                   0
Total Dir Objects                    2
Total Symlink Objects                0
Total Objects                        1
Total Shadow Objects                 0
Total Inconsistent Objects           1
Total Names                          0
Total Segments                       0
Total Attributes                     0
Access Time Updates                  37
Translation Discards                 0
Directory Name Discards              59989
Object Hit Ratio                     92.81 %
Name Hit Ratio                       88.18 %
MakeInconsistentObject operations    1
MakeMRUObject operations             60320
MakeZeroRefObject operations         61
in 488 bytes
To monitor general client performance, you can use SDD commands, such as datapath query adaptstats/devstats, as well as operating system specific utilities, such as top (UNIX), perfmon (Windows), and vmstat (UNIX). To view statistics on the HBAs on the client, use datapath query adaptstats, as shown in Example 9-27.
Example 9-27 Viewing HBA statistics on client
C:\Program Files\IBM\Subsystem Device Driver>datapath query adaptstats
Adapter #: 0
=============
           Total Read  Total Write  Active Read  Active Write  Maximum
I/O:       40          20           0            0             2
SECTOR:    75          11           0            0             9

Adapter #: 1
=============
           Total Read  Total Write  Active Read  Active Write  Maximum
I/O:       0           8            0            0             0
SECTOR:    0           13           0            0             0
The command shows read and write figures for each HBA on that system.
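Totals across adapters can be computed from the saved output with awk. A sketch using the I/O rows from Example 9-27 inlined as text:

```shell
# Sum total read and write I/O counts across all adapters from saved
# "datapath query adaptstats" output (only the I/O rows are inlined).
stats='I/O: 40 20 0 0 2
I/O: 0 8 0 0 0'

# Field 2 is Total Read, field 3 is Total Write on each I/O row.
totals=$(printf '%s\n' "$stats" | awk '/^I\/O:/ { r += $2; w += $3 } END { print "reads=" r " writes=" w }')
echo "$totals"
```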
For statistics on the disk device, use datapath query devstats, as shown in Example 9-28 on page 413.
Example 9-28 Viewing disk statistics on client
C:\Program Files\IBM\Subsystem Device Driver>datapath query devstats
Total Devices : 5

Device #: 0
=============
                Total Read  Total Write  Active Read  Active Write  Maximum
I/O:            8           118832       0            0             20
SECTOR:         8           950656       0            0             160
Transfer Size:  <= 512      <= 4k        <= 16K       <= 64K        > 64K
                8           118832       0            0             0

Device #: 1
=============
                Total Read  Total Write  Active Read  Active Write  Maximum
I/O:            8           0            0            0             1
SECTOR:         8           0            0            0             1
Transfer Size:  <= 512      <= 4k        <= 16K       <= 64K        > 64K
                8           0            0            0             0

Device #: 2
=============
                Total Read  Total Write  Active Read  Active Write  Maximum
I/O:            8           0            0            0             1
SECTOR:         8           0            0            0             1
Transfer Size:  <= 512      <= 4k        <= 16K       <= 64K        > 64K
                8           0            0            0             0

Device #: 3
=============
                Total Read  Total Write  Active Read  Active Write  Maximum
I/O:            346         645713       0            0             20
SECTOR:         2712        5165704      0            0             160
Transfer Size:  <= 512      <= 4k        <= 16K       <= 64K        > 64K
                8           646051       0            0             0

Device #: 4
=============
                Total Read  Total Write  Active Read  Active Write  Maximum
I/O:            8           0            0            0             1
SECTOR:         8           0            0            0             1
Transfer Size:  <= 512      <= 4k        <= 16K       <= 64K        > 64K
                8           0            0            0             0
have latent queued I/O. If a rogue MDS is detected, one of the other MDSs shuts it down, via the RSA adapter, before failing over its workload. Figure 9-23 summarizes the various failure possibilities and the actions that are taken in each instance.
Fault or operation                                        Action taken
========================================================================================================
Manually move a fileset between servers                   Only filesets served by the destination MDS are
                                                          affected; other MDSs continue processing
                                                          without pause.
Manually stop a subordinate                               Filesets automatically moved.
Manually stop the master                                  Filesets/master automatically moved.
Manually start a subordinate                              Filesets may be failed back (per configuration).
Recoverable Metadata server software fault, subordinate   Server automatically restarted; filesets are
                                                          not moved.
Recoverable Metadata server software fault, master        Server automatically restarted; filesets/master
                                                          role are not moved.
Non-recoverable software or hardware fault, subordinate   Engine automatically shut down and workload
                                                          moved to other servers.
Non-recoverable software or hardware fault, master        Engine automatically shut down and workload
                                                          moved to other servers.
SAN File System client hardware or software fault         Locks held by the failed client are released
                                                          (see below).
Any individual SAN File System files or directories locked by a client that fails are ordinarily released after 20 seconds if the client has not recovered by that time.
Note that when an MDS is enabled to restart automatically, an SNMP trap is not sent when the MDS is restarted. Manually stopping an MDS or cluster disables the MDS restart service for that MDS or cluster. Manually starting the MDS or cluster reenables the MDS restart service for that MDS or cluster.
The MDS restart service can be started manually using the startautorestart command in sfscli.
Failover
In Figure 9-24, we have a four-MDS cluster.
We can see that among these four servers, mds4 is acting as a spare server, since no fileset is explicitly assigned to it (value of 0 in the Filesets column). Note also in the list of filesets, shown in Figure 9-25, mds4 does not appear in the Assigned Server column.
In this configuration (static fileset assignment only, with one server as a spare), if any MDS fails, all of its filesets will automatically move to the spare MDS, in this case mds4. We simulated a failure on mds3 by disconnecting both of its Ethernet connections; we had to disconnect both, since the Ethernet bonding configuration would automatically fail over to the other NIC if only one went down. After 10 seconds (the default in SAN File System V2.2.2), the failed MDS is removed from the cluster. It displays with an unknown state (the State field reports - ), as shown in Figure 9-26 on page 417. Note that in the case of a planned outage, the MDS should first be stopped gracefully using stopserver; in that case, it would report its State as Not Running.
In the meantime, all filesets that were assigned to MDS mds3 (user1 and asad) are sequentially and automatically reassigned to the spare MDS mds4. We can see the result of the failover in Figure 9-27. The value in the Server column for these filesets still shows mds3, as they are statically assigned there. The time taken to complete the failover varies with the number of filesets requiring failover and how active they are, but is generally up to about one minute, and often less.
Tip: The failover process can be monitored using the catlog sfscli command, which displays the error messages logged by the MDS.

Filesets asad and user1 have been statically assigned to MDS mds3, but are currently assigned to MDS mds4 because of the failover.
Failback
In our case, we were able to fix the network problem without rebooting, that is, by reconnecting the Ethernet cables on mds3. This did not automatically restart the SAN File System processes. The MDS could report its presence to the other servers, but its state shows as Not running, as shown in Figure 9-28. Therefore, failback is not triggered.
To trigger the failback, we have to restart the MDS processes on mds3. Select mds3 and choose Start... in the Select Action drop-down menu. The message indicates that bringing up the server will cause all static filesets to be reassigned to their original server, as shown in Figure 9-29. In our case, this will cause filesets asad and user1 to fail back to MDS mds3.
Once the server is started, the static filesets are reassigned to the original MDS. If the failure had caused the MDS to reboot, it would have restarted SAN File System automatically, and failback would therefore have been automatic. From a client perspective, fileset movement (whether manual or during failover) is not disruptive and will not usually return an error to the calling application; it typically causes a pause in any reads or writes during the failover and failback processes. Some applications might experience a timeout; if this occurs, the client can retry the operation. Note that no administrative tasks are necessary on the client side as part of failover or failback.
Figure 9-30 shows that the master MDS is currently mds1. We gracefully stop the master MDS by selecting mds1 and Stop in the Select Action drop-down menu. The server then reports as Not running.
At this moment, the cluster is in the process of electing a new master and failing over all filesets assigned to the former master MDS mds1. As described in 9.5.2, Fileset redistribution on page 415, MDS mds4 will be assigned these filesets, since it is a spare. As part of the master failover, the SAN File System Web interface is no longer hosted by mds1. Any current browser sessions will hang and must be closed and re-opened, pointing to any of the remaining MDS IP addresses. In our example, we re-opened the browser to https://mds3:7979/sfs/. After the initial login window, the console is automatically redirected to the Web interface hosted by the new master MDS. In Figure 9-31, we see that mds2 has assumed the master role.
Note that the new master MDS can also be determined from any running server using the statcluster command with the netconfig option, as shown in Example 9-30.
Example 9-30 statcluster command used to determine master Metadata server
mds3:~ # sfscli statcluster -netconfig
Name sanfs
IP 9.42.164.115
Cluster Port 1737
Heartbeat Port 1738
Client-Server Port 1700
Admin Port 1800
Command issued from subordinate server
The line labeled IP gives us the IP address of the master Metadata server. In our case, the address 9.42.164.115 corresponds to server mds2. Later on, we can restart server mds1. This will trigger static fileset movements as part of failover, but will not affect the master metadata role assignment.
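Mapping that IP address back to a server name is easily scripted. A sketch, using a hypothetical address table for this cluster (on a real system you might consult /etc/hosts instead):

```shell
# Resolve the master IP reported by "statcluster -netconfig" to an MDS
# name. The address table below is hypothetical, for illustration only.
master_ip=9.42.164.115
table='9.42.164.114 mds1
9.42.164.115 mds2
9.42.164.116 mds3'

# Print field 2 of the row whose first field matches the master IP.
master_name=$(printf '%s\n' "$table" | awk -v ip="$master_ip" '$1 == ip { print $2 }')
echo "master is $master_name"
```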
Using the SNMP capabilities of SAN File System
In each phase of the failover process, SAN File System triggers SNMP traps. These traps can be sent to IBM Director on the Master Console, if deployed, or to any other SNMP manager available in your environment. We will show you how to send SNMP traps to IBM Director on the Master Console.
2. Check SNMP Manager, enter the Master Console IP address, and choose V2C for the SNMP version. Leave the default SNMP port and community as is, as shown in Figure 9-32. Click Apply to save the changes.
3. Click SNMP Events under Monitor System → SNMP Properties to select the severity of events that will be sent as a trap to the SNMP Managers, as shown in Figure 9-33.
Figure 9-33 Selecting the event severity level that will trigger traps
We choose to send all except information level events. Click Apply to save the changes. SAN File System is now configured for SNMP.
4. Next, compile the SAN File System MIB on IBM Director and verify the traps. The SAN File System MIB is located on each MDS in the directory /usr/share/snmp/mibs/. Copy the MIB from this directory on the MDS, as shown in Example 9-32, and save it on the Master Console as IBM-SANFS-MIB.mib.
Example 9-32 Copy the IBM-SANFS-MIB.txt using scp in cygwin $scp root@9.42.164.114:/usr/share/snmp/mibs/IBM-SANFS-MIB.txt . root@9.42.164.114's password: IBM-SANFS-MIB.txt 100% |*****************************| 10872 00:00
5. Log into IBM Director on the Master Console, as shown in Figure 9-34.
6. In the Tasks menu, select Discover Systems → SNMP Devices, as shown in Figure 9-35.
7. In the Groups window on the left side of the window, expand the All Groups group, right-click the SNMP Devices group, and then select Compile a new MIB, as shown in Figure 9-36.
8. A window opens, prompting you to select the location of the new MIB. Select the IBM-SANFS-MIB.mib file that you saved in step 4 on page 423, and click OK, as shown in Figure 9-37.
9. The Status Messages window displays, as shown in Figure 9-38 on page 425.
10.To test the environment, you can log onto the master MDS and use the snmptrap operating system command, as shown in Example 9-33. Note this is not part of the SAN File System CLI; this will simply send a test trap to the SNMP manager specified.
Example 9-33 Sending test trap with snmptrap mds1:~ # snmptrap -v 2c -c public 9.42.164.160 '' IBM-SANFS-MIB:sanfsGenericTrap
11.Once the trap is sent, go into IBM Director and right-click All Events and select Open... under Event Log in the Tasks section, as shown in Figure 9-39.
IBM Director will now receive traps sent by SAN File System. For example, Figure 9-41 shows a trap that is sent when shutting down an MDS.
In the trap details, we can see that the server is mds1. It is moving from State 1 (Online) to State 0 (Down). Each time an MDS or the cluster changes state, a similar trap is sent to the specified SNMP Managers; therefore, the entire failover and failback process can be monitored using these traps. It is possible to create filters in IBM Director that report only the SAN File System traps. In this way, you can get a quick overview of all SANFS-related events. Consult the IBM Director documentation for more details on this task.
A typical example of metadata-only access is listing the files in a directory, such as with the ls command on UNIX (including Linux) systems. If you then look at the contents of a file, you do require access to the volume the file is stored on, since reading a file requires both metadata and file data access.

So what happens in a non-uniform SAN File System configuration, where a client has attached the global namespace but only has access to some of the volumes? In particular, what visibility does the client have to SAN File System objects stored on volumes associated with LUNs that the client cannot access? To answer this question, we configured an environment with a fileset attached at the directory /sfs/sanfs/lixfiles/linuxhome. Assume our policy directs all files in this fileset to the same pool. A quick test is to use the statfile command to find the storage pool in which one of the files in the fileset is stored; we find that it is stored in pool lixprague. Next, we list the volumes in that pool using the lsvol command. Finally, we confirm that the client AIXRome has no access to the volume vol_lixprague1, using the reportclient command. This sequence is shown in Example 9-34.
Example 9-34 Verifying client AIXRome has no access to the volume
sfscli> statfile sanfs/lixfiles/linuxhome/test.txt
Name                              Pool      Fileset  Server Size (B) File Modified
============================================================================================
sanfs/lixfiles/linuxhome/test.txt lixprague lixfiles mds1   46       Jun 09, 2004 3:06:32 PM
sfscli> lsvol -pool lixprague
Name           State     Pool      Size (MB) Used (MB) Used (%)
===============================================================
vol_lixprague1 Activated lixprague 40944     8560      20
sfscli> reportclient -vol vol_lixprague1
Name
=========
LIXPrague
Now, let us try an operation on the client AIXRome requiring metadata-only access. In Example 9-35, you can see that you can list files in the /sfs/sanfs/lixfiles/linuxhome directory using the ls -l command on this client, even though the LUN where the files are stored is inaccessible. This is because this operation only accesses metadata and is fulfilled by the MDS hosting the fileset.
Example 9-35 You can list file system objects with ls commands, even if data LUN is inaccessible
[root@rome linuxhome]# ls -l
total 4096035
-rw-r--r--   1 root    root           336 Jun  4 09:57 dsmerror.log
-rw-r--r--   1 root    root           302 Jun  4 09:59 dsmj.log
-rw-r--r--   1 root    root           993 Jun  4 09:58 dsmwebcl.log
-rw-r--r--   1 root    root    4194304000 Jun  1 15:28 hugefile.log
drwxrwxrwx   2 root    root             6 Jun  4 10:00 install
d---------   2 1000000 1000000          2 May 26 01:14 lost+found
-rw-r--r--   1 root    root            13 Jun  9 14:50 next.txt
drwxr-xr-x  19 root    root            20 Jun  1 17:30 sysfiles
-rw-r--r--   1 root    root            27 Jun  9 14:50 test.txt
-rw-r--r--   1 root    root         10217 Jun  4 10:01 tsm_restore.gif
However, when we try a command that requires access to the actual volume where the files are stored, it fails. Example 9-36 on page 429 shows a failed attempt to display the content of the file test.txt, which returns an I/O error because the client cannot access the volume where the file is stored.
Example 9-36 However, you cannot read the content of the file if data LUN is inaccessible [root@rome linuxhome]# cat test.txt cat: test.txt: Input/output error [root@rome linuxhome]#
How can we restrict a SAN File System client from even seeing metadata about files, such as their names, last access dates, and permissions? The answer is standard operating system security measures: file and directory permissions. For our next example, on a privileged client, we set the permissions for the directory linuxhome to 700, which grants access only to root users on privileged clients. On our client AIXRome, which is not a privileged client, we can see in Example 9-37 that we cannot list or change to the secured directory, even when logged in as the root user.
Example 9-37 Client AIXRome cannot list files in the secured linuxhome directory
On the MDS:
mds4:~ # sfscli statcluster -config | grep Privileged
Privileged Clients LIXPrague,WINWashington

On the non-privileged client AIXRome:
[root@rome lixfiles]$ ls -l
total 5
drwx------   6 root root 13 Jun  9 14:50 linuxhome
[root@rome lixfiles]# cd linuxhome
[root@rome linuxhome]$ ls -ls
ls: .: Permission denied
[root@rome linuxhome]$
Therefore, we have demonstrated how client access works for both metadata and data access, as well as giving a brief example of how standard operating system methods can be used to prevent even metadata access to parts of the SAN File System.
We will assume the following configuration: fileset F has files stored in pools A and B. Pool A contains volumes 1 and 2, and pool B contains volumes 3 and 4. If a client NT1 needs access to fileset F, the SAN configuration and the disk subsystem configuration must be set up so that the client has visibility to volumes 1, 2, 3, and 4. If you add additional volumes to either pool A or pool B, these must also be made visible to client NT1. Figure 9-42 shows this configuration.
Figure 9-42 Client NT1 accessing fileset F, which spans storage pools SP A and SP B
When a client first initiates contact with the SAN File System, for example, when you boot the client and start the client processes, the MDS checks whether the client has incomplete access to any storage pool; that is, the client must access either all the volumes in a storage pool, or none of them. In the example above, if it is determined that the client has visibility to volume 3 but not volume 4, this is detected at startup and the server logs will contain the following message:
HSTCM0954W Client NT1 does not have access to volume 4 with diskID abcdef in storage pool B.
The client will still operate, but will give I/O errors if it tries to read data from volumes that are not visible to it. This is shown in 9.6, How SAN File System clients access data on page 427. If a client has only partial access to a storage pool (that is, visibility to only some volumes), then writes will still succeed, since the write will always be directed to a volume that the client can access. If, however, the client does not have access to ANY volumes in the required storage pool, the write will fail with an I/O error.
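The all-or-nothing rule can be sketched as a simple membership check: the client must see every volume in a pool, or none of them. Volume names below are illustrative, not from a live system:

```shell
# Sketch of the all-or-nothing check the MDS performs at client startup.
# Volume names are illustrative only.
pool_vols='vol1 vol2 vol3 vol4'
client_vols='vol1 vol2 vol3'

missing=''
for v in $pool_vols; do
  case " $client_vols " in
    *" $v "*) ;;                     # volume visible to the client
    *) missing="$missing $v" ;;      # not visible: partial access
  esac
done

if [ -n "$missing" ]; then
  echo "WARNING: client has partial access; missing:$missing"
fi
```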
The sample script provided must be run on the master MDS. It takes as input parameters the client name and the list of filesets to validate.
You must give one client name and one or more fileset names to check access. In this example, the access of the client AIXRome to the volumes in the storage pools either currently or potentially used by filesets aixfiles, winhome, lixfiles, and dbdir is checked.
Example 9-38 Using the client validation sample script mds4:/tmp/eric # ./check_fileset_access.sh AIXRome aixfiles winhome lixfiles dbdir Gathering information from SANFS - please wait... Listing luns on client AIXRome... Listing pools on SANFS... CMMCI9006E No Volume instances found that match criteria: pool = empty_pool. Now checking filesets access... INFO - Client AIXRome has correct access to fileset aixfiles WARNING - Client AIXRome does not have correct access to fileset winhome INFO - Client AIXRome has correct access to fileset lixfiles INFO - Client AIXRome has correct access to fileset dbdir Please refer to ./check_fileset_access.sh.log for details.
WARNING - Client AIXRome does not have correct access to fileset winhome INFO Checking fileset lixfiles... INFO - Client AIXRome has correct access to fileset lixfiles INFO Checking fileset dbdir... INFO - Client AIXRome has correct access to fileset dbdir ####### Checking for client AIXRome finished successfully ############
From this log, we can see that client AIXRome does not have correct access to fileset winhome, because it does not have access to volumes vol_winwashington1 and vol_winwashington2 from pool winwashington, as well as the volume in pool small_pool.
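When the validation script is run regularly, the WARNING lines are the part worth extracting. A sketch, with log lines inlined from Example 9-38:

```shell
# Pull the fileset names flagged WARNING out of the validation log.
# Log lines are inlined from Example 9-38.
log='INFO - Client AIXRome has correct access to fileset aixfiles
WARNING - Client AIXRome does not have correct access to fileset winhome
INFO - Client AIXRome has correct access to fileset lixfiles
INFO - Client AIXRome has correct access to fileset dbdir'

# The fileset name is the last field of each WARNING line.
bad=$(printf '%s\n' "$log" | awk '/^WARNING/ { print $NF }')
echo "filesets needing zoning fixes: $bad"
```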
Chapter 10. File movement and lifecycle management
Within the storage pools, we have defined volumes as shown in Example 10-2 on page 437. There are five volumes defined across the three storage pools. Data has been striped across both of the volumes associated with the Default User pool (poola).
Example 10-2 List volumes # sfscli lsvol Name State Pool Size (MB) Used (MB) Used (%) ==================================================== MASTER Activated SYSTEM 10224 384 3 avol1 Activated poola 102384 1216 1 avol2 Activated poola 40944 1216 2 bvol1 Activated poolb 51184 0 0 bvol2 Activated poolb 46064 0 0
The active policy is shown in Example 10-3. This policy assigns all files to the default user storage pool. This means that everything that is copied onto the SAN File System namespace will end up in poola.
Example 10-3 List policy # sfscli lspolicy Name State Last Active Modified Description ================================================================================ DEFAULT_POLICY active Sep 9, 2004 9:25:14 PM May 6, 2004 3:40:05 AM Default policy set (assigns all files to default storage pool)
We have copied files into the homefiles fileset (which is attached to the directory homefiles). Figure 10-1 shows the view of these files on a Windows client.
Figure 10-1 Windows-based client accessing homefiles fileset
Chapter 10. File movement and lifecycle management
The files are striped across the volumes of poola. Example 10-5 shows the list of contents (using the reportvolfiles command) of the two volumes in poola, avol1 and avol2.
Example 10-5 List contents of avol1 volume # sfscli reportvolfiles avol1 homefiles:homefiles/readme.doc homefiles:homefiles/dontreadme.doc homefiles:homefiles/instructions.txt homefiles:homefiles/SANFS Admin Guide.pdf homefiles:homefiles/SANFS Maint&PD Guide.pdf homefiles:homefiles/SANFS_InstallGuide.pdf homefiles:homefiles/sfs-package-2.1.0-7.i386.rpm homefiles:homefiles/StatusReport.doc workfiles:workfiles/inst.images/PMP3/U482893.bff workfiles:workfiles/inst.images/PMP3/520003.tar workfiles:workfiles/inst.images/PMP3/U482895.bff workfiles:workfiles/inst.images/PMP3/U485143.bff workfiles:workfiles/inst.images/PMP3/U485155.bff workfiles:workfiles/inst.images/PMP3/U485162.bff workfiles:workfiles/inst.images/PMP3/U485186.bff workfiles:workfiles/inst.images/PMP3/U485191.bff workfiles:workfiles/inst.images/PMP3/U485371.bff workfiles:workfiles/inst.images/PMP3/U485401.bff ***etc etc*** # sfscli reportvolfiles avol2 workfiles:workfiles/inst.images/PMP3/U497873.bff workfiles:workfiles/inst.images/PMP3/U497902.bff workfiles:workfiles/inst.images/PMP3/U497904.bff workfiles:workfiles/inst.images/PMP3/U497905.bff workfiles:workfiles/inst.images/PMP3/U497906.bff workfiles:workfiles/inst.images/devices.fcp.disk.ibm2145.rte workfiles:workfiles/inst.images/new/bos.adt.syscalls.5.2.0.30.bff workfiles:workfiles/inst.images/new/bos.diag.rte.5.2.0.30.bff workfiles:workfiles/inst.images/new/bos.diag.util.5.2.0.30.bff workfiles:workfiles/inst.images/new/devices.chrp.pci.rte.5.2.0.30.bff workfiles:workfiles/inst.images/new/devices.common.IBM.disk.rte.5.2.0.30.bff workfiles:workfiles/inst.images/new/devices.common.IBM.ethernet.rte.5.2.0.30.bff ***etc etc***
Now we will use the mvfile command to manually move a single file from one pool to another. If any FlashCopy images contain this file, then the file within any image will also be moved from the original storage pool to the destination. You must be logged into the operating system on the engine hosting the master MDS to run this command. The commands accepts the following parameters; -f Forces the MDS to move the file even if the file is open, that is, being accessed by a client. -pool pool_name Specifies the name of the storage pool to which to move the file. To defragment a file, rather than move it, specify the file's current storage pool. -client client_name Specifies the name of a SAN File System client to perform the move or defragment of the file. The client must have access to all the volumes contained in the current and target storage pools. To list all active clients that can access a volume, use the reportclient -vol command. To list the volumes in a storage pool, use the lsvol -pool command. 438
IBM TotalStorage SAN File System
path
Specifies the fully qualified names of one or more files to move or defragment. A fully qualified name includes the full directory path, for example, cluster-name/fileset-name/file-name or cluster-name/file-name. This parameter does not support wildcard characters in directory or file names.

-
Specifies that you want to read the names of one or more files to move or defragment from stdin (for example, - < /work/files_list.txt).

Example 10-6 shows moving the file readme.doc from poola to poolb, using the client AIXRome to initiate the move.
Example 10-6 Move one file
mds2:~ # sfscli mvfile -pool poolb -client AIXRome /sanfs/homefiles/readme.doc
CMMNP5463I File /sanfs/homefiles/readme.doc was moved successfully.
In Example 10-7, we use the reportvolfiles command to verify that the readme.doc file was successfully moved to the bvol1 volume on poolb.
Example 10-7 Verify that file moved to bvol1
mds2:~ # sfscli reportvolfiles bvol1
homefiles:homefiles/readme.doc
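The stdin form of mvfile expects a plain list of fully qualified file names, one per line. As a self-contained sketch of building such a list, the following uses a scratch directory in place of the real /sanfs mount (the paths and file names here are illustrative assumptions, not the actual lab configuration):

```shell
# Sketch only: /tmp/sanfs_demo stands in for a real SAN File System
# mount such as /sanfs; file names echo those used in this chapter.
mkdir -p /tmp/sanfs_demo/homefiles
touch /tmp/sanfs_demo/homefiles/readme.doc \
      /tmp/sanfs_demo/homefiles/instructions.txt
# One fully qualified name per line, suitable for: sfscli mvfile ... - < test1
find /tmp/sanfs_demo -type f | sort > /tmp/test1
cat /tmp/test1
```

On a real client, you would run find against the global namespace mount and feed the resulting list to mvfile, as the next example shows.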
To move all the files contained in this input file, use the mvfile command format as shown in Example 10-9. This reads in the list of files contained in test1, and uses it as standard input (stdin) to the command.
Example 10-9 Move stack of files using the mvfile command
# sfscli mvfile -pool poolb -client AIXRome - < test1
CMMNP5463I File /sanfs/homefiles/dontreadme.doc was moved successfully.
CMMNP5463I File /sanfs/homefiles/instructions.txt was moved successfully.
CMMNP5463I File /sanfs/homefiles/SANFS Admin Guide.pdf was moved successfully.
CMMNP5463I File /sanfs/homefiles/SANFS Maint&PD Guide.pdf was moved successfully.
CMMNP5463I File /sanfs/homefiles/sfs-package-2.1.0-7.i386.rpm was moved successfully.
CMMNP5463I File /sanfs/homefiles/StatusReport.doc was moved successfully.
CMMNP5463I File /sanfs/workfiles/inst.images/PMP3/U482893.bff was moved successfully.
CMMNP5463I File /sanfs/workfiles/inst.images/PMP3/520003.tar was moved successfully.
Important: The actual I/O will be performed by the specified client to and from the target and source volumes. This should be considered when selecting a client to perform the move. We recommend selecting a less loaded client (for example, one with spare CPU and I/O capacity), or scheduling the move to occur at a less busy time, to avoid a performance impact on an already heavily loaded client.

Once the files have been moved, verify that the file movement was successful using the reportvolfiles command. In Example 10-10, we verify that all the files have moved off the avol1 volume, since it is now empty.
Example 10-10 Verify files moved from avol1 volume # sfscli reportvolfiles avol1 CMMNP5122I No files were found on Volume avol1.
Finally, we use the reportvolfiles command to verify that the moved files are distributed across the volumes in poolb, bvol1 and bvol2, as shown in Example 10-11.
Example 10-11 Verify that files moved to bvol1 volume
# sfscli reportvolfiles bvol1
homefiles:homefiles/readme.doc
homefiles:homefiles/dontreadme.doc
homefiles:homefiles/instructions.txt
homefiles:homefiles/SANFS Admin Guide.pdf
homefiles:homefiles/SANFS Maint&PD Guide.pdf
homefiles:homefiles/SANFS_InstallGuide.pdf
homefiles:homefiles/sfs-package-2.1.0-7.i386.rpm
workfiles:workfiles/inst.images/PMP3/U482893.bff
workfiles:workfiles/inst.images/PMP3/520003.tar
workfiles:workfiles/inst.images/PMP3/U482895.bff
#
# sfscli reportvolfiles bvol2
homefiles:homefiles/sfs-package-2.1.0-7.i386.rpm
homefiles:homefiles/StatusReport.doc
workfiles:workfiles/inst.images/PMP3/520003.tar
workfiles:workfiles/inst.images/PMP3/U485891.bff
workfiles:workfiles/inst.images/PMP3/U485893.bff
workfiles:workfiles/inst.images/PMP3/U485975.bff
workfiles:workfiles/inst.images/PMP3/U485976.bff
workfiles:workfiles/inst.images/PMP3/U485979.bff
workfiles:workfiles/inst.images/PMP3/U485986.bff
workfiles:workfiles/inst.images/PMP3/U485993.bff
MIGRATE-FROM-POOL or DELETE-FROM-POOL
Specifies whether to move or delete files.

source_pool_name
Identifies the source storage pool of files to be moved or deleted.

target_pool_name
Identifies the target storage pool of files to be moved. This parameter is not used if DELETE-FROM-POOL is used.
FOR FILESET (fileset_name)
Specifies one or more filesets in which the files reside. This parameter is optional.

WHERE
Compares the file attributes specified in the rule with the attributes of the file to determine whether the file should be moved or deleted.

AND
Used to specify a compound of the following conditions:
AGE operator integer DAYS
Age of a file, specified as less than (<), less than or equal (<=), greater than (>), or greater than or equal (>=) to a number of days since the file was last accessed.

SIZE operator integer KB | MB | GB
Size of a file, specified as less than (<), less than or equal (<=), greater than (>), or greater than or equal (>=) to a number of kilobytes, megabytes, or gigabytes.

You can specify an AGE qualifier, a SIZE qualifier, or both.
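For intuition, the AGE and SIZE predicates behave much like find(1) predicates on an ordinary file system. The following self-contained sketch (the scratch directory and file names are invented for the illustration) selects files of at least roughly 300 KB:

```shell
# Sketch: SIZE >= 300 KB is analogous to find's -size +299k.
# AGE >= 365 DAYS would be analogous to -atime +364 (not exercised here,
# since the demo files are freshly created).
mkdir -p /tmp/pool_demo
dd if=/dev/zero of=/tmp/pool_demo/big.bin bs=1024 count=400 2>/dev/null
dd if=/dev/zero of=/tmp/pool_demo/small.bin bs=1024 count=10 2>/dev/null
find /tmp/pool_demo -type f -size +299k
```

Only big.bin matches the size predicate; small.bin falls below the threshold, just as a policy rule with SIZE >= 300 KB would skip it.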
On the Windows SAN File System client, we can see several files that are larger than 300 KB, as shown in Figure 10-2 on page 443.
Now we need to create our policy. This must be done with a text editor, since there is no facility in the SAN File System CLI or GUI for creating file management policies. Save the rules to a file; we called ours lcmscrpt.txt. Example 10-14 shows the contents of our sample policy. There are two rules: the first specifies that files in the homefiles fileset that are larger than 300 KB and that are in poolb will be moved to poola; the second specifies that any files in poolb that have not been accessed for 365 days or more will be deleted.
Example 10-14 Sample file management policy
mds2:/tmp # cat lcmscrpt.txt
RULE LrgFiles MIGRATE FROM POOL poolb TO POOL poola FOR FILESET (homefiles) WHERE SIZE >= 300 KB
RULE oldFiles DELETE FROM POOL 'poolb' WHERE AGE >= 365 days
The following options are available for the lifecycle management policy script:

--verbose
Print detailed output while executing.

--log <logdir>
Log execution of the script into time/date stamped files in <logdir>.

--client <client>
Preferred client(s) to move or delete files. The client must have access to all the volumes in all source and target storage pools specified in any move and delete operations.

--rules <rulesfile>
File name of the previously created file management policy.

--plan <planfile>
File name for the plan file. This file will be created if running in plan phase, or must already have been created if running in execute phase.
--phase {plan | execute}
Specifies whether to run the script in plan or execute phase. The script must first be run in plan phase, then in execute phase. In the plan phase, the script scans through the namespace, selecting files that match the criteria in the policy. It produces a human-readable output file, which lists the file names matching the policy that are to be moved or deleted, as well as the appropriate storage pools. In execute phase, this output file is then used as input to perform the designated actions on the selected files.

In this example, we are using the rules file (lcmscrpt.txt) that we just created. The result of the plan phase will be written to /tmp/lcmoutput, as shown in Example 10-15. We highly recommend including the --verbose parameter, as this will display any errors when planning or executing the file management policy. Note that you do not need to specify the --client parameter in this phase, since the plan phase simply scans the metadata to determine which files match the policy; no access to the user storage pools is required.
Example 10-15 Plan phase
mds2:/usr/tank/server/bin # ./sfslcm.pl --verbose --rules /tmp/lcmscrpt.txt --plan /tmp/lcmoutput --phase plan
2004-09-28 01:48:19 : HSTHS0009I Beginning plan phase
2004-09-28 01:48:19 : HSTHS0010I Reading rules from file /tmp/lcmscrpt.txt
2004-09-28 01:48:19 : HSTHS0020I Rules summary: 1 pools were found in /tmp/lcmscrpt.txt
2004-09-28 01:48:19 : HSTHS0021I Pool poolb has 2 rules
2004-09-28 01:48:19 : HSTHS0022I End of rules summary report
2004-09-28 01:48:19 : HSTHS0011I Beginning to create plan for rules.
2004-09-28 01:48:19 : HSTHS0012I Running report of files in pool poolb
2004-09-28 01:48:21 : HSTHS0016I Adding plan records for pool poolb
2004-09-28 01:48:21 : HSTHS0017I Added 6 records for pool poolb
2004-09-28 01:48:21 : HSTHS0018I Finished creating plan. 6 records were created for 1 pools.
2004-09-28 01:48:21 : HSTHS0023I End of plan phase
After completing the plan phase, we recommend that you examine the output file to ensure that the file management policy you created will execute as expected. Example 10-16 on page 445 lists the contents of our plan file. It shows that six files will be migrated from poolb to poola, as they match the criteria in the rules file. You can delete entries from the plan file if you decide you do not want to migrate or delete certain files. Alternatively, you might choose to split the file into pieces and execute them concurrently on different clients (or the same client) in order to improve performance. This option is discussed further in 10.2.4, Lifecycle management recommendations and considerations on page 446. When editing the plan file, be careful not to delete important data in the records.
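The splitting approach can be sketched with the standard split utility. The plan-record contents below are simplified stand-ins for illustration, not the real plan file format:

```shell
# Sketch: fabricate six stand-in plan records, then split them into
# two 3-record pieces that could be executed concurrently.
seq 1 6 | sed 's/^/record /' > /tmp/plan_demo
split -l 3 /tmp/plan_demo /tmp/plan_piece.
ls /tmp/plan_piece.*
```

Each piece would then be passed, one per invocation, as the --plan argument of a separate execute-phase run.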
Now we will execute the plan file by running the script in the execute phase, as shown in Example 10-17. We need to supply the --client parameter, naming a SAN File System client with access to both poola and poolb, since these are referenced in the plan file.
Example 10-17 Execute phase
mds2:/usr/tank/server/bin # ./sfslcm.pl --verbose --client AIXRome --plan /tmp/lcmoutputm --phase execute
2004-09-28 01:55:08 : HSTHS0024I Beginning execute phase
2004-09-28 01:55:08 : HSTHS0025I Executing plan from /tmp/lcmoutputm
2004-09-28 01:55:08 : HSTHS0026I Beginning operations on pool poolb, plan record 1
2004-09-28 01:55:21 : HSTHS0027I End of operations on pool poolb: 6 migrations, 0 deletes, 0 errors, 0 operations skipped due to errors
2004-09-28 01:55:21 : HSTHS0030I End execute phase
We can quickly check the execution of the script by rerunning the statfile command. In Example 10-13 on page 442, we saw that this file was in poolb. Now, as shown in Example 10-18, this file has moved from poolb to poola.
Example 10-18 Verify large file moved from poolb to poola
# sfscli statfile /sanfs/homefiles/sfs-package-2.1.0-7.i386.rpm
Name                                          Pool  Fileset   Server Size (B)  File Modified
================================================================================
/sanfs/homefiles/sfs-package-2.1.0-7.i386.rpm poola homefiles mds1 110770205 Mar 24, 2004 1:53:51 PM
mds2:/usr/tank/server/bin #
You have now successfully automated file movement using a file management policy.
Chapter 11. Clustering the SAN File System Microsoft Windows client
[Figure: lab configuration — MSCS nodes Sanbs1-9 and Sanbs1-8 connected through the SAN to the SAN FS MDS; storage pool CLUSTERPOOL, fileset cluster_fs1, attached at T:\cluster_dir]
The defined network interfaces are shown in Figure 11-3. There are two networks: a public and a private.
Figure 11-4 shows the default cluster resource types. After configuration with SAN File System, we will see a new cluster resource type defined.
Figure 11-5 SAN File System client view of the global namespace
Both MSCS nodes have access to a LUN on the SVC. We have configured this in the SAN File System so that only these nodes can write to this disk. First, we created a new storage pool, CLUSTERPOOL, as shown in Example 11-1.
Example 11-1 Storage pool for use by the MSCS
tank-mds2:~ # sfscli lspool CLUSTERPOOL
Name        Type Size (MB) Used (MB) Used (%) Threshold (%) Volumes
===================================================================
CLUSTERPOOL User 2032      0         0        80            1
We can see the ID of the LUN that is visible to the Windows 2003 servers using the lslun command, as shown in Example 11-2. This confirms that they both see the same LUN.
Example 11-2 Show the LUN visible to the clustered nodes
sfscli> lslun -client sanbs1-9
Lun ID                                     Vendor Product Size (MB) Volume State     Storage Device WWNN Port WWN
=================================================================================================================
VPD83NAA6=600507680184001AA800000000000077 IBM    2145    2047             Available UNKNOWN             UNKNOWN
sfscli> lslun -client sanbs1-8
Lun ID                                     Vendor Product Size (MB) Volume State     Storage Device WWNN Port WWN
=================================================================================================================
VPD83NAA6=600507680184001AA800000000000077 IBM    2145    2047             Available UNKNOWN             UNKNOWN
The LUN was then defined as a volume in the CLUSTERPOOL storage pool, using the mkvol command, as in Example 11-3.
Example 11-3 Add the shared LUN to the storage pool
tank-mds1:~ # sfscli mkvol -lun VPD83NAA6=600507680184001AA800000000000077 -client sanbs1-8 -pool CLUSTERPOOL clustervol1
CMMNP5426I Volume clustervol1 was created successfully.
Example 11-4 shows the newly added volume, using the lsvol command. We have other volumes, pools and clients; however, we are setting up a dedicated pool and fileset only for our MSCS configuration.
Example 11-4 List SAN File System volumes
tank-mds1:~ # sfscli lsvol
Name                   State     Pool         Size (MB) Used (MB) Used (%)
==========================================================================
MASTER                 Activated SYSTEM       2032      224       11
ITSO_SYS_POOL-SYSTEM-0 Activated SYSTEM       2032      64        3
SVC-DEFAULT_POOL-0     Activated DEFAULT_POOL 2032      16        0
clustervol1            Activated CLUSTERPOOL  2032      0         0
Now we want to add a fileset, called cluster_fs1, to reside in the storage pool. We use the mkfileset command, as in Example 11-5.
Example 11-5 Create a fileset for use by MSCS
tank-mds1:~ # sfscli mkfileset -attach ATS_GBURG -dir cluster_dir cluster_fs1
CMMNP5147I Fileset cluster_fs1 was created successfully.
tank-mds1:~ # sfscli lsfileset -l
Name Fileset State Serving State Quota Type Quota (MB) Used (MB) Used (%) Threshold (%) Images Most Recent Image Server Assigned Server Attach Point Directory Name Directory Path Parent Children Description
============================================================================================================
ROOT        Attached Online Soft 0      0  0 0  0 tank-mds1 tank-mds1 ATS_GBURG             ATS_GBURG              2 Root fileset
aixfiles    Attached Online Soft 200000 16 0 80 0 tank-mds1 tank-mds1 ATS_GBURG/aixfiles    aixfiles    ATS_GBURG ROOT 0
cluster_fs1 Attached Online Soft 0      0  0 80 0 tank-mds1           ATS_GBURG/cluster_dir cluster_dir ATS_GBURG ROOT 0 -
We make sure that our MSCS nodes have root privileges to the SAN File System, using the addprivclient command, as shown in Example 11-6. For more information about privileged clients, see 7.6.2, Privileged clients on page 297.
Example 11-6 Create privileged clients
tank-mds1:~ # sfscli addprivclient sanbs1-8 sanbs1-9
Are you sure you want to add sanbs1-8 as a privileged client? [y/n]:y
Are you sure you want to add sanbs1-9 as a privileged client? [y/n]:y
CMMNP5378I Privileged client access successfully granted for sanbs1-8.
CMMNP5378I Privileged client access successfully granted for sanbs1-9.
Now we want to create a policy so that all files in the fileset we created will be stored in the CLUSTERPOOL storage pool. This makes sure that, among the SAN File System clients, only the MSCS nodes will have access to the LUN. We create a text file, called /tmp/policy.txt, with the rule shown in Example 11-7 on page 453. The rule directs any file in the cluster_fs1 fileset to the storage pool CLUSTERPOOL. All other files will go into the designated default storage pool. In our configuration, all the other clients have access to another pool, which has been designated as the default pool. We discussed how to set up policy for non-uniform storage configurations like this in 7.8.8, Policy management considerations on page 328.
Example 11-7 Contents of policy input file
VERSION 1 /* Do not remove or change this line! */
rule 'stgRule1' set stgpool 'CLUSTERPOOL' for FILESET('cluster_fs1')
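The policy file can equally be created non-interactively with a here-document; this sketch reproduces the rule from Example 11-7 (the /tmp/policy.txt path matches the one used in the text):

```shell
# Create /tmp/policy.txt without an interactive editor; the quoted EOF
# delimiter prevents any shell expansion inside the rule text.
cat > /tmp/policy.txt <<'EOF'
VERSION 1 /* Do not remove or change this line! */
rule 'stgRule1' set stgpool 'CLUSTERPOOL' for FILESET('cluster_fs1')
EOF
grep -c stgRule1 /tmp/policy.txt
```

The file is then ready to be referenced by the mkpolicy -file option shown in the next example.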
Currently, we have the default policy active. To create a new policy, we use the mkpolicy command, referencing our text file. The new policy is called cluster_policy, as shown in Example 11-8.
Example 11-8 Create a new policy
tank-mds1:~ # sfscli mkpolicy -file /tmp/policy.txt cluster_policy
CMMNP5193I Policy cluster_policy was created successfully.
Now we will activate the new policy, using the usepolicy command, as shown in Example 11-9.
Example 11-9 Activate the new policy
tank-mds1:~ # sfscli lspolicy
Name           State    Last Active              Modified                 Description
=====================================================================================
DEFAULT_POLICY active   Aug 19, 2005 4:26:10 PM  Aug 19, 2005 10:41:19 AM Default policy set (assigns all files to default storage pool)
cluster_policy inactive                          Aug 23, 2005 4:34:55 PM
tank-mds1:~ # sfscli catpolicy cluster_policy
cluster_policy:
VERSION 1 /* Do not remove or change this line! */
rule 'stgRule1' set stgpool 'CLUSTERPOOL' for FILESET('cluster_fs1')
tank-mds1:~ # sfscli usepolicy cluster_policy
Are you sure you want to use this policy? New files should be allocated to a pool that is accessible to the clients where the file is needed. [y/n]:y
CMMNP5189I Policy cluster_policy is now the active policy.
tank-mds1:~ # sfscli lspolicy
Name           State    Last Active              Modified                 Description
=====================================================================================
DEFAULT_POLICY inactive Aug 26, 2005 1:34:55 PM  Aug 19, 2005 10:41:19 AM Default policy set (assigns all files to default storage pool)
cluster_policy active   Aug 25, 2005 2:15:50 PM  Aug 23, 2005 4:34:55 PM  -
The client can now see the directory corresponding to the fileset, which is cluster_dir, as shown in Figure 11-6.
We have to take ownership and set permissions on the directory for the fileset, cluster_dir. These were set as shown in Figure 11-7. We set the owner of this directory to Administrator, and gave full control to the Administrator and to the Cluster Services account. Full permissions for the Cluster Services account are required if you will create a clustered CIFS share, as we are doing.
Now, we check that the permissions and definitions are correct by creating a file on the volume, as shown in Figure 11-8.
Finally, to verify our policy is correct, we use the reportvolfiles command, as in Example 11-10. It confirms the newly created file has been stored in the volume corresponding to the LUN that is visible in the Microsoft cluster.
Example 11-10 Confirm the file is stored in the correct volume
tank-mds1:~ # sfscli reportvolfiles clustervol1
cluster_fs1:cluster_dir/test.file
Now that we have our basic setup, the next step is to install the SAN File System Microsoft Cluster Server Enablement package.
1. To start the installation, run the executable; ours was called IBM-SFS-MSCS-Enablement-WIN2K-2.2.2.93.exe. On the first window, shown in Figure 11-9, choose the installation language.
2. You will see the license agreement, as in Figure 11-10. Click Yes.
3. Now enter the options shown in Figure 11-11 on page 457. The User Name and Company Name can be any strings appropriate for your environment; for the Serial Number, enter IBM-SFS-MSCS-100. Note that the User Name does not have to be an operating system user ID.
4. Choose the installation directory for the package. We accepted the default, as shown in Figure 11-12.
5. Finally, you can confirm the installation parameters (see Figure 11-13).
6. After the installation completes, you are prompted to reboot the system. After rebooting sanbs1-8, we repeated the same installation steps on sanbs1-9 so that the cluster enablement software is available on each system. Now we need to define the SAN File System into the cluster.
Now we will create a cluster group for our SAN File System. Right-click an existing cluster group and select New Group, as shown in Figure 11-15.
Enter the properties for your group - first you give it a name and optional description, as shown in Figure 11-16. Our group is called ITSOSFSGroup. Click Next to continue.
On the next window, you can specify preferred owners for the group. We left this blank (see Figure 11-17).
Click Next to create the group. If it was created successfully, a window similar to Figure 11-18 on page 461 will display.
The group now appears in the main cluster display (see Figure 11-19). Note that it is offline.
Now we need to add a resource to the group for the actual SAN File System. Click Resource Types, right-click the SANFS resource, and then select New Resource, as shown in Figure 11-20.
Give your resource a name and optional description. Make sure SANFS is selected as the Resource type, and that the resource is in the group that you just defined (ITSOSFSGroup in our case, as shown in Figure 11-21). Click Next to continue.
On the next window, Figure 11-22 on page 463, all nodes should be included in the Possible owners window. If they are not, move them by selecting each in turn and clicking Add. Click Next.
Next you can enter any resource dependencies. We had none eligible to select, as shown in Figure 11-23. Click Next.
In Figure 11-24, enter the parameters for the SANFS resource. Specifically, specify the fileset that you want to define as a cluster resource.
Click Browse; the drop-down list displays all first-level directories in the SAN File System global namespace, as in Figure 11-25. In our configuration, we have two directories that are actually fileset attachment points, T:\aixfiles and T:\cluster_dir, as well as the first-level directory, T:\lost+found, which is part of the ROOT fileset. You may only select first-level directories (of the format T:\directoryname) as resources here, regardless of whether they are fileset attachment points. Any directory that is below the first level (for example, T:\aixfiles\subdir1, T:\cluster_dir\subdir2\subdir3) is not available for definition as a cluster resource. We selected the fileset that we know is available to the MSCS configuration (cluster_dir).
The Parameters window re-displays with the fileset path shown, as in Figure 11-26 on page 465.
Click Finish. You will see a pop-up similar to Figure 11-27 that states that the new resource was successfully created.
If you click Resources, you will see the newly created resource. Note that it is offline, as we have not brought it online yet (see Figure 11-28).
Now we will bring the group and resource online. Right-click the group (ITSOSFSGroup) and select Bring Online, as shown in Figure 11-29.
The display changes to indicate that the group, and its resources, are online (see Figure 11-30 on page 467). It is currently owned by the system where we are doing the configuration, in this case, sanbs1-9.
To check the initial failover of the resource, we shut down the owning node, sanbs1-9. We expect this resource to move to the other node, sanbs1-8. We start Cluster Administrator on sanbs1-8, and confirm that the ownership has transferred correctly. Note sanbs1-9 is showing as down (see Figure 11-31).
After rebooting sanbs1-9, the ownership of the SAN File System resource stays with sanbs1-8, since we did not specify a preferred owner (see Figure 11-32).
Figure 11-32 Resource stays with current owner after rebooting the original owner
Now we will show how to set up the MSCS configuration so that the SAN File System resource is shared to TCP/IP attached clients via CIFS.
First, remember we had to set appropriate permissions on the directory corresponding to the fileset to be shared for the cluster services account, as in Figure 11-7 on page 454. We now need to define some additional resources in the group ITSOSFSGroup. First, we need an IP address to be associated with the CIFS share. This is the IP address that CIFS clients will use to access the share. Right-click the group and select New Resource. Give the resource a name, CIFSShareIP in our case, an optional description, select Resource type of IP Address, and the correct group (ITSOSFSGroup in our case), as in Figure 11-33 on page 469. Click Next.
Specify the properties. Here are the properties we used (displayed after we created the resource and brought it online). In the General tab, both cluster nodes were selected as possible owners (see Figure 11-34).
We used the defaults for the Dependencies and Advanced tabs. On the parameters window (Figure 11-35), we specified a TCP/IP address to be associated with the CIFS share, 9.82.23.49, which was on the public LAN. For our environment, we obtained a new TCP/IP address; however, your environment may differ.
Next, we created a Network Name resource by selecting New Resource. We selected Network Name as the resource type, and ITSOSFSGroup for the group. We named the resource ITSOWinNetwork. Figure 11-36 on page 471 shows the General properties for this resource.
Under Dependencies (Figure 11-37), we specified the SANFS and IPAddress resources to be brought online first.
In the Parameters tab (Figure 11-38), we gave it a name, ITSOWINSHARE.
Finally, we create the File Share resource. Select New Resource, enter a name (ITSOWinShare in our case), select File Share as the resource type, and ITSOSFSGroup for the group. Figure 11-39 shows the General properties for this resource.
Under the Dependencies tab (Figure 11-40), we selected the SANFS, IP Address, and Network Name resources.
Figure 11-41 shows the Parameters tab. The Share name specified, ITSOWinShare, is the name by which the CIFS clients will reference it when they map the network drive.
Figure 11-41 File Share resource: parameters
We bring all the newly created resources online by right-clicking each one in turn and selecting Bring Online (see Figure 11-42). The resources are now online and owned by sanbs1-8, since this is where we initially configured them.
Now we can access the share from a TCP/IP attached client. To access the share, we select Tools → Map Network Drive. We enter drive letter M, and \\9.82.23.49\ITSOWinShare for the Folder. This information matches the parameters defined for the IP Address resource in Figure 11-35 on page 470, and for the File Share resource in Figure 11-41 on page 473. You may need to specify a different user name, depending on how your user authentication is configured.
Figure 11-44 on page 475 shows that the client can see the same files as are visible on the SAN File System client (compare with Figure 11-8 on page 455). We copied another file, SFS22_PDGuide.pdf, to the share to show that it can be written to by the client. The other file in the directory, arb.sfs, was installed automatically when we created the SAN File System resource; it is used by the MSCS nodes to arbitrate ownership of the resource.
Figure 11-44 CIFS client access SAN File System via clustered SAN File System client
To test the behavior in a failover, we initiated a long copy operation from the client to the CIFS share (see Figure 11-45).
At this time, sanbs1-8 owned the share and associated resources. We shut down this system. At the time of the failure, the drive became inaccessible to the client (see Figure 11-46).
The other node, sanbs1-9, took over the resource within seconds, and the drive again became accessible to the client, so we could resume the copy operation. This was similar to the behavior that would be observed in a non-SAN File System environment if a temporary network glitch caused a regular CIFS share to become unavailable. As another test, we copied some additional PDF files to the share and opened one of them in Acrobat. We then shut down the cluster node owning the share resource. When we tried to open a different PDF file, we got the same message that the drive was inaccessible, until the other node took over the resource (in an estimated less than 10 seconds). We could then open the new PDF file.
Chapter 12.
12.1 Introduction
Data protection is the process of making extra copies of data so that it can be restored in the event of various types of failure. The type of data protection (or backup) done depends on the kinds of failure that you wish to avoid. Various failures might require restore of a single file, an older version of a file, a directory, a LUN, or an entire system. Various methods for protecting the SAN File System are available, including these:

- SAN File System FlashCopy
- Backing up SAN File System files with third-party backup/restore applications (for example, IBM Tivoli Storage Manager, Legato NetWorker, and VERITAS NetBackup)
- Storage system-based protection methods (for example, the FlashCopy and PPRC functions of the ESS and SVC), also known as LUN-based backup
- Saving the SAN File System cluster configuration and restoring/executing it

SAN File System FlashCopy provides a space-efficient image of the contents of part of the SAN File System global namespace at a particular moment. SAN File System supports the use of backup tools that may already be present in your environment. For example, if your enterprise currently uses a storage management product, such as IBM Tivoli Storage Manager, SAN File System clients can use the functions and features of that product to back up and restore files that reside in the SAN File System global namespace. Another option is LUN-based backup, which uses the hardware-based instant copy features available in the underlying storage subsystems supported by SAN File System, such as FlashCopy in SVC and ESS. Finally, you can use a SAN File System command to back up the system metadata. This creates a file that can then be converted into scripts that automatically re-create the SAN File System metadata before restoring all of the user data.

When backing up files stored in SAN File System, you must save both the actual files themselves and the file metadata. Our examples will show some approaches for this.
For SAN File System, an administrator must also back up the system metadata, which includes information about fileset attachment points, storage pools, volumes, and file placement policies. This backup data is used to re-create the cluster state if necessary.
An example of using Tivoli Storage Manager for file-based backup is shown in 12.5, Back up and restore using IBM Tivoli Storage Manager on page 502. When using a file-based backup method, it is important to be aware of the associated file metadata backup (this includes all the permissions and extended attributes of the files). This file metadata for Windows-created files can only be backed up completely from a Windows backup client or utility. Similarly, file metadata for UNIX (including Linux) files can only be backed up completely from another UNIX-based backup client or utility. Therefore, if it is important to preserve full file attribute information, we recommend creating separate filesets by primary allegiance; that is, you would have certain filesets that only contain Windows-created files, and other filesets that only contain UNIX-created files. In this way, you can back up these filesets from the appropriate client OS. In a LUN-based approach, the administrator can use the instant copy features that exist in the storage subsystems that SAN File System supports. See 12.2, Disaster recovery: backup and restore on page 479 for more details.
This procedure also locks out any subsequent new I/O from the clients or MDSs.

3. Copy the following critical system configuration files from each of the MDSs to offline media, such as tape, or to another system, for example, the Master Console. These files are different on each MDS, so ensure that you copy them for each MDS separately. Example 12-2 shows part of the copy operation for our lab setup. You will have to use a secure copy (scp) or secure FTP (sftp) utility, such as those provided by Cygwin or PuTTY. The files are:

/etc/init.d/boot.local
/etc/sysconfig/network/routes
/etc/sysconfig/network/ifcfg-eth0
/etc/HOSTNAME
/etc/hosts
/etc/resolv.conf
/root/.tank.passwd
/usr/tank/admin/truststore
/usr/tank/admin/config/cimom.properties
/usr/tank/server/config/Tank.Bootstrap
/usr/tank/server/config/Tank.Config

Once the files are copied to another location, you can use a third-party backup application or an OS utility to save them to removable media, or use tar on the MDS to back them up. However, installing or running a third-party backup application on the MDS itself is not supported.
Example 12-2 Copying system config files
$ scp truststore root@9.42.164.114:/usr/tank/admin/truststore
root@9.42.164.114's password:
truststore            100% 1901
$ scp cimom.properties root@9.42.164.114:/usr/tank/admin/config/cimom.properties
root@9.42.164.114's password:
cimom.properties      100% 2450
$ scp Tank.Bootstrap root@9.42.164.114:/usr/tank/server/config/Tank.Bootstrap
root@9.42.164.114's password:
Tank.Bootstrap        100%   60
$ scp Tank.Config root@9.42.164.114:/usr/tank/server/config/Tank.Config
Tank.Config           100%   79
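As a convenience, the copy step above can be scripted. The sketch below gathers the critical configuration files into a single date-stamped archive that can then be transferred off the MDS with scp; the file list is the one given in the text, while the staging directory and archive naming are our own assumptions — adjust both to your environment.

```shell
#!/bin/sh
# Sketch: gather the critical MDS configuration files into one date-stamped
# archive before transferring it off the MDS. DEST is a hypothetical staging
# directory; the file list matches the one in the text.
DEST=${DEST:-/tmp}
STAMP=$(date +%Y%m%d%H%M%S)
ARCHIVE="$DEST/mds-config-$STAMP.tar.gz"

FILES="/etc/init.d/boot.local /etc/sysconfig/network/routes \
/etc/sysconfig/network/ifcfg-eth0 /etc/HOSTNAME /etc/hosts /etc/resolv.conf \
/root/.tank.passwd /usr/tank/admin/truststore \
/usr/tank/admin/config/cimom.properties \
/usr/tank/server/config/Tank.Bootstrap /usr/tank/server/config/Tank.Config"

# Archive only the files that actually exist on this machine, preserving paths.
existing=""
for f in $FILES; do
  [ -f "$f" ] && existing="$existing $f"
done

if [ -n "$existing" ]; then
  tar -czf "$ARCHIVE" $existing 2>/dev/null
  echo "Created $ARCHIVE"
fi
```

The resulting archive can then be copied to the Master Console or another system with scp, as in the example above.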
As an alternative to backing up the files manually, you can use the setupsfs command with the -backup option to create an archive of the files. Run setupsfs -backup on each MDS, as shown in Example 12-3. This command creates an archive of the critical files needed for recovery of the MDSs, known as a DRfile. You can copy the archive file to diskette, or to another server, for an external copy.
Example 12-3 Create DRfile archive
tank-mds1:/usr/tank/admin/bin # setupsfs -backup
/etc/HOSTNAME
/etc/tank/admin/cimom.properties
/etc/tank/server/Tank.Bootstrap
/etc/tank/server/Tank.Config
/etc/tank/server/tank.sys
/etc/tank/admin/tank.properties
/usr/tank/admin/truststore
/var/tank/server/DR/TankSysCLI.auto
/var/tank/server/DR/TankSysCLI.volume
/var/tank/server/DR/TankSysCLI.attachpoint
/var/tank/server/DR/After_upgrade_to_2.2.1-13.dump
/var/tank/server/DR/After_upgrade_to_2.2.1.13.dump
/var/tank/server/DR/Before_Upgrade_2.2.2.dump
/var/tank/server/DR/drtest.dump
/var/tank/server/DR/Moved_to_ESSF20.dump
/var/tank/server/DR/SFS_BKP_After_Upgrade_to_2.2.0.dump
/var/tank/server/DR/Test_051805.dump
/var/tank/server/DR/ATS_GBURG.rules
/var/tank/server/DR/ATS_GBURG_CLONE.rules
Created file: /usr/tank/server/DR/DRfiles-tank-mds1-20050912114651.tar.gz
4. Begin the storage subsystem copy service according to its specific procedures. In our lab setup, we are using FlashCopy on an IBM TotalStorage SAN Volume Controller (2145). Figure 12-1 shows the FlashCopy setup. We created source and target vdisks for all the User Pool and System Pool LUNs. The User Pool vdisks and the System Pool vdisks are then configured in a consistency group called sanfs_group.
Figure 12-1 SVC FlashCopy relationships and consistency group
5. After the storage subsystem copy completes, re-enable the SAN File System MDS using the resumecluster command on the master MDS, as shown in Example 12-4.
Example 12-4 resumecluster
sfscli> resumecluster
CMMNP5233I Cluster successfully returned to the online state.
6. Restart the client applications using the specific procedures for those applications.

Important: To ensure consistency of the restore in the event of a disaster, you need a copy of the configuration files from each MDS, labeled to match the LUN copy (FlashCopy) image.
3. Power on each of the MDSs, and re-install the operating system, as described in 5.2.2, Install software on each MDS engine on page 127.

4. Now copy back the saved system configuration files on each MDS:

/root/.tank.passwd
/usr/tank/admin/truststore
/usr/tank/admin/config/cimom.properties
/usr/tank/server/config/Tank.Bootstrap
/usr/tank/server/config/Tank.Config
6. On the master MDS, run startcluster, and verify that all the MDSs are running using the lsserver command, as shown in Example 12-6.
Example 12-6 Start cluster and verify
sfscli> startcluster
CMMNP5236I Cluster started successfully.
sfscli> lsserver
Name State  Server Role Filesets Last Boot
========================================================
mds3 Online Master      0        May 21, 2004 9:19:27 AM
mds4 Online Subordinate 1        May 21, 2004 5:17:07 AM
sfscli>
(Device listing, condensed: /proc/scsi/scsi-style output showing a series of IBM 2145 Direct-Access devices on hosts scsi2 and scsi3, Ids 00 and 01, LUNs 00, 01, and 03, all at ANSI SCSI revision 03, together with the local disks at ANSI SCSI revision 02. This confirms that the SVC volumes are again visible to the MDS.)
5. On the master MDS, run startcluster, and verify that all the MDSs are running using the lsserver command, as shown in Example 12-8.
Example 12-8 startcluster and verify
sfscli> startcluster
CMMNP5236I Cluster started successfully.
sfscli> lsserver
Name State  Server Role Filesets Last Boot
=========================================================
mds1 Online Master      6        May 25, 2004 11:13:06 PM
mds2 Online Subordinate 2        May 25, 2004 10:58:53 PM
6. Now start the application servers and the SAN File System client.
Once created, the metadata dump file is stored in the directory /usr/tank/server/DR on the master Metadata server's local disk, and contains all of the information required to re-create the metadata. The metadata dump file can then be used to completely restore the global namespace tree (that is, all the fileset attach points) and the MDS configuration, before using a client-based backup application to restore the actual data in the global namespace.
Enter a name for the file, select Create new recovery file, and click OK. In Figure 12-3, we created a metadata dump file called SANFS_05-27-04. It is good practice to include a date stamp in the file name.
3. Finally, select the file name by checking it and selecting Create, as shown in Figure 12-4.
3. Figure 12-6 shows the confirmation window with the name of the file to be deleted. Click Delete to complete the operation.
Example 12-11 builddrscript command
sfscli> builddrscript SANFS_10-22-04
CMMNP5363I Disaster recovery script files for SANFS_10-22-04 were built successfully.
Note: Backup, Operator, or Administrator privileges are required to run builddrscript.

Example 12-12 shows the three created script files:
- TankSysCLI.auto: Commands to create Storage Pools, Filesets, and Policies
- TankSysCLI.volume: Commands to add Volumes to Storage Pools
- TankSysCLI.attachpoint: Commands to attach Filesets
Example 12-12 script files
# cd /usr/tank/server/DR
mds1:/usr/tank/server/DR # ls -l
total 32
drwxr-xr-x 2 root root      Oct 10 02:37 .
drwxr-xr-x 4 root root      Oct  5 03:37 ..
-rw-rw-rw- 1 root root      Oct 22 02:04 SANFS_10-22-04.dump
-rw-r--r-- 1 root root      Oct 22 02:34 TankSysCLI.attachpoint
-rw-r--r-- 1 root root      Oct 22 02:34 TankSysCLI.auto
-rw-r--r-- 1 root root      Oct 22 02:34 TankSysCLI.volume
Important: The mkdrfile and builddrscript commands should be run frequently enough to ensure that any configuration changes are reflected in the output of these commands (at least whenever you make a change to the MDS configuration). You can use a backup utility to copy the dump file to an alternate location, or to tape, and so on.
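To run these commands frequently enough, you can schedule a job that regenerates the dump file and scripts. The sketch below writes the two commands to a temporary file and feeds it to sfscli with the -script option (the non-interactive invocation form shown in this chapter); the date-stamped naming convention and the idea of scheduling it nightly are our own assumptions.

```shell
#!/bin/sh
# Sketch: refresh the DR dump file and scripts under a date-stamped name.
# Intended for a scheduled (for example, nightly cron) job on the master MDS.
name="SANFS_$(date +%m-%d-%y)"
cmds=$(mktemp)
printf 'mkdrfile %s\nbuilddrscript %s\n' "$name" "$name" > "$cmds"

# Only invoke sfscli where it is installed (that is, on a real MDS).
if command -v sfscli >/dev/null 2>&1; then
  sfscli -script "$cmds"
fi
```

Remember to copy the resulting dump and script files off the MDS after each run, as recommended above.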
Tip: These files can also be used as documentation of the configuration, and can be used to selectively re-create entities, such as policies, if these are inadvertently deleted.

To restore the metadata from these script files, run the scripts in the order shown. Note that these scripts are designed to be run on a new SAN File System installation, to re-create the system metadata from scratch. Therefore, before running these scripts, you should have re-installed and configured each MDS, as described in 5.2, SAN File System MDS installation on page 126. Verify that all MDSs in the cluster are online and that the cluster is running (using the lsserver command), as shown in Example 12-13.
Example 12-13 Check online server state
sfscli> lsserver -state online
Name State  Server Role Filesets Last Boot
=========================================================
mds1 Online Master      6        Oct 19, 2004 11:13:06 PM
mds2 Online Subordinate 2        Oct 19, 2004 10:58:53 PM
Example 12-14 shows the contents of our lab setup TankSysCLI.auto file.
Example 12-14 TankSysCLI.auto file contents
mds1:/usr/tank/server/DR # cat /usr/tank/server/DR/TankSysCLI.auto
################################################################################
# CLI Commands to create Storage Pools, Filesets, Service Classes and
# Policy Sets.
# These commands need NO manual intervention.
################################################################################
# SMDR Version: 2.2.0.90
# Time of Backup: Oct 22 02:03:49
################################################################################
# Backup Master Node: 9.42.164.114:1737
################################################################################
# Cluster Id: 60355
# Installation Id: 8361388714337296178
# DiskEpoch: 0
################################################################################
chpool -thresh 80 -desc "Default storage pool" DEFAULT_POOL
mkpool -partsize 16 -allocsize 4 -thresh 80 -desc "This is a test pool" Test_Pool1
mkpool -partsize 16 -thresh 80 lixprague
mkpool -partsize 16 -thresh 80 winwashington
mkpool -partsize 16 -thresh 80 aixrome
setdefaultpool -quiet DEFAULT_POOL
mkfileset -server mds2 -thresh 80 -desc "user home directories" userhomes
mkfileset -server mds1 -thresh 80 user1
mkfileset -server mds2 -quota 1000 -qtype hard -thresh 65 dbdir
mkfileset -server mds1 -thresh 80 aixfiles
mkfileset -thresh 80 asad
mkfileset -server mds1 -quota 1000 -qtype soft -thresh 90 winhome
mkfileset -server mds1 -thresh 80 lixfiles
mkpolicy -file /usr/tank/server/DR/Example_Policy.rules -desc "Example_Policy rules for handling *.mp3 and *DB2.* files" Example_Policy
mkpolicy -file /usr/tank/server/DR/Test_Policy.rules -desc "For testing purpose" Test_Policy
mkpolicy -file /usr/tank/server/DR/non-unif.rules non-unif
usepolicy -quiet non-unif
2. Edit /usr/tank/server/DR/TankSysCLI.volume and modify it to match your current SAN settings, if there have been changes since the last creation of the metadata dump file and the scripts. Run the script TankSysCLI.volume with the command:
# sfscli -script /usr/tank/server/DR/TankSysCLI.volume
Example 12-15 shows the contents of our lab setup TankSysCLI.volume file.
Example 12-15 TankSysCLI.volume file contents
mds1:/usr/tank/server/DR # cat TankSysCLI.volume
################################################################################
# CLI Commands using client-side information.
# These commands need manual intervention.
#
# The first section of this file is a set of commands to add the volumes back
# into the SAN File System.
# The device names were as they appeared during backup on the master server.
# The lun names were as they appeared during backup.
# The clients listed for each volume are those that had a valid lease and
# had SAN access to the volume at the time of the backup.
# Please make sure that the client specified in the mkvol command is active.
# Please make sure that the lun names appearing here actually exist and
# have correct sizes and if not edit the lun names to correct values.
# The System MASTER volume has to be specified in tank.properties or via
# setupsfs and therefore has no corresponding CLI.
# The other System Volumes can either be specified in tank.properties or
# added using the CLI command, which appears inside comments for this reason.
#
# This file also contains commands to restore root privileges for any clients.
# Any clients which had root privileges at the time of the backup have
# addprivclient commands after the mkvol commands. Please uncomment lines or
# change the client names as appropriate.
################################################################################
# SMDR Version: 2.2.0.90
# Time of Backup: Oct 22 02:03:49
################################################################################
# Backup Master Node: 9.42.164.114:1737
################################################################################
# Cluster Id: 60355
# Installation Id: 8361388714337296178
# DiskEpoch: 0
################################################################################
################################################################################
# MASTER System Disk
# VolumeName= MASTER : Size= 2130706432 : Old GID= 74099EBC3308DF32
# Device= /dev/rsde
# Lun= "???"
################################################################################
# User Volume
# VolumeName= volume1 : Size= 107357405184 : Old GID= 740A862A20C0C3BF
# Device= (Unavailable)
# Client= AIXRome, Device Path= /dev/rvpath0
# Lun= "VPD83NAA6=600507680188801B2000000000000001"
mkvol -lun "VPD83NAA6=600507680188801B2000000000000001" -client AIXRome -pool DEFAULT_POOL -f volume1
################################################################################
# User Volume
# VolumeName= Test_Pool1-Test_Pool1-0 : Size= 104840822784 : Old GID= 740AC37A24431F1C
# Device= (Unavailable)
# Client= AIXRome, Device Path= /dev/rvpath2
# Lun= "VPD83NAA6=600507680188801B200000000000000C"
mkvol -lun "VPD83NAA6=600507680188801B200000000000000C" -client AIXRome -pool Test_Pool1 -f -desc "VPD83NAA6=600507680188801B200000000000000C" Test_Pool1-Test_Pool1-0
################################################################################
# User Volume
# VolumeName= vol_lixprague1 : Size= 32195477504 : Old GID= 740B45FF17B8F7C9
# Device= (Unavailable)
# Client= LIXPrague, Device Path= /dev/sdc
# Lun= "VPD83NAA6=600507680188801B200000000000001C"
mkvol -lun "VPD83NAA6=600507680188801B200000000000001C" -client LIXPrague -pool lixprague -f vol_lixprague1
################################################################################
# User Volume
# VolumeName= vol_winwashington1 : Size= 32195477504 : Old GID= 740B464898206D9C
# Device= (Unavailable)
# Client= WINWashington, Device Path= \??\VPATH#Disk&Ven_IBM&Prod_2145#1&1a681225&2&01#{53f56307-b6bf-11d0-94f2-00a0c91efb8b}
# Lun= "VPD83NAA6=600507680188801B200000000000001D"
mkvol -lun "VPD83NAA6=600507680188801B200000000000001D" -client WINWashington -pool winwashington -f vol_winwashington1
################################################################################
# User Volume
# VolumeName= vol_aixrome1 : Size= 32195477504 : Old GID= 740B465F78ED1865
# Device= (Unavailable)
# Client= AIXRome, Device Path= /dev/rvpath4
# Lun= "VPD83NAA6=600507680188801B200000000000001B"
mkvol -lun "VPD83NAA6=600507680188801B200000000000001B" -client AIXRome -pool aixrome -f vol_aixrome1
# addprivclient -quiet AIXRome
# addprivclient -quiet WINcli
# addprivclient -quiet WINCli
# addprivclient -quiet LIXPrague
# addprivclient -quiet WINWashington
3. Edit /usr/tank/server/DR/TankSysCLI.attachpoint to verify the settings, and run the script TankSysCLI.attachpoint with the following command. This is done only for setups where all filesets are attached only to the root directories of other filesets (as recommended). If you have filesets that are attached to directories, you will have to re-create those directories at a client, then reattach those filesets manually:
# sfscli -script /usr/tank/server/DR/TankSysCLI.attachpoint
Example 12-16 shows the contents of our lab setup TankSysCLI.attachpoint file.
Example 12-16 TankSysCLI.attachpoint file content
mds1:/usr/tank/server/DR # cat TankSysCLI.attachpoint
################################################################################
# CLI Commands to attach filesets.
# These commands need manual intervention.
# All the "mkdir" and "attachfileset" commands should be run in the order
# given.
# The "mkdir" command should be run on a client to recreate the directory path
# before running the following "attachfileset" CLI commands.
################################################################################
# SMDR Version: 2.2.0.90
# Time of Backup: Oct 22 02:03:49
################################################################################
# Backup Master Node: 9.42.164.114:1737
################################################################################
# Cluster Id: 60355
# Installation Id: 8361388714337296178
# DiskEpoch: 0
################################################################################
# Root Fileset Attachpoint Name : sanfs
################################################################################
#mkdir -p sanfs/aixfiles
attachfileset -attach sanfs/aixfiles -dir aixhome aixfiles
#mkdir -p sanfs/lixfiles
attachfileset -attach sanfs/lixfiles -dir linuxhome lixfiles
#mkdir -p sanfs/userhomes
attachfileset -attach sanfs/userhomes -dir user1 user1
#mkdir -p sanfs/winhome
Example 12-17 summarizes all the steps described above to create and restore the metadata using the metadata recovery dump file commands.
Example 12-17 Complete example of metadata recovery
sfscli> mkdrfile SANFS_10-22-04_1
CMMNP5359I Disaster recovery file SANFS_10-22-04_1 was created successfully.
sfscli> lsdrfile
Name             Date and Time           Size (KB)
==================================================
SANFS_10-22-04_1 Oct 22, 2004 3:00:37 AM 4
SANFS_05-27-04   May 27, 2004 2:04:02 AM 4
sfscli> builddrscript SANFS_10-22-04_1
CMMNP5363I Disaster recovery script files for SANFS_10-22-04_1 were built successfully.
sfscli> quit
mds1:/# cd /usr/tank/server/DR
mds1:/usr/tank/server/DR # ls -l
total 36
drwxr-xr-x 2 root root  352 Oct 10 03:00 .
drwxr-xr-x 4 root root   96 Oct  5 03:37 ..
-rw-rw-rw- 1 root root 4023 Oct 22 02:04 SANFS_05-27-04.dump
-rw-rw-rw- 1 root root 4023 Oct 22 03:00 SANFS_10-22-04_1.dump
-rw-r--r-- 1 root root 1372 Oct 22 03:00 TankSysCLI.attachpoint
-rw-r--r-- 1 root root 1719 Oct 22 03:00 TankSysCLI.auto
-rw-r--r-- 1 root root 4366 Oct 22 03:00 TankSysCLI.volume
mds1:/usr/tank/server/DR # sfscli -script /usr/tank/server/DR/TankSysCLI.auto mds1:/usr/tank/server/DR # sfscli -script /usr/tank/server/DR/TankSysCLI.volume mds1:/usr/tank/server/DR # sfscli -script /usr/tank/server/DR/TankSysCLI.attachpoint
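The three invocations must run in exactly this order: pools, filesets, and policies first, then volumes, then attach points. A small wrapper, sketched below, makes the ordering explicit and stops at the first failure; it is our own convenience script, not a product command, and it only invokes sfscli where the command is installed.

```shell
#!/bin/sh
# Sketch: run the generated DR scripts in the required order --
# TankSysCLI.auto, then .volume, then .attachpoint -- stopping at the
# first failure so later steps never run against a broken configuration.
DR=/usr/tank/server/DR
plan=""
for f in TankSysCLI.auto TankSysCLI.volume TankSysCLI.attachpoint; do
  plan="$plan $DR/$f"
done

for s in $plan; do
  # Only attempt the run on a machine where sfscli is installed.
  if command -v sfscli >/dev/null 2>&1; then
    sfscli -script "$s" || { echo "DR restore failed on $s" >&2; exit 1; }
  fi
done
```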
After you have re-created your SAN File System configuration from scratch, you need to restore all the client files from a backup taken with a file-based application (such as Tivoli Storage Manager).
Tip: Individual files can be copied from an available image in the .flashcopy directory back to the fileset itself if only some files in the fileset need to be restored.
2. Select Create from the drop-down menu and click Go. The Introduction window, listing the next three steps, is displayed, as shown in Figure 12-8 on page 495. Click Next to go to the Select Containers step.
3. We chose all the filesets (Figure 12-9). Click Next to go to the Set Properties step.

Attention: When creating FlashCopy images, an administrator specifies each fileset to be included; the FlashCopy image feature does not automatically include nested filesets. The FlashCopy image operation is performed individually for each fileset.
4. Now we specify the image name, image directory, and description for each FlashCopy image. The default name is Image followed by a sequence number. Figure 12-10 shows the default properties for our lab setup. Click Next to continue to the Verify Settings step.

Tips: Although we selected all the filesets, the images are created individually, one at a time. A fileset can have as many as 32 read-only FlashCopy images.
5. Check the selections, as shown in Figure 12-11 on page 497, and click Next.
6. This completes the process, and the images are created, as shown in Figure 12-12.
Note: All images are full backups; it is not possible to create incremental FlashCopy images.
From the client, we can see the FlashCopy images for all the filesets. Note that, by default, the .flashcopy directory is hidden on Windows. Figure 12-13 shows Windows Explorer on a Windows SAN File System client. We can see the newly created FlashCopy images, including the contents of Image-8, and a directory called INSTALL and its subdirectories, reflecting the directories in the actual fileset.
Figure 12-14 Windows Explorer view of the fileset after deletion: no INSTALL directory
1. From the SAN File System GUI, select Manage Copies → FlashCopy Images. We select the image Image-8 and the Revert to... action from the drop-down menu, as shown in Figure 12-15.
2. Verify and confirm the Image revert, as shown in Figure 12-16, and click OK to continue.
Image-8 is now reverted, as shown in Figure 12-17 on page 501, replacing the current contents of the fileset.
Now, if we view the files from the Windows client, we see our directory INSTALL and its contents, as shown in Figure 12-18.
The following procedure provides application server-free backup:
Figure 12-19 Exploitation of SAN File System with Tivoli Storage Manager
1. Create FlashCopy images of the filesets that you want to protect. This requires minimal disruption to the SAN File System clients that are performing a production workload (Web servers, application servers, database servers, and so on).
2. Now you can back up these FlashCopy images using a file-based backup application, such as Tivoli Storage Manager, where the Tivoli Storage Manager client is installed on a separate SAN File System client. It still sees all the files, but the backups run independently of the production SAN File System clients. To keep all file attributes, if you have both Windows-created and UNIX (including Linux)-created data in your SAN File System environment, it should be separated by fileset. You should then run two separate Tivoli Storage Manager clients: a Windows Tivoli Storage Manager/SAN File System client to back up Windows files, and an AIX Tivoli Storage Manager/SAN File System client to back up UNIX (including Linux) files. You can also run multiple instances of these, if required, to improve backup performance. The Tivoli Storage Manager server can be on any supported Tivoli Storage Manager server platform, and only needs to be SAN and LAN attached; it does not need to be a SAN File System client.

3. If you have implemented a non-uniform SAN File System configuration, such that not all filesets are visible to all clients, you will need additional backup clients to ensure that every fileset can be backed up by a client that has visibility to it.

4. You can also use the LAN-free backup client to back up these files directly over the SAN to a SAN-attached library, as shown, rather than using the LAN for backup data traffic. We therefore have both LAN-free and (application) server-free backup capability.

Note: When backing up the files in SAN File System, Tivoli Storage Manager automatically backs up the associated file metadata as well. Tivoli Storage Manager also supports restoring files to the same or a different location, and even to a different Tivoli Storage Manager client.
This means that you could restore files backed up from SAN File System not only to a different SAN File System environment, but also (as in a disaster recovery situation) to a local file system on another UNIX or Windows Tivoli Storage Manager client that is not a SAN File System client. That is, you could still restore these files from a Tivoli Storage Manager backup even if you no longer have a SAN File System environment to restore them to; they are just files to Tivoli Storage Manager. The metadata is handled appropriately for the restore platform, depending on whether the restore destination is a directory in the SAN File System global namespace or a local file system.
Both the Tivoli Storage Manager server and client code versions used in our lab were at V5.2.2.0. Note that in order to back up SAN File System data from AIX and Windows SAN File System clients, you need Tivoli Storage Manager client V5.1 or higher. To back up SAN File System data from Linux and Solaris clients, you need Tivoli Storage Manager client V5.2.3.1 or higher. All these clients are also SAN File System clients. In the following sections, we introduce sample backup/restore scenarios for both Windows and UNIX SAN File System filesets.
12.6.1 Back up Windows data using Tivoli Storage Manager Windows client
First, we back up the files with the Tivoli Storage Manager client:

1. To start the GUI, select Start → Programs → Tivoli Storage Manager → Backup-Archive GUI, and select the Backup function. Select the files to back up, as shown in Figure 12-20. Notice that the SAN File System drive and filesets appear as a Local drive in the Backup-Archive client.
2. Start the backup by clicking Backup. The files will be backed up to the Tivoli Storage Manager server. Note that we have selected for our backup not only the actual content of the INSTALL directory, but also its SAN File System FlashCopy image, which resides in the folder .flashcopy/Image-8. If you make a FlashCopy image each day (using a different directory) and back it up, Tivoli Storage Manager incremental backup will back up all the files each time. In 12.6.3, Backing up FlashCopy images with the snapshotroot option on page 510, we show you how to back up SAN File System FlashCopy images incrementally using the Tivoli Storage Manager -snapshotroot option.
Restore user data using Tivoli Storage Manager client for Windows
Having backed up both actual data and its FlashCopy image, we can execute our restore scenarios.
2. We chose to restore to the original location, as shown in Figure 12-22. Click Restore to start the restore.
3. Select the destination to restore the files to. We will restore the folder to the win2kfiles fileset in S:\winhome\win2kfiles\testfolder, as shown in Figure 12-24. Click Restore to start the restore. Note that we could not (and it would not make sense to) restore the files into the .flashcopy directory, because FlashCopy images and their directories are read-only.
The restore of the FlashCopy files is now complete; the original folder is restored.

Tip: Regular periodic FlashCopy images are highly recommended. They are the most efficient method for quickly backing up and restoring files in scenarios where the metadata is still available.
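Restoring a handful of files from a FlashCopy image does not require a backup application at all: as noted earlier in this chapter, the image contents can simply be copied back into the live fileset. The helper below sketches this; the function name and argument layout are our own invention, and it assumes only the .flashcopy directory structure described in this chapter.

```shell
#!/bin/sh
# Sketch: copy one file from a FlashCopy image back into the live fileset.
#   $1 = fileset root (for example, /sfs/sanfs/aixfiles/aixhome)
#   $2 = image name under .flashcopy (for example, Image-8)
#   $3 = file path relative to the fileset root
restore_from_image() {
  src="$1/.flashcopy/$2/$3"
  dst="$1/$3"
  [ -f "$src" ] || { echo "not in image: $src" >&2; return 1; }
  mkdir -p "$(dirname "$dst")"   # re-create the directory if it was deleted
  cp -p "$src" "$dst"            # -p preserves permissions and timestamps
}
```

For example, restore_from_image /sfs/sanfs/winhome/win2kfiles Image-8 INSTALL/setup.exe would bring back a single file (paths illustrative).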
12.6.2 Back up user data in UNIX filesets with TSM client for AIX
In this section, we introduce the following backup/restore scenarios:
- Back up and restore files using data in an actual fileset.
- Back up and restore SAN File System FlashCopy images using the TSM -snapshotroot option.
1. Now we back up the files with the Tivoli Storage Manager client. Example 12-19 shows the output.
Example 12-19 Backing up files using Tivoli Storage Manager AIX command line client
Rome:/sfs/sanfs >dsmc selective "/sfs/sanfs/aixfiles/aixhome/inst.images/*" "/sfs/sanfs/lixfiles/linuxhome/install/*"
IBM Tivoli Storage Manager
Command Line Backup/Archive Client Interface - Version 5, Release 2, Level 2.0
(c) Copyright by IBM Corporation and other(s) 1990, 2003. All Rights Reserved.

Node Name: AIXROME
Session established with server NPSRV2: Windows
  Server Version 5, Release 2, Level 2.0
  Server date/time: 06/02/04 09:51:00  Last access: 06/02/04 09:48:37

Selective Backup function invoked.

Directory-->              72 /sfs/sanfs/aixfiles/aixhome/inst.images [Sent]
Directory-->             312 /sfs/sanfs [Sent]
Directory-->              96 /sfs/sanfs/aixfiles [Sent]
Directory-->             144 /sfs/sanfs/aixfiles/aixhome [Sent]
Normal File-->    27,673,600 /sfs/sanfs/aixfiles/aixhome/inst.images/IP22727.tivoli.tsm.client.ba.32bit [Sent]
Selective Backup processing of '/sfs/sanfs/aixfiles/aixhome/inst.images/*' finished without failure.
Directory-->              72 /sfs/sanfs/lixfiles/linuxhome/install [Sent]
Directory-->             312 /sfs/sanfs [Sent]
Directory-->              72 /sfs/sanfs/lixfiles [Sent]
Directory-->             192 /sfs/sanfs/lixfiles/linuxhome [Sent]
Normal File-->       696,679 /sfs/sanfs/lixfiles/linuxhome/install/TIVguid.i386.rpm [Sent]
Selective Backup processing of '/sfs/sanfs/lixfiles/linuxhome/install/*' finished without failure.

Total number of objects inspected:     10
Total number of objects backed up:     10
Total number of objects updated:        0
Total number of objects rebound:        0
Total number of objects deleted:        0
Total number of objects expired:        0
Total number of objects failed:         0
Total number of bytes transferred:  27.05 MB
Data transfer time:                  2.32 sec
Network data transfer rate:     11,909.43 KB/sec
Aggregate data transfer rate:    9,186.13 KB/sec
Objects compressed by:                  0%
Elapsed processing time:         00:00:03
2. In Example 12-20, we simulate data loss in the filesets backed up in step 1 by deleting the directories /sfs/sanfs/aixfiles/aixhome/inst.images and /sfs/sanfs/lixfiles/linuxhome/install.
Example 12-20 Simulating the loss of data by deleting directories that we backed up in step 1
Rome:/sfs/sanfs >rm -rf /sfs/sanfs/lixfiles/linuxhome/install
Rome:/sfs/sanfs >rm -rf /sfs/sanfs/aixfiles/aixhome/inst.images
3. Now we restore our files using the Tivoli Storage Manager AIX command line client from the backup created in step 1, as shown in Example 12-21 on page 509.
Example 12-21 Restoring files from Tivoli Storage Manager AIX client backup
dsmc restore "/sfs/sanfs/aixfiles/aixhome/inst.images/*";dsmc restore "/sfs/sanfs/lixfiles/linuxhome/install/*"
IBM Tivoli Storage Manager
Command Line Backup/Archive Client Interface - Version 5, Release 2, Level 2.0
(c) Copyright by IBM Corporation and other(s) 1990, 2003. All Rights Reserved.

Restore function invoked.

Node Name: AIXROME
Session established with server NPSRV2: Windows
  Server Version 5, Release 2, Level 2.0
  Server date/time: 06/02/04 09:59:47  Last access: 06/02/04 09:56:34
ANS1247I Waiting for files from the server...
Restoring             72 /sfs/sanfs/aixfiles/aixhome/inst.images [Done]
Restoring     27,673,600 /sfs/sanfs/aixfiles/aixhome/inst.images/IP22727.tivoli.tsm.client.ba.32bit [Done]

Restore processing finished.

Total number of objects restored:       2
Total number of objects failed:         0
Total number of bytes transferred:  26.39 MB
Data transfer time:                 20.45 sec
Network data transfer rate:      1,321.14 KB/sec
Aggregate data transfer rate:    1,174.53 KB/sec
Elapsed processing time:         00:00:23

IBM Tivoli Storage Manager
Command Line Backup/Archive Client Interface - Version 5, Release 2, Level 2.0
(c) Copyright by IBM Corporation and other(s) 1990, 2003. All Rights Reserved.

Restore function invoked.

Node Name: AIXROME
Session established with server NPSRV2: Windows
  Server Version 5, Release 2, Level 2.0
  Server date/time: 06/02/04 10:00:10  Last access: 06/02/04 09:59:47
ANS1247I Waiting for files from the server...
Restoring             72 /sfs/sanfs/lixfiles/linuxhome/install [Done]
Restoring        696,679 /sfs/sanfs/lixfiles/linuxhome/install/TIVguid.i386.rpm [Done] < 680.40 KB> [ - ]

Restore processing finished.

Total number of objects restored:       2
Total number of objects failed:         0
Total number of bytes transferred: 680.40 KB
Data transfer time:                  0.36 sec
Network data transfer rate:      1,877.42 KB/sec
Aggregate data transfer rate:      135.87 KB/sec
Elapsed processing time:         00:00:05
509
4. Now we check that the files have been restored to their original locations, as shown in Example 12-22.
Example 12-22 Check that files have been successfully restored
Rome:/sfs/sanfs >ls -l /sfs/sanfs/lixfiles/linuxhome/install
total 2048
-rw-rw-rw-    1 root     system       696679 Jun 01 13:26 TIVguid.i386.rpm
Rome:/sfs/sanfs >ls -l /sfs/sanfs/aixfiles/aixhome/inst.images
total 55296
-rw-r-----    1 root     system     27673600 Jun 01 14:38 IP22727.tivoli.tsm.client.ba.32bit
Now, in order to back up the SAN File System FlashCopy image using the Tivoli Storage Manager client, you would normally run the following command:
dsmc incr "/sfs/sanfs/aixfiles/aixhome/.flashcopy/Image06-01-2004/*" -subdir=yes
In this case, the Tivoli Storage Manager client processes the data in the /sfs/sanfs/aixfiles/aixhome/.flashcopy/Image06-01-2004/ directory and its subdirectories. With snapshotroot, we can base the backup on the SAN File System FlashCopy image, while still preserving (from the Tivoli Storage Manager server's point of view) the actual absolute directory structure and file names from which that particular FlashCopy image originates. The main reason to consider a snapshotroot-based backup, however, is that it gives you the ability to back up SAN File System FlashCopy images using Tivoli Storage Manager incremental methods. This requires you to add virtual mount point definitions to the Tivoli Storage Manager client's dsm.sys configuration file for:
- All the filesets you plan to back up
- Each SAN File System FlashCopy image you create for any of those filesets
In Example 12-24, you can see how we have defined virtual mount points in our dsm.sys configuration file.
Example 12-24 Virtual mount point definitions example
virtualmountpoint /sfs/sanfs/aixfiles/aixhome
virtualmountpoint /sfs/sanfs/aixfiles/aixhome/.flashcopy/Image06-01-2004
virtualmountpoint /sfs/sanfs/aixfiles/aixhome/.flashcopy/Image06-02-2004
This is because, without virtual mount point definitions, the Tivoli Storage Manager server would store all SAN File System related backups in a single file space (in our example, /sfs for node AIXROME), as shown in Example 12-25.
Example 12-25 q filespace command: no virtual mount point definitions
tsm: NPSRV2>q filesp
Node Name: AIXROME
File space Name: /sfs
FSID: 5
Platform: AIX
File space Type: SANFS
Is File space Unicode?: No
Capacity (MB): 294,480.0
Pct Util: 3.7
If, however, we define a virtual mount point for our aixfiles fileset and for all of our SAN File System FlashCopy images, and then run a Tivoli Storage Manager backup, the file space layout on the Tivoli Storage Manager server (output of the q filesp command) looks as shown in Example 12-26.
Example 12-26 q filespace command: With virtual mount point definitions
tsm: NPSRV2>q filesp
Node Name: AIXROME
File space Name: /sfs
FSID: 5
Platform: AIX
File space Type: SANFS
Is File space Unicode?: No
Capacity (MB): 294,480.0
Pct Util: 3.7

Node Name: AIXROME
File space Name: /sfs/sanfs/aixfiles/aixhome
FSID: 6
Platform: AIX
File space Type: SANFS
Is File space Unicode?: No
Capacity (MB): 304,688.0
Pct Util: 3.6

Node Name: AIXROME
File space Name: /sfs/sanfs/aixfiles/aixhome/.flashcopy/Image06-01-2004
FSID: 7
Platform: AIX
File space Type: SANFS
Is File space Unicode?: No
Capacity (MB): 304,688.0
Pct Util: 3.6

Node Name: AIXROME
File space Name: /sfs/sanfs/aixfiles/aixhome/.flashcopy/Image06-02-2004
FSID: 8
Platform: AIX
File space Type: SANFS
Is File space Unicode?: No
Capacity (MB): 304,688.0
Pct Util: 3.6
So far, we have explained the purpose of the snapshotroot option and outlined the role of the Tivoli Storage Manager client's virtual mount points. Now we describe how to actually back up SAN File System data using the snapshotroot option.
With the snapshotroot backup approach, you do not actually run the backup against the SAN File System FlashCopy image directory, but against the actual data directory. The FlashCopy directory is then specified as the value of the snapshotroot option, as shown here:
dsmc incr /sfs/sanfs/aixfiles/aixhome/ -snapshotroot=/sfs/sanfs/aixfiles/aixhome/.flashcopy/Image06-02-2004
Now that we have explained the concept behind backing up SAN File System data from FlashCopy images using the Tivoli Storage Manager client snapshotroot option, we can show a step-by-step scenario for this type of backup in our lab environment.
2. Create the SAN File System FlashCopy image. You can use either the SAN File System graphical console or the command-line interface. In our example, we use the command-line interface:
sfscli>mkimage -fileset aixfiles -dir Image06-01-2004 aixfiles_fcopy1
3. Add a new virtual mount point definition in the dsm.sys file for the newly created SAN File System FlashCopy image in step 2:
virtualmountpoint /sfs/sanfs/aixfiles/aixhome/.flashcopy/Image06-01-2004
4. Make sure that the .flashcopy directory is excluded from normal Tivoli Storage Manager backups by adding the appropriate exclude.dir option into the dsm.sys file:
exclude.dir /.../.flashcopy
5. This step is for AIX only; if you are configuring this example on a client other than AIX, skip this step. Add the testflag option to the dsm.sys file to prevent undesired object updates due to AIX LVM inode number differences between the actual and FlashCopy data:
testflag ignoreinodeupdate
6. Example 12-27 shows the completed dsm.sys file for our environment.
Example 12-27 Example of the dsm.sys file in our environment
Rome:/usr/tivoli/tsm/client/ba/bin >cat dsm.sys
SErvername config1
COMMmethod TCPip
TCPPort 1500
TCPServeraddress 9.42.164.126
Nodename AIXRome
Passwordaccess generate
***** added for SAN File System *****
testflag ignoreinodeupdate
virtualmountpoint /sfs/sanfs/aixfiles/aixhome
virtualmountpoint /sfs/sanfs/aixfiles/aixhome/.flashcopy/Image06-01-2004
virtualmountpoint /sfs/sanfs/aixfiles/aixhome/.flashcopy/Image06-02-2004
7. Perform the Tivoli Storage Manager incremental, selective, or archive backup operation. In our case, we performed an incremental backup of the fileset, with the snapshotroot based on Image06-02-2004:
dsmc incr /sfs/sanfs/aixfiles/aixhome/ \
-snapshotroot=/sfs/sanfs/aixfiles/aixhome/.flashcopy/Image06-02-2004
8. Now that we have backed up the SAN File System data incrementally using the FlashCopy image, we can delete that FlashCopy image on the MDS using the command-line interface:
rmimage -fileset aixfiles aixfiles_fcopy1
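The steps above lend themselves to scripting. The sketch below, under our lab's fileset and path names, chains the mkimage, dsmc incr, and rmimage steps into one script; because sfscli and dsmc require a live MDS and Tivoli Storage Manager server, the commands are written to a review log rather than executed, so the sequence can be inspected (or scheduled) before running it for real.

```shell
#!/bin/sh
# Sketch only: steps 2, 7, and 8 chained into one script. The sfscli and
# dsmc invocations are written to a command log instead of being executed;
# fileset name and paths are from our lab setup, not a fixed convention.
LOG=/tmp/sanfs-backup-cmds.log
FILESET=aixfiles
ATTACH=/sfs/sanfs/aixfiles/aixhome
IMAGE=$(date +Image%m-%d-%Y)    # one image per day, named by date

: > "$LOG"
echo "sfscli mkimage -fileset $FILESET -dir $IMAGE ${FILESET}_fcopy1" >> "$LOG"
echo "dsmc incr $ATTACH/ -snapshotroot=$ATTACH/.flashcopy/$IMAGE" >> "$LOG"
echo "sfscli rmimage -fileset $FILESET ${FILESET}_fcopy1" >> "$LOG"
cat "$LOG"
```

To run the commands directly instead of logging them, replace each echo line with the command itself; remember that the matching virtualmountpoint entry must already exist in dsm.sys before the dsmc incr step.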
In the next section, we walk through a backup scenario based on the snapshotroot option, which demonstrates how Tivoli Storage Manager incremental backup using snapshotroot really works.
2. Next, we add the virtual mount point definition to our DSM.SYS configuration file and run an incremental backup of the fileset's data using the snapshotroot option, as shown in Example 12-29.
Example 12-29 Run Tivoli Storage Manager backup of the data
Rome:/sfs/sanfs/aixfiles/aixhome >dsmc incr /sfs/sanfs/aixfiles/aixhome/ -snapshotroot=/sfs/sanfs/aixfiles/aixhome/.flashcopy/aixfiles-image-1
IBM Tivoli Storage Manager
Command Line Backup/Archive Client Interface - Version 5, Release 2, Level 2.0
(c) Copyright by IBM Corporation and other(s) 1990, 2003. All Rights Reserved.

Node Name: AIXROME
Session established with server NPSRV2: Windows
Server Version 5, Release 2, Level 2.0
Server date/time: 06/08/04 14:29:57  Last access: 06/08/04 14:29:05

Incremental backup of volume '/sfs/sanfs/aixfiles/aixhome/'
Directory-->                 48 /sfs/sanfs/aixfiles/aixhome/lost+found [Sent]
Normal File-->        5,495,760 /sfs/sanfs/aixfiles/aixhome/file1.exe [Sent]
Successful incremental backup of '/sfs/sanfs/aixfiles/aixhome/*'

Total number of objects inspected:       2
Total number of objects backed up:       2
Total number of objects updated:         0
Total number of objects rebound:         0
Total number of objects deleted:         0
Total number of objects expired:         0
Total number of objects failed:          0
Total number of bytes transferred:    5.24 MB
Data transfer time:                   0.44 sec
Network data transfer rate:      11,973.98 KB/sec
Aggregate data transfer rate:     1,775.06 KB/sec
Objects compressed by:                   0%
Elapsed processing time:            00:00:03
Rome:/sfs/sanfs/aixfiles/aixhome >
The file /sfs/sanfs/aixfiles/aixhome/file1.exe has been backed up by Tivoli Storage Manager using the SAN File System image in /sfs/sanfs/aixfiles/aixhome/.flashcopy/aixfiles-image-1/file1.exe.
3. Next, we add a new file named file2.exe to the /sfs/sanfs/aixfiles/aixhome directory.
4. Now we create a new SAN File System FlashCopy image in the /sfs/sanfs/aixfiles/aixhome/.flashcopy/aixfiles-image-2 directory, as shown in Example 12-30.
Example 12-30 Creating a new FlashCopy image
sfscli> mkimage -fileset aixfiles -dir aixfiles-image-2 aixfiles-image-2
CMMNP5168I FlashCopy image aixfiles-image-2 on fileset aixfiles was created successfully.
5. Now we will add a new virtual mount point for the new SAN File System FlashCopy image aixfiles-image-2 (Example 12-31).
Example 12-31 Adding a new virtual mount point definition into DSM.SYS and run new backup
Rome:/sfs/sanfs/aixfiles/aixhome >cat /usr/tivoli/tsm/client/ba/bin/dsm.sys
SErvername config1
COMMmethod TCPip
TCPPort 1500
TCPServeraddress 9.42.164.126
Nodename AIXRome
Passwordaccess generate
***** added for SAN File System *****
testflag ignoreinodeupdate
virtualmountpoint /sfs/sanfs/aixfiles/aixhome
virtualmountpoint /sfs/sanfs/aixfiles/aixhome/.flashcopy/aixfiles-image-1
virtualmountpoint /sfs/sanfs/aixfiles/aixhome/.flashcopy/aixfiles-image-2
6. Now, run the backup again, using the snapshotroot option pointing to the latest FlashCopy image, aixfiles-image-2 (Example 12-32).
Example 12-32 Run backup again, this time using the aixfiles-image-2 image
Rome:/sfs/sanfs/aixfiles/aixhome >dsmc incr /sfs/sanfs/aixfiles/aixhome/ \
-snapshotroot=/sfs/sanfs/aixfiles/aixhome/.flashcopy/aixfiles-image-2
IBM Tivoli Storage Manager
Command Line Backup/Archive Client Interface - Version 5, Release 2, Level 2.0
(c) Copyright by IBM Corporation and other(s) 1990, 2003. All Rights Reserved.

Node Name: AIXROME
Session established with server NPSRV2: Windows
Server Version 5, Release 2, Level 2.0
Server date/time: 06/08/04 14:45:17  Last access: 06/08/04 14:29:57

Incremental backup of volume '/sfs/sanfs/aixfiles/aixhome/'
Normal File-->        5,495,760 /sfs/sanfs/aixfiles/aixhome/file2.exe [Sent]
Successful incremental backup of '/sfs/sanfs/aixfiles/aixhome/*'

Total number of objects inspected:       3
Total number of objects backed up:       1
Total number of objects updated:         0
Total number of objects rebound:         0
Total number of objects deleted:         0
Total number of objects expired:         0
Total number of objects failed:          0
Total number of bytes transferred:    5.24 MB
Data transfer time:                   0.40 sec
Network data transfer rate:      13,141.99 KB/sec
Aggregate data transfer rate:     1,771.75 KB/sec
Objects compressed by:                   0%
Elapsed processing time:            00:00:03
As you can see, the /sfs/sanfs/aixfiles/aixhome directory has been backed up incrementally, this time using the aixfiles-image-2 image; therefore, only the newly added file, file2.exe, was backed up.
7. Now we create the file file3.exe in the /sfs/sanfs/aixfiles/aixhome directory.
8. Next, we make a SAN File System FlashCopy image named aixfiles-image-3, as shown in Example 12-33.
Example 12-33 Making another SAN File System FlashCopy image
sfscli> mkimage -fileset aixfiles -dir aixfiles-image-3 aixfiles-image-3
CMMNP5168I FlashCopy image aixfiles-image-3 on fileset aixfiles was created successfully.
9. Next, we add another file named file4.exe.
10. Finally, we run a backup, pointing the snapshotroot to the aixfiles-image-3 SAN File System FlashCopy image (do not forget to add a new virtual mount point for the aixfiles-image-3 image to the DSM.SYS configuration file). In this case, only file3.exe is backed up and file4.exe is ignored. Why? Because we added file4.exe to the actual file system directory and did not generate a new SAN File System FlashCopy image afterwards. The aixfiles-image-3 image does not contain file4.exe, as the file was created after the image was taken; therefore, file4.exe is not backed up. This is how the snapshotroot option works: in each case, the fileset is backed up incrementally, using the specified FlashCopy image as a base. Example 12-34 shows this final backup.
Example 12-34 Final backup
Rome:/sfs/sanfs/aixfiles/aixhome >dsmc incr /sfs/sanfs/aixfiles/aixhome/ -snapshotroot=/sfs/sanfs/aixfiles/aixhome/.flashcopy/aixfiles-image-3
IBM Tivoli Storage Manager
Command Line Backup/Archive Client Interface - Version 5, Release 2, Level 2.0
(c) Copyright by IBM Corporation and other(s) 1990, 2003. All Rights Reserved.

Node Name: AIXROME
Session established with server NPSRV2: Windows
Server Version 5, Release 2, Level 2.0
Server date/time: 06/08/04 15:07:56  Last access: 06/08/04 14:45:17

Incremental backup of volume '/sfs/sanfs/aixfiles/aixhome/'
Normal File-->        5,495,760 /sfs/sanfs/aixfiles/aixhome/file3.exe [Sent]
Successful incremental backup of '/sfs/sanfs/aixfiles/aixhome/*'

Total number of objects inspected:       4
Total number of objects backed up:       1
Total number of objects updated:         0
Total number of objects rebound:         0
Total number of objects deleted:         0
Total number of objects expired:         0
Total number of objects failed:          0
Total number of bytes transferred:    5.24 MB
Data transfer time:                   0.40 sec
Network data transfer rate:      13,115.34 KB/sec
Aggregate data transfer rate:     1,768.19 KB/sec
Objects compressed by:                   0%
Elapsed processing time:            00:00:03
As you can see, our assumption that only file3.exe would be backed up was correct. The Tivoli Storage Manager backup client searches the actual data directory /sfs/sanfs/aixfiles/aixhome for existing objects to be backed up, but for the backup itself, it uses the SAN File System FlashCopy directory specified by the snapshotroot option, in our case, /sfs/sanfs/aixfiles/aixhome/.flashcopy/aixfiles-image-3.
The above scenario also explains the role of the virtual mount point entries in the DSM.SYS configuration file. As you can see in Example 12-31 on page 515, there is one virtual mount point created for the /sfs/sanfs/aixfiles/aixhome directory. This entry tells the Tivoli Storage Manager server to create and use a separate file space for the /sfs/sanfs/aixfiles/aixhome directory. See the output of the q filesp command from the Tivoli Storage Manager command-line interface shown in Example 12-35.
Example 12-35 Query filesp command output from Tivoli Storage Manager CLI interface
tsm: NPSRV2>q filesp
Node Name: AIXROME
File space Name: /sfs/sanfs/aixfiles/aixhome
FSID: 10
Platform: AIX
File space Type: SANFS
Is File space Unicode?: No
Capacity (MB): 352,800.0
Pct Util: 8.1
Simply put, if you did not specify a virtual mount point for the /sfs/sanfs/aixfiles/aixhome directory (which is also the attach point of the SAN File System fileset aixfiles) and then ran a backup, the Tivoli Storage Manager file space name would be /sfs only (as shown in Example 12-25 on page 511), and you would not be able to run incremental backups using the snapshotroot option. So why do we need virtual mount point entries in DSM.SYS for all of our SAN File System FlashCopy images? Because you can only specify a mount point, not an arbitrary directory, as the value of the snapshotroot option. If the virtualmountpoint entry for the aixfiles-image-3 image were not present and you tried to run a backup with snapshotroot pointing to the /sfs/sanfs/aixfiles/aixhome/.flashcopy/aixfiles-image-3 directory, the Tivoli Storage Manager client would generate an error message, as shown in Example 12-36.
Example 12-36 Need for dsm.sys virtual mount point entries for SAN File System FlashCopy images
Rome:/sfs/sanfs/aixfiles/aixhome >dsmc incr /sfs/sanfs/aixfiles/aixhome/ -snapshotroot=/sfs/sanfs/aixfiles/aixhome/.flashcopy/aixfiles-image-3
ANS7533E The specified drive '/sfs/sanfs/aixfiles/aixhome/.flashcopy/aixfiles-image-3' does not exist or is not a local drive.
Conclusions
Using the snapshotroot option with your Tivoli Storage Manager backup client gives you the ability to use SAN File System FlashCopy images to back up data while still using the incremental backup method. To use the snapshotroot option, you need to add a virtualmountpoint entry for the actual fileset and for each SAN File System FlashCopy image generated for that particular fileset. You can avoid manual modification of your DSM.SYS file (to add a specific virtualmountpoint entry each time) by choosing a standard naming convention for your SAN File System FlashCopy images (Image-1, Image-2, Image-3, and so on). Because SAN File System supports a maximum of 32 FlashCopy images, you can predefine all your virtual mount points in your DSM.SYS configuration file and then automate the backup process using scripts.
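The pre-definition idea above can be sketched as a short script. Assuming the Image-1 through Image-32 naming convention suggested here (the attach point is our lab's fileset path; adjust for your environment), it generates all 33 virtualmountpoint entries once, into a fragment you can merge into DSM.SYS:

```shell
#!/bin/sh
# Sketch of the suggestion above: with a fixed naming convention (Image-1
# through Image-32, matching the 32-image limit), every virtualmountpoint
# entry can be generated up front. The attach point is from our lab setup;
# the output file is a fragment to merge into DSM.SYS.
ATTACH=/sfs/sanfs/aixfiles/aixhome
OUT=/tmp/dsm.sys.vmp

: > "$OUT"
echo "virtualmountpoint $ATTACH" >> "$OUT"   # entry for the fileset itself
i=1
while [ "$i" -le 32 ]; do                    # one entry per possible image
    echo "virtualmountpoint $ATTACH/.flashcopy/Image-$i" >> "$OUT"
    i=$((i + 1))
done
cat "$OUT"
```

With the entries predefined, a scheduled backup script only needs to create an image under the next name in the sequence and point snapshotroot at it.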
Chapter 13.
13.1 Overview
This chapter covers the different features and capabilities for performing problem determination (PD) for SAN File System. We have previously described the major functions and features of SAN File System. There are several possibilities and layers to aid in monitoring the system and providing problem determination support. This chapter will describe these items and show how they can be used.
Note: If a firewall is used, be aware that many firewalls disable VPN connections by default (for example, the Cisco PIX firewall). Consult your firewall documentation on how to enable VPN traffic.
3. The support engineer obtains the Customer Connection ID for the newly established secure VPN connection from the client.
4. The support engineer establishes a secure connection to the VPN server.
5. Using the connection ID and an account on the Master Console, the support representative establishes secure access to the Master Console over the VPN tunnel.
6. The support representative connects to the SAN File System MDS using SSH.
This process is shown in Figure 13-2.
Figure 13-2 Remote support access through the VPN tunnel (Support Desktop PC, VPN Gateway, RCHASRS3, Master Console)
SAN File System message identifiers have the format XXXYYnnnnZ, where:
XXX   Component
YY    Subcomponent
nnnn  Message number
Z     Severity level: I=Informational, W=Warning, E=Error, S=Severe
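The fixed-width XXXYYnnnnZ layout makes message identifiers easy to split in a script, for example when filtering saved logs by component or severity. This sketch uses a hypothetical helper, parse_msgid, built on cut; the sample identifier HSTPG0022I appears later in this chapter (Example 13-2):

```shell
#!/bin/sh
# Sketch: split a message identifier of the form XXXYYnnnnZ into its four
# fields with cut. parse_msgid is a hypothetical helper, not a SAN File
# System command.
parse_msgid() {
    id=$1
    comp=$(echo "$id" | cut -c1-3)    # XXX: component
    sub=$(echo "$id" | cut -c4-5)     # YY: subcomponent
    num=$(echo "$id" | cut -c6-9)     # nnnn: message number
    sev=$(echo "$id" | cut -c10)      # Z: I, W, E, or S
    echo "$comp $sub $num $sev"
}

parse_msgid HSTPG0022I | tee /tmp/msgid.parsed   # prints "HST PG 0022 I"
```

A grep on the tenth character (for example, '^.........[ES]') gives a quick way to pull only Error and Severe entries out of a captured log.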
The catlog command has the following syntax:

catlog [-? | -h | -help] [-entries 25 | 50 | 75 | 100]
       [-date YYYY-MM-DD] [-log admin | audit | cluster | event | security]
       [-order newest | oldest] [-level info,err,warn,sev]
Parameters
-? | -h | -help
Displays a detailed description of this command, including syntax, parameter descriptions, and examples. If you specify a help option, all other command options are ignored.
-entries
Specifies the number of log entries to show at a time, from oldest to newest. Valid values are 25, 50, 75, or 100. If not specified, this command shows the entire log.
-log
Displays entries in the specified log, ordered by time stamp starting with the most recent entry. The default is cluster.
admin     Displays entries in the administrative log, which maintains a history of messages created by the administrative server.
audit     Displays entries in the audit log, which maintains a history of all commands issued by any administrator for all metadata servers in the cluster.
cluster   Displays entries in the cluster log, which maintains a history of messages created by all metadata servers in the cluster.
event     Displays event entries in the event log, which maintains a history of event messages issued by all metadata servers in the cluster.
security  Displays entries in the security log, which maintains a history of administrative-user login activity.
-date
Specifies the date at which you want the displayed log entries to start. The date must be in the format YYYY-MM-DD, where YYYY is the year, MM is the month, and DD is the day. This date must be the current date or older; future dates are not accepted.
-order
Specifies the direction of the displayed log entries. You can specify one of the following values:
newest    Displays the log entries from newest to oldest. This is the default value if the -date parameter is not specified.
oldest    Displays the log entries from oldest to newest. This is the default value if the -date parameter is specified.
-level info | err | warn | sev
Specifies the severity level of the displayed log entries. If not specified, all severity levels are displayed. You can specify one or more levels, separated by a comma and no space (for example, -level info,warn,err).
Description
If you run this command from an engine hosting a subordinate metadata server, only logs for the local engine are displayed. If you run this command from the engine hosting the master metadata server, logs for the entire cluster are displayed.
If there are log entries that have not been displayed, you are prompted to press Enter to display the next set of entries, or to type exit and press Enter to stop. This command displays the following information for the specified log:
* Message identifier
* Severity level (Info, Error, Warning, Severe)
* Message type (Normal or Event)
* Name of the metadata server that generated the message
* Date and time the message was generated
* Message description
Tip: It is important that the date and time are correct on the metadata servers so that messages are logged correctly and log entries are displayed correctly when using the -date parameter.
Example: Display the event entries in the cluster log
The following example displays the error messages in the event log that occurred on or after January 4, 2003.

sfscli> catlog -log event -date 2003-01-04 -level err
ID         Level Type  Server Date and Time          Message
============================================================
HSTSS0009E Error Event ST3    Jan 2, 2003 8:39:15 PM The Metadata server rpm is not installed.
HSTNL0019E Error Event ST2    Jan 2, 2003 8:40:46 PM Unable to extract boot record when server is running.
Although the Audit Log (log.audit), Trace Log (log.trace), and Server Log (log.std) have a maximum file size of 250 MB, SAN File System actually stores 500 MB of data for each of these logs. When any of these logs reaches its maximum size, it is renamed to include the extension .old; if a file by that name already exists, SAN File System overwrites the existing file. The log is then cleared so that it can start accepting new messages again. The log.dmp file starts over on any of these occurrences:
- The metadata server has restarted (for example, it restarted due to a server crash).
- The start of each day.
- The file reaches a size of 1 MB.
When you display these logs from the master metadata server using either the administrative command-line interface or the SAN File System console, you see a consolidated view of all the logs from each engine in the cluster. The consolidated view of the server message log is called the Cluster log.
Note: You can also display the Event Log. This log is a subset of the messages stored in the Cluster Log; it contains only messages with a message type of event.
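The rename-to-.old rotation described above is a common pattern and can be illustrated in a few lines of shell. This is a sketch of the behavior only, run against a tiny demo file under /tmp (not against the real /usr/tank/server/log files, whose rotation SAN File System performs itself):

```shell
#!/bin/sh
# Sketch of the rotation described above: when a log reaches its size cap,
# rename it to <name>.old (overwriting any previous .old copy) and clear it,
# so at most two generations are kept: hence 500 MB stored for a 250 MB cap.
rotate_log() {
    log=$1 max=$2
    size=$(wc -c < "$log")
    if [ "$size" -ge "$max" ]; then
        mv -f "$log" "$log.old"      # overwrite an existing .old file
        : > "$log"                   # clear the log for new messages
    fi
}

LOG=/tmp/log.audit.demo
printf '12345' > "$LOG"              # 5-byte demo log stands in for log.audit
rotate_log "$LOG" 5                  # cap reached: rotate
```

After the call, the demo log is empty and its previous contents live in /tmp/log.audit.demo.old, mirroring how a full log.audit becomes log.audit.old.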
Event log
The event log, /usr/tank/server/log/log.std (Example 13-2), records normal and event messages (including error messages) for the SAN File System MDS. These messages capture routine server activity and error conditions, and are always enabled. This log file always exists for each running instance of a SAN File System MDS.
Example 13-2 Sample log message written to log.std
2004-06-03 05:51:40 INFORMATIONAL HSTPG0022I N mds1 Now running as the cluster master.
You can view the log in sfscli using catlog -log event, as shown in Example 13-3.
Example 13-3 View the server log
sfscli> catlog -log event -date 2004-06-04
ID         Level   Server Date and Time           Message
===========================================================================================
HSTCM0395W Warning mds4   Jun 04, 2004 3:20:01 AM Alert. The server state has changed from Online(10) to NotRunning(0).
HSTCM0396E Error   mds3   Jun 04, 2004 7:59:57 AM Alert. The server state has changed from Online(10) to Joining(5).
HSTCM0394I Info    mds3   Jun 04, 2004 7:59:57 AM Alert. The server state has changed from Joining(5) to Online(10).
Note that the catlog sfscli command, when run from the master metadata server, returns cluster-wide logs. When run from a subordinate MDS, catlog returns only logs for the local server.
Audit log
The audit log /usr/tank/server/log/log.audit (Example 13-4) contains administrative audit messages. Audit messages are generated in response to operations performed by the SAN File System administrative server. It does not capture every administrative operation, but records all commands that modify system or user metadata, including commands that would have made such a change but failed. The file also records the user ID issuing the command, along with time stamp and completion status of the requested operation. The file does not record simple query operations; such operations do not alter metadata, and since they are likely to be more numerous than those that do, their presence could easily overwhelm logging and interpretation of more meaningful operations. Audit logging is always enabled, and the log file always exists for each running instance of an MDS.
Example 13-4 Sample audit message written to log.audit
2004-06-03 21:19:01 INFORMATIONAL HSTAD0019I A mds1 User Name: ITSOAdmin Command Name: ServerServiceStopService Parameters: SYSTEMCREATIONCLASSNAME=STC_ComputerSystem SYSTEMNAME=mds4 CREATIONCLASSNAME=STC_TankService NAME=TankService . Command Succeeded.
You can use OS utilities (for example, cat or vi) to view the actual file, or within sfscli, use catlog -log audit, as shown in Example 13-5 on page 527.
Example 13-5 View the audit log
sfscli> catlog -log audit -date 2004-06-03
ID         Level Server Date and Time           Message
===========================================================================================
HSTAD0019I Info  mds4   Jun 03, 2004 2:56:54 AM User Name: ITSOAdmin Command Name: Filesetlistassociatedpools Parameters: NAME=user1 . Command Succeeded.
Trace log
The /usr/tank/server/log/log.trace file receives trace messages. Because a minimal amount of tracing is always enabled to support first-failure data capture (FFDC), this file (Example 13-7 on page 528) always exists. However, the number of messages and the level of detail they convey depend on the current trace settings for the server in question. The default trace level, active at all times, is 0, which sends only the most important messages and is useful for providing initial FFDC information. Tracing messages provide details about the execution of internal code paths: a look inside the black box. Tracing is therefore of interest primarily to IBM support, service, and development, and clients would typically change settings only at their direction. Higher levels of tracing can generate significant CPU activity, so its use should be limited to where necessary. Tracing can be enabled via the GUI, or through the CLI using the trace command. You can control:
- When tracing begins and ends
- The MDS components for which tracing will occur
- The level of detail (verbosity) to show during tracing
To get help on the parameters available with the trace command, enter legacy trace from the sfscli session, as shown in Example 13-6.
Example 13-6 Trace options
sfscli> legacy trace
trace: Trace Command Help
       ------------------
trace enable [ module ]
  - if module is omitted, the enabled modules are displayed
  - if module is given, the module will emit messages
trace disable [ module ]
  - if module is omitted, the disabled modules are displayed
  - if module is given, the module will stop emitting messages
trace list
  - displays a list of trace modules in the server
trace verbosity [0 - 9]
  - if value is omitted, the current verbosity is printed
  - if value is given, it sets the volume of tracing output (0 = min, 9 = max)
trace emit "string"
  - emits the specified string to the trace log
NOTE: Module names can be specified with wildcard (* or ?) characters.
Administrative log
The administrative log (/usr/tank/admin/log/cimom.log) contains messages generated by the Administrative server. If, from the master metadata server, you display the administrative log from either the administrative command-line interface or the SAN File System console, all administrative logs on all engines in the cluster are consolidated into a single view (see Example 13-8).
Example 13-8 Cimom log
2005-08-26 07:17:48-08:00 I CMMOM0203I **** CIMOM Server Started ****
2005-08-26 07:17:48-08:00 I CMMOM0204I CIMOM Version: 1.2.0.21
2005-08-26 07:17:48-08:00 I CMMOM0205I CIMOM Build Date: 06/13/05 Build Time: 03:51:40 PM
2005-08-26 07:17:48-08:00 I CMMOM0206I OS Name: Linux Version: 2.4.21-231-smp
2005-08-26 07:17:48-08:00 I CMMOM0200I SSG/SSD CIM Object Manager
2005-08-26 07:17:51-08:00 I CMMOM0410I Authorization is active
2005-08-26 07:17:51-08:00 I CMMOM0400I Authorization module = com.ibm.storage.storagetank.auth.SFSLocalAuthModule
2005-08-26 07:17:51-08:00 I CMMOM0901I IndicationProcessor started
2005-08-26 07:17:51-08:00 I CMMOM0906I No pre-existing indication subscriptions
2005-08-26 07:17:51-08:00 I CMMOM0404I Security server starting on port 5989
Security log
The security log (/usr/tank/admin/log/security.log) displays the administrative user login activity for the Administrative server. If you display these logs from either the CLI or the SAN File System console, all administrative and security logs on all engines in the cluster are consolidated into a single view. To view this consolidated log using the CLI, use catlog -log security for the security log (see Example 13-9 on page 529).
Example 13-9 Security log
sfscli> catlog -log security
ID Level Type Server Date and Time Message
==========================================================================================
CIMOM[com.ibm.http.HTTPServer.SecurityServer(HTTPServer.java:430)]: Info tank-mds3 Aug 26, 2005 2:17:51 PM The creation date of KeyStore is Fri Aug 26 07:16:49 PDT 2005
CIMOM[com.ibm.http.TrustStoreThread.run(TrustStoreThread.java:78)]: Info tank-mds3 Aug 26, 2005 2:17:51 PM The current date is Fri Aug 26 07:17:51 PDT 2005
CMMOM0302I Info tank-mds3 Aug 26, 2005 2:20:16 PM User (null) on client localhost could not be authenticated
CIMOM[com.ibm.http.TrustStoreThread.run(TrustStoreThread.java:78)]: Info tank-mds3 Aug 27, 2005 2:17:51 PM The current date is Sat Aug 27 07:17:51 PDT 2005
CIMOM[com.ibm.http.TrustStoreThread.run(TrustStoreThread.java:78)]: Info tank-mds3 Aug 28, 2005 2:17:51 PM The current date is Sun Aug 28 07:17:51 PDT 2005
Use catlog -log admin for the administrative log, as shown in Example 13-10.
Example 13-10 Consolidated administrative log from the CLI
sfscli> catlog -log admin
ID         Level Type   Server    Date and Time           Message
==========================================================================================
CMMOM0203I Info  Normal tank-mds3 Aug 26, 2005 2:17:48 PM **** CIMOM Server Started ****
CMMOM0204I Info  Normal tank-mds3 Aug 26, 2005 2:17:48 PM CIMOM Version: 1.2.0.21
CMMOM0205I Info  Normal tank-mds3 Aug 26, 2005 2:17:48 PM CIMOM Build Date: 06/13/05 Build Time: 03:51:40 PM
CMMOM0206I Info  Normal tank-mds3 Aug 26, 2005 2:17:48 PM OS Name: Linux Version: 2.4.21-231-smp
CMMOM0200I Info  Normal tank-mds3 Aug 26, 2005 2:17:48 PM SSG/SSD CIM Object Manager
CMMOM0410I Info  Normal tank-mds3 Aug 26, 2005 2:17:51 PM Authorization is active
CMMOM0400I Info  Normal tank-mds3 Aug 26, 2005 2:17:51 PM Authorization module = com.ibm.storage.storagetank.auth.SFSLocalAuthModule
CMMOM0901I Info  Normal tank-mds3 Aug 26, 2005 2:17:51 PM IndicationProcessor started
CMMOM0906I Info  Normal tank-mds3 Aug 26, 2005 2:17:51 PM No pre-existing indication subscriptions
CMMOM0404I Info  Normal tank-mds3 Aug 26, 2005 2:17:51 PM Security server starting on port 5989
CMMOM0402I Info  Normal tank-mds3 Aug 26, 2005 2:17:51 PM Platform is Unix
CMMOM0901I Info  Normal mds1      May 06, 2004 3:19:58 AM IndicationProcessor was started
CMMOM0906I Info  Normal mds1      May 06, 2004 3:19:58 AM No preexisting indication subscriptions
CMMOM0404I Info  Normal mds1      May 06, 2004 3:19:58 AM Security server starting on port 5989
CMMOM0402I Info  Normal mds1      May 06, 2004 3:19:58 AM Platform is Unix
You can use the IBM eGatherer tool to collect all the logs needed by IBM Technical Support. It is available from:
http://www.ibm.com/pc/support/site.wss/document.do?lndocid=MIGR-4R5VKC
syslog facility
The SAN File System client for AIX generates both log and trace messages, which are routed through the syslog facility on the AIX operating system. The syslog facility captures log and trace output from the kernel as well as other operating system services. By default, the syslog facility discards all kernel output. However, you can configure the syslog facility to specify a destination for the messages by modifying /etc/syslog.conf.

Specifying a file as the destination: You can specify a file to receive kernel messages, such as /var/adm/ras/messages. To specify that file, perform the following steps:
1. Create /var/adm/ras/messages if it does not already exist. You can use the AIX touch command to create an empty file.
2. Edit /etc/syslog.conf and insert this line:

kern.debug /var/adm/ras/messages

3. Restart the syslogd daemon by running kill -hup syslogd_PID.
4. Refer to the AIX Commands Reference for more information about the syslogd daemon.

Specifying the console as the destination:
Note: If you specify the console as the destination, messages are also written to /var/spool/mqueue/syslog.
To specify the console as the destination for kernel messages, perform the following steps:
1. Edit /etc/syslog.conf and insert the line kern.debug /dev/console (using vi, for example).
2. Restart the syslogd daemon.
3. Refer to the AIX Commands Reference for more information about the syslogd daemon.
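The file-destination procedure can also be scripted. The following sketch rehearses it against scratch paths; on a live AIX client, point SYSLOG_CONF and MSGFILE at /etc/syslog.conf and /var/adm/ras/messages and run it as root (the placeholder paths below let you try the steps safely first):

```shell
# Rehearse against scratch files; on a live AIX client, point these at
# /etc/syslog.conf and /var/adm/ras/messages and run as root.
SYSLOG_CONF=./demo-syslog.conf
MSGFILE=./demo-messages

# Step 1: create the destination file if it does not already exist.
touch "$MSGFILE"

# Step 2: route kernel messages at debug level and above to the file,
# unless an equivalent line is already present.
grep -q "^kern.debug" "$SYSLOG_CONF" 2>/dev/null ||
    printf 'kern.debug\t%s\n' "$MSGFILE" >> "$SYSLOG_CONF"

# Step 3: signal syslogd to reread its configuration. AIX records the
# daemon's PID in /etc/syslog.pid; 'refresh -s syslogd' also works.
if [ -r /etc/syslog.pid ]; then
    kill -HUP "$(cat /etc/syslog.pid)"
fi

cat "$SYSLOG_CONF"
```

The same sequence applies to the Linux client, with /var/log/messages as the destination file.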
stfsdebug command
You can use the stfsdebug command to enable tracing for an AIX client. In addition, you can specify which components (called classes) are traced as well as the level of detail to include. You can also use stfsdebug to query the current status of all trace classes. The stfsdebug command requires the full path name of the SAN File System kernel module loaded on the client machine, which you can find by viewing the client configuration file (stclient.conf). The trace output enabled by stfsdebug is sent to the syslog facility.
syslog facility
The SAN File System client for Linux generates both log and trace messages, which are routed through the syslog facility on the Linux operating system. The syslog facility captures log and trace output from the kernel as well as other operating system services. By default, the syslog facility discards all kernel output. However, you can configure the syslog facility to specify a destination for the messages by modifying /etc/syslog.conf.

Specifying a file as the destination: You can specify a file to receive kernel messages, such as /var/log/messages. To specify that file, perform the following steps:
1. Create /var/log/messages if it does not already exist. You can use the touch command to create an empty file.
2. Edit /etc/syslog.conf and insert this line:

kern.debug /var/log/messages

3. Restart the syslogd daemon.
4. Refer to the Linux man page for syslogd for more information about the syslogd daemon.

Specifying the console as the destination:
1. Edit /etc/syslog.conf and insert (or uncomment) the line kern.debug /dev/console.
2. Restart the syslogd daemon.
3. Refer to the Linux man page for syslogd for more information about the syslogd daemon.
From the CLI, run /usr/tank/server/bin/obdc to collect the default data, or add parameters to customize the data collection (Example 13-13).
Example 13-13 OBDC from CLI tank-mds3:/tmp # /usr/tank/server/bin/obdc One-Button Data Collection Toolset for SAN File System This program will gather relevant system information for the purpose of diagnosing SANFS failures. All collected data will be stored in an archive which you can then examine before returning it to IBM for analysis. No private data will be transmitted by this program without your permission.
You can type 'obdc --help' for more options. OUTPUT DIRECTORY: /usr/tank/OBDC/ TEMPORARY DIRECTORY: /tmp/ SANFS HOME DIRECTORY: /usr/tank/ Do you want to continue? [yes/no] yes 1. Collecting Administrative Configuration Files (0%) 2. Collecting Administrative Log Files (1%) 3. Collecting SANFS Administrative Server Version (3%) 4. Collecting Administrative Legacy Overflow File (5%) 5. Collecting Attached Storage (7%) 6. Collecting HBA Devices (8%) 7. Collecting Network Devices (10%) 8. Collecting PCI Devices (12%) 9. Collecting SCSI Devices (14%) 10. Collecting Network ARP Table (16%) 11. Collecting Network Device Configuration (17%) 12. Collecting Network Connections (19%) 13. Collecting Network Routing Table (21%) 14. Collecting Disk Usage (23%) 15. Collecting Operating System Environment (25%) 16. Collecting Operating System /etc/inittab file (26%) 17. Collecting Memory Usage Statistics (28%) 18. Collecting Operating System Loadable Modules Configuration (30%) 19. Collecting Disk Mounts (32%) 20. Collecting Operating System Processes (33%) 21. Collecting Security logs, specifically, /usr/local/winbind/install/var/log.winbindd. (35%) 22. Collecting Installed Software (37%) 23. Collecting System Log Files (39%) 24. Collecting Operating System Version (41%) 25. Collecting SAN Adapter Statistics (42%) 26. Collecting SAN VPATH Mappings (44%) 27. Collecting SAN Device Statistics (46%) 28. Collecting SAN SDD Kernel Statistics (48%) 29. Collecting SAN SDD Driver Version (50%) 30. Collecting Server Bootstrap File (51%) 31. Collecting Server Configuration Files (53%) 32. Collecting SANFS Administrator List (55%) 33. Collecting SANFS Autorestart Statistics (57%) 34. Collecting SANFS Client List (58%) 35. Collecting SANFS Disaster Recovery File List (60%) 36. Collecting SANFS RSA Card Information (62%) 37. Collecting SANFS LUN List (64%) 38. Collecting SANFS Cluster Server List (66%) 39. Collecting Server Log Files (67%) 40. Collecting Server SHOW Command (69%) 41. 
Collecting Server SHOW Command (71%) 42. Collecting Server SHOW Command (73%) 43. Collecting Server SHOW Command (75%) 44. Collecting Server SHOW Command (76%) 45. Collecting Server SHOW Command (78%) 46. Collecting Server SHOW Command (80%) 47. Collecting Server SHOW Command (82%) 48. Collecting Server SHOW Command (83%) 49. Collecting Server SHOW Command (85%) 50. Collecting Server SHOW Command (87%) 51. Collecting Server SHOW Command (89%) 52. Collecting Server SHOW Command (91%) 53. Collecting SANFS Server Version (92%) 54. Collecting WAS Configuration Files (94%)
55. Collecting WAS Installed Applications (SANFS) (96%) 56. Collecting WAS Server Log Files (98%) obdc: The collection was successfully stored in /usr/tank/OBDC/OBDC-083105-0355-6264.tar.gz tank-mds3:/tmp #
From a UNIX client (including AIX, Solaris, and Linux), log in and, from a shell prompt, run /usr/tank/client/bin/obdc to collect the default data, or add parameters to customize the data collection. From a Windows client, log in and, from a command prompt, run C:\Program Files\IBM\Storage Tank\client\bin\obdc.exe (Example 13-14) to collect the default data.
Example 13-14 OBDC on Windows client
C:\>"C:\Program Files\IBM\Storage Tank\Client\bin\obdc.exe"
One-Button Data Collection Toolset for SAN File System
This program will gather relevant system information for the purpose of diagnosing SANFS failures. All collected data will be stored in an archive which you can then examine before returning it to IBM for analysis. No private data will be transmitted by this program without your permission.
You can type 'obdc --help' for more options.
OUTPUT DIRECTORY: C:\Documents and Settings\Administrator\Application ...
TEMPORARY DIRECTORY: C:\DOCUME~1\ADMINI~1\LOCALS~1\Temp\
SANFS HOME DIRECTORY: C:\Program Files\IBM\Storage Tank\
Do you want to continue? [yes/no] yes 1. Collecting Client Configuration Files (0%) 2. Collecting Client Log Files (4%) 3. Collecting Client Log Files (8%) 4. Collecting SANFS Client Version (12%) 5. Collecting Attached Storage (16%) 6. Collecting HBA Devices (20%) 7. Collecting Network Devices (25%) 8. Collecting PCI Devices (29%) 9. Collecting SCSI Devices (33%) 10. Collecting Network ARP Table (37%) 11. Collecting Network Device Configuration (41%) 12. Collecting Network Connections (45%) 13. Collecting Network Routing Table (50%) 14. Collecting Disk Usage (54%) 15. Collecting Operating System Environment (58%) 16. Collecting Memory Usage Statistics (62%) 17. Collecting Disk Mounts (66%) 18. Collecting Operating System Processes (70%) 19. Collecting Installed Software (75%) 20. Collecting System Log Files (79%) 21. Collecting Operating System Version (83%) 22. Collecting SAN Adapter Statistics (87%) 23. Collecting SAN VPATH Mappings (91%) 24. Collecting SAN Device Statistics (95%) obdc: The collection was successfully stored in C:\Documents and Settings\Administrator\Application Data\IBM\Storage Tank\OBDC\OBDC-060804-1422-2104.tar.gz
Figure 13-6 labels the RSA II connectors: video connector (1), Ethernet connector (2), external power supply connector (3), mini-USB connector (4), ASM/serial breakout connector (5), and system-management connector (6), plus the power LED, the adapter activity LED, and the recessed reset button.
Video connector (1 in Figure 13-6). The RSA II contains an additional video subsystem on the adapter. If you install the RSA II in a server, it automatically disables the onboard video, so connect the server's monitor to the RSA II video connector.
10/100 Ethernet connector (2). For connection to a 10 Mbps or 100 Mbps Ethernet-based client LAN or management LAN.
Power connector (3). With the external power supply (supplied when the adapter is purchased as an option), you can still access the RSA II when the server is powered down. Connect the power supply to a different power source than the server (for example, a separate UPS).
Mini-USB connector (4). This port provides remote keyboard and mouse capability when using the remote control feature. Connect it to a USB port on the server.
Breakout connector (5). Used when the RSA II is the focal point of an ASM network.
Before SAN File System V2.2.2, access to a remote MDS's RSA card was through a dedicated RS-485 serial network. In V2.2.2 and beyond, all access to a remote RSA card is over the IP network, which makes IP network redundancy even more critical. Designing for network redundancy was discussed in 3.8.5, Network planning on page 84. The RSA TCP/IP connection is used to shut down a rogue MDS, as described in Fencing through remote power management (RSA) on page 82. When this happens, it is logged in the file /usr/tank/server/log/log.stopengine.
To connect to the RSA II card, open a Web browser and point it to the IP address of the RSA II card, as shown in Figure 13-7 on page 539.
For example, once logged in to the RSAII card, you can reboot or shut down the server. To restart the server, click Power/Restart in the Tasks section, as shown in Figure 13-8.
You can also view the BIOS log by selecting Event Log under Monitors (see Figure 13-9 on page 541).
To manage servers completely from a remote location, you need more than just keyboard-video-mouse (KVM) redirection. For example, to install the operating system or patches, you need remote media to connect a CD-ROM or diskette to the server; otherwise, someone must physically load the installation media in the server's CD-ROM or diskette drive.
When you launch a remote console for the first time in your browser, a security warning window pops up. This warning comes from the Java applets that the remote control feature uses. These warnings are normal; you can trust this certificate from IBM and click Yes or Always (see Figure 13-10).
In the remote control window, a set of buttons simulates specific keystrokes and also shows the video speed selector, as in Figure 13-11. The slider is used to limit the bandwidth that is devoted to the remote console display on your computer.
Reducing the video speed can improve the rate at which the remote console display is refreshed by limiting the video data that must be displayed. You can reduce, or even stop, video data to allow more bandwidth for remote disk, if desired. Move the slider left or right until you find the bandwidth that achieves the best results. Now we are able to manage an MDS server remotely, as shown in Figure 13-12 on page 543. This displays the boot messages appearing on the actual console.
More information about the capabilities of the RSA II card is in the Remote Supervisor Adapter II User's Guide, 88P9243, available at:
http://www.ibm.com/pc/support/site.wss/document.do?lndocid=MIGR-57091
In a SAN File System installation with strong high-availability requirements, the end user should configure an SNMP manager and specify the target IP address according to the instructions in the manual IBM TotalStorage SAN File System Installation and Configuration Guide, GA27-4316. This allows a SAN File System administrator to receive critical alerts and respond if the SAN File System cluster encounters faults. In most cases, the system handles the fault automatically, but the administrator will want to repair any inoperable system components (such as failed engines or switches) and investigate the cause of all unexpected faults. When a specified event type occurs, SAN File System sends the SNMP trap and logs the event in the cluster log.
Note: SAN File System supports asynchronous monitoring through traps, but does not support SNMP GETs or SETs for active management; that is, an SNMP manager cannot manage SAN File System.
Examples of events that might generate SNMP trap messages include the following:
An MDS executes a change in state.
An MDS detects that another MDS is not active.
The size of a fileset reaches a specified percentage of its capacity.
SNMP agent: Use this field to specify whether you want to forward alerts to SNMP communities on your network or to allow an SNMP manager to query the SNMP agent. To allow alerts to be sent to SNMP communities, click the drop-down button and select Enabled.
Note: To enable the SNMP agent, the following criteria must be met:
An ASM contact is specified.
An ASM location is specified.
At least one community name is specified.
At least one valid IP address is specified for that community.
Alert recipients whose notification method is SNMP will not receive alerts unless both SNMP traps and the SNMP agent are enabled.
SNMP traps: Use this field to convert all alert information into the ASM MIB SNMP format so that those alerts can be sent to an SNMP manager. To allow conversion of alerts to SNMP format, click the drop-down button and select Enabled.
Communities: Use these fields to define the administrative relationship between SNMP agents and SNMP managers. You must define at least one community. Each community definition consists of three parameters. To set up a community:
a. In the Name field, enter a name, or authentication string, that corresponds to the desired community.
b. Enter the IP addresses of this community in the corresponding IP address fields.
c. Click Save to store your new community configuration.
d. On the System page, enter your Contact and Location information. If these are already configured, skip the next step.
e. Click Save to store your new Contact and Location.
f. Go to the Network Protocols page and choose Enabled in the SNMP agent drop-down list.
g. Select the desired entry in the SNMP traps drop-down list.
h. Click Save to enable your new community configuration.
Note: If an error message window appears, make the necessary adjustments to the fields listed in the error window, and then click Save again. You must configure at least one community in order to enable the SNMP agent.
2. Set traps. Here you specify which kinds of events trigger an SNMP trap. SAN File System messages have four severity levels: informational, warning, error, and severe. Use the settrap command (Example 13-17), choosing the desired event severity levels. If you specify all, every event triggers an SNMP trap; this option cannot be combined with any other setting. If you specify none, no SNMP traps are sent; this option also cannot be combined with any other setting. Any other single level, or combination of levels, is valid.
Example 13-17 Set alerts using CLI sfscli> settrap -event sev,warn,err CMMNP5338I SNMP trap event level was set successfully.
The following commands are available when configuring SNMP: lssnmpmgr: Displays a list of SNMP managers and their attributes. lstrapsetting: Displays a list of event types that currently generate an SNMP trap. rmsnmpmgr: Removes an SNMP manager. settrap: Specifies whether an SNMP trap is generated and sent to all SNMP managers when a specific type of event occurs on the MDS. addsnmpmgr: Adds an SNMP manager to receive SNMP traps.
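For example, registering an SNMP manager and verifying the configuration might look like the following sketch. The IP address and port are illustrative, and the addsnmpmgr options shown are an assumption; confirm the exact syntax in the Administrator's Guide and Reference, GA27-4317:

```
sfscli> addsnmpmgr -ip 9.42.164.165 -port 162
sfscli> lssnmpmgr
sfscli> lstrapsetting
```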
To start them, use /usr/tank/admin/bin/startConsole and /usr/tank/admin/bin/startCimom. Also, verify that your LDAP server is running. To verify that your MDS can communicate with the LDAP server, start an sfscli session and type lsserver. If the command returns an error, as shown in Example 13-18, verify that your LDAP server is up and running, as discussed in 4.1, Security considerations on page 100. Another possibility is that you have logged in with an ID that has insufficient privileges to run the specified command. The manual IBM TotalStorage SAN File System Administrator's Guide and Reference, GA27-4317, has information about the privileges required to run each SAN File System command.
Example 13-18 MDS cannot communicate with LDAP server sfscli> lsserver CMMUI9901E User access to command "lsserver" denied. Tip: Contact Technical Support for assistance.
Component HST (MDS server) sub-components, that is, the YY portion of message IDs of the form XXXYYnnnnZ:
HST IP  Internet Protocol Services
HST LM  Lock/Lease Manager
HST LP  LALR Parser Generator
HST LV  Logical Volume Manager
HST MG  Message Formatter
HST NC  National Language Compiler
HST NE  Net Server
HST NL  Program messages, default catalog nlsmsg
HST NS  National Language Support
HST OM  Object Meta-Data Manager
HST OP  Run-Time Options Processor
HST PC  Policy Server Program
HST PG  Standard Container
HST SC  Schema Manager
HST SM  Administration Session Manager
HST TM  Protocol Transaction Manager
HST TP  Storage Tank Protocol
HST UC  Utility
HST VC  Version Control Manager
HST WA  Write Ahead Log

Administrative server sub-components:
AP  Provider messages
AS  Script messages
NP  SAN File System CIMOM providers (error messages)
WU  SAN File System UI Console Scripts

Administrative agent sub-components:
CI  CLI, CIM, and common errors to CLI and GUI
NP  SAN File System UI Console and CLI
NW  SAN File System Console
OM  Object Manager
UI  Administrative agent UI Framework

Client components: stfsclient; Client Common User (common to all); Client AIX User (only); Client AIX lpp install scripts; stfsmount; Command Line Option Parser.

Client sub-components (kernel level):
AK  Client AIX Kernel
CS  Client Setup perl script
CW  Client Windows
SM  Client State Manager
Part 4
Chapter 14.
Table 14-1 Naming convention for objects within a DB2 SMS table space container
SQLTAG.NAM    Table space container tag to verify consistency
SQLxxxxx.DAT  All table rows except LONG VARCHAR, LONG VARGRAPHIC, BLOB, CLOB, or DBCLOB data
SQLxxxxx.LF   LONG VARCHAR or LONG VARGRAPHIC data
SQLxxxxx.LB   BLOB, CLOB, or DBCLOB data
SQLxxxxx.LBA  Contains allocation and free space information
SQLxxxxx.INX  Index data for a table
SQLxxxxx.IN1  Index data for a table
SQLxxxxx.BKM  Dimension block index for a multidimensional clustered table
SQLxxxxx.TDA  Temporary regular data
SQLxxxxx.TIX  Temporary index data
SQLxxxxx.TLB  Temporary LOB data
SQLxxxxx.LOG  DB2 transaction logs
With traditional storage, all files in an SMS container reside on the same set of devices, because they all reside in the same directory. Because SAN File System can choose a storage pool based on the file extension, the different types of DB2 data stored in an SMS container can be placed into different storage pools. Thus, within an SMS table space container, regular data can now be kept on different devices from index data and long/LOB data. Previously, this ability was limited to DMS table spaces. The ability of SAN File System to automatically place different files in different storage pools can significantly reduce the file placement tasks otherwise required of the database administrator.
Note: If no matching rule is found for a particular file, it will be placed in the default storage pool.
Figure 14-1 illustrates how these rules would cause DB2 data to be assigned.
(Figure 14-1 shows placement rules matching the file extensions *.LOG, *.DAT, *.INX, *.LB, *.LBA, *.LF, and *.TDA, each routed to a storage pool.)
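Placement rules of the kind illustrated in Figure 14-1 might be written as in the following sketch. The pool names are hypothetical, and the SQL-like rule form shown is an assumption; check the SAN File System policy documentation for the exact syntax:

```
VERSION 1
RULE 'db2logs'  SET STGPOOL 'logPool'   WHERE NAME LIKE '%.LOG'
RULE 'db2data'  SET STGPOOL 'dataPool'  WHERE NAME LIKE '%.DAT'
RULE 'db2index' SET STGPOOL 'indexPool' WHERE NAME LIKE '%.INX'
RULE 'db2lob'   SET STGPOOL 'lobPool'   WHERE NAME LIKE '%.LB'
                                           OR NAME LIKE '%.LBA'
                                           OR NAME LIKE '%.LF'
```

A file that matches no rule falls through to the default storage pool.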
Threshold alerts
SAN File System provides an alert mechanism based on the amount of allocated space within a storage pool. This is significant within a DB2 environment using SMS tablespaces, since space is acquired as needed. If an alert is triggered, indicating that a storage pool has reached its configured threshold, the SAN File System administrator can choose to add additional storage to the storage pool. Thus, storage can be added before any database users experience any "out of disk space" error codes.
Figure 14-2 Workload distribution of filesets for DB2 (four DB2 partitions on the SAN; filesets /db2inst1/NODE0000 and /db2inst1/NODE0001 served by MDS1, and /db2inst1/NODE0002 and /db2inst1/NODE0003 served by MDS2)
The diagram in Figure 14-2 shows a DB2 instance with four DB2 partitions stored on a two-node SAN File System cluster. The DB2 instance is called db2inst1, and it has DB2 partitions 0, 1, 2, and 3. On the create database command, all the database files, by default, go under a db2inst1 directory with a corresponding subdirectory for each partition. If the NODE0000, NODE0001, NODE0002, and NODE0003 directories represent SAN File System filesets, they can be evenly distributed across the MDS servers. For example, we could assign the NODE0000 and NODE0001 filesets to MDS 1 and the NODE0002 and NODE0003 filesets to MDS 2.
(Figure: data flows from disk through the file system cache and the DB2 buffer pool to DB2.)
Since DB2 already caches its own data, the extra caching in the file system cache is not only a potentially inefficient use of memory; additional processing is also required to perform reads and writes through the file system cache. If the file system cache is being used ineffectively, turning on direct I/O not only saves system resources, but can also yield performance gains. Important: Enabling direct I/O does not guarantee performance enhancements. Gains are achieved with direct I/O only if the file system cache is not being used effectively, and the potential improvement varies with the nature of the workload. Direct I/O is already supported on Windows and can be enabled via the DB2NTNOCACHE DB2 profile variable:
db2set DB2NTNOCACHE=ON
Direct I/O is supported in DB2 V8.1 FP4 for AIX and can be enabled via the DB2_DIRECT_IO DB2 profile variable. This will enable direct I/O for SMS containers, excluding long data, LOB data, and temporary data:
db2set DB2_DIRECT_IO=ON
14.7 FlashCopy
SAN File System provides the ability to take point-in-time copies of your data using its FlashCopy capability. If, for example, an application logic error requires you to go back to an earlier copy of the data, a previously taken FlashCopy image can be reverted, and a rollforward then performed to a point in time just before the application logic error occurred. Note: Before a FlashCopy image is created, you must first suspend DB2 using the DB2 suspend command. The ability to create FlashCopy images, together with the use of a global namespace, makes it possible to offload a point-in-time, file-level backup to other machines. To do this: 1. Suspend the database using the DB2 suspend command. This suspends all write operations to a DB2 UDB database partition (that is, table spaces and log files). Read operations are not suspended and can continue. Applications can continue to process insert, update, and delete operations that use the DB2 buffer pools. 2. After the suspend completes, take a FlashCopy image to create a point-in-time, file-level copy of the database. 3. As soon as the FlashCopy image is created, the database can be resumed and normal processing continues with virtually no impact to the clients. 4. A secondary machine can then access the FlashCopy image and perform any necessary file system backups, avoiding extra resource consumption on the initial machine and providing application server-free backup.
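The suspend/copy/resume sequence might look like the following sketch. The db2 write-suspend and write-resume commands are standard DB2; the fileset name and the mkimage invocation are illustrative assumptions, so confirm the exact FlashCopy image syntax in the Administrator's Guide and Reference, GA27-4317:

```
# Step 1: suspend write operations (standard DB2 command)
db2 "set write suspend for database"

# Step 2: create the point-in-time image (illustrative sfscli syntax)
sfscli mkimage -fileset db2fs db2image1

# Step 3: resume normal processing
db2 "set write resume for database"

# Step 4: from a secondary machine, back up the files in the
# db2image1 FlashCopy directory.
```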
In our example (Figure 14-4), we make the following assumptions: We have one DB2 server machine with two DB2 instances, INSTANCEA and INSTANCEB. The path location for the create database command on UNIX is /mnt/sanfs/mydir. The drive letter for the create database command on Windows is T. A database called DatabaseA is created under INSTANCEA, and a database called DatabaseB is created under INSTANCEB.
/mnt/sanfs/mydir/INSTANCEA /mnt/sanfs/mydir/INSTANCEB
T:\INSTANCEA T:\INSTANCEB
With one DB2 server creating the databases with the locations used above, everything will work successfully. However, if the environment consisted of two DB2 servers and they chose to use the same instance names and path/drive locations, they would be competing for the same directory. On UNIX, one potential workaround is to ensure that each unique DB2 server machine specifies a different path for the create database command. Alternatively, another mechanism to ensure path uniqueness is to have unique DB2 instance names across the environment. In that case, the same path can be chosen for each create database command, and it is the unique instance name that will guarantee no contention for the same directories. With DB2 for Windows, only the drive letter can be specified on the create database command. So, this cannot be used to avoid directory contention, since each SAN File System client will see the same drive. However, a unique instance name convention can be used to avoid contention for the same directories.
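On UNIX, the path-uniqueness workaround can be sketched as follows; the database and directory names are illustrative:

```
# On DB2 server machine 1:
db2 "create database SALESDB on /mnt/sanfs/server1dir"

# On DB2 server machine 2, a different path avoids contention
# for the same global-namespace directories:
db2 "create database SALESDB on /mnt/sanfs/server2dir"
```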
Part 5
Appendixes
In this part of the redbook, we provide the following supplementary information: Appendix A, Installing IBM Directory Server and configuring for SAN File System on page 565 Appendix B, Installing OpenLDAP and configuring for SAN File System on page 589 Appendix C, Client configuration validation script on page 597 Appendix D, Additional material on page 603
Appendix A.
Installing IBM Directory Server and configuring for SAN File System
In this appendix, we discuss the following topics: Installing IBM Directory Server V5.1 Creating the LDAP database Configuring IBM Directory Server V5.1 for SAN File System Starting the LDAP server and configuring the Admin Server Example of the LDIF file used in our configuration
Here are the steps to follow: 1. To start the installation, run setup.exe from the directory IDS_SMP. You will first be prompted for a language to use for the install. We selected English. Click OK. 2. Next you will see the Welcome window. Click Next to continue. 3. The license agreement now appears. Select the button that you accept the terms, and click Next. 4. Select a directory to install IBM Directory Server. The default is C:\Program Files\IBM\LDAP, as in Figure A-1. Accept this or enter an alternative, and click Next.
5. Select the language for IBM Directory Server, as shown in Figure A-2 on page 567, and click Next.
6. Select the setup type (Figure A-3). We chose Typical. Click Next.
7. Accept the defaults in the features window (Figure A-4) and click Next.
8. Specify a user ID and password for DB2, as in Figure A-5. DB2 is used as the underlying repository, and is installed automatically. You can specify a new user ID or an existing one. If the ID exists, the password must be correct. We chose to create a new user ID, db2admin. Click Next.
9. You will see a summary of the installation options selected, as in Figure A-6. If everything is correct, click Next.
10. You will see a pop-up indicating that DB2 will install in the background (see Figure A-7). This may take up to 20 minutes. Click OK to continue.
11. After some time, the GSKit pop-up appears. GSKit installs in the background and may take up to five minutes. The IBM Global Security Toolkit (GSKit) provides a Secure Sockets Layer (SSL) implementation with encryption strengths up to Triple DES.
12. After some time, the WebSphere Application Server - Express pop-up appears. This installs in the background and may take up to 10 minutes. WebSphere provides the application environment for IBM Directory Server.
13. After this is complete, the IBM Directory Server client README file displays. Review it and click Next to continue.
14. The server README displays. Review it and click Next to continue.
15.You will be prompted to restart your system now or later. A reboot is required to complete the installation. Select Yes, restart my system and click Next. The Installation Complete window opens, as shown in Figure A-8. Click Finish to continue, and reboot the server.
1. Set the Administrator DN and password. We used the following values: Administrator DN: cn=Manager,o=ITSO Password: password Click OK to continue. 2. You will see confirmation that the Administrator DN and password have been successfully set, as shown in Figure A-10. Click OK.
3. In the next window, click Configure database in the left column. In the Configure Database window, select Create a new database and click Next. You will be prompted for the user ID for the DB2 database that was specified during the installation (see step 8 on page 568). We entered db2admin and our specified password, as shown in Figure A-11. Click Next.
4. Select a name for your LDAP database. This database will be created in DB2. We chose SFSLDAP, as in Figure A-12. Click Next.
5. In Figure A-13, you specify the codepage for your DB2 database. Select the default Create a universal DB2 database and click Next.
6. Select the drive letter to create the LDAP database, as in Figure A-14. The default is the C partition. Click Next.
7. Figure A-15 summarizes the entries made. Verify these for correctness and click Finish to create the database.
8. Figure A-16 shows the output messages for creating the database. When it is complete, click Close.
2. Select Manage suffixes on the left hand side. Enter a string corresponding to your organization attribute, for example, o=ITSO, and click Add, as shown in Figure A-17.
3. You will see your new attribute listed under Current suffix DNs. 4. Click Import LDIF data from the left column. Enter in the file name of your saved LDIF configuration file. Tip: IBM Directory Server expects a c:\tmp directory on your system drive when importing an LDIF file. Make sure that you have this directory; if it does not exist, create it.
Click Import to start the import of the LDIF file, as shown in Figure A-18.
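For reference, a minimal LDIF file of the kind imported here might look like the following sketch. The user entry, password, and role mapping are illustrative assumptions; the actual file must define the users and the SAN File System administrative roles required by your installation, as described in the security chapter:

```
dn: o=ITSO
objectclass: top
objectclass: organization
o: ITSO

dn: cn=ITSOAdmin,o=ITSO
objectclass: person
cn: ITSOAdmin
sn: ITSOAdmin
userPassword: password

dn: cn=Administrator,o=ITSO
objectclass: organizationalRole
cn: Administrator
roleOccupant: cn=ITSOAdmin,o=ITSO
```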
5. The file will import, displaying status messages, as shown in Figure A-19.
6. When the import completes, close the configuration tool. Your LDAP server is now configured for SAN File System.
Once the Directory Server has started, leave this window open in the background. If you close the window, the Directory Server will stop.
2. Now you need to start the admin server. From another command prompt, enter startserver.bat server1 from the directory Program Files\IBM\ldap\appsrv\bin\, as shown in Example A-2.
Example: A-2 Start admin server
C:\Program Files\IBM>cd ldap\appsrv\bin\
C:\Program Files\IBM\LDAP\appsrv\bin>startserver.bat server1
ADMU0116I: Tool information is being logged in file
   C:\Program Files\IBM\LDAP\appsrv\logs\server1\startServer.log
ADMU3100I: Reading configuration for server: server1
ADMU3200I: Server launched. Waiting for initialization status.
ADMU3000I: Server server1 open for e-business; process id is 1916
Once the admin server has started, you can close this command prompt.
3. To check that the admin server is working, point your Web browser to the IBM Directory Server console at http://localhost:9080/IDSWebApp/IDSjsp/Login.jsp. If this does not respond, replace localhost with the actual host name. The Administration login page should display, as shown in Figure A-20.
4. Enter the default user name superadmin and password secret and click Login. The main administrator console should appear, as in Figure A-21 on page 579.
5. To change the default administrator login password, select Change console administrator login from the left column, as shown in Figure A-22. Enter a new password and click OK.
6. Click Manage console servers. Here you add the host name of your local machine, as shown in Figure A-23.
Click Add. This will bring up the Add server window, as shown in Figure A-24.
Enter the host name (shortname is fine) and leave the other options as default. Click OK.
7. The window shown in Figure A-25 appears, confirming that the local host is now added.
8. You can test that it has been added correctly by logging out and then re-logging in, using your host name and LDAP user name, as shown in Figure A-26.
Select the local host name in the LDAP host name drop-down list. Enter the Username and Password as defined in the LDIF file that you imported. In this example, the user name is cn=Manager,o=ITSO and the password is password. Click Login.
9. The default admin console should now display, as shown in Figure A-27.
Select Directory Management and then Manage Entries, as shown in Figure A-28.
Select the Object that you want to browse and click Expand. In this example, we are selecting the o=itso object. Select the objectclass that you want to browse and click Expand. In this example, we are selecting the ou=Users, as shown in Figure A-29.
Verify that the users that you specified in the imported LDIF file exist. In this example, we can see the Administrator, Backup, Monitor, and Operator user accounts.
You have now verified that the LDIF file has been imported correctly and IBM Directory Server is now installed and ready for use by SAN File System. See the manual IBM Tivoli Directory Server Administration Guide, SC32-1339 for more information about how to use and configure IBM Directory Server. This can be found at:
http://www.ibm.com/software/sysmgmt/products/support/IBMDirectoryServer.html
uid: ITSOOper
userPassword: password

# Roles, ITSO
dn: ou=Roles,o=ITSO
objectClass: organizationalUnit
ou: Roles

# Administrator, Roles, ITSO
dn: cn=Administrator,ou=Roles,o=ITSO
objectClass: organizationalRole
cn: Administrator
roleOccupant: cn=ITSOAdmin Administrator,ou=Users,o=ITSO

# Monitor, Roles, ITSO
dn: cn=Monitor,ou=Roles,o=ITSO
objectClass: organizationalRole
cn: Monitor
roleOccupant: cn=ITSOMon Monitor,ou=Users,o=ITSO

# Backup, Roles, ITSO
dn: cn=Backup,ou=Roles,o=ITSO
objectClass: organizationalRole
cn: Backup
roleOccupant: cn=ITSOBack Backup,ou=Users,o=ITSO

# Operator, Roles, ITSO
dn: cn=Operator,ou=Roles,o=ITSO
objectClass: organizationalRole
cn: Operator
roleOccupant: cn=ITSOOper Operator,ou=Users,o=ITSO
Appendix B.
We used Red Hat Linux 9; however, the installation should be substantially similar for other releases, and for SUSE Linux as well.
(If you run SUSE Linux, the release number will be in /etc/SUSE-release.)
2. Determine the version of OpenLDAP that is currently installed. Enter rpm -qa | grep openldap at the Linux prompt, as shown in Example B-2.
Example: B-2 Determine the version of LDAP installed
# rpm -qa | grep openldap
openldap-2.0.27-8
openldap-clients-2.0.27-8
openldap-servers-2.0.27-8
If a default Red Hat Linux installation was used, there should be at least one OpenLDAP RPM installed. If you have the right version installed, as in Table B-1, skip to Configuration of OpenLDAP client on page 591. If no OpenLDAP RPMs are installed, or an invalid version is installed, you will need to install it. The required RPMs for an LDAP server on Red Hat Linux are openldap-2.0.xx-F, openldap-client-2.0.xx-F, and openldap-server-2.0.xx-F, where 2.0.xx-F corresponds to Table B-1.
3. The LDAP RPMs can either be found on your Red Hat CD or downloaded from one of the following RPM download sources:
http://www.rpmfind.net. Search on openldap and select based on the distribution.
http://www.redhat.com. Select Download, then search on openldap. Note that other distributions may not be listed here.
Note: You only need to download RPMs that are not installed. For example, if you have openldap-2.0.xx and openldap-client-2.0.xx installed but not openldap-server-2.0.xx, then you only need to download the openldap-server-2.0.xx package.
4. After downloading the RPMs to your Linux server, change to the download directory and start the installation using the rpm command, as shown in Example B-3.
Example: B-3 Install OpenLDAP
# rpm -ivh openldap*
The RPMs will be installed, with a hash-mark progress bar. If the installation fails because of missing prerequisite RPMs, obtain those RPMs using step 3 on page 590. If, however, it fails because of prerequisite files or mismatched file versions, the selected RPM version is not appropriate for your Red Hat Linux installation. Investigate the specific files in conflict and confirm which OpenLDAP RPM version matches them.
5. Verify that the OpenLDAP RPMs have been installed with rpm -qa | grep openldap at the Linux prompt, as shown in Example B-4.
Example: B-4 Verify that the LDAP packages have been installed
# rpm -qa | grep openldap
openldap-2.0.27-8
openldap-clients-2.0.27-8
openldap-servers-2.0.27-8
Now, the three RPMs should be installed: the base, the clients, and the servers.
(The listing of /etc/openldap/ldap.conf is not fully reproduced here; it contains the commented #BASE and #URI lines that must be edited.)
After editing the file, save and quit from the editor. For a more detailed description of this file, refer to the manual page (man ldap.conf).
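As a sketch of the result, using our ITSO values (substitute your own base suffix and host name), the two uncommented lines in /etc/openldap/ldap.conf would look like:

```
BASE    o=ITSO
URI     ldap://localhost
```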
(The slapd.conf listing, Example B-6, is not fully reproduced here; it includes the suffix entry and the rootpw @rootpw@ placeholder that is replaced in the following steps.)
After editing the file, save and quit.
2. Create a shielded password for the root DN. Enter the command shown in Example B-7.
Example: B-7 Create shielded password for the root DN
# export SLAPPW=`slappasswd`
Note: The parameter slappasswd is enclosed in back-quotes (`). When prompted, enter the same password twice. The input is concealed, like any UNIX password input.
3. When you return to the prompt, the SLAPPW variable will contain the shielded string that is needed for the slapd.conf file. Put the value of this variable into the slapd.conf file, as shown in Example B-8. Be careful to enter this string exactly, especially if you are not familiar with Linux command syntax.
Example: B-8 Add the shielded password to the slapd.conf
# sed -e "s/@rootpw@/$SLAPPW/" slapd.conf > slapd.conf.1
# mv slapd.conf.1 slapd.conf
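To see what the substitution in Example B-8 does, here is a self-contained demonstration on a one-line sample file. The {SSHA}EXAMPLEHASH value is a made-up stand-in for the hash that slappasswd generates, and the sample file names are arbitrary:

```shell
# Create a one-line sample file containing the placeholder token
printf 'rootpw @rootpw@\n' > slapd.conf.sample
# A hypothetical hash; in the real procedure this comes from `slappasswd`
SLAPPW='{SSHA}EXAMPLEHASH'
# Same sed pattern as Example B-8: replace the token with the hash
sed -e "s/@rootpw@/$SLAPPW/" slapd.conf.sample > slapd.conf.new
cat slapd.conf.new
```

The output line should read rootpw followed by the hash, with the @rootpw@ token gone.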
The basic configuration of your LDAP server is now done, and you are ready to start it.
4. Start the LDAP server using the service command at the Linux prompt, as shown in Example B-9.
Example: B-9 Start LDAP server
# service ldap start
You should receive a green OK. If not, check /var/log/messages for error messages that relate to slapd, and then try again.
5. It is recommended that you configure the LDAP server to start automatically at boot using the chkconfig command, as shown in Example B-10.
Example: B-10 Configure LDAP server to start automatically
# chkconfig --level 235 ldap on
6. Make sure that your LDAP server is running and responding to queries using ldapsearch, as shown in Example B-11.
Example: B-11 Verify that LDAP responds to queries
# ldapsearch -h localhost -x -b <base_suffix> '(objectclass=*)'
No entries should be returned, but you should get a positive response from the LDAP server, as shown in Example B-12.
Example: B-12 Verify that LDAP server responds to queries
# ldapsearch -h localhost -x -b o=ITSO '(objectclass=*)'
version: 2
# filter: (objectclass=*)
# requesting: ALL
# search result
search: 2
result: 32 No such object
# numResponses: 1
#
If the LDAP server responded correctly to the query, you are now ready to configure your LDAP server to work with SAN File System.
Enter your root DN password when prompted; this is the password that you entered in step 2 on page 593 of Configuration of OpenLDAP server on page 592. If you entered your password correctly, you will not see a prompt: ldapadd is waiting for you to type input at the keyboard. While in this mode, add the entry for the base suffix, as shown in Example B-14. Once the base suffix has been entered, press Enter a second time to indicate the end of the entry. Press Ctrl+D to exit from input mode.
Example: B-14 Add base suffix
# ldapadd -x -W -h localhost -D "cn=Manager,o=ITSO"
Enter LDAP Password:          (<=== INPUT PASSWORD HERE)
dn: o=ITSO
objectClass: organization
o: ITSO
                              (<=== 2ND ENTER)
adding new entry "o=ITSO"
                              (<=== PRESSED Ctrl+D)
#
2. Use ldapsearch to verify that the entry was added to the LDAP database, as shown in Example B-15.
Example: B-15 Verify that entry was successfully added to LDAP database
# ldapsearch -x -h localhost -b o=ITSO '(objectclass=organization)'
3. Import your LDAP configuration using ldapadd. This is a file with a .ldif suffix (LDIF stands for LDAP Data Interchange Format). We used the file ITSOLDAP.ldif, shown in Sample LDIF file used on page 587. At a minimum, you will want to edit this file to modify the base suffix (ITSO in our case). The value here should match the organization name
that you wish to use, and also match the entry made in the slapd.conf file (Example B-6 on page 592). You may also want to modify users and passwords according to your requirements. Save the file, noting the file name, such as sfsbase.ldif.
4. Import the entries in the file with ldapadd, as shown in Example B-16. When prompted, enter your root DN password, which is the same as you entered in step 2 on page 593 of Configuration of OpenLDAP server on page 592. Make sure to use the right o=xxxxx parameter on the ldapadd command for your environment.
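Changing the base suffix throughout the file can also be scripted with sed; a sketch, where o=MYORG and the file names are placeholders, and the printf merely fabricates a three-line stand-in for the real LDIF:

```shell
# Fabricate a tiny stand-in for the shipped LDIF (illustration only)
printf 'dn: o=ITSO\nobjectClass: organization\no: ITSO\n' > ITSOLDAP.ldif
# Rewrite the suffix in DNs (o=ITSO) and in the attribute line (o: ITSO)
sed -e 's/o=ITSO/o=MYORG/g' -e 's/^o: ITSO/o: MYORG/' ITSOLDAP.ldif > sfsbase.ldif
cat sfsbase.ldif
```

Remember that user names such as ITSOAdmin in the full sample file may also embed the organization name, so review the whole file rather than relying on sed alone.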
Example: B-16 Import LDIF
# ldapadd -x -W -h localhost -D "cn=Manager,o=ITSO" -f sfsbase.ldif
Enter LDAP Password:
adding new entry "cn=Manager,o=ITSO"
adding new entry "ou=Users,o=ITSO"
adding new entry "cn=ITSOAdmin Administrator,ou=Users,o=ITSO"
adding new entry "cn=ITSOMon Monitor,ou=Users,o=ITSO"
adding new entry "cn=ITSOBack Backup,ou=Users,o=ITSO"
adding new entry "cn=ITSOOper Operator,ou=Users,o=ITSO"
adding new entry "ou=Roles,o=ITSO"
adding new entry "cn=Administrator,ou=Roles,o=ITSO"
adding new entry "cn=Monitor,ou=Roles,o=ITSO"
adding new entry "cn=Backup,ou=Roles,o=ITSO"
adding new entry "cn=Operator,ou=Roles,o=ITSO"
5. Use ldapsearch again to verify the objects, as described previously in Example B-15 on page 594.
6. The LDAP directory (ldbm) files reside under the directory /var/lib/ldap/ by default. You can list them to check that they exist, as shown in Example B-17.
Example: B-17 Verify ldbm exists
# ls -lt /var/lib/ldap/
total 56
-rw-------    1 ldap     ...
(seven database files, each owned by user ldap, are listed; the file names and sizes are not reproduced here)
Tip: If you want to re-configure the LDAP directory from scratch, stop slapd, remove the ldbm files, start slapd, then re-do the steps in this section.
Appendix C.
then
    echo "The specified client cannot access any LUN or does not exist"
    exit 0
fi

#build the list of pools available in the system
echo "INFO Building list of pools defined on SANFS" >> $log_file
echo "Listing pools on SANFS..."
default_pool=( `sfscli lspool -hdr off -type default | awk '{print $1}'` )
user_pools=( `sfscli lspool -hdr off -type user | awk '{print $1}'` )
pools=( ${default_pool[@]} ${user_pools[@]} )
nb_pools=${#pools[@]}
#just in case....
if [ "$nb_pools" -le 0 ]
then
    echo "No pool found on SANFS !"
    echo "No pool found - stopping now." >> $log_file
    exit 0
fi

#build the list of volumes for each pool
echo "INFO Building the list of volumes for each pool in SANFS" >> $log_file
index=0
while [ "$index" -lt "$nb_pools" ]
do
    #we build the list of volumes for each pool
    current_vol=( `sfscli lsvol -hdr off -pool ${pools[$index]} | awk '{print $1}' 2>>$log_file` )
    if [ "${#current_vol[*]}" -ne 0 ]
    then
        #it's not empty - let's concatenate it
        pool_vol=( ${pool_vol[@]} ${current_vol[@]} )
    else
        #if it's empty, we put "."
        pool_vol[${#pool_vol[*]}]="."
    fi
    #we place the ":" delimiter
    pool_vol[${#pool_vol[*]}]=":"
    index=$((index + 1))
done

#we now check which pools the client can correctly (entirely) access
pool_vol_cur_index=0
for p in ${pools[@]}
do
    echo "" >> $log_file
    echo "INFO Checking access to pool $p..." >> $log_file
    access=1
    end_of_pool=0
    while [ $end_of_pool -ne 1 ]
    do
        pool_vol_cur=${pool_vol[$pool_vol_cur_index]}
        pool_vol_cur_index=$((pool_vol_cur_index+1))
        if [ $pool_vol_cur = ":" ]
        then
            end_of_pool=1
            #echo "Finished checking pool $p - moving to next one" >> $log_file
        else
            if [ $pool_vol_cur = "." ]
            then
                access=0
                echo "INFO The pool $p does not contain any volume" >> $log_file
            else
                vol_found=0
                client_luns_index=0
                while [ $vol_found -eq 0 -a $client_luns_index -lt "${#client_luns[*]}" ]
                do
                    if [ "$pool_vol_cur" = "${client_luns[$client_luns_index]}" ]
                    then
                        vol_found=1
                    fi
                    client_luns_index=$((client_luns_index+1))
                done
                if [ $vol_found -eq 0 ]
                then
                    access=0
                    echo "WARNING client $client does not have access to volume $pool_vol_cur in pool $p" >> $log_file
                fi
            fi
        fi
    done
    if [ "$access" -eq 1 ]
    then
        echo "INFO access to pool $p - OK" >> $log_file
        client_avail_pools[${#client_avail_pools[*]}]=$p
    else
        echo "WARNING client $client has incomplete access to pool $p" >> $log_file
    fi
done

echo "Now checking filesets access..."
#let's now see the specified filesets
until [ -z "$2" ]
do
    cur_fileset="$2"
    echo "" >> $log_file
    echo "INFO Checking fileset $cur_fileset..." >> $log_file
    fileset_pools=( `sfscli reportfilesetuse -hdr off $cur_fileset | awk '{print $1}'` )
    fileset_access=1
    fileset_pool_index=0
    #if the fileset does not require access to any pool, consider it an error or a wrong fileset name
    if [ ${#fileset_pools[@]} -eq 0 ]
    then
        echo "Fileset $cur_fileset is invalid"
        fileset_access=0
    fi
    while [ $fileset_access -eq 1 -a $fileset_pool_index -lt "${#fileset_pools[@]}" ]
    do
        #variable "avail_pools_index" is the index used to go through the client's list of pools
        avail_pools_index=0
        #variable "pool_found" is set to 1 if the searched pool is within the client pool list
        pool_found=0
        while [ $pool_found -eq 0 -a $avail_pools_index -lt "${#client_avail_pools[@]}" ]
        do
            if [ "${fileset_pools[fileset_pool_index]}" = "${client_avail_pools[avail_pools_index]}" ]
            then
                #we found the pool
                pool_found=1
            fi
            avail_pools_index=$((avail_pools_index+1))
        done
        if [ $pool_found -eq 0 ]
        then
            fileset_access=0
            echo "WARNING Pool ${fileset_pools[fileset_pool_index]} is missing" >> $log_file
        fi
        fileset_pool_index=$((fileset_pool_index+1))
    done
    if [ "$fileset_access" -eq 0 ]
    then
        echo "WARNING - Client $client does not have correct access to fileset $cur_fileset"
        echo "WARNING - Client $client does not have correct access to fileset $cur_fileset" >> $log_file
    else
        echo "INFO - Client $client has correct access to fileset $cur_fileset"
        echo "INFO - Client $client has correct access to fileset $cur_fileset" >> $log_file
    fi
    #move to the next fileset
    shift
done

echo ""
echo "Please refer to $log_file for details."
echo "####### Checking for client $client finished successfully ############" >> $log_file
exit 0
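The script above packs every pool's volumes into the single pool_vol array, with ":" closing each pool's group and "." standing in for an empty pool. A minimal sketch of how that encoding is walked, using made-up volume names (vol1, vol2, vol3):

```shell
# Illustrative data: pool 0 holds vol1 and vol2, pool 1 is empty, pool 2 holds vol3
pool_vol=( vol1 vol2 : . : vol3 : )
group=0
for v in "${pool_vol[@]}"
do
    case "$v" in
        :) group=$((group + 1)) ;;          # ":" ends the current pool's group
        .) echo "pool $group is empty" ;;   # "." is the empty-pool placeholder
        *) echo "pool $group contains $v" ;;
    esac
done
```

This flat encoding lets the script build the structure once and then scan it pool by pool while comparing against the client's LUN list.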
Appendix D.
Additional material
This redbook refers to additional material that can be downloaded from the Internet as described below.
Select the Additional materials and open the directory that corresponds with the redbook form number, SG247057.
Related publications
The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this redbook.
IBM Redbooks
For information about ordering these publications, see How to get IBM Redbooks on page 611. Note that some of the documents referenced here may be available in softcopy only.
Designing and Optimizing an IBM Storage Area Network, SG24-6419
DS4000 Best Practices and Performance Tuning Guide, SG24-6363
Getting Started with zSeries Fibre Channel Protocol, REDP-0205
Get More out of your SAN with IBM Tivoli Storage Manager, SG24-6687
IBM SAN Survival Guide, SG24-6143
IBM Tivoli Storage Management Concepts, SG24-4877
IBM Tivoli Storage Manager Implementation Guide, SG24-5416
IBM TotalStorage Enterprise Storage Server: Implementing the ESS in Your Environment, SG24-5420
IBM TotalStorage SAN Volume Controller, SG24-6423
IBM TotalStorage Enterprise Storage Server Implementing ESS Copy Services in Open Environments, SG24-5757
Implementing Systems Management Solutions using IBM Director, SG24-6188
Understanding the IBM TotalStorage Open Software Family, SG24-7098
Understanding LDAP - Design and Implementation, SG24-4986
Virtualization in a SAN, REDP-3633
Other publications
These publications are also relevant as further information sources:
IBM Tivoli Directory Server Administration Guide, SC32-1339
IBM TotalStorage FAStT Storage Manager Version 8.4x Installation and Support Guide for AIX, HP-UX, and Solaris, GC26-7622
IBM TotalStorage FAStT Storage Manager Version 8.4x Installation and Support Guide for Intel-based Operating System Environments, GC26-7621
IBM TotalStorage FAStT Storage Manager Version 9 Installation and Support Guide for AIX, HP-UX, Solaris, and Linux on POWER, GC26-7648
IBM TotalStorage FAStT Storage Manager Version 9 Installation and Support Guide for Intel-based Operating System Environments, GC26-7649
IBM TotalStorage Master Console for SAN File System and SAN Volume Controller Installation and Users Guide Version 3 Release 1, GC30-4090
IBM TotalStorage SAN File System Administrators Guide and Reference, GA27-4317
IBM TotalStorage SAN File System Maintenance and Problem Determination Guide, GA27-4318
IBM TotalStorage SAN File System Planning Guide, GA27-4344
IBM TotalStorage SAN File System Installation and Configuration Guide, GA27-4316
Microsoft Cluster Server Enablement Installation and Users Guide, GC30-4115
Remote Supervisor Adapter Users Guide, 88P9243, found at:
http://www.ibm.com/pc/support/site.wss/MIGR-4TZQAK.html
Online resources
These Web sites and URLs are also relevant as further information sources: Distributed Management Task Force
http://www.dmtf.org
Download Cygwin
http://www.cygwin.com http://www.cygwin.com/setup.exe
Download OpenSSH
http://www.openssh.com
Download PuTTY
http://www.putty.nl
Heimdal Kerberos 5
http://www.pdc.kth.se/heimdal
IBM Personal computing support - Flash BIOS update (Linux update package) - IBM eServer xSeries 345
http://www.ibm.com/pc/support/site.wss/document.do?lndocid=MIGR-54484
IBM Personal computing support - Flash BIOS Update (Linux package) - IBM eServer xSeries 346
http://www.ibm.com/pc/support/site.wss/document.do?sitestyle=ibm&lndocid=MIGR-57356
IBM Personal computing support - Flash BIOS update (Linux update package) - IBM eServer xSeries 365
http://www.ibm.com/pc/support/site.wss/document.do?sitestyle=ibm&lndocid=MIGR-60101
IBM Personal computing support - IBM FAStT Storage Manager for Linux - TotalStorage
http://www.ibm.com/pc/support/site.wss/document.do?lndocid=MIGR-60591
IBM Personal computing support - Remote Supervisor Adapter II Firmware Update - IBM eServer xSeries 345
http://www.ibm.com/pc/support/site.wss/document.do?lndocid=MIGR-46489
IBM Personal computing support - Remote Supervisor Adapter II Firmware Update for Linux - IBM eServer xSeries 346
http://www.ibm.com/pc/support/site.wss/document.do?sitestyle=ibm&lndocid=MIGR-56759
IBM Personal computing support - Remote Supervisor Adapter II Firmware update - IBM eServer xSeries 365
http://www.ibm.com/pc/support/site.wss/document.do?sitestyle=ibm&lndocid=MIGR-53861
IBM SAN File System: Interoperability - IBM TotalStorage Open Software Family
http://www.ibm.com/servers/storage/software/virtualization/sfs/interop.html
IBM TotalStorage DS4x00 (FAStT) Linux RDAC Software Package - Fibre Channel Solutions
http://www.ibm.com/support/docview.wss?rs=593&uid=psg1MIGR-54973&loc=en_US
IBM TotalStorage support: DS4500 Midrange Disk System FC-2 HBA current downloads:
http://www.ibm.com/servers/storage/support/disk/ds4500/hbadrivers1.html
IBM TotalStorage support: DS4500 Midrange Disk System Storage Manager current level downloads
http://www.ibm.com/servers/storage/support/disk/ds4500/stormgr1.html
IBM TotalStorage support: Search for host bus adapters, firmware and drivers
http://www.ibm.com/servers/storage/support/config/hba/index.wss
QLogic
http://www.qlogic.com
Rpmfind.net
http://rpmfind.net
Sun Microsystems
http://www.sun.com
Index
Symbols
.tank.passwd 247, 253 /etc/defaultdomain 364 /etc/krb5.conf 357 /etc/nsswitch.conf 362 /etc/openldap/ldap.conf 363 /etc/resolv.conf 130 /etc/security/ldap/ldap.cfg 354 /etc/sysconfig/network/routes 130 /etc/yp.conf 364 /usr/local/winbind/install/lib/smb.conf 358 cpio 390 fuser 174 installp 170 lsdev 112, 115 lslpp 114 mkdir 172 mount 174, 182 sysdumpstart 533 tar 331, 390 touch 532–533 varyoffvg 116 varyonvg 117 AIX commands mksecldap 364 alert 260, 269, 546 an 15 API 9 application availability 22 application server consolidation 34 application support 87 asymmetric virtualization 13 audit log 526 authentication 100 authorization 100 automatic failover 61 automatic restart 60 autorestart service 82, 413–414
A
Access Control Lists 26 ACL 26, 57, 297 Active Directory 54, 78, 338–339, 348, 350 login 360 add server 396 addprivclient 299 addserver 398 administration 42 SAN File System 252 administrative log 529 Administrator 74, 102 Adobe Acrobat 195 advanced heterogeneous file sharing 78 AIX 28, 51, 169 client configuration file 533 configure SAN File System 175 configure SDD 115 DB2 559 expand disk 280 HACMP 52 install SDD 113 LVM 85 RDAC 120 SAN File System client dump 533 SAN File System client logging 532 SAN File System configuration 172 SecureWay 364 SMIT 113, 170 stclient.conf 533 stfsdebug 532–533 syslog 532–533 system logging 532–533 take fileset ownership 300 unmount SAN File System 174, 182 upgrade client 244 virtual I/O 53 AIX commands cfgmgr 170, 280 chmod 300 chown 300 cp 390
B
Backup 74, 102 backup and recovery 4 BIOS 135, 234–235, 540 block aggregation 9 block subsystems 9 browser interface 256 bufferpools 558
C
cache 94 caching 53, 558 catpolicy 311 change validation 20 CHKDSK 57 CIFS 78, 468 CIM 6, 10, 31, 40, 45 agent 8 object manager 8 Object Manager see CIM/OM CIM/OM 8 CIMOM 186 cimom.log 528 CIM-XML 7 client data access 427
monitor 412 privileged 298 show volume access 304 validation script 430 client dump 532–533 client logging 530, 532–533 client tracing 531 client-server 51 cluster log 530 clustering 87 Common Information Model 31, 45 Connection Manager 46 consolidated logs 530 consolidation 34 logical storage consolidation 5 physical 5 copy on write 376 copy services 16 copy-on-write 80 Create a shielded password 593 create database 558 create storage pool 269 cron 441 Customer Connection ID 521 cygwin 252, 480
D
data migration 88–89, 389 offline 89 online 90 data migration phases 393 data replication functions 5 data sharing 11, 14 database DB2 554 database location 560 DB2 199, 554, 568 AIX 559 and SAN File System 554 bufferpools 558 database location 560 Database Managed Storage 554 direct I/O 558–559 FlashCopy 560 free space 557 global namespace 560 index data 555 large object data 554 logs 554 policy placement 554 SAN File System rules 556 space consumption 557 storage management 557 System Managed Storage 554 transaction logs 555 Windows 559 DB2 commands create database 558, 560 create tablespace 554–555 suspend 560
DB2_DIRECT_IO 559 DB2NTNOCACHE 559 default storage pool 263, 277 default User Pool 79 default user pool 49, 328 delete server 397 demilitarized zone 520 device candidate list 185 device management 23 DFSMS 11, 16 direct I/O 53, 87, 558–559 directory server 348–349 disable default pool 328 Disabling the default User Pool 328 disk consolidation 5 disk performance management 23 DMS 554 DMTF 6, 10 DMZ 520 DNS 130, 350 DRfile 481 dropserver 398 DS4000 34, 79, 119 DS8000 10 DTMF 6 dual hardware components 59 dump 532–533 dynamic 287 dynamic fileset 415
E
engine 37, 40, 252 enterprise class shared disk array 5 ESS 24, 34, 66, 78–79, 235, 406 CIM/OM 8 PPRC (Metro Mirror) 24 SDD 69 Ethernet bonding 60, 67, 84 SAN File System Ethernet bonding 131 event log 331, 526, 530 expand SVC disk 279
F
fabric management 20 failback 418 fail-over 413 failover 61, 233, 419 failover monitoring 421 failover time 427 FAStT 34, 45, 66, 69–70, 79 RDAC 69, 119 FAT 28 fencing 82 FFDC 527 file metadata 34, 390 file level virtualization 17 file lifecycle management 441
file management policy 50, 441 create 442 execute 443 syntax 442 file metadata 41 file permission mapping 338–339 file placement policy 49, 80, 304 file preallocation 324 file sharing 78, 338 administrative commands 348 directory server 348–349 heterogeneous 338 homogeneous 338 homogenous 338 user domain 347 file system cache 558 definition 25 free space 557 LAN 28 local 28 permissions 26 SAN 28 security 26 file/record subsystems 9 fileset 41, 46–47, 286–287, 300, 415, 557 assign MDS 294 attach 296 change characteristics 295 convert static to dynamic 295 delete 296 detach 296 dynamic 287 failover 415 link to storage pool 304 metadata 427 permissions 291, 340 primary allegiance 297, 340 quota 290 redistribution 415 static 287, 415 statistics 399 storage pool relationship 304 take ownership 367 threshold 290 filesets 415 firewall 189, 521 first-failure data capture 527 FlashCopy 42, 47, 55, 58, 80, 291, 376, 478, 560 considerations 378 copy on write 376 create 380 directory 378 list images 380, 384 remove image 387 revert image 384 space 376 flexible SAN 69 forecasting 21 free space 557
G
GBIC 66 getent passwd 364 global namespace 34, 41, 46, 54, 560 grace period 41 groups 72 GSKit 569
H
HACMP 52, 87, 560 hard quota 48 hardware element management 12 hardware faults 60 HBA 37, 40, 271 HBA performance 406 heartbeat 60 Heimdal 355–356 heterogeneous file sharing 54, 78, 338 high availability 11 homogenous file sharing 54, 338 HTTP 7 hwclock 129–130
I
IBM Director 45, 421 IBM Directory Server 566 configuration 570 configuring 574 create LDAP database 570 DB2 568 GSKit 569 install 566 start admin server 578 IBM Global Security Toolkit 569 IBM services SAN File System IBM services 90 IBM Tivoli Storage Manager 48 IBM Tivoli Storage Manager see ITSM IBM Tivoli Storage Manager see Tivoli Storage Manager IBM Tivoli Storage Resource Manager 21 IBM TotalStorage Multiple Device Manager 22 IBM TotalStorage Open Software Family 14 IBM TotalStorage Productivity Center 19 IBM TotalStorage SAN File System 16, 30 IBM TotalStorage SAN File System see SAN File System IBM TotalStorage SAN Volume Controller see SVC IBM TotalStorage Virtualization Family 31 ifcfgeth0 130 ifconfig 131 IFS 52 IIS 57 implementation services 90 in-band virtualization 13–15 install MDS 138, 237 install SAN File System 164 installable file system 36 instant copy 58 interoperability 10
J
Java 38, 187–188 JBOD 79, 554 JFS 28 JRE 187
K
Kerberos 357 kinit 360 klist 360
L
LAN 28 LAN file system 28 LDAP 37, 7374, 78, 100, 102, 186, 261, 339 client 591 configure for SAN File System 594 configure OpenLDAP server 592 Data Interchange Format 574 database 74, 102, 570 DN 74 IBM Directory Server 566 install OpenLDAP 590 LDIF 102, 574 OpenLDAP 590 slapd 592 start admin server 578 start OpenLDAP server 593 start server 578 switch to local authentication 246 User 74, 102 user ID 74 userPassword 74 verify entriesLDAP browse directory 585 WebSphere 569 LDAP commands ibmslapd 577 ldapadd 594 ldapsearch 103104, 593595 mksecldap 355 secldaplcntd 355 slappasswd 593 startserver 578 LDIF 102, 574 leases 41 Legato NetWorker 478 LI 254 license agreement 138 life cycle management 441 lifecycle management 50 create policy 442 execute policy 443 recommendations 446
Linux 164 change password 129 direct I/O 53 Ethernet bonding 60 install OpenLDAP 590 kernel upgrade 128 LDAP 590 RDAC 121 Red Hat 590 SAN File System client logging 533 service pack 128–129 SUSE 37–38, 121, 128–129, 590 syslog 533 system logging 533 zSeries 178 Linux client setup 165 Linux commands fuser 182 passwd 129 rpm 591 service 593 shutdown 131 top 408, 412 useradd 252 vmstat 408, 412 List policy contents 310 list user map 366 load balancing 557 LOB 554, 559 local authentication 72, 100, 186 change from LDAP 246 local file system 28 locks 41 log.audit 526 log.std 331, 526 log.trace 527 logging 331, 521, 530, 532–533 client 530 SAN File System 331, 526–528, 530 SAN File System clients 530 logical consolidation 5 logs DB2 554 LPAR 53, 178 LUN 20, 263 LUN expansion 70 LUN masking 67 LUN statistics 407 LUN-based backup 478–479 LVM 85
M
Management Information Base 543 master console 38, 520 and firewall 189 installation 187 SVC 187 master failover 419 master MDS 41
failover 419 identifying 255 MBCS 56, 296, 322 MDM 22 MDM Replication Manager 24 MDS 17, 36, 40 Active Directory queries 357 verify SDD 118 MDS autorestart 414 memory 94 memory.dmp 532 Metadata 14 metadata 14, 34, 47, 78, 88, 94, 427 Metadata server 17, 36, 40, 67 RSA 71 metadata server see MDS Metro Mirror 24 MIBs 543 Microsoft Management Console 157 migratedata 89 migration services 90 mkpolicy 311, 453 MMC 157 MOF 6 Monitor 74, 102 monitoring failover 421 mprivclient 300 MSCS 53, 88, 447–449 Multi-pathing device driver 69 Multiple Device Manager 22–23 Multiple Device Manager Replication Manager 24
configure for SAN File System 594 configure server 592 install 590 slapd 592 start server 593 Operator 74, 102 Oplocks 162 out-of-band 14 out-of-band virtualization 13 out-of-space condition 22
P
PD 520 performance management 23 physical consolidation 5 planning worksheets 95 policy 49, 304–305, 334 policy best practices 334 policy rule file 309 policy rule syntax 307 policy statistics 332 policy-based automation 11 pool 268 POSIX 53 PPRC 24, 78 preallocation 324 prerequisite software 135 primary allegiance 297, 340, 479 privileged client 86, 297–298, 366, 391, 429 problem determination 520 proxy model 8 PuTTY 198, 252, 480
N
N+1 415, 427 name resolution 130 nameservice cache 363 NAS 4 NAT 189 nested fileset 48, 289 nested filesets 83 NetView 213 network bonding 60, 84 NFS 297 NIS 54, 78, 338–339, 348, 355, 364 NLS 322 non-uniform 69, 79, 303, 328, 334, 428–429 Now we backup the files with 508 NTFS 28, 54, 57
Q
QLogic 135, 271 quorum disk 419 quota 48
R
RAID 5, 66 RAID-5 78 RAM 94 RDAC 38, 69, 85, 119, 136, 235 re-add server 398 recovery time 427 Red Hat 51 Red Hat Linux 590 OpenLDAP 590 Redbooks Web site 611 Contact us xxiii regedit 532 reliability 59 Remote Access 520 Remote Access 46 remote mirroring 5 Remote Supervisor Adapter 60 remove server 397 remove volume 70
O
ODBC 534 ODBC see one-button data collection offline data migration 89 one-button data collection 534 online data migration 90 consistency group 479 open standards 10 OpenLDAP 352, 590 client 591
Index
replication 14 rogue MDS 413 rogue server 82 ROI 21 rolling upgrade 233 root squashing 297 RS-485 67, 243 RSA 38, 46, 71, 135, 231, 235, 414, 538 fencing 82 logs 537 RSA II 60 rule file 309
S
Samba 78, 355, 357 winbind 348, 357–358 SAN 22, 66 distance 5 fencing 82 non-uniform configuration 69 uniform configuration 69 zoning 108 SAN File System administration 42 uniform SAN configuration 69 SAN File System 14, 17, 28, 31, 252, 256, 269, 558–559 .tank.passwd 247, 253 access to LUNs 67 activate policy 311 activate volume 266 Active Directory 349–350 Active Directory query 357 active policy 305 add MDS 396 add privileged client 261, 298–299 add SNMP manager 546 add volume 263–264, 270 administration 37, 252, 256, 557 administrative log 529 administrative roles 72 administrators 261 advanced file sharing 347 AIX client 169 AIX client configuration file 172 AIX client dump 533 AIX client logging 532 alert 260, 269, 290, 546, 557 allocation size 269 and DB2 554 and firewall 189 application support 87 assign fileset server 294 attach fileset 296 audit log 526 authentication 72, 100, 186 authorization 100 automatic failover 61, 233 automatic restart 60 autorestart 262, 414 autorestart service 82, 413–414
backup 478 balanced workload 80 browser access 252 cache 94, 558 caching 53 catpolicy 310 change cluster configuration 298 change fileset 295 change storage pool 276 change volume 266 check metadata 262 CIMOM 186 cimom.log 528 clear log files 262 CLI 37, 42, 46, 74, 102, 228, 252 CLI password 253–254 CLI user 247 CLI_USER 254 client 51, 94 client access validation 430 client configuration file 533 client data access 427 client dump 532–533 client installation 149 client log 530 client logging 530, 532–533 client monitoring 412 client operations 296 client properties 157 client tracing 531 client validation script 430 clients 261 cluster configuration 262 cluster log 530 cluster name 293 cluster statistics 400 cluster status 262, 298 clustering 52, 87 commands 44 configuration files 481 configure AIX client 175 configure OpenLDAP 594 consolidated logs 530 convert static to dynamic 295 copy on write 376 create file management policy 442 create fileset 290 create FlashCopy 380 create policy 309, 311 create user domain 365 create user map 366 create volume 264 data access 427 data migration 88–89, 389 decrease cluster 397 default storage pool 263, 277, 556 default user pool 49, 79, 328 defragment files 441 deinstall client 157 delete fileset 296
delete MDS 397 delete storage pool 277 delete volume 266 detach fileset 296 detect new LUNs 263 device candidate list 185 direct I/O 53, 87 directory server 348–349 disable autorestart 262 disable default pool 330 discover LUNs 269 disk access SAN File System LUN access 67 disk access 69 display cluster status 262 display engines 262 display LUNs 264 display policy rules 310 domain 365 drain volume 266 DRfile 481 DS4000 119 dual hardware components 59 dump 532–533 dynamic 287 dynamic fileset 415 enable autorestart 262 engine 37, 40, 252 engine status 262 error log 331 Ethernet bonding 60, 67, 84 event logging 331, 526 execute file management policy 443 expand system volume 284 expand user volume 277 failback 418 fail-over 413 failover 233, 419 failover monitoring 421 failover time 427 FAStT support 119 faulty disk 268 fencing 82 file defragmentation 441 file lifecycle management 441 file management policy 50, 441 file metadata 41 file metadata information 401 file movement 436 file permission mapping 338–339 file placement policy 49, 80, 304 file sharing 78, 297, 338 fileset 41, 46–47, 286, 415, 557 fileset assignment 287 fileset failover 415 fileset hard quota 48 fileset ownership 300 fileset permissions 291 fileset quota 290
fileset redistribution 415 fileset soft quota 48 fileset statistics 399 fileset status 262 fileset threshold 290 filesets 415 first-failure data capture 527 FlashCopy 42, 47, 55, 58, 80, 291, 376, 478, 560 FlashCopy considerations 378 flexible SAN 69 global namespace 34, 41, 46, 54 grace period 41 groups 72 GUI 252, 256 GUI monitoring 402 GUI Web server 43 hard quota 48 hardware faults 60 hardware validation 105 HBA 37 heartbeat 60 Heimdal 356 helper service 156 heterogeneous file sharing 54, 78, 338 high availability 81, 413 homogeneous file sharing 54, 338 IBM Director 421 identifying master MDS 255 implementation services 90 increase cluster 396, 398 install AIX client 169 install client 149 install Master Console 187 install Solaris client 168 installation 126, 138, 237 instant copy 58 iSCSI 40 Kerberos 357 kernel extension 172 LDAP 73, 100, 186, 252, 348, 566, 590 leases 41 license agreement 138 lifecycle management 50, 441 lifecycle management recommendations 446 Linux client 164 Linux client logging 533 Linux kernel 128 list administrators 261 list clients 261, 402 list engines 262 list filesets 291 list FlashCopy images 380, 384 list logs 261 list LUNs 260, 264 list mapped user IDs 364 list policy 310 list pools 265 list server 262 list servers 258
list storage pools 260 list user map 366 list volume contents 267 list volumes 259 listing logs 522 load balancing 557 local authentication 72, 100, 138, 186, 246 locks 41 log files 261, 331 log.audit 526 log.std 331, 526 log.trace 527 logging 331, 521, 526–528, 530, 532–533 logs 417, 522 LUN 263 LUN expansion 70 LUN-based backup 478–479 LUNs 260 make fileset 290 make FlashCopy 380 make user domain 365 make user map 366 make volume 264, 270 map users 347 Master Console 38, 45, 520 master console installation 187 master failover 419 master metadata server 41 MBCS 56, 322 MDS 17, 36, 40 MDS installation 138, 237 MDS performance 400 MDS validation 105 message format 522 metadata 47, 78, 88, 94 Metadata server 36–37, 67 metadata size 92 migrate data 391 migrate to local authentication 246 migration services 90 MMC 157 modify fileset 295 modify storage pool 276 modify volume 266 monitor clients 412 monitor server performance 400 monitoring 398 monitoring failover 421 move file 436 MSCS 88 multi-path device driver 119 N+1 415, 427 nested fileset 48, 289 nested filesets 83 network bonding 60, 67, 84 network infrastructure 71 new LUNs 263 NIS 355, 364 NLS 322 non-uniform configuration 69, 79, 303, 328, 334,
428–429 NTFS restrictions 57 ODBC 534 offline data migration 89 one-button data collection 534 online data migration 90 Oplocks 162 package installation 138, 237 partition size 269 password 254 performance monitoring 398 permission mapping 338–339 planning worksheets 95 policy 49, 304–305, 334, 554 policy best practices 334 policy evaluation 328 policy rule 305, 556 policy rule examples 322 policy rule syntax 307 policy rules 310 policy statistics 332 pool 268 preallocation 324 prerequisite software 135 primary allegiance 297, 340, 479 privileged client 86, 261, 291, 297–298, 366, 391, 429, 452 problem determination 520 processes 262 QLogic driver 135 quiesce cluster 262 quota 290 RDAC 121, 235 re-add MDS 398 reassign fileset 262, 294 recovery time 427 rediscover LUNs 263, 269 reliability 59 Remote Access 46, 520 remove faulty disk 268 remove fileset 296 remove FlashCopy image 387 remove MDS 397 remove privileged client 300 remove storage pool 277 remove volume 266 volume drain 70 report files in volume 332 report fileset use 304 reports 405 resume cluster 262 revert FlashCopy image 384 rogue server 82, 413 rolling upgrade 67, 233 root access 86, 297 root squashing 297 RSA 38, 71, 135, 231, 235, 538 RSA card 46 RSA fencing 82 RSA II 44, 60
RSA logs 537 rule file 309 Samba 78, 357 SAN fencing 82 SDD 118, 235 secure shell 136, 228 security log 528 server log 526 server statistics 262, 400 server status 258, 262 set alerts 546 set default User Pool 263 set hardware clock 129–130 setup local authentication 100 sfscli 44, 74, 102 show cluster status 262 show filesets 291 show LUN access 304 show pools 265 show user map 366 show volume access 304 sizing 91–92 Snap-in 158 SNMP 421, 543 soft quota 48 software 39 software faults 60 Solaris client 168 spare MDS 415, 427 ssh 228 ssh keys 136 start cluster 262 start server 262 statfileset 399 static fileset 287, 415 statserver 400 stclient.conf 172 stop cluster 262 stop server 262 storage pool 48–49, 78, 268, 555 storage pool design 78 storage pool threshold 269 supported clients 85 suspend volume 263 system metadata 41 System Pool 49, 78, 91 system time 130 system volume size 92 take ownership of fileset 300 TankSysCLI.attachpoint 492 TankSysCLI.auto 490 TankSysCLI.volume 490 threshold 269 time zone 130 TMVT 105, 146 trace properties 163 tracing 527, 531 transaction rate 557 uniform configuration 69 UNIX-based client 338
update policy 311 upgrade 233 upgrade AIX client 244 upgrade kernel 128 upgrade to local authentication 246 upgrade Windows client 245 upgrading 230 use local authentication 100 usepolicy 311 user domain 347, 365 user ID synchronization 338 user map entries 54, 339, 347 User Pools 49, 79, 92 user volumes 40 validate RSA 538 verify servers 258 verify volumes 259 viewing logs 522 virtual I/O 53 volume 263 volume contents 267 volume drain 70, 266 volume expansion 277, 284 volume files report 332 volume visibility 69 volumes 42 VPN 46, 520 winbind 348, 357–358 Windows client 149 Windows client dump 532 Windows client logging 530 Windows client tracing 531 Windows driver 156 workload 79 workload balancing 80, 427 workload unit 286 zSeries client 178 SAN File System Metadata server 40 SAN File System client AIX 51 Linux 51 pSeries 52 Red Hat Linux 51 statistics 408 SuSE Linux 52 VMWare 51 Windows 2000 51 Windows 2003 51 zSeries 52 SAN File System client commands stfsdebug 532–533 stlog 531 SAN File System commands stopautorestart 242 SAN File System commands 299–300, 453 activatevol 259, 266 addprivclient 261, 298, 452 addserver 396–398
addsnmpmgr 546 attachcontainer 296 attachfileset 296 autofilesetserver 295 builddrscript 488–489 catlog 261, 417, 421, 522, 526, 528–530 chclusterconfig 262, 298 chfileset 295 chpool 276 chvol 266 clearlog 262 datapath query adaptstat 406, 412 detachfileset 296 disabledefaultpool 330 dropserver 397–398 expandvol 281, 286 hwclock 129–130 ldapadd 595 legacy trace 527 lsadmuser 248, 261 lsautorestart 82, 262, 414 lsclient 261, 402 lscluster 489 lsdrfile 488 lsengine 262 lsfileset 291–292 lsimage 380, 384–385 lslun 260, 264, 266–267, 270, 275, 278, 281, 284, 304, 431, 451 lspolicy 310 lspool 260, 265, 276, 278, 282 lsproc 262 lsserver 230, 234, 258, 262, 285, 397, 483–484 lssnmpmgr 546 lstrapsetting 546 lsusermap 366 lsvol 259, 265–267, 276, 278, 282, 286, 304, 428, 431, 438, 452 migratedata 89, 390–391, 393 mkdomain 365 mkdrfile 484, 488 mkfileset 290, 415, 452 mkimage 380, 382 mkpolicy 310, 334 mkpool 269 mkusermap 366 mkvol 70, 263–264, 270, 276, 451 mvfile 436, 438, 440–441 pmf 534 quiescecluster 262, 480 rediscoverluns 263, 269, 275 reportclient 278, 303–304, 428, 438 reportfilesetuse 303–304, 431 reportvolfiles 267–268, 332, 438–440, 455 resetadmuser 100, 263 resumecluster 262, 482 reverttoimage 384–385, 493 rmdrfile 488 rmfileset 296 rmimage 387–388
rmpool 277 rmsnmpmgr 546 rmstclient 174, 182, 244, 280, 482–483 rmvol 70, 266–268, 277 sanfs_ctl disk 186 setdefaultpool 263 setfilesetserver 262, 294, 397, 415, 427 settrap 546 setupsfs 231, 481 setupstclient 165, 172, 174, 182, 280 sfscli 258 startautorestart 82, 241, 262, 415 startcluster 262, 483–484 startmetadatacheck 262 startserver 262, 397 statcluster 236, 255, 262, 298, 400, 421 statfile 401, 428, 445 statfileset 262 statpolicy 332 statserver 262, 400 stfsclient 172 stfsdisk 186 stfsdriver 172 stfsmount 172 stfsumount 174 stopautorestart 234, 262 stopcluster 262, 482–483 stopmetadatacheck 262 stopserver 234, 262, 397, 416 suspendvol 263 tankpasswd 254 tmvt 105 trace 527 upgradecluster 230, 243 usepolicy 334, 453 SAN Volume Controller 12, 14–16, 38, 187 SCSI 40 SDD 38, 69, 85, 109, 136, 407 configure for AIX 115 install on AIX 113 install on Windows 2000 110 upgrade driver 235 SDD commands cfgvpath 274 datapath 408 datapath query adapter 112, 118 datapath query device 111, 116, 118, 270, 274, 483 datapath query devstats 407 hd2vp 117 vp2hd 117 secure ftp 480 secure LDAP 74 secure shell 228 security log 528 server consolidation 34 server log 526 Service Location Protocol 40 services implementation 90 migration 90
setupstclient 172 sfscli 44, 258, 490 script option 489 sfscli -script 489 sfslcm.pl 443 shared disk capacity 5 shared nothing 448 Simple Network Management Protocol 543 sizing 91–92 slapd 592 slapd.conf 592 SLP 23 SMI 6–7 SMI-S 22–23 SMIS 10 SMIT 113 SMS 554 SNIA 6–7, 9–10, 22–23, 31 SNIA Storage Model 9 SNMP 189, 414, 421, 543 add manager 546 set traps 546 snmptrap 425 soft quota 48 software faults 60 Solaris 51, 168 RDAC 120 space consumption 557 spare MDS 415, 427 SSH 136, 252, 521 ssh 228, 233 SSL 74, 569 certificate 74 standards organizations 6 static 287 static fileset 415 stclient.conf 172, 175, 533 stfsclient 172 stfsdebug 532–533 stfsdriver 172 stfsmount 172 stfsstat 408 storage administration costs 4 costs 21 forecasting 21 growth 21 return on investment 21 space consumption 557 standards 10 standards organizations 6 TCO 4, 10 virtualization 11, 31 storage consolidation 5, 34 storage level virtualization 11 storage management costs 21 storage model SNIA 9
storage partitioning 67 storage pool 48–49, 78, 268 alert 269 allocation size 269 partition size 269 threshold 269 Storage Resource Management ROI 21 Subsystem Device Driver see SDD Sun Cluster 52 SUSE 37–38, 51, 121, 128–129, 590 SVC 10, 15–16, 34, 45, 66, 70–71, 187, 235, 270, 277, 284, 406 expand disk 279 LUN id 278 vdisk 271 SVC see SAN Volume Controller svcinfo lsvdiskhostmap 278 symbolic link 46 symmetric virtualization 13 sysdumpstart 533 syslog 532–533 syslog.conf 532–533 syslogd 534 system metadata 41 System Pool 49, 78, 91 add volume 274 expand volume 284 system-managed storage 16
T
take ownership 300 take ownership of fileset 367 tankpasswd 247 TCO 4, 10, 16 threshold 269, 290 Tivoli 12 Tivoli Bonus Pack for SAN Management 22 Tivoli Storage Manager 478 Tivoli Storage Manager see ITSM Tivoli Storage Resource Manager 21 TMVT 105, 146 TotalStorage Open Software Family 14 TotalStorage Productivity Center 19 TotalStorage Productivity Center for Data 21 TotalStorage Productivity Center for Disk 22 TotalStorage Productivity Center for Fabric 20 TotalStorage Productivity Center for Replication 24 touch 532–533 TPC 19 TPC for Data 21 TPC for Disk 22 device management 23 Device Manager 23 performance management 23 TPC for Fabric 20, 22, 45 TPC for Replication 24 trace 163 trace log 527
tracing 527, 531 transaction rate 557 TSM see Tivoli Storage Manager TSRM 21 tunnel 520
W
watchdog 60 WBEM 6, 10 WebSphere 569 winbind 348, 355, 357–358 Windows Active Directory 54, 78, 338–339, 348–350 DB2 559 Directory Change Notification 57 Event Log 530 expand LUN 282 MSCS 53 privileged client 301 registry editor 532 SAN File System client dump 532 SAN File System client logging 530 SAN File System client tracing 531 short names 162 take fileset ownership 301 upgrade client 245 Wordpad 531 Windows 2000 51 install SDD 110 RDAC 119 Windows 2003 51, 110 Windows commands perfmon 412 xcopy 390 Wordpad 531 workload balancing 427
U
UDP 543 uniform configuration 69 United Linux 128–129 UNIX device candidate list 185 privileged client 300 take fileset ownership 300 UNIX-based client 338 upgrade SAN File System 230 UPS 60 usepolicy 311 user domain 347, 365 user ID 74 user map entries 54, 339, 347, 366 User Pools 49, 79, 92 add volume 270 expand volume 277 userPassword 74
V
validate 538 validation 430 vdisk 271 VERITAS 48, 85 VERITAS NetBackup 478 VFS 52 VIO 53 virtual disks 15 virtual file system 36 virtual I/O 53 Virtual Private Network 520 virtual volumes 16 virtualization 11, 31 asymmetric 13 fabric level 11 file level 17 in-band 13–15 network level 11 out-of-band 13 server level 11 storage level 11 symmetric 13 VM 11 volume 263 create 270 list contents 332 volume drain 266 volume visibility 69 VPN 46, 520–521 VSAN 107
X
xSeries 37
Z
z/VM 178 zones 20 zoneShow 108 zoning 67, 108 zSeries 178
Back cover