
Front cover

IBM TotalStorage SAN File System


New! Updated for Version 2.2.2 of SAN File System

Heterogeneous file sharing

Policy-based file lifecycle management

Charlotte Brooks
Huang Dachuan
Derek Jackson
Matthew A. Miller
Massimo Rosichini

ibm.com/redbooks

International Technical Support Organization

IBM TotalStorage SAN File System

January 2006

SG24-7057-03

Note: Before using this information and the product it supports, read the information in Notices on page xix.

Fourth Edition (January 2006)

This edition applies to Version 2, Release 2, Modification 2 of IBM TotalStorage SAN File System (product number 5765-FS2) at the time of its announcement in October 2005. Please note that pre-release code was used for the screen captures and command output; some minor details may vary from the generally available product.

© Copyright International Business Machines Corporation 2003, 2004, 2006. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents

Figures  xi
Notices  xix
Trademarks  xx
Preface  xxi
The team that wrote this redbook  xxi
Become a published author  xxiii
Comments welcome  xxiii
Summary of changes  xxv
December 2004, Third Edition  xxv
January 2006, Fourth Edition  xxv

Part 1. Introduction to IBM TotalStorage SAN File System  1

Chapter 1. Introduction  3
1.1 Introduction: Growth of SANs  4
1.2 Storage networking technology: Industry trends  4
1.2.1 Standards organizations and standards  6
1.2.2 Storage Networking Industry Association  9
1.2.3 The IBM approach  10
1.3 Rise of storage virtualization  11
1.3.1 What is virtualization?  11
1.3.2 Types of storage virtualization  11
1.3.3 Storage virtualization models  13
1.4 SAN data sharing issues  14
1.5 IBM TotalStorage Open Software Family  14
1.5.1 IBM TotalStorage SAN Volume Controller  15
1.5.2 IBM TotalStorage SAN File System  16
1.5.3 Comparison of SAN Volume Controller and SAN File System  18
1.5.4 IBM TotalStorage Productivity Center  19
1.5.5 TotalStorage Productivity Center for Fabric  20
1.5.6 TotalStorage Productivity Center for Data  21
1.5.7 TotalStorage Productivity Center for Disk  22
1.5.8 TotalStorage Productivity Center for Replication  24
1.6 File system general terminology  25
1.6.1 What is a file system?  25
1.6.2 File system types  27
1.6.3 Selecting a file system  28
1.7 Filesets and the global namespace  30
1.8 Value statement of IBM TotalStorage SAN File System  30

Chapter 2. SAN File System overview  33
2.1 SAN File System product overview  34
2.2 SAN File System V2.2 enhancements overview  35
2.3 SAN File System V2.2.1 and V2.2.2 enhancements overview  35
2.4 SAN File System architecture  36
2.5 SAN File System hardware and software prerequisites  37
2.5.1 Metadata server  37
2.5.2 Master Console hardware and software  38
2.5.3 SAN File System software  39
2.5.4 Supported storage for SAN File System  39
2.5.5 SAN File System engines  40
2.5.6 Master Console  45
2.5.7 Global namespace  46
2.5.8 Filesets  47
2.5.9 Storage pools  48
2.5.10 Policy based storage and data management  49
2.5.11 Clients  51
2.5.12 FlashCopy  58
2.5.13 Reliability and availability  59
2.5.14 Summary of major features  61

Part 2. Planning, installing, and upgrading  63

Chapter 3. MDS system design, architecture, and planning issues  65
3.1 Site infrastructure  66
3.2 Fabric needs and storage partitioning  67
3.3 SAN File System volume visibility  69
3.3.1 Uniform SAN File System configuration  69
3.3.2 Non-uniform SAN File System configuration  69
3.4 Network infrastructure  71
3.5 Security  72
3.5.1 Local authentication  72
3.5.2 LDAP  73
3.6 File sharing  78
3.6.1 Advanced heterogenous file sharing  78
3.6.2 File sharing with Samba  78
3.7 Planning the SAN File System configuration  78
3.7.1 Storage pools and filesets  78
3.7.2 File placement policies  80
3.7.3 FlashCopy considerations  80
3.8 Planning for high availability  81
3.8.1 Cluster availability  81
3.8.2 Autorestart service  82
3.8.3 MDS fencing  82
3.8.4 Fileset and workload distribution  83
3.8.5 Network planning  84
3.8.6 SAN planning  85
3.9 Client needs and application support  85
3.9.1 Client needs  85
3.9.2 Privileged clients  86
3.9.3 Client application support  87
3.9.4 Clustering support  87
3.9.5 Linux for zSeries  88
3.10 Data migration  88
3.10.1 Offline data migration  89
3.10.2 Online data migration  90
3.11 Implementation services for SAN File System  90
3.12 SAN File System sizing guide  91
3.12.1 Assumptions  91
3.12.2 IP network sizing  91
3.12.3 Storage sizing  91
3.12.4 SAN File System sizing  92
3.13 Planning worksheets  95
3.14 Deploying SAN File System into an existing SAN  96
3.15 Additional materials  97

Chapter 4. Pre-installation configuration  99
4.1 Security considerations  100
4.1.1 Local authentication configuration  100
4.1.2 LDAP and SAN File System considerations  101
4.2 Target Machine Validation Tool (TMVT)  105
4.3 SAN and zoning considerations  106
4.4 Subsystem Device Driver  109
4.4.1 Install and verify SDD on Windows 2000 client  110
4.4.2 Install and verify SDD on an AIX client  112
4.4.3 Install and verify SDD on MDS  117
4.5 Redundant Disk Array Controller (RDAC)  119
4.5.1 RDAC on Windows 2000 client  119
4.5.2 RDAC on AIX client  120
4.5.3 RDAC on MDS and Linux client  121

Chapter 5. Installation and basic setup for SAN File System  125
5.1 Installation process overview  126
5.2 SAN File System MDS installation  126
5.2.1 Pre-installation setting and configurations on each MDS  127
5.2.2 Install software on each MDS engine  127
5.2.3 SUSE Linux 8 installation  128
5.2.4 Upgrade MDS BIOS and RSA II firmware  135
5.2.5 Install prerequisite software on the MDS  135
5.2.6 Install SAN File System cluster  138
5.2.7 SAN File System cluster configuration  147
5.3 SAN File System clients  149
5.3.1 SAN File System Windows 2000/2003 client  149
5.3.2 SAN File System Linux client installation  164
5.3.3 SAN File System Solaris installation  168
5.3.4 SAN File System AIX client installation  169
5.3.5 SAN File System zSeries Linux client installation  178
5.4 UNIX device candidate list  185
5.5 Local administrator authentication option  186
5.6 Installing the Master Console  187
5.6.1 Prerequisites  187
5.6.2 Installing Master Console software  192
5.7 SAN File System MDS remote access setup (PuTTY / ssh)  228
5.7.1 Secure shell overview  228

Chapter 6. Upgrading SAN File System to Version 2.2.2  229
6.1 Introduction  230
6.2 Preparing to upgrade the cluster  231
6.3 Upgrade each MDS  233
6.3.1 Stop SAN File System processes on the MDS  234
6.3.2 Upgrade MDS BIOS and RSA II firmware  234
6.3.3 Upgrade the disk subsystem software  235
6.3.4 Upgrade the Linux operating system  236
6.3.5 Upgrade the MDS software  236
6.4 Special case: upgrading the master MDS  241
6.5 Commit the cluster upgrade  243
6.6 Upgrading the SAN File System clients  244
6.6.1 Upgrade SAN File System AIX clients  244
6.6.2 Upgrade Solaris/Linux clients  245
6.6.3 Upgrade SAN File System Windows clients  245
6.7 Switching from LDAP to local authentication  246

Part 3. Configuration, operation, maintenance, and problem determination  249

Chapter 7. Basic operations and configuration  251
7.1 Administrative interfaces to SAN File System  252
7.1.1 Accessing the CLI  252
7.1.2 Accessing the GUI  256
7.2 Basic navigation and verifying the cluster setup  258
7.2.1 Verify servers  258
7.2.2 Verify system volume  259
7.2.3 Verify pools  259
7.2.4 Verify LUNs  260
7.2.5 Verify administrators  261
7.2.6 Basic commands using CLI  261
7.3 Adding and removing volumes  263
7.3.1 Adding a new volume to SAN File System  263
7.3.2 Changing volume settings  266
7.3.3 Removing a volume  266
7.4 Storage pools  268
7.4.1 Creating a storage pool  269
7.4.2 Adding a volume to a user storage pool  270
7.4.3 Adding a volume to the System Pool  270
7.4.4 Changing a storage pool  276
7.4.5 Removing a storage pool  277
7.4.6 Expanding a user storage pool volume  277
7.4.7 Expanding a volume in the system storage pool  284
7.5 Filesets  286
7.5.1 Relationship of filesets to storage pools  287
7.5.2 Nested filesets  289
7.5.3 Creating filesets  290
7.5.4 Moving filesets  294
7.5.5 Changing fileset characteristics  295
7.5.6 Additional fileset commands  296
7.5.7 NLS support with filesets  296
7.6 Client operations  296
7.6.1 Fileset permissions  297
7.6.2 Privileged clients  297
7.6.3 Take ownership of filesets  300
7.7 Non-uniform SAN File System configurations  303
7.7.1 Display a list of clients with access to particular volume or LUN  304
7.7.2 List fileset to storage pool relationship  304
7.8 File placement policy  304
7.8.1 Policies and rules  305
7.8.2 Rules syntax  307
7.8.3 Create a policy and rules with CLI  309
7.8.4 Creating a policy and rules with GUI  311
7.8.5 More examples of policy rules  322
7.8.6 NLS support with policies  322
7.8.7 File storage preallocation  324
7.8.8 Policy management considerations  328
7.8.9 Best practices for managing policies  334

Chapter 8. File sharing  337
8.1 File sharing overview  338
8.2 Basic heterogeneous file sharing  340
8.2.1 Implementation: Basic heterogeneous file sharing  340
8.3 Advanced heterogeneous file sharing  347
8.3.1 Software components  348
8.3.2 Administrative commands  348
8.3.3 Configuration overview  348
8.3.4 Directory server configuration  349
8.3.5 MDS configuration  355
8.3.6 Implementation of advanced heterogeneous file sharing  365

Chapter 9. Advanced operations  375
9.1 SAN File System FlashCopy  376
9.1.1 How FlashCopy works  376
9.1.2 Creating, managing, and using the FlashCopy images  378
9.2 Data migration  389
9.2.1 Planning migration with the migratedata command  390
9.2.2 Perform migration  391
9.2.3 Post-migration steps  395
9.3 Adding and removing Metadata servers  396
9.3.1 Adding a new MDS  396
9.3.2 Removing an MDS  397
9.3.3 Adding an MDS after previous removal  398
9.4 Monitoring and gathering performance statistics  398
9.4.1 Gathering and analyzing performance statistics  399
9.5 MDS automated failover  413
9.5.1 Failure detection  414
9.5.2 Fileset redistribution  415
9.5.3 Master MDS failover  419
9.5.4 Failover monitoring  421
9.5.5 General recommendations for minimizing recovery time  427
9.6 How SAN File System clients access data  427
9.7 Non-uniform configuration client validation  429
9.7.1 Client validation sample script details  430
9.7.2 Using the client validation sample script  431

Chapter 10. File movement and lifecycle management  435
10.1 Manually move and defragment files  436
10.1.1 Move a single file using the mvfile command  436
10.1.2 Move multiple files using the mvfile command  439
10.1.3 Defragmenting files using the mvfile command  441
10.2 Lifecycle management with file management policy  441
10.2.1 File management policy syntax  442
10.2.2 Creating a file management policy  442
10.2.3 Executing the file management policy  443
10.2.4 Lifecycle management recommendations and considerations  446

Chapter 11. Clustering the SAN File System Microsoft Windows client  447
11.1 Configuration overview  448
11.2 Cluster configuration  449
11.2.1 MSCS configuration  449
11.2.2 SAN File System configuration  450
11.3 Installing the SAN File System MSCS Enablement package  455
11.4 Configuring SAN File System for MSCS  458
11.4.1 Creating additional cluster groups  468
11.5 Setting up cluster-managed CIFS share  468

Chapter 12. Protecting the SAN File System environment  477
12.1 Introduction  478
12.1.1 Types of backup  478
12.2 Disaster recovery: backup and restore  479
12.2.1 LUN-based backup  479
12.2.2 Setting up a LUN-based backup  480
12.2.3 Restore from a LUN based backup  482
12.3 Backing up and restoring system metadata  484
12.3.1 Backing up system metadata  484
12.3.2 Restoring the system metadata  488
12.4 File recovery using SAN File System FlashCopy function  493
12.4.1 Creating FlashCopy image  494
12.4.2 Reverting FlashCopy images  498
12.5 Back up and restore using IBM Tivoli Storage Manager  502
12.5.1 Benefits of Tivoli Storage Manager with SAN File System  502
12.6 Backup/restore scenarios with Tivoli Storage Manager  503
12.6.1 Back up Windows data using Tivoli Storage Manager Windows client  504
12.6.2 Back up user data in UNIX filesets with TSM client for AIX  507
12.6.3 Backing up FlashCopy images with the snapshotroot option  510

Chapter 13. Problem determination and troubleshooting  519
13.1 Overview  520
13.2 Remote access support  520
13.3 Logging and tracing  521
13.3.1 SAN File System Message convention  522
13.3.2 Metadata server logs  525
13.3.3 Administrative and security logs  528
13.3.4 Consolidated server message logs  530
13.3.5 Client logs and traces  530
13.4 SAN File System data collection  534
13.5 Remote Supervisor Adapter II  537
13.5.1 Validating the RSA configuration  538
13.5.2 RSA II management  538
13.6 Simple Network Management Protocol  543
13.6.1 SNMP and SAN File System  543
13.7 Hints and tips  546
13.8 SAN File System Message conventions  547

Part 4. Exploiting the SAN File System  551

Chapter 14. DB2 with SAN File System  553
14.1 Introduction to DB2  554
14.2 Policy placement  554
14.2.1 SMS tablespaces  554
14.2.2 DMS tablespaces  555
14.2.3 Other data  556
14.2.4 Sample SAN File System policy rules  556
14.3 Storage management  557
14.4 Load balancing  557
14.5 Direct I/O support  558
14.6 High availability clustering  560
14.7 FlashCopy  560
14.8 Database path considerations  560

Part 5. Appendixes  563

Appendix A. Installing IBM Directory Server and configuring for SAN File System  565
Installing IBM Tivoli Directory Server V5.1  566
Creating the LDAP database  570
Configuring IBM Directory Server for SAN File System  574
Starting the LDAP Server and configuring Admin Server  577
Verifying LDAP entries  585
Sample LDIF file used  587

Appendix B. Installing OpenLDAP and configuring for SAN File System  589
Introduction to OpenLDAP 2.0.x on Red Hat Linux  590
Installation of OpenLDAP packages  590
Configuration of OpenLDAP client  591
Configuration of OpenLDAP server  592
Configure OpenLDAP for SAN File System  594

Appendix C. Client configuration validation script  597
Sample script listing  598

Appendix D. Additional material  603
Locating the Web material  603
Using the Web material  603
System requirements for downloading the Web material  603
How to use the Web material  604

Abbreviations and acronyms  605

Related publications  607
IBM Redbooks  607
Other publications  607
Online resources  608
How to get IBM Redbooks  611
Help from IBM  611

Index  613


Figures
1-1  SAN Management standards bodies  6
1-2  CIMOM proxy model  8
1-3  SNIA storage model  9
1-4  Intelligence moving to the network  12
1-5  In-band and out-of-band models  13
1-6  Block level virtualization  15
1-7  IBM TotalStorage SAN Volume Controller  16
1-8  File level virtualization  17
1-9  IBM TotalStorage SAN File System architecture  18
1-10  Summary of SAN Volume Controller and SAN File System benefits  19
1-11  TPC for Fabric  21
1-12  TPC for Data  22
1-13  TPC for Disk functions  24
1-14  TPC for Replication  25
1-15  Windows system hierarchical view  26
1-16  Windows file system security and permissions  27
1-17  File system types  28
1-18  Global namespace  30
2-1  SAN File System architecture  36
2-2  SAN File System administrative structure  43
2-3  SAN File System GUI browser interface  44
2-4  Global namespace  47
2-5  Filesets and nested filesets  48
2-6  SAN File System storage pools  49
2-7  File placement policy execution  50
2-8  Windows 2000 client view of SAN File System  55
2-9  Exploring the SAN File System from a Windows 2000 client  55
2-10  FlashCopy images  59
3-1  Mapping of Metadata and User data to MDS and clients  68
3-2  Illustrating network setup  72
3-3  Data classification example  79
3-4  SAN File System design  84
3-5  SAN File System data migration process  89
3-6  SAN File System data flow  93
3-7  Typical data and metadata flow for a generic application with SAN File System  94
3-8  SAN File System changes the way we look at the Storage in today's SANs  97
4-1  LDAP tree  102
4-2  Example of setup  108
4-3  Verify disks are seen as 2145 disk devices  111
5-1  SAN File System Console GUI sign-on window  147
5-2  Select language for installation  150
5-3  SAN File System Windows 2000 Client Welcome window  150
5-4  Security Warning  151
5-5  Configuration parameters  152
5-6  Review installation settings  153
5-7  Security alert warning  153
5-8  Driver IBM SANFS Cluster Bus Enumerator  154
5-9  Driver IBM SAN Volume Manager  154
5-10  Start SAN File System client immediately  155
5-11  Windows client explorer  155
5-12  Windows 2000 client SAN File System drivers  156
5-13  Windows 2003 client SAN File System drivers  156
5-14  SAN File System helper service  157
5-15  Launch MMC  158
5-16  Add the Snap-in for SAN File System  158
5-17  Add Snap-in  159
5-18  Add the IBM TotalStorage System Snap-in  159
5-19  Add/Remove Snap-in  160
5-20  Save MMC console  160
5-21  Save MMC console to the Windows desktop  161
5-22  IBM TotalStorage File System Snap-in Properties  161
5-23  DisableShortNames  162
5-24  Verify value for DisableShortNames  162
5-25  Trace Properties  163
5-26  Volume Properties  163
5-27  Modify Volume Properties  164
5-28  J2RE Setup Type  188
5-29  J2RE verify the install  189
5-30  SNMP Service Window  190
5-31  SNMP Service Properties  191
5-32  Verifying SNMP and SNMP Trap Service  192
5-33  Master Console installation wizard initial window  194
5-34  Set user account privileges  194
5-35  Adobe Installer Window  195
5-36  Master Console installation wizard information  196
5-37  Select optional products to install  197
5-38  Viewing the Products List  198
5-39  PuTTY installation complete  199
5-40  DB2 Setup wizard  200
5-41  DB2 select installation type  201
5-42  DB2 select installation action  202
5-43  DB2 Username and Password menu  203
5-44  DB2 administration contact  204
5-45  DB2 instance  205
5-46  DB2 tools catalog  206
5-47  DB2 administration contact  207
5-48  DB2 confirm installation settings  208
5-49  DB2 confirm installation settings  209
5-50  Verify DB2 install  210
5-51  Verify SVC console install  211
5-52  Select database repository  212
5-53  Specify single DB2 user ID  212
5-54  Enter DB2 user ID  213
5-55  Set trapdSharePort162  214
5-56  Define trapdTrapReceptionPort  215
5-57  Enter TSANM Manager name and port  216
5-58  IBM Director Installation Directory window  217
5-59  IBM Director Service Account Information  217
5-60  IBM Director network drivers  218
5-61  IBM Director database configuration  218
5-62  IBM Director superuser  220
5-63  Disk Management  222
5-64  Upgrade to dynamic disk  223
5-65  Verify both disks are set to type Dynamic  223
5-66  Add Mirror  224
5-67  Select mirrored disk  225
5-68  Mirroring process  225
5-69  Mirror Process completed  226
5-70  Setting Folder Options  226
6-1  SAN File System console  245
7-1  Create PuTTY ssh session  253
7-2  SAN File System GUI login window  256
7-3  GUI welcome window  257
7-4  Information Center  258
7-5  Basic SAN File System configuration  264
7-6  Select expand vdisk  279
7-7  vdisk expansion window  280
7-8  Data LUN display  281
7-9  Disk before expansion  283
7-10  Disk after expansion  284
7-11  Relationship of fileset to storage pool  288
7-12  Filesets from the MDS and client perspective  289
7-13  Nested filesets  289
7-14  Nested filesets  292
7-15  Windows Explorer shows cluster name sanfs as the drive label  293
7-16  List nested filesets  294
7-17  MBCS characters in fileset attachment directory  296
7-18  Select properties of fileset  301
7-19  ACL for the fileset  302
7-20  Verify change of ownership  302
7-21  Windows security tab  303
7-22  Policy rules based file placement  306
7-23  Policies in SAN File System Console (GUI)  312
7-24  Create a New Policy  313
7-25  New Policy: High Level Settings sample input  314
7-26  Add Rules to Policy  315
7-27  New rule created  316
7-28  Edit Rules for Policy  317
7-29  List of defined policies  318
7-30  Activate Policy  318
7-31  Verify Activate Policy  319
7-32  New Policy activated  319
7-33  Delete a Policy  320
7-34  Verify - Delete Policy Window  321
7-35  List Policies  321
7-36  MBCS characters in policy rule  323
7-37  Generated SQL for MBCS characters in policy rule  324
7-38  Select a policy  326
7-39  Rules for selected policy  326
7-40  Edited rule for Preallocation  327
7-41  Activate new policy  327
7-42  Disable default pool with GUI  331
7-43  Display policy statistics  333
8-1  View Windows permissions on newly created fileset  341
8-2  Set permissions for Everyone group
8-3  Advanced permissions for Everyone
8-4  Set permissions on Administrator group to allow Full control
8-5  View Windows permissions on winfiles fileset
8-6  View Windows permissions on fileset
8-7  Read permission for Everyone group
8-8  SAN File System user mapping
8-9  Sample configuration for advanced heterogeneous file sharing
8-10  Created Active Directory Domain Controller and Domain: sanfsdom.net
8-11  User Creation Verification in Active Directory
8-12  SAN File System Windows client added to Active Directory domain
8-13  Sample heterogeneous file sharing LDAP diagram
8-14  Log on as sanfsuser
8-15  Contents of svcfileset6
8-16  unixfile.txt permissions
8-17  Edit the file in Windows as sanfsuser and save it
8-18  Create the file on the Windows client as sanfsuser
8-19  Show file contents in Windows as sanfsuser
8-20  winfile.txt permissions from Windows
9-1  Make FlashCopy
9-2  Copy on write
9-3  The .flashcopy directory view
9-4  Create FlashCopy image GUI
9-5  Create FlashCopy wizard
9-6  Fileset selection
9-7  Set Flashcopy image properties
9-8  Verify FlashCopy image properties
9-9  FlashCopy image created
9-10  List of FlashCopy images using GUI
9-11  List of FlashCopy images before and after a revert operation
9-12  Select image to revert
9-13  Delete Image selection
9-14  Delete Image verification
Delete image complete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Data migration to SAN File System: data flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . SAN File System overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . View statistics: client sessions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Statistics: Storage Pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Console Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Create report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . View report. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . SAN File System failures and actions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . List of MDS in the cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . List of filesets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Metadata server mds3 missing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Filesets list after failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Metadata server mds3 not started automatically . . . . . . . . . . . . . . . . . . . . . . . . . . . Failback warning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Graceful stop of the master Metadata server. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Metadata server mds2 as new master. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Configuring SANFS for SNMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Selecting the event severity level that will trigger traps . . . . . . . . . . . . . . . . . . . . . . Log into IBM Director Console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

341 342 343 343 345 346 347 350 351 351 352 352 368 369 369 370 371 371 373 377 377 379 381 381 382 382 383 383 384 386 387 388 388 389 390 403 404 404 405 405 406 414 416 416 417 417 418 418 420 420 422 422 423

IBM TotalStorage SAN File System

9-35 9-36 9-37 9-38 9-39 9-40 9-41 9-42 10-1 10-2 11-1 11-2 11-3 11-4 11-5 11-6 11-7 11-8 11-9 11-10 11-11 11-12 11-13 11-14 11-15 11-16 11-17 11-18 11-19 11-20 11-21 11-22 11-23 11-24 11-25 11-26 11-27 11-28 11-29 11-30 11-31 11-32 11-33 11-34 11-35 11-36 11-37 11-38 11-39 11-40 11-41 11-42 11-43

Discover SNMP devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Compile a new MIB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Select the MIB to compile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . MIB compilation status windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Viewing all events in IBM Director . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Viewing the test trap in IBM Director . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Trap sent when an MDS is shutdown . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Example of required client access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Windows-based client accessing homefiles fileset . . . . . . . . . . . . . . . . . . . . . . . . . Verify file sizes in homefiles fileset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . MSCS lab setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Basic cluster resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Network Interfaces in the cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Cluster Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . SAN File System client view of the global namespace . . . . . . . . . . . . . . . . . . . . . . Fileset directory accessible . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Show permissions and ownership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Create a file on the fileset. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Choose the installation language. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . License Agreement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Complete the client information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Choose where to install the enablement software . . . . . . . . . . . . . . . . . . . . . . . . . . Confirm the installation parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . New SANFS resource is created . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Create a new cluster group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Name and description for the group. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Specify preferred owners for group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Group created successfully . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ITSOSFSGroup displays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Create new resource . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . New resource name and description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Select all nodes as possible owners . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
Enter resource dependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . SAN File System resource parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Display filesets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Fileset for cluster resource selected . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Cluster resource created successfully . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . New resource in Resource list . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Bring group online . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Group and resource are online . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Resource moves ownership on failures. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Resource stays with current owner after rebooting the original owner . . . . . . . . . . Create IP Address resource. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . IP address resource: General properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . IP address resource: Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Network Name resource: General properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Network Name resource: Dependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Network Name resource: Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . File Share resource: General properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . File Share resource: dependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . File Share resource: parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . All file share resources online . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Designate a drive for the CIFS share. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Figures

423 424 424 425 425 426 426 430 437 443 448 449 450 450 451 454 454 455 456 456 457 457 458 459 459 460 460 461 461 462 462 463 463 464 464 465 465 466 466 467 467 468 469 469 470 471 471 472 472 473 473 474 474 xv

11-44 11-45 11-46 12-1 12-2 12-3 12-4 12-5 12-6 12-7 12-8 12-9 12-10 12-11 12-12 12-13 12-14 12-15 12-16 12-17 12-18 12-19 12-20 12-21 12-22 12-23 12-24 13-1 13-2 13-3 13-4 13-5 13-6 13-7 13-8 13-9 13-10 13-11 13-12 13-13 14-1 14-2 14-3 14-4 A-1 A-2 A-3 A-4 A-5 A-6 A-7 A-8 A-9 xvi

CIFS client access SAN File System via clustered SAN File System client . . . . . . Copy lots of files onto the share. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Drive not accessible. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . SVC FlashCopy relationships and consistency group . . . . . . . . . . . . . . . . . . . . . . . Metadata dump file creation start. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Metadata dump file name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . DR file creation final step . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Delete/remove the metadata dump file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Verify deletion of the metadata dump file. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . FlashCopy option window GUI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . FlashCopy Start GUI window. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Select Filesets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Set Properties of FlashCopy images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Verify FlashCopy settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . FlashCopy images created . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Windows client view of the FlashCopy images . . . . . . . . . . . . . . . . . . . . . . . . . . . . Client file delete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . FlashCopy image revert selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Image restore / revert verification and restore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Remaining FlashCopy images after revert. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Client data restored . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Exploitation of SAN File System with Tivoli Storage Manager. . . . . . . . . . . . . . . . . User files selection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Restore selective file selection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Select destination of restore file(s). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Restore files selection for FlashCopy image backup . . . . . . . . . . . . . . . . . . . . . . . . Restore files destination path selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . IBM Connection Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Steps for remote access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . SAN File System message format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Event viewer on Windows 2000. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . OBDC from GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Remote Supervisor Adapter II . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . RSAII interface using Internet Explorer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Accessing remote power using RSAII . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Access BIOS log using RSAII . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Java Security Warning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . RSA II: Remote control buttons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ASM Remote control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . SNMP configuration on RSA II. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Example storage pool layout for DB2 objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Workload distribution of filesets for DB2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Default data caching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Directory structure information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Select location where to install. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Language selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Setup type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Features to install. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . User ID for DB2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Installation summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . GSKit pop-up . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Installation complete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Configuration tool. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

475 475 476 481 485 486 486 487 487 494 495 495 496 497 497 498 499 499 500 501 501 502 504 505 506 506 507 520 521 522 531 534 537 539 540 541 542 542 543 544 556 558 559 561 566 567 567 568 568 569 569 570 570

IBM TotalStorage SAN File System

A-10 A-11 A-12 A-13 A-14 A-15 A-16 A-17 A-18 A-19 A-20 A-21 A-22 A-23 A-24 A-25 A-26 A-27 A-28 A-29

User ID pop-up. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Enter LDAP database user ID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Enter the name of the database. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Select database codepage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Database location . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Verify database configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Database created. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Add organizational attribute . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Browse for LDIF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Start the import . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . IBM Directory Server login . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . IBM Directory Server Web Administration Tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . Change admin password . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Add host. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Enter host details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Verify that host has been added . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Login to local host name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Admin console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Manage entries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Expand ou=Users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

571 571 572 572 573 573 574 575 576 577 578 579 580 581 582 583 584 585 586 586

Figures

xvii

xviii

IBM TotalStorage SAN File System

Notices
This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrates programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy, modify, and distribute these sample programs in any form without payment to IBM for the purposes of developing, using, marketing, or distributing application programs conforming to IBM's application programming interfaces.


Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:
AFS AIX 5L AIX DB2 Universal Database DB2 DFS Enterprise Storage Server Eserver Eserver FlashCopy HACMP IBM NetView PowerPC POWER POWER5 pSeries Redbooks Redbooks (logo) SecureWay Storage Tank System Storage Tivoli TotalStorage WebSphere xSeries z/VM zSeries

The following terms are trademarks of other companies: Java, J2SE, Solaris, Sun, Sun Java, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Microsoft, Windows NT, Windows, Win32, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. i386, Intel, Pentium, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States, other countries, or both. UNIX is a registered trademark of The Open Group in the United States and other countries. Linux is a trademark of Linus Torvalds in the United States, other countries, or both. Other company, product, or service names may be trademarks or service marks of others.


Preface
This IBM Redbook is a detailed technical guide to the IBM TotalStorage SAN File System. SAN File System is a robust, scalable, and secure network-based file system designed to provide near-local file system performance, file aggregation, and data sharing services in an open environment. SAN File System helps lower the cost of storage management and enhance productivity by providing centralized management, higher storage utilization, and shared access by clients to large amounts of storage. We describe the design and features of SAN File System, as well as how to plan for, install, upgrade, configure, administer, and protect it. This redbook is for all who want to understand, install, configure, and administer SAN File System. It is assumed the reader has basic knowledge of storage and SAN technologies.

The team that wrote this redbook


This redbook was produced by a team of specialists from around the world working at the Advanced Technical Support Storage Solutions Benchmark Center in Gaithersburg, Maryland.

Figure 1 The team: Dachuan, Massimo, Matthew, Derek, and Charlotte


Charlotte Brooks is an IBM Certified IT Specialist and Project Leader for Storage Solutions at the International Technical Support Organization, San Jose Center. She has 14 years of experience with IBM in the fields of IBM TotalStorage hardware and software, IBM Eserver pSeries servers, and AIX. She has written 15 Redbooks, and has developed and taught IBM classes in all areas of storage and storage management. Before joining the ITSO in 2000, she was the Technical Support Manager for Tivoli Storage Manager in the Asia Pacific Region.

Huang Dachuan is an Advisory IT Specialist in the Advanced Technical Support team of IBM China in Beijing. He has nine years of experience in networking and storage support. He is CCIE certified and his expertise includes Storage Area Networks, IBM TotalStorage SAN Volume Controller, SAN File System, ESS, DS6000, DS8000, copy services, and networking products from IBM and Cisco.

Derek Jackson is a Senior IT Specialist working for the Advanced Technical Support Storage Solutions Benchmark Center in Gaithersburg, Maryland. He primarily supports SAN File System, IBM TotalStorage Productivity Center, and the ATS lab infrastructure. Derek has worked for IBM for 22 years, and has been employed in the IT field for 30 years. Before joining ATS, Derek worked for IBM's Business Continuity and Recovery Services and was responsible for delivering networking solutions for its clients.

Matthew A. Miller is an IBM Certified IT Specialist and Systems Engineer with IBM in Phoenix, AZ. He has worked extensively with IBM Tivoli Storage Software products as both a field systems engineer and as a software sales representative, and currently works with Tivoli Techline. Prior to joining IBM in 2000, Matt worked for 16 years in the client community in both technical and managerial positions.

Massimo Rosichini is an IBM Certified Product Services and Country Specialist in the ITS Technical Support Group in Rome, Italy. He has extensive experience in IT support for TotalStorage solutions in the EMEA South Region. He is an ESS/DS Top Gun Specialist and is an IBM Certified Specialist for Enterprise Disk Solutions and Storage Area Network Solutions. He was an author of previous editions of the Redbooks IBM TotalStorage Enterprise Storage Server Implementing ESS Copy Services in Open Environments, SG24-5757, and IBM TotalStorage SAN File System, SG24-7057.

Thanks to the following people for their contributions to this project:

The authors of previous editions of this redbook:
Jorge Daniel Acuña, Asad Ansari, Chrisilia Davis, Ravi Khattar, Michael Newman, Massimo Rosichini, Leos Stehlik, Satoshi Suzuki, Mats Wahlstrom, Eric Wong

Cathy Warrick and Wade Wallace
International Technical Support Organization, San Jose Center

Todd Bates, Ashish Chaurasia, Steve Correl, Vinh Dang, John George, Jeanne Gordon, Matthew Krill, Joseph Morabito, Doug Rosser, Ajay Srivastava, Jason Young
SAN File System Development, IBM Beaverton

Rick Taliaferro, Ida Wood
IBM Raleigh

Herb Ahmuty, John Amann, Kevin Cummings, Gonzalo Fuentes, Craig Gordon, Rosemary McCutchen
IBM Gaithersburg

Todd DeSantis
IBM Pittsburgh

Bill Cochran, Ron Henkhaus
IBM Illinois

Drew Davis
IBM Phoenix

Michael Klein
IBM Germany

John Bynum
IBM San Jose

Become a published author


Join us for a two- to six-week residency program! Help write an IBM Redbook dealing with specific products or solutions, while getting hands-on experience with leading-edge technologies. You'll team with IBM technical professionals, Business Partners, or clients. Your efforts will help increase product acceptance and client satisfaction. As a bonus, you'll develop a network of contacts in IBM development labs, and increase your productivity and marketability. Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html

Comments welcome
Your comments are important to us! We want our Redbooks to be as helpful as possible. Send us your comments about this or other Redbooks in one of the following ways: Use the online Contact us review redbook form found at:
ibm.com/redbooks

Send your comments in an e-mail to:


redbook@us.ibm.com

Mail your comments to: IBM Corporation, International Technical Support Organization Dept. QXXE Building 80-E2 650 Harry Road San Jose, California 95120-6099


Summary of changes
This section describes the technical changes made in this edition of the book and in previous editions. This edition may also include minor corrections and editorial changes that are not identified. Summary of Changes for SG24-7057-03 for IBM TotalStorage SAN File System as created or updated on January 27, 2006.

December 2004, Third Edition


This revision reflects the addition, deletion, or modification of new and changed information described below.

New information
Advanced heterogeneous file sharing
File movement and lifecycle management
File sharing with Samba

Changed information
Client support

January 2006, Fourth Edition


This revision reflects the addition, deletion, or modification of new and changed information described below.

New information
New centralized installation procedure
Preallocation policy for large files
Local authentication option
Microsoft clustering support

Changed information
New MDS server and client platform (including zSeries support)
New RSA connectivity and high availability details


Part 1. Introduction to IBM TotalStorage SAN File System


In this part of the book, we introduce general industry and client issues that have prompted the development of the IBM TotalStorage SAN File System, and then present an overview of the product itself.


Chapter 1. Introduction
In this chapter, we provide background information for SAN File System, including these topics:
Growth in SANs and current challenges
Storage networking technology: industry trends
Rise of storage virtualization and growth of SAN data
Data sharing with SANs: issues
IBM TotalStorage products overview
Introduction to file systems and key concepts
Value statement for SAN File System


1.1 Introduction: Growth of SANs


Storage Area Networks (SANs) have gained wide acceptance. Interoperability issues between components from different vendors connected by a SAN fabric have received attention and have generally been resolved, but the problem of managing the data stored on a variety of devices from different vendors is still a major challenge to the industry.

The volume of data storage required in daily life and business has exploded. Specific figures vary, but it is indisputably true that capacity is growing and hardware costs are decreasing, while availability requirements are rapidly approaching 100%. Three hundred million Internet users are driving two petabytes of data traffic per month. Users are mobile, access patterns are unpredictable, and the content of data becomes more and more interactive.

Clients deploying SANs today face many issues as they build or grow their storage infrastructures. Although the cost of purchasing storage hardware continues its rapid decline, the cost of managing storage is not keeping pace. In some cases, storage management costs are actually rising. Recent studies show that the purchase price of storage hardware comprises as little as 5 to 10 percent of the total cost of storage. The various factors that make up the total cost of ownership include:
Administration costs
Downtime
Environmental overhead
Device management tasks
Backup and recovery procedures
Shortage of skilled storage administrators
Heterogeneous server and storage installations

Information technology managers are under significant pressure to reduce costs while deploying more storage to remain competitive. They must address the increasing complexity of storage systems, the explosive growth in data, and the shortage of skilled storage administrators. Furthermore, the storage infrastructure must be designed to help maximize the availability of critical applications. Storage itself may well be treated as a commodity; however, the management of it is certainly not. In fact, the cost of managing storage is typically many times its actual acquisition cost.

1.2 Storage networking technology: Industry trends


In the late 1990s, storage networking emerged in the form of SANs, Network Attached Storage (NAS), and Internet Small Computer System Interface (iSCSI) technologies. These were aimed at reducing the total cost of ownership (TCO) of storage by managing islands of information among heterogeneous environments with disparate operating systems, data formats, and user interfaces, in a more efficient way. SANs enable you to consolidate storage and share resources by enabling storage capacity to be connected to servers at a greater distance. By disconnecting storage resource management from individual hosts, a SAN enables disk storage capacity to be consolidated. The results can be lower overall costs through better utilization of the storage, lower management costs, increased flexibility, and increased control. This can be achieved physically or logically.


Physical consolidation
Data from disparate storage subsystems can be combined onto large, enterprise-class shared disk arrays, which may be located at some distance from the servers. The capacity of these disk arrays can be shared by multiple servers, and users may also benefit from the advanced functions typically offered with such subsystems. These may include RAID capabilities, remote mirroring, and instantaneous data replication functions, which might not be available with smaller, integrated disks. The array capacity may be partitioned, so that each server has an appropriate portion of the available gigabytes. Available capacity can be dynamically allocated to any server requiring additional space. Capacity not required by a server application can be re-allocated to other servers. This avoids the inefficiency associated with free disk capacity attached to one server not being usable by other servers. Extra capacity may be added nondisruptively. However, physical consolidation does not mean that all wasted space concerns are addressed.

Logical consolidation
It is possible to achieve shared resource benefits from the SAN, but without moving existing equipment. A SAN relationship can be established between a client and a group of storage devices that are not physically co-located (excluding devices that are internally attached to servers). A logical view of the combined disk resources may allow available capacity to be allocated and re-allocated between different applications running on distributed servers, to achieve better utilization.

Extending the reach: iSCSI


While SANs are growing in popularity, there are certain perceived barriers to entry, including higher cost and the complexity of implementation and administration. The iSCSI protocol is intended to address this by bringing some of the performance benefits of a SAN while not requiring the same infrastructure. It achieves this by providing block-based I/O over a TCP/IP network, rather than over the Fibre Channel fabric used by a SAN. Today's storage solutions need to embrace emerging technologies at all price points to offer the client the highest freedom of choice.


1.2.1 Standards organizations and standards


Today, there are at least 10 organizations involved in creating standards for storage, storage management, SAN management, and interoperability. Figure 1-1 shows the key organizations involved in developing and promoting standards relating to storage, storage management, and SAN management, and the relevant standards for which they are responsible.


Figure 1-1 SAN Management standards bodies

Key standards for Storage Management are:
Distributed Management Task Force (DMTF) Common Information Model (CIM) Standards. This includes the CIM Device Model for Storage.
Storage Networking Industry Association (SNIA) Storage Management Initiative (SMI) Specification.

CIM/WBEM management model


CIM was developed as part of the Web-Based Enterprise Management (WBEM) initiative by the Desktop Management Task Force (DMTF) to simplify management of distributed systems. It uses an object-oriented approach to describe management information, and the description (data model) is platform- and vendor-independent. CIM profiles have already been developed for some devices, such as Storage Subsystems, Fibre Channel switches, and NAS devices. IBM's intent is to support CIM-based management as and when device manufacturers deliver CIM-based management interfaces.

CIM/WBEM technology uses a powerful human- and machine-readable language called the managed object format (MOF) to precisely specify object models. Compilers can be developed to read MOF files and automatically generate data type definitions, interface stubs, and GUI constructs to be inserted into management applications.
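As a concrete illustration of how a management application can consume such a CIM-based interface, the short Python sketch below uses the open source pywbem library to issue CIM-XML over HTTP requests against a CIM agent or CIM object manager and to print the returned instances in MOF form. This is only a minimal sketch under stated assumptions: the host name, port, credentials, namespace, and the CIM class queried are illustrative and are not taken from SAN File System or any specific IBM device interface.

# Minimal sketch: query a CIM agent/CIM object manager with pywbem (assumed installed).
# The host, credentials, namespace, and class name below are illustrative only.
import pywbem

conn = pywbem.WBEMConnection(
    "http://cimom.example.com:5988",      # 5988/5989 are the conventional WBEM HTTP/HTTPS ports
    ("cimuser", "cimpassword"),           # illustrative credentials
    default_namespace="root/cimv2")       # the namespace varies by vendor

try:
    # Enumerate instances of a standard CIM class exposed by the device profile.
    for volume in conn.EnumerateInstances("CIM_StorageVolume"):
        # tomof() renders the instance in the managed object format (MOF)
        # discussed above, which is convenient for inspection.
        print(volume.tomof())
except pywbem.Error as exc:
    print("CIM request failed:", exc)

A management application built this way does not need to know anything about the device's proprietary interface; it relies only on the CIM classes and properties that the agent or object manager advertises.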

SMI Specification
SNIA has fully adopted and enhanced the CIM standard for Storage Management in its SMI Specification. SMI Specification was launched in mid-2002 to create and develop a universal open interface for managing storage devices, including storage networks. The idea behind SMIS is to standardize the management interfaces so that management applications can utilize them and provide cross device management. This means that a newly introduced device can be immediately managed, as it will conform to the standards. SMIS extends CIM/WBEM with the following features:

A single management transport: Within the WBEM architecture, the CIM-XML over HTTP protocol was selected for this transport in SMIS.
A complete, unified, and rigidly specified object model: SMIS defines profiles and recipes within the CIM that enable a management client to reliably utilize a component vendor's implementation of the standard, such as the control of LUNs and zones in the context of a SAN.
Consistent use of durable names: As a storage network configuration evolves and is reconfigured, key long-lived resources, like disk volumes, must be uniquely and consistently identified over time.
Rigorously documented client implementation considerations: SMIS provides client developers with vital information for traversing CIM classes within a device/subsystem and between devices/subsystems so that complex storage networking topologies can be successfully mapped and reliably controlled.
An automated discovery system: SMIS-compliant products, when introduced in a SAN environment, will automatically announce their presence and capabilities to other constituents.
Resource locking: SMIS-compliant management applications from multiple vendors can exist in the same storage device or SAN and cooperatively share resources via a lock manager.

The models and protocols in the SMIS implementation are platform-independent, enabling applications to be developed for any platform and to run on different platforms. The SNIA will also provide interoperability tests that will help vendors to verify that their applications and devices conform to the standard.


Integrating existing devices into the CIM model


As these standards are still evolving, we cannot expect that all devices will support the native CIM interface, and because of this, the SMIS is introducing CIM agents and CIM object managers. The agents and object managers bridge proprietary device management to device management models and protocols used by SMIS. The agent is used for one device and an object manager for a set of devices. This type of operation is also called a proxy model and is shown in Figure 1-2. The CIM Agent or CIM Object Manager (CIM/OM) will translate a proprietary management interface to the CIM interface. An example of a CIM/OM is the IBM CIM Object Manager for the IBM TotalStorage Enterprise Storage Server.


Figure 1-2 CIMOM proxy model

In the future, more and more devices will be native CIM compliant, and will therefore have a built-in Agent, as shown in the Embedded Model in Figure 1-2. When widely adopted, SMIS will streamline the way that the entire storage industry deals with management. Management application developers will no longer have to integrate incompatible, feature-poor interfaces into their products. Component developers will no longer have to push their unique interface functionality to applications developers. Instead, both will be better able to concentrate on developing features and functions that have value to end users. Ultimately, faced with reduced costs for management, end users will be able to adopt storage-networking technology faster and build larger, more powerful networks.


1.2.2 Storage Networking Industry Association


The Storage Networking Industry Association (SNIA) was incorporated in December 1997 as a nonprofit trade association that is made up of over 200 companies. SNIA includes well-established storage component vendors as well as emerging storage technology companies. The SNIA mission is to ensure that storage networks become efficient, complete, and trusted solutions across the IT community (see http://www.snia.org/news/mission/). The SNIA vision is to provide a point of cohesion for developers of storage and networking products, in addition to system integrators, application vendors, and service providers for storage networking. SNIA provides architectures, education, and services that will propel storage networking solutions into the broader market.

The SNIA Shared Storage Model


IBM is an active member of SNIA and fully supports SNIA's goals to produce the open architectures, protocols, and APIs required to make storage networking successful. IBM has adopted the SNIA Storage Model and is basing its storage software strategy and road map on this industry-adopted architectural model for storage, as shown in Figure 1-3.


Figure 1-3 SNIA storage model

IBM is committed to deliver best-of-breed products in all aspects of the SNIA storage model, including:
Block aggregation
File/record subsystems
Storage devices/block subsystems
Services subsystems

In the area of block aggregation, IBM offers the IBM TotalStorage SAN Volume Controller (SVC), implemented in an in-band model. In the area of file aggregation in a SAN, IBM offers IBM TotalStorage SAN File System, a SAN-wide file system implemented in an out-of-band model. Both of these solutions will adhere to open industry standards. For more information about SMIS/CIM/WBEM, see the SNIA and DMTF Web sites:
http://www.snia.org
http://www.dmtf.org

Why open standards?


Products that adhere to open standards offer significantly more benefits than proprietary ones. The history of the information technology industry has shown that open systems essentially offer three key benefits:
Better solutions at a lower price: By harnessing the resources of multiple companies, more development resources are brought to bear on common client requirements, such as ease of management.
Improved interoperability: Without open standards, every vendor needs to work with every other vendor to develop interfaces for interoperability. The result is a range of very complex products whose interdependencies make them difficult for clients to install, configure, and maintain.
Client choice: By complying with standards developed jointly, products interoperate seamlessly with each other, preventing vendors from locking clients into their proprietary platform. As client needs and vendor choices change, products that interoperate seamlessly provide clients with more flexibility and improve co-operation among vendors.

More significantly, given the industry-wide focus on business efficiency, the use of fully integrated solutions developed to open industry standards will ultimately drive down the TCO of storage.

1.2.3 The IBM approach


Deploying a storage network requires many choices. Not only are there SANs and NAS to consider, but also other technologies, such as iSCSI. The choice of when to deploy a SAN, or use NAS, continues to be debated. CIOs and IT professionals must plan to ensure that all the components from multiple storage vendors will work together in a virtualization environment to enhance their existing storage infrastructures, or build new infrastructures, while keeping a sharp focus on business efficiency and business continuance. The IBM approach to solving these pervasive storage needs is to address the entire problem by simplifying deployment, use, disaster recovery, and management of storage resources.

From a TCO perspective, the initial purchase price is becoming an increasingly small part of the equation. As the cost per megabyte of disk drives continues to decrease, the client focus is shifting away from hardware towards software value-add functions, storage management software, and services. The importance of a highly reliable, high performance hardware solution, such as the IBM TotalStorage DS8000, as the guardian of mission-critical data for a business, is still a cornerstone concept. However, software is emerging as a critical element of any SAN solution. Management and virtualization software provide advanced functionality for administering distributed IT assets, maintaining high availability, and minimizing downtime.

1.3 Rise of storage virtualization


Storage virtualization techniques are becoming increasingly prevalent in the IT industry today. Storage virtualization forms one of several levels of virtualization in a storage network, and can be described as the abstraction from physical volumes of data storage to a logical level. Storage virtualization addresses the increasing complexity of managing storage, while reducing the associated costs. Its main purpose is the full exploitation of the benefits promised by a SAN. Virtualization enables data sharing, ensures higher availability, provides disaster tolerance, improves performance, allows for consolidation of resources, provides policy-based automation, and much more besides; these benefits do not automatically result from the implementation of today's SAN hardware components. Storage virtualization is possible on several levels of the storage network components, meaning that it is not limited to the disk subsystem. Virtualization separates the representation of storage to the operating system and its users from the actual physical components. This has been available, and taken for granted, in the mainframe environment for many years (such as DFSMS from IBM, and IBM's VM operating system with minidisks).

1.3.1 What is virtualization?


Storage virtualization gathers the storage into storage pools, which are independent of the actual layout of the storage (that is, the overall file system structure). Because of this independence, new disk systems can be added to a storage network, and data migrated to them, without causing disruption to applications. Since the storage is no longer controlled by individual servers, it can be used by any server as needed. In addition, it can allow capacity to be added or removed on demand without affecting the application servers. Storage virtualization will simplify storage management, which has been an escalating expense in the traditional SAN environment.

1.3.2 Types of storage virtualization


Virtualization can be implemented at the following levels:
Server level
Storage level
Fabric level


The IBM strategy is to move the intelligence out of the server, eliminating the dependency on having to implement specialized software at the server level. Removing it at the storage level decreases the dependency on implementing RAID subsystems, and alternative disks can be utilized. By implementing at a fabric level, storage control is moved into the network, which gives the opportunity for virtualization to all, and at the same time reduces complexity by providing a single view of storage. The storage network can be used to leverage all kinds of services across multiple storage devices, including virtualization. A high-level view of this is shown in Figure 1-4.

In the figure: application, database, and file system intelligence moves from individual servers and intelligent storage controllers into the storage network, through the SAN Volume Controller (block level) and a common file system, SAN File System (file level), with traditional SAN RAID controllers and disks underneath.
Figure 1-4 Intelligence moving to the network

The effective management of resources from the data center across the network increases productivity and lowers TCO. In Figure 1-4, you can see how IBM accomplishes this effective management by moving the intelligence from the storage subsystems into the storage network using the SAN Volume Controller, and moving the intelligence of the file system into the storage network using SAN File System. The IBM storage management software, represented in Figure 1-4 as hardware element management and Tivoli Storage Management (a suite of SAN and storage products), addresses administrative costs, downtime, backup and recovery, and hardware management. The SNIA model (see Figure 1-3 on page 9) distinguishes between aggregation at the block and file level.

Block aggregation or block level virtualization


The block layer in the SNIA model is responsible for providing low-level storage to higher levels. Ultimately, data is stored on native storage devices, such as disk drives, solid-state disks, and tape drives. These devices can be used directly, or the storage they provide can be aggregated into one or more block vectors to increase or decrease their size, or provide redundancy. Block aggregation or block level virtualization is used to deliver a powerful set of techniques that, when used individually or in combination, serve many purposes, such as:
- Space management, through combining or slicing-and-dicing native storage into new, aggregated block storage
- Striping, through spreading the aggregated block storage across several native storage devices
- Redundancy, through point-in-time copy and both local and remote mirroring

File aggregation or file level virtualization


The file/record layer in the SNIA model is responsible for packing items, such as files and databases, into larger entities, such as block level volumes and storage devices. File aggregation or file level virtualization is used to deliver a powerful set of techniques that, when used individually or in combination, serve many purposes. They can:
- Allow data sharing and collaboration across heterogeneous servers with high performance and full locking support
- Enhance productivity by providing centralized and simplified management through policy-based storage management automation
- Increase storage utilization by reducing the amount of duplicate data and by sharing free and temporary space across servers

1.3.3 Storage virtualization models


Storage virtualization can be broadly classified into two models:
- In-band virtualization, also referred to as symmetric virtualization
- Out-of-band virtualization, also referred to as asymmetric virtualization

Figure 1-5 shows the two storage virtualization models.

Figure 1-5 In-band and out-of-band models


In-band
In an in-band storage virtualization implementation, both data and control information flow over the same path. The IBM TotalStorage SAN Volume Controller (SVC) engine is an in-band implementation, which does not require any special software in the servers and provides caching in the network, allowing support of cheaper disk systems. See the redbook IBM TotalStorage SAN Volume Controller, SG24-6423 for further information.

Out-of-band
In an out-of-band storage virtualization implementation, the data flow is separated from the control flow. This is achieved by storing data and metadata (data about the data) in different places, and by moving all mapping and locking tables to a separate server (the Metadata server) that holds the metadata for the files. IBM TotalStorage SAN File System is an out-of-band implementation.

In an out-of-band solution, the servers (which are clients of the Metadata server) request authorization to data from the Metadata server, which grants it, handles locking, and so on. The servers can then access the data directly, without further Metadata server intervention. Separating the flow of control and data in this manner allows the data I/O to use the full bandwidth that a SAN provides, while control I/O goes over a separate network, such as TCP/IP. For many operations, the metadata controller does not even intervene; once a client has obtained access to a file, all I/O goes directly over the SAN to the storage devices.

Metadata is often referred to as data about the data; it describes the characteristics of stored user data. A Metadata server, in SAN File System, is a server that offloads the metadata processing from the data-storage environment to improve SAN performance. An instance of the Metadata server runs on each engine, and together the Metadata servers form a cluster.
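To make the out-of-band access pattern concrete, the following minimal Python sketch shows a client that asks a metadata service for a file's extent map over the IP network and then reads the blocks directly from a SAN-attached device. All names, the extent-map format, and the device path are illustrative assumptions; this is not SAN File System client code.

# Conceptual sketch of out-of-band access: control I/O to a metadata server,
# data I/O directly to SAN-attached storage. Names and paths are hypothetical.
class MetadataClient:
    """Control path: talks to the metadata server over TCP/IP."""
    def open(self, path):
        # In a real system this would be a network call returning a lease
        # plus an extent map: which device, offset, and length hold the data.
        return {"lease_seconds": 30,
                "extents": [("/dev/sdc", 4096, 1048576)]}

def read_file(path):
    grant = MetadataClient().open(path)            # control I/O (IP network)
    data = bytearray()
    for device, offset, length in grant["extents"]:
        with open(device, "rb") as lun:            # data I/O (direct to the SAN LUN)
            lun.seek(offset)
            data += lun.read(length)
    return bytes(data)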

1.4 SAN data sharing issues


The term data sharing is used somewhat loosely by users and some vendors. It is sometimes interpreted to mean the replication of files or databases to enable two or more users, or applications, to concurrently use separate copies of the data. The applications concerned may operate on different host platforms.

Data sharing may also be used to describe multiple users accessing a single copy of a file; this could be called true data sharing. In a homogeneous server environment, with appropriate application software controls, multiple servers may access a single copy of data stored on a consolidated storage subsystem.

If the attached servers are heterogeneous platforms (for example, a mix of UNIX and Windows), sharing of data between such unlike operating system environments is complex. This is due to differences in file systems, access controls, data formats, and encoding structures.

1.5 IBM TotalStorage Open Software Family


Storage and network administrators face tough challenges today. Demand for storage continues to grow, and enterprises require increasingly resilient storage infrastructures to support their on demand business needs. Compliance with legal, governmental, and other industry-specific regulations is driving new data retention requirements. The IBM TotalStorage Open Software Family is a comprehensive, flexible storage software solution that can help enterprises address these storage management challenges today.

As a first step, IBM offers infrastructure components that adhere to industry standard open interfaces for registering with management software and communicating connection and configuration information. As the second step, IBM offers automated management software components that integrate with these interfaces to collect, organize, and present information about the storage environment. The IBM TotalStorage Open Software Family includes the IBM TotalStorage SAN Volume Controller, IBM TotalStorage SAN File System, and the IBM TotalStorage Productivity Center.

1.5.1 IBM TotalStorage SAN Volume Controller


The IBM TotalStorage SAN Volume Controller (SVC) is an in-band, block-based virtualization product that minimizes the dependency on unique hardware and software, decoupling the storage functions expected in a SAN environment from the storage subsystems and managing storage resources. In a typical non-virtualized SAN, shown to the left of Figure 1-6, servers are mapped to specific devices, and the LUNs defined within the storage subsystem are directly presented to the host or hosts. With the SAN Volume Controller, servers are mapped to virtual disks, thus creating a virtualization layer.

In the figure: in SANs today, servers are mapped to specific physical disks (physical mapping); with block virtualization, servers are mapped to a virtual disk (logical mapping).
Figure 1-6 Block level virtualization


The IBM TotalStorage SAN Volume Controller is designed to provide a redundant, modular, scalable, and complete solution, as shown in Figure 1-7.

In the figure: a redundant, modular, scalable, and complete solution in front of a pool of managed disks.
Figure 1-7 IBM TotalStorage SAN Volume Controller

Each SAN Volume Controller consists of one or more pairs of engines, each pair operating as a single controller with fail-over redundancy. A large read/write cache is mirrored across the pair, and virtual volumes are shared between a pair of nodes. The pool of managed disks is controlled by a cluster of paired nodes. The SAN Volume Controller is designed to provide complete copy services for data migration and business continuity. Since these copy services operate on the virtual volumes, dramatically simpler replication configurations can be created using the SAN Volume Controller, rather than replicating each physical volume in the managed storage pool. The SAN Volume Controller improves storage administrator productivity, provides a common base for advanced functions, and provides for more efficient use of storage. The SAN Volume Controller consists of software and hardware components delivered as a packaged appliance solution in a variety of form factors. The IBM SAN Volume Controller solution can be preconfigured to the client's specification, and will be installed by an IBM customer engineer.

1.5.2 IBM TotalStorage SAN File System


The IBM TotalStorage SAN File System architecture brings the benefits of the existing mainframe system-managed storage (DFSMS) to the SAN environment. Features such as policy-based allocation, volume management, and file management have long been available on IBM mainframe systems. However, the infrastructure for such centralized, automated management has been lacking in the open systems world of Linux, Windows, and UNIX. On conventional systems, storage management is platform dependent. IBM TotalStorage SAN File System provides a single, centralized point of control to better manage files and data, and is platform independent. Centralized file and data management dramatically simplifies storage administration and lowers TCO.


SAN File System is a common file system specifically designed for storage networks. By managing file details (via the metadata controller) on the storage network instead of in individual servers, the SAN File System design moves the file system intelligence into the storage network where it can be available to all application servers. Figure 1-8 shows the file level virtualization aggregation, which provides immediate benefits: a single global namespace and a single point of management. This eliminates the need to manage files on a server by server basis. A global namespace is the ability to access any file from any client system using the same name.

In the figure: block virtualization is an important step (servers are mapped to a virtual disk, easing the administration of the physical assets); a common file system, SAN File System, extends the value (server file systems are enhanced through a common file system and single namespace).
Figure 1-8 File level virtualization

IBM TotalStorage SAN File System automates routine and error-prone tasks, such as file placement, and monitors out of space conditions. IBM TotalStorage SAN File System will allow true heterogeneous file sharing, where reads and writes on the same data can be done by different operating systems. The SAN File System Metadata server (MDS) is a server cluster attached to a SAN that communicates with the application servers to serve the metadata. Other than installing the SAN File System client on the application servers, no changes are required to applications to use SAN File System, since it emulates the syntax and behavior of local file systems.


Figure 1-9 shows the SAN File System environment.

In the figure: SAN File System clients, external NFS/CIFS clients, and the SAN File System administrative console communicate with the Metadata server cluster (2-8 servers) over the IP network, while file data flows over the SAN (Fibre Channel, or iSCSI through an FC/iSCSI gateway) to the SAN File System metadata storage and to multiple, heterogeneous user storage pools.
Figure 1-9 IBM TotalStorage SAN File System architecture

In summary, IBM TotalStorage SAN File System is a common SAN-wide file system that permits centralization of management and improved storage utilization at the file level. IBM TotalStorage SAN File System is configured in a high availability configuration with clustering for the Metadata servers, providing redundancy and fault tolerance. IBM TotalStorage SAN File System is designed to provide policy-based storage automation capabilities for provisioning and data placement, nondisruptive data migration, and a single point of management for files on a storage network.

1.5.3 Comparison of SAN Volume Controller and SAN File System


Both the IBM SAN Volume Controller and IBM SAN File System provide storage virtualization capabilities that address critical storage management issues, including:
- Optimized storage resource utilization
- Improved application availability
- Enhanced storage personnel productivity

The IBM SAN Volume Controller addresses volume related tasks that impact these requirements, including:
- Add, replace, and remove storage arrays
- Add, delete, and change LUNs
- Add capacity for applications
- Manage different storage arrays
- Manage disaster recovery tools
- Manage SAN topology
- Optimize storage performance

The IBM SAN File System addresses file related tasks that impact these same requirements. For example:
- Extend or truncate a file system
- Format a file system
- De-fragmentation
- File-level replication
- Data sharing
- Global namespace
- Data lifecycle management

A summary of SAN Volume Controller and SAN File System benefits can be seen in Figure 1-10.

In the figure: SAN Volume Controller and SAN File System provide complementary benefits to address volume and file level issues.
Figure 1-10 Summary of SAN Volume Controller and SAN File System benefits

1.5.4 IBM TotalStorage Productivity Center


The IBM TotalStorage Productivity Center is an open storage infrastructure management solution designed to help reduce the effort of managing complex storage infrastructures, to help improve storage capacity utilization, and to help improve administrative efficiency. It is designed to enable an agile storage infrastructure that can respond to on demand storage needs. The IBM TotalStorage Productivity Center is comprised of a user interface designed for ease of use, and the following components:
- TotalStorage Productivity Center for Fabric
- TotalStorage Productivity Center for Data
- TotalStorage Productivity Center for Disk
- TotalStorage Productivity Center for Replication


1.5.5 TotalStorage Productivity Center for Fabric


TotalStorage Productivity Center for Fabric is designed to build and maintain a complete, current map of your storage network. TPC for Fabric can automatically determine both the physical and logical connections in your storage network and display the information in both a topological format and a hierarchical format.

Looking outward from the SAN switch, TPC for Fabric can answer questions that help administrators validate proper configuration of your open storage network:
- What hosts are attached to your storage network, and how many HBAs does each host have?
- What firmware levels are loaded on all your HBAs?
- What firmware levels are loaded on all your SAN switches?
- How are the logical zones configured?

Looking downward from the host, TPC for Fabric answers administrator questions that arise when changes occur in the storage network that could affect host access to storage:
- Does a given host have alternate paths through the storage network?
- Do those alternate paths use alternate switches?
- If available, are those alternate paths connected to alternate controllers on the storage device?

Looking upward from the storage device, TPC for Fabric answers administrator questions that arise when changes happen in the storage network that could affect the availability of stored data:
- What hosts are connected to a given storage device?
- What hosts have access to a given storage logical unit (LUN)?

Another key function of TPC for Fabric is change validation. TPC for Fabric detects changes in the storage network, both planned and unplanned, and it can highlight those changes for administrators. Figure 1-11 on page 21 shows a sample topology view provided by TPC for Fabric.


Figure 1-11 TPC for Fabric

1.5.6 TotalStorage Productivity Center for Data


TotalStorage Productivity Center for Data is an analyzing software tool that helps storage administrators manage the content of systems from a logical perspective. TPC for Data improves the storage return on investment by:
- Delaying purchases of disks: After performing housecleaning, you can satisfy the demand for more storage from existing (now freed-up) disks. Depending on your particular situation, you may discover you have more than adequate capacity and can defer the capital expense of additional disks for a considerable time.
- Lowering the storage growth rate: Because you are now monitoring and keeping better control of your storage according to policies in place, it should grow at a lower rate than before.
- Lowering disk costs: With TPC for Data, you will know what the real quarter-to-quarter growth rates actually are, instead of approximating (best-effort basis) once per year. You can project your annual demand with a good degree of accuracy, and can negotiate an annual contract with periodic deliveries, at a price lower than you would have paid for periodic emergency purchases.
- Lowering storage management costs: The manual effort is greatly reduced as most functions, such as gathering the information and analyzing it, are automated. Automated alerts can be set up so that the administrator only needs to get involved in exceptional conditions.


Figure 1-12 shows the TPC for Data dashboard.

Figure 1-12 TPC for Data

Without TPC for Data managing your storage, it is difficult to get advance warning of out-of-space conditions on critical application servers. If an application runs out of storage on a server, it typically just stops, which means that the revenue generated by that application, or the service it provides, also stops. Fixing such unplanned outages is usually expensive. With TPC for Data, applications need not run out of storage: you will know when they need more storage, and can add it at a reasonable cost before an outage occurs. You avoid the loss of revenue and services, plus the additional costs associated with unplanned outages.

1.5.7 TotalStorage Productivity Center for Disk


TotalStorage Productivity Center for Disk is designed to enable administrators to manage storage area network (SAN) storage components based on the Storage Networking Industry Association (SNIA) Storage Management Interface Specification (SMI-S). TPC for Disk also includes the BonusPack for TPC for Fabric, bringing together device management with fabric management. This combination is designed to allow a storage administrator to configure storage devices from a single point, monitor SAN status, and provide operational support to storage devices.


Managing a virtualized SAN


In a pooled or virtualized SAN environment, multiple devices work together to create a storage solution. TPC for Disk is designed to provide integrated administration, optimization, and replication features for these virtualization solutions. It is designed to provide an integrated view of an entire SAN system to help administrators perform complex configuration tasks and productively manage the SAN infrastructure. TPC for Disk offers features that can help simplify the establishment, monitoring, and control of disaster recovery and data migration solutions, because the virtualization layers support advanced replication configurations.

TPC for Disk includes a device management function, which discovers supported devices; collects asset, configuration, and availability data from those devices; and provides a topographical view of the storage usage relationships among them. The administrator can view essential information about storage devices discovered by TPC for Disk, examine the relationships among the devices, and change their configurations. The device management function provides discovery of storage devices that adhere to the SNIA SMI-S standards. It uses the Service Location Protocol (SLP) to discover supported storage subsystems on the SAN, creates managed objects to represent these discovered devices, and displays them as individual icons in the TPC Console. Device management in TPC offers:
- Centralized access to information from storage devices
- Enhanced storage administrator productivity with integrated volume configuration
- Outstanding problem determination with cross-device configuration
- Centralized management of storage devices with browser launch capabilities

TPC for Disk also provides a performance management function: a single, integrated console for the performance management of supported storage devices. The performance management function monitors metrics such as I/O rates and cache utilization, and supports optimization of storage through the identification of the best LUNs for storage allocation. It stores received performance statistics in database tables for later use, and analyzes and generates reports on monitored devices for display in the TPC Console. The administrator can configure performance thresholds for the devices based on performance metrics, and the system can generate alerts when these thresholds are exceeded. Actions can then be configured to trigger from these events, for example, sending an e-mail or an SNMP trap. The performance management function also provides gauges (graphs) to track real-time performance; these gauges are updated when new data becomes available. The performance management function provides:
- Proactive performance management
- Performance metrics monitoring across storage subsystems from a single console
- Timely alerts to enable event action based on client policies
- Focus on storage optimization through identification of the best LUN for a storage allocation


Figure 1-13 shows the TPC main window with the performance management functions expanded.

Figure 1-13 TPC for Disk functions

1.5.8 TotalStorage Productivity Center for Replication


Data replication is a core function required for data protection and disaster recovery. TotalStorage Productivity Center for Replication (TPC for Replication) is designed to control and monitor copy services operations in storage environments. It provides advanced copy services functions for supported storage subsystems on the SAN. Today, it provides Continuous Copy and Point-in-Time Copy services; specific support is for IBM FlashCopy for ESS and PPRC (Metro Mirror) for ESS. TPC for Replication provides configuration assistance by automating the source-to-target pairing setup, as well as monitoring and tracking the replication operations.

TPC for Replication helps storage administrators keep data on multiple related volumes consistent across storage systems. It enables freeze-and-go functions to be performed with consistency on multiple pairs when errors occur during the replication (mirroring) operation. It also helps automate the mapping of source volumes to target volumes, allowing a group of source volumes to be automatically mapped to a pool of target volumes. With TPC for Replication, the administrator can:
- Keep data on multiple related volumes consistent across storage subsystems
- Perform freeze-and-go functions with consistency on multiple pairs when errors occur during a replication operation

Figure 1-14 shows the TPC main window with the Replication management functions expanded.

Figure 1-14 TPC for Replication

1.6 File system general terminology


Since SAN File System implements a SAN-based, global namespace file system, it is important here to understand some general file system concepts and terms.

1.6.1 What is a file system?


A file system is a software component that builds a logical structure for storing files on storage devices (typically disk drives). File systems hide the underlying physical organization of the storage media and present abstractions such as files and directories, which are more easily understood by users.


Generally, it appears as a hierarchical structure in which files and folders (or directories) can be stored. The top of the hierarchy of each file system is usually called root. Figure 1-15 shows an example of a Windows system hierarchical view, also commonly known as the tree or directory.

Figure 1-15 Windows system hierarchical view

A file system specifies naming conventions for naming the actual files and folders (for example, what characters are allowed in file and directory names, and whether spaces are permitted) and defines a path that represents the location where a specific file is stored. Without a file system, files would not even have names and would appear as nameless blocks of data randomly stored on a disk. However, a file system is more than just a directory tree or naming convention. Most file systems provide security features, such as privileges and access control:
- Access to files based on user/group permissions
- Access Control Lists (ACLs) to allow or deny specific actions on specific files to specific users

Figure 1-16 on page 27 and Example 1-1 on page 27 show Windows and UNIX system security and file permissions, respectively.


Figure 1-16 Windows file system security and permissions

Example 1-1 UNIX file system security and permissions
# ls -l
total 2659
-rw-------   1 root   system  31119 Sep 15 16:11 .TTauthority
-rw-------   1 root   system    196 Sep 15 16:11 .Xauthority
drwxr-xr-x  10 root   system    512 Sep 15 16:11 .dt
-rwxr-xr-x   1 root   system   3970 Apr 17 11:36 .dtprofile
-rw-------   1 root   system   3440 Sep 16 08:16 .sh_history
-rw-r--r--   1 root   system    115 May 13 14:12 .xerrors
drwxr-xr-x   2 root   system    512 Apr 17 11:36 TT_DB
-rw-r--r--   1 root   system   3802 Sep 04 09:51 WebSM.pref
-rwxrwxrwx   1 root   system   6600 May 14 08:01 aix_sdd_data_gatherer
drwxr-x---   2 root   audit     512 Apr 16 2001  audit
lrwxrwxrwx   1 bin    bin         8 Apr 17 09:35 bin -> /usr/bin
drwxr-xr-x   2 root   system    512 Apr 18 08:30 cdrom
drwxrwxr-x   5 root   system   3072 Sep 15 15:00 dev
-rw-r--r--   1 root   system    108 Sep 15 09:16 dposerv.lock
drwxr-xr-x   2 root   system    512 May 13 15:12 drom
drwxr-xr-x   2 root   system    512 May 29 13:40 essdisk1fs
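The permission and ownership metadata shown in Example 1-1 can also be inspected programmatically. The short Python sketch below uses only the standard library (os and stat) and is included purely as an illustration; it is not part of SAN File System.

import os
import stat

def show_permissions(path):
    # Print an ls -l style mode string plus the numeric owner and group IDs.
    st = os.stat(path)
    mode = stat.filemode(st.st_mode)   # for example, '-rw-r--r--'
    print(f"{mode} uid={st.st_uid} gid={st.st_gid} {path}")

show_permissions("/etc/hosts")         # any existing path can be used here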

1.6.2 File system types


File systems have a wide variety of functions and capabilities, and can be broadly classified into:
- Local file systems
- LAN file systems
- SAN file systems


Local file systems


A local file system is tightly integrated with the operating system, and is therefore usually specific to that operating system. A local file system provides services to the system where it is installed. All data and metadata are served over the system's internal I/O path. Some examples of local file systems are Windows NTFS, DOS FAT, Linux ext3, and AIX JFS.

LAN file systems


LAN file systems allow computers attached via a LAN to share data. They use the LAN for both data and metadata. Some LAN file systems also implement a global namespace, like AFS. Examples of LAN file systems are Network File System (NFS), Andrew File System (AFS), Distributed File System (DFS), and Common Internet File System (CIFS).

Network file sharing appliances


A special case of a LAN file system is a specialized file serving appliance, such as the IBM N3700 and similar from other vendors. These provide CIFS and NFS file serving capabilities using both LAN and iSCSI protocols.

SAN file systems


SAN file systems allow computers attached via a SAN to share data. They typically separate the actual file data from the metadata, using the LAN path to serve the metadata, and the SAN path for the file data. The IBM TotalStorage SAN File System is a SAN File System. Figure 1-17 shows the different file system types.

In the figure: local file systems are an integral part of the OS (NTFS, FAT, JFS); LAN file systems use the LAN for data and metadata (NFS, AFS, DFS, CIFS); SAN file systems use the SAN for data and the LAN for metadata (SAN File System). In the LAN and SAN panels, users Leo, Iva, and Lou share files through file servers A and B, or through the Metadata server and a virtualized storage subsystem.
Figure 1-17 File system types

1.6.3 Selecting a file system


The factors that determine which type of file system is most appropriate for an application or business requirement include:
- Volume of data being processed
- Type of data being processed
- Patterns of data access
- Availability requirements
- Applications involved
- Types of computers requiring access to the file system

LAN file systems are designed to provide data access over the IP network. Two of the most common protocols are Network File System (NFS) and Common Internet File System (CIFS). Typically, NFS is used for UNIX servers and CIFS is used for Windows servers. Tools exist to allow Windows servers to support NFS access and UNIX/Linux servers to support CIFS access, which enable these different operating systems to work with each other's files.

Local file system limitations surface when business requirements mandate a rapid increase in data storage or sharing of data among servers. Issues may include:
- Separate islands of storage on each host. Because local file systems are integrated with the server's operating system, each file system must be managed and configured separately. In situations where two or more file system types are in use (for example, Windows and Sun servers), operators require training and skills in each of these operating systems to complete even common tasks, such as adding additional storage capacity.
- No file sharing between hosts.
- Inherently difficult to manage.

LAN file systems can address some of the limitations of local file systems by adding the ability to share among homogeneous systems. In addition, there are some distributed file systems that can take advantage of both network-attached and SAN-attached disk. Some restrictions of LAN file systems include:
- In-band cluster architectures are inherently more difficult to scale than out-of-band SAN file system architectures. Performance is impacted as these solutions grow.
- Homogeneous file sharing only. There is no (or limited) ability to provide file locking and security between mixed operating systems.
- Each new cluster creates an island of storage to manage. As the number of islands grows, issues similar to those with local file systems tend to increase.
- File-level, policy-based placement is inherently more difficult.
- Clients still use NFS/CIFS protocols, with the inherent limitations of those protocols (security, locking, and so on).
- File system and storage resources are not scalable beyond a single NAS appliance.
- A NAS appliance must handle blocks for non-SAN attached clients.

SAN file systems address the limitations of local and network file systems. They enable 24x7 availability, support increasing rates of change to the environment, and reduce management cost. The IBM SAN File System offers these advantages:
- A single global view of the file system. This enables tremendous flexibility to increase or decrease the amount of storage available to any particular server, as well as full file sharing (including locking) between heterogeneous servers.
- The Metadata server processes only metadata operations. All data I/O occurs at SAN speeds.
- Linear scalability of the global file system can be achieved by adding Metadata server nodes.
- Advanced, centralized, file-granular, and policy-based management.
- Automated lifecycle management of data can take full advantage of tiered storage.
- Nondisruptive management of physical assets provides the ability to add, delete, and change the disk subsystem without disruption to the application servers.


1.7 Filesets and the global namespace


A key concept for SAN File System is the global namespace. Traditional file systems and file sharing systems operate separate namespaces, that is, each file is tied or mapped to the server which hosts it, and the clients must know which server this is. For example, in Figure 1-17 on page 28, in a LAN file system, user Iva has files stored both on File Server A and File Server B. She would need to specify the particular file server in the access path for each file. SAN File System, by contrast, presents a global namespace: there is one file structure (subdivided into parts called filesets), which is available simultaneously to all the clients. This is shown in Figure 1-18.
In the figure: a single ROOT with fileset 1 through fileset 6 attached beneath it.
Figure 1-18 Global namespace
Filesets are subsets of the global namespace. To the clients, the filesets appear as normal directories, where they can create their own subdirectories, place files, and so on. But from the SAN File System server perspective, the fileset is the building-block of the global namespace structure, which can only be created and deleted by SAN File System administrators. Filesets represent units of workload for metadata; therefore, by dividing the files into filesets, you can split the task of serving the metadata for the files across multiple servers. There are other implications of filesets; we will discuss them further in Chapter 2, SAN File System overview on page 33.
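To illustrate what a single global namespace means in practice, the short Python sketch below composes the same fileset-relative file name under two hypothetical client attach points (a UNIX mount point and a Windows drive letter). The attach points and path are assumptions for illustration only; the point is that every client refers to the file by the same name within the global namespace.

import ntpath
import posixpath

GLOBAL_NAME = "fileset2/projects/report.doc"       # same name seen by every client

unix_view = posixpath.join("/sfs/sanfs", GLOBAL_NAME)
windows_view = ntpath.join("S:\\", GLOBAL_NAME.replace("/", "\\"))

print(unix_view)      # /sfs/sanfs/fileset2/projects/report.doc
print(windows_view)   # S:\fileset2\projects\report.doc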

1.8 Value statement of IBM TotalStorage SAN File System


As the data stored in the open systems environment continues to grow, new paradigms for the attachment and management of data and the underlying storage of the data are emerging. One of the most commonly used technologies in this area is the Storage Area Network (SAN). Using a SAN to connect large amounts of storage to large numbers of computers gives us the potential for new approaches to accessing, sharing, and managing our data and storage. However, existing operating systems and file systems are not built to exploit these new capabilities.

IBM TotalStorage SAN File System is a SAN-based distributed file system and storage management solution that enables many of the promises of SANs, including shared heterogeneous file access, centralized management, and enterprise-wide scalability. In addition, SAN File System leverages the policy-based storage and data management concepts found in mainframe computers and makes them available in the open systems environment. IBM TotalStorage SAN File System can provide an effective solution for clients with a small number of computers and small amounts of data, and it can scale up to support clients with thousands of computers and petabytes of data. IBM TotalStorage SAN File System is a member of the IBM TotalStorage Virtualization Family of solutions.

The SAN File System has been designed as a network-based heterogeneous file system for file aggregation and data sharing in an open environment. As a network-based heterogeneous file system, it provides:
- High performance data sharing for heterogeneous servers accessing SAN-attached storage in an open environment.
- A common file system for UNIX and Windows servers with a single global namespace to facilitate data sharing across servers.
- A highly scalable out-of-band solution (see 1.3.3, Storage virtualization models on page 13) supporting both very large files and very large numbers of files, without the limitations normally associated with NFS or CIFS implementations.

IBM TotalStorage SAN File System is a leading edge solution that is designed to:
- Lower the cost of storage management
- Enhance productivity by providing centralized and simplified management through policy-based storage management automation
- Improve storage utilization by reducing the amount of duplicate data and by sharing free and temporary space across servers
- Improve application availability
- Simplify and lower the cost of data backups through application server-free backup and built-in, file-based FlashCopy images
- Allow data sharing and collaboration across servers with high performance and full locking support
- Eliminate data migration during application server consolidation
- Provide a scalable and secure infrastructure for storage and data on demand

The IBM TotalStorage SAN File System solution includes a Common Information Model (CIM) Agent, supporting storage management by products based on open standards, for units that comply with the open standards of the Storage Networking Industry Association (SNIA) Common Information Model.


Chapter 2. SAN File System overview


In this chapter, we provide an overview of SAN File System Version 2.2.2, including these topics:
- Architecture
- SAN File System Version 2.2, V2.2.1, and V2.2.2 enhancements overview
- Components: hardware and software, supported storage, and clients
- Concepts: global namespace, filesets, and storage pools
- Supported storage devices
- Supported clients
- Summary of major features:
  - Direct data access
  - Global namespace (scalability for growth)
  - File sharing
  - Policy-based automatic placement
  - Lifecycle management


2.1 SAN File System product overview


The IBM TotalStorage SAN File System is designed on industry standards so it can:
- Allow data sharing and collaboration across servers over the SAN with high performance and full file locking support, using a single global namespace for the data.
- Provide more effective storage utilization by reducing the amount of duplicate data and by sharing free and temporary space across servers.
- Improve productivity and reduce the pain for IT storage and server management staff by centralizing and simplifying management through policy-based storage management automation, thus significantly lowering the cost of storage management.
- Facilitate application server and storage consolidation across the enterprise to scale the infrastructure for storage and data on demand.
- Simplify and lower the cost of data backups through the built-in, file-based FlashCopy image function.
- Eliminate data migration during application server consolidation, and also reduce application downtime and failover costs.

SAN File System is a multiplatform, robust, scalable, and highly available file system, and is a storage management solution that works with Storage Area Networks (SANs). It uses SAN technology, which allows an enterprise to connect a large number of computers and share a large number of storage devices via a high-performance network. With SAN File System, heterogeneous clients can access shared data directly from large, high-performance, high-function storage systems, such as IBM TotalStorage Enterprise Storage Server (ESS), IBM TotalStorage SAN Volume Controller (SVC), and IBM TotalStorage DS4000 (formerly IBM TotalStorage FAStT), as well as non-IBM storage devices. The SAN File System is built on a Fibre Channel network, and is designed to provide superior I/O performance for data sharing among heterogeneous computers.

SAN File System differs from conventional distributed file systems in that it uses a data-access model that separates file metadata (information about the files, such as owner, permissions, and the physical file location) from actual file data (contents of the files). The metadata is provided to clients by MDSs; the clients communicate with the MDSs only to get the information they need to locate and access the files. Once they have this information, the SAN File System clients access data directly from storage devices via the clients' own direct connection to the SAN. Direct data access eliminates server bottlenecks and provides the performance necessary for data-intensive applications.

SAN File System presents a single, global namespace to clients, where they can create and share data using uniform file names from any client or application. Furthermore, data consistency and integrity are maintained through SAN File System's management of distributed locks and the use of leases.

SAN File System also provides automatic file placement through the use of policies and rules. Based on rules specified in a centrally-defined and managed policy, SAN File System automatically stores data on devices in storage pools that are specifically created to provide the capabilities and performance appropriate for how the data is accessed and used.
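To make the idea of placement rules concrete, the following Python sketch shows the kind of evaluation a placement policy implies: each new file is matched against an ordered set of rules, and the first matching rule selects the storage pool that receives the file. The rule format, patterns, and pool names below are illustrative assumptions only; they are not the SAN File System policy language.

from fnmatch import fnmatch

# Ordered placement rules: (file name pattern, fileset pattern, target pool).
PLACEMENT_RULES = [
    ("*.db2",  "finance*", "high_performance_pool"),   # database files to fast disks
    ("*.mpeg", "*",        "bulk_capacity_pool"),       # large media to cheaper disks
    ("*",      "*",        "default_pool"),             # everything else
]

def choose_pool(file_name, fileset):
    for name_pattern, fileset_pattern, pool in PLACEMENT_RULES:
        if fnmatch(file_name, name_pattern) and fnmatch(fileset, fileset_pattern):
            return pool
    return "default_pool"

print(choose_pool("orders.db2", "finance"))    # high_performance_pool
print(choose_pool("intro.mpeg", "marketing"))  # bulk_capacity_pool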


2.2 SAN File System V2.2 enhancements overview


In addition to the benefits listed above, enhancements in SAN File System V2.2 include:
- Support for SAN File System clients on AIX 5L V5.3, SUSE Linux Enterprise Server 8 SP4, Red Hat Enterprise Linux 3, Windows 2000/2003, and Solaris 9
- Support for iSCSI attached clients and iSCSI attached user data storage
- Support for IBM storage and select non-IBM storage, and multiple types of storage concurrently, for user data storage
- Support for an unlimited amount of storage for user data
- Support for multiple SAN storage zones for enhanced security and more flexible device support
- Support for policy-based movement of files between storage pools
- Support for policy-based deletion of files
- Ability to move or defragment individual files
- Improved heterogeneous file sharing with cross-platform user authentication and security permissions between Windows and UNIX environments
- Ability to export the SAN File System global namespace using Samba 3.0 on the following SAN File System clients: AIX 5L V5.2 and V5.3 (32- and 64-bit), Red Hat EL 3.0, SUSE Linux Enterprise Server 8.0, and Sun Solaris 9
- Improved globalization support, including Unicode fileset attach point names and Unicode file name patterns in policy rules

2.3 SAN File System V2.2.1 and V2.2.2 enhancements overview


Enhancements in SAN File System V2.2.1 and V2.2.2 include:
- MDS support for SLES9 as well as SLES8. Clients who remain with SLES8 will need to upgrade to SLES8 SP4.
- Support for xSeries 365 as a Metadata server.
- Support for new IBM disk hardware: IBM TotalStorage DS6000 and IBM TotalStorage DS8000.
- Redundant Ethernet support on the MDSs (Linux Ethernet bonding).
- Improved installation: a new loadcluster function automatically installs the SAN File System software and its prerequisites across the entire cluster from one MDS.
- Preallocation policies to improve performance when writing large new files.
- MDSs support (and require) a TCP/IP interface to RSA cards.
- Support for the SAN File System client on zSeries Linux SLES8 and pSeries Linux SLES8.
- Microsoft Cluster support for SAN File System clients on Windows 2000 and Windows 2003.
- Local user authentication option: LDAP is no longer required for the authentication of administrative users.
- Virtual I/O device support on AIX.
- Support for POSIX direct I/O file system interface calls on Intel 32-bit Linux.
- Japanese translation of administrator interfaces: GUI and CLI at the V2.2 level.


2.4 SAN File System architecture


SAN File System architecture and components are illustrated in Figure 2-1. Computers that want to share data and have their storage centrally managed are all connected to the SAN. In SAN File System terms, these are known as clients, since they access SAN File System services, although in the enterprise context, they would most likely be, for example, database servers, application servers, or file servers.
In the figure: SAN File System clients, external NFS/CIFS clients, and the SAN File System administrative console communicate with the Metadata server cluster (2-8 servers) over the IP network, while file data flows over the SAN (Fibre Channel, or iSCSI through an FC/iSCSI gateway) to the SAN File System metadata storage and to multiple, heterogeneous user storage pools.
Figure 2-1 SAN File System architecture

In Figure 2-1, we show five such clients, each running a currently supported SAN File System client operating system. The SAN File System client software enables them to access the global namespace through a virtual file system (VFS) on UNIX/Linux systems and an installable file system (IFS) on Windows systems. This layer (VFS/IFS) is provided by the operating system vendors specifically to support special-purpose or newer file systems.

There are also dedicated computers, called Metadata server (MDS) engines, that run the Metadata server software, as shown in the left side of the figure. The MDSs manage the file system metadata (including file creation time, file security information, file location information, and so on), but the user data accessed over the SAN by the clients does not pass through an MDS. This eliminates the performance bottleneck from which many existing shared file system approaches suffer, giving near-local file system performance.

MDSs are clustered for scalability and availability of metadata operations, and are often referred to as the MDS cluster. In a SAN File System server cluster, there is one master MDS and one or more subordinate MDSs. Each MDS runs on a separate physical engine in the cluster. Additional MDSs can be added as required if the workload grows, providing solution scalability.

Storage volumes that store the SAN File System clients' user data (User Pools) are separated from storage volumes that store metadata (System Pool), as shown in Figure 2-1.


The Administrative server allows SAN File System to be remotely monitored and controlled through a Web-based user interface called the SAN File System console. The Administrative server also processes requests issued from an administrative command line interface (CLI), which can also be accessed remotely. This means the SAN File System can be administered from almost any system with suitable TCP/IP connectivity. The Administrative server can use local authentication (standard Linux user IDs and groups) to look up authentication and authorization information about the administrative users. Alternatively, an LDAP server (client supplied) can be used for authentication. The primary Administrative server runs on the same engine as the master MDS. It receives all requests issued by administrators and also communicates with Administrative servers that run on each additional server in the cluster to perform routine requests.

2.5 SAN File System hardware and software prerequisites


The SAN File System is delivered as a software only package. SAN File System software requires the following hardware and software to be supplied and installed on each MDS in advance, by the customer. SAN File System also includes software for an optional Master Console; if used, then the customer must also provide the prerequisite hardware and software for this, as described in 2.5.2, Master Console hardware and software on page 38.

2.5.1 Metadata server


SAN File System V2.2.2 supports from two to eight Metadata servers (MDS) running on hardware that must be supplied by the client. The hardware servers that run the MDSs are generically known as engines. Each engine must be a rack-mounted, high-performance, and highly reliable Intel server. The engine can be a SAN File System Metadata Server engine (4146 Model 1RX), an IBM eServer xSeries 345 server, an IBM eServer xSeries 346 server, an IBM eServer xSeries 365 server, or an equivalent server with the hardware components listed below. SAN File System V2.2 supports a cluster of MDSs consisting of a mix of 4146-1RX engines, IBM eServer xSeries servers, and equivalents.

If not using the IBM eServer xSeries 345, 346, or 365, or the 4146 Model 1RX, the following hardware components are required for each MDS:
- Two processors of minimum 3 GHz each.
- A minimum of 4 GB of system memory.
- Two internal hard disk drives with mirroring capability, minimum 36 GB each. These are used to install the MDS operating system, and should be set up in a mirrored (RAID 1) configuration.
- Two power supplies (optional, but highly recommended for redundancy).
- A minimum of one 10/100/1000 Mbps Ethernet port (Fibre or Copper); however, two Ethernet connections are recommended to take advantage of high-availability capabilities with Ethernet bonding.
- Two 2 Gb Fibre Channel host bus adapter (HBA) ports. These must be compatible with the SUSE operating system and the storage subsystems in your SAN environment. They must also be capable of running the QLogic 2342 device driver. Suggested adapters: QLogic 2342 or IBM part number 24P0960.
- CD-ROM and diskette drives.


- Remote Supervisor Adapter II card (RSA II). This must be compatible with the SUSE operating system. Suggested cards: IBM part number 59P2984 for the x345, or 73P9341 (IBM Remote Supervisor Adapter II SlimLine) for the x346.
- Certified for SUSE Linux Enterprise Server 8, with Service Pack 4 (kernel level 2.4.21-278), or SUSE Linux Enterprise Server 9, Service Pack 1, with kernel level 2.6.5-7.151.

Each MDS must have the following software installed:
- SUSE Linux Enterprise Server 8, Service Pack 4, kernel level 2.4.21-278, or SUSE Linux Enterprise Server 9, Service Pack 1, kernel level 2.6.5-7.151.
- A multi-pathing driver for the storage device used for the metadata LUNs. At the time of writing, if using DS4000 family storage for metadata LUNs, either RDAC V9.00.A5.09 (SLES8) or RDAC V9.00.B5.04 (SLES9) is required. If using other IBM storage for metadata LUNs (ESS, SVC, DS6000, or DS8000), SDD V1.6.0.1-6 is required. These levels will change over time; always check the release notes distributed with the product CD, as well as the SAN File System support Web site, for the latest supported device driver level. More information about the multi-pathing drivers can be found in 4.4, Subsystem Device Driver on page 109 and 4.5, Redundant Disk Array Controller (RDAC) on page 119.

2.5.2 Master Console hardware and software


The SAN File System V2.2.2 Master Console is an optional component of a SAN File System configuration, for use as a control point. If deployed, it requires a client-supplied, high-performance, and highly reliable rack-mounted Intel Pentium 4 processor server. This can be an IBM eServer xSeries 305 server, a SAN File System V1.1 or V2.1 Master Console, 4146-T30 feature #4001, a SAN Volume Controller Master Console, or an equivalent Intel server with the following capabilities:
- At least 2.6 GHz processor speed
- At least 1 GB of system memory
- Two 40 GB IDE hard disk drives
- CD-ROM drive
- Diskette drive
- Two 10/100/1000 Mb ports for Ethernet connectivity (Copper or Fiber)
- Two Fibre Channel host bus adapter (HBA) ports
- Monitor and keyboard: IBM Netbay 1U Flat Panel Monitor Console Kit with keyboard, or equivalent

If a SAN Volume Controller Master Console is already available, it can be shared with SAN File System, since it meets the hardware requirements.

The Master Console, if deployed, must have the following software installed:
- Microsoft Windows 2000 Server Edition with Service Pack 4 or higher, Microsoft Windows Professional with Update 818043, Windows 2003 Enterprise Edition, or Windows 2003 Standard Edition
- Microsoft Internet Explorer Version 6.0 (SP1 or later)
- Sun Java Version 1.4.2 or higher


- Antivirus software is recommended.

Additional software for the Master Console is shipped with the SAN File System software package, as described in 2.5.6, Master Console on page 45.

2.5.3 SAN File System software


SAN File System software (5765-FS2) is the required licensed software for SAN File System. This includes the SAN File System code itself and the client software packages to be installed on the appropriate servers, which will gain access to the SAN File System global namespace. These servers are then known as SAN File System clients. The SAN File System software bundle consists of three components:
- Software that runs on each SAN File System MDS
- Software that runs on your application servers, called the SAN File System client software
- Optional software that is installed on the Master Console, if used

2.5.4 Supported storage for SAN File System


SAN-attached storage is required for both metadata volumes as well as user volumes. Supported storage subsystems for metadata volumes (at the time of writing) are listed in Table 2-1.
Table 2-1 Storage subsystem platforms supported for metadata LUNs

Storage platform         Models supported                         Driver and microcode                              Mixed operating system access?
ESS                      2105-F20, 2105-750, 2105-800             SDD v1.6.0.1-6                                    Yes
DS4000 / FAStT           4100/100, 4300/600, 4400/700, 4500/900   RDAC v09.00.x for the Linux v2.4 or v2.6 kernel   No
                         (that is, all except the DS4800)
DS6000                   All                                      SDD v1.6.0.1-6                                    Yes
DS8000                   All                                      SDD v1.6.0.1-6                                    Yes
SVC (SLES8 only)         2145 v2.1.x                              SDD v1.6.0.1-6                                    Yes
SVC for Cisco MDS 9000   v1.1.8                                   SDD v1.6.0.1-6                                    Yes

Note this information can change at any time; the latest information about specific supported storage, including device driver levels and microcode, is at this Web site. Please check it before starting your SAN File System installation:
http://www.ibm.com/storage/support/sanfs

Metadata volume considerations


Metadata volumes should be configured using RAID, with a low ratio of data to parity disks. Hot spares should also be available, to minimize the amount of time to recover from a single disk failure.


User volumes
SAN File System can be configured with any SAN storage device for the user data storage, providing it is supported by the operating systems running the SAN File System client (including having a compatible HBA) and that it conforms to the SCSI standard for unique device identification. SAN File System also supports storage devices for user data storage attached through iSCSI. The iSCSI attached storage devices must conform to the SCSI standard for unique device identification and must be supported by the SAN File System client operating systems. Consult your storage system's documentation or the vendor to see if it meets these requirements.

Note: Only IBM storage subsystems are supported for the system (metadata) storage pool.

SAN File System supports an unlimited number of LUNs for user data storage. The amount of user data storage that you can have in your environment is determined by the amount of storage that is supported by the storage subsystems and the client operating systems. In the following sections, the SAN File System hardware and logical components are described in detail.

2.5.5 SAN File System engines


Within SAN File System, an engine is the physical hardware on which an MDS and an Administrative server run. SAN File System supports any number from two to eight engines. Increasing the number of engines increases metadata traffic capacity and can provide higher availability to the configuration.

Note: Although you cannot configure an initial SAN File System with only one engine, you can run a single-engine system if all of the other engines fail (for example, if you have only two engines and one of them fails), or if you want to bring down all of the engines except one before performing scheduled maintenance tasks. Performance would obviously be impacted in this case, but these scenarios are supported and workable on a temporary basis.

The administrative infrastructure on each engine allows an administrator to monitor and control SAN File System from a standard Web browser or an administrative command line interface. The two major components of the infrastructure are an Administrative agent, which provides access to administrative operations, and a Web server that is bundled with the console services and servlets that render HTML for the administrative browsers. The infrastructure also includes a Service Location Protocol (SLP) daemon, which is used for administrative discovery of SAN File System resources by third-party Common Information Model (CIM) agents.

An administrator can use the SAN File System Console, which is the browser-based user interface, or administrative commands (CLI) to monitor and control an engine from anywhere with a TCP/IP connection to the cluster. This is in contrast to the SAN Volume Controller Console, which uses the Master Console for administrative functions.

Metadata server
A Metadata server (MDS) is a software server that runs on a SAN File System engine and performs metadata, administrative, and storage management services. In a SAN File System server cluster, there is one master MDS and one or more subordinate MDSs, each running on a separate engine in the cluster. Together, these MDSs provide clients with shared, coherent access to the SAN File System global namespace.

All of the servers, including the master MDS, share the workload of the SAN File System global namespace. Each is responsible for providing metadata and locks to clients for the filesets that are hosted by that MDS. Each MDS knows which filesets are hosted by each particular MDS, and when contacted by a client, can direct the client to the appropriate MDS. The MDSs manage distributed locks to ensure the integrity of all of the data within the global namespace.

Note: Filesets are subsets of the entire global namespace and serve to organize the namespace for all the clients. A fileset serves as the unit of workload for the MDS; each MDS is assigned a workload of some of the filesets. From a client perspective, a fileset appears as a regular directory or folder, in which the clients can create their own regular directories and files. Clients, however, cannot delete or rename the directories at which filesets are attached.

In addition to providing metadata to clients and managing locks, MDSs perform a wide variety of other tasks. They process requests issued by administrators to create and manage filesets, storage pools, volumes, and policies; they enforce the policies defined by administrators to place files in appropriate storage pools; and they send alerts when any thresholds established for filesets and storage pools are exceeded.

Performing metadata services


There are two types of metadata:

- File metadata: This is information needed by the clients in order to access files directly from storage devices on a Storage Area Network. File metadata includes permissions, owner and group, access time, creation time, and other file characteristics, as well as the location of the file on the storage.
- System metadata: This is metadata used by the system itself. System metadata includes information about filesets, storage pools, volumes, and policies. The MDSs perform the reads and writes required to create, distribute, and manage this information.
The metadata is stored and managed in a separate system storage pool that is only accessible by the MDSs in the server cluster.

Distributing locks to clients involves the following operations:
- Issuing leases that determine the length of time that a server guarantees the locks it grants to clients.
- Granting locks to clients that allow them shared or exclusive access to files or parts of files. These locks are semi-preemptible, which means that if a client does not contact the server within the lease period, the server can steal the client's locks and grant them to other clients if requested; otherwise, the client can reassert its locks (get its locks back) when it makes contact again, thereby re-establishing the connection.
- Providing a grace period during which a client can reassert its locks before other clients can obtain new locks, if the server itself goes down and then comes back online.


Performing administrative services


An MDS processes the requests from administrators (issued from the SAN File System Console or CLI) to perform the following types of tasks:
- Create and manage filesets, which are subsets of the entire global namespace and serve as the units of workload assigned to specific MDSs.
- Create and manage volumes, which are LUNs labeled for SAN File System's use in storage pools.
- Create and maintain storage pools (for example, an administrator can create a storage pool that consists of RAID or striped storage devices to meet reliability requirements, and can create a storage pool that consists of random or sequential access or low-latency storage devices to meet high performance requirements).
- Manually move files between storage pools, and defragment files in storage pools.
- Create FlashCopy images of filesets in the global namespace that can be used to make file-based backups easier to perform.
- Define policies containing rules for placement of files in storage pools.
- Define policies that control the automatic background movement of files among storage pools and the background deletion of files.

Performing storage management services


An MDS performs these storage management services:
- Manages allocation of blocks of space for files in storage pool volumes.
- Maintains pointers to the data blocks of a file.
- Evaluates the rules in the active policy and manages the placement of files in specific storage pools based on those rules.
- Issues alerts when filesets and storage pools reach or exceed their administrator-specified thresholds, or returns out-of-space messages if they run out of space.

Administrative server
Figure 2-2 on page 43 shows the overall administrative interface structure of SAN File System.


Figure 2-2 SAN File System administrative structure

The SAN File System Administrative server, which is based on a Web server software platform, is made up of two parts: the GUI Web server and the Administrative Agent.


The GUI Web server is the part of the administrative infrastructure that interacts with the SAN File System MDSs and renders the Web pages that make up the SAN File System Console. The Console is a Web-based user interface that is accessed through a browser, either Internet Explorer or Netscape. Figure 2-3 shows the GUI browser interface for the SAN File System.

Figure 2-3 SAN File System GUI browser interface

The Administrative Agent implements all of the management logic for the GUI, CLI, and CIM interfaces, as well as performing administrative authorization and authentication against the LDAP server. The Administrative Agent processes all management requests initiated by an administrator from the SAN File System Console, as well as requests initiated from the SAN File System administrative CLI, which is called sfscli. The Agent communicates with the MDS, the operating system, the Remote Supervisor Adapter (RSA II) card in the engine, the LDAP server, and the Administrative Agents on other engines in the cluster when processing requests. Example 2-1 shows all the commands available with sfscli.
Example 2-1 The sfscli commands for V2.2.2
itso3@tank-mds3:/usr/tank/admin/bin> ./sfscli
sfscli> help
activatevol         lsadmuser         mkvol              setfilesetserver
addprivclient       lsautorestart     mvfile             setoutput
addserver           lsclient          quiescecluster     settrap
addsnmpmgr          lsdomain          quit               startautorestart
attachfileset       lsdrfile          rediscoverluns     startcluster
autofilesetserver   lsfileset         refreshusermap     startmetadatacheck
builddrscript       lsimage           reportclient       startserver
catlog              lslun             reportfilesetuse   statcluster
catpolicy           lspolicy          reportvolfiles     statfile
chclusterconfig     lspool            resetadmuser       statfileset
chdomain            lsproc            resumecluster      statldap
chfileset           lsserver          reverttoimage      statpolicy
chldapconfig        lssnmpmgr         rmdomain           statserver
chpool              lstrapsetting     rmdrfile           stopautorestart
chvol               lsusermap         rmfileset          stopcluster
clearlog            lsvol             rmimage            stopmetadatacheck
collectdiag         mkdomain          rmpolicy           stopserver
detachfileset       mkdrfile          rmpool             suspendvol
disabledefaultpool  mkfileset         rmprivclient       upgradecluster
dropserver          mkimage           rmsnmpmgr          usepolicy
exit                mkpolicy          rmusermap
expandvol           mkpool            rmvol
help                mkusermap         setdefaultpool
sfscli>
itso3@tank-mds3:/usr/tank/admin/bin>
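The sfscli CLI can be invoked in two ways: interactively, as shown in Example 2-1, or by passing a single command directly on the invocation line. The following sketch illustrates both styles using the lsserver command from the list above (command output is omitted here; the full syntax and options of each command are covered in the administration chapters):

itso3@tank-mds3:/usr/tank/admin/bin> ./sfscli lsserver
itso3@tank-mds3:/usr/tank/admin/bin> ./sfscli
sfscli> lsserver
sfscli> exit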

An Administrative server interacts with a SAN File System MDS through an intermediary, called the Common Information Model (CIM) agent. When a user issues a request, the CIM agent checks with an LDAP server, which must be installed in the environment, to authenticate the user ID and password and to verify whether the user has the authority (is assigned the appropriate role) to issue a particular request. After authenticating the user, the CIM agent interacts with the MDS on behalf of that user to process the request. This same system of authentication and interaction is also available to third-party CIM clients to manage SAN File System.

2.5.6 Master Console


The Master Console software is designed to provide a unified point of service for the entire SAN File System cluster, simplifying service to the MDSs. It makes a Virtual Private Network (VPN) connection readily available that you can initiate and monitor to enable hands-on access by remote IBM support personnel. It also provides a common point of residence for IBM TotalStorage TPC for Fabric, IBM Director, and other tools associated with the capabilities just described, and can act as a central repository for diagnostic data.

It is optional (that is, not required) to install a Master Console in a SAN File System configuration. If deployed, the Master Console hardware is customer-supplied and must meet the specifications listed in 2.5.2, Master Console hardware and software on page 38. The Master Console supported by SAN File System is the same as that used for the IBM TotalStorage SAN Volume Controller (SVC) and IBM TotalStorage SAN Integration Server (SIS), so if there is already one in the client environment, it can be shared with the SAN File System.

The Master Console software package includes the following software, which must be installed on it, if deployed:
- Adobe Acrobat Reader
- DB2
- DS4000 Storage Manager Client
- IBM Director
- PuTTY
- SAN Volume Controller Console
- Tivoli Storage Area Network Manager
- IBM VPN Connection Manager

From the Master Console, the user can access the following components:
- SAN File System Console, through a Web browser.
- Administrative command-line interface, through a Secure Shell (SSH) session.
- Any of the engines in the SAN File System cluster, through an SSH session.
- The RSA II card for any of the engines in the SAN File System cluster, through a Web browser. In addition, the user can use the RSA II Web interface to establish a remote console to the engine, allowing the user to view the engine desktop from the Master Console.
- Any of the SAN File System clients, through an SSH session, a telnet session, or a remote display emulation package, depending on the configuration of the client.

Remote access
Remote Access support is the ability for IBM support personnel who are not located on a user's premises to assist an administrator or a local field engineer in diagnosing and repairing failures on a SAN File System engine. Remote Access support can help to greatly reduce service costs and shorten repair times, which in turn reduces the impact of any SAN File System failures on business.

Remote Access provides a support engineer with full access to the SAN File System console, after a request initiated by the customer. The access is via a secure VPN connection, using IBM VPN Connection Manager. This allows the support engineer to query and control the SAN File System MDS and to access metadata, log, dump, and configuration data, using the CLI. While the support engineer is accessing the SAN File System, the customer is able to monitor the engineer's progress via the Master Console display.

2.5.7 Global namespace


In most file systems, a typical file hierarchy is represented as a series of folders or directories that form a tree-like structure. Each folder or directory could contain many other folders or directories, file objects, or other file system objects, such as symbolic links or hard links. Every file system object has a name associated with it, and it is represented in the namespace as a node of the tree.

SAN File System introduces a new file system object, called a fileset. A fileset can be viewed as a portion of the tree-structured hierarchy (or global namespace). It is created to divide the global namespace into a logical, organized structure. Filesets attach to other directories in the hierarchy, ultimately attaching through the hierarchy to the root of the SAN File System cluster mount point. The collection of filesets and their content in SAN File System combine to form the global namespace.

Fileset boundaries are not visible to the clients; only a SAN File System administrator can see them. From a client's perspective, a fileset appears as a regular directory or folder within which the clients can create their own regular directories and files. Clients, however, cannot delete or rename the directories to which filesets are attached.

The global namespace is the key to the SAN File System. It allows common access to all files and directories by all clients if required, and ensures that the SAN File System clients have both consistent access and a consistent view of the data and files managed by SAN File System. This reduces the need to store and manage duplicate copies of data, and simplifies the backup process. Of course, security mechanisms, such as permissions and ACLs, will restrict visibility of files and directories. In addition, access to specific storage pools and filesets can be restricted by the use of non-uniform SAN File System configurations, as described in 3.3.2, Non-uniform SAN File System configuration on page 69.

How the global namespace is organized


The global namespace is organized into filesets, and each fileset is potentially available to the client-accessible global namespace at its attach point. An administrator is responsible for creating filesets and attaching them to directories in the global namespace, which can be done at multiple levels. Figure 2-4 on page 47 shows a sample global namespace. An attach point appears to a SAN File System client as a directory in which it can create files and folders (permissions permitting). From the MDS perspective, the filesets allow the metadata workload to be split between all the servers in the cluster.

Note: Filesets can be organized in any way desired, to reflect enterprise needs.

Figure 2-4 Global namespace

For example, the root fileset (for example, ROOT) is attached to the root level in the namespace hierarchy (for example, sanfs), and the other filesets are attached below it (that is, HR, Finance, CRM, and Manufacturing). The client would simply see four subdirectories under the root directory of the SAN File System. By defining the path of a fileset's attach point, the administrator also automatically defines its nesting level in relationship to the other filesets.

2.5.8 Filesets
A fileset is a subset of the entire SAN File System global namespace. It serves as the unit of workload for each MDS, and also dictates the overall organizational structure for the global namespace. It is also a mechanism for controlling the amount of space occupied by SAN File System clients. Filesets can be created based on workflow patterns, security, or backup considerations, for example. You might want to create a fileset for all the files used by a specific application, or associated with a specific client.

The fileset is used not only for managing the storage space, but also as the unit for creating FlashCopy images (see 2.5.12, FlashCopy on page 58). Correctly defined filesets mean that you can take a FlashCopy image of all the files in a fileset together in a single operation, thus providing a consistent image for all of those files. A key part of SAN File System design is organizing the global namespace into filesets that match the data management model of the enterprise. Filesets can also be used as a criterion in the placement of individual files within the SAN File System (see 2.5.10, Policy based storage and data management on page 49).

Tip: Filesets are assigned to an MDS either statically (that is, by specifying an MDS to serve the fileset when it is created) or dynamically. If dynamic assignment is chosen, automatic simple load balancing will be done. If using static fileset assignment, consider the overall I/O loads on the SAN File System cluster. Because each fileset is assigned to one (and only one) MDS at a time for serving the metadata, you will want to balance the load across all MDSs in the cluster by assigning filesets appropriately.

More information about filesets is given in 7.5, Filesets on page 286.

An administrator creates filesets and attaches them at specific locations below the global fileset. An administrator can also attach a fileset to another fileset. When a fileset is attached to another fileset, it is called a nested fileset. In Figure 2-5, fileset1 and fileset2 are the nested filesets of the parent fileset Winfiles.

Note: In general, we do not recommend creating nested filesets; see 7.5.2, Nested filesets on page 289 for the reasons why.

Figure 2-5 Filesets and nested filesets

Here we have shown several filesets, including filesets called UNIXfiles and Winfiles. We recommend separating filesets by their primary allegiance of operating system. This will facilitate file sharing (see Sharing files on page 54 for more information). Separation of filesets also facilitates backup: if you are using file-based backup methods (for example, tar, Windows Backup, vendor products like VERITAS NetBackup, or IBM Tivoli Storage Manager), full metadata attributes of Windows files can only be backed up from a Windows backup client, and full metadata attributes of UNIX files can only be backed up from a UNIX backup client. See Chapter 12, Protecting the SAN File System environment on page 477 for more information.

When creating a fileset, an administrator can specify a maximum size for the fileset (called a quota) and specify whether SAN File System should generate an alert if the size of the fileset reaches or exceeds a specified percentage of the maximum size (called a threshold). For example, if the quota on the fileset was set at 100 GB, and the threshold was 80%, an alert would be raised once the fileset contained 80 GB of data.

The action taken when the fileset reaches its quota size (100 GB in this instance) depends on whether the quota is defined as hard or soft. If a hard quota is used, once the quota is reached, any further requests from a client to add more space to the fileset (by creating or extending files) will be denied. If a soft quota is used, which is the default, more space can be allocated, but alerts will continue to be sent. Of course, once the amount of physical storage available to SAN File System is exhausted, no more space can be used. The quota limit, threshold, and quota type can be set differently and individually for each fileset.
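As a hedged illustration of how these settings come together, the following sketch shows a fileset being created with a quota and threshold using the sfscli mkfileset command (which appears in Example 2-1). The option names and values shown here are illustrative assumptions only, not the product's documented syntax; see 7.5, Filesets on page 286 for the actual command reference.

sfscli> mkfileset -attach /sanfs/HR -quota 100GB -thresh 80 -qtype soft HR    (option names are assumed for illustration)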

2.5.9 Storage pools


A storage pool is a collection of SAN File System volumes that can be used to store either metadata or file data. A storage pool consists of one or more volumes (LUNs from the back-end storage system perspective) that provide, for example, a desired quality of service for a specific use, such as to store all files for a particular application. An administrator must assign one or more volumes to a storage pool before it can be used.


SAN File System has two types of storage pools (System and User), as shown in Figure 2-6.

Figure 2-6 SAN File System storage pools

System Pool
The System Pool contains the system metadata (system attributes, configuration information, and MDS state) and file metadata (file attributes and locations) that is accessible to all MDSs in the server cluster. There is only one System Pool, which is created automatically when SAN File System is installed, with one or more volumes specified as a parameter to the install process.

The System Pool contains the most critical data for SAN File System. It is very important to use highly reliable and available LUNs as volumes (for example, using mirroring, RAID, and hot spares in the back-end storage system) so that the MDS cluster always has a robust copy of this critical data. For the greatest protection and highest availability in a local configuration, mirrored RAID-5 volumes are recommended. The RAID configuration should have a low ratio of data to parity disks, and hot spares should also be available, to minimize the amount of time to recover from a single disk failure. Remote mirroring solutions, such as MetroMirror, available on the IBM TotalStorage SAN Volume Controller, DS6000, and DS8000, are also possible.

User Pools
User Pools contain the blocks of data that make up user files. Administrators can create one or more user storage pools, and then create policies containing rules that cause the MDS servers to store data for specific files in the appropriate storage pools. A special User Pool is the default User Pool. This is used to store the data for a file if the file is not assigned to a specific storage pool by a rule in the active file placement policy. One User Pool, which is automatically designated the default User Pool, is created when SAN File System is installed. This can be changed by creating another User Pool and designating it as the default User Pool. The default pool can also be disabled if required.

2.5.10 Policy based storage and data management


SAN File System provides automatic file placement, at the time of creation, through the use of polices and storage pools. An administrator can create quality-of-service storage pools that are available to all users, and define rules in file placement policies that cause newly created files to be placed in the appropriate storage pools automatically. SAN File System also provides file lifecycle management through the use of file management policies.

File placement policy


A file placement policy is a list of rules that determines where the data for specific files is stored. A rule is an SQL-like statement that tells a SAN File System MDS to place the data for a file in a specific storage pool if the file attribute that the rule specifies meets a particular condition. A rule can apply to any file being created, or only to files being created within a specific fileset, depending on how it is defined.

A storage pool is a named set of storage volumes that can be specified as the destination for files in rules. Only User Pools are used to store file data. The rules in a file placement policy are processed in order until the condition in one of the rules is met. The data for the file is then stored in the specified storage pool. If none of the conditions specified in the rules of the policy is met, the data for the file is stored in the default storage pool.

Figure 2-7 shows an example of how file placement policies work. The yellow box shows a sequence of rules defined in the policy. Underneath each storage pool is a list of some files that will be placed in it, according to the policy. For example, the file /HR/dsn1.bak matches the first rule (put all files in the fileset /HR into User Pool 1) and is therefore put into User Pool 1. The fact that it also matches the second rule is irrelevant, because only the first matching rule is applied. See 7.8, File placement policy on page 304 for more information.

Figure 2-7 File placement policy execution
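Expressed as policy rules, the example in Figure 2-7 might look like the following sketch. This is illustrative only: the rule names and pool names are invented for this example, and while the rules follow the SQL-like style described above, the exact syntax, keywords, and options accepted by SAN File System are documented in 7.8, File placement policy on page 304.

VERSION 1
RULE 'hrFiles'  SET STGPOOL 'UserPool1' FOR FILESET ('HR')
RULE 'bakFiles' SET STGPOOL 'UserPool4' WHERE NAME LIKE '%.bak'
RULE 'db2Files' SET STGPOOL 'UserPool2' WHERE NAME LIKE 'DB2.%'
RULE 'tmpFiles' SET STGPOOL 'UserPool3' WHERE NAME LIKE '%.tmp'

Because rules are evaluated in order and only the first match applies, /HR/dsn1.bak lands in User Pool 1 even though it also matches the '%.bak' rule; any file matching none of the rules goes to the default User Pool.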

The file placement policy can also optionally contain preallocation rules. These rules, available with SAN File System V2.2.2, allow a system administrator to automatically preallocate space for designated files, which can improve performance. See 7.8.7, File storage preallocation on page 324 for more information about preallocation.

File management policy and lifecycle management


SAN File System Version 2.2 introduced a lifecycle management function. This allows administrators to specify how files should be automatically moved among storage pools during their lifetime, and, optionally, specify when files should be deleted. The business value of this feature is that it improves storage space utilization, allowing a balanced use of premium and inexpensive storage that matches the objectives of the enterprise.

For example, an enterprise may have two types of storage devices: one that has higher speed, reliability, and cost, and one that has lower speed, reliability, and cost. Lifecycle management in SAN File System could be used to automatically move infrequently accessed files from the more expensive storage to cheaper storage, or vice versa for more critical files. Lifecycle management reduces the manual intervention necessary in managing space utilization and therefore also reduces the cost of management.

Lifecycle management is set up via file management policies. A file management policy is a set of rules controlling the movement of files among different storage pools. Rules are of two types: migration and deletion. A migration rule causes matching files to be moved from one storage pool to another. A deletion rule causes matching files to be deleted from the SAN File System global namespace. Migration and deletion rules can be specified based on pool, fileset, last access date, or size criteria.

The system administrator defines these rules in a file management policy, then runs a special script to act on the rules. The script can be run in a planning mode to determine in advance which files would be migrated or deleted by the script. The plan can optionally be edited by the administrator, and then passed back for execution by the script so that the selected files are actually migrated or deleted. For more information, see Chapter 10, File movement and lifecycle management on page 435.
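To make the shape of such rules concrete, the following sketch shows what a file management policy might express, written in the same SQL-like style as the placement rules shown earlier. The keywords and attribute names used here (MIGRATE, DELETE, ACCESS_AGE) are illustrative assumptions, not the product's documented syntax; the actual rule format and the planning/execution script are described in Chapter 10, File movement and lifecycle management on page 435.

VERSION 1
RULE 'ageOut'  MIGRATE FROM POOL 'UserPool1' TO POOL 'UserPool3' WHERE ACCESS_AGE > 90
RULE 'cleanup' DELETE  FROM POOL 'UserPool3' WHERE NAME LIKE '%.tmp' AND ACCESS_AGE > 365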

2.5.11 Clients
SAN File System is based on a client-server design. A SAN File System client is a computer that accesses and creates data that is stored in the SAN File System global namespace. SAN File System is designed to support the local file system interfaces on UNIX, Linux, and Windows servers. This means that SAN File System is designed to be used without requiring any changes to your applications or databases that use a file system to store data.

The SAN File System clients for AIX, Sun Solaris, Red Hat, and SUSE Linux use the virtual file system interface within the local operating system to provide file system interfaces to the applications running on those platforms. The SAN File System client for Microsoft Windows (supported Windows 2000 and 2003 editions) uses the installable file system interface within the local operating system to provide file system interfaces to the applications.

Clients access metadata (such as a file's location on a storage device) only through an MDS, and then access data directly from storage devices attached to the SAN. This method of data access eliminates server bottlenecks and provides read and write performance that is comparable to that of file systems built on bus-attached, high-performance storage.

SAN File System currently supports clients that run these operating systems:
- AIX 5L Version 5.1 (32-bit uniprocessor or multiprocessor). The bos.up or bos.mp packages must be at level 5.1.0.58, plus APAR IY50330 or higher.
- AIX 5L Version 5.2 (32-bit and 64-bit). The bos.up and bos.mp packages must be at level 5.2.0.18 or later. APAR IY50331 or higher is required.
- AIX 5L Version 5.3 (32-bit or 64-bit).
- Windows 2000 Server and Windows 2000 Advanced Server with Service Pack 4 or later.
- Windows 2003 Server Standard and Enterprise Editions with Service Pack 1 or later.
- VMware ESX 2.0.1 running Windows only.
- Red Hat Enterprise Linux 3.0 AS, ES, and WS, with U2 kernel 2.4.21-15.0.3 (hugemem or smp) or U4 kernel 2.4.21-27 (hugemem or smp) on x86 systems.
- SUSE Linux Enterprise Server 8.0 on kernel level 2.4.21-231 (Service Pack 3) or kernel level 2.4.21-278 (Service Pack 4) on x86 servers (32-bit).
- SUSE Linux Enterprise Server 8.0 SP3 kernel 2.4.21-251 on pSeries (64-bit).
- SUSE Linux Enterprise Server 8.0 SP3 kernel 2.4.21-251 on zSeries (31-bit).
- Sun Solaris 9 (64-bit) on SPARC servers.

Note: The AIX client is supported on pSeries systems with a maximum of eight processors. The Red Hat client is supported on either the SMP or Hugemem kernel, with a maximum of 4 GB of main memory. The zSeries SUSE 8 SAN File System client uses the zFCP driver and supports access to ESS, DS6000, and DS8000 for user LUNs.

SAN File System client software must be installed on each AIX, Windows, Solaris, SUSE, or Red Hat client. On an AIX, Linux, or Solaris client, the software is a virtual file system (VFS), and on a Windows client, it is an installable file system (IFS). The VFS and IFS provide clients with local access to the global namespace on the SAN. Note that clients can also act as servers to a broader clientele. They can provide NFS or CIFS access to the global namespace to LAN-attached clients and can host applications such as database servers.

A VFS is a subsystem of an AIX/Linux/Solaris client's virtual file system layer, and an IFS is a subsystem of a Windows client's file system. The SAN File System VFS or IFS directs all metadata operations to an MDS and all data operations to storage devices attached to the SAN. The SAN File System VFS or IFS provides the metadata to the client's operating system and any applications running on the client. The metadata looks identical to metadata read from a native, locally attached file system; that is, it emulates the local file system semantics. Therefore, no change is necessary to the client applications' access methods to use SAN File System.

When the global namespace is mounted on an AIX/Linux/Solaris client, it looks like a local file system. When the global namespace is mounted on a Windows client, it appears as another drive letter and looks like an NTFS file system. Files can therefore be shared between Windows and UNIX clients (permissions and suitable applications permitting).

Clustering
SAN File System V2.2.2 supports clustering software running on AIX, Solaris, and Microsoft clients.

AIX clients
HACMP is supported on SAN File System clients running AIX 5L V5.1, V5.2, and V5.3, when the appropriate maintenance levels are installed.

Solaris clients
Solaris client clustering is supported when used with Sun Cluster V3.1. Sun clustered applications can use SAN File System provided that the SAN File System is declared to the cluster manager as a Global File System. Likewise, non-clustered applications are supported when Sun Cluster is present on the client. Sun Clusters can also be used as an NFS server, as the NFS service will fail over using local IP connectivity.


Microsoft clients
Microsoft client clustering is supported for Windows 2000 and Windows 2003 clients with MSCS (Microsoft Cluster Server), using a maximum of two client nodes per cluster.

Caching metadata, locks, and data


Caching allows a client to achieve low-latency access to both metadata and data. A client can cache metadata to perform multiple metadata reads locally. The metadata includes the mapping of logical file system data to physical addresses on storage devices attached to the SAN. A client can also cache locks, which allows it to grant multiple opens to a file locally without having to contact an MDS for each operation that requires a lock. In addition, a client can cache data for small files to eliminate I/O operations to storage devices attached to the SAN.

A client performs all data caching in memory. If there is not enough space in the client's cache for all of the data in a file, the client simply reads the data from the shared storage device on which the file is stored. Data access is still fast because the client has direct access to all storage devices attached to the SAN.

Using the direct I/O mode


Some applications, such as database management systems, use their own sophisticated cache management systems. For such applications, SAN File System provides a direct I/O mode. In this mode, SAN File System performs direct writes to disk, and bypasses local file system caching. Using the direct I/O mode makes files behave more like raw devices. This gives database systems direct control over their I/O operations, while still providing the advantages of SAN File System, such as SAN File System FlashCopy. Applications need to be aware of (and configured for) direct I/O. IBM DB2 UDB supports direct I/O (see 14.5, Direct I/O support on page 558 for more information). On the Intel Linux (IA32) releases supported with the SAN File System V2.2.2 client, support is provided for the POSIX direct I/O file system interface calls.
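As a rough illustration of what the POSIX direct I/O interface looks like from an application's point of view, the following generic C sketch opens a file with the O_DIRECT flag and writes an aligned buffer. This is ordinary Linux/POSIX code, not SAN File System-specific code; the file name and sizes are arbitrary examples, and databases such as DB2 UDB perform the equivalent configuration internally.

/* Generic Linux sketch of POSIX direct I/O; the file name and sizes are examples only. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* O_DIRECT bypasses the local file system cache; buffers must be suitably aligned. */
    int fd = open("/sfs/sanfs/db/datafile", O_CREAT | O_WRONLY | O_DIRECT, 0644);
    if (fd < 0)
        return 1;

    void *buf;
    /* Align the buffer to a 4 KB boundary, a common requirement for direct I/O. */
    if (posix_memalign(&buf, 4096, 4096) != 0)
        return 1;
    memset(buf, 0, 4096);

    /* The write goes straight to the storage device rather than through the page cache. */
    ssize_t written = write(fd, buf, 4096);

    free(buf);
    close(fd);
    return written == 4096 ? 0 : 1;
}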

Virtual I/O
The SAN File System 2.2.2 client for AIX 5L V5.3 interoperates with Virtual I/O (VIO) devices. VIO enables virtualization of storage across LPARs in a single POWER5 system. SAN File System support for VIO enables SAN File System clients to use data volumes that can be accessed through VIO. In addition, all other SAN File System clients will interoperate correctly with volumes that are accessed through VIO by one or more AIX 5L V5.3 clients. Version 1.2.0.0 of VIO is supported by SAN File System.

Restriction: SAN File System does not support the use of a Physical Volume Identifier (PVID) to export a LUN/physical volume (for example, hdisk4) on a VIO Server. To list devices with a PVID, type lspv. If the second column has a value of none, the physical volume does not have a PVID. For a description of driver configurations that require the creation of a volume label, see What are some of the restrictions and limitations in the VIOS environment? on the VIOS Web site at:
http://www.software.ibm.com/webapp/set2/sas/f/vios/documentation/faq.html


Sharing files
In a homogeneous environment (either all UNIX or all Windows clients), SAN File System provides access and semantics that are customized for the operating system running on the clients. When files are created and accessed from only Windows clients, all the security features of Windows are available and enforced. When files are created and accessed from only UNIX clients, all the security features of UNIX are available and enforced.

In Version 2.2 of SAN File System (and beyond), the heterogeneous file sharing feature improves the flexibility and security involved in sharing files between Windows and UNIX based environments. The administrator defines and manages a set of user map entries using the CLI or GUI, which specify a UNIX domain-qualified user and a Windows domain-qualified user that are to be treated as equivalent for the purpose of validating file access permissions. Once these mappings are defined, SAN File System automatically accesses the Active Directory Server (Windows) and either LDAP or Network Information Service (NIS) on UNIX to cross-reference the user ID and group membership. See 8.3, Advanced heterogeneous file sharing on page 347 for more information about heterogeneous file sharing, and the sketch that follows this section for a rough illustration of defining a user map entry.

If no user mappings are defined, then heterogeneous file sharing (where there are both UNIX and Windows clients) is handled in a restricted manner. When files created on a UNIX client are accessed by a non-mapped user on a Windows client, the access available will be the same as that granted by the Other permission bits in UNIX. Similarly, when files created on a Windows client are accessed by a non-mapped user on a UNIX client, the access available is the same as that granted to the Everyone user group in Windows.

If the improved heterogeneous file sharing capabilities (user mappings) are not implemented by the administrator, then file sharing is positioned primarily for homogeneous environments. In that case, the ability to share files heterogeneously is recommended for read-only use; that is, create files on one platform, and provide read-only access on the other platform. To this end, filesets should be established so that they have a primary allegiance. This means that certain filesets will have files created in them only by Windows clients, and other filesets will have files created in them only by UNIX clients.
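As a purely illustrative sketch of defining such a mapping, an administrator might issue something like the following from the administrative CLI. The mkusermap and lsusermap commands appear in Example 2-1, but the option names and the user name format shown here are hypothetical assumptions for illustration only; the actual syntax is given in 8.3, Advanced heterogeneous file sharing on page 347.

sfscli> mkusermap -unixuser jsmith@nisdomain -winuser jsmith@WINDOMAIN    (option names are assumed for illustration)
sfscli> lsusermap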

How clients access the global namespace


SAN File System clients mount the global namespace onto their systems. After the global namespace is mounted on a client, users and applications can use it just as they do any other file system to access data and to create, update, and delete directories and files. On a UNIX-based client (including AIX, Solaris, and Linux), the global namespace looks like a local UNIX file system. On a Windows client, it appears as another drive letter and looks like any other local NTFS file system. Basically, the global namespace looks and acts like any other file system on a client's system. There are some restrictions on the NTFS features supported by SAN File System (see Windows client restrictions on page 56).

Figure 2-8 on page 55 shows the My Computer view from a Windows 2000 client: the S: drive (labelled sanfs) is the attach point of the SAN File System. A Windows 2003 client will see a similar display.


Figure 2-8 Windows 2000 client view of SAN File System

If we expand the S: drive in Windows Explorer, we can see the directories underneath (Figure 2-9 shows this view). There are a number of filesets available, including the root fileset (top level) and two filesets under the root (USERS and userhomes). However, the client is not aware of this; they simply see the filesets as regular folders. The hidden directory, .flashcopy, is part of the fileset and is used to store FlashCopy images of the fileset. More information about FlashCopy is given in 2.5.12, FlashCopy on page 58 and 9.1, SAN File System FlashCopy on page 376.

Figure 2-9 Exploring the SAN File System from a Windows 2000 client


Example 2-2 shows the AIX mount point for the SAN File System, namely SANFS. It is mounted on the directory /sfs. Other UNIX-based clients see a similar output from the df command. A listing of the SAN File System namespace base directory shows the same directory or folder names as in the Windows output. The key thing here is that all SAN File System clients, whether Windows or UNIX, will see essentially the same view of the global namespace.
Example 2-2 AIX/UNIX mount point of the SAN File System
Rome:/ >df
Filesystem    512-blocks      Free %Used    Iused %Iused Mounted on
/dev/hd4           65536     46680   29%     1433     9% /
/dev/hd2         1310720     73752   95%    21281    13% /usr
/dev/hd9var        65536     52720   20%      455     6% /var
/dev/hd3          131072    103728   21%       59     1% /tmp
/dev/hd1           65536     63368    4%       18     1% /home
/proc                  -         -    -         -     -  /proc
/dev/hd10opt       65536     53312   19%      291     4% /opt
/dev/lv00        4063232   1648688   60%      657     1% /usr/sys/inst.images
SANFS          603095040 591331328    2%        1     1% /sfs
Rome:/ > cd /sfs/sanfs
Rome:/ > ls
.flashcopy  aix51  aixfiles  axi51  files  lixfiles  lost+found  smallwin  testdir  tmp  userhomes  USERS  winfiles  winhome

Some client restrictions


There are certain restrictions in the current release for SAN File System clients.

Use of MBCS
Multi-byte characters (MBCS) can now be used (from V2.2 onwards) in pattern matching in file placement policies and for fileset attach point directories. MBCS are not supported in the names of storage pools and filesets. Likewise, MBCS cannot be used in the SAN File System cluster name, which appears in the namespace as the root fileset attach point directory name (for example, /sanfs), or in the fileset administrative object name (as opposed to the fileset directory attach point).

UNIX client restriction


UNIX clients cannot use user IDs or group IDs 999999 and 1000000 for real users or groups; these are reserved IDs used internally by SAN File System.

Note: To avoid any conflicts with your current use of IDs, the reserved user IDs can be configured once at installation time.

Windows client restrictions


The SAN File System is natively case-sensitive. However, Windows applications can choose to use case-sensitive or case-insensitive names. This means that case-sensitive applications, such as those making use of Windows support for POSIX interfaces, behave as expected. Native Win32 clients (such as Windows Explorer) get only case-aware semantics. The case specified at the time of file creation is preserved, but in general, file names are case-insensitive. For example, Windows Explorer allows the user to create a file named Hello.c, but an attempt to create hello.c in the same folder will fail because the file already exists. If a Windows-based client accesses a folder that contains two files created on a UNIX-based client with names that differ only in case, its inability to distinguish between the two files may lead to undesirable results. For this reason, we do not recommend that UNIX clients create case-differentiated files in filesets that will be accessed by Windows clients.

The following features of NTFS are not currently supported by SAN File System:
- File compression on either individual files or all files within a folder.
- Extended attributes.
- Reparse points.
- Built-in file encryption on files and directories.
- Quotas (however, quotas are provided by SAN File System filesets).
- Defragmentation and error-checking tools (including CHKDSK).
- Alternate data streams.
- Assigning an access control list (ACL) for the entire drive.
- NTFS change journal.
- Scanning all files/directories owned by a particular SID (FSCTL_FIND_FILES_BY_SID).
- Security auditing or SACLs.
- Windows sparse files.
- Windows Directory Change Notification. Applications that use the Directory Change Notification feature may stop running when a file system does not support this feature, while other applications will continue running. The following applications stop running when Directory Change Notification is not supported by the file system:
  - Microsoft applications: ASP.net, Internet Information Server (IIS), and the SMTP Service component of Microsoft Exchange
  - Non-Microsoft application: Apache Web server
  The following application continues to run when Directory Change Notification is not supported by the file system: Windows Explorer. Note that when changes to files occur by other processes, the changes will not be automatically reflected until a manual refresh is done or the file folder is reopened.

In addition to the above limitations, note these differences:
- Programs that open files using the 64-bit file ID (the FILE_OPEN_BY_FILE_ID option) will fail. This applies to the NFS server bundled with Microsoft Services for UNIX.
- Symbolic links created on UNIX-based clients are handled specially by SAN File System on Windows-based clients; they appear as regular files with a size of 0, and their contents cannot be accessed or deleted.
- Batch oplocks are not supported. LEVEL_1, LEVEL_2, and Filter types are supported.


Differences between SAN File System and NTFS

SAN File System differs from Microsoft Windows NT File System (NTFS) in its degree of integration into the Windows administrative environment. The differences are:
- Disk management within the Microsoft Management Console shows SAN File System disks as unallocated.
- SAN File System does not support reparse points or extended attributes.
- SAN File System does not support the use of the standard Windows write signature on its disks.
- Disks used for the global namespace cannot sleep or hibernate.

SAN File System also differs from NTFS in its degree of integration into Windows Explorer and the desktop. The differences are:
- Manual refreshes are required when updates to the SAN File System global namespace are initiated on the metadata server (such as attaching a new fileset).
- The recycle bin is not supported.
- You cannot use distributed link tracking. This is a technique through which shell shortcuts and OLE links continue to work after the target file is renamed or moved. Distributed link tracking can help a user locate the link sources in case the link source is renamed or moved to another folder on the same or different volume on the same PC, or moved to a folder on any PC in the same domain.
- You cannot use NTFS sparse-file APIs or change journaling. This means that SAN File System does not provide efficient support for the indexing services accessible through the Windows Search for files or folders function. However, SAN File System does support implicitly sparse files.

2.5.12 FlashCopy
A FlashCopy image is a space-efficient, read-only copy of the contents of a fileset in the SAN File System global namespace at a particular point in time. A FlashCopy image can be used with standard backup tools available in a user's environment to create backup copies of files onto tape. A FlashCopy image can also be quickly reverted; that is, the current fileset contents can be rolled back to an available FlashCopy image.

When creating FlashCopy images, an administrator specifies which fileset to create the FlashCopy image for. The FlashCopy image operation is performed individually for each fileset. A FlashCopy image is simply an image of an entire fileset (and just that fileset, not any nested filesets) as it exists at a specific point in time. An important benefit is that during creation of a FlashCopy image, all data remains online and available to users and applications. The space used to keep the FlashCopy image is included in its overall fileset space; however, a space-efficient algorithm is used to minimize the space requirement. The FlashCopy image does not include any nested filesets within it. You can create and maintain a maximum of 32 FlashCopy images of any fileset. See 9.1, SAN File System FlashCopy on page 376 for more information about SAN File System FlashCopy.

Figure 2-10 on page 59 shows how a FlashCopy image can be seen on a Windows client. In this case, a FlashCopy image was made of the fileset container_A, and specified to be created in the directory 062403image. The fileset has two top-level directories, DRIVERS and Adobe. After the FlashCopy image is made, a subdirectory called 062403image appears in the special directory .flashcopy (which is hidden by default) underneath the root of the fileset. This directory contains the same folders as the actual fileset, that is, DRIVERS and Adobe, and all the file/folder structure underneath. It is simply frozen at the time the image was taken.


Therefore, clients have file-level access to these images, to access older versions of files, or to copy individual files back to the real fileset if required, and if permissions on the flashcopy folder are set appropriately.

Figure 2-10 FlashCopy images
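As an illustrative sketch only, creating and later reverting to such an image from the administrative CLI might look like the following. The mkimage, lsimage, and reverttoimage commands appear in Example 2-1, and the fileset and directory names are taken from the Figure 2-10 example, but the option names shown here are assumptions for illustration; see 9.1, SAN File System FlashCopy on page 376 for the actual syntax.

sfscli> mkimage -fileset container_A -dir 062403image image062403    (option names are assumed for illustration)
sfscli> lsimage -fileset container_A
sfscli> reverttoimage -fileset container_A image062403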

2.5.13 Reliability and availability


Reliability is defined as the ability of SAN File System to perform to its specifications without error. This is critical for a system that will store corporate data. Availability is the ability to stay up and running, plus the ability to transparently recover to maintain the available state. SAN File System has many built-in features for reliability and availability.

The SAN File System operates in a cluster. Each MDS engine supplied by the client is required to have the following features for availability:
- Dual hardware components:
  - Hardware mirrored internal disk drives
  - Dual Fibre Channel ports supporting multi-path I/O for storage devices


- Remote Supervisor Adapter II (RSA II). The RSA II provides remote access to the engine's desktop, monitoring of environmental factors, and engine restart capability. The RSA card communicates with the service processors on the MDS engines in the cluster to collect hardware information and statistics. The RSA cards also communicate with the service processors to enable remote management of the servers in the cluster, including automatic reboot if a server hang is detected. More information about the RSA card can be found in 13.5, Remote Supervisor Adapter II on page 537.

To improve availability, the MDS hardware also needs the following dual redundant features:
- Dual power supplies.
- Dual fans.
- Dual Ethernet connections with network bonding enabled. Bonding network interfaces together allows for increased failover in high availability configurations. Beginning with V2.2.2, SAN File System supports network bonding with SLES8 SP4 and SLES9 SP1. Redundant Ethernet support on each MDS enables full redundancy of the IP network between the MDSs in the cluster as well as between the SAN File System clients and the MDSs. The dual network interfaces in each MDS are combined redundantly, servicing a single IP address. Each MDS still uses only one IP address. One interface is used for IP traffic unless the interface fails, in which case IP service is failed over to the other interface. The time to fail over an IP service is on the order of a second or two. The change is transparent to SAN File System, and no change to client configuration is needed.

We also strongly recommend UPS systems to protect the SAN File System engines.

Automatic restart from software problems


SAN File System has the availability functions to monitor, detect, and recover from faults in the cluster. Failures in SAN File System can be categorized into two types: software faults that affect MDS software components, and hardware faults that affect hardware components.

Software faults
Software faults are server errors or failures for which recovery is possible via a restart of the server process without manual administrative intervention. SAN File System detects and recovers from software faults via a number of mechanisms. An administrative watchdog process on each server monitors the health of the server and restarts the MDS processes in the event of failure, typically within about 20 seconds of the failure. If the operating system of an MDS hangs, it will be ejected from the cluster once the MDS stops responding to other cluster members. A surviving cluster member will raise an event and SNMP trap, and will use the RSA card to restart the MDS that was hung.

Hardware faults
Hardware faults are server failures for which recovery requires administrative intervention. They have a greater impact than software faults and require at least a machine reboot, and possibly physical maintenance, for recovery.

SAN File System detects hardware faults by way of a heartbeat mechanism between the servers in a cluster. A server engine that experiences a hardware fault stops responding to heartbeat messages from its peers. Failure of a server to respond for a long enough period of time causes the other servers to mark it as being down and to send administrative SNMP alerts.

Automatic fileset and master role failover


SAN File System supports the nondisruptive, automatic failover of the workload (filesets). If any single MDS fails or is manually stopped, SAN File System automatically redistributes the filesets of that MDS to the surviving MDSs and, if necessary, reassigns the master role to another MDS in the cluster. SAN File System also uses automatic workload failover to provide nondisruptive maintenance for the MDSs. 9.5, MDS automated failover on page 413 contains more information about SAN File System failover.

2.5.14 Summary of major features


To summarize, SAN File System provides the following features.

Direct data access by exploitation of SAN technology


SAN File System uses a data access model that allows client systems to access data directly from storage systems using a high-bandwidth SAN, without interposing servers. Direct data access helps eliminate server bottlenecks and provides the performance necessary for data-intensive applications.

Global namespace
SAN File System presents a single global namespace view of all files in the system to all of the clients, without manual, client-by-client configuration by the administrator. A file can be identified using the same path and file name, regardless of the system from which it is being accessed. The single global namespace shared directly by clients also reduces the requirement of data replication. As a result, the productivity of the administrator as well as the users accessing the data is improved. It is possible to restrict access to the global namespace by using a non-uniform SAN File System configuration. In this way, only certain SAN File System volumes and therefore filesets will be available to each client. See 3.3.2, Non-uniform SAN File System configuration on page 69 for more information.

File sharing
SAN File System is specifically designed to be easy to implement in virtually any operating system environment. All systems running this file system, regardless of operating system or hardware platform, potentially have uniform access to the data stored (under the global namespace) in the system. File metadata, such as the last modification time, is presented to users and applications in a form that is compatible with the native file system interface of the platform. SAN File System is also designed to allow heterogeneous file sharing among the UNIX and Windows client platforms with full locking and security capabilities, which increases the performance and flexibility of heterogeneous file sharing.


Policy based automatic placement


SAN File System is aimed at simplifying storage resource management and reducing the total cost of ownership through the policy based automatic placement of files on appropriate storage devices. The storage administrator can define storage pools for specific application requirements and qualities of service, and define rules based on data attributes to store files on the appropriate storage devices automatically.

Lifecycle management
SAN File System provides the administrator with policy based data management that automates the management of data stored on storage resources. Through the policy based movement of files between storage pools and the policy based deletion of files, there is less effort needed to update the location of files or sets of files. Free space within storage pools will be more available as potentially older files are removed. The overall cost of storage can be reduced by using this tool to manage data between high/low performing storage based on importance of the data.


Part 2


Planning, installing, and upgrading


In this part of the book, we present detailed information for planning, installing, and upgrading the IBM TotalStorage SAN File System.


Chapter 3.

MDS system design, architecture, and planning issues


In this chapter, we discuss the following topics:
- Site infrastructure
- Fabric needs and storage partitioning
- SAN storage infrastructure
- Network infrastructure
- Security: Local Authentication and LDAP
- File sharing: Heterogeneous file sharing
- Planning for storage pools, filesets, and policies
- Planning for high availability
- Client needs and application support
- Client data migration
- SAN File System sizing guide
- Integration of SAN File System into an existing SAN
- Planning worksheets


3.1 Site infrastructure


To make sure that the installation of SAN File System is successful, it is crucial to plan thoroughly. You need to verify that the following site infrastructure is available for SAN File System:
- Adequate hardware for the SAN File System Metadata server engines. SAN File System is shipped as a software product; therefore, the hardware for SAN File System must be supplied by the client. To help size the hardware for the SAN File System Metadata server engines, a SAN File System sizing guide is available. We discuss sizing considerations in 3.12, SAN File System sizing guide on page 91. The Metadata servers must be set up with two internal drives for the operating system, configured as a RAID 1 mirrored pair.
- A SAN configuration with no single point of failure. This means that connectivity should be guaranteed in case there is a loss of an HBA, switch, GBIC, fibre cable, or storage controller. Detailed information about planning SANs is available in the redbook Designing and Optimizing an IBM Storage Area Network, SG24-6419.
- A KVM (Keyboard Video Mouse) for each server. This is also required for the Master Console, if deployed; however, a separate KVM can also be used. Typical clients will use a switch so that the KVM can be shared between multiple servers. The Master Console KVM can be shared with the SAN File System servers through the RSA card.
- A SAN with two switch ports per SAN File System server engine, and enough SAN ports for any additional storage devices and clients. The SAN ports on the SAN File System engines are required to be 2 Gbps, so appropriate cabling is required. Client-supplied switches can be 1 or 2 Gbps (2 Gbps is recommended for performance). Optionally, but recommended, the Master Console, if deployed, uses two additional SAN ports. The HBA in the MDS must be capable of supporting the QLogic device driver level recommended for use with SAN File System V2.2.2.
- A supported back-end storage device with LUNs defined for both system and user storage. Currently supported disk systems for system storage are the IBM TotalStorage Enterprise Storage Server (ESS), the IBM TotalStorage DS8000 series, the IBM TotalStorage DS6000 series, the IBM TotalStorage SAN Volume Controller (SVC), and the IBM TotalStorage DS4000 series (formerly FAStT) Models DS4300, DS4400, and DS4500. System metadata should be configured on high availability storage (RAID with a low ratio of data to parity disks). SAN File System V2.2.2 can be configured with any suitable SAN storage device for user data storage. That is, any SAN-attached storage supported by the operating systems on which the SAN File System client runs can be used, provided it conforms to the SCSI standard for unique device identification. SAN File System V2.2.2 also supports iSCSI data LUNs, as long as the devices conform to the SCSI driver interface standards.
- Sufficient GBICs, LAN, and SAN cables for the installation.
- Network ports and TCP/IP addresses. Each SAN File System engine needs at least two network ports and TCP/IP addresses (one for the server host address and the other for the RSA connection). The ports can be either standard 10/100/1000 Ethernet or an optional Fibre connection. The Master Console, if deployed, requires two 10/100 Ethernet ports and two TCP/IP addresses. Therefore, the minimum requirement for a two-engine cluster is four Ethernet ports, or six if the optional Master Console is deployed. In addition, Ethernet bonding (see 3.8.5, Network planning on page 84 for more information) is HIGHLY recommended for every SAN File System configuration. This requires an additional network port (either standard copper or optional fibre), preferably on a separate switch for maximum redundancy. With Ethernet bonding configured, three network ports are required per MDS.
- To perform a rolling upgrade to SAN File System V2.2.2, you must leave the USB/RS-485 serial network interface in place for the RSA cards. Once the upgrade is committed, you can remove the RS-485 interface, since it is no longer used; it is replaced by the TCP/IP interface for the RSA cards.
- Power outlets (one or two per server engine; dual power supplies for the engine are recommended but not required). You need two wall outlets or two rack PDU outlets per server engine. For availability, these should be on separate power circuits. The Master Console, if deployed, requires one wall outlet or one PDU outlet.
- SAN clients with supported client operating systems, and supported Fibre Channel adapters for the disk system being used. Supported SAN File System clients at the time of writing are listed in 2.5.11, Clients on page 51, and are current at the following Web site:
http://www.ibm.com/servers/storage/software/virtualization/sfs/interop.html

3.2 Fabric needs and storage partitioning


When planning the fabric for SAN File System, consider these criteria:
- The SAN configuration for the SAN File System should not have a single point of failure. This means that connectivity should be guaranteed in case there is a loss of an HBA, switch, GBIC, fibre cable, or storage controller. We recommend separating the fabrics between the HBA ports within the MDS. By separating the fabrics, you will avoid a single point of failure for the fabric services, such as the name server.
- A maximum of 126 dual-path LUNs can be assigned to the system storage pool. SAN File System V2.2 supports an unlimited number of LUNs for user data storage; however, the environment will necessarily impose some practical restrictions on this, determined by the amount of storage supported by the storage devices and the client operating systems.
- The SAN File System Metadata servers (MDS) must have access to all Metadata (or system storage pool) LUNs. Access to client data LUNs is not required.
- The SAN File System clients must be prevented from having access to the Metadata LUNs, as shown in Figure 3-1 on page 68. The darker area includes the MDS engines and the LUNs in the system pool. The lighter areas include various combinations of SAN File System clients and LUNs in user pools. Overlaps are possible in the clients' range of access, depending on the user data access required and the underlying support for this in the storage devices.
- The SAN File System clients need to have access only to those LUNs they will eventually access. This is achieved by using zoning, LUN masking, or storage partitioning on the back-end storage devices.


Figure 3-1 Mapping of Metadata and User data to MDS and clients

The following additional guidelines apply to zoning and LUN masking:
- Each of the SAN File System clients should be zoned separately (hard zoning is recommended) so that each HBA can detect all the LUNs containing that client's data in the User Pools. If there are multiple clients with the same HBA type (manufacturer and model), these may be in the same zone; however, putting different HBA types in the same zone is not supported, for incompatibility reasons.
- LUN masking must be used, where supported by the storage device, to mask the metadata storage LUNs for exclusive use by the Metadata servers. Here are some guidelines for LUN masking:
  - Specify the Metadata LUNs with the Linux mode (if the back-end storage has OS-specific operating modes).
  - Specify the LUNs for User Pools, when using ESS, as follows (note that on SVC, there is no host type setting): set the correct host type according to which client/server you are configuring. The host type is set on a per-host basis, not per LUN. Therefore, LUNs in User Pools may be mapped to multiple hosts, for example, Windows and AIX. You can ignore any warning messages about unlike hosts.
  Tip: For ESS, if you have microcode level 2.2.0.488 or above, there will be a host type entry of IBM SAN File System (Lnx MDS). If this is available, choose it for the LUNs. If running an earlier microcode version, choose Linux.
- For greatest security, SAN File System fabrics should preferably be isolated from non-SAN File System fabrics on which administrative activities could occur.
- No hosts other than the MDS servers and the SAN File System clients can have access to the LUNs used by the SAN File System. This can be achieved by appropriate zoning/LUN masking, or for greatest security, by using separate fabrics for SAN File System and non-SAN File System activities.


- The Master Console hardware, if deployed, requires two fibre ports for connection to the SAN. This enables it to perform SAN discovery for use with IBM TotalStorage Productivity Center for Fabric. We strongly recommend installing and configuring IBM TotalStorage Productivity Center for Fabric on the Master Console, as having an accurate picture of the SAN configuration is important for a successful SAN File System installation.
- Multi-pathing device drivers are required on the MDS. The IBM Subsystem Device Driver (SDD) is required on the SAN File System MDS when using the IBM TotalStorage Enterprise Storage Server, DS8000, DS6000, and SAN Volume Controller. RDAC is required on the SAN File System MDS for SANs using IBM TotalStorage DS4x00 series disk systems.
- Multi-pathing device drivers are recommended on the SAN File System clients for availability reasons, if provided by the storage system vendor.

3.3 SAN File System volume visibility


In SAN File System V1.1, there were restrictions on the visibility of user volumes: all the MDS and all the clients were required to have access to all the data LUNs. With V2.1 and later of SAN File System, this restriction is eased. The MDS require access only to all the Metadata LUNs, and the clients require access to all or a subset of the data LUNs. Note that it is still true that SAN File System clients must not have visibility to the System volumes.
Important: Make sure your storage device supports sharing LUNs among different operating systems if you will be sharing individual user volumes (LUNs) among different SAN File System clients. Some storage devices allow each LUN to be made available only to one operating system type. Check with your vendor.
In general, we can distinguish two ways of setting up a SAN File System environment: a uniform and a non-uniform SAN File System configuration.

3.3.1 Uniform SAN File System configuration


In a uniform SAN File System configuration, all SAN File System clients have access to all user volumes. Since this uniform configuration simplifies the management of the whole SAN File System environment, it might be the preferred approach for smaller, homogeneous environments. In a uniform SAN File System configuration, all SAN File System data is visible to all clients. If you need to prevent undesired client access to particular data, you can use standard operating system file/directory permissions to control access at the file/directory level. The uniform SAN File System configuration corresponds to a SAN File System V1.1 environment.

3.3.2 Non-uniform SAN File System configuration


In a non-uniform SAN File System configuration, not all SAN File System clients have access to all the user volumes. Clients access only the user volumes they really need, or the volumes residing on disk systems for which they have operating system support. The main consideration for a non-uniform configuration is to ensure that all clients have access to all user storage pool volumes that can potentially be used by a corresponding fileset. Any attempt to read or write data on a volume to which a SAN File System client does not have access will lead to an I/O error. We consider non-uniform configurations preferable for large and heterogeneous SAN environments.


Note for SAN File System V2.1 clients: SAN configurations for SAN File System V2.1 are still supported by V2.2 and above, so no changes are required in the existing SAN infrastructure when upgrading.
A non-uniform SAN File System configuration provides the following benefits:
- Flexibility
- Scalability
- Security
- Wider range of mixed environment support

Flexibility
SAN File System can adapt to environment-specific SAN zoning requirements. Instead of enforcing a single zone environment, multiple zones, and therefore multiple spans of access to SAN File System user data, are possible. This means it is now easier to deploy SAN File System into an existing SAN environment. To help make SAN File System configurations more manageable, a set of new functions and commands was introduced with SAN File System V2.1:
- SAN File System volumes can now be increased in size without interrupting file system processing or moving the content of the volume. This function is supported on those systems on which the actual device driver allows LUN expansion (for example, current models of SVC or the DS4000 series) and the host operating system also supports it.
- Data volume drain functionality (rmvol) uses a transaction-based approach to manage the movement of data blocks to other volumes in the particular storage pool. From the client perspective, this is a serialized operation, where only one I/O at a time occurs to volumes within the storage pool. The goal of employing this kind of mechanism is to reduce the client's CPU cycles.
- Some commands for managing the client data (for example, mkvol and rmvol) now require a client name as a mandatory parameter. This ensures that the administrative command will be executed only on that particular client.
We cover the basic usage of the most common SAN File System commands in Chapter 7, Basic operations and configuration on page 251.

Scalability
The MDS can host up to 126 dual-path LUNs for the system pool. The maximum number of LUNs for client data depends on platform-specific capabilities of that particular client. Very large LUN configurations are now possible if the data LUNs are divided between different clients.

Security
By easing the zoning requirements in SAN File System, better storage and data security is possible in the SAN environment, as all hosts (SAN File System clients) have access only to their own data LUNs. You can see an example of a SAN File System zoning scenario in Figure 3-1 on page 68.

Wider range of mixed environment support


Since not all the data LUNs need to be visible to all SAN File System clients and to the MDS, and therefore not all storage must be supported on every client and MDS, this expands the range of supported storage devices for clients. For example, if you have Linux and Windows clients, and a storage system that is supported only on Windows, you could make the LUNs on that system available only to the Windows clients, and not the Linux clients.

Note that LUNs within a DS4000 partition can only be used by one operating system type; this is a restriction of the DS4x00 partition. Other disk systems, for example, SVC, allow multi-operating system access to the same LUNs.

3.4 Network infrastructure


SAN File System has the following requirements for the network topology:
- One IP address is required for each MDS and one for the Remote Supervisor Adapter II (RSAII) in each engine. This is still true when implementing redundant Ethernet support (Ethernet bonding; see 3.8.5, Network planning on page 84) with SAN File System V2.2.2, since the two Ethernet NICs share one physical IP address. Currently, SAN File System supports from two to eight engines.
- To take full advantage of the MDS dual Ethernet/Ethernet bonding support provided in V2.2.2, each Ethernet NIC must be cabled to a separate Ethernet port, preferably in a separate switch. This provides greater availability in the event of an Ethernet switch outage.
- Two types of interfaces are supported on the MDS: 10/100/1000 Copper or 1 Gb Fibre Ethernet. The RSAII uses 10/100/1000 Copper Ethernet.
- The Master Console, if deployed, requires two Ethernet ports. One is connected to the existing IP network (connected to the Master Console, all MDS, and clients), and one is for a VPN connection to be used for remote access to bypass the firewall. This configuration allows the Master Console to be shared with an SVC (if installed).
- The client-to-cluster and intra-cluster communication traffic will be on the existing client LAN.
- All Metadata servers must be on the same physical network. If multiple subnets are configured on the physical network, it is recommended that all engines are on the same subnet.
- If possible, avoid any routers or gateways between the clients and the MDS. This will optimize performance.
- Any systems that will be used for SAN File System administration require IP access to the SAN File System servers hosting the Administrative servers.


An example of how the network can be set up is shown in Figure 3-2. Note there are two physical connections on the right of each MDS, indicating the redundant Ethernet configuration. However, these share the one TCP/IP address.


Figure 3-2 Illustrating network setup

3.5 Security
Authentication to the SAN File System administration interface can be accomplished in one of two ways: using LDAP, or using a new procedure called local authentication, which uses the Linux operating system login process (/etc/passwd and /etc/group). You must choose, as part of the planning process, whether you will use LDAP or local authentication. If an LDAP environment already exists, and you plan to implement SAN File System heterogeneous file sharing, there is an advantage to using that LDAP server; however, for environments not already using LDAP, SAN File System implementation can be simplified by using local authentication. Using local authentication can also eliminate one potential point of failure, since it does not depend on access to an external LDAP server to perform administrative functions.

3.5.1 Local authentication


With SAN File System V2.2.1 and later, you can use local authentication for your administrative IDs. Local authentication uses native Linux methods on the MDS to verify users and their authority to perform administrative operations. When an administrative request is issued (for example, to start the SAN File System CLI or log in to the GUI), the user ID and password are validated, and then it is verified that the user ID has authority to issue that particular request. Each user ID is assigned a role (corresponding to an OS group) that gives that user a specific level of access to administrative operations. These roles are Monitor, Operator, Backup, and Administrator. After authenticating the user ID, the administrative server interacts with the MDS to process the request.

Setting up local authentication


To use local authentication, define specific groups on each MDS (Administrator, Operator, Backup, and Monitor); they must have these exact names. Then add users, associating them with the appropriate groups according to the privileges required. For a new SAN File System installation, this is part of the pre-installation/planning process. For an existing SAN File System cluster that has previously been using LDAP authentication, migration to the local authentication method can occur at any time, except during a SAN File System software upgrade. We show detailed steps for defining the required groups and user IDs in 4.1.1, Local authentication configuration on page 100 (for new SAN File System installations) and 6.7, Switching from LDAP to local authentication on page 246 (for existing SAN File System installations that want to change methods).
When using local authentication, whenever a user ID/password combination is entered to start the SAN File System CLI or GUI, the authentication method checks that the user ID exists as a UNIX user account in /etc/passwd, and that the correct password was supplied. It then checks that the user ID is a member of one of the four required groups (Administrator, Operator, Backup, or Monitor). Finally, based on the group of which the user ID is a member, the method determines whether this group is authorized to perform the requested function in order to decide access.
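As an illustration, the following minimal sketch shows how the groups and one administrative user could be created on each MDS using standard Linux commands. The group names are mandated by SAN File System, but the user name (sfsadmin) is only an example; follow the procedure in 4.1.1, Local authentication configuration on page 100 for the exact steps for your installation.
# Create the four required groups (run on every MDS)
groupadd Administrator
groupadd Operator
groupadd Backup
groupadd Monitor
# Create an example administrative user (hypothetical name), place it in
# exactly one of the groups, and set the same password on every MDS
useradd -g Administrator sfsadmin
passwd sfsadmin
Remember that the same users, groups, and passwords must then be maintained manually in /etc/passwd and /etc/group on every MDS in the cluster, as described in the points below.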

Some points to note when using the local authentication method


- Every MDS must have the standard groups defined (Administrator, Operator, Backup, and Monitor).
- You need at least one user ID with the Administrator role. Other IDs with the Administrator or other roles may be defined, as many as are required. You can have more than one ID in each group, but each ID can only be in one group.
- Every MDS must have the same set of user IDs defined as UNIX OS accounts. The same set of users and groups must be manually configured on each MDS.
- Use the same password for each SAN File System user ID on every MDS. These must be synchronized manually in the local /etc/passwd and /etc/group files; use of other methods (for example, NIS) is not supported.
- You cannot change the authentication method during a rolling upgrade of the SAN File System software.
- Each user ID corresponding to a SAN File System administrator name must be a member of exactly one group name corresponding to a SAN File System administrator authorization level (Administrator, Operator, Backup, or Monitor). Users who will not access SAN File System must not be members of SAN File System administration groups.
As of SAN File System V2.2.1, you may choose to either deploy an LDAP server as before, or use the new local authentication option. When installing a new SAN File System, you can select which option to use (LDAP or local authentication). When prompted for a CLI user and password, specify an ID in the Administrator group, and its associated password. We show this method for installing SAN File System in 5.2.6, Install SAN File System cluster on page 138 and 5.2.7, SAN File System cluster configuration on page 147. For existing SAN File System installations, you can switch from LDAP to the new local authentication option. We show how to do this in 6.7, Switching from LDAP to local authentication on page 246.

3.5.2 LDAP
A Lightweight Directory Access Protocol (LDAP) server is the other alternative for authentication with the SAN File System administration interface. This LDAP server can be any compliant implementation, running on any supported operating system. Installing the LDAP server on an MDS or on the Master Console is not supported at this time.


Although any standards-compliant LDAP implementation should work with SAN File System, at the time of writing, tested combinations included:
- IBM Directory Server V5.1 for Windows
- IBM Directory Server V5.1 for Linux
- OpenLDAP/Linux
- Microsoft Active Directory
The LDAP server needs to be configured appropriately with SAN File System in order to use LDAP to authenticate SAN File System administrators. Examples of LDAP setup and configuration are provided in the following appendixes:
- Appendix A, Installing IBM Directory Server and configuring for SAN File System on page 565
- Appendix B, Installing OpenLDAP and configuring for SAN File System on page 589

LDAP network requirements


To configure SAN File System for LDAP, you need basic network information: the IP address and port numbers of the LDAP server. If you want to use a secure LDAP connection (optional), Secure Sockets Layer (SSL) must be in place, and an SSL certificate is required to set up the secure connection with SAN File System. In order for SAN File System to bind to the LDAP server, it also requires an authorized LDAP user name that can browse the LDAP tree where the Users and Roles are stored. A short description of the necessary fields and recommended values is given in Table 3-1, with space to fill in your values.

LDAP users
A user, in SAN File System and LDAP terms, is an entry in the LDAP database that corresponds to an administrator of SAN File System, that is, a person who will use the CLI (sfscli) or the GUI (console). While you can also use LDAP on your SAN File System clients to authenticate client users, this is not required, and is not discussed further in this redbook. All SAN File System administrative users must have an entry in the LDAP database. They must all have the same parent DN, and they must all be of the same objectClass. Each entry must contain a user ID attribute, which will be the login name, and a userPassword attribute. A short description of the necessary fields and recommended values is given in Table 3-1, with space to fill in your values.

LDAP roles
SAN File System administrators must have a role. The role of a SAN File System administrator determines the scope of commands they are allowed to execute. In increasing order of permission, the four roles are Monitor, Operator, Backup, and Administrator. Each of the four roles must have an entry in the LDAP database. All must have the same parent DN (distinguished name), and all must be of the same objectClass. When a user logs in, SAN File System checks the LDAP server to determine the role to which the user belongs. Each role entry must have an attribute containing the string that describes its role: Administrator, Backup, Operator, or Monitor. Finally, each must support an attribute that can contain multiple values, holding one value for each role occupant's DN. A short description of the necessary fields and recommended values is given in Table 3-1, with space to fill in your values.
Table 3-1 LDAP information for SAN File System planning (record your own values alongside the examples)

Network
  IP address of LDAP server:            9.42.164.125
  Port numbers:                         389 (insecure), 636 (secure)
  Authorized LDAP user name:            superadmin (default for IBM Directory Server)
  Authorized LDAP password:             secret (default for IBM Directory Server)

Organization
  Parent DN:                            dn: o=ITSO
  ObjectClass:                          organization
  o:                                    ITSO

Manager, ITSO organization
  Parent DN:                            dn: cn=Manager,o=ITSO
  ObjectClass:                          organizationalRole
  Attribute containing role name:       cn: Manager

Users, ITSO organization
  Parent DN:                            dn: ou=Users,o=ITSO
  ObjectClass:                          organizationalUnit
  ou:                                   Users

Admin user (Users, ITSO organization)
  Parent DN:                            dn: cn=ITSOAdmin Administrator,ou=Users,o=ITSO
  ObjectClass of user entries:          inetOrgPerson
  Attribute containing role name:       cn: ITSOAdmin Administrator / sn: Administrator
  Attribute containing login user ID:   uid: ITSOAdmin
  Attribute containing login password:  userPassword: password

Monitor user (Users, ITSO organization)
  Parent DN:                            dn: cn=ITSOMon Monitor,ou=Users,o=ITSO
  ObjectClass of user entries:          inetOrgPerson
  Attribute containing role name:       cn: ITSOMon Monitor / sn: Monitor
  Attribute containing login user ID:   uid: ITSOMon
  Attribute containing login password:  userPassword: password

Backup user (Users, ITSO organization)
  Parent DN:                            dn: cn=ITSOBack Backup,ou=Users,o=ITSO
  ObjectClass of user entries:          inetOrgPerson
  Attribute containing role name:       cn: ITSOBack Backup / sn: Backup
  Attribute containing login user ID:   uid: ITSOBack
  Attribute containing login password:  userPassword: password

Operator user (Users, ITSO organization)
  Parent DN:                            dn: cn=ITSOOper Operator,ou=Users,o=ITSO
  ObjectClass of user entries:          inetOrgPerson
  Attribute containing role name:       cn: ITSOOper Operator / sn: Operator
  Attribute containing login user ID:   uid: ITSOOper
  Attribute containing login password:  userPassword: password

Roles
  Parent DN:                            dn: ou=Roles,o=ITSO
  ObjectClass:                          organizationalUnit
  ou:                                   Roles

Administrator role (Roles, ITSO organization)
  Parent DN:                            dn: cn=Administrator,ou=Roles,o=ITSO
  ObjectClass of role entries:          organizationalRole
  Attribute containing role name:       cn: Administrator
  Attribute for role occupants:         roleOccupant: cn=ITSOAdmin Administrator,ou=Users,o=ITSO

Monitor role (Roles, ITSO organization)
  Parent DN:                            dn: cn=Monitor,ou=Roles,o=ITSO
  ObjectClass of role entries:          organizationalRole
  Attribute containing role name:       cn: Monitor
  Attribute for role occupants:         roleOccupant: cn=ITSOMon Monitor,ou=Users,o=ITSO

Backup role (Roles, ITSO organization)
  Parent DN:                            dn: cn=Backup,ou=Roles,o=ITSO
  ObjectClass of role entries:          organizationalRole
  Attribute containing role name:       cn: Backup
  Attribute for role occupants:         roleOccupant: cn=ITSOBack Backup,ou=Users,o=ITSO

Operator role (Roles, ITSO organization)
  Parent DN:                            dn: cn=Operator,ou=Roles,o=ITSO
  ObjectClass of role entries:          organizationalRole
  Attribute containing role name:       cn: Operator
  Attribute for role occupants:         roleOccupant: cn=ITSOOper Operator,ou=Users,o=ITSO
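Once the directory has been populated according to Table 3-1, you can verify that the role entries and their occupants resolve correctly before configuring SAN File System. The following sketch assumes the example ITSO suffix and values from the table, an OpenLDAP-style bind DN (cn=Manager,o=ITSO), and a standard ldapsearch client; substitute your own server address, bind DN, and credentials.
# Verify that the Administrator role entry exists and lists its occupants
ldapsearch -x -H ldap://9.42.164.125:389 -D "cn=Manager,o=ITSO" -w secret -b "ou=Roles,o=ITSO" "(cn=Administrator)" roleOccupant
# Verify that the administrative user entry can be found by its login uid
ldapsearch -x -H ldap://9.42.164.125:389 -D "cn=Manager,o=ITSO" -w secret -b "ou=Users,o=ITSO" "(uid=ITSOAdmin)"
If both searches return the expected entries, the same DNs and attribute names can be supplied when the SAN File System installation prompts for the LDAP settings.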


3.6 File sharing


In this section, we cover some requirements for advanced heterogeneous file sharing with the SAN File System.

3.6.1 Advanced heterogenous file sharing


Advanced heterogeneous file sharing was introduced in SAN File System V2.2. It enables secure user and group authorization when sharing files between UNIX and Windows based systems. This allows files created on UNIX based systems to be viewed by authorized users on Windows based systems and vice versa. More details on setting up advanced heterogeneous file sharing are given in 8.3, Advanced heterogeneous file sharing on page 347. Heterogeneous file sharing requires an Active Directory domain on Windows, and either an NIS or LDAP instance for UNIX, to provide directory services for user IDs on the SAN File System clients. At present, only one UNIX directory service domain (either NIS or LDAP) and one Active Directory instance are supported (although it is possible that this instance serves multiple Active Directory domains).

3.6.2 File sharing with Samba


The use of Samba on selected SAN File System clients is also supported to export the global namespace. Samba is an open source implementation of the Common Internet File System (CIFS) protocol that can be installed on UNIX and Linux based platforms.
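As a simple illustration, a SAN File System client running Samba could export part of the global namespace with a share definition similar to the following. This is only a sketch: the share name, the mount point (/mnt/sanfs is assumed here), and the access options depend on your own Samba configuration and security requirements.
[sanfs]
   comment = SAN File System global namespace
   path = /mnt/sanfs
   browseable = yes
   read only = no
After adding a share definition like this to smb.conf and restarting the Samba daemons, systems without a SAN connection can reach the SAN File System data over the LAN through this client.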

3.7 Planning the SAN File System configuration


In this section, we cover some basic planning and sizing guidelines for SAN File System.

3.7.1 Storage pools and filesets


SAN File System volumes (LUNs) are grouped into storage pools, as described in 2.5.9, Storage pools on page 48. There are two types of pools: User Pools and the System Pool. The System Pool is used for the actual file metadata, as well as for general bookkeeping of the SAN File System, that is, the system metadata, or the common information shared among all cluster engines.
Important: The System Pool contains the most critical data for SAN File System. It is very important to use highly reliable and available LUNs as volumes (for example, using mirroring, RAID, and hot spares in the back-end storage system) so that the MDS cluster always has a robust copy of this critical data. For the greatest protection and highest availability in a local configuration, mirrored RAID-5 volumes are recommended, with a low ratio of data to parity disks. Remote mirroring solutions, such as Metro and Global Mirroring, available on the IBM TotalStorage Enterprise Storage Server, SVC, and DS6000/DS8000 series, are also possible.


One default User Pool is created on installation; additional pools may be created based on criteria chosen by a particular organization. Examples of the many possible criteria include:
- Device capabilities
- Performance
- Availability
- Location: secure or unsecure
- Business owners
- Application types
We strongly recommend separating workload across different types of storage LUNs. The system pool size should start at approximately 2-5% of the total user data size, and volumes may be added to increase the pool size as user data grows.
As part of the planning and design process, you should determine which storage pools are needed. In order to determine how many storage pools are needed, a data classification analysis might be required. For example, you might want to place the database data, the shared work directories used by application developers, and the personal home directories of individuals into separate storage pools. The reason for doing this is to use storage capacity more efficiently. With the data classified into pools, you can use an enterprise class disk array like the IBM TotalStorage DS8000 for the databases, a mid-range disk array like the DS4x00 series for the shared work directories, and low-cost storage (JBODs) for the personal home directories. The goal of storing the data in separate pools is to match the value of the data to the cost of the storage.
Figure 3-3 shows three storage pools that have been defined for particular needs. The clients have also been mapped to particular storage pools according to the access requirements. This mapping information is used to determine which LUNs need to be made available to which clients (via a combination of zoning, LUN masking, or other methods as available in the storage system).

Figure 3-3 Data classification example (a low-cost JBOD pool, an OLTP pool for random I/O, and a critical RAID-5 cached pool)

Data classification analysis will also help to implement policy-based file placement management into the SAN File System. Policy determines which pool or pools will be used to place files when they are created within SAN File System. If a non-uniform configuration is being used in SAN File System, then you need to make sure that for each client, all volumes in any storage pool that could be used by any fileset to which that client needs access are available to that client. We will show some methods for doing this in 9.7, Non-uniform configuration client validation on page 429.


For the best performance in SAN File System, all engines should be busy in a balanced manner. This is facilitated through the use of filesets. You should plan for at least N filesets for N MDSs; otherwise, some of the Metadata servers will be in standby mode. You could carve the workload into a multiple of N filesets, all expected to be similar in terms of workload, or use a more granular approach, where the filesets have different access characteristics (for example, where some generate more metadata traffic than others).
SAN File System also supports basic load balancing functions for filesets; you can balance the fileset workload by dynamically assigning filesets to an MDS, depending on the number of filesets already being served by each MDS. See 7.5, Filesets on page 286 for more information about dynamic filesets and load balancing. Nested filesets are not recommended; see 7.5.2, Nested filesets on page 289 for the reasons why.
Note: Remember that the performance of the SAN File System cluster itself is dependent on metadata traffic and not data traffic.

3.7.2 File placement policies


SAN File System includes a powerful mechanism for controlling how administrators manage files in the global file system: the file placement policy. This is the placement of files in storage pools using rules based on file attributes, such as file name, owner, group ID of owner, or the system creating the files. It is important that you determine the policy during the planning and design phase, as an incorrect policy can cause files to be created in an unexpected storage pool. You can define as many policies as you like for SAN File System; however, only one policy at a time can be active. Changes in policy are not retroactive, that is, if you later decide to create a rule to put certain files into a different pool, this will not affect files meeting those criteria that are already in the global file system. For detailed information about how to implement policy, see 7.8, File placement policy on page 304.
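To give a feel for what a placement policy looks like, the sketch below illustrates the general style of rule-based placement: files are matched on attributes such as name and directed to a named storage pool, with unmatched files falling through to the default User Pool. The rule and pool names here are invented for the example, and the exact rule syntax and the procedure for creating and activating a policy are described in 7.8, File placement policy on page 304, so treat this only as an illustration of the concept.
VERSION 1
rule 'databases'  set stgpool 'oltp_pool'  where NAME like '%.dbf'
rule 'mediafiles' set stgpool 'cheap_pool' where NAME like '%.mpg'
In this example, new database files would be placed on the high-end pool and media files on the low-cost pool, matching the data classification approach discussed in 3.7.1; any file not matched by a rule would go to the default User Pool.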

3.7.3 FlashCopy considerations


FlashCopy images for each fileset are stored in a hidden .flashcopy directory, which is located under the filesets attachment point in the directory structure. FlashCopy images are stored on the same volumes as the original fileset. The SAN File System FlashCopy engine uses a space-efficient, copy-on-write method to make the image. When the image is first made, the image and the original data share the same space. As data in the actual fileset changes (data added, deleted, or modified), only the changed blocks in the fileset are written to a new location on disk. The FlashCopy image continues to point to the old blocks, while the actual fileset will be updated over time to point to the new blocks. 9.1.1, How FlashCopy works on page 376 describes this process in more detail. Since the change rate of a fileset is not generally predictable, it is not possible to accurately determine how much space a particular FlashCopy image will occupy at a particular time. When planning space requirements, include space for FlashCopy images. You can maintain up to 32 images per fileset. The more images you maintain, the more space will be needed. Therefore, you need to consider how many FlashCopy images you would like to maintain for a particular fileset.

80

IBM TotalStorage SAN File System

You might assume that 10% of the total amount of data in the fileset will change during the lifetime of a FlashCopy image, that is, between when the image is taken and when it is deleted. For example, assume that we have 500 GB of data in one fileset and want to keep three FlashCopy images. At a 10% changed data ratio, we need 50 GB of additional space per FlashCopy image, or 150 GB of additional space in total for all three FlashCopy images.
Note: Keep in mind that the space used by FlashCopy images counts against the quota of the particular fileset.

3.8 Planning for high availability


The SAN File System has been architected to provide end user applications with highly available (HA) access to data contained in a SAN File System global namespace. The following SAN File System features are geared toward providing high availability:
- Clustered MDSs for redundancy of the file system service
- Fileset relocation (failover/failback) in response to cluster changes
- SAN File System client logic for automatic fileset failover re-discovery and lock reassertion
- SAN File System client logic for automatic detection of changes in the MDS cluster
- SAN File System client logic for automatic establishment and maintenance of leases
- MDS failure monitoring and detection through network heartbeats
- Cluster server fencing through SAN messaging or remote power control
- Redundant paths to storage devices through dual HBAs and multipathing drivers
- Redundant MDS Ethernet connections through dual network adapters and driver-level path failover (Ethernet bonding in active-backup mode)
- Rolling upgrade of cluster software between releases
- Quorum disk lock function for network partition handling
- Administrative agent with autorestart service for software fault handling
- Internal deadlock detection
In combination, these capabilities allow a SAN File System to respond to many network, SAN, software, and hardware faults automatically with little to no down time for client applications. In addition, routine maintenance operations, such as server hardware, cluster software, or network switch upgrades, can be performed in a SAN File System while preserving application access to the SAN File System namespace.

3.8.1 Cluster availability


In a normal state, each MDS in the active cluster exchanges heartbeats with other MDSs and uses these heartbeats to detect a failed peer MDS. In V2.2.2, the default heartbeat interval is 500 milliseconds with a heartbeat threshold of 20 heartbeats. If an MDS misses 20 consecutive expected heartbeats (10 seconds by default in V2.2.2), the cluster will declare the node failed and start to eject it. If the ejection cannot be communicated to the master MDS (for example, if the failed node is the master itself), then the observing MDS starts a process to elect a new master MDS from all the peer MDSs that remain in the cluster (that is, are reachable).


The length of the failure detection window is set so that a crashing MDS process has time to be restarted automatically if possible, and rejoin the cluster before the ejection process is started. This means that filesets do not have to be relocated in the event of most software faults. The following section discusses the restart mechanism that makes this rejoin possible.

3.8.2 Autorestart service


The SAN File System Administrative agent running on each MDS has an autorestart service that monitors the MDS processes. If the processes fail on a particular MDS (software fault), the autorestart service immediately restarts the MDS processes, which then attempt to rejoin the active cluster. If the restart and rejoin is successful, no filesets are relocated. If the autorestart service is stopped or disabled, or if the rejoin fails, one of the other MDSs will detect this, and initiate an ejection operation to remove the failed node and relocate its filesets to an active MDS. The state of the autorestart service can be viewed using the sfscli lsautorestart command. To achieve the highest degree of availability, we highly recommend that the autorestart service always be enabled on all MDSs. The autorestart service will automatically disable itself if restarting an MDS fails four times within a one hour period. This may happen, for example, in cases where a SAN fault causes continued I/O errors at restart time when the MDS attempts to rejoin the cluster. Periodic checks should be made to ensure that the autorestart service is active, especially after recovering from a fault. The service can be started using the sfscli startautorestart <servername> command on the master or target MDS.
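The commands below illustrate this monitoring in practice; the server name MDS1 is only an example, so substitute the names reported by your own cluster.
# Display the state of the autorestart service
sfscli lsautorestart
# Re-enable the autorestart service on a specific MDS if it has disabled itself
sfscli startautorestart MDS1
Running the first command as part of a routine health check, particularly after recovering from a fault, helps ensure that the protection described above is still in place.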

3.8.3 MDS fencing


If the cluster loses contact with an MDS that was previously active, that MDS is called a rogue server. Before moving filesets, or electing a new master MDS if the rogue MDS was the master, the master (or new master candidate) must first be certain that the rogue node cannot issue latent I/Os. That is, the rogue server must be fenced from the cluster so that it is guaranteed not to issue latent I/Os after failover. The SAN File System cluster software has two mechanisms for fencing rogue servers: the first is a SAN based messaging protocol, and the second uses a remote power management capability to power the rogue node off.

Fencing through SAN communication


Certain types of network faults can cause the cluster to lose contact with an MDS. With the use of Ethernet bonding, this is less likely. If the lost MDS is actually alive and has access to the SAN but no network connection, the master MDS will send a shutdown message through a SAN based messaging protocol. When a partitioned node receives the SAN based shutdown message it stops all I/O and sends an OK response through the SAN signifying that it agrees to shut down and that it has completed all I/O. The partitioned node is then considered safe and its workload may be relocated to another online MDS.

Fencing through remote power management (RSA)


If the MDS is unreachable either via the network or the SAN, another MDS can detect the loss of heartbeat and start the ejection from the cluster. It will also remotely power off the unreachable MDS before relocating its filesets. The remote power control function is implemented by the MDS using the IBM eServer xSeries RSA (Remote Supervisor Adapter) system on the failed node. Before V2.2.2, remote access to an MDS's RSA card was through a dedicated RS-485 serial network: an MDS wishing to fence another MDS from the cluster would log on to the local RSA card and access the remote RSA card over this RS-485 network. In V2.2.2 and beyond, all access to a remote RSA card is over the IP network. In V2.2.2 and higher, each MDS also periodically checks whether it can reach the RSA cards in all other MDSs. By default, this check is executed daily. If an MDS cannot access a peer Metadata server's RSA card, the detecting node will log the error and issue an SNMP alert (see 13.6, Simple Network Management Protocol on page 543). The RSA check interval can be changed or disabled using the following internal commands:
sfscli legacy setrsacheckinterval <interval_in_seconds>
sfscli legacy setrsacheckinterval DEFAULT
sfscli legacy disablersacheck

Each of these commands must be executed from the master MDS. If the RSA fault detection is disabled with the last command, a manual check can be performed on demand (or via a cron job) using the internal command sfscli legacy lsengine. This is shown in 13.5.1, Validating the RSA configuration on page 538.
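If you prefer a scheduled manual check, a cron entry similar to the following could run the check nightly on the master MDS and append the output to a log file. The schedule, the log file name, and the assumption that sfscli is in the root user's PATH are all examples; adjust them to your environment.
# Run the RSA reachability check every night at 01:00 and keep the output
0 1 * * * sfscli legacy lsengine >> /var/log/sfs_rsa_check.log 2>&1
Reviewing this log (or the SNMP alerts described above) periodically helps catch RSA connectivity problems before a fencing operation actually needs the RSA cards.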

3.8.4 Fileset and workload distribution


A fileset is a logical subtree of the SAN File System global namespace, and is the fundamental unit of workload assigned to an MDS in a cluster. Each MDS essentially provides access to a subset of the filesets that comprise the global namespace. The fileset workload of an MDS will be relocated to a peer MDS if the original MDS fails or is stopped.
When a fileset is created, it can be assigned to a specific MDS for management. As long as the specified MDS is part of the active cluster group, that fileset will be serviced from the specified MDS. This is known as static fileset assignment. You can also choose to allow the cluster to assign the fileset to a suitable MDS, using a simple load balancing algorithm. This fileset may be moved from one MDS to another whenever the fileset load (number of filesets per MDS) gets unbalanced because of a change in the cluster membership. This is known as dynamic fileset assignment. Filesets can be changed from static to dynamic, and from dynamic to static, and a static fileset can also be reassigned statically to another MDS.
We recommend using either all dynamic or all static fileset assignments to avoid undesired excessive load on a specific MDS cluster node and to get more predictable fileset distribution/failover behavior. Using all static filesets allows you to have more precise control of load balancing in the SAN File System cluster. Dynamic filesets will be allocated to different MDSs to balance the load. However, the load balancing algorithm essentially only considers the number of filesets assigned to each MDS. It does not take into account that some filesets may be more active than others. Therefore, if you know which filesets are expected to be more active, you can use this knowledge to assign them statically to cluster nodes based on activity as opposed to number of filesets.
In a static fileset environment, you can also choose to have an idle MDS with no filesets assigned. This idle server is available to receive failed over filesets. This is known as an N+1 or spare server configuration. The only way to force a spare server N+1 configuration is to make all filesets static and leave one node with no static fileset assignments.
It is important to note that a client must be able to access all filesets in the path to an object in order to access the object. Therefore, namespace design has an impact on availability, and in general nested filesets should be avoided for maximum availability, because an event impacting a parent fileset can impact all children filesets.


3.8.5 Network planning


A SAN File System is implemented in the client IP network. The properties of this underlying network impact the availability of the SAN File System. Figure 3-4 shows a SAN File System network designed for high availability.

Figure 3-4 SAN File System design

Each SAN File System MDS has dual Ethernet adapters (Gigabit or Fibre Channel), and uses Ethernet bonding to provide redundant connections to the IP network. Bonding is a term used to describe combining multiple physical Ethernet links to form one virtual link, and is sometimes referred to as trunking, channel bonding, NIC teaming, IP multipathing, or grouping. Bonding is commonly implemented either in the kernel network stack (driver and device independent) or by Ethernet device drivers. There are multiple bonding modes, with the most common being active-active (load balancing packets across all bonded members) and active-backup, in which only one NIC in a bonded group is active at a time. In active-backup mode, failover occurs to the inactive NIC upon failure of the active NIC. Active-backup mode works with any existing Ethernet infrastructure, while active-active mode (load balancing) requires participation of the network switches.
In SAN File System V2.2.2, the dual Ethernet adapters on each MDS can be bonded into one virtual interface in active-backup mode with MII monitoring for link failure detection. See Set up Ethernet bonding on page 131 and IBM TotalStorage SAN File System Installation and Configuration Guide, GA27-4316 for details on configuring Ethernet bonding on the MDS. Although not compulsory, we strongly recommend that you implement Ethernet bonding in your SAN File System cluster.
Ethernet bonding allows a single NIC or cable to fail without downtime, but if both NICs are connected to a common switch, a switch failure can cause significant downtime. Therefore, for highest availability, a fully redundant physical network layer is recommended, so that each NIC is connected to a separate switch.


The combination of Ethernet Bonding and a fully redundant physical network allows a SAN File System to be transparent to many network faults or maintenance operations, such as cable faults, NIC faults, or switch replacements. If both NICs are isolated, then failover will occur, that is, the MDS will be ejected from the cluster, and filesets transferred to a surviving MDS(s). Ethernet bonding may also be implemented on the SAN File System clients; this is optional, and the methods for doing this depend on the client OS platform. Consult your OS documentation for details.
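To illustrate the concept, the following sketch shows the generic Linux approach to creating an active-backup bond with MII link monitoring from two NICs. The interface names, IP address, and monitoring interval are examples only; on the MDS, follow the procedure in the installation guide referenced above rather than configuring the bond by hand.
# Load the bonding driver in active-backup mode with a 100 ms MII link check
modprobe bonding mode=active-backup miimon=100
# Bring up the virtual interface with the MDS cluster IP address (example address)
ifconfig bond0 192.168.10.21 netmask 255.255.255.0 up
# Enslave the two physical NICs; only one carries traffic at a time
ifenslave bond0 eth0 eth1
With this arrangement, the failure of eth0, its cable, or its switch port causes traffic to move to eth1 without changing the IP address that the rest of the cluster and the clients see.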

3.8.6 SAN planning


A SAN File System that has high availability requirements also needs a redundant SAN fabric. The Fibre Channel host bus adapters (HBAs) on each MDS should be connected to separate switches. Client machines should also be attached to more than one switch, but this is less critical, since the loss of a single client's I/O path does not impact other clients in the way that the total loss of an MDS's I/O path does. A highly available SAN configuration is shown in Figure 3-4 on page 84. Redundancy in the physical SAN must work together with a function that detects SAN path failure and fails over the traffic to another path. This function is commonly provided by either the disk device driver or at the OS kernel level. The SAN File System MDSs work with the Subsystem Device Driver (SDD) when the system pool is comprised of volumes provided by the DS8000/DS6000, ESS, or SVC. If the system volumes are provided by the DS4x00/FAStT, then the RDAC multipathing driver is used.

3.9 Client needs and application support


This section details some specifics for clients (file sharing, administrative, and Master Console) as well as application support.

3.9.1 Client needs


At the time of writing, SAN File System supports the following client platforms:
- Windows 2000 Server and Advanced Server
- Windows Server 2003 Standard and Enterprise Editions
- AIX 5L Version 5.1 (32-bit)
- AIX 5L Version 5.2 (32- and 64-bit)
- AIX 5L Version 5.3 (32- and 64-bit)
- Red Hat Enterprise Linux 3.0 on Intel
- SUSE Linux Enterprise Server 8.0 on Intel
- SUSE Linux Enterprise Server 8.0 for IBM zSeries
- SUSE Linux Enterprise Server 8.0 for IBM pSeries
- Solaris 9
See the following Web site for the latest list of supported SAN File System client platforms, including full fix, Service Pack, and kernel levels:
http://www.ibm.com/storage/support/sanfs

Volume managers, such as VERITAS Volume Manager or LVM in AIX, can be used only to manage virtual disks or LUNs that are not managed by SAN File System. This is because both SAN File System and other volume managers need to own their particular volumes.


The clients require HBAs that are compatible with the underlying storage systems used for data storage by SAN File System. See the following IBM Web sites for supported adapters: DS6x00 and DS8x00 series:
http://www.ibm.com/servers/storage/support/disk/ds6800/
http://www.ibm.com/servers/storage/support/disk/ds8100/
http://www.ibm.com/servers/storage/support/disk/ds8300/

ESS:
http://www.ibm.com/servers/storage/support/disk/2105.html

SVC:
http://www.ibm.com/servers/storage/support/virtual/2145.html

DS4x00 series except DS4800:


http://www.ibm.com/servers/storage/support/disk/ds4100/
http://www.ibm.com/servers/storage/support/disk/ds4300/
http://www.ibm.com/servers/storage/support/disk/ds4400/
http://www.ibm.com/servers/storage/support/disk/ds4500/

For non-IBM storage, consult your vendor for supported HBAs. Each client requires at least 20 MB of available space on the hard drive for the SAN File System client code. To remotely administer the SAN File System, you need a secure shell (SSH) client for the CLI and a Web browser for the GUI. Examples of SSH clients are PuTTY, Cygwin, or OpenSSH, which are downloadable at:
http://www.putty.nl
http://www.cygwin.com
http://www.openssh.com

The Web browsers currently supported are Internet Explorer 6.0 SP1 and above and Netscape 6.2 and above (Netscape 7.0 and above is recommended). To access the Web interface for the RSAII card, Java plug-in Version 1.4 is also required, which can be downloaded from:
http://www.java.sun.com/products/plugin

3.9.2 Privileged clients


A privileged client, in SAN File System terms, is a client that needs to have root privileges in a UNIX environment or Administrator privileges in a Windows environment. A root or Administrator user on a privileged SAN File System client will have full control over all file system objects in the filesets. A root or Administrator user on a non-privileged SAN File System client will not have full control over file system objects. We will discuss privileged clients in more detail in 7.6.2, Privileged clients on page 297. How many privileged clients do you need? This depends on your environment. However, we recommend having at least two privileged clients per platform, which means two for Windows, and two for UNIX-based systems. In this case, since AIX, Solaris, and Linux all use the same user/group permissions scheme, we will consider them all as UNIX-based systems. A privileged client is also needed to perform backup/restore operations. You may consider configuring additional privileged clients if root or Administrator privileges are required by any of your client applications or particular security needs.


You can grant or revoke privileged client access dynamically. In this way, you can simply grant privileged client access only when you need to perform an action requiring root privileges on SAN File System objects, and revoke it once you complete the action.

3.9.3 Client application support


SAN File System is designed to work with all applications, and application binaries can be installed in the SAN File System global namespace.

Virtual I/O on AIX


The SAN File System V2.2.2 client for AIX 5L V5.3 will interoperate correctly with Virtual I/O (VIO) devices. The support for VIO enables SAN File System clients to use data volumes that can be accessed through VIO. In addition, all other V2.2.2 SAN File System clients will interoperate correctly with volumes that are accessed through VIO by one or more AIX 5L V5.3 clients. SAN File System supports the use of data LUNs over VIO devices, except in storage subsystem/device driver configurations that require the administrator to write a VIOS Volume Label in order to use a LUN. The list of supported devices and configurations for VIO, including limitations on those which require writing a VIOS Volume Label, is available at:
http://www.software.ibm.com/webapp/set2/sas/f/vios/documentation/home.html

Direct I/O
Some applications, such as database management systems, use their own sophisticated cache management systems. For such cases, SAN File System provides a direct I/O mode. In this mode, SAN File System performs direct writes to disk and bypasses local file system caching. Using direct I/O mode makes files behave more like raw devices. This gives database systems direct control over their I/O operations, while still providing the advantages of SAN File System features, such as policy based placement. The application needs to be written to understand how to use direct I/O; enterprise applications normally know how to handle this. Basically, the application sets the O_DIRECT flag when it calls open() on AIX. On Windows, the application has to set the FILE_NO_INTERMEDIATE_BUFFERING flag at open time. The administrator cannot enable direct I/O by other means (for example, at mount time). Direct I/O is already available for IBM DB2 UDB for Windows, and it is available for IBM DB2 UDB for AIX at V8.1 FP4. Direct I/O is also available on SAN File System Intel clients that run Linux releases (32-bit) that support the POSIX direct I/O file system interface calls, such as SLES8 and RHEL3.

3.9.4 Clustering support


SAN File System supports the following clustering software on the clients:
- HACMP clustering software on AIX
- Sun Cluster Version 3.1 on Solaris
- Microsoft Cluster Server (MSCS) on Windows 2000 Advanced Server and Windows Server 2003 Enterprise Edition


MSCS with SAN File System


A maximum of two nodes per MSCS cluster are supported. When implementing MSCS on SAN File System clients, the administrator defines accessible filesets or directories in the SAN File System namespace as cluster resources that are owned and may be served (that is, using CIFS sharing) by only one node in the MSCS cluster at a time. MSCS then moves ownership of these resources around in the cluster according to the availability of the nodes. This means that any eligible (see next paragraph) fileset/directory in the SAN File System namespace is accessible through only one node in a given MSCS cluster at a time. For clients not in the MSCS cluster, access to the SAN File System namespace is shared as usual according to SAN File System features. The individual MSCS cluster resources that are eligible to be defined can be any first-level directory below the root drive. Therefore, these could be directories corresponding to filesets which are attached to the root of the SAN File System global namespace, or could be non-fileset directories that are attached directly at the ROOT fileset. Second or lower-level directories (regardless of whether they correspond to filesets) are not available to be defined as MSCS cluster resources. See Chapter 11, Clustering the SAN File System Microsoft Windows client on page 447 for more information about MSCS.

3.9.5 Linux for zSeries


The SAN File System client for Linux for IBM eServer zSeries supports the 31-bit SLES8 distribution, with the 2.4.21-251 kernel. It can run under z/VM V5.1 or later, or directly within an LPAR, on any generally available zSeries model that supports the co-required operating system and software stack. The zSeries Linux client supports fixed-block SCSI SAN attachment using the zFCP driver, with data LUNs on IBM ESS, DS6000, and DS8000 storage. Therefore, the zSeries SAN File System client can share data with other zSeries clients, or with SAN File System clients on other platforms, provided the data resides on one of the disk systems supported for zSeries Linux. Non-IBM storage and iSCSI storage are not supported at this time.

3.10 Data migration


Existing data must be migrated (copied) into the SAN File System global namespace from its original file system location. This is because the SAN File System separates the metadata (information about the files) from the actual user data. The migrate process will store the metadata in the System Pool and the file data in the appropriate User Pool(s), according to the policy in place. Figure 3-5 on page 89 shows this process. Remember that after you have migrated applications and data, files may be stored in different locations than previously. You may have to update configuration files, environment variables, and scripts to reflect the new file locations. Careful testing will be required after migration has completed to ensure that the clients will be able to access the data. There are currently two options available for data migration: offline and online migration.


Figure 3-5 SAN File System data migration process (the client sees both the original and destination directories; metadata is created in the System Pool according to the policy rules while the file data is migrated into the User Pools)

3.10.1 Offline data migration


The SAN File System client includes a special data migration tool called migratedata. This is a transaction-based, restartable utility designed to migrate large quantities of data. The command operates offline: all applications accessing the data must be stopped, and no clients may access the data while migration is in progress. The migratedata command operates in three modes: plan, migrate, and verify. In the plan phase, data is collected about the size of the data being migrated and the system resources available, to provide an estimate of the time required to migrate the data. The migrate phase actually copies the data into SAN File System, and the verify phase checks the integrity of the migrated data. Any file-based copy utility can also be used to migrate data to SAN File System (for example, cp, mv, xcopy, tar, and backup/restore programs).

You must make sure there is enough storage capacity to perform the migration, and also determine whether there is sufficient capacity to provide for short and mid-term growth. The space required during the migration operation itself will be at least double the space currently occupied in the source file system, because at the end of the migration, disk space is needed for both the original data and the new copy in SAN File System. After migration, the migrated files should occupy approximately the same amount of space as before. An exception is that migrated files from NTFS compressed drives will be expanded, and sparse files will become dense or full; these types of files will therefore require more space. Once the migration is validated, for example, by verifying the files and performing some application testing, the source data can be deleted and its disk space reused.

An important aspect of planning for migration is the migration time. Plan for approximately eight hours to migrate one terabyte; this includes 3-4 hours of hardware configuration and data verification, with the migration of the data itself taking approximately 3-4 hours per terabyte. You can also use the plan phase of migratedata to obtain a more accurate estimate. Careful planning and calculation of migration time is crucial, as user applications will be offline during the migration. Because migration is a complex task, we highly recommend engaging professional services to ensure proper planning and execution. IBM provides migration services that address these issues; contact your local IBM representative for more information. More detailed information about migrating data to SAN File System with the migratedata utility is in 9.2, Data migration on page 389.
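As a simple illustration of the file-based copy approach (not the migratedata utility itself), the following commands, run from an AIX or Linux client with the applications stopped, copy an existing directory tree into an attached fileset and then compare source and target. The paths are hypothetical examples only:

# cp -pR /data/app1 /sfs/sanfs/fileset1/
# diff -r /data/app1 /sfs/sanfs/fileset1/app1

On Windows clients, xcopy with options that preserve attributes and ACLs (for example, /E /K /O) provides a similar capability. In all cases, keep the applications offline until the copy has been verified.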


3.10.2 Online data migration


A service offering is available from IBM that will provide nondisruptive data migration to SAN File System at installation time.

Service description
The TotalStorage Services SAN File System Migration offering provides a nondisruptive online data migration at the file or block level to ensure data integrity. The TotalStorage Services team will provide architectural planning, along with execution at the byte level, to migrate data from current application servers into a virtualized SAN File System environment. The service includes:
- Architecture planning
- Installation and hardware planning
- Software installation
- Initial file system comparisons and synchronizations
- Implementing a calculated replication schedule
Contact your IBM service representative for more details of this offering or go to the following Web site:
http://www.storage.ibm.com/services/software.html

3.11 Implementation services for SAN File System


An IBM service offering for implementing the SAN File System is available to help you introduce SAN File System into your IT environment.

Service description
The IBM Implementation service offering for SAN File System provides planning, installation, configuration, and verification of SAN File System solutions. The service includes:
- Pre-install planning session
- Skills transfer
- SAN File System MDS installation
- Assistance with the LUN configuration on back-end storage for SAN File System
- Storage pool and fileset configuration
- Master Console installation and configuration
- Client installation
- Optional LDAP installation

Benefits
IBM has years of experience with providing Storage Virtualization solutions. The key benefit is skills transfer from IBM Specialists to client personnel during the installation and configuration phase. This offering also helps clients to manage and focus resources on day-to-day operations. Contact your service representative for more details of this offering or go to the following Web site:
http://www.storage.ibm.com/services/software.html


3.12 SAN File System sizing guide


The purpose of this section is to provide guidance for sizing a SAN File System solution. The SAN File System provides a global namespace to a host or client, which appears as a local file system. The data contents for all the file objects in the SAN File System are directly accessible to all clients over the SAN. The metadata is served by the Metadata servers over an IP network that connects the client(s) to the SAN File System cluster. This architecture is therefore designed to give near-local file system performance, while providing the availability, scalability, and flexibility offered by a SAN.

3.12.1 Assumptions
We do not specifically address sizing of the SAN and fabrics. Sufficient Fibre Channel bandwidth must be available to support the current application workload. SAN bandwidth and topology should be treated as a separate exercise using existing best practices; we assume that this exercise has been completed and that SAN performance is satisfactory. Since the SAN File System engines have 2 Gbps HBAs, we recommend using 2 Gbps connections in all the switches and clients. We also assume that the IP network connecting the clients and the Metadata servers has sufficient bandwidth and sufficiently low latency.

3.12.2 IP network sizing


The network connecting the clients and the MDSs should be at least 100 Mbps, with 1 Gbps preferred. Standard network analyzers and network performance tools (for example, netstat) can be used to measure network utilization and traffic.
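For example, a quick first look at the interface carrying metadata traffic can be taken with netstat, which is available on the MDSs and on AIX and Linux clients; the interface names and counters will of course depend on your environment:

# netstat -i

Sustained utilization close to the link speed, or any error counts on that interface during peak periods, is a sign that the metadata network needs attention; a dedicated network analyzer will give a more complete picture.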

3.12.3 Storage sizing


SAN File System is a journaling or logging file system, and hence the Metadata servers need to perform logging operations periodically to preserve file system integrity. This logging is done in the System Pool; therefore, it is necessary to guarantee sufficient bandwidth and low latency to these LUNs to get overall good performance, especially during peak metadata transaction periods. You should use the most robust RAID configurations, such as RAID 5 with write back caching enabled for the System Pool LUNs.

Size of the System Pool


Another important aspect of sizing is to be able to estimate the amount of space required to set up, populate, and deploy a SAN File System installation. This involves estimating the volume of metadata that will be stored and served by the Metadata servers. In general, since SAN File System is more scalable in this aspect and can support heterogeneous clients, the metadata space overhead should be marginally higher than for most local file systems. The rule of thumb for generic local file systems (those without SAN File System) is that they require approximately 3 MB for every 100 MB of actual data for metadata, which is about 3%. SAN File System will require approximately 5% for metadata. This number is typically proportional to the number of populated objects (files, directories, symbolic links, and so on) in SAN File System. So both factors (total user space and number of objects) should be considered in sizing the metadata space requirement. For example, a small number of large files would have less metadata than a large number of small files.
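As a simple worked example of this rule of thumb (the numbers are illustrative only): if the clients will store 2 TB (2048 GB) of user data, plan for roughly 5% of that, or about 100 GB, of metadata space in the System Pool. If the same 2 TB consists of millions of small files, size toward the higher side of the estimate; if it is a few thousand very large files, the actual metadata consumption will be well below it.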


However, the minimum recommended size for a system volume is 2 GB. This is because SAN File System has been designed to work with large amounts of data, and therefore testing has been targeted on system volumes of at least this size. Using the 5% rule, this would give a minimum global namespace of 40 GB. Important: Do not allow the System Pool to fill up. Alerts are provided to monitor it. If spare LUNs are available to the MDS, the System Pool can be expanded without disruption.

Size of the User Pools


Initially, the User Pools would be of similar size to the local data space that they replaced. Some reduction in size will be achieved because of the separation of metadata. However, when migrating data into SAN File System, be aware that files in NTFS compressed drives will be uncompressed, and sparse files (if used) will be made dense or expanded. In addition, space needs to be considered for FlashCopy images. Therefore, you will want a free space margin in your User Pools. If spare LUNs are available to the clients, the User Pools can be expanded without disruption.

3.12.4 SAN File System sizing


The chief metric for SAN File System sizing is the number of MDSs that will be required. The data access method for SAN File System is critical to understanding how many MDSs are required, since one of the primary factors for this count is the number of metadata transactions per second. When a SAN File System client accesses a file for the first time, it sends a request over the LAN to the SAN File System cluster. The metadata is returned to the client. All reads and writes of the user data go between the client and the storage device, directly over the SAN, as shown in Figure 3-6 on page 93. The client also caches metadata locally, meaning that subsequent opens for the same file do not require a request to be sent to the MDS. Therefore, since each MDS is only involved in metadata access, not actual file data access, the mix and rate of metadata transactions is a key factor. This mix and rate will clearly vary according to the particular client workload; therefore, a pilot under specific application conditions may be the best method to get an accurate measure.


Figure 3-6 SAN File System data flow (metadata requests travel over the TCP/IP LAN between the clients and the Metadata cluster, whose metadata resides in the System Pool; file data moves directly between the clients and the User Pools over the SAN fabric)

Other parameters affecting loading and sizing of the SAN File System include:
- The number and mix of file system objects (for example, files, directories, symbolic links, and so on) that would be involved in the combined workload as seen by the MDS cluster.
- The number of filesets those file system objects are partitioned into.
- The size and mix of the objects and filesets that each client would be expected to operate on with their respective applications.
- For workload distribution purposes, there should be at least one fileset assigned to each engine. This implies that there should be at least as many filesets as there are engines in the cluster, unless it is desired to have a spare idle MDS in the cluster, for example, for availability reasons. One subordinate MDS should have some spare capacity, so that it can support takeover of other filesets in case of a hard failure of an engine.
- The mix of metadata operations affects the maximum load. For example, file create operations may take up to twice as long as a file open.
- The typical file operations a client application would generate. For example, is it primarily read-only, or does it write a lot of new files?
- The impact of multi-client sharing of SAN File System file system objects. This will generate more metadata traffic, particularly if the file is shared heterogeneously.

Collecting and analyzing this data is a difficult exercise and requires considerable expertise with the application under consideration. Performance analysis should be based on peak application workloads rather than average workloads. File operation profiles for many well-known and standard workload classifications can be used to estimate this information; your IBM representative can assist with the sizing of the SAN File System.


SAN File System clients


The cache plays an important role in SAN File System client operation, and it essentially operates like any other least recently used object cache. As with any cache, the larger the cache, the better the performance, while a large working set size gives the potential for lower performance. Applications that use few file system objects and are I/O intensive over a small working set tend to have the ideal cache footprint, and hence could potentially perform best with SAN File System, that is, very close to a local file system.

Various SAN File System clients may implement varying amounts of memory to be used for caching the metadata. In general, larger amounts of RAM on the client could improve performance, as more client caching can be done. Consider the impact of multi-client sharing of objects in the SAN File System, especially if the sharing involves heterogeneous clients. In general, the more clients that are sharing objects, the higher the potential for metadata transactions. An example of how object transactions within the SAN File System are done is shown in Figure 3-7.

Note: The data cache and metadata cache are on the SAN File System client.

Figure 3-7 Typical data and metadata flow for a generic application with SAN File System (application file system operations, FOPS, are served by the metadata cache and data cache on the client; metadata cache misses generate MDS operations, MDS OPS, to the MDS servers, while the data path goes directly to the SAN)

SAN File System metadata workload


The number of MDSs needed is determined by the predicted metadata workload of the clients. Clients cache metadata locally, and the higher the hit rate on this cache (that is, the percentage of time when a metadata request can be satisfied from the cache without having to access the MDS), the lower the workload on the MDS, and the fewer MDSs required.


Testing has shown very high client metadata cache hit ratios, depending on the application workload. Therefore, many application operations that could require metadata services will be satisfied locally, without having to access the MDS itself. In other words, under normal working conditions, the volume of MDS operations per second (MDS OPS in Figure 3-7 on page 94) will be relatively low compared to the volume of file system operations per second (FOPS in Figure 3-7 on page 94) produced by a given workload of application operations. Please consult your IBM representative for support in sizing a SAN File System configuration.

3.13 Planning worksheets


Table 3-2, Table 3-3, Table 3-4, Table 3-5 on page 96, and Table 3-6 on page 96 are sample worksheets to fill in while planning the installation. You will find other worksheets in IBM TotalStorage SAN File System Planning Guide, GA27-4344.
Table 3-2 Network configuration of SAN File System

Item                       MDS 1              MDS 2
IP address for engine
IP address for RSAII
Host name
Subnetmask
Gateway
DNS address (optional)
Cluster name

Table 3-3 SAN File System drive letter for Windows clients

Item                   Windows Client 1    Windows Client 2    Windows Client 3
Host name
Desired drive letter

Table 3-4 SAN File System namespace directory point for UNIX-based clients

Item                     UNIX Client 1    UNIX Client 2    UNIX Client 3
Host name
Directory attach point


Table 3-5 Storage planning sheet

Pool type                      Storage device    Accessible clients    Volume_Names
System
User Default (Default_Pool)
User (Pool Name)
User (Pool Name)

Table 3-5 will help you plan out the zoning or LUN access, by specifying which clients should have access to which storage pool(s). Remember a client needs access to all volumes in a storage pool.
Table 3-6 Client to fileset and fileset to storage pool relationships planning sheet

Client name        Fileset name        Storage Pool name

Use Table 3-6 to relate your filesets, storage pools, and policies. First, decide which fileset(s) each client should have access to. Then decide which storage pool(s) each fileset should be able to store files in. You will use this information to plan your policies as well as to confirm that each client has access to the required volumes in the pools to support the required fileset access.

3.14 Deploying SAN File System into an existing SAN


If you are installing SAN File System into a separate, non-mission-critical SAN (for example, a test environment), no special consideration needs to be taken. However, this is not true when deploying SAN File System into a production SAN environment. Keep in mind that by introducing SAN File System into a SAN environment, the way you look at the storage devices in the environment changes completely. As you can see in Figure 3-8 on page 97, with SAN File System, you move from the original concept of having one or more particular LUNs exclusively assigned to a single host and introduce a common file space approach instead.

Therefore, your existing SAN configuration will be considerably affected, especially from the zoning and LUN management point of view. Consider initially deploying SAN File System in an isolated environment: do the basic setup, test your configuration, and once you are sure that SAN File System is running smoothly in isolation, start the rollout into the production environment.

Tip: If you do not have the facility to use a stand-alone, isolated SAN environment for the initial SAN File System setup, you can zone out the necessary storage resources in your production environment and use that zoned-out portion for your SAN File System setup.

Another major step in the SAN File System deployment phase is preparation for data migration. We cover this topic in more detail in 3.10, Data migration on page 88.

Figure 3-8 SAN File System changes the way we look at the storage in today's SANs (individual file systems per host are replaced by a single SAN File System namespace over the shared SAN storage)

3.15 Additional materials


Our aim in this chapter is to give you a basic overview of how to plan for SAN File System deployment; the full scale of this area goes well beyond the scope of this redbook. If you need additional information regarding planning and sizing a SAN File System environment, refer to IBM TotalStorage SAN File System Planning Guide, GA27-4344.


Chapter 4.

Pre-installation configuration
In this chapter, we discuss how to pre-configure your environment before installing SAN File System. We discuss the following topics:
- Security considerations
- Target Machine Validation Tool (TMVT)
- Back-end storage and zoning considerations
- SDD on clients and SAN File System MDS
- RDAC on clients and SAN File System MDS


4.1 Security considerations


As discussed in 3.5, Security on page 72, SAN File System requires administrator authentication and authorization whenever a GUI or CLI command is issued. Authentication means confirming that a valid user ID is being used. Authorization is determining the level of privileges (that is, permitted operations) that the user ID may perform. The administrator authentication and authorization function for SAN File System can be performed using either the native Linux operating system login process (local authentication) or an LDAP (Lightweight Directory Access Protocol) server. We will discuss both of these options.

When a SAN File System administrative request is issued, communication occurs to authenticate the supplied user ID and password and to verify that the user ID has the authority to issue that particular request. Each user ID is assigned an LDAP role or is a member of a UNIX group, which gives that user a specific level of access to administrative operations. The available and required privilege levels are Monitor, Operator, Backup, and Administrator. The IBM TotalStorage SAN File System Administrator's Guide and Reference, GA27-4317 lists, for each SAN File System CLI command and GUI function, the privilege required to execute it. Table 4-1 lists the SAN File System privilege levels.
Table 4-1 SAN File System user privilege levels

Role           Level                       Description
Monitor        Basic level of access       Can obtain basic status information about the cluster, display the message logs, display the rules in a policy, and list information regarding SAN File System elements such as storage pools, volumes, and filesets.
Backup         Monitor + backup access     Can perform backup and recovery tasks plus all operations available to the Monitor role.
Operator       Backup + additional access  Can perform day-to-day operations and tasks requiring frequent modifications, plus all operations available to the Backup and Monitor roles.
Administrator  Full access                 Has full, unrestricted access to all administrative operations.

After authenticating the user ID, the administrative server interacts with the MDS to process the request. The administrative agent caches all authenticated user roles for 600 seconds. You can clear the cache using the resetadmuser command.

4.1.1 Local authentication configuration


Local authentication supports native Linux services on the MDSs to verify SAN File System CLI/GUI users and their authority to perform administrative operations. Before configuring local authentication, make sure you have read the considerations in Some points to note when using the local authentication method on page 73. To use local authentication with SAN File System, define user IDs and groups (corresponding to the required SAN File System privilege levels) as local objects on each MDS by following these steps. Do this task on every MDS.


1. Define the following four groups. These correspond to the four SAN File System command roles:
# groupadd Administrator
# groupadd Operator
# groupadd Backup
# groupadd Monitor

You must use these exact group names and define all of the groups.

2. Decide which IDs you will require to administer SAN File System, and which administrative privilege (group) each should have. At a minimum, you need one ID in the Administrator group, but you can create as many as required, and there can be several IDs in the same group. Define the user IDs and passwords that will log in to the SAN File System CLI or GUI. When defining each user ID, associate it with the appropriate group. In this example, we define an ID itsoadm in the Administrator group, and an ID ITSOMon in the Monitor group:

# useradd -g Administrator itsoadm
# passwd itsoadm (specify a password when prompted)
# useradd -g Monitor ITSOMon
# passwd ITSOMon (specify a password when prompted)

UNIX user IDs, groups, and passwords are case sensitive. We recommend limiting UNIX user IDs to eight characters or fewer.

3. Once all UNIX groups and user IDs/passwords are defined on all MDSs, log in with each user ID to verify the ID and password, and to make sure a /home/userid directory structure exists. Create home directories if required (use the mkdir command). You can also list the contents of the /etc/passwd and /etc/group files to verify that the intended UNIX groups and user IDs were added to the MDSs.

You are now ready to use local authentication in the SAN File System cluster that you will install in Chapter 5, Installation and basic setup for SAN File System on page 125. You will specify the -noldap option when installing SAN File System, and you will select one local user ID/password combination in the Administrator group and specify it as the CLI_USER/CLI_PASSWD parameters when installing SAN File System (see step 4 on page 138 in 5.2.6, Install SAN File System cluster on page 138).
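As a quick cross-check of the groups and IDs created in the steps above, you can run the following standard Linux commands on each MDS (the user IDs shown are the ITSO examples; substitute your own):

# grep -E 'Administrator|Operator|Backup|Monitor' /etc/group
# id itsoadm
# id ITSOMon

The id output should show each user ID as a member of the intended group.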

4.1.2 LDAP and SAN File System considerations


The LDAP server can run on any LDAP-compliant software and operating system, but is not supported on either an MDS or the Master Console. At the time of writing, tested combinations included:
- IBM Directory Server V5.1 for Windows
- IBM Directory Server V5.1 for Linux
- OpenLDAP for Linux
- Microsoft Active Directory for Windows

Some basic configuration of the LDAP server is required for SAN File System to use it to authenticate SAN File System administrators. For example, SAN File System requires an authorized LDAP user name that can browse the LDAP tree where the users and roles are stored. The requirements to configure SAN File System for LDAP include:
- You must be able to create four objects under one parent distinguished name (DN), one for each SAN File System role.
- Each role object must contain an attribute that supports multiple DNs.
- You must be able to create an object for each SAN File System administrative user.
- Each administrative user object must contain an attribute that can be used to log in to the SAN File System console or CLI, and a userPassword attribute.
- If you are accessing the LDAP server over Secure Sockets Layer (SSL), a public SSL authorization certificate (key) must be included when the truststore is created during installation.

For our configuration, we used the LDAP configuration shown in Figure 4-1. This configuration is represented in an LDIF file and imported into the LDAP server. We show the LDIF file corresponding to this tree in Sample LDIF file used on page 587.

Figure 4-1 LDAP tree (under o=ITSO, the ou=Roles branch holds cn=Administrator, cn=Monitor, cn=Backup, and cn=Operator, and the ou=Users branch holds cn=ITSOAdmin Administrator, cn=ITSOMon Monitor, cn=ITSOBack Backup, and cn=ITSOOper Operator; cn=Manager is the directory administrator entry)
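To illustrate the structure that Figure 4-1 represents, the following is a small excerpt of the kind of LDIF used to define one user and one role in this tree. It is an illustrative fragment only; the complete file is shown in Sample LDIF file used on page 587, and the userPassword value here is a placeholder:

dn: cn=ITSOAdmin Administrator,ou=Users,o=ITSO
objectClass: inetOrgPerson
cn: ITSOAdmin Administrator
sn: Administrator
uid: ITSOAdmin
userPassword: password

dn: cn=Administrator,ou=Roles,o=ITSO
objectClass: organizationalRole
cn: Administrator
roleOccupant: cn=ITSOAdmin Administrator,ou=Users,o=ITSO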

Users
A User, in SAN File System and LDAP terms, is an entry in the LDAP database that corresponds to an administrator of the SAN File System. This is a person that will use the CLI (sfscli) or the SAN File System Console (GUI Interface) to administer the SAN File System. You can also use LDAP on your SAN File System clients to authenticate client users, and to coordinate a common user ID/group ID environment. For more detailed information about LDAP, see the IBM Redbook Understanding LDAP: Design and Implementation, SG24-4986.

Roles
SAN File System administrators must each have a certain role, which determines the scope of commands they are allowed to execute. In increasing order of permission, the four roles are Monitor, Backup, Operator, and Administrator. Each of the four roles must have an entry in the LDAP database. The roles are described in Table 4-1 on page 100. At least one user with the Administrator role is required. You can also choose to define other roles as appropriate for your organization.


All roles must have the same parent DN (distinguished name), and all roles must have the same objectClass. Examples are given in Appendix A, Installing IBM Directory Server and configuring for SAN File System on page 565 and Appendix B, Installing OpenLDAP and configuring for SAN File System on page 589.

Next, verify that LDAP has been set up correctly and that each MDS can talk to the LDAP server. This procedure assumes that Linux is already installed with TCP/IP configured on the MDS, as described in 5.2.2, Install software on each MDS engine on page 127. The ldapsearch command is used to send LDAP queries from the MDS to the LDAP server. Start a login session with each MDS (using the default root/password) and enter ldapsearch at the Linux prompt, specifying the IP address of the LDAP server and the parent DN (ITSO in our case), as shown in Example 4-1.
Example 4-1 Verifying that an MDS can contact the LDAP server NP28Node1:~ # ldapsearch -h 9.42.164.125 -x -b o=ITSO '(objectclass=*)' version: 2 # filter: (objectclass=*) # requesting: ALL # ITSO dn: o=ITSO objectClass: organization o: ITSO # Manager, ITSO dn: cn=Manager,o=ITSO objectClass: organizationalRole cn: Manager # Users, ITSO dn: ou=Users,o=ITSO objectClass: organizationalUnit ou: Users # ITSOAdmin Administrator, Users, ITSO dn: cn=ITSOAdmin Administrator,ou=Users,o=ITSO objectClass: inetOrgPerson cn: ITSOAdmin Administrator sn: Administrator uid: ITSOAdmin # ITSOMon Monitor, Users, ITSO dn: cn=ITSOMon Monitor,ou=Users,o=ITSO objectClass: inetOrgPerson cn: ITSOMon Monitor sn: Monitor uid: ITSOMon # ITSOBack Backup, Users, ITSO dn: cn=ITSOBack Backup,ou=Users,o=ITSO objectClass: inetOrgPerson cn: ITSOBack Backup sn: Backup uid: ITSOBack # ITSOOper Operator, Users, ITSO dn: cn=ITSOOper Operator,ou=Users,o=ITSO objectClass: inetOrgPerson cn: ITSOOper Operator sn: Operator uid: ITSOOper # Roles, ITSO dn: ou=Roles,o=ITSO objectClass: organizationalUnit ou: Roles # Administrator, Roles, ITSO


dn: cn=Administrator,ou=Roles,o=ITSO objectClass: organizationalRole cn: Administrator roleOccupant: cn=ITSOAdmin Administrator,ou=Users,o=ITSO # Monitor, Roles, ITSO dn: cn=Monitor,ou=Roles,o=ITSO objectClass: organizationalRole cn: Monitor roleOccupant: cn=ITSOMon Monitor,ou=Users,o=ITSO # Backup, Roles, ITSO dn: cn=Backup,ou=Roles,o=ITSO objectClass: organizationalRole cn: Backup roleOccupant: cn=ITSOBack Backup,ou=Users,o=ITSO # Operator, Roles, ITSO dn: cn=Operator,ou=Roles,o=ITSO objectClass: organizationalRole cn: Operator roleOccupant: cn=ITSOOper Operator,ou=Users,o=ITSO # search result search: 2 result: 0 Success # numResponses: 13 # numEntries: 12

You should perform ldapsearch on each MDS to ensure that they can all communicate with the LDAP server. To list the parameters that you can use with the ldapsearch command, use the -? option, as shown in Example 4-2.
Example 4-2 Using ldapsearch help NP28Node1:~ # ldapsearch -? ldapsearch: invalid option -- ? ldapsearch: unrecognized option -? usage: ldapsearch [options] [filter [attributes...]] where: filter RFC-2254 compliant LDAP search filter attributes whitespace-separated list of attribute descriptions which may include: 1.1 no attributes * all user attributes + all operational attributes Search options: -a deref one of never (default), always, search, or find -A retrieve attribute names only (no values) -b basedn base dn for search -F prefix URL prefix for files (default: "file:///tmp/) -l limit time limit (Max seconds) for search -L print responses in LDIFv1 format -LL print responses in LDIF format without comments -LLL print responses in LDIF format without comments and version -s scope one of base, one, or sub (search scope) -S attr sort the results by attribute `attr' -t write binary values to files in temporary directory -tt write all values to files in temporary directory -T path write files to directory specified by path (default: /tmp) -u include User Friendly entry names in the output -z limit size limit (in entries) for search


Common options: -d level set LDAP debugging level to `level' -D binddn bind DN -f file read operations from `file' -h host LDAP server -H URI LDAP Uniform Resource Identifier(s) -I use SASL Interactive mode -k use Kerberos authentication -K like -k, but do only step 1 of the Kerberos bind -M enable Manage DSA IT control (-MM to make critical) -n show what would be done but don't actually search -O props SASL security properties -p port port on LDAP server -P version protocol version (default: 3) -Q use SASL Quiet mode -R realm SASL realm -U authcid SASL authentication identity -v run in verbose mode (diagnostics to standard output) -w passwd bind passwd (for simple authentication) -W prompt for bind passwd -x Simple authentication -X authzid SASL authorization identity ("dn:<dn>" or "u:<user>") -Y mech SASL mechanism -Z Start TLS request (-ZZ to require successful response) NP28Node1:~ #

4.2 Target Machine Validation Tool (TMVT)


As described in 2.5.1, Metadata server on page 37, SAN File System requires specific hardware and pre-installed operating system software. In order to validate your SAN File System setup, a validation tool is included with the SAN File System software package. This tool is known as the Target Machine Validation Tool (TMVT) and is intended to verify that your hardware and software prerequisites have been met. In order to run TMVT, you must have already installed the SAN File System metadata software. TMVT is invoked as shown:
/usr/tank/server/bin/tmvt -r report_file_name

Examine the results in report_file_name, paying particular attention to areas flagged as non-compliant. Resolve those prerequisites, and then rerun the tool until TMVT runs without errors. Example 4-3 shows a partial listing from the TMVT report file. In this case, we had to check and install the RSA firmware.
Example 4-3 TMVT report file tank-mds1:~ # /usr/tank/server/bin/tmvt -r /usr/tank/admin/log/TMVT_MDS1_afterinstall -I=9.82.22.175 -U=USERID -P=PASW0RD HSTPV0009E The Hardware Components group fails to comply with the requirements of the recipe. HSTPV0007E Machine: tank-mds1 FAILS TO COMPLY with requirements of SAN File System release 2.2.2.91, build sv22_0001. tank-mds1:~ # cat /usr/tank/admin/log/TMVT_MDS1_afterinstall Hardware Components (14) Item Name Current Recipe Failed Hardware Component Checks (1) Remote Supervisor Adapter 2

MISSING

present


Passed Hardware Component Checks (13) Available RAM (Megabytes) 4039 Disk space in /var (Megabytes) 16386 TCP/IP enabled Ethernet controller Broadcom Corporation NetX Ethernet controller Broadcom Corporation NetX Ethernet controller Intel Corp. 82546EB Gigab Machine BIOS Level NA Machine BIOS Build GEE163AUS Machine Type/Model 41461RX FC HBA Manufacturer QLogic FC HBA Model QLA2342 FC HBA BIOS/Firmware Version 3.03.06 FC HBA Driver Version 7.03.00 Software Components (18) Item Name Correct Software Packages (18) xshared perl pango ncurses lsb-runtime libusb libstdc++ libgcc gtk2 gtk glibc glib2 glib expect ethtool bash atk aaa_base Current

4000 4096 enabled . . . . . . . . 4146* QLogic QLA23* . .

Recipe

4.2.0-270 5.8.0-201 1.0.4-148 5.2-402 1.2-105 0.1.5-179 3.2.2-54 3.2.2-54 2.0.6-154 1.2.10-463 2.2.5-233 2.0.6-47 1.2.10-326 5.34-192 1.7cvs-26 2.05b-50 1.0.3-66 2003.3.27-76

4.2.0-270 5.8.0-201 1.0.4-148 5.2-402 1.2-105 0.1.5-179 3.2.2-54 3.2.2-54 2.0.6-154 1.2.10-463 2.2.5-233 2.0.6-47 1.2.10-326 5.34-192 1.7cvs-26 2.05b-50 1.0.3-66 2003.3.27-76

Note: TMVT non-compliance does not strictly prevent the installation of the SAN File System. It identifies deviations from the recommended hardware and software platform.

4.3 SAN and zoning considerations


Here are some guidelines for preparing your SAN and zoning for use with SAN File System.

SAN considerations
Set up your switch configuration to maximize the number of physical LUNs addressable by the MDSs and to minimize or preferably eliminate sharing of fabrics with other non-SAN File System users whose usage may be disruptive to the SAN File System. Verify that the storage devices that are used by SAN File System are set up so that the appropriate storage LUNs are available to the SAN File System.


Zoning considerations
Because of the restriction on the number of LUNs an MDS can access (currently 126), limit the number of paths created through the fabrics from each Metadata server to the storage to two paths, one per host bus adapter (HBA) port. Some combination of zoning and physical fabric construction may be used to reduce or limit the number of physical paths. Each fabric should consist of one or more switches from the same vendor.

Keep in mind that no level of SAN zoning can totally protect SAN File System systems from SAN events caused by other, non-SAN File System systems connected to the same fabric. Therefore, your SAN File System fabric should be isolated from traffic and administrative contact with non-SAN File System systems. You can use VSANs to accomplish this fabric isolation.

When metadata and user storage reside on the same storage subsystem, you must ensure that the metadata storage is fully isolated and protected from access by client systems. With some subsystems, access to various LUNs is determined by connectivity to particular ports of the storage subsystem; with these, hard zoning of the attached switches may be sufficient to ensure isolation of the metadata storage from client systems. However, with other storage subsystems (such as ESS), LUN access is available from all ports, and LUN masking must be used to ensure that only the MDSs can access the metadata LUNs.

Important: SAN File System user and metadata LUNs should not share the same ESS 2105 Host Adapter ports.

SAN File System clients should be zoned or LUN masked such that each can see user storage only. Specify that the metadata storage or LUNs are to be configured in Linux mode (if the storage subsystem has operating system-specific operating modes). For more information about planning and implementing zoning, see the following manual and redbook:
- IBM TotalStorage SAN File System Planning Guide, GA27-4344
- IBM SAN Survival Guide, SG24-6143

An example of a lab setup is shown in Figure 4-2 on page 108. There are two MDSs, two xSeries Windows clients, and two pSeries AIX clients. Each system (MDS and client) has two FC HBAs. The port names are:
- NP28Node1, two ports: MDS1_P1 and MDS1_P2
- NP28Node2, two ports: MDS2_P1 and MDS2_P2
- SVC: two nodes, four ports per node: svcn1_p1, svcn1_p2, svcn1_p3, svcn1_p4, svcn2_p1, svcn2_p2, svcn2_p3, and svcn2_p4
- AIX1, two ports: AIX1_P1 and AIX1_P2
- AIX2, two ports: AIX2_P1 and AIX2_P2
- WIN2kup, two ports: wink2up_p1 and wink2up_p2
- WIN2kdn, two ports: wink2dn_p1 and wink2dn_p2


There are two pairs of switches: the first pair consists of Switch 11 and Switch 31, and the second pair consists of Switch 12 and Switch 32.

Figure 4-2 Example of setup (clients AIX1, AIX2, WIN2kup, and WIN2kdn and the two MDS engines NP28Node1 and NP28Node2 connect through the switch pairs Switch 11/Switch 31 and Switch 12/Switch 32 to the SVC with FAStT back-end storage)

The zoning was implemented as follows: Each client HBA is zoned to one port of each SVC node. Since there are four clients and two HBAs in each client, four client zones have been defined on each switch pair. One MDS zone is defined on the first switch pair, including one port from each MDS and one port from the first SVC node (three ports in total). One MDS zone is defined on the second switch pair, including one port from each MDS and one port from the second SVC node (three ports in total). The switch zoning using the above rules is shown in Example 4-4. For simplicity, the zoning for the SVC to its back-end storage has been omitted.
Example 4-4 Using zoneShow First switch pair: cfg: Redbook zone: AIX1_SVC 12,3 12,4 32,6 zone: AIX2_SVC 12,1 12,2 32,4 zone: MDS_SVC 32,9

[SVCN1_P2] [SVCN2_P2] [AIX1_P1] [SVCN1_P4] [SVCN2_P4] [AIX2_P1] [MDS1_P1]


zone:

zone:

Second cfg: zone:

zone:

zone:

zone:

zone:

32,8 [MDS1_P2] 12,3 [svcn1_p2] win2kdn_SVC 32,14 [win2kdn_p1] 12,1 [SVCN1_P4] 12,2 [SVCN2_P4] win2kup_SVC 12,4 [svcn2_p2] 12,3 [svcn1-p2] 32,13 [win2kup_p1] switch pair: Redbook AIX1_SVC 31,6 [AIX1_p2] 11,3 [svcn1_p1] 11,4 [svcn2_p1] AIX2_SVC 31,4 [AIX2_p2] 11,1 [svcn1_p3] 11,2 [svcn2_p3] MDS_SVC 31,9 [MDS1_P2] 31,8 [MDS2_P2] 11,4 [svcn2_p1] win2kup_SVC 31,13[win2kup_p2] 11,3 [svcn1_p1] 11,4 [svcn2_p1] wink2dn_SVC 11,1 [svcn1_p3] 11,2 [svcn2_p3] 31,14[win2kdn_p2]

LUN masking / storage partitioning was implemented as follows:
- One 3.5 GB LUN, mapped to both HBAs in both MDS nodes, to be used for the System Pool.
- Four LUNs, of size 4.5 GB, 4 GB, 3 GB, and 1 GB, assigned to the HBAs of all the client hosts, to be used for User Pools.

The setup described here simply shows how the fabric and back-end storage is configured and is an example only; there are many other possibilities. Planning rules and considerations are explained in Chapter 3, MDS system design, architecture, and planning issues on page 65.

4.4 Subsystem Device Driver


The Subsystem Device Driver (SDD) is a pseudo device driver designed to support multipath configuration environments in the IBM TotalStorage Enterprise Storage Server and the IBM TotalStorage SAN Volume Controller. SDD provides the following functions:
- Enhanced data availability
- Dynamic input/output (I/O) load balancing across multiple paths
- Automatic path failover protection
- Concurrent download of licensed internal code

This section describes how to install and verify SDD on the MDS, and on the SAN File System client platforms AIX and Windows.


Attention: The examples shown here for installing and configuring SDD may not exactly match the current required version of SDD for SAN File System; however, the instructions are similar. Please refer to the SAN File System support Web site to confirm the required SDD version.

4.4.1 Install and verify SDD on Windows 2000 client


The following hardware and software components are required to install SDD on a Windows 2000 client (the steps are very similar for a Windows 2003 client):
- One or more supported storage devices.
- Supported Host Bus Adapters (HBAs). For a complete list of HBAs supported by the back-end storage device, see:
http://www.ibm.com/servers/storage/support/config/hba/index.wss
- Windows 2000 operating system with Service Pack 2 or higher for SDD; however, SAN File System requires Service Pack 4.
- Approximately 1 MB of space on the Windows 2000 system drive.

ESS devices are configured as IBM 2105xxx (where xxx is the ESS model number), SVC devices are configured as 2145, and DS6000/DS8000 devices are configured as IBM 2107.

Install SDD on Windows 2000 client


Download the Windows 2000 SDD install package from the following Web site:
http://www.ibm.com/support/dlsearch.wss?rs=540&tc=ST52G7&dc=D430

1. Run setup.exe from the download directory and accept the defaults during the installation.

Tip: If you have previously installed V1.3.1.1 (or earlier) of SDD, you will see an Upgrade? prompt. Answer Yes to continue the installation.

2. At the end of the process, you will be prompted to reboot now or later. A reboot is required to complete the installation.

3. After the reboot, the Start menu will include a Subsystem Device Driver entry containing the following selections:
- Subsystem Device Driver management
- SDD Technical Support Web site
- README

Verify SDD and the storage devices on Windows 2000


1. Verify that the disks are visible in Device Manager. Since we are using SVC, the disks are listed as 2145 SCSI disk devices, as in Figure 4-3 on page 111. (There are 16, representing the four paths for each of the four disks.)


Figure 4-3 Verify disks are seen as 2145 disk devices

2. To verify that SDD can see the devices, use the datapath query device command, as shown in Example 4-5.
Example 4-5 Verifying SDD on Windows 2000 C:\Program Files\IBM\Subsystem Device Driver>datapath query device Total Devices : 4

DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED SERIAL: 600507680185001B2000000000000002 ============================================================================ Path# Adapter/Hard Disk State Mode Select Errors 0 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 0 0 1 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 31 0 2 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 28 0 3 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 0 0 DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED SERIAL: 600507680185001B2000000000000003 ============================================================================ Path# Adapter/Hard Disk State Mode Select Errors 0 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 0 0 1 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 30 0 2 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 29 0 3 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 0 0


DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED SERIAL: 600507680185001B2000000000000001 ============================================================================ Path# Adapter/Hard Disk State Mode Select Errors 0 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 24 0 1 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 0 0 2 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 0 0 3 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 35 0 DEV#: 3 DEVICE NAME: Disk4 Part0 TYPE: 2145 POLICY: OPTIMIZED SERIAL: 600507680185001B2000000000000006 ============================================================================ Path# Adapter/Hard Disk State Mode Select Errors 0 Scsi Port2 Bus0/Disk4 Part0 OPEN NORMAL 0 0 1 Scsi Port2 Bus0/Disk4 Part0 OPEN NORMAL 24 0 2 Scsi Port3 Bus0/Disk4 Part0 OPEN NORMAL 35 0 3 Scsi Port3 Bus0/Disk4 Part0 OPEN NORMAL 0 0

The actual devices are shown under the DEVICE NAME heading, in this case Disk1, Disk2, Disk3, and Disk4. Note that there are four paths displayed for each disk, as the SVCs have been configured with four paths to each LUN.

3. Finally, check that both FC adapters have been correctly configured to use SDD. Use the datapath query adapter command, as shown in Example 4-6.
Example 4-6 Display information about HBAs that is currently configured for SDD C:\Program Files\IBM\Subsystem Device Driver>datapath query adapter Active Adapters :2 Adpt# Adapter Name State Mode Select Errors Paths 0 Scsi Port2 Bus0 NORMAL ACTIVE 109 0 8 1 Scsi Port3 Bus0 NORMAL ACTIVE 127 0 8

Active 8 8

In this example, the two HBAs have been installed and successfully configured for SDD. You have now successfully installed and verified SDD on a Windows 2000 client.

4.4.2 Install and verify SDD on an AIX client


Before installing SDD on an AIX client, determine the installation package that is appropriate for your environment. SAN File System is supported (at the time of writing) on AIX 5L Version 5.1, Version 5.2, and Version 5.3. Download the appropriate package for your AIX version from the following Web site:
http://www.ibm.com/support/dlsearch.wss?rs=540&tc=ST52G7&dc=D430

The prerequisites for installing SDD on AIX are:
- You must have root access.
- The following procedures assume that SDD will be used to access all single-path and multipath devices.
- If installing an older version of SDD, first remove any previously installed, newer version of SDD from your client.
- Make sure that your HBAs are installed by using lsdev -Cc adapter | grep fc. The output should be similar to Example 4-7 on page 113, which shows two HBAs: fcs0 and fcs1.


Example 4-7 Make sure FC adapter is installed fcs0 fcs1 Available 20-58 Available 20-60 FC Adapter FC Adapter

Note: In certain circumstances, when upgrading from a previous version of SDD, you may see the following error message during installation:
Error, volume group configuration may not be saved completely. Failure occurred during pre_rm. Failure occurred during rminstal. Finished processing all filesets. (Total time: 16 secs).

To correct this, unmount all file systems belonging to SDD volume groups and vary off those volume groups. See the SDD manual and README file for more information.

Install SDD on AIX client


We will use SMIT to install the SDD driver: 1. Use smitty install_update and select Install Software. In the INPUT device field, enter the directory where the SDD package was saved. The included packages will be displayed, as in Example 4-8.
Example 4-8 Install and update software Install and Update Software by Package Name (includes devices and printers) Tylqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqk Prx Select Software to Install x x x x Move cursor to desired item and press F7. Use arrow keys to scroll. x * x ONE OR MORE items can be selected. x+ x Press Enter AFTER making all selections. x x x x devices.sdd.43 ALL x x 1.5.1.0 IBM Subsystem Device Driver for AIX V433 x x x x devices.sdd.51 ALL x x + 1.5.1.0 IBM Subsystem Device Driver for AIX V51 x x [BOTTOM] x x x x F1=Help F2=Refresh F3=Cancel x F1x F7=Select F8=Image F10=Exit x F5x Enter=Do /=Find n=Find Next x F9mqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqj

2. Select the required SDD level depending on the level of AIX that you are running (devices.sdd.51 in our case). The installation will complete.

3. If you are using SVC as a front end to SAN File System user storage, you also need to install the 2145 component for SDD, called AIX Attachment Scripts for SVC. This component can be found at the SVC support site:
http://www.ibm.com/servers/storage/support/virtual/2145.html


Use smitty install_update and select Install Software. In the INPUT device field, enter the directory where the ibm2145 package was saved; the included packages will be displayed, as in Example 4-9.
Example 4-9 Install 2145 component for SDD Install and Update Software by Package Name (includes devices and printers) Type or select a value for the entry field. Press Enter AFTER making all desired changes. lqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqk x Select Software to Install x * x x+ x Move cursor to desired item and press F7. Use arrow keys to scroll. x x ONE OR MORE items can be selected. x x Press Enter AFTER making all selections. x x x x #--------------------------------------------------------------------- x x # x x # KEY: x x # @ = Already installed x x # x x #--------------------------------------------------------------------- x x x x ibm2145.rte ALL x x 4.3.2002.1111 IBM 2145 TotalStorage SAN Volume Controller x x x x F1=Help F2=Refresh F3=Cancel x F1x F7=Select F8=Image F10=Exit x F5x Enter=Do /=Find n=Find Next x F9mqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqj

4. Select ibm2145.rte and press Enter to install. 5. Verify that SDD has installed successfully using lslpp -l *sdd*, as in Example 4-10.
Example 4-10 Verify that SDD has been installed root@aix2:/# lslpp -l '*sdd*' Fileset Level State Description ---------------------------------------------------------------------------Path: /usr/lib/objrepos devices.sdd.51.rte 1.5.1.0 COMMITTED IBM Subsystem Device Driver for AIX V51 Path: /etc/objrepos devices.sdd.51.rte

1.5.1.0 COMMITTED

IBM Subsystem Device Driver for AIX V51

Note: You do not need to reboot the pSeries, even though the installation message indicates this. SDD on your AIX client platform has now been installed, and you are ready to configure SDD. Tip: For AIX 5L Version 5.1 and AIX 5L Version 5.2, the published limitation on one system is 10,000 devices. The combined number of hdisk and vpath devices should not exceed the number of devices that AIX supports. In a multipath environment, because each path to a disk creates an hdisk, the total number of disks being configured can be reduced by the number of paths to each disk.


Configure and verify SDD for the AIX client


Before you configure SDD, ensure that:
- The supported storage devices are operational.
- The supported storage device hdisks are configured correctly on the AIX host system.
- The supported storage devices are configured.
- If you configure multiple paths to a supported storage device, all paths (hdisks) are in the Available state. Otherwise, some SDD devices will lose multipath capability.

To configure SDD on AIX:

1. Issue the lsdev -Cc disk | grep 2105 command to check the ESS device configuration, or issue the lsdev -Cc disk | grep "SAN Volume Controller" command to check the SVC. In our setup, we are using SVC, and the command output is shown in Example 4-11. We see 16 hdisks, which represent the four paths to each of the four disks.
Example 4-11 Check that you can see the SVC volumes root@aix2:/# lsdev -Cc disk |grep "SAN Volume Controller" hdisk2 Available 10-70-01 SAN Volume Controller Device hdisk3 Available 10-70-01 SAN Volume Controller Device hdisk4 Available 10-70-01 SAN Volume Controller Device hdisk5 Available 10-70-01 SAN Volume Controller Device hdisk6 Available 10-70-01 SAN Volume Controller Device hdisk7 Available 10-70-01 SAN Volume Controller Device hdisk8 Available 10-70-01 SAN Volume Controller Device hdisk9 Available 10-70-01 SAN Volume Controller Device hdisk10 Available 20-58-01 SAN Volume Controller Device hdisk11 Available 20-58-01 SAN Volume Controller Device hdisk12 Available 20-58-01 SAN Volume Controller Device hdisk13 Available 20-58-01 SAN Volume Controller Device hdisk14 Available 20-58-01 SAN Volume Controller Device hdisk15 Available 20-58-01 SAN Volume Controller Device hdisk16 Available 20-58-01 SAN Volume Controller Device hdisk17 Available 20-58-01 SAN Volume Controller Device

2. Verify that you can see the vpaths using lsdev -Cc disk | grep vpath (Example 4-12). Here we see the consolidated devices, representing the four actual disks.
Example 4-12 Verify that you can see the vpaths root@aix2:/# lsdev -Cc disk | grep "vpath*" vpath0 Available Data Path Optimizer vpath1 Available Data Path Optimizer vpath2 Available Data Path Optimizer vpath3 Available Data Path Optimizer Pseudo Pseudo Pseudo Pseudo Device Device Device Device Driver Driver Driver Driver


In our setup, four user data LUNs have been assigned to the clients. To verify that they have been correctly configured for SDD and correspond to the hdisk listing, use datapath query device (Example 4-13 shows how the command works).
Example 4-13 Verify that vpaths correlate to the hdisk root@aix2:/# datapath query device Total Devices : 4

DEV#: 0 DEVICE NAME: vpath0 TYPE: 2145 POLICY: Optimized SERIAL: 600507680185001B2000000000000003 ========================================================================== Path# Adapter/Hard Disk State Mode Select Errors 0 fscsi0/hdisk2 CLOSE NORMAL 0 0 1 fscsi0/hdisk6 CLOSE NORMAL 0 0 2 fscsi1/hdisk10 CLOSE NORMAL 0 0 3 fscsi1/hdisk14 CLOSE NORMAL 0 0 DEV#: 1 DEVICE NAME: vpath1 TYPE: 2145 POLICY: Optimized SERIAL: 600507680185001B2000000000000001 ========================================================================== Path# Adapter/Hard Disk State Mode Select Errors 0 fscsi0/hdisk3 CLOSE NORMAL 0 0 1 fscsi0/hdisk7 CLOSE NORMAL 0 0 2 fscsi1/hdisk11 CLOSE NORMAL 0 0 3 fscsi1/hdisk15 CLOSE NORMAL 0 0 DEV#: 2 DEVICE NAME: vpath2 TYPE: 2145 POLICY: Optimized SERIAL: 600507680185001B2000000000000002 ========================================================================== Path# Adapter/Hard Disk State Mode Select Errors 0 fscsi0/hdisk4 CLOSE NORMAL 0 0 1 fscsi0/hdisk8 CLOSE NORMAL 0 0 2 fscsi1/hdisk12 CLOSE NORMAL 0 0 3 fscsi1/hdisk16 CLOSE NORMAL 0 0 DEV#: 3 DEVICE NAME: vpath3 TYPE: 2145 POLICY: Optimized SERIAL: 600507680185001B2000000000000006 ========================================================================== Path# Adapter/Hard Disk State Mode Select Errors 0 fscsi0/hdisk5 CLOSE NORMAL 0 0 1 fscsi0/hdisk9 CLOSE NORMAL 0 0 2 fscsi1/hdisk13 CLOSE NORMAL 0 0 3 fscsi1/hdisk17 CLOSE NORMAL 0 0

In our setup, we assigned four SVC LUNs to the AIX client, using four paths to each SVC LUN. If your LUNs do not show up as expected, continue to the next steps to configure your disk devices to work with SDD. If the disk devices have been configured correctly, the SDD setup for AIX 5L Version 5.1 is complete.
3. If you have already created some ESS or SVC volume groups, vary off (deactivate) all active volume groups with ESS or SVC volumes by using the varyoffvg AIX command.
Attention: Before you vary off a volume group, unmount all file systems in that volume group. If some supported storage devices (hdisks) are used as physical volumes of an active volume group, and file systems of that volume group are mounted, you must unmount all file systems and vary off all active volume groups with supported storage device SDD disks in order to configure SDD vpath devices correctly.

4. Using smit devices, highlight Data Path Device and press Enter. The Data Path Device panel is displayed, as shown in Example 4-14.
Example 4-14 Data Path Device panel
                         Data Path Devices
Move cursor to desired item and press Enter.
  Display Data Path Device Configuration
  Display Data Path Device Status
  Display Data Path Device Adapter Status
  Define and Configure all Data Path Devices
  Add Paths to Available Data Path Devices
  Configure a Defined Data Path Device
  Remove a Data Path Device

5. Select Define and Configure All Data Path Devices. The configuration process begins. When complete, the output should look similar to Example 4-15.
Example 4-15 Devices configured
                        COMMAND STATUS
Command: OK            stdout: yes           stderr: no
Before command completion, additional instructions may appear below.
vpath0 Available  Data Path Optimizer Pseudo Device Driver
vpath1 Available  Data Path Optimizer Pseudo Device Driver
vpath2 Available  Data Path Optimizer Pseudo Device Driver
vpath3 Available  Data Path Optimizer Pseudo Device Driver

6. Exit smitty and then verify the SDD configuration, as described in steps 1 through 3 above.
7. Use the varyonvg command to vary on all deactivated supported storage device volume groups.
8. If you want to convert a supported storage device hdisk volume group to SDD vpath devices, you must run the hd2vp utility. SDD provides two conversion scripts, hd2vp and vp2hd. The hd2vp script converts a volume group from supported storage device hdisks to SDD vpaths, and vp2hd converts a volume group from SDD vpaths back to supported storage device hdisks. Use vp2hd if you want to configure the applications back to their original supported storage device hdisks, or if you want to remove SDD from your AIX client. For more information about these scripts, consult your SDD user guide; a brief usage sketch follows.
You have now successfully configured SDD for AIX 5L Version 5.1.
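As a minimal usage sketch of the conversion scripts (the volume group name datavg is hypothetical; follow the sequencing rules in your SDD user guide, which describes when the volume group must be varied on or off):
# hd2vp datavg          (convert volume group datavg from hdisk PVs to SDD vpath devices)
# vp2hd datavg          (convert back to hdisk devices, for example before removing SDD)
Each script takes the volume group name as its only argument and updates the physical volume definitions in place.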

4.4.3 Install and verify SDD on MDS


SDD is installed after the operating system has been upgraded and before SAN File System is installed. You can download SDD from the following Web site:
http://www.ibm.com/servers/storage/support/virtual/2145.html

Note: It is important that you verify the SDD level at the SDD Web site:
http://www.ibm.com/servers/storage/support/software/sdd/


Install SDD on MDS


1. Download the code and store it in the /usr/tank/packages/ directory. 2. Install the SDD package with the following command:
# rpm -Uvh /usr/tank/packages/IBMsdd-1.6.0.1-6.i686.ul1.rpm

3. Configure SDD to start during boot:


# chkconfig -a sdd 35

4. Start SDD:
# sdd start

Verify SDD on MDS


To verify that the MDS HBAs have been correctly configured for SDD, start a local session with each MDS (using default root/password) and enter datapath query adapter at the Linux prompt. Example 4-16 shows that two HBAs are installed in the MDS and are correctly recognized by SDD.
Example 4-16 Display information about HBAs that are currently configured for SDD
NP28Node1:~ # datapath query adapter
Active Adapters :2
Adpt#  Adapter Name    State    Mode      Select  Errors  Paths  Active
    0  Host2Channel0   NORMAL   ACTIVE      2778       0      5       5
    1  Host3Channel0   NORMAL   ACTIVE        25       0      5       5

Verify that you can display information about the devices currently assigned to the MDS, using datapath query device, as shown in Example 4-17. We see the expected output: one SVC device (vpatha), reachable over four SCSI paths. This will be used for the System Pool.
Example 4-17 Display information about devices that are currently configured for SDD mds1:~ # datapath query device DEV#: 0 DEVICE NAME: vpatha TYPE: 2145 POLICY: Optimized SERIAL: 600507680188801b2000000000000000 ============================================================================ Path# Adapter/Hard Disk State Mode Select Errors 0 Host2Channel0/sdb CLOSE NORMAL 0 0 1 Host2Channel0/sde CLOSE NORMAL 0 0 2 Host3Channel0/sdh CLOSE NORMAL 0 0 3 Host3Channel0/sdk CLOSE NORMAL 0 0

Once you have confirmed that SDD has correctly configured the HBAs and the disk devices, repeat the steps on the other MDS servers. Carefully note the serial number (SERIAL field) corresponding to each vpathx device on each MDS, because the mapping may not be the same on each MDS. When installing SAN File System, we need to specify at least one device name (/dev/rvpathx) to be configured as the first volume in the System Pool. This is specified for each MDS; therefore, it is vital to ensure that the correct device corresponding to the correct serial number is entered for each MDS (in this example, the device is vpatha, with serial number 600507680188801b2000000000000000, as shown in Example 4-17).
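One quick way to collect the vpath-to-serial mapping from every engine is to run datapath query device remotely and keep only the device and serial lines. This is only a sketch; the host names tank-mds3 and tank-mds4 are the engines used later in our lab setup, so substitute your own MDS names:
# for mds in tank-mds3 tank-mds4; do
>   echo "=== $mds ==="
>   ssh root@$mds "datapath query device | grep -E 'DEVICE NAME|SERIAL'"
> done
Compare the output engine by engine and record which vpath letter carries the metadata LUN serial number on each MDS.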


4.5 Redundant Disk Array Controller (RDAC)


The RDAC component contains a multipath driver and hot-add support for IBM TotalStorage DS4x00 (formerly FAStT) devices. It must be installed on each SAN File System MDS when using DS4x00 for system volumes.
Restriction: DS4800 is not supported for metadata (system) volumes at the time of writing.
RDAC can also optionally be installed on SAN File System clients to provide Fibre Channel fail-over functions for DS4x00 user volumes. Metadata LUNs (system volumes) must be in a separate DS4x00 partition from the LUNs to be used by other operating systems (for example, SAN File System clients). This requirement also safeguards against corruption of metadata by SAN File System clients; metadata should always be isolated from access by client systems. A DS4x00 data LUN can be shared by multiple SAN File System clients, provided that the clients are of one operating system type (homogeneous clients). LUNs within a DS4x00 partition can only be used by one operating system type.
See the Web site http://www.ibm.com/servers/storage/support/ds4x00 (substituting ds4300, ds4400, ds4500, or ds4800 for ds4x00, as appropriate) for the latest supported host adapters, device drivers, Linux kernel versions, and updated readme.
Attention: The examples shown here for installing and configuring RDAC may not exactly match the current required version of RDAC for SAN File System; however, the instructions are similar. Refer to the SAN File System support Web site to confirm the required RDAC version.

4.5.1 RDAC on Windows 2000 client


To install RDAC on Windows 2000, follow the instructions in the manual IBM TotalStorage FAStT Storage Manager Version 9 Installation and Support Guide for Intel-based Operating System Environments, GC26-7649. The steps are very similar for a Windows 2003 client. To verify that you have installed RDAC correctly, perform the following steps:
1. Select Start → Programs → Administrative Tools → Computer Management. The Computer Management window opens. Go to \System Tools\System Information\Software environment\drivers.
2. Scroll through the list of device drivers until you find rdacfltr.
3. Verify that the rdacfltr entry shows state type Running and status OK.
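If you prefer a command-line cross-check, the Windows service control tool can query the filter driver directly. This is only an optional alternative to the steps above, assuming the driver is registered under the name rdacfltr as shown there:
C:\> sc query rdacfltr
The STATE field in the output should report RUNNING when the filter driver has loaded correctly.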


4.5.2 RDAC on AIX client


This section describes how to check the current RDAC driver program driver version level, update the RDAC device driver, and verify that the RDAC update is complete.

Verifying the RDAC driver


The AIX RDAC driver files are not included on the DS4000 installation CD. Either install them from the AIX Operating System CD, if the correct version is included, or download them from the following Web site:
http://www.ibm.com/servers/storage/support/download.html

To install RDAC, follow the instructions in the manual IBM TotalStorage FAStT Storage Manager Version 9 Installation and Support Guide for AIX, HP-UX, Solaris, and Linux on POWER, GC26-7648. This manual also covers installation of RDAC on Solaris, which we do not cover here. Verify that the correct version of the software was successfully installed with the lslpp command:
# lslpp -ah devices.fcp.disk.array.rte

The output should look similar to Example 4-18.


Example 4-18 Check RDAC level on AIX
Rome:/ >lslpp -ah devices.fcp.disk.array.rte
  Fileset                     Level   Action   Status    Date      Time
  ----------------------------------------------------------------------------
Path: /usr/lib/objrepos
  devices.fcp.disk.array.rte
                            5.1.0.0   COMMIT   COMPLETE  12/04/03  13:33:03
                            5.1.0.0   APPLY    COMPLETE  12/04/03  13:33:03
                           5.1.0.54   COMMIT   COMPLETE  12/04/03  14:41:26
                           5.1.0.54   APPLY    COMPLETE  12/04/03  14:31:44

Path: /etc/objrepos
  devices.fcp.disk.array.rte
                            5.1.0.0   COMMIT   COMPLETE  12/04/03  13:33:05
                            5.1.0.0   APPLY    COMPLETE  12/04/03  13:33:05
                           5.1.0.54   COMMIT   COMPLETE  12/04/03  14:41:27
                           5.1.0.54   APPLY    COMPLETE  12/04/03  14:31:46
Rome:/ >

Configure the devices for the software changes to take effect by typing the following command:
# cfgmgr -v

Next, use the following command:


# lsdev -Cc disk

to see if the RDAC software recognizes the FAStT volumes, as shown in the following list:
- Each DS4300 (FAStT600) volume is recognized as a 1722 (600) Disk Array Device.
- Each DS4400 (FAStT700) volume is recognized as a 1742 (700) Disk Array Device.
- Each DS4500 (FAStT900) volume is recognized as a 1742-900 Disk Array Device.
Example 4-19 on page 121 shows the output of the lsdev command for a set of DS4500 (FAStT900) LUNs.

Example 4-19 Device listing for DS4500 LUNs
# lsdev -Cc disk
hdisk0  Available 10-88-00-8,0  16 Bit LVD SCSI Disk Drive
hdisk32 Available 31-08-01      1742-900 Disk Array Device
hdisk33 Available 91-08-01      1742-900 Disk Array Device
hdisk34 Available 31-08-01      1742-900 Disk Array Device
hdisk35 Available 91-08-01      1742-900 Disk Array Device

4.5.3 RDAC on MDS and Linux client


Download the Linux RDAC package from the following Web site:
http://www.ibm.com/support/docview.wss?rs=593&uid=psg1MIGR-54973&loc=en_US

The README file contains detailed installation instructions; we will just summarize the procedure here. It is similar for both the MDS (SUSE Linux) and the SAN File System client (Red Hat Linux). You will also find more information in the manual IBM TotalStorage FAStT Storage Manager Version 9 Installation and Support Guide for Intel-based Operating System Environments, GC26-7649. The Linux RDAC driver is released as a source-code gunzip compressed tar package. To unpack it, enter the following command at the Linux command prompt:
mds4:/tmp/rdac # tar -zxvf ibmrdac-linux-xx.xx.xx.xx.tar.gz

where xx.xx.xx.xx is the release version of the RDAC driver (09.00.a5.00 at the time of the writing of this redbook). The source files will uncompress to the linuxrdac directory. Attention: The Host server must have the non-fail-over Fibre Channel HBA device driver properly built and installed before the Linux RDAC driver installation. Refer to the FC HBA device driver README or the FC HBA User Guide for instructions on installing the non-fail-over version of the device driver. The driver source tree is included in the package if you need to build it from the source tree. To build and install the RDAC package, first perform the steps shown in Example 4-20. These will ensure synchronization between the RDAC driver and the running kernel. The output of these commands is omitted for brevity.
Example 4-20 Prepare the kernel source tree for the RDAC build
# cd /usr/src/linux
# make mrproper
# make cloneconfig
# make dep
# make -j 8 modules

Next, change to the linuxrdac directory (use cd /RDAC/linuxrdac) and remove the old driver modules in that directory. Type the following command:
make clean

Then, compile the driver modules and utilities by running:


make


The next step copies the driver modules to the kernel module tree and builds the new RAMdisk image (mpp.img) which includes the RDAC driver modules and all driver modules that are needed during boot time. Run the following command:
make install

After RDAC installation, we must verify that the RDAC driver has discovered the available physical LUNs and created virtual LUNs for them. Use the following command:
ls -lR /proc/mpp

We can see the output of this command in Example 4-21.


Example 4-21 mpp output command
mds3:~ # ls -lR /proc/mpp
/proc/mpp:
total 0
dr-xr-xr-x    3 root  root          0 May 14 15:42 .
dr-xr-xr-x  123 root  root          0 May 14 15:38 ..
dr-xr-xr-x    4 root  root          0 May 14 15:42 H3_FastT600_SISRack
crwxrwxrwx    1 root  root   254,   0 May 14 15:42 mppVBusNode

/proc/mpp/H3_FastT600_SISRack:
total 0
dr-xr-xr-x    4 root  root          0 May 14 15:42 .
dr-xr-xr-x    3 root  root          0 May 14 15:42 ..
dr-xr-xr-x    3 root  root          0 May 14 15:42 controllerA
dr-xr-xr-x    3 root  root          0 May 14 15:42 controllerB
-rw-r--r--    1 root  root          0 May 14 15:42 virtualLun0

/proc/mpp/H3_FastT600_SISRack/controllerA:
total 0
dr-xr-xr-x    3 root  root          0 May 14 15:42 .
dr-xr-xr-x    4 root  root          0 May 14 15:42 ..
dr-xr-xr-x    2 root  root          0 May 14 15:42 qla2300_h3c0t0

/proc/mpp/H3_FastT600_SISRack/controllerA/qla2300_h3c0t0:
total 0
dr-xr-xr-x    2 root  root          0 May 14 15:42 .
dr-xr-xr-x    3 root  root          0 May 14 15:42 ..
-rw-r--r--    1 root  root          0 May 14 15:42 LUN0
-rw-r--r--    1 root  root          0 May 14 15:42 UTM_LUN31

/proc/mpp/H3_FastT600_SISRack/controllerB:
total 0
dr-xr-xr-x    3 root  root          0 May 14 15:42 .
dr-xr-xr-x    4 root  root          0 May 14 15:42 ..
dr-xr-xr-x    2 root  root          0 May 14 15:42 qla2300_h2c0t0

/proc/mpp/H3_FastT600_SISRack/controllerB/qla2300_h2c0t0:
total 0
dr-xr-xr-x    2 root  root          0 May 14 15:42 .
dr-xr-xr-x    3 root  root          0 May 14 15:42 ..
-rw-r--r--    1 root  root          0 May 14 15:42 LUN0
-rw-r--r--    1 root  root          0 May 14 15:42 UTM_LUN31
mds3:~ #

You can now issue I/Os to the LUNs.


For GRUB, edit the boot loader configuration file (/boot/grub/menu.lst on SUSE Linux, or /etc/grub.conf on Red Hat) and copy the original configuration to a new entry at the beginning of the boot list, changing the new entry's initrd image to mpp.img. It should look something like Example 4-22 (note that it may vary with a different system configuration).
Example 4-22 File grub.conf editing
mds4:/tmp/rdac/linuxrdac # vi /boot/grub/menu.lst
"/boot/grub/menu.lst" 14L, 407C
gfxmenu (hd0,0)/boot/message
color white/blue black/light-gray
default 0
timeout 8
title linux with mpp support
    kernel (hd0,0)/boot/vmlinuz root=/dev/sda1 acpi=oldboot
    initrd (hd0,0)/boot/mpp.img
title linux
    kernel (hd0,0)/boot/vmlinuz root=/dev/sda1 acpi=oldboot
    initrd (hd0,0)/boot/initrd
title floppy
    root (fd0)
    chainloader +1
title failsafe
    kernel (hd0,0)/boot/vmlinuz.shipped root=/dev/sda1 ide=nodma apm=off acpi=off vga=normal nosmp disableapic maxcpus=0 3
    initrd (hd0,0)/boot/initrd.shipped
mds4:/tmp/rdac/linuxrdac #

If you make any changes to the MPP configuration file (/etc/mpp.conf) or persistent binding file (/var/mpp/devicemapping), run mppUpdate to re-build the RAMdisk image to include the new file so that the new configuration file (or persistent binding file) can be used on the next system reboot.
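As a minimal sketch of that rebuild sequence (the reboot is only needed when you want the rebuilt RAMdisk image to take effect):
# vi /etc/mpp.conf             (or /var/mpp/devicemapping, as appropriate)
# mppUpdate                    (rebuilds mpp.img to include the changed file)
# shutdown -r now              (the new image is used on the next boot)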


The command fdisk -l, shown in Example 4-23, displays two DS4500 LUNs (sdb and sdc) in addition to the OS disk (sda). Note that if you install the Storage Manager runtime and Storage Manager utilities, you can also use commands such as SMdevices to list the RDAC devices. These packages are available at the following Web site:
http://www.ibm.com/pc/support/site.wss/document.do?lndocid=MIGR-60591
Example 4-23 fdisk -l output command
mds1:~ # fdisk -l

Disk /dev/sda: 255 heads, 63 sectors, 4420 cylinders
Units = cylinders of 16065 * 512 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sda1   *         1      1267  10177146   83  Linux
/dev/sda2          1268      1529   2104515   82  Linux swap
/dev/sda3          1530      3618  16779892+  83  Linux

Disk /dev/sdb: 255 heads, 63 sectors, 30335 cylinders
Units = cylinders of 16065 * 512 bytes

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/sdc: 255 heads, 63 sectors, 30335 cylinders
Units = cylinders of 16065 * 512 bytes

Disk /dev/sdc doesn't contain a valid partition table
mds1:~ #


Chapter 5. Installation and basic setup for SAN File System


In this chapter, we cover the following topics:
- MDS installation
- MDS configuration
- Client installation (AIX, Windows, Linux, and Solaris)
- Information about using local authentication with SAN File System
- Master Console installation
- Remote access setup (PuTTY/ssh)
Important: The installation package versions given in this chapter, and in Chapter 6, Upgrading SAN File System to Version 2.2.2 on page 229, were correct at the time of writing, but may have changed by the time of publication.


5.1 Installation process overview


The following broad steps are required to install the IBM TotalStorage SAN File System:
- Make sure a supported SAN exists and the required LUNs are available. Details are provided in Chapter 3, MDS system design, architecture, and planning issues on page 65 and Chapter 4, Pre-installation configuration on page 99.
- Perform operating system installation on each SAN File System engine, if necessary.
- Perform and verify network settings for each SAN File System engine and test.
- Check the disk configuration.
- Install the cluster from the master SAN File System MDS.
- Create pools, volumes, filesets, and policies.
- Install client software for AIX, Windows, Solaris, and Linux.
- Perform the post-installation checks to complete the installation.
Important: Follow the installation instructions in this chapter carefully, together with the information in IBM TotalStorage SAN File System Installation and Configuration Guide, GA27-4316. The most common cause of unsuccessful installations is skipping steps or executing them incorrectly.

5.2 SAN File System MDS installation


Here are the overview steps to install and configure each SAN File System MDS and the SAN File System cluster. Perform these steps on each server in the cluster. They will each be described in detail:
1. Check these pre-installation settings and configurations on each MDS engine:
   - RSA-II adapter network settings
   - Hard disk mirroring
2. Install software on each MDS engine:
   a. Install Linux OS as a prerequisite.
   b. Set date and time.
   c. Set IP addresses.
   d. Apply OS service pack.
   e. Perform network configuration, including Ethernet bonding.
   f. Install other software prerequisite packages.
3. Configure the cluster; these steps are performed once by the installation script on the master MDS:
   a. Install SAN File System packages on each MDS.
   b. Set up the master MDS.
   c. Set up all the subordinate MDS.
   d. Set up the cluster on the master MDS.


5.2.1 Pre-installation setting and configurations on each MDS


The following steps need to be done on each MDS in the cluster.

Verifying boot drive and setting RSA II IP configuration


Perform the following steps:
1. Boot the engine (no CD in the drive) and press F1 to enter the Configuration/Setup Utility.
2. Select Start Options → Startup Sequence Options and verify that First Startup Device is set to CD-ROM (you can cycle through the selections using the up and down arrow keys).
3. Keep pressing Esc until you return to the main BIOS screen. Select Advanced Setup → RSA II Settings.
4. Select Use Static IP, and set the IP address, subnet, and gateway. We used the following addresses for the RSA II cards in our cluster of two MDS:
   - Master MDS: 9.82.22.173
   - Secondary MDS: 9.82.22.174
5. Select Linux OS rather than Other OS.
6. Select Save Settings and Reboot RSA II. Remove and re-insert the power cables to reboot the RSA II card and machine.
7. After you have completed the SAN File System installation, verify correct RSA operation, following the steps given in 13.5.1, Validating the RSA configuration on page 538.

Hard disk drive mirroring setting and verification


Each MDS in the SAN File System is required to have two internal hard disk drives set up for high availability, using RAID 1. Consult your system documentation on how to set this up; typically, this involves accessing the hardware menus at boot time. Since we were using xSeries 345 servers, the process to follow to check or set up disk mirroring (RAID-1) is as follows: 1. Boot the engine (no CD in the drive). Hit Ctrl-C when prompted to access the LSI Configuration. Select the first device by pressing Enter. Select Mirroring properties. 2. Verify that the first drive is set to Primary, the second is set to Secondary, and press Enter. 3. A screen will appear explaining what actions are being taken. 4. Press Esc to exit. The system will continue to sync the drives.

5.2.2 Install software on each MDS engine


There are several steps for installing the basic software on each MDS. These are covered in detail in IBM TotalStorage SAN File System Installation and Configuration Guide, GA27-4316. One of the following operating systems can be installed on each MDS. Note that although there may be newer service packs or kernel updates available, the SAN File System software has been tested with and only supports these versions. Each MDS in the cluster must be at the same Linux kernel level.


- SUSE Linux Enterprise Server (SLES) 8, with Service Pack 4, running the 2.4.21-278-smp kernel. (SLES 8 is also referred to as United Linux.) Service Pack 3 must be installed before installing Service Pack 4. The required kernel level is included with the Service Pack 4 GA distribution.
- SUSE Linux Enterprise Server (SLES) 9, with Service Pack 1, running the 2.6.5-7.151-bigsmp kernel and kernel source. You can obtain the required kernel and source packages from your SUSE Maintenance Web service.
Note: Our example uses the SLES 8 Linux version. SAN File System is also supported on MDS running SLES 9. Check the IBM TotalStorage SAN File System Installation and Configuration Guide, GA27-4316, for more details on differences between installing the different SUSE versions.

5.2.3 SUSE Linux 8 installation


In addition to the software provided with the SAN File System, you will need to obtain the following software if it is not already installed on your MDS. This software is not supplied with the SAN File System CD set:
- SUSE Linux Enterprise Server 8.0. You will need a licensed copy of Linux Enterprise Server 8.0 for each of the MDS engines in the cluster. For more information about obtaining Enterprise Server, visit http://www.suse.com.
- United Linux Service Pack 3 and 4. For more information about obtaining the United Linux Service Packs, visit http://www.suse.com. We cover the installation of the Service Pack in 5.2.5, Install prerequisite software on the MDS on page 135. Linux kernel version 2.4.21-278 is provided with Service Pack 4.
- A driver for your FC HBA. If using a QLogic HBA, you can download the driver from http://www.qlogic.com/support/ibm_page.html or http://www.ibm.com/servers/storage/support/disk/ds4500/hbadrivers1.html. Driver level 7.03.00 is the currently required level for SUSE 8 (for SUSE 9, it is 8.00.00). For SUSE 8, it should be supplied and installed with the Service Pack; for SUSE 9, you can download and install it from the Web sites listed above, following the instructions there.
Post-installation steps include:
- Set date and time.
- Check TCP/IP network configuration.
- Configure redundant Ethernet support.
Install SUSE Linux, following the instructions in IBM TotalStorage SAN File System Installation and Configuration Guide, GA27-4316.
Tips: Advanced Linux users may choose to create additional partitions from those described in the installation guide. If so, make sure that there is sufficient space available in the root and /var file systems to install the SAN File System software. When configuring the network interfaces, make sure that the correct Ethernet interface is chosen as the primary network interface. If you have a standard Gigabit Ethernet Controller and the optional Broadcom Ethernet adapter, the installation process may select the Broadcom adapter as the primary. Make sure to select the correct interface that you will be using. Remember the root password that you set during installation!

Tips: If using SLES 9, you require United Linux Service Pack 1, QLogic HBA device driver 8.00.00, and a new kernel version 2.6.5-7.151. See IBM TotalStorage SAN File System Installation and Configuration Guide, GA27-4316 for installation instructions for SLES 9, since these steps are different, including for Ethernet bonding.
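Referring to the partitioning tip above, a quick way to confirm that the relevant file systems have room before you start installing is sketched below (the actual space requirements are in the installation guide, so treat this only as a sanity check):
# df -h / /var          (free space in the root and /var file systems)
# df -h /usr            (installation packages are staged under /usr/tank/packages)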

Apply United Linux Service Pack 3 and 4


Apply the updates only after performing an initial installation of SUSE LINUX Enterprise Server 8.0 on the MDS. 1. Insert the United Linux Service Pack CD-ROM into the CD-ROM drive. 2. Mount the CD-ROM:
mount /media/cdrom/

3. Run the installation script:


sh /media/cdrom/install.sh

4. Select Option 1 - Update System to Service Pack 3 level. 5. After the updates have been applied, you are prompted to quit. Press Enter. 6. Unmount the CD-ROM and remove it from the CD-ROM drive:
umount /media/cdrom/

7. Reboot the engine (shutdown -r now). After rebooting, log in as root. 8. Repeat steps 1 to 7 for the Service Pack 4. 9. Verify that the required kernel level is installed:
rpm -qa | grep -e k_smp -e kernel

The correct kernel level should be listed, as shown in Example 5-1.


Example 5-1 Show SUSE Linux kernel level # rpm -qa |grep -e k_smp -e kernel k_smp-2.4.21-278 kernel-source-2.4.21-278 #uname -a Linux tank-mds3 2.4.21-278-smp #1 SMP Mon Mar 7 09:17:29 UTC 2005 i686 unknown

Set the date and time


Perform the following steps: 1. Log in as user ID root with the password set during installation. You can change the password with the passwd command. Well-secured passwords are recommended. 2. Set the clock with the hwclock command (see Example 5-2).
Example 5-2 hwclock setting command
# hwclock --set --date "Friday Aug 19 11:00"


3. Set the time zone if you did not set it during installation. Choose the appropriate time zone setting from the listings in /usr/share/zoneinfo. Example 5-3 shows setting the time zone to US Eastern time.
Example 5-3 Time zone settings # rm /etc/localtime # ln -s /usr/share/zoneinfo/EST5EDT /etc/localtime

4. Set the system time from the hardware clock with the hwclock command (Example 5-4).
Example 5-4 Time setting from hardware clock # hwclock --hctosys

Tip: Make sure each MDS in the cluster is set to the same date and time!
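A simple way to compare the clocks is to read the date from every engine in one pass. This is only a sketch, using the engine names from our lab; at this stage ssh still prompts for the root password of each MDS:
# for mds in tank-mds3 tank-mds4; do ssh root@$mds date; done
If the timestamps differ, repeat the hwclock steps above on the engine that is out of step.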

Check TCP/IP network configuration


Verify the TCP/IP settings configured during installation. 1. Check the host name, as shown in Example 5-5.
Example 5-5 Host name setting commands # cat /etc/HOSTNAME tank-mds3 #

2. Change to the networking configuration directory /etc/sysconfig/network/. Look at the configuration file ifcfg-eth0 to check that the IPADDR and NETMASK values are appropriately set, as shown in Example 5-6.
Example 5-6 IPADDR and NETMASK settings # cat /etc/sysconfig/network/ifcfg-eth0 IPADDR=9.82.22.171 NETMASK=255.255.255.0 BROADCAST=9.82.22.255

3. Check that the file /etc/resolv.conf includes correct DNS information. At a minimum, you need one nameserver and domain entry, as shown in Example 5-7.
Example 5-7 DNS settings example: /etc/resolv.conf nameserver 192.168.254.100 nameserver 192.168.254.101 domain company.com search company.net company.com

Note: If DNS is not being used, the IP addresses and host names of each SAN File System engine must be included in the /etc/hosts file on each SAN File System engine. 4. Check /etc/sysconfig/network/routes for the TCP/IP routing information, including a default route at a minimum (see Example 5-8 on page 131).


Example 5-8 IP routing - /etc/sysconfig/network/routes
224.0.0.0   0.0.0.0    240.0.0.0   eth0   multicast
default     9.82.22.1  0.0.0.0     eth0
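As the note in step 3 explains, when DNS is not used every engine must resolve the cluster host names locally. A minimal /etc/hosts sketch using the addresses and names from our lab (the domain suffix is specific to our environment, so substitute your own):
9.82.22.171   tank-mds3.wsclab.washington.ibm.com   tank-mds3
9.82.22.172   tank-mds4.wsclab.washington.ibm.com   tank-mds4
The same entries should appear on each MDS and on any system used to administer the cluster.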

5. If you had to make any changes to the network configuration, you need to shut down and reboot the MDS (run shutdown -r now).
6. Use ifconfig to verify network operation, as shown in Example 5-9.
Example 5-9 ifconfig # ifconfig eth0 Link encap:Ethernet HWaddr 00:10:18:00:47:29 inet addr:9.82.22.171 Bcast:9.82.22.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1625358 errors:0 dropped:0 overruns:0 frame:0 TX packets:263962 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:128100235 (122.1 Mb) TX bytes:24174608 (23.0 Mb) Interrupt:20 Memory:efff0000-f0000000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:78270 errors:0 dropped:0 overruns:0 frame:0 TX packets:78270 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:51608135 (49.2 Mb) TX bytes:51608135 (49.2 Mb)

7. Check that you can ping other host names in your network from the MDS, and that the MDS host name itself, both short and fully qualified form if used, is resolvable from other hosts before proceeding. If you have problems, re-check the network settings and Ethernet cabling, as well as the configuration of the DNS, if used. 8. You can now perform the rest of the installation at another system with SSH connection to your MDS, that is, you do not need to be at the MDS console. If you do not have an SSH-enabled system, see 5.7, SAN File System MDS remote access setup (PuTTY / ssh) on page 228 and 7.1.1, Accessing the CLI on page 252 for details on how to do this task.
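A quick sketch of the checks in step 7, run from the MDS (the host names are those of our lab engines; getent consults whatever name resolution is configured, whether DNS or /etc/hosts):
# ping -c 2 tank-mds4                                 (reach another engine)
# getent hosts tank-mds3                              (short name resolves)
# getent hosts tank-mds3.wsclab.washington.ibm.com    (fully qualified name, if used)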

Set up Ethernet bonding


Ethernet Bonding is a Linux operating system feature that provides a higher degree of availability to the Metadata servers through the bonding of multiple Ethernet interfaces to a single IP address. From SAN File System V2.2.2 onwards, Ethernet bonding is highly recommended for configuration on each MDS, so that server/server and client/server IP traffic need not be broken by the failure of a single network interface on an MDS engine. Ethernet bonding is required for the high availability features of SAN File System V2.2.2 to function most completely.


Redundant Ethernet support has several benefits for SAN File System clients:
- A single Ethernet component failure no longer needs to result in a metadata service outage or a failover. This makes a network partition, which is a particularly disruptive failure, much less likely.
- It reduces the chance that a failure will cause a file system error to be returned by SAN File System to the application.
- It allows certain client network maintenance (for example, switch replacement) to be performed without impacting access to the SAN File System service.
The procedure for enabling Ethernet bonding is slightly different in a SLES 8 and a SLES 9 configuration. The IBM TotalStorage SAN File System Installation and Configuration Guide, GA27-4316, provides instructions for the SLES 9 environment. In our environment (SLES 8 Service Pack 4), each MDS has two Broadcom Gigabit Ethernet adapters. You could also enable bonding on the x345 using the built-in Fast Ethernet adapters. The following steps should be performed on each MDS in turn.
Important: If you are configuring Ethernet bonding on an existing SAN File System cluster, run the steps on any subordinate(s) first, then finally on the master MDS.
1. Enter /etc/init.d/network stop to stop networking.
2. Check the configuration of the first Ethernet interface in the file /etc/sysconfig/network/ifcfg-eth0. The value of BOOTPROTO must be static, as in Example 5-10.
Example 5-10 BOOTPROTO value tank-mds2:~ # cat /etc/sysconfig/network/ifcfg-eth0 BOOTPROTO='static' BROADCAST='9.82.24.255' IPADDR='9.82.24.96' NETMASK='255.255.255.0' NETWORK='9.82.24.0' STARTMODE='onboot' UNIQUE='QOEa.4zNNCpehEiC' WIRELESS='no' device='eth0'

3. Add the lines shown in Example 5-11 to /etc/init.d/boot.local so that bonding will be configured on each system reboot. The bonding options are defined with the modprobe command and tell what mode and timer monitor values are supported. SAN File System requires mode=active-backup and miimon=100. The active-backup mode means only one of the interfaces will be active, and the other waits in standby until needed. The next modprobe statement loads the NIC driver. Since we are using the Broadcom Gigabit Ethernet NICs, we specify bcm5700. If using the Intel adapter, enter modprobe e1000. The ifconfig and ifenslave commands create an adapter called bond0 with the same TCP/IP address as eth0, and tie eth0 and eth1 to the bond0 adapter. SLES 8 requires that the bond, or enslave, be restarted after each boot, unlike SLES 9.
Example 5-11 Bonding options # Here you should add things, that should happen directly after booting # before we're going to the first run level. # modprobe bonding mode=active-backup miimon=100 modprobe bcm5700 ifconfig bond0 9.82.24.96 netmask 255.255.255.0 up


ifenslave bond0 eth0 ifenslave bond0 eth1

4. Check /etc/sysconfig/network/routes and make sure that the default route is not tied to a specific adapter, such as eth0, or eth1, as in Example 5-12.
Example 5-12 Routes tank-mds1:~ # more /etc/sysconfig/network/routes # default 9.82.24.1 0.0.0.0

5. Now reboot the MDS to activate the changes. To verify that bonding is active, check the status of all three adapters (bond0, eth0 and eth1) using the ifconfig command, as in Example 5-13. In our example, with mode=active-backup, (specified in Example 5-11 on page 132), eth0 is the adapter sending and receiving traffic, while eth1 is sitting idle, in backup mode.
Example 5-13 Initial ifconfig output
tank-mds1:/etc/sysconfig/network # ifconfig
bond0   Link encap:Ethernet HWaddr 00:10:18:00:99:5C
        inet addr:9.82.24.96 Bcast:9.255.255.255 Mask:255.255.255.0
        inet6 addr: fe80::210:18ff:fe00:995c/64 Scope:Link
        UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
        RX packets:291601 errors:0 dropped:0 overruns:0 frame:0
        TX packets:207016 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:0
        RX bytes:23762180 (22.6 Mb) TX bytes:17686933 (16.8 Mb)

eth0    Link encap:Ethernet HWaddr 00:10:18:00:99:5C
        inet addr:9.82.24.96 Bcast:9.255.255.255 Mask:255.255.255.0
        inet6 addr: fe80::210:18ff:fe00:995c/64 Scope:Link
        UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
        RX packets:276771 errors:0 dropped:0 overruns:0 frame:0
        TX packets:207013 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:100
        RX bytes:22159636 (21.1 Mb) TX bytes:17686711 (16.8 Mb)
        Interrupt:20 Memory:efff0000-f0000000

eth1    Link encap:Ethernet HWaddr 00:10:18:00:99:5C
        inet addr:9.82.24.96 Bcast:9.255.255.255 Mask:255.255.255.0
        inet6 addr: fe80::210:18ff:fe00:995c/64 Scope:Link
        UP BROADCAST RUNNING NOARP SLAVE MULTICAST MTU:1500 Metric:1
        RX packets:14830 errors:0 dropped:0 overruns:0 frame:0
        TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:100
        RX bytes:1602544 (1.5 Mb) TX bytes:222 (222.0 b)
        Interrupt:22 Memory:edff0000-ee000000
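In addition to ifconfig, the Linux bonding driver reports its state under /proc/net/bonding. A quick check, assuming the bond device is named bond0 as configured above (the exact field names can vary slightly with the kernel level):
# cat /proc/net/bonding/bond0 | grep -E "Bonding Mode|Currently Active Slave|MII Status"
In active-backup mode, the Currently Active Slave line identifies which interface is carrying traffic at that moment.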


6. Now we can test if the NIC failover works. We disconnected the connection from eth0 while running a continuous ping from a workstation on a different subnet from the MDS. We see a momentary timeout to the ping response from 9.82.24.96 as eth1 becomes the active adapter (see Example 5-14).
Example 5-14 Ping timeout Reply from 9.82.24.96: Reply from 9.82.24.96: Reply from 9.82.24.96: Reply from 9.82.24.96: Reply from 9.82.24.96: Request timed out. Reply from 9.82.24.96: Reply from 9.82.24.96: Reply from 9.82.24.96: Reply from 9.82.24.96: Reply from 9.82.24.96: Reply from 9.82.24.96: bytes=32 bytes=32 bytes=32 bytes=32 bytes=32 bytes=32 bytes=32 bytes=32 bytes=32 bytes=32 bytes=32 time=299ms TTL=57 time=46ms TTL=57 time=10ms TTL=57 time=10ms TTL=57 time=7ms TTL=57 time=301ms TTL=57 time=2ms TTL=57 time=7ms TTL=57 time=7ms TTL=57 time=7ms TTL=57 time=6ms TTL=57

7. We can verify that eth1 is now the active adapter and eth0 is the backup by issuing another ifconfig on tank-mds1, as shown in Example 5-15, compared to the previous output, as shown in Example 5-13 on page 133.
Example 5-15 Ifconfig output after eth0 failover
ifconfig
bond0   Link encap:Ethernet HWaddr 00:10:18:00:99:D5
        inet addr:9.82.24.96 Bcast:9.255.255.255 Mask:255.255.255.0
        inet6 addr: fe80::210:18ff:fe00:99d5/64 Scope:Link
        UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
        RX packets:10525 errors:0 dropped:0 overruns:0 frame:0
        TX packets:7516 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:0
        RX bytes:945831 (923.6 Kb) TX bytes:715659 (698.8 Kb)

eth0    Link encap:Ethernet HWaddr 00:10:18:00:99:D5
        inet addr:9.82.24.96 Bcast:9.255.255.255 Mask:255.255.255.0
        inet6 addr: fe80::210:18ff:fe00:99d5/64 Scope:Link
        UP BROADCAST NOARP SLAVE MULTICAST MTU:1500 Metric:1
        RX packets:5822 errors:0 dropped:0 overruns:0 frame:0
        TX packets:4413 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:100
        RX bytes:540595 (527.9 Kb) TX bytes:422500 (412.5 Kb)
        Interrupt:20 Memory:efff0000-f0000000

eth1    Link encap:Ethernet HWaddr 00:10:18:00:99:D5
        inet addr:9.82.24.96 Bcast:9.255.255.255 Mask:255.255.255.0
        inet6 addr: fe80::210:18ff:fe00:99d5/64 Scope:Link
        UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
        RX packets:4703 errors:0 dropped:0 overruns:0 frame:0
        TX packets:3103 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:100
        RX bytes:405236 (395.7 Kb) TX bytes:293159 (286.2 Kb)
        Interrupt:22 Memory:edff0000-ee000000

lo      Link encap:Local Loopback
        inet addr:127.0.0.1 Mask:255.0.0.0
        inet6 addr: ::1/128 Scope:Host
        UP LOOPBACK RUNNING MTU:16436 Metric:1
        RX packets:3296 errors:0 dropped:0 overruns:0 frame:0
        TX packets:3296 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:0
        RX bytes:2442382 (2.3 Mb) TX bytes:2442382 (2.3 Mb)

8. The test has succeeded; we can reconnect the eth0 interface. 9. Repeat these steps on the remaining MDSs.

5.2.4 Upgrade MDS BIOS and RSA II firmware


You may need to upgrade the BIOS and firmware on the MDS, including the RSA card. Check the SAN File System Release Notes to determine the levels of machine FLASH BIOS and RSA Firmware needed to support V2.2.2. For the IBM eServer xSeries 345, you can download the BIOS at the following Web site:
http://www.ibm.com/pc/support/site.wss/document.do?lndocid=MIGR-54484

For the IBM eServer xSeries 346 model, the BIOS is at the following Web site:
http://www.ibm.com/pc/support/site.wss/document.do?sitestyle=ibm&lndocid=MIGR-57356

For the IBM eServer xSeries 365 model, the BIOS is at the following Web site:
http://www.ibm.com/pc/support/site.wss/document.do?sitestyle=ibm&lndocid=MIGR-60101

Follow the README notes that come with the FLASH BIOS package for installation instructions. In our case, we dumped the BIOS to a diskette and rebooted the MDS with the diskette inserted in the drive. The MDS reboots from the diskette, asks some elementary questions, and flashes the BIOS. Now check the required RSA II card firmware level. You can download this firmware (for the IBM eServer xSeries 345) from the following Web site:
http://www.ibm.com/pc/support/site.wss/document.do?lndocid=MIGR-46489

For the IBM eServer xSeries 346 model, the RSA II firmware is at the following Web site:
http://www.ibm.com/pc/support/site.wss/document.do?sitestyle=ibm&lndocid=MIGR-56759

For the IBM eServer xSeries 365 model, the RSA II firmware is at the following Web site:
http://www.ibm.com/pc/support/site.wss/document.do?sitestyle=ibm&lndocid=MIGR-53861

Instructions for upgrading the BIOS and firmware are given in the manual IBM TotalStorage SAN File System Installation and Configuration Guide, GA27-4316.

5.2.5 Install prerequisite software on the MDS


The following prerequisite software must be installed on each MDS. 1. QLogic driver for your HBA: if not available, this must be downloaded from the QLogic Web site. For SLES 8, it is installed with the Service Pack; for SLES 9, get the driver from:
http://www.ibm.com/servers/storage/support/disk/ds4500/hbadrivers1.html


2. IBM Subsystem Device Driver (SDD) V1.6.0.1-6 or Redundant Disk Array Controller (RDAC) V9.00.A5.09 or later, as appropriate for your system storage. We provide detailed instructions for installing the device driver in 4.4, Subsystem Device Driver on page 109 and 4.5, Redundant Disk Array Controller (RDAC) on page 119. In our case, we use SDD:
a. Install and start IBMsdd to manage multiple Fibre Channel paths to IBM storage LUNs (LUNs have already been created and mapped to this host). Go to http://www.ibm.com/servers/storage/support/software/sdd/downloading.html.
b. Click TotalStorage Multipath Subsystem Device Driver downloads and select Subsystem Device Driver for Linux, and start the SDD download for the storage subsystem and OS you are using.
c. Install the SDD driver by running, for example, rpm -U IBMsdd-1.6.0.1-4.i686.ul1.rpm.
d. Configure SDD to restart during boot by running chkconfig -a sdd 35 (see Example 5-16).
Example 5-16 SDD boot config
tank-mds2:~ # chkconfig sdd 35
sdd    0:off  1:off  2:off  3:on  4:off  5:on  6:off
tank-mds2:~ #

e. Start SDD by running sdd start. f. Verify that SDD devices were configured by running lsvpcfg (see Example 5-17).
Example 5-17 List vpaths
tank-mds2:~ # lsvpcfg
000 vpathc ( 254, 32) 600507680184001aa800000000000087 = /dev/sdc /dev/sde /dev/sdg /dev/sdi
001 vpathd ( 254, 48) 600507680184001aa800000000000088 = /dev/sdd /dev/sdf /dev/sdh /dev/sdj
tank-mds2:~ #

g. Verify that each MDS discovers the same number of LUNs, and verify that the multi-path device driver restarts after a reboot. 3. Install IBM Java Runtime Environment (provided on the SAN File System installation CD). Mount the CD-ROM, for example, /media/cdrom, and run the command:
rpm -U /media/dvd/common/IBMJava2-142-ia32-JRE-1.4.2-1.0.i386.rpm

4. Install heterogeneous security. This is required if you will use advanced heterogeneous security, as described in 8.3, Advanced heterogeneous file sharing on page 347. 5. Set up SSH keys as described in IBM TotalStorage SAN File System Installation and Configuration Guide, GA27-4316. Example 5-18 shows how we enabled ssh keys on our cluster, tank-mds3 and tank-mds4. With ssh keys, the installation procedure can run from the master MDS, without having to continually prompt for passwords for the subordinate MDS.
Example 5-18 Setup ssh keys for cross-authentication on each MDS *************************** First, create keys on both MDS Start with tank-mds4 *************************** tank-mds4:/ # mkdir -p ~/.ssh tank-mds4:/ # ssh-keygen -t rsa -N ""


Generating public/private rsa key pair. Enter file in which to save the key (/root/.ssh/id_rsa): Your identification has been saved in /root/.ssh/id_rsa. Your public key has been saved in /root/.ssh/id_rsa.pub. The key fingerprint is: 5f:f5:a2:b5:db:0c:08:71:57:70:12:53:61:68:5e:52 root@tank-mds4 *************************** Now on tank-mds3 *************************** tank-mds3:~ # mkdir -p ~/.ssh tank-mds3:~ # ssh-keygen -t rsa -N "" Generating public/private rsa key pair. Enter file in which to save the key (/root/.ssh/id_rsa): Your identification has been saved in /root/.ssh/id_rsa. Your public key has been saved in /root/.ssh/id_rsa.pub. The key fingerprint is: 8c:a9:74:3c:a2:ff:6f:07:1a:82:9a:4a:c1:13:21:1d root@tank-mds3 *************************** Once creating the ssh keys, add the public key from each server to the $HOME/.ssh/suthorized_keys file on each of the other servers. on mds4 *************************** tank-mds4:/ # ssh root@tank-mds3 "cat >> ~/.ssh/authorized_keys" < ~/.ssh/id_rsa.pub The authenticity of host 'tank-mds3 (9.82.22.171)' can't be established. RSA key fingerprint is f6:59:1d:2b:0e:1b:0b:9d:28:ee:c9:f4:50:df:5b:af. Are you sure you want to continue connecting (yes/no)? yes 4425: Warning: Permanently added 'tank-mds3,9.82.22.171' (RSA) to the list of known hosts. root@tank-mds3's password: tank-mds4:/ # *************************** And on mds3 *************************** tank-mds3:~ # ssh root@tank-mds4 "cat >> ~/.ssh/authorized_keys" < ~/.ssh/id_rsa.pub The authenticity of host 'tank-mds4 (9.82.22.172)' can't be established. RSA key fingerprint is 38:30:e6:fe:46:0e:d1:31:95:d9:3a:56:ba:fd:5d:a0. Are you sure you want to continue connecting (yes/no)? yes 400: Warning: Permanently added 'tank-mds4' (RSA) to the list of known hosts. root@tank-mds4's password: tank-mds3:~ # *************************** Finally verify root password no longer required for ssh: On mds3 *************************** tank-mds3:~ # ssh root@tank-mds4 Last login: Fri Aug 26 04:09:24 2005 from sig-9-48-48-194.mts.ibm.com tank-mds4:~ # who root pts/0 Aug 26 01:19 (0013108fb8cf.wma.ibm.com) root pts/1 Aug 26 04:09 (sig-9-48-48-194.mts.ibm.com) root pts/2 Aug 26 05:09 (tank-mds3.wsclab.washington.ibm.com) *************************** on mds4 *************************** tank-mds4:/ # ssh root@tank-mds3 Last login: Fri Aug 26 04:22:11 2005 from 0013108fb8cf.wma.ibm.com tank-mds3:~ # who root pts/0 Aug 26 04:06 (sig-9-48-48-194.mts.ibm.com) root pts/1 Aug 26 04:22 (0013108fb8cf.wma.ibm.com) root pts/2 Aug 26 05:10 (tank-mds4.wsclab.washington.ibm.com) tank-mds3:~ #


5.2.6 Install SAN File System cluster


The SAN File System cluster is installed by executing a self-extracting archive and shell script. The self-extracting archive is named install_sfs-package-<version>.<platform>.sh and contains the software packages for all SAN File System components, including the metadata server, the administrative server, and all clients. Note that the version string for the install package might differ from the version strings of the individual software packages; this is normal. The platform string is either i386 for SLES 8 or i586 for SLES 9.
You run the installation script from one MDS only. This MDS will become the master, and will automatically install all the subordinate MDS. Run the installation script that corresponds to the version of SUSE Linux Enterprise Server that is installed on your system. There are two install_sfs-package scripts on the SAN File System CD:
- For SUSE Linux Enterprise Server version 8, in a directory named SLES8
- For SUSE Linux Enterprise Server version 9, in a directory named SLES9
Note: If you are using a remote ssh session such as PuTTY, make sure logging is not enabled. We found that using PuTTY logging caused some formatting errors in the installation prompts.
1. Put the SAN File System CD in the drive of the MDS where you want to run the installation. Mount the CD (run, for example, mount /media/cdrom).
2. Generate a configuration file template by running install_sfs-package-<version>.<platform>.sh with the --genconfig option and redirect the output to a file:
/media/cdrom/SLESx/install_sfs-package-<version>.<platform>.sh --genconfig > /tmp/sfs.conf
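The exact layout of the generated file comes from the --genconfig template itself, so treat the fragment below purely as an illustrative sketch: it lists the parameter names that the setup prompts reference (see Example 5-20) together with the values we used in our lab.
# /tmp/sfs.conf (edited excerpt; adjust every value to your environment)
CD_MNT=/media/cdrom
SERVER_NAME=tank-mds3
CLUSTER_NAME=ITSO_GBURG
IP=9.82.22.171
LANG=en_US.utf8
SYS_MGMT_IP=9.82.22.173
RSA_USER=USERID
RSA_PASSWD=PASSW0RD
CLI_USER=itsoadm
TRUSTSTORE_PASSWD=password
META_DISKS=/dev/rvpatha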

3. Edit the generated file (/tmp/sfs.conf in our example), and change each entry to match your environment. See 5.2.7, SAN File System cluster configuration on page 147 for details of the parameters included in this file. 4. Run install_sfs-package-<version>.<platform>.sh to install, configure, and start the SAN File System cluster, specifying the configuration file created in the previous steps (for example, /tmp/sfs.conf):
/media/cdrom/SLESx/install_sfs-package-<version>.<platform>.sh --loadcluster --sfsargs "-f /tmp/sfs.conf -noldap"

Note: If you are using an LDAP server rather than local authentication to authenticate SAN File System Administration console users, omit the -noldap option. The command will then be /media/cdrom/SLESx/install_sfs-package-<version>.sh --loadcluster --sfsargs "-f /tmp/sfs.conf". We provide details of local authentication in 3.5.1, Local authentication on page 72, 4.1.1, Local authentication configuration on page 100, and 5.5, Local administrator authentication option on page 186. Choose the installation language (we chose 2 for English), press Enter to display the license agreement, and enter 1 when prompted to accept the license agreement, as shown in Example 5-19 on page 139.


Example 5-19 Cluster installation: language and license agreement tank-mds3:/media/cdrom/SLES8 # ./install_sfs-package-2.2.2-132.i386.sh --loadcluster --sfsargs "-f /tmp/sfs.conf -noldap" Software Licensing Agreement 1. Czech 2. English 3. French 4. German 5. Italian 6. Polish 7. Portuguese 8. Spanish 9. Turkish Please enter the number that corresponds to the language you prefer. 2 Software Licensing Agreement Press Enter to display the license agreement on your screen. Please read the agreement carefully before installing the Program. After reading the agreement, you will be given the opportunity to accept it or decline it. If you choose to decline the agreement, installation will not be completed and you will not be able to use the Program. International Program License Agreement Part 1 - General Terms BY DOWNLOADING, INSTALLING, COPYING, ACCESSING, OR USING THE PROGRAM YOU AGREE TO THE TERMS OF THIS AGREEMENT. IF YOU ARE ACCEPTING THESE TERMS ON BEHALF OF ANOTHER PERSON OR A COMPANY OR OTHER LEGAL ENTITY, YOU REPRESENT AND WARRANT THAT YOU HAVE FULL AUTHORITY TO BIND THAT PERSON, COMPANY, OR LEGAL ENTITY TO THESE TERMS. IF YOU DO NOT AGREE TO THESE TERMS, - DO NOT DOWNLOAD, INSTALL, COPY, ACCESS, OR USE THE PROGRAM; AND - PROMPTLY RETURN THE PROGRAM AND PROOF OF ENTITLEMENT TO Press Enter to continue viewing the license agreement, or, Enter "1" to accept the agreement, "2" to decline it or "99" to go back to the previous screen. 1


5. Now the packages are extracted, as in Example 5-20. You will be prompted to accept the options that you configured in the configuration file /tmp/sfs.conf. You can either accept each one, by pressing Enter, or change them to other values.
Example 5-20 Cluster installation: unpack packages and check installation options Installing /usr/tank/packages/sfs.server.verify.linux_SLES8-2.2.2-91.i386.rpm...... sfs.server.verify.linux_SLES8################################################## sfs.server.verify.linux_SLES8-2.2.2-91 Installing /usr/tank/packages/sfs.server.config.linux_SLES8-2.2.2-91.i386.rpm...... sfs.server.config.linux_SLES8################################################## sfs.server.config.linux_SLES8-2.2.2-91 IBM SAN File System metadata server setup To use the default value that appears in [square brackets], press the ENTER key. A dash [-] indicates no default is available. SAN File System CD mount point (CD_MNT) ======================================= setupsfs needs to access the SAN File System CD to verify the license key and install required software. Enter the full path to the SAN File System CDs mount point. CDs mount point [/media/cdrom]: /media/cdrom Server name (SERVER_NAME) ========================= Every engine in the cluster must have a unique name. This name must be the same as the unique name used to configure the RSA II adapter on each engine. However, no checks are done by the metadata server to enforce this rule. Server name [tank-mds3]: tank-mds3 Cluster name (CLUSTER_NAME) =========================== Specifies the name given to the cluster. This cluster name becomes the global name space root. For example, when a client mounts the namespace served by cluster name sanfs on the path /mnt/, the SAN File System is accessed by /mnt/sanfs/. If a name is not specified, a default cluster name will be assigned. The cluster name can be a maximum of 30 ASCII bytes or the equivalent in unicode characters. Cluster name [ITSO_GBURG]: ITSO_GBURG Server IP address (IP) ====================== This is dotted decimal IPv4 address that the local metadata server engine has bound to its network interface. Server IP address [9.82.22.171]: 9.82.22.171 Language (LANG)


=============== The metadata server can be configured to use a custom locale. This release supports only UTF8 locales. Language [en_US.utf8]: en_US.utf8 System Managment IP (SYS_MGMT_IP) ================================= Enter the System Managment IP address This is the address assigned to your RSAII card. System Managment IP [9.82.22.173]: 9.82.22.173 Authorized RSA User (RSA_USER) ============================== Enter the user name used to access the RSA II card. Authorized RSA User [USERID]: USERID RSA Password (RSA_PASSWD) ========================= Enter the password used to access the RSA II card. RSA Password [PASSWORD]: PASSW0RD CLI User (CLI_USER) =================== Enter the user name that will be used to access the administrative CLI. This user must have an administrative role. CLI User [itsoadm]: itsoadm CLI Password (CLI_PASSWD) ========================= Enter the password used to access the administrative CLI. CLI Password [itso]: xxxxx Truststore Password (TRUSTSTORE_PASSWD) ======================================= Enter the password used to secure the truststore file. The password must be at least six characters. Truststore Password [password]: xxxx LDAP SSL Certificate (LDAP_CERT) ================================ If your LDAP server only allows SSL connections, enter the full path to the file containing the LDAP certificate. Otherwise, do not enter anything.


LDAP SSL Certificate []: Metadata disk (META_DISKS) ========================== A space separated list of raw devices on which SAN File System metadata is stored. Metadata disk [/dev/rvpatha]: /dev/rvpatha

Note: If you are using LDAP authentication, (that is, you did not use the -noldap option), you will also be prompted for additional options, as shown in Example 5-21. These will appear after the Language (LANG) option, and before the System Management IP (SYS_MGMT_IP) option shown in Example 5-20. The sample values here correspond to the LDAP configuration shown in 3.5.2, LDAP on page 73 and 4.1.2, LDAP and SAN File System considerations on page 101.
Example 5-21 Cluster installation: LDAP options LDAP server (LDAP_SERVER) ========================= An LDAP server is used to authenticate users who will administer the server. LDAP server IP address [9.42.164.114]: 9.42.164.114

LDAP user (LDAP_USER) ===================== Distinguished name of an authorized LDAP user. LDAP user [cn=root]: cn=Manager,o=ITSO LDAP user password (LDAP_PASSWD) ================================ Password of the authorized LDAP user. This password will need to match the credentials set in your LDAP server. LDAP user password [atslock]: password LDAP secured connection (LDAP_SECURED_CONNECTION) ================================================= Set this value to true if your LDAP server requires SSL connections. If your LDAP server is not using SSL or you are not sure, set this value to false. LDAP secured connection [false]: false LDAP roles base distinguished name (LDAP_BASEDN_ROLES) ====================================================== Base distinguished name to search for roles. For example: ou=Roles,o=company,c=country


LDAP roles base distinguished name [ou=Roles,o=ITSO]: ou=Roles,o=ITSO LDAP members attribute (LDAP_ROLE_MEM_ID_ATTR) ============================================== When a SAN File System administration login is attempted, the SAN File System console searches all Role entries to get a list of uses that have permission to access the SAN File System Console. LDAP members attribute [roleOccupant]: roleOccupant LDAP user id attribute (LDAP_USER_ID_ATTR) ========================================== When a SAN File System administration login is attempted, the SAN File System Console searches all users which are associated with a SAN File System Role to see if the login attempt should be allowed. LDAP user id attribute [uid]: uid LDAP role name attribute (LDAP_ROLE_ID_ATTR) ============================================ The attribute that holds the name of the role. LDAP role name attribute [cn]: cn

6. You will be asked if there are any subordinate nodes in the cluster. Answer yes (the default). You will then be prompted to enter the host name, Ethernet TCP/IP address, and RSA TCP/IP address of each subordinate MDS, as shown in Example 5-22. Repeat for each subordinate MDS, then answer no to the question "Is there another subordinates node?" (sic) when all subordinates have been entered. We have one subordinate node, tank-mds4.
Example 5-22   Cluster installation: enter subordinate node details
Subordinate server setup
========================
setupsfs will now collect information about each subordinate node in the cluster.
- Enter No if this cluster will not have any subordinate nodes.
- Enter Yes to continue.
Will this cluster have any subordinates nodes? [Yes]: yes

Subordinate Server Name
=======================
Every engine in the cluster must have a unique name.
Subordinate Name. [-]: tank-mds4

Subordinate IP address
======================
The dotted decimal IPv4 address that the subordinate Metadata server engine has bound to its network interface.
Subordinate Server IP address [-]: 9.82.22.172

System Managment IP (SYS_MGMT_IP)
=================================
Enter the System Managment IP address. This is the address assigned to your RSAII card.
System Management IP [-]: 9.82.22.174

Subordinate server setup
========================
- Enter No if there are not any more subordinate nodes.
- Enter Yes to continue.
Is there another subordinates node? [Yes]: no

7. Now the installation proceeds: The unpacked software is installed on the master MDS and all subordinates, and the server processes are started on each MDS. Finally, the subordinates are joined to the SAN File System cluster (see Example 5-23).

Note: If you did not set up the ssh keys correctly, as described in 5.2.5, Install prerequisite software on the MDS on page 135, you will be prompted many times to enter the root password for any subordinate node(s).
Example 5-23   Cluster installation: install each MDS and form SAN File System cluster
Run SAN File System server setup
================================
The configuration utility has not made any changes to your system configuration.
- Enter No to quit without configuring the metadata server on this system.
- Enter Yes to start the metadata server.
Run server setup [Yes]: yes
Gathering required files
Copying files to 9.82.22.172
HSTPV0035I Machine tank-mds3 complies with requirements of SAN File System version 2.2.2.91, build sv22_0001.
Installing:sfs.server.verify.linux_SLES8-2.2.2-91.i386.rpm on 9.82.22.172 .
sfs.server.verify.linux_SLES8-2.2.2-91
HSTPV0035I Machine tank-mds4 complies with requirements of SAN File System version 2.2.2.91, build sv22_0001. .
Installing:wsexpress-5.1.2-1.i386.rpm on 9.82.22.172 .
wsexpress-5.1.2-1 .
Installing:sfs.server.config.linux_SLES8-2.2.2-91.i386.rpm on 9.82.22.172 .
sfs.server.config.linux_SLES8-2.2.2-91 .
Installing:sfs.admin.linux_SLES8-2.2.2-91.i386.rpm on 9.82.22.172 .
HSTWU0011I Installing the SAN File System console...
HSTWU0014I The SAN File System console has been installed successfully.
sfs.admin.linux_SLES8-2.2.2-91 .
Installing:sfs.server.linux_SLES8-2.2.2-91.i386.rpm on 9.82.22.172 .
sfs.server.linux_SLES8-2.2.2-91
Creating configuration for 9.82.22.172 .
Updating configuration file: /tmp/fileIFY1Lm/sfs.conf.9.82.22.172
Updating configuration file: /usr/tank/admin/config/cimom.properties .
Installing:sfs.admin.linux_SLES8-2.2.2-91.i386.rpm on 9.82.22.171 .
HSTWU0011I Installing the SAN File System console...
HSTWU0014I The SAN File System console has been installed successfully.
sfs.admin.linux_SLES8-2.2.2-91 .
Installing:sfs.server.linux_SLES8-2.2.2-91.i386.rpm on 9.82.22.171 .
sfs.server.linux_SLES8-2.2.2-91
Creating configuration for 9.82.22.171 .
HSTAS0005I Creating truststore file.
HSTAS0006I The truststore was created successfully.
Updating configuration file: /tmp/fileIFY1Lm/sfs.conf.9.82.22.171
Starting the metadata server on 9.82.22.171 .
Starting the CIM agent on 9.82.22.171 . .
Starting the SAN File System Console on 9.82.22.171 . .
Name       State   Server Role  Filesets  Last Boot
=============================================================
tank-mds3  Online  Master       1         Aug 26, 2005 7:17:03 AM
Starting the metadata server on 9.82.22.172 .
Starting the CIM agent on 9.82.22.172 .
Starting the SAN File System Console on 9.82.22.172 .
Name       State   Server Role  Filesets  Last Boot
=============================================================
tank-mds3  Online  Master       1         Aug 26, 2005 7:17:03 AM
Name       State   Server Role  Filesets  Last Boot
=============================================================
tank-mds3  Online  Master       1         Aug 26, 2005 7:17:03 AM
NODE: 0 9.82.22.171 1737 1700 1738 1800 5989 GR tank-mds3 2.2.2.91 9.82.22.173
CMMNP5205I Metadata server 9.82.22.172 on port 1737 was added to the cluster successfully.
Configuration complete.
#


8. You can verify the setup using the sfscli lsserver command, as shown in Example 5-24. It should show one master, with the rest as subordinates. All MDSs should have a state of Online.
Example 5-24   SAN File System installation complete
# sfscli lsserver
Name       State   Server Role   Filesets  Last Boot
=============================================================
tank-mds4  Online  Master        1         Aug 26, 2005 7:17:03 AM
tank-mds3  Online  Subordinate   0         Aug 26, 2005 7:17:47 AM

9. The installation process stores the software packages for all SAN File System components, including the Metadata server, the administrative server, and all clients, in the directory /usr/tank/packages. Example 5-25 shows the SAN File System packages installed in the directory.
Example 5-25   SAN File System installation packages
# cd /usr/tank/packages
# ls
.                                          sfs.client.linux_SLES8-2.2.2-82.i386.rpm
..                                         sfs.client.linux_SLES8-2.2.2-82.ppc64.rpm
inst_list.cd                               sfs.client.solaris9.2.2.2-82
inst_list.no.cd                            sfs.locale.linux_SLES8-2.2.2-8.i386.rpm
sfs-client-WIN2K3-opt-2.2.2.82.exe         sfs.server.config.linux_SLES8-2.2.2-91.i386.rpm
sfs.admin.linux_SLES8-2.2.2-91.i386.rpm    sfs.server.linux_SLES8-2.2.2-91.i386.rpm
sfs.client.aix51                           sfs.server.verify.linux_SLES8-2.2.2-91.i386.rpm
sfs.client.aix52                           sfs.server.linux-2.2.0-83.i386.rpm
sfs.client.aix53                           sfs.client.linux_RHEL-2.2.2-82.i386.rpm
mds1:/usr/tank/packages #

10.Run the Target Machine Validation Tool (TMVT) to verify that your hardware and software prerequisites have been met. We showed an example of using this tool in 4.2, Target Machine Validation Tool (TMVT) on page 105:
/usr/tank/server/bin/tmvt -r report_file_name

11.To confirm this setup, you can access the MDS GUI from a browser, as indicated. Figure 5-1 on page 147 shows the SAN File System console login window. After you have signed in using the CLI_USER and password, you can run the GUI, as described in 7.1.2, Accessing the GUI on page 256.


Figure 5-1 SAN File System Console GUI sign-on window

5.2.7 SAN File System cluster configuration


This section contains information about the SAN File System configuration values that need to be set when running the installation script. As described in step 2 on page 138 in 5.2.6, Install SAN File System cluster on page 138, you first create a configuration file (for example, sfs.conf), and then edit it with your actual values. You then run the installation script, specifying the edited configuration file as input.

The generated configuration file contains the parameters in Table 5-1. The table gives a description of each parameter, and also the values used for our lab setup. If you are using local authentication, you do not need to make any entries for the LDAP-related parameters (LDAP_SERVER through LDAP_ROLE_ID_ATTR, and LDAP_CERT); if you are using LDAP, you need entries for all parameters. Edit the file and insert your actual values. You then use the edited file as an input parameter when running the next step of the installation, step 4 on page 138 in 5.2.6, Install SAN File System cluster on page 138.
Table 5-1   SAN File System configuration file parameters

SERVER_NAME
   Meaning: A uniquely-identifying name for this server. Recommend you use the short host name.
   Example: tank-mds3

CLUSTER_NAME
   Meaning: The name of the cluster, which will be exposed as the first directory under the mount point in UNIX or the disk label in Windows.
   Example: ITSO_GBURG

IP
   Meaning: IP address of this MDS.
   Example: 9.82.22.171

LANG
   Meaning: Set to en_US.utf8 for English or ja_JP.utf8 for Japanese.
   Example: en_US.utf8

LDAP_SERVER
   Meaning: The IP address or resolvable machine name of the LDAP server. You can select local authentication instead by specifying the -noldap option (see 5.5, Local administrator authentication option on page 186).
   Example: 9.42.164.125

LDAP_USER
   Meaning: Distinguished name of an authorized LDAP user. This user must have read access to the directory where the Roles and Users are.
   Example: cn=Manager,o=ITSO

LDAP_PASSWD
   Meaning: Password of LDAP_USER.
   Example: password

LDAP_SECURED_CONNECTION
   Meaning: Set to true if using a secure LDAP connection (the LDAP certificate must be available). Set to false otherwise.
   Example: false

LDAP_BASEDN_ROLES
   Meaning: The base DN that contains the role objects as leaf nodes.
   Example: ou=Roles,o=ITSO

LDAP_ROLE_MEM_ID_ATTR
   Meaning: The attribute of the role object that points to the DN of a user that belongs to that role.
   Example: roleOccupant

LDAP_USER_ID_ATTR
   Meaning: The attribute that holds the user ID.
   Example: uid

LDAP_ROLE_ID_ATTR
   Meaning: The attribute that holds the name of the role.
   Example: cn

RSA_USER
   Meaning: The user name to use to communicate with the RSA II card.
   Example: USERID (default)

RSA_PASSWD
   Meaning: The password to use to communicate with the RSA II card.
   Example: PASSW0RD (default)

CLI_USER
   Meaning: A login that will be used for accessing the SAN File System CLI and GUI. Must belong to a user object in the LDAP directory that is assigned to a specific role (if using LDAP), or be a local OS user ID defined as in 4.1.1, Local authentication configuration on page 100.
   Example: itsoadm

CLI_PASSWD
   Meaning: The password for the CLI_USER.
   Example: xxxxx

TRUSTSTORE_PASSWORD
   Meaning: Specify a password to be used when configuring the truststore.
   Example: xxxxx

LDAP_CERT
   Meaning: Certificate if the LDAP server is using SSL. If LDAP_SECURED_CONNECTION is false, leave this blank.
   Example: (blank)

META_DISKS
   Meaning: A space-separated list of the fully-qualified raw device names for at least one metadata disk.
   Example: /dev/rvpatha

NODE_LIST
   Meaning: Information about subordinate nodes. We found that even if you complete this field in the format shown in the file, you will still be prompted to enter them, as in Example 5-22 on page 143.
   Example: (blank)

SYS_MGMT_IP
   Meaning: Set this value to the IP address of your RSA card.
   Example: 9.82.22.173

Tip: The disk specified in META_DISKS should be at least 2 GB in size.
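For orientation only, the following is a minimal sketch of how an edited configuration file might look for a local-authentication installation, using the lab values from Table 5-1. The simple name=value layout is an assumption made for illustration; always start from the template file generated in step 2 of 5.2.6 rather than creating a file from scratch, and keep any comments and additional fields (such as NODE_LIST) that the generated template contains.

SERVER_NAME=tank-mds3
CLUSTER_NAME=ITSO_GBURG
IP=9.82.22.171
LANG=en_US.utf8
SYS_MGMT_IP=9.82.22.173
RSA_USER=USERID
RSA_PASSWD=PASSW0RD
CLI_USER=itsoadm
CLI_PASSWD=xxxxx
TRUSTSTORE_PASSWD=xxxxx
META_DISKS=/dev/rvpatha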

5.3 SAN File System clients


This section describes the installation and maintenance of the SAN File System client code for Windows 2000/2003, Linux (including zSeries Linux), Solaris, and AIX. The client installation packages are included with the SAN File System server software, so they can be downloaded from any MDS. Before installing, verify that the client has access to all required volumes in the User Pools; one way to sanity-check this on a Linux client is shown below.
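As a quick illustration only (these are standard operating system tools, not SAN File System commands, and the exact device names will differ in your environment), on a Linux client you can list the SCSI disks the kernel currently sees and compare them against the user-pool LUNs assigned to that client:

cat /proc/scsi/scsi
fdisk -l | grep "Disk /dev/"

Each supported client platform has its own equivalent (for example, lsdev and lspv on AIX).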

5.3.1 SAN File System Windows 2000/2003 client


This section explains how to install the client on a Windows 2000 or Windows 2003 client.

Installation prerequisites
Service Pack 4 or higher for Windows 2000 is required. One free drive letter is required to attach the SAN File System global namespace. The SAN File System cluster must be up and running.

Windows client installation steps


These should be performed at the actual console of the client; at the time of writing this redbook, it was not supported (and caused errors) to use a Terminal Services (including Windows Remote Desktop) session for installing the Windows clients. Check the release notes to see if this restriction still applies; if in doubt, install at the physical console. Perform the following steps: 1. Copy the install package from an MDS (/usr/tank/packages/sfs-client-WIN2K3-version.exe or similar name) to a local drive of the Windows client. You can use secure ftp from an MDS, or start the SAN File System console at a browser, select Download Client Software (Figure 7-3 on page 257), and follow the prompts to download the appropriate package to the client to be installed.
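For example (a sketch only: pscp is the command-line secure copy tool that ships with PuTTY, and the MDS address and target folder shown are placeholders from our lab), the package can be pulled from the MDS with a single command run on the Windows client:

pscp root@9.82.22.171:/usr/tank/packages/sfs-client-WIN2K3-opt-2.2.2.82.exe C:\temp\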


2. Run the executable sfs-client-WIN2K3-version.exe on the client to be installed. Note that the same executable file is used for both Windows 2000 and Windows 2003 installations. Select a language for the installation, either English or Japanese (see Figure 5-2).

Figure 5-2 Select language for installation

3. Figure 5-3 shows the welcome window. Click Next to continue.

Figure 5-3 SAN File System Windows 2000 Client Welcome window

4. You will see a security warning. Click Run to continue (see Figure 5-4 on page 151).


Figure 5-4 Security Warning

5. In the next window, you are prompted to enter the configuration parameters, as shown in Figure 5-5 on page 152. Enter the appropriate information in the fields and click Next. The fields are:
SAN File System server name: MDS IP address in dotted decimal. This can be any MDS in the cluster; you can specify the current master MDS if you have trouble choosing one.
SAN File System server port: 1700 (default).
SAN File System preferred drive letter: Enter any free drive letter; the default is T.
SAN File System client name: Enter a name for the Windows client; we recommend using the short host name.
Disable Disk Management Write Signature Dialogue Box: Make sure this box is checked.
SAN File System network connection type: Select TCP.
SAN File System client critical error handling policy: The default is Log.

Important: It is important to check the Disable Disk Management Write Signature dialogue box, as this prevents Windows from writing its own default signature on SAN File System owned volumes. The box is checked by default.

Tip: The installation option, SAN file system client critical error handling policy, determines how the client will behave if it gets critical errors when trying to access the SAN File System global namespace. It has three possible values:

Log (default): SAN File System client errors are logged to the system log of the client
machine.

freezefs: The client does not attempt to write any more data to the SAN File System
drive, and halts communication with the MDS cluster.

systemhalt: The client system performs a shutdown.


We recommend choosing the default Log behavior unless specifically advised otherwise.


Figure 5-5 Configuration parameters

6. A confirmation/review window will appear (Figure 5-6 on page 153). Verify that the information is correctly entered, and click Next.


Figure 5-6 Review installation settings

7. On a Windows 2000 client, the installation will now proceed. Skip to step 10 on page 154.
8. Only on Windows 2003, you will get a pop-up informing you that you will have to click twice (see Figure 5-7). Click OK.

Figure 5-7 Security alert warning


9. On Windows 2003, you have to confirm twice to accept the installation of the IBM SANFS Cluster Bus Enumerator driver (Figure 5-8) and the IBM SANFS Cluster Volume Manager driver (Figure 5-9). These are required for Plug and Play integration of the SAN File System drive with Windows Explorer. Click Yes on each window.

Figure 5-8 Driver IBM SANFS Cluster Bus Enumerator

Figure 5-9 Driver IBM SANFS Cluster Volume Manager

10.After successful installation, you are prompted to start the SAN File System client immediately, as shown in Figure 5-10 on page 155. Click Yes.


Figure 5-10 Start SAN File System client immediately

11.A final window informs you that the installation is complete. You should now be able to view the SAN File System namespace, attached at the drive letter specified. Open Windows Explorer and the new drive letter T: should be displayed, as in Figure 5-11. Notice the drive label; this matches the cluster name specified when installing the MDS cluster (the CLUSTER_NAME parameter in 5.2.7, SAN File System cluster configuration on page 147). In this case, the Windows client is attached to a SAN File System cluster with a CLUSTER_NAME of ATS_GBURG.

Figure 5-11 Windows client explorer


12.Verify that the driver has started successfully. For Windows 2000, select Computer Management → System Tools → System Information → Software Environment → Drivers. You will see the SAN File System drivers in a running state, as in Figure 5-12.

Figure 5-12 Windows 2000 client SAN File System drivers

For Windows 2003, select Computer Management → System Tools → Device Manager (see Figure 5-13).

Figure 5-13 Windows 2003 client SAN File System drivers

13.You can also see the SAN File System Helper service in the Services applet, as shown in Figure 5-14 on page 157. This service is used for some internal functions, including tracing; it does not stop or start the SAN File System driver.


Figure 5-14 SAN File System helper service

The installation of the Windows client is complete.

Removing the SAN File System Windows client


To remove the SAN File System client, select Add/Remove Programs from the Control Panel. Select IBM SAN File System Client in the Currently Installed Programs list. Click Change/Remove. Confirm you want to remove the client and reboot when prompted. After rebooting, the SAN File System global namespace is no longer visible.

Stopping the SAN File System Windows client


The SAN File System Windows client can only be stopped by shutting down the Windows machine. To stop the SAN File System Windows client from starting automatically at boot time, you need to edit the registry key HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\STFS\Start. Its default value is 2, which starts the client at boot time. If you do not want the client to start at boot time, set this value to 3. The next time the machine boots, SAN File System will not be started.
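For example, instead of editing the value by hand in the Registry Editor, the same change can be made from a command prompt with the standard reg utility (shown here as a sketch; the key and values are exactly those described above):

reg add HKLM\System\CurrentControlSet\Services\STFS /v Start /t REG_DWORD /d 3 /f

Setting the data value back to 2 re-enables automatic start at boot time.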

Manual start of the SAN File System Windows client


If the SAN File System Windows client has been prevented from automatically starting, as described in the previous section, use the command net start stfs to start it. You cannot start the client from the Services applet, and the only way to stop it is to shut down Windows.
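To confirm the current state of the driver after a manual start, the standard Windows service control query can be used against the same service name (for illustration):

sc query stfs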

Maintaining the Windows client


This section describes how to view and change the properties of a Windows-based client using the Microsoft Management Console (MMC). SAN File System provides a snap-in to the MMC for changing certain parameters in the Windows client.


Setup the MMC for SAN File System


1. To start MMC, select Start → Run. Type mmc in the Run window and click OK to launch MMC, as shown in Figure 5-15.

Figure 5-15 Launch MMC

2. To add the Snap-in for SAN File System, select Console → Add/Remove Snap-in, as shown in Figure 5-16.

Figure 5-16 Add the Snap-in for SAN File System

3. The Add/Remove Snap-in window opens. Click Add, as shown in Figure 5-17 on page 159.


Figure 5-17 Add Snap-in

4. Scroll down to select the IBM TotalStorage File System Snap-in and click Add, as shown in Figure 5-18.

Figure 5-18 Add the IBM TotalStorage File System Snap-in


5. Click OK or add other Snap-ins as desired. For example, the Computer Management Snap-in could be useful to monitor the client. When finished, click OK, as in Figure 5-19.

Figure 5-19 Add/Remove Snap-in

6. Select Console → Save As, as shown in Figure 5-20, to save the MMC console for future use.

Figure 5-20 Save MMC console


7. Enter a location and file name for the MMC console and click Save. We called the MMC console SANFS and saved it to the Desktop, as shown in Figure 5-21. This creates an icon on our desktop, which we can click to launch the console in the future.

Figure 5-21 Save MMC console to the Windows desktop

8. The MMC has now been configured for use with SAN File System.

Using the MMC for SAN File System


To launch the MMC, click the .msc file where you saved the console in the previous section. In our example, we would click SANFS.msc from the Desktop. There are three categories provided: Global Properties, Trace Properties, and Volume Property. 1. Click Global Properties in the left-hand column, as shown in Figure 5-22.

Figure 5-22 IBM TotalStorage File System Snap-in Properties


The following global properties can be changed using MMC:
DisableOplocks: Controls the setting of the Oplocks feature that provides improved CIFS performance by caching file data. The default value is 0, which indicates that Oplocks are enabled.
DisableShortNames: Controls the setting of the ShortNames feature that enables the generation of the MS-DOS 8.3 name format. The default value is 0, which indicates that short names are enabled.
LogInternalErrors: Enables or disables internal error logging. The default value is 0, which indicates that logging is disabled.
WriteThrough: Forces all cached writes to be synchronously flushed to disk. The default value is 0, which indicates that this action is disabled.

2. To change any of the global properties, double-click it. In this example, we are changing the DisableShortNames property. The DisableShortNames property window will open. We change the value to 1 and click OK to save, as shown in Figure 5-23.

Figure 5-23 DisableShortNames

3. Verify that the value has been changed for the DisableShortNames property in the right hand column, as shown in Figure 5-24. For changes to global properties to take effect, reboot the Windows client.

Figure 5-24 Verify value for DisableShortNames


4. Select Trace Properties, as shown in Figure 5-25. These Trace Properties can be changed:
Categories: Lists the upper-driver trace classes enabled for tracing.
CsmCategories: Lists the CSM trace classes enabled for tracing.

5. To change the Trace Properties:
a. Double-click Categories or CsmCategories.
b. Edit the list of trace classes.
c. Click OK to close the window.
Changes to trace properties take effect immediately.

Figure 5-25 Trace Properties

6. The following Volume Properties can be modified:
Preferred drive letter for the SAN File System namespace
Windows client name
The IP address of the MDS
The TCP port number at which the MDS listens

7. To change a Volume Property, click Volume Property in the left-hand column, then right-click the volume that you want to modify and select Properties, as shown in Figure 5-26. The volume represents the SAN File System namespace.

Figure 5-26 Volume Properties


8. The Volume Properties window will open. Modify any of the values and click OK, as shown in Figure 5-27.

Figure 5-27 Modify Volume Properties

9. For changes to Volume Properties to take effect, close MMC and reboot the Windows client.

5.3.2 SAN File System Linux client installation


This topic provides the general steps for installing the SAN File System on a Linux client. These steps must be performed on each Linux client in the SAN File System. The steps are similar for any supported Linux SAN File System client platform (including pSeries and zSeries); this particular sequence shows Red Hat on Intel. Some specifics for the zSeries SAN File System client are in 5.3.5, SAN File System zSeries Linux client installation on page 178. The client installation package is called sfs.client.linux_RHEL-version.i386.rpm. For a SUSE distribution, the package is called sfs.client.linux_SLES8-version.platform.rpm (platform is i386 for Intel, ppc64 for pSeries Linux, or s390 for zSeries Linux).
1. You can load this package on the client from the SAN File System package repository. The package repository is located on each Metadata server engine. Use either scp or the SAN File System console to transfer the package from an MDS. Copy the software package from the master MDS, as shown in Example 5-26.
Example 5-26 Copy SAN File system client package scp root@mds1:/usr/tank/packages/sfs.client.linux_RHEL-2.2.2-82.i386.rpm .

2. To install the client package, use the rpm command:


rpm -ihv sfs.client.linux_RHEL-2.2.2-82.i386.rpm

Tip: If you are upgrading the SAN File System client from a previous version, first remove the old package using rpm -e, then install the new package.

The output of the rpm -ihv command should be similar to that shown in Example 5-27 on page 165.


Example 5-27   Install SAN File System client package
[root@prague code]# rpm -ihv sfs.client.linux_RHEL-2.2.2-82.i386.rpm
Preparing...                ########################################### [100%]
   1:sfs.client.linux_RHEL  ########################################### [100%]
Run /usr/tank/client/bin/setupstclient -prompt
to configure and start the SAN File System client.
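If you are upgrading from a previous version, as noted in the tip above, the sequence might look like the following sketch (the installed package name can be confirmed first with rpm -qa; the version shown is simply the one used in our lab):

rpm -qa | grep sfs.client
rpm -e sfs.client.linux_RHEL
rpm -ihv sfs.client.linux_RHEL-2.2.2-82.i386.rpm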

3. Make sure that the master MDS is running, then configure and start the client with the setupstclient command:
/usr/tank/client/bin/setupstclient -prompt

4. You will be prompted to enter values for the client configuration, as in Example 5-28:
SAN File System server name (no default)
SAN File System server port (the default is 1700)
SAN File System mount point (no default)
SAN File System client name (the default is the short version of the host name)
SAN File System network connection type (the default is TCP)
SAN File System client critical error handling policy (the default is log)
SAN File System candidate disks

Example 5-28 Linux setupstclient [root@prague /]# /usr/tank/client/bin/setupstclient -prompt IBM SAN File System client setup utility The IBM SAN File System client setup utility performs the following functions: 1. Prompts you for information necessary to set up the SAN File System client. 2. (Optional) Saves the configuration you specify to the file: /usr/tank/client/config/stclient.conf 3. (Optional) Runs the setup process: a. Loads the SAN File System driver as a kernel module (using the insmod(1) command). b. Creates the SAN File System client (using the stfsclient(1) command). c. Mounts the SAN File System (using the stfsmount(1) command). Because the utility does not make changes until the configuration file is saved and the setup process begins, you can press Ctrl-c to exit the utility without making changes at any time before that point. To use the default value that appears in [square brackets], press Enter. A dash [-] indicates no default is available. Device candidate list (devices) =============================== The SAN File System client determines which disks to use as SAN File System user data volumes by searching a list of disks, called device candidates. The device candidate list consists of those devices that have device-special files in the directory you specify. Device candidate list [pat=/dev/sd*[a-z]]: pat=/dev/sd* Client name (clientname) Chapter 5. Installation and basic setup for SAN File System


======================== You can set the name of this SAN File System client. The name can be any string, but must be unique. By default, the client setup utility uses the host name (output of the hostname command). Client name [linux]: LIXPrague Metadata server IP address (server_ip) ====================================== During setup, the SAN File System client must connect to one of the Metadata servers in the cluster. After the client establishes a connection to the server, the server notifies the client of any other servers in the cluster. Specify the IP address for any Metadata server in the cluster to establish the connection. Metadata server connection IP address [-]: 9.82.22.172 Metadata server port number (server_port) ========================================= The SAN File System client must connect to a specific port on the Metadata server. In most cases the Metadata server uses port 1700. Accept this default unless you know the Metadata server was configured to listen on a different port. Metadata server port number [1700]: SAN File System mount point (mount_point) ========================================= The client setup utility mounts the SAN File System to a specified mount point (directory) and creates the file system image. If the specified mount point does not exist it will be created. Once mounted, the directory tree for the file system image appears at that mount point. Mount point [/mnt/sanfs]: /sfs2 Read-only file system (readonly) ================================ If you mount the SAN File System as read-only, data and metadata in the file system can be viewed, but not modified. Accessing a file system object does not affect its access time attribute. Mount file system read-only [No]: NLS converter [convertertype]: =============================== The NLS converter tells the Metadata server how to convert strings from the SAN File System client into Unicode. NLS converter [ISO-8859-1]:


Transport protocol (nettype) ============================ The transport protocol determines how the SAN File System client connects to the Metadata server. Specify either tcp or udp. Transport protocol [tcp]: Record mount in /etc/mtab (etc_mtab) ==================================== By default, if the file system mount succeeds, the client setup utility adds an entry for the file system image to /etc/mtab. You can choose to not record the mount in this file. Record the mount [Yes]: Show number of free blocks (always_empty) ========================================= By default, the number of blocks reported as free blocks by statfs() is actually the number of blocks in partitions that are not assigned to a fileset. Some programs might mistakenly report that there is no free space left in partitions assigned to the fileset, when there is actually free space available. This option forces statfs() to report the number of free blocks as being one less than the number of blocks in the file system. Always indicate blocks free [No]: Display verbose messages (verbose) ================================== By default, the client setup utility runs quietly, suppressing informational messages generated by the commands. You can choose to display these messages by specifying verbose. Display verbose output [No]: yes Configuration data collection complete. Save configuration ================== You can save the configuration that you just completed to a file. You can modify and use this file to set up additional SAN File System clients on other machines. Save configuration [Yes]: Creating configuration file: /usr/tank/client/config/stclient.conf Run SAN File System client setup ================================


The configuration utility has not made any changes to your system configuration. - Enter No to quit without configuring the SAN File System client on this system. - Enter Yes to start the SAN File System client Run the SAN File System client setup utility [Yes]: HSTCL0031I The client named LIXPrague was created with client identifier 159ec800 for SAN File System Metadata server at IP address 9.42.164.114, port 1700. HSTCL0068I Establishing 256 candidate SAN File System user data disk devices. HSTMO0015I Mounted SAN File System client LIXPrague of file-system type sanfs over directory /sfs2 in read-write mode. SAN File System client setup complete.

In most cases, you can accept the defaults.
5. To validate that the SAN File System was installed properly on the Linux client, use the cat command:
cat /usr/tank/client/VERSION

The results should be similar to Example 5-29.


Example 5-29   Linux client version
VERSION 2.2.2 RELEASE 82 INTERFACE 0

6. Use the mount command to verify that the SAN File System is mounted on the client. The mount point for the SAN File System should be displayed, /sfs2 in this case, as shown in Example 5-30.
Example 5-30   SAN File System is mounted
[root@prague root]# mount
/dev/sda1 on / type ext2 (rw)
none on /proc type proc (rw)
usbdevfs on /proc/bus/usb type usbdevfs (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sda7 on /home type ext2 (rw)
none on /dev/shm type tmpfs (rw)
/dev/sda2 on /tmp type ext2 (rw)
/dev/sda3 on /usr type ext2 (rw)
/dev/sda5 on /var type ext2 (rw)
LIXPrague on /sfs2 type sanfs (rw)

5.3.3 SAN File System Solaris installation


This topic provides the general steps for installing the SAN File System on a Solaris client. These steps must be performed on each Solaris client in the SAN File System. The client installation package is called sfs.client.solaris9.version. You can load this package on the client from the SAN File System package repository. The package repository is located on each MDS engine. Use either scp (secure copy) or the SAN File System console to transfer the package from an MDS.

1. On the client, change the directories to a temporary directory:


cd /tmp

2. Copy the software package from the master Metadata server:


scp userID@server_host_name:/usr/tank/packages/sfs.client.solaris9.2.2.2-116 .

3. Install the client package:


pkgadd -d sfs.client.solaris9.2.2.2-116

4. Enter All (the default) when prompted to select the packages to be installed.
5. Enter y when prompted to continue with the installation.
6. Configure and start the client with the setupstclient command:
/usr/tank/client/bin/setupstclient -prompt

You will be prompted to enter values for the client configuration:
SAN File System server name (no default)
SAN File System server port (the default is 1700)
SAN File System mount point (no default)
SAN File System client name (the default is the short version of the host name)
SAN File System network connection type (the default is TCP)
SAN File System client critical error handling policy (the default is log)
In most cases, you can accept the defaults. Make sure to enter the actual IP address of an MDS. The execution of the setupstclient command is similar to that shown in Example 5-28 on page 165.
7. To validate that the SAN File System was installed properly on the Solaris client, use the cat command:
cat /usr/tank/client/VERSION

The results should be similar to Example 5-31.


Example 5-31   Solaris client version
VERSION 2.2.2 RELEASE 116 INTERFACE 0

8. Use the mount command to verify that the SAN File System is mounted on the client. The mount point for the SAN File System should be displayed.

5.3.4 SAN File System AIX client installation


1. Check the Release Notes or the SAN File System product support Web site to confirm that you have the correct AIX level with fixes (PTFs) installed. At the time of writing, these were (a quick way to check the installed level and fixes is shown below):
AIX 5L Version 5.1, Maintenance Level 3, and bos.mp/bos.up at 5.1.0.58 or higher with fix IY50330 or higher, 32-bit
AIX 5L Version 5.2, bos.mp at 5.2.0.18 or higher with fix IY50331 or higher, 32- and 64-bit
AIX 5L Version 5.3, basic release level, 64-bit
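For illustration, the installed level and fixes can be checked with standard AIX commands (replace the APAR number with the fix relevant to your AIX level):

oslevel -r
instfix -ik IY50331
lslpp -l bos.mp bos.up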


2. Enable asynchronous input/output for the AIX client if you are running AIX 5L V5.2 or V5.3. Start SMIT and select Devices → Asynchronous I/O → Asynchronous I/O (Legacy) → Change/Show Characteristics of Asynchronous I/O. The screen in Example 5-32 should appear.
Example 5-32   Enable asynchronous I/O
                Change / Show Characteristics of Asynchronous I/O

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                                        [Entry Fields]
  MINIMUM number of servers                            [1]              #
  MAXIMUM number of servers per cpu                    [10]             #
  Maximum number of REQUESTS                           [4096]           #
  Server PRIORITY                                      [39]             #
  STATE to be configured at system restart              available       +
  State of fast path                                    enable          +
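As an alternative to the SMIT panel, the same aio0 settings can usually be made from the command line; this is a sketch using the standard AIX aio0 device attributes, so verify the attribute names on your system with lsattr before changing anything:

lsattr -El aio0
chdev -l aio0 -P -a autoconfig=available
mkdev -l aio0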

3. Exit SMIT.
4. Run cfgmgr to apply the changes.
5. Copy the install package from an MDS (/usr/tank/packages/sfs.client.aix5x) to a local directory of the AIX client. Make sure to select the appropriate package for your version of AIX; the client packages are called sfs.client.aix51, sfs.client.aix52, and sfs.client.aix53 for AIX 5L V5.1, V5.2, and V5.3, respectively. You can use secure ftp from an MDS, or start the SAN File System console (select Download Client Software) and follow the prompts. We copied the install package to the directory /tmp/SANFS_Client.
6. Use the AIX installp command or SMIT to install. (To use SMIT, select Software Installation and Maintenance → Install and Update Software → Install Software.) Complete the parameters as shown in Example 5-33; the install should complete as shown in Example 5-34 on page 171.
Example 5-33   Installation directory and file selection
                     Install and Update from ALL Available Software

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

                                                        [Entry Fields]
* INPUT device / directory for software               .
* SOFTWARE to install                                 [+ 2.2.2.82  SAN File >  +
  PREVIEW only? (install operation will NOT occur)     no                      +
  COMMIT software updates?                             yes                     +
  SAVE replaced files?                                 no                      +
  AUTOMATICALLY install requisite software?            yes                     +
  EXTEND file systems if space needed?                 yes                     +
  OVERWRITE same or newer versions?                    no                      +
  VERIFY install and check file sizes?                 no                      +
  DETAILED output?                                     no                      +
  Process multiple volumes?                            yes                     +
  ACCEPT new license agreements?                       no                      +
  Preview new LICENSE agreements?                      no                      +


Example 5-34   Installation output
                                 COMMAND STATUS

Command: OK            stdout: yes           stderr: no

Before command completion, additional instructions may appear below.

[TOP]
I:sfs.client.aix52 2.2.2.82

+-----------------------------------------------------------------------------+
                    Pre-installation Verification...
+-----------------------------------------------------------------------------+
Verifying selections...done
Verifying requisites...done
Results...

SUCCESSES
---------
  Filesets listed in this section passed pre-installation verification
  and will be installed.

  Selected Filesets
  -----------------
  sfs.client.aix5.2 2.2.2.82                 # SAN File System client for A...

  << End of Success Section >>

FILESET STATISTICS
------------------
    1  Selected to be installed, of which:
        1  Passed pre-installation verification
  ----
    1  Total to be installed

+-----------------------------------------------------------------------------+
                         Installing Software...
+-----------------------------------------------------------------------------+

installp: APPLYING software for:
        sfs.client.aix52 2.2.2.82

. . . . . << Copyright notice for sfs.client.aix51-opt >> . . . . . . .
 Licensed Materials - Property of IBM
 5765-FS1 5765-FS2
 (C) Copyright International Business Machines Corp. 2003-2004
 All rights reserved.
 US Government Users Restricted Rights - Use, duplication or disclosure
 restricted by GSA ADP Schedule Contract with IBM Corp.
. . . . . << End of copyright notice for sfs.client.aix52 >>. . . .

Run /usr/tank/client/bin/setupstclient -prompt
to configure and start the SAN File System client.

Finished processing all filesets.  (Total time:  10 secs).

+-----------------------------------------------------------------------------+
                                Summaries:
+-----------------------------------------------------------------------------+

Installation Summary
--------------------
Name                        Level           Part        Event       Result
-------------------------------------------------------------------------------
sfs.client.aix52            2.2.2.82        USR         APPLY       SUCCESS
sfs.client.aix52            2.2.2.82        ROOT        APPLY       SUCCESS
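If you prefer the command line to SMIT, an equivalent installp invocation is sketched below; the flags are the standard apply, commit, auto-requisite, extend-file-systems, and accept-license options, and the directory and fileset name should be adjusted to your environment:

installp -acgXY -d /tmp/SANFS_Client sfs.client.aix52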

Configuring the AIX client to the SAN File System server


1. Log in to the AIX system as root and create a directory that will be used by the SAN File System client as the mount point, using the mkdir command. We created a directory /mnt/tank for this task (using mkdir /mnt/tank). 2. Run the client setup utility by using setupstclient; this should be in your path as /usr/sbin/setupstclient:
# setupstclient

The client setup utility:
Prompts you for information necessary to set up the SAN File System client.
(Optional) Saves the configuration you specify to the configuration file /usr/tank/client/config/stclient.conf.
(Optional) Runs the setup process, which:
- Loads the SAN File System driver as a kernel extension (using the stfsdriver command).
- Creates the SAN File System client (using the stfsclient command).
- Mounts the SAN File System (using the stfsmount command).

The client setup utility prompts for the following information:

Path to the kernel extension: The default is /usr/tank/client/bin/stfs. The client setup utility loads the SAN File System driver as a kernel extension, named kernextname, and this requires the path to be defined as described above.

Device candidate list: List of disks to use as data volumes. The default is pat=/dev/rhdisk*. Since we are using SVC, we will change this to pat=/dev/rvpath*.

Client name: Enter a logical name for your client. We recommend using the host name; AIXRome in our case.

Metadata server connection IP address: We entered the IP address 9.82.22.172, our master MDS. Whenever it starts, the SAN File System client must connect to one of the MDSs in the cluster to initiate communication. We recommend setting this parameter to the master MDS initially.

Metadata server port number: The default is 1700.

Mount point: The default is /mnt/sfs. The client setup utility mounts the SAN File System global namespace to a specified mount point (directory). Once mounted, the directory tree for the global namespace file system image appears at that mount point. This directory must exist. We already created the /mnt/sfs directory, so we will use it.

Mount file system read-only: The default is No. If you mount the SAN File System as read-only, you will not be able to add or edit any files in the global namespace.

Disable automatic restart: By default, the SAN File System client restarts when the system starts. Enter Yes to enable automatic restart of the SAN File System client at startup and No to disable automatic restart.

SAN File System kernel extension major number: The default is 99. Specify a major number that will be used to create the driver instance.

NLS converter: The default is ISO-8859-1. The NLS converter tells the MDS how to convert strings from the SAN File System client into Unicode.

Transport protocol: The default is tcp.

Method of handling critical errors: The default is log.
Important: The installation option, Method of handling critical errors, determines how the client will behave if it gets critical errors when trying to access the SAN File System global namespace. It has three possible values:

Log (default): SAN File System client errors are logged to the system log of the client
machine.

freezefs: The client does not attempt to write any more data to the SAN File System
drive, and halts communication with the MDS cluster.

systemhalt: The client system performs a shutdown.


We recommend choosing the default log behavior unless specifically advised otherwise.

Attention: If the mount point directory does not already exist, you will get the error message Directory does not exist and the installation will stop. Create the directory and re-start the installation.

Display verbose output: The default is No. By default, the client setup utility runs quietly,
suppressing informational messages generated by the commands. You can choose to display these messages by specifying verbose.

Save configuration: The default is Yes. This creates the configuration file with the name
/usr/tank/client/config/stclient.conf. See Example 5-40 on page 175 for the sample contents of the configuration file.

Run the SAN File System client setup utility: The default setting is Yes.
Now the setup utility runs and completes with the messages shown in Example 5-35.
Example 5-35 Client setup utility complete HSTDR0029I The kernel extension was successfully loaded from file /usr/tank/client/bin/stfs kernel module ID (kmid) = 5f944bc. HSTDR0030I File system driver is initialized and ready to handle file-system type 20. SAN File System client setup complete.


The SAN File System should now be mounted, and this can be verified using the mount command on the AIX system. The output should be similar to Example 5-36. Note the mounted file system /mnt/sfs, of type sanfs.
Example 5-36   Mount verification
# mount
  node       mounted          mounted over          vfs    date          options
-------- ---------------   --------------------   ------ ------------  ---------------
         /dev/hd4          /                      jfs    Jun 03 10:54  rw,log=/dev/hd8
         /dev/hd2          /usr                   jfs    Jun 03 10:54  rw,log=/dev/hd8
         /dev/hd9var       /var                   jfs    Jun 03 10:54  rw,log=/dev/hd8
         /dev/hd3          /tmp                   jfs    Jun 03 10:54  rw,log=/dev/hd8
         /dev/hd1          /home                  jfs    Jun 03 10:55  rw,log=/dev/hd8
         /proc             /proc                  procfs Jun 03 10:55  rw
         /dev/hd10opt      /opt                   jfs    Jun 03 10:55  rw,log=/dev/hd8
         /dev/lv00         /usr/sys/inst.images   jfs    Jun 03 10:55  rw,log=/dev/hd8
         SANFS             /mnt/sfs               sanfs  Jun 03 17:05  rw

Disconnecting the AIX SAN File System client


This procedure will unmount the SAN File System global namespace and unload the SAN File System kernel extension. 1. Check that the client system is not using any files in the SAN File System by using the fuser utility, for example, fuser /mnt/sfs. 2. Unmount the SAN File System by using the stfsumount command, as shown in Example 5-37.
Example 5-37 Unmount the SAN File System # cd /usr/tank/client/bin # ./stfsumount /mnt/sfs HSTUM0007I Unmounted the file system image with vfsnumber 11

3. Now issue the rmstclient command. This will disconnect the client from the SAN file system, as shown in Example 5-38. It will also unmount the SAN File System if it was not already unmounted.
Example 5-38 Remove AIX SAN File System client # /usr/tank/client/bin/rmstclient -noprompt Using configuration file: /usr/tank/client/config/stclient.conf HSTDR0033I SAN File System driver shut down successfully. HSTDR0035I The kernel extension 62777f8 was unloaded successfully. SAN File System client removal complete.

Reconnecting the AIX SAN File System client


To re-connect the AIX client, re-run the setupstclient command, as shown in Example 5-39 on page 175. Because the client configuration file /usr/tank/client/config/stclient.conf already exists, it will load the kernel extension and automatically mount the SAN File System using the parameters specified at install. If you want to change the configuration, you can simply edit the parameters in this file. You can use the noprompt option (as shown) to avoid being prompted for input.


Example 5-39 Re-connecting the AIX SAN File System client # ./setupstclient -noprompt Using configuration file: /usr/tank/client/config/stclient.conf HSTDR0029I The kernel extension was successfully loaded from file /usr/tank/client/bin/stfs kernel module ID (kmid) = 62777f8. HSTDR0030I File system driver is initialized and ready to handle file-system type 20.

Uninstalling the AIX SAN File System client


To uninstall the client, first disconnect it, as shown in Disconnecting the AIX SAN File System client on page 174. Then use SMIT or the installp command to remove the client software from the system, for example:
installp -u sfs.client.aix52

Make sure to specify the actual installed SAN File System package for your system.

AIX client configuration file


The AIX client uses the configuration file /usr/tank/client/config/stclient.conf. Example 5-40 shows the sample contents of this file for one of our clients.
Example 5-40 stclient.conf file # cat stclient.conf # # SAN File System client configuration for AIX # # # # # # # # # # SAN File System kernel extension (kernextname) ============================================== The client setup utility loads the SAN File System driver as a kernel extension and creates the file system driver instance. You must specify the path to the location of the SAN File System kernel extension. Path to the kernel extension [/usr/tank/client/bin/stfs]:

kernextname=/usr/tank/client/bin/stfs # # # # # # # # # # Device candidate list (devices) =============================== The SAN File System client determines which disks to use as SAN File System user data volumes by searching a list of disks, called device candidates. The device candidate list consists of those devices that have device-special files in the directory you specify. Device candidate list [pat=/dev/rhdisk*]:

devices=pat=/dev/rvpath* # # # # # Client name (clientname) ======================== You can set the name of this SAN File System client. The name can be any string, but must be unique. By


# default, the client setup utility uses the host name # (output of the hostname command). # # Client name [Rome]: clientname=AIXRome # # # # # # # # # # # # Metadata server IP address (server_ip) ====================================== During setup, the SAN File System client must connect to one of the Metadata servers in the cluster. After the client establishes a connection to the server, the server notifies the client of any other servers in the cluster. Specify the IP address for any Metadata server in the cluster to establish the connection. Metadata server connection IP address [-]:

server_ip=9.82.22.172 # # # # # # # # # Metadata server port number (server_port) ========================================= The SAN File System client must connect to a specific port on the Metadata server. In most cases the Metadata server uses port 1700. Accept this default unless you know the Metadata server was configured to listen on a different port. Metadata server port number [1700]:

server_port=1700 # # # # # # # # # # SAN File System mount point (mount_point) ========================================= The client setup utility mounts the SAN File System to a specified mount point (directory) and creates the file system image. If the specified mount point does not exist it will be created. Once mounted, the directory tree for the file system image appears at that mount point. Mount point [/mnt/sanfs]:

mount_point=/mnt/sfs # # # # # # # # Read-only file system (readonly) ================================ If you mount the SAN File System as read-only, data and metadata in the file system can be viewed, but not modified. Accessing a file system object does not affect its access time attribute. Mount file system read-only [No]:

readonly=No # Disable automatic restart(autorestart) # =======================================


# # # # # # # #

By default, the SAN File System client restarts when the system starts. - Enter Yes to enable automatic restart of the SAN File System client at startup. - Enter No to disable automatic restart of the SAN File System client at startup. Enable automatic restart at startup [Yes]:

autorestart=Yes # # # # # # # SAN File System kernel extension major number (majornumber) =========================================================== SAN File System driver requires a major number while creating a file system driver instance. Please specify a major number. SAN File System kernel extension major number [99]:

majornumber=99 # # # # # # # NLS converter [convertertype]: =============================== The NLS converter tells the Metadata server how to convert strings from the SAN File System client into Unicode. NLS converter [ISO-8859-1]:

convertertype=ISO-8859-1 # # # # # # # # Transport protocol (nettype) ============================ The transport protocol determines how the SAN File System client connects to the Metadata server. Specify either tcp or udp. Transport protocol [tcp]:

nettype=tcp # # # # # # # # # # # # # Error handling (stfserror) ========================== All SAN File System client errors are logged to the system log of the client machine. There are some error conditions that may require additional measures, such as when an application exits and a subsequent hardware failure prevents data from being committed to disk. For these types of error conditions, you can select the freezefs or systemhalt options. The freezefs option prevents the SAN File System from writing additional data to disk and will halt communication with the Metadata servers. The systemhalt option forces the client system to abruptly shut down. Choose either log, freezefs, or systemhalt.

# Method of handling critical errors [log]: stfserror=log


# # # # # # # # #

Display verbose messages (verbose) ================================== By default, the client setup utility runs quietly, suppressing informational messages generated by the commands. You can choose to display these messages by specifying verbose. Display verbose output [No]:

verbose=Yes

5.3.5 SAN File System zSeries Linux client installation


This section explains how to install the client on a zSeries Linux partition.

Installation prerequisites
The SAN File System client for Linux for IBM eServer zSeries supports the following configurations:
Supports the 31-bit SLES8, Service Pack 3 distribution under z/VM 5.1 or later, or directly within an LPAR, on any generally available zSeries model that supports the co-required OS and software stack. Consult your system documentation or system administrator for details of setting up an LPAR or z/VM environment with SLES8 Linux.
Supports the use of a SCSI SAN with zSeries through the zFCP driver (fixed block SCSI), with IBM ESS, DS6000, and DS8000 storage LUNs. Correct configuration of the disks (LUNs) is very important. For detailed information about this task, see the Redpaper Getting Started with zSeries Fibre Channel Protocol, REDP-0205.
The SAN File System cluster must be up and running.

zSeries client installation steps


These steps should be performed at the actual console of the zSeries client.
1. Copy the client package from an MDS using secure ftp, then install the package using rpm, as shown in Example 5-41. The package name has the format sfs.client.linux_SLES8-version.s390.rpm.
Example 5-41   Install zSeries client
sanfs01:~ # rpm -ivh sfs.client.linux_SLES8-2.2.2-98.s390.rpm
sfs.client.zSeries_SLES8 ##################################################
Run /usr/tank/client/bin/setupstclient -prompt
to configure and start the SAN File System client.

2. Configure the zSeries SAN File System client by running the setupstclient command from the client installation directory, as shown in Example 5-42. Specify appropriate values for your environment, in particular, the MDS TCP/IP address.
Example 5-42 Configure zSeries client sanfs01:~ # cd /usr/tank/client/bin sanfs01:/usr/tank/client/bin: # setupstclient Using configuration file: /usr/tank/client/config/stclient.conf IBM SAN File System client setup utility


The IBM SAN File System client setup utility performs the following functions:
1. Prompts you for information necessary to set up the SAN File System client.
2. (Optional) Saves the configuration you specify to the file:
   /usr/tank/client/config/stclient.conf
3. (Optional) Runs the setup process:
   a. Loads the SAN File System driver as a kernel module (using the insmod(1) command).
   b. Creates the SAN File System client (using the stfsclient(1) command).
   c. Mounts the SAN File System (using the stfsmount(1) command).

Because the utility does not make changes until the configuration file is saved and the
setup process begins, you can press Ctrl-c to exit the utility without making changes at
any time prior to that point.

To use the default value that appears in [square brackets], press Enter.
A dash [-] indicates no default is available.

Device candidate list (devices)
===============================
The SAN File System client determines which disks to use as SAN File System user data
volumes by searching a list of disks called device candidates. The device candidate list
consists of those devices that have device-special files in the directory you specify.

Device candidate list [pat=/dev/sd*[a-z]]:

Client name (clientname)
========================
You can set the name of this SAN File System client. The name can be any string, but
must be unique. By default, the client setup utility uses the host name (output of the
hostname command).

Client name [sanfs01]:

Metadata server IP address (server_ip)
======================================
During setup, the SAN File System client must connect to one of the metadata servers in
the cluster. After the client establishes a connection to the server, the server notifies
the client of any other servers in the cluster. Specify the IP address for any metadata
server in the cluster to establish the connection.

Metadata server connection IP address [192.168.71.75]:

Metadata server port number (server_port)
=========================================
The SAN File System client must connect to the client server port on the metadata server.
In most cases, the metadata server uses port 1700. Accept this default unless you know
that the metadata server was configured to listen on a different port. The sfscli command
statserver -netconfig will print the client server port.

Metadata server port number [1700]:

SAN File System mount point (mount_point)
=========================================
The client setup utility mounts the SAN File System to a specified mount point (directory)
and creates the file system image. If the specified mount point does not exist, it is
created. Once mounted, the directory tree for the file system image appears at that
mount point.

Mount point [/mnt/sanfs]:

Read-only file system (readonly)
================================
If you mount the SAN File System as read-only, data and metadata in the file system can
be viewed, but not modified. Accessing a file system object does not affect its access
time attribute.

Mount file system read-only [No]:

Disable automatic restart (autorestart)
=======================================
By default, the SAN File System client restarts when the system starts.
- Enter Yes to enable automatic restart of the SAN File System client at startup.
- Enter No to disable automatic restart of the SAN File System client at startup.

Enable automatic restart at startup [No]:

NLS converter (convertertype):
===============================
The NLS converter tells the metadata server how to convert strings from the SAN File
System client into Unicode.

NLS converter [ISO-8859-1]:

Transport protocol (nettype)
============================
The transport protocol determines how the SAN File System client connects to the
metadata server. Specify either tcp or udp.

Transport protocol [tcp]:

Record mount in /etc/mtab (etc_mtab)
====================================
By default, if the file system mount succeeds, the client setup utility adds an entry
for the file system image to /etc/mtab. You can choose not to record the mount in this
file.

Record the mount [Yes]:

Show number of free blocks (always_empty)
=========================================
By default, the number of blocks reported as free blocks by statfs() is actually the
number of blocks in partitions that are not assigned to a fileset. Some programs might
mistakenly report that there is no free space left in partitions assigned to the fileset,
when there is actually free space available. This option forces statfs() to report the
number of free blocks as being one less than the number of blocks in the file system.

Always indicate blocks free [No]:

Display verbose messages (verbose)
==================================
By default, the client setup utility runs quietly, suppressing informational messages
generated by the commands. You can choose to display these messages by specifying verbose.

Display verbose output [No]:

Run SAN File System client setup.
=================================
The configuration utility has not yet made changes to your system configuration.
- Enter No to quit without configuring the SAN File System client on this system.
- Enter Yes to put these changes into effect and start the SAN File System client.

Run the SAN File System client setup utility [Yes]:

HSTCL0068I Establishing 256 candidate SAN File System user data disk devices.
SAN File System client setup complete.


3. The SAN File System should now be mounted; you can verify this using the df -k command at the zSeries Linux prompt. The output should be similar to Example 5-43.
Example 5-43 SAN File System mount verification
sanfs01:/usr/tank/client/config # df -k
Filesystem    1K-blocks       Used  Available Use% Mounted on
/dev/dasda1     2365444    1075176    1170108  48% /
/dev/dasdb1     2365444    1878164     367120  84% /usr
/dev/dasdc1     2365444      68736    2176548   4% /tmp
shmfs            257372          0     257372   0% /dev/shm
sanfs01      8589934588          4 8589934584   1% /mnt/sanfs
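You can also inspect the mount table as a further check; this is not part of the documented procedure, and the exact output will depend on your configuration:

sanfs01:~ # mount | grep /mnt/sanfs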

Disconnecting the zSeries SAN File System client


This procedure unmounts the SAN File System global namespace and unloads the SAN File System kernel extension.
1. Check that the client system is not using any files in the SAN File System by using the fuser utility, for example, fuser /mnt/sanfs.
2. Issue the rmstclient command. This disconnects the client from the SAN File System, as shown in Example 5-44. It also unmounts the SAN File System if it was not already unmounted.
Example 5-44 Remove zSeries SAN File System client
sanfs01:~ # rmstclient
Using configuration file: /usr/tank/client/config/stclient.conf
IBM SAN File System client setup utility...
...
...
SAN File System client removal complete.

Reconnecting the zSeries SAN File System client


To re-connect the zSeries client, re-run the setupstclient command, as shown in Example 5-45. Because the client configuration file /usr/tank/client/config/stclient.conf already exists, it will load the kernel extension and automatically mount the SAN File System using the parameters specified at install. If you want to change the configuration, you can simply edit the parameters in this file. You can use the noprompt option (as shown) to avoid being prompted for input.
Example 5-45 Re-connecting the zSeries SAN File System client
sanfs01:~ # setupstclient -noprompt
Using configuration file: /usr/tank/client/config/stclient.conf
HSTCL0068I Establishing 256 candidate SAN File System user data disk devices.
SAN File System client setup complete.
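If you need to change a parameter first, for example to point the client at a different metadata server, edit the corresponding entry in stclient.conf (such as the server_ip= line) before re-running the setup. A minimal sketch, using any editor you prefer:

sanfs01:~ # vi /usr/tank/client/config/stclient.conf
sanfs01:~ # setupstclient -noprompt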

zSeries client configuration file


The zSeries client uses the configuration file /usr/tank/client/config/stclient.conf. Example 5-46 on page 183 shows the sample contents of this file for one of our clients.


Example 5-46 Configuration file for zSeries client
sanfs01:/usr/tank/client/config # cat stclient.conf
#
# SAN File System client configuration for Linux
#
# Device candidate list (devices)
# ===============================
# The SAN File System client determines which disks to use as
# SAN File System user data volumes by searching a list of disks
# called device candidates. The device candidate list consists
# of those devices that have device-special files in the
# directory you specify.
#
# Device candidate list [pat=/dev/sd*[a-z]]:

devices=pat=/dev/sd*[a-z]

# Client name (clientname)
# ========================
# You can set the name of this SAN File System client. The name
# can be any string, but must be unique. By default, the client
# setup utility uses the host name (output of the hostname command).
#
# Client name [sanfs01]:

clientname=sanfs01

# Metadata server IP address (server_ip)
# ======================================
# During setup, the SAN File System client must connect to one of
# the metadata servers in the cluster. After the client establishes
# a connection to the server, the server notifies the client of any
# other servers in the cluster. Specify the IP address for any
# metadata server in the cluster to establish the connection.
#
# Metadata server connection IP address [-]:

server_ip=192.168.71.75

# Metadata server port number (server_port)
# =========================================
# The SAN File System client must connect to the client server port
# on the metadata server. In most cases, the metadata server uses
# port 1700. Accept this default unless you know that the metadata
# server was configured to listen on a different port. The sfscli
# command statserver -netconfig will print the client server port.
#
# Metadata server port number [1700]:

server_port=1700

# SAN File System mount point (mount_point)
# =========================================
# The client setup utility mounts the SAN File System to a specified
# mount point (directory) and creates the file system image. If the
# specified mount point does not exist, it is created. Once mounted,
# the directory tree for the file system image appears at that
# mount point.
#
# Mount point [/mnt/sanfs]:

mount_point=/mnt/sanfs

# Read-only file system (readonly)
# ================================
# If you mount the SAN File System as read-only, data and metadata
# in the file system can be viewed, but not modified. Accessing a
# file system object does not affect its access time attribute.
#
# Mount file system read-only [No]:

readonly=No

# Disable automatic restart (autorestart)
# =======================================
# By default, the SAN File System client restarts when the system
# starts.
# - Enter Yes to enable automatic restart of the SAN File System
#   client at startup.
# - Enter No to disable automatic restart of the SAN File System
#   client at startup.
#
# Enable automatic restart at startup [Yes]:

autorestart=No

# NLS converter (convertertype):
# ===============================
# The NLS converter tells the metadata server how to convert strings
# from the SAN File System client into Unicode.
#
# NLS converter [ISO-8859-1]:

convertertype=ISO-8859-1

# Transport protocol (nettype)
# ============================
# The transport protocol determines how the SAN File System client
# connects to the metadata server. Specify either tcp or udp.
#
# Transport protocol [tcp]:

nettype=tcp

# Record mount in /etc/mtab (etc_mtab)
# ====================================
# By default, if the file system mount succeeds, the client setup
# utility adds an entry for the file system image to /etc/mtab.
# You can choose not to record the mount in this file.
#
# Record the mount [Yes]:

etc_mtab=Yes

# Show number of free blocks (always_empty)
# =========================================
# By default, the number of blocks reported as free blocks by
# statfs() is actually the number of blocks in partitions that are
# not assigned to a fileset. Some programs might mistakenly report
# that there is no free space left in partitions assigned to the
# fileset, when there is actually free space available. This option
# forces statfs() to report the number of free blocks as being one
# less than the number of blocks in the file system.
#
# Always indicate blocks free [No]:

always_empty=No

# Display verbose messages (verbose)
# ==================================
# By default, the client setup utility runs quietly, suppressing
# informational messages generated by the commands. You can choose
# to display these messages by specifying verbose.
#
# Display verbose output [No]:

verbose=No

5.4 UNIX device candidate list


UNIX-based SAN File System clients (including AIX, Solaris, and Linux) have what is known as a device candidate list. This parameter gives a pattern-matching string for the device names that should be discovered for use as user volumes. Edit it to include a suitable string to match the device files configured. For example, if using DS4x00, the string would be /dev/sd*. If using SDD devices, the pattern would be /dev/rvpath*.
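A quick way to see which device-special files a given pattern will pick up is to expand the same pattern in the shell on the client; for example, for the SDD case above (the output lists whatever vpath devices exist on your system):

# ls /dev/rvpath*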


You can add additional disks to the list after installation using the stfsdisk command on AIX (in /usr/tank/client/bin), or the sanfs_ctl disk command on Solaris. An example of the stfsdisk command is shown in Example 5-47. For more details on the parameters for this command, see IBM TotalStorage SAN File System Installation and Configuration Guide, GA27-4316.
Example 5-47 Show AIX device candidate list
# cd /usr/tank/client/bin
# ./stfsdisk -query -kmname /usr/tank/client/bin/stfs
INACTIVE /dev/rvpath0
ACTIVE   /dev/rvpath1
INACTIVE /dev/rvpath2
ACTIVE   /dev/rvpath3
INACTIVE /dev/rvpath7

5.5 Local administrator authentication option


Every administrative request that is received via an administrator's GUI, CLI, or CIM/HTTP interface is authenticated by the CIMOM, using the supplied credentials to validate the identity of the SAN File System administrative user that issued the request. Once the request has been successfully authenticated, the CIMOM verifies the administrative user's authorization to ensure the user has the required access level to execute the requested operation.

In previous releases, the CIMOM required the use of an LDAP Authentication Module to authenticate and authorize administrative users. This meant an LDAP server had to be provided and configured with suitable entries so that it could be accessed by the CIMOM (as described in 3.5.2, LDAP on page 73 and 4.1.2, LDAP and SAN File System considerations on page 101). As of SAN File System V2.2.1 and above, you may choose to either deploy an LDAP server as before, or use the local authentication option for administrative users.

If the local authentication option is used, administrator authentication and authorization (user ID and password when logging into the SAN File System GUI or CLI) is checked against the password and group files on the actual MDS that receives the request. To enable this, you must define SAN File System users and groups on each MDS. Each MDS in a cluster must have the same set of user IDs and groups defined and kept in synchronization. The LDAP method continues to be supported as before.

To use the local authentication method, define specific groups on each MDS (Administrator, Operator, Backup, or Monitor). Then add users, associating them with the appropriate groups according to the privileges required. For a new SAN File System installation, this is part of the pre-installation/planning process. An existing SAN File System cluster which is already using LDAP authentication can be migrated to the local authentication method at any time, except during a software upgrade of SAN File System itself. Review the local authentication considerations in Some points to note when using the local authentication method on page 73. Detailed procedures for setting up local authentication for a new SAN File System installation are given in 4.1.1, Local authentication configuration on page 100. Detailed instructions for switching from LDAP to local authentication are given in 6.7, Switching from LDAP to local authentication on page 246.

When using local authentication, whenever a user ID/password combination is entered to start the SAN File System CLI or GUI, the authentication method checks that the user ID exists as a UNIX user account in /etc/passwd, and that the correct password was supplied. It then checks that the user ID is a member of one of the four required groups (Administrator, Operator, Backup, or Monitor). Finally, based on the group of which the user ID is a member, the method determines whether this group is authorized to perform the requested function in order to decide access.
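Purely as an illustration (the user name sfsadmin is an example, and the referenced planning and configuration sections give the authoritative procedure), the standard Linux commands could be used on each MDS to create one of the required groups and an administrative user in it:

tank-mds1:~ # groupadd Administrator
tank-mds1:~ # useradd -g Administrator sfsadmin
tank-mds1:~ # passwd sfsadmin

The same users and groups, with the same definitions, must then be created on every other MDS in the cluster and kept in synchronization.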

5.6 Installing the Master Console


This section is an overview of installing the SAN File System Master Console. The SAN File System Master Console is an optional component of a SAN File System configuration. If deployed, it provides a single point from which to manage both the SAN File System and the SAN Volume Controller, as well as an access point for IBM support to provide remote login to the SAN File System cluster. We will focus on the overall process, and point out any major considerations for running the install. A detailed coverage of the installation process is in the IBM TotalStorage Master Console for SAN File System and SAN Volume Controller Installation and Users Guide Version 3 Release 1, GC30-4090. If you do not choose to use the Master Console, you can skip this section.

5.6.1 Prerequisites
The customer must provide a suitable Intel server preloaded with the designated software before installing the Master Console software package that is shipped with SAN File System. We listed the hardware and software prerequisites for the Master Console in 2.5.2, Master Console hardware and software on page 38, and the contents of the Master Console software package in 2.5.6, Master Console on page 45.

Important: Using a Master Console is optional; it is no longer required for a SAN File System configuration.

If you have an existing Master Console and want to install the SAN File System Master Console package, we recommend that you start with a clean system, that is, disable the existing mirroring and reinstall the operating system. We discovered some problems when trying to upgrade an existing Master Console installation. If you are sharing a Master Console with a SAN Volume Controller, we recommend maintaining it at the same level as currently used for the SVC.

Hints for the basic software installation


Before starting to install the SAN File System Master Console package, you need to have installed the following prerequisite software:
- Microsoft Windows 2000 Server Edition with Service Pack 4 or higher, Microsoft Windows 2000 Professional with Update 818043, Windows 2003 Enterprise Edition, or Windows 2003 Standard Edition
- Microsoft Internet Explorer Version 6.0 with Service Pack 1
- Antivirus software (not required, but strongly recommended)
- J2SE Java Runtime Environment (JRE) V1.4.2


Do the following steps for the installation: 1. Install the selected Windows version, and configure the TCP/IP addresses and other parameters for your environment. Refer to the IBM TotalStorage Master Console for SAN File System and SAN Volume Controller Installation and Users Guide Version 3 Release 1, GC30-4090 for more details. 2. Install the Service Pack and the Windows Update (for Windows 2000, if required). You can download the Service Pack and Windows update from:
http://v4.windowsupdate.microsoft.com

To find the Windows Update, select Fixes for Windows 2000 Professional SP4 and Recommended Updates, and search for 818043. 3. Install your antivirus package, according to the vendor's instructions. 4. Install the Java Runtime Environment. You can obtain JRE 1.4.2 from http://www.sun.com: select Downloads → Java & Technologies → Java 2 Platform Standard Edition 1.4, and then Download J2SE JRE. We used the current package at the time of writing, which is V1.4.2. The Master Console installation wizard is written in Java, so you need this runtime environment. 5. From the directory where you downloaded the Java package, run the J2RE-1_4_2.exe file. The installation wizard initiates. 6. Accept the License Agreement and click Next. 7. On the Setup Type window, select Typical and click Next, as shown in Figure 5-28.

Figure 5-28 J2RE Setup Type

8. The installation will proceed. It may take some time, depending on your processor speed. 9. To verify that you have installed J2RE, check for the Java Web Start icon on your Desktop. It should be similar to Figure 5-29 on page 189.


Figure 5-29 J2RE verify the install

Configure firewall support


If you are using a firewall, you need to enable certain ports so that the Master Console operates correctly. Local Area Connection 1 on the Master Console must be allowed to connect to the IBM Remote Support Gateway through UDP port 500 and UDP port 1701. If you have a NAT (Network Address Translation) firewall, you will also need to allow Local Area Connection 1 on the Master Console to connect to the IBM Remote Support Gateway through UDP port 4500. For Remote Support to work, a maximum of two ports will be permitted to connect to the Local Area Connection 1 on the Master Console. Check with your network system administrator to determine if you have access to the necessary ports, and to gain access if necessary. Ports and Protocol requirements: L2TP: UDP 500 and UDP 1701 NAT-T: UDP 4500 ESP: IP protocol 50

Installing SNMP service


SNMP must be pre-installed before starting the Master Console installation. You may have selected to install this during the Windows 2000 setup; even so, you should verify the installation, as described here. To install the SNMP service: 1. Select Start → Settings → Control Panel. 2. Double-click Add/Remove Programs and select Add/Remove Windows Components on the left hand side. 3. Click Management and Monitoring Tools and then click Details. 4. Check Simple Network Management Protocol and click OK. 5. Click Next to complete the installation process. 6. From the Control Panel, double-click Administrative Tools. 7. Double-click Computer Management. 8. Expand Services and Applications. 9. Click Services.


Figure 5-30 shows the Services applet after installing the SNMP service.

Figure 5-30 SNMP Service Window

10.Double-click SNMP Service. 11.On the General tab, check that the Startup Type is set to Automatic, as shown in Figure 5-31 on page 191. You should also start the service if it is not already started (right-click SNMP Service and select Start).


Figure 5-31 SNMP Service Properties

12.In the Security tab, ensure there is a public community name with a minimum of Read rights, as shown in Figure 5-32 on page 192. Click OK.


13.Verify that SNMP Trap Service status is set to Manual. To do this, double-click its entry on the Services applet and check that Manual is selected from the Startup type drop-down. Figure 5-32 shows the Services applet with the correct startup types for the two SNMP services. 14.If installing on Windows 2003, select Accept SNMP packets from any host; this is the default on Windows 2000.

Figure 5-32 Verifying SNMP and SNMP Trap Service

5.6.2 Installing Master Console software


Note: The process and screen captures in this section are for a slightly older version of the Master Console, which was shipped with SAN File System V2.2. The current release at the time of writing is V3.1. The actual procedures and screen captures may vary slightly from those given here; see the IBM TotalStorage Master Console for SAN File System and SAN Volume Controller Installation and Users Guide Version 3 Release 1, GC30-4090 for authoritative details. Now you are ready to install the Master Console software package. A wizard is provided that steps through the installation of all the software contained in this package.

Master Console Installation wizard


The Master Console wizard ensures that the master console meets all prerequisites and launches the installation program for each of the software products being installed.

Important: The Master Console has to be rebooted at certain stages during the installation. If you are prompted to reboot the system, you should do so, EXCEPT WHERE EXPLICITLY TOLD NOT TO IN THESE INSTRUCTIONS. After each reboot, the Master Console installation wizard will automatically continue the installation process from the point at which it was interrupted by the required reboot. Leave any CD in the drive across reboots. You will be prompted when it is necessary to insert another CD.

The following software components are contained on the CD-ROM package and will be installed by the wizard:
- Adobe Acrobat Reader
- PuTTY
- DB2
- SAN Volume Controller console
- DS4000 Storage Manager Client (formerly FAStT Storage Manager Client)
- Tivoli Storage Area Network Manager
- IBM Director
- IBM VPN Connection Manager

Starting the Installation wizard


Before you begin: Make sure that you have logged in using a user ID with administrative privileges and the modified user rights that are mentioned in the IBM TotalStorage Master Console for SAN File System and SAN Volume Controller Installation and Users Guide Version 3 Release 1, GC30-4090. If the user rights are not set, you can choose to have the Master Console installation wizard set them automatically, or you can set them manually. 1. Insert Master Console CD-ROM 1. The readme.txt file on CD-ROM 1 has the latest information about the Master Console. 2. Run setup.exe from the CD-ROM drive letter. Click OK. 3. The wizard initializes, as shown in Example 5-48, Master Console wizard initialization on page 193.
Example 5-48 Master Console wizard initialization
Initializing InstallShield Wizard...
Preparing Java(tm) Virtual Machine...
......................................................................

4. Select the language to be used for installation wizard and click OK.


5. The Installation wizard window appears, as shown in Figure 5-33. Click Next to start the installation.

Figure 5-33 Master Console installation wizard initial window

6. Accept the License Agreement on the next window and click Next. 7. Verify the privileges to be assigned to the account used for the installation by clicking Yes, as shown in Figure 5-34.

Figure 5-34 Set user account privileges

8. You will be prompted to log out, then re-login, and restart the installation. Do this now, logging in as the same user as previously (Administrator in our case). Restart the Master Console installation by rerunning setup.exe. 9. You may be prompted again to set the language and accept the license agreement.


10.The actual software installation begins now with the installation of Adobe Acrobat Reader.

Installing Adobe Acrobat Reader


1. The Adobe Acrobat installer window is displayed, as shown in Figure 5-35. Click Next to install Adobe.

Figure 5-35 Adobe Installer Window


2. When the installation is complete, an information window appears, describing the actual installation wizard, as shown in Figure 5-36. Click Next.

Figure 5-36 Master Console installation wizard information

3. From this window, you can access the documentation by right-clicking the left side of the window. Click Next to continue.

Choosing the Master Console destination directory


You are now prompted to choose a directory to install the Master Console. The default is C:\Program Files\IBM\MasterConsole; you can accept this or choose an alternative. Click Next to continue.

Select optional products


You can select additional products to be installed, as shown in Figure 5-37 on page 197. We chose to install Connection Manager. If you have DS4000 disks, you may also choose to install the DS4000 Storage Manager Client (FAStT Storage Manager Client).


Figure 5-37 Select optional products to install

If you do not select to install the DS4000 Storage Manager Client, a pop-up warning window appears. If you do not have DS4000 disks, you can ignore this warning and click OK.

Viewing the products to be installed


Next you will see the list of products to be installed for the Master Console. The installation wizard determines if any of these products are already installed and if so, whether the installed version is later than the version on the CDs. The wizard will upgrade existing products to their required level; however, we recommend starting with a clean system with only the operating system and the other prerequisites installed.


In Figure 5-38, none of the products have been installed; therefore, all will be installed.

Figure 5-38 Viewing the Products List

From this window, click Next to begin installing the first product on the list, that is, PuTTY.

Installing PuTTY
1. The PuTTY installation window appears. Click Next to continue. 2. Confirm that you want to install PuTTY by clicking Yes. 3. The PuTTY setup wizard launches. Click Next to continue. 4. For the next windows (destination directory, Start Menu folder, and Additional Tasks), we recommend accepting the defaults. Click Next to advance. 5. Confirm the installation settings, and click Install. 6. The PuTTY Setup wizard completes. You can view the Readme.txt file or click Finish to complete the installation. Note: If you are using the SAN Volume Controller for your metadata storage, you should create a public and private key using PuTTY. You will need these keys when you install the SAN Volume Controller Console. Follow the instructions in the IBM TotalStorage Master Console for SAN File System and SAN Volume Controller Installation and Users Guide Version 3 Release 1, GC30-4090. This process is not invoked during the Master Console installation wizard, so it must be done separately at this point. You have now installed PuTTY. Click Next to continue the installation wizard, as shown in Figure 5-39 on page 199.


Figure 5-39 PuTTY installation complete

Installing DB2
The wizard now continues with installing DB2. 1. You are prompted to begin installing DB2 UDB Enterprise Edition. Click Next to begin. 2. You will be prompted to put in the next CD. Click OK when this is done.


3. The DB2 setup wizard starts, as shown in Figure 5-40. Click Next.

Figure 5-40 DB2 Setup wizard

4. Accept the license agreement and click Next. 5. On the Select installation type window (Figure 5-41 on page 201), accept the defaults. Click Next.


Figure 5-41 DB2 select installation type


6. Accept the default to install DB2 Enterprise Server Edition on this computer, as shown in Figure 5-42. Click Next.

Figure 5-42 DB2 select installation action

7. Select the default directory; otherwise, specify the destination directory of your choice. Click Next. 8. In the next window, you have to specify a user name and password for DB2. The default user name is db2admin, as shown in Figure 5-43 on page 203. You need to enter a password for this user name.


Figure 5-43 DB2 Username and Password menu

Make sure the Use the same values for the remaining DB2 Username and Password settings box is checked. If you do not, then you will subsequently be prompted to enter a user name and password at several points. Click Next when done. If you are prompted to create the user, click Yes.


On the Set up administration contact list, make sure Local - create a contact list on this system is checked, as shown in Figure 5-44. Click Next.

Figure 5-44 DB2 administration contact

Ignore the SMTP warning if it appears. Click OK. 9. On the DB2 instances window, select DB2, as shown in Figure 5-45 on page 205. Click Next.


Figure 5-45 DB2 instance


10.On the next window, accept the default Do not prepare the DB2 tools catalog on this computer, as shown in Figure 5-46. Click Next.

Figure 5-46 DB2 tools catalog

11.Enter an appropriate administration contact name and e-mail address in your organization, as shown in Figure 5-47 on page 207. Click Next.


Figure 5-47 DB2 administration contact


12.Confirm the installation settings and click Install to start the installation, as shown in Figure 5-48.

Figure 5-48 DB2 confirm installation settings

13.The installation proceeds to copy the files and configure the database instance. 14.Click Finish to complete the installation in the window shown in Figure 5-49 on page 209.


Figure 5-49 DB2 confirm installation settings

15.The installation will take some time. When the IBM DB2 Universal Database First Steps window appears, click Exit First Steps.


16.On the Master Console installation wizard, click Next to verify the DB2 installation, as shown in Figure 5-50.

Figure 5-50 Verify DB2 install

17.You will be prompted to continue with the IBM TotalStorage SAN Volume Controller Console software installation. Click Next.

Installing SAN Volume Controller console


1. Insert the next Master Console installation CD and click OK. 2. Follow the instructions in the IBM TotalStorage Master Console for SAN File System and SAN Volume Controller Installation and Users Guide Version 3 Release 1, GC30-4090 to install the SAN Volume Controller console. The wizard will automatically install this product even if you do not have an SVC in your environment. You can accept the defaults when prompted. You will need to generate a public and private key if you have not already done so when installing PuTTY. 3. The installation may take some time. When it is complete, the post installation tasks will display in a text file, indicating how to access the SVC console. Close this window. 4. On the Master Console installation wizard, click Next to verify the SVC console installation, as shown in Figure 5-51 on page 211. If you selected to install the DS4000 Storage Manager Client (FAStT Storage Manager Client) in Select optional products on page 196, this will now install. See IBM TotalStorage Master Console for SAN File System and SAN Volume Controller Installation and Users Guide Version 3 Release 1, GC30-4090 for instruction on installing this item. Otherwise, IBM Tivoli Storage Area Network Manager (TPC for Fabric) is installed.


Figure 5-51 Verify SVC console install

Installing IBM Tivoli SAN Manager (TPC for Fabric)


Next, the wizard installs IBM Tivoli SAN Manager. 1. Click Next to start the IBM Tivoli Storage Area Network Manager installation. 2. Insert the next Master Console installation CD and click OK. 3. Select the installation language to be used and then click OK. 4. Click Next at the welcome window. 5. Accept the License Agreement and click Next. 6. At the Destination Directory window, accept the default or choose an alternate location. Click Next. 7. At the base port number window, click Next to accept the default of 9950.


8. Select DB2 as the data repository and click Next, as shown in Figure 5-52. Note that DB2 is not the default selection on this window.

Figure 5-52 Select database repository

9. On the Single/Multiple User ID/Password Choice window, you can decide to use the DB2 Administrator user name and password you specified during the DB2 installation in step 8 on page 202 for all IDs and passwords on this window. We recommend using the same ID/password for all options, as shown in Figure 5-53. Click Next.

Figure 5-53 Specify single DB2 user ID


10.Enter the DB2 ID and password that you defined in step 8 on page 202, as shown in Figure 5-54.

Figure 5-54 Enter DB2 user ID

11.On the database name window, click Next to accept the default itsanmdb. 12.On the Tivoli NetView installation drive window, click Next to accept the default drive. 13.Click Next to confirm the installation. The installation proceeds; it may take some time. 14.Click Next to complete the installation of Tivoli SAN Manager. 15.You are prompted to reboot the computer; click Finish to do this. 16.After rebooting, the SAN Manager installation is validated. Then the wizard proceeds to install the Tivoli SAN Manager Agent.

Disabling NetView traps from SNMP


Before continuing to install the Tivoli SAN Manager Agent, you need to disable the NetView component of Tivoli SAN Manager from receiving traps from the Windows SNMP Trap Service. Since IBM Director receives traps on the same port as Tivoli SAN Manager by default, this will cause a conflict when SNMP traps are sent from SAN File System or SAN Volume Controller. Therefore, you need to configure IBM Director to forward traps to NetView. To disable NetView from receiving SNMP traps: 1. Run regedit to edit the Windows registry.


2. Locate the key HKEY_LOCAL_MACHINE\SOFTWARE\Tivoli\NetView\CurrentVersion 3. Change the value of trapdSharePort162 to 0, as shown in Figure 5-55.

Figure 5-55 Set trapdSharePort162

4. Add a value (select Edit → New → DWORD Value) and name it trapdTrapReceptionPort. 5. Double-click the new value, set it to an available port number, such as 9950, and click the Decimal radio button, as shown in Figure 5-56 on page 215.


Figure 5-56 Define trapdTrapReceptionPort
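If you prefer, the same two registry changes can be captured in a .reg file and imported with regedit. The following is only a sketch: it assumes the key path shown above and the example port 9950 (hexadecimal 26de); verify both against your own system before importing.

Windows Registry Editor Version 5.00

; Stop the NetView trap daemon from sharing port 162 with the Windows SNMP Trap Service
[HKEY_LOCAL_MACHINE\SOFTWARE\Tivoli\NetView\CurrentVersion]
"trapdSharePort162"=dword:00000000
; Alternative trap reception port for NetView (9950 decimal)
"trapdTrapReceptionPort"=dword:000026de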

Remember the port number that you set here. You will refer to that number when you modify the IBM Director configuration later. 6. Exit the Windows registry. 7. Open a Command Prompt window. Change to the directory c:\usr\ov\bin. Remove the NetView service with the command nvservice -remove. 8. Reinstall the NetView service (which will remove the dependency on the SNMP Trap Service) with the command nvservice -install -username .\NetView -password password, entering a password for the local NetView account. Close the Command Prompt window when done. 9. Return to the wizard to install the Tivoli SAN Manager Agent.

Installing Tivoli SAN Manager Agent


1. Click Next to begin the installation of the Tivoli SAN Manager Agent. 2. Insert CD 1 in the CD drive and click OK. 3. Select the installation language and click Next. 4. This launches the Tivoli SAN Manager installation wizard. Click Next. 5. Accept the License Agreement, and click Next. 6. Accept the default directory and click Next.


7. On the Manager name and port number window, enter localhost for the Tivoli Manager name (because both the Manager and the Agent are installed on the Master Console). Accept the default Port Number and click Next, as shown in Figure 5-57.

Figure 5-57 Enter TSANM Manager name and port

8. Accept the default base port number and click Next. 9. On the next window, Host Authentication Password window, enter the password of the Host Authentication ID you specified when you installed Tivoli SAN Manager. The default was to use the DB2 ID, db2admin. 10.Click Next to verify the installation settings and begin copying files. 11.Click Finish to complete installation of the Tivoli SAN Manager Agent. 12.On the Master Console installation wizard, click Next to verify the install of Tivoli SAN Manager Agent. 13.The wizard will then install IBM Director.

Installing IBM Director


1. From the Master Console installation wizard, click Next to begin installing IBM Director. 2. This launches the IBM Director Setup Wizard. Click Next. 3. Accept the License Agreement and click Next. 4. On the Server Plus window, click Next. 5. On the Feature and Installation Directory window, click the Red X for SNMP Access and Trap Forwarding. Click This Feature will be installed on the local harddrive, and click Next. The window should look similar to Figure 5-58 on page 217.


Figure 5-58 IBM Director Installation Directory window

6. On the IBM Director service account information window, fill in the following fields: Domain: Enter the host name of the Master Console. User name: Enter a Windows user account with administrative privileges, for example, Administrator. Password: Enter and confirm the password for the specified Windows user account. The window should look similar to Figure 5-59. Click Next.

Figure 5-59 IBM Director Service Account Information

7. On the Encryption Settings window, accept the defaults and click Next. 8. On the Software distribution settings window, accept the defaults and click Next.


9. Click Install to begin installation. This may take some time. 10.On the Network driver configuration pop-up, select the first port and click Enable driver, as shown in Figure 5-60. Click OK.

Figure 5-60 IBM Director network drivers

11.On the IBM Director database configuration window (Figure 5-61), make sure that Microsoft Jet 4.0 is selected. Do not select DB2 here. Click Next.

Figure 5-61 IBM Director database configuration

12.On the next window, accept the defaults for ODBC data source and Database name; these cannot be changed. Click Next.


13.Click Finish to complete the installation. 14.When prompted to reboot the system, click No. DO NOT REBOOT until you have completed the next task. 15.From the Master Console installation wizard, click Next. The wizard validates the installation of IBM Director.

Configure IBM Director traps


After completing the installation and before continuing with the Master Console installation wizard, you need to configure IBM Director to forward traps to the NetView component of Tivoli SAN Manager. 1. Open a Command Prompt window. 2. Change to the IBM Director installation directory; the default is C:\Program Files\IBM\Director. 3. Change to the data\snmp subdirectory. 4. Edit the file SNMPServer.properties. a. Uncomment the line:
snmp.trap.v1.forward.address.1=

by deleting the # sign, and add the host name of the Master Console. For example:
snmp.trap.v1.forward.address.1=KCWC09K

b. Uncomment the line:


snmp.trap.v1.forward.port.1=

by deleting the # sign, and add the port that you specified for the trapdTrapReceptionPort value in the Windows Registry key, in step 5 on page 214:
snmp.trap.v1.forward.port.1=9950
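After both edits, the two forwarding entries in SNMPServer.properties should look similar to the following (KCWC09K and 9950 are the example values used above; substitute your own Master Console host name and the port you set in the registry):

snmp.trap.v1.forward.address.1=KCWC09K
snmp.trap.v1.forward.port.1=9950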

5. Save and close the file. 6. Click Next on the Master Console installation wizard. The wizard validates the installation of IBM Director. 7. Reboot the Master Console.


Preconfiguring IBM Director


The Master Console installation wizard now performs some preconfiguration steps for IBM Director. It will create a set of event action plans from archived files on the installation CD. 1. Enter an ID and password for the IBM Director superuser, for example, superusr, as shown in Figure 5-62. Click Next.

Figure 5-62 IBM Director superuser

2. The preconfiguration tasks execute automatically to configure IBM Director. 3. After discovery is complete, click Next to continue.

Set local system account for IBM Director


You now need to complete the following steps to ensure that the local system account can log on to IBM Director: 1. Open the Services applet. 2. Right-click IBM Director Server and select Properties. 3. Select the Log On tab, click Local System account, and check Allow service to interact with desktop. Click OK. 4. You are prompted that you need to stop and restart the service for the new properties to take effect. Click OK. Stop and restart the IBM Director Server service.

Installing documentation and utilities files


You now need to copy the documentation and utilities files using the Master Console installation wizard. 1. From the Master Console installation wizard, click Next to begin copying documentation and support utilities.

2. The documentation and support utilities are copied.

Finishing the Master Console installation


This completes the installation of the Master Console. Review the Master Console installation log (mclog.txt) to ensure that all products are properly installed. The log file is in the logs subdirectory of the Master Console installation directory (default is C:\Program Files\IBM\MasterConsole). The log should look similar to Example 5-49.
Example 5-49 mclog file to verify Master Console installation
(Oct 6, 2004 3:06:52 PM), This summary log is an overview of the sequence of the installation of the Master Console for IBM TotalStorage SAN File System
(Oct 6, 2004 3:10:05 PM), Installing Acrobat Reader ...
(Oct 6, 2004 3:11:17 PM), Acrobat Reader successfully installed.
(Oct 6, 2004 3:20:54 PM), WARNING: FAStT Storage Manager Client should only be deselected if FAStT disk drives are not part of this configuration.
(Oct 6, 2004 3:23:42 PM), Master Console for IBM TotalStorage SAN File System will be installed in the following location: C:\Program Files\IBM\MasterConsole
(Oct 6, 2004 3:23:52 PM), Installing PuTTY Utility ...
(Oct 6, 2004 3:32:25 PM), PuTTY Utility successfully installed.
(Oct 6, 2004 3:37:01 PM), Installing IBM DB2 Universal Database Enterprise Edition ...
(Oct 6, 2004 3:45:23 PM), Master Console for IBM TotalStorage SAN File System will be installed in the following location: C:\Program Files\IBM\MasterConsole
(Oct 6, 2004 3:45:24 PM), Installing IBM DB2 Universal Database Enterprise Edition ...
(Oct 6, 2004 4:11:52 PM), IBM DB2 Universal Database Enterprise Edition successfully installed.
(Oct 6, 2004 4:13:57 PM), Installing IBM TotalStorage SAN Volume Controller Console ...
(Oct 6, 2004 4:35:10 PM), IBM TotalStorage SAN Volume Controller Console successfully installed.
(Oct 6, 2004 4:37:40 PM), Installing IBM Tivoli Storage Area Network Manager Manager ...
(Oct 6, 2004 5:16:46 PM), Resuming installation...
(Oct 6, 2004 5:18:05 PM), IBM Tivoli Storage Area Network Manager Manager successfully installed.
(Oct 6, 2004 5:35:04 PM), Installing IBM Tivoli Storage Area Network Manager Agent ...
(Oct 6, 2004 5:41:12 PM), IBM Tivoli Storage Area Network Manager Agent successfully installed.
(Oct 6, 2004 5:41:44 PM), Installing IBM Director ...
(Oct 6, 2004 5:53:43 PM), IBM Director successfully installed.
(Oct 6, 2004 5:53:43 PM),
(Oct 6, 2004 6:05:14 PM), Resuming installation...
(Oct 6, 2004 6:05:34 PM), IBM Director successfully installed.
(Oct 6, 2004 6:05:34 PM),
(Oct 6, 2004 6:10:54 PM), Restarting "IBM Director Support Program" service. Please wait...
(Oct 6, 2004 6:11:09 PM), "IBM Director Support Program" service stopped.
(Oct 6, 2004 6:11:16 PM), "IBM Director Support Program" service started.
(Oct 6, 2004 6:11:48 PM), Service "IBM Director Support Program" successfully restarted.
(Oct 6, 2004 6:11:50 PM), Discovering IBM Director managed systems ...
(Oct 6, 2004 6:20:22 PM), Installing Support Utils...
(Oct 6, 2004 6:20:25 PM), Installing Documents...
(Oct 6, 2004 6:20:45 PM), Installing Infocenter...
(Oct 6, 2004 6:20:45 PM), Installing Infocenter...
(Oct 6, 2004 6:20:46 PM), Creating Windows registry entries ...
(Oct 6, 2004 6:20:46 PM), Windows registry entries successfully created.
(Oct 6, 2004 6:20:47 PM), Command to be executed : regedit /s "C:\Program Files\IBM\MasterConsole\Support Utils\Remote_Support\IPSecAllow.reg"
(Oct 6, 2004 6:20:47 PM), The reg file "C:\Program Files\IBM\MasterConsole\Support Utils\Remote_Support\IPSecAllow.reg" successfully loaded to the Windows registry
(Oct 6, 2004 6:20:47 PM), Command to be executed : regedit /s "C:\Program Files\IBM\MasterConsole\Support Utils\Remote_Support\IPsecInstall.reg"
(Oct 6, 2004 6:20:47 PM), The reg file "C:\Program Files\IBM\MasterConsole\Support Utils\Remote_Support\IPsecInstall.reg" successfully loaded to the Windows registry
(Oct 6, 2004 6:21:02 PM), Successfully installed IBM VPN Client.
(Oct 6, 2004 6:21:06 PM), You need to reboot your system.
(Oct 6, 2004 6:21:06 PM), Master Console for IBM TotalStorage SAN File System successfully installed.

You will be prompted to reboot the system in order to complete the installation; select Yes, restart my computer, and click Finish. The final step is to set up mirroring of the Master Console boot drive for redundancy.

Mirroring the boot drive


1. From the desktop, right-click My Computer, click Manage, expand Storage, and click Disk Management. The window should look similar to Figure 5-63. It shows the two drives in the Master Console system: one is Healthy/Dynamic and one is Unallocated/Basic.

Figure 5-63 Disk Management


2. Right-click the Basic Unallocated disk (Disk 1 in our example) and select Upgrade to Dynamic Disk, as shown in Figure 5-64.

Figure 5-64 Upgrade to dynamic disk

3. Both disks should now show as Dynamic, as shown in Figure 5-65.

Figure 5-65 Verify both disks are set to type Dynamic


Note: Both disks must be set to Dynamic for mirroring to work with Windows. If the other disk is not set to Dynamic, do the following: 1. Right-click Disk0 and select Upgrade to Dynamic Disk. 2. Click Yes on the warning. The system will probably reboot. 3. After the reboot, re-start Disk Management.

4. Right-click Disk0, and select the Add Mirror option, as shown in Figure 5-66.

Figure 5-66 Add Mirror


5. The Add Mirror window is displayed, as in Figure 5-67. Select the other disk, Disk 1 in our example, and click Add Mirror.

Figure 5-67 Select mirrored disk

6. This initiates the mirroring process (to synchronize the two drives). Figure 5-68 shows the progress; both Disk 0 and Disk 1 are Regenerating. This process takes about 20-25 minutes.

Figure 5-68 Mirroring process


7. Once the mirroring process is completed, a warning displays, as shown in Figure 5-69. Click OK. It tells you that to be able to boot from the new mirrored disk, you have to add an entry to the boot.ini file.

Figure 5-69 Mirror Process completed

8. The boot.ini file is, by default, a hidden system file. To display and edit this file, you have to modify the folder options. Do the following to access and view the file: a. Open My Computer and click C:. b. From the menu, select Tools → Folder Options. c. In the Folder Options window, select the View tab, and select Show hidden files and folders, as in Figure 5-70. d. Click OK for the changes to take effect.

Figure 5-70 Setting Folder Options

9. You can now see the hidden files. Edit the boot.ini file from the C: drive. The file should look similar to Example 5-50 on page 227. Copy the highlighted line if necessary to add the second entry for Disk 1. Be very careful when editing this file; an error may prevent your system from booting.


Example 5-50 Boot.ini file
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINNT
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINNT="Microsoft Windows 2000 Advanced Server" /fastdetect
multi(0)disk(1)rdisk(0)partition(1)\WINNT="Microsoft Windows 2000 Advanced Server" /fastdetect

10.Save the file and reboot the Master Console. 11.After the system completes its POST sequence, the system should prompt you to Select an operating system to boot from. Because we have mirrored the drive, the options listed should show the same name and same operating system. The screen should look similar to Example 5-51.
Example 5-51 Boot selection screen
Please select the operating system to start:

    Microsoft Windows 2000 Advanced Server
    Microsoft Windows 2000 Advanced Server

Use the arrow keys to select the operating system of your choice. Press Enter.

Because both disks have been assigned the same name, it is hard to distinguish the primary and the secondary drive. 12.To modify the names of the drives, edit the boot.ini file again to add identifiers to the disks. You can use Primary and Secondary to differentiate between the two disks. Example 5-52 shows the updated boot.ini file.
Example 5-52 Identify disks in boot.ini
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINNT
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINNT="Microsoft Windows 2000 Advanced Server Primary" /fastdetect
multi(0)disk(1)rdisk(0)partition(1)\WINNT="Microsoft Windows 2000 Advanced Server Secondary" /fastdetect

13.Reboot the system for changes to take effect. Your boot screen should now show the distinct disk identifiers. Try booting from both the drives to verify the mirroring has worked. Your Master Console is now successfully installed.


5.7 SAN File System MDS remote access setup (PuTTY / ssh)
The CLI for SAN File System can only be accessed with a secure shell connection. Telnet is not available on the MDS.

5.7.1 Secure shell overview


A secure shell is used to secure the administrative data flow to the SAN File System cluster when using the CLI. The connection is secured by means of a private/public key pair:
- The public key is uploaded to the SSH server.
- The private key identifies the client and is checked against the public key during the connection. The private key must be protected.
- The two keys are generated together.
- The SSH server must also identify itself with a specific host key. If the client does not have that key yet, it is added to a list of known hosts.

SSH is a client-server network application, and the MDS cluster acts as the SSH server in this relationship. The secure shell (SSH) client provides a secure environment in which to connect to a remote machine, using the principles of public and private keys for authentication. When an SSH client (A) attempts to connect to a server (B), a key is needed to authenticate the connection. The key consists of two halves: the public and private keys. The public key is put onto (B), and when (A) tries to connect, the private key on (A) is able to authenticate with its public half on (B).

The SSH keys are generated by the SSH client software. This includes a public key, which is uploaded to and maintained by the cluster, and a private key, which is kept private to the workstation that is running the SSH client. These keys authorize specific users to access the administration and service functions on the cluster. Each key is associated with a user-defined ID string that can consist of up to 40 characters. Up to 100 keys can be stored on the cluster. You can also add new IDs and keys or delete unwanted IDs and keys.

In order to use the SAN File System Command-Line Interface (CLI), you must have an SSH client installed on that system, generate the SSH key pair on the client system, and store the client's SSH public key on the SAN File System cluster(s). The SAN File System Master Console has a free implementation of SSH-2 for Windows called PuTTY pre-installed. This software provides the Secure Shell (SSH) client function for users logged into the master console who wish to manage the SAN File System cluster. Other remote systems that will access SAN File System must have an SSH client installed, for example, PuTTY or Cygwin.
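On a UNIX or Linux workstation (or under Cygwin on Windows) with OpenSSH, the key pair can be generated as in the following sketch; the file name is only an example, and the resulting public key (the .pub file) must then be stored on the cluster as described in the administration documentation. On the Master Console, the PuTTY key generator (puttygen) provides the equivalent function.

$ ssh-keygen -t rsa -f ~/.ssh/sanfs_admin

The private key is written to ~/.ssh/sanfs_admin and the public key to ~/.ssh/sanfs_admin.pub.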


Chapter 6.

Upgrading SAN File System to Version 2.2.2


This chapter describes how to upgrade from a previous release of SAN File System (V2.2.1 in this case) to the current release (V2.2.2 at the time of writing). This chapter includes the following:
- Upgrade SAN File System to Version 2.2.2 introduction
- Detailed steps for upgrading the MDS cluster
- Upgrading SAN File System clients

Important: The installation package versions given in this chapter, and in Chapter 5, Installation and basic setup for SAN File System on page 125, were correct at the time of writing, but may have changed by the time of publication.


6.1 Introduction
This chapter details the steps needed to prepare and then perform a Rolling Upgrade on the SAN File System Metadata servers and Clients. The following procedures upgrade a 2-node MDS cluster consisting of tank-mds1, which is the current master MDS, and tank-mds2, which is the subordinate MDS. The cluster is currently at V2.2.1.32; we will upgrade to V2.2.2. The current configuration is shown in Example 6-1 using the lsserver and statcluster SAN File System commands. Note the Software Version and Committed Software Version.
Example 6-1 Show SAN File System cluster status with lsserver command
tank-mds1:~ # sfscli lsserver
Name      State  Server Role Filesets Last Boot
==========================================================
tank-mds1 Online Master             9 Aug 19, 2005 3:24:25 PM
tank-mds2 Online Subordinate        7 Aug 19, 2005 3:58:44 PM

sfscli> statcluster
Name                             ATS_GBURG
ID                               61306
State                            Online
Target State                     Online
Last State Change                Aug 19, 2005 12:29:49 PM
Last Target State Change
Servers                          2
Active Servers                   1
Software Version                 2.2.1.32
Committed Software Version       2.2.1.32
Last Software Commit             Aug 19, 2005 10:40:58 AM
Software Commit Status           Not In Progress
Metadata Check State             Idle
Metadata Check Percent Completed 0 %

SAN File System provides a rolling software upgrade so that the SAN File System clients do not experience access disruptions to the SAN File System namespace during most of the upgrade process. The detailed procedure is specific to each release, as the operating systems and hardware supported have changed with each release. However, the high-level process is as follows:
1. Preparation: Complete some prerequisite steps and save important configuration details, as described in 6.2, Preparing to upgrade the cluster on page 231.
2. Stop a single MDS, upgrade any BIOS/device drivers and operating system prerequisites, upgrade its SAN File System binaries, then restart that MDS and allow it to rejoin the cluster. During this time, the cluster as a whole continues to operate at the previous software version, with some MDSs running the previous version and some MDSs running the new version. Repeat the process until every MDS is running the new software version binaries, while continuing to use the old cluster protocols and data formats. For the current release of SAN File System (upgrading from V2.2.1 to V2.2.2), you should upgrade each subordinate MDS first and the master MDS last; however, check the rolling upgrade instructions for each specific release, as the recommended order may change.
3. Once all binaries have been updated on each MDS, and all MDSs have rejoined the cluster, issue the sfscli upgradecluster command to go through a coordinated cluster transition to using the new protocols, shared data structures, and new functionality.
4. Stop individual SAN File System clients and upgrade the SAN File System client software. Restart the SAN File System client and any applications. Check the specific rolling upgrade instructions for each release.


6.2 Preparing to upgrade the cluster


This section describes the major steps to prepare for upgrading the SAN File System software from V2.2.1 to V2.2.2.
1. If not previously done, connect each RSA II adapter card to a port on the IP network and assign an IP address to the card. Instructions for doing this are in "Verifying boot drive and setting RSA II IP configuration" on page 127. You will record this TCP/IP address in step 4 and also use it as an input parameter to the upgrade process.
2. Create a backup archive of the current configuration by running the setupsfs command with the -backup option, as shown in Example 6-2.
Example 6-2 Upgrade cluster: backup configuration
tank-mds1:~ # cd /usr/tank/admin/bin
tank-mds1:~ # ./setupsfs -backup -f /usr/tank/admin/config/backup.list.template
HSTSS0035E File does not exist: /etc/sysconfig/network/ifroute-eth0
/etc/HOSTNAME
/etc/resolv.conf
/etc/sysconfig/network/ifcfg-eth0
/etc/tank/server/Tank.Bootstrap
/etc/tank/server/Tank.Config
/etc/tank/admin/tank.properties
/usr/tank/admin/truststore
/var/tank/server/DR/TankSysCLI.auto
/var/tank/server/DR/TankSysCLI.volume
/var/tank/server/DR/TankSysCLI.attachpoint
/var/tank/server/DR/After_upgrade_to_2.2.1-13.dump
/var/tank/server/DR/After_upgrade_to_2.2.1.13.dump
/var/tank/server/DR/Before_Upgrade_2.2.2.dump
/var/tank/server/DR/Moved_to_ESSF20.dump
/var/tank/server/DR/SFS_BKP_After_Upgrade_to_2.2.0.dump
/var/tank/server/DR/Test_051805.dump
/var/tank/server/DR/ATS_GBURG.rules
/var/tank/server/DR/ATS_GBURG_CLONE.rules
Created file: /usr/tank/server/DR/DRfiles-tank-mds1-20050819123200.tar.gz

3. The resulting backup archive is located in /usr/tank/server/DR/ and has a name of the format DRfiles-<hostname>-<datestamp>.tar.gz. It should also be copied to a safe location other than the local file system of the MDS being upgraded. This is especially important if you will upgrade to SUSE 9, as this upgrade will overwrite any data on the operating system disk.
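For example, the archive can be pushed to another host with scp (a sketch only; the destination host and directory are placeholders):

   scp /usr/tank/server/DR/DRfiles-tank-mds1-20050819123200.tar.gz admin@backuphost:/backups/sfs/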


4. Gather and record the configuration information shown in Table 6-1 for each MDS.
Table 6-1 Configuration information to save (record a value for each item)
- Mount point of CD: Path to the root of the mounted SAN File System CD. Default: /media/cdrom.
- Truststore Password: Run grep TrustPassword /usr/tank/admin/config/cimom.properties.
- CIMOM port number: Run grep ^Port /usr/tank/admin/config/cimom.properties.
- CLI User and CLI Password: Log in to the user account that has been set up to access the SAN File System CLI (sfscli) and run cat $HOME/.tank.passwd. Example: # cat $HOME/.tank.passwd returns itsoadm:password. The first value is the CLI user ID; the second value is the CLI user password.
- System management IP Address: Address assigned to the RSA II card.
- Authentication method: Either LDAP or local authentication. Run grep ^AuthModule /usr/tank/admin/config/cimom.properties. If the value com.ibm.storage.storagetank.auth.SFSLocalAuthModule is present, then local authentication is used; otherwise, LDAP authentication is used.
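The values that can be read directly from the MDS can be collected with the commands listed in Table 6-1, for example:

   grep TrustPassword /usr/tank/admin/config/cimom.properties
   grep ^Port /usr/tank/admin/config/cimom.properties
   grep ^AuthModule /usr/tank/admin/config/cimom.properties
   cat $HOME/.tank.passwd     # run as the CLI user; shows the CLI user ID and password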

If you are using the LDAP configuration method, you must also record the LDAP parameters from /usr/tank/admin/config/tank.properties: LDAP_SERVER, LDAP_USER, LDAP_PASSWD, LDAP_SECURED_CONNECTION, LDAP_BASEDN_ROLES, LDAP_ROLE_MEM_ID_ATTR, LDAP_USER_ID_ATTR, and LDAP_ROLE_ID_ATTR.
5. Decide which version of SUSE Linux you will run on the upgraded cluster. SAN File System V2.2.2 requires either SLES 8 Service Pack 4 (kernel version 2.4.21-278) or SLES 9 with Service Pack 1 and kernel version 2.6.5-7.151. Run the command shown in Example 6-3 on page 233 on each MDS in the cluster to check the Linux kernel version. We are currently running SLES 8 with Service Pack 3, as required by SAN File System V2.2.1. Because we are using SVC metadata storage, which at the time of writing was not supported with SLES 9, we will stay on SLES 8 and apply SLES 8 Service Pack 4. If you decide to move to SLES 9, you will need to first upgrade the operating system to SLES 9, then apply SLES 9 Service Pack 1, and then upgrade the kernel to the required level.


Example 6-3 Show kernel version on MDS
tank-mds2:~ # rpm -qa |grep kernel
kernel-source-2.4.21-231

You can get the Linux kernel packages from your SUSE Maintenance Web service, or through a public Linux download site such as http://rpmfind.net. See the IBM TotalStorage SAN File System Installation and Configuration Guide, GA27-4316, and also "Apply United Linux Service Pack 3 and 4" on page 129 for details on upgrading the SUSE operating system. We will perform the actual operating system upgrade during the rolling upgrade process, but you should gather any CDs required before starting the upgrade. This minimizes the outage time on any given MDS.
6. Check the Release Notes, and download any BIOS or firmware upgrade images required; Web sites for these are given in 6.3.2, "Upgrade MDS BIOS and RSA II firmware" on page 234.
7. Check the Release Notes, and download any disk device driver upgrade images required; Web sites for these are given in 6.3.3, "Upgrade the disk subsystem software" on page 235.
8. If you are upgrading from V2.2.1, you should cable a second redundant Ethernet connection to each MDS's second Ethernet port before upgrading so that Ethernet bonding can be set up. To function in the most complete fashion, the V2.2.2 high-availability feature requires that Ethernet bonding be set up. Bonding enables you to configure multiple Ethernet connections with the same IP address so that if one of the connections fails, the second connection takes over. To set up Ethernet bonding on SLES 8, follow the procedure in "Set up Ethernet bonding" on page 131. To set up Ethernet bonding on SLES 9, see the instructions in the SAN File System 2.2.2 Release Notes and the IBM TotalStorage SAN File System Installation and Configuration Guide, GA27-4316.
Important: If you need to configure Ethernet bonding on an existing SAN File System cluster, run the steps on any subordinate(s) first, then finally on the master MDS.
9. Make sure that SSH keys have been set up before proceeding to the next step; a generic sketch follows this list. This allows unchallenged root login among the MDSs and avoids being prompted many times for root passwords during various SAN File System maintenance processes. If you have not previously configured the SSH keys, follow the procedure in step 5 on page 136 in 5.2.5, "Install prerequisite software on the MDS" on page 135.
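The referenced procedure gives the supported steps; purely as a generic OpenSSH illustration (a sketch only, run as root on each MDS, assuming an empty passphrase is acceptable because the goal is unchallenged root login):

   ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa      # generate a key pair with no passphrase
   cat /root/.ssh/id_rsa.pub | ssh root@tank-mds2 "cat >> /root/.ssh/authorized_keys"
                                                     # authorize this key on the peer MDS (repeat for each MDS)
   ssh root@tank-mds2 hostname                       # verify that no password prompt appears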

6.3 Upgrade each MDS


Now we will upgrade each MDS in turn, starting with the subordinates and ending with the current master MDS. On each MDS, we will perform any BIOS and disk device driver upgrades, upgrade the operating system, and then finally upgrade the SAN File System packages. The first step is to gracefully take the selected MDS offline from the SAN File System cluster. This fails over its workload to the other MDSs in the cluster (in our case, to the master tank-mds1). We give more information about the automatic failover features of SAN File System in 9.5, "MDS automated failover" on page 413. You must perform these steps on each MDS in turn, with the current master MDS going last. In our configuration, there are two servers: tank-mds1 and tank-mds2. The current master is tank-mds1; therefore, we upgrade tank-mds2 first, followed by tank-mds1.


6.3.1 Stop SAN File System processes on the MDS


1. Use the stopserver command to gracefully shut down an MDS and fail over its filesets to another MDS. We run stopserver tank-mds2 from the master MDS, tank-mds1. We confirm the MDS has been shut down with the lsserver command; this shows that the State of tank-mds2 is Not Running and that all the filesets have been taken over by tank-mds1. Example 6-4 shows the output.
Example 6-4 Stop the subordinate server using stopserver
tank-mds1:~ # sfscli lsserver
Name      State  Server Role Filesets Last Boot
==============================================================
tank-mds1 Online Master      1        Aug 19, 2005 10:41:01 AM
tank-mds2 Online Subordinate 0        Aug 19, 2005 11:44:38 AM
tank-mds1:~ # sfscli stopserver tank-mds2
Are you sure you want to stop the metadata server tank-mds2? This operation distributes this metadata server workload to the remaining metadata servers. [y/n]:y
CMMNP5252I Metadata server tank-mds2 stopped gracefully.
tank-mds1:~ # sfscli lsserver
Name      State       Server Role Filesets Last Boot
===================================================================
tank-mds1 Online      Master      1        Aug 19, 2005 10:41:01 AM
tank-mds2 Not Running Subordinate -

Attention: If the server being stopped is the master, wait until the new master takes over before proceeding. You can check this with the lsserver command. If the output shows the other server in a Joining state, you must wait. When the master takeover is complete, it will show one MDS with the Master role and the other in the Not Running state. In our case, after upgrading the current subordinate, tank-mds2, we would then shut down the master, tank-mds1. We would wait until the lsserver command output shows tank-mds2 in an Online state with the Master role and all filesets assigned, since it will have taken over tank-mds1's workload. The stopped MDS, tank-mds1, will stay in the Not Running state. We could then proceed to upgrade tank-mds1.
2. On the MDS that was shut down, disable the automatic restart capability using the stopautorestart command, as shown in Example 6-5.
Example 6-5 Disable autorestart
tank-mds2:~ # sfscli stopautorestart tank-mds2
CMMNP5365I The automatic restart service for metadata server tank-mds2 successfully disabled

6.3.2 Upgrade MDS BIOS and RSA II firmware


Now we need to upgrade the BIOS and firmware on the stopped MDS. Check the Release Notes first to determine the levels of machine FLASH BIOS and RSA firmware needed to support SAN File System V2.2.2. In our case, we needed to upgrade our IBM eServer xSeries 345 machines' FLASH BIOS to at least level 1.19; we chose to use the latest level (1.21) for our testing. You can download this version at the following Web site:
http://www.ibm.com/pc/support/site.wss/document.do?lndocid=MIGR-54484


For the IBM eServer xSeries 346 model, the BIOS is at the following Web site:
http://www.ibm.com/pc/support/site.wss/document.do?sitestyle=ibm&lndocid=MIGR-57356

For the IBM eServer xSeries 365 model, the BIOS is at the following Web site:
http://www.ibm.com/pc/support/site.wss/document.do?sitestyle=ibm&lndocid=MIGR-60101

Follow the README notes that come with the FLASH BIOS package for installation instructions. In our case, we dumped the BIOS image to a diskette and rebooted the MDS with the diskette inserted in the drive. The MDS boots off the diskette, asks some elementary questions, and flashes the BIOS. Next, we upgrade the RSA II card firmware to the latest level, which is 1.09. You can download this firmware (for the IBM eServer xSeries 345) from the following Web site:
http://www.ibm.com/pc/support/site.wss/document.do?lndocid=MIGR-46489

For the IBM eServer xSeries 346 model, the RSA II firmware is at the following Web site:
http://www.ibm.com/pc/support/site.wss/document.do?sitestyle=ibm&lndocid=MIGR-56759

For the IBM eServer xSeries 365 model, the RSA II firmware is at the following Web site:
http://www.ibm.com/pc/support/site.wss/document.do?sitestyle=ibm&lndocid=MIGR-53861

Instructions for upgrading the BIOS and firmware are given in the manual IBM TotalStorage SAN File System Installation and Configuration Guide, GA27-4316.
Attention: If a reboot is required for either of the upgrades above, make sure that the MDS is in the Not Running state after the machine reboots. Use:
# sfscli lsserver

If the server is in an Online state, stop it with:


# sfscli stopserver <server-name>

6.3.3 Upgrade the disk subsystem software


If you are using SVC, ESS, DS8000, or DS6000 as metadata storage, you may need to upgrade the SDD package to the latest code level defined in the Release Notes. In our case, we had to upgrade to 1.6.0.1-6. The same level of SDD is required on both SLES 8 and SLES 9. If you are using DS4x00 metadata storage, RDAC is required to be at level 9.00.A5.09 for SLES 8 and 9.00.B5.04 for SLES 9. Instructions on installing disk subsystem software are given in 4.4.3, "Install and verify SDD on MDS" on page 117 and 4.5.3, "RDAC on MDS and Linux client" on page 121.
Attention: Make sure to download the version of SDD that is required to support the Linux kernel you are running. Use this URL to download SDD:
http://www.ibm.com/support/docview.wss?rs=540&context=ST52G7&uid=ssg1S4000107&loc=en_ US&cs=utf-8&lang=en

The RDAC driver can be downloaded from the following Web site; make sure to choose the correct version for your Linux kernel:
http://www.ibm.com/servers/storage/support/disk/ds4500/stormgr1.html
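Before and after upgrading, you can confirm which multipath driver package is installed; for example (a sketch, assuming SDD was installed as an RPM whose name contains "sdd"):

   rpm -qa | grep -i sdd      # list the currently installed SDD package, if any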


6.3.4 Upgrade the Linux operating system


If you are remaining at SUSE 8, install Service Pack 4. If you will upgrade to SUSE 9, you will have to shut down the MDS, boot from the SUSE 9 CD, install that version, and then apply Service Pack 1. This will overwrite all data on the boot drive, so make sure you have saved the backup configuration archive collected in steps 2 and 3 on page 231 of 6.2, "Preparing to upgrade the cluster" on page 231, plus any local scripts or utilities you might be using on the MDS. See the IBM TotalStorage SAN File System Installation and Configuration Guide, GA27-4316, and also "Apply United Linux Service Pack 3 and 4" on page 129 for details on upgrading the SUSE operating system. Finally, you must install the required kernel and kernel source packages for your chosen level (2.4.21-278 for SLES 8, or 2.6.5-7.151 for SLES 9), which are available from your SUSE Maintenance Web service. Your kernel should now be at the correct level, as shown in Example 6-6 for SUSE 8.
Example 6-6 Show kernel version on MDS
tank-mds2:~ # rpm -qa |grep kernel
kernel-source-2.4.21-278

If you upgraded to SLES 9, copy the saved backup configuration archive file (created in Example 6-2 on page 231) back to /usr/tank/server/DR/, and also restore any local scripts or utilities that were installed on top of SAN File System.
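If the archive was pushed to another host earlier with scp, it can be pulled back the same way (a sketch; the host and directory are the placeholders used earlier):

   scp admin@backuphost:/backups/sfs/DRfiles-tank-mds1-20050819123200.tar.gz /usr/tank/server/DR/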

6.3.5 Upgrade the MDS software


Upgrade the SAN File System software on each MDS by doing the following steps: 1. Determine what packages are currently installed and what software version is currently committed in the MDS, as shown in Example 6-7. Use the rpm command to display the Linux installed packages, and the SAN File System statcluster command to display the committed version. In this case, we are running SAN File System V2.2.1.
Example 6-7 Show current SAN File System rpm packages and committed version
tank-mds1:~ # rpm -qa | grep sfs
dosfstools-2.8-296
sfs-package-2.2.1-62
sfs.admin.linux-2.2.1-32
sfs.server.linux-2.2.1-32
tank-mds1:~ # sfscli
sfscli> statcluster
Name                             ATS_GBURG
ID                               61306
State                            Online
Target State                     Online
Last State Change                Aug 19, 2005 12:29:49 PM
Last Target State Change
Servers                          2
Active Servers                   1
Software Version                 2.2.1.32
Committed Software Version       2.2.1.32
Last Software Commit             Aug 19, 2005 10:40:58 AM
Software Commit Status           Not In Progress
Metadata Check State             Idle
Metadata Check Percent Completed 0 %


2. Mount the SAN File System CD in the CD-ROM, for example, at /media/cdrom. Install the 1.4.2-1.0 version of IBM Java Runtime Environment, provided in the SAN File System installation CD, using the following command:
rpm -U /media/cdrom/common/IBMJava2-142-ia32-JRE-1.4.2-1.0.i386.rpm

Upgrade the MDS to V2.2.2 by running the install_sfs-package-<version>.sh script, as shown in Example 6-8 on page 238. Run the installation script that corresponds to the version of SUSE Linux Enterprise Server that is installed on your system. There are two install_sfs-package scripts on the SAN File System CD:
- For SUSE Linux Enterprise Server Version 8, in a directory named SLES8
- For SUSE Linux Enterprise Server Version 9, in a directory named SLES9
We are at SLES 8, so we run the script from that directory:
cd /media/cdrom/SLES8
./install_sfs-package-<version>.sh --restore /usr/tank/server/DR/savedDRarchive --sfsargs "-noldap"

Use the --restore option and reference the archive file created in Example 6-2 on page 231. If your configuration is using local authentication, rather than LDAP, use the --sfsargs -noldap option as shown; otherwise, the command will be of the format:
./install_sfs-package-<version>.sh --restore /usr/tank/server/DR/savedDRarchive

that is, without the --sfsargs "-noldap" option. Do not attempt to migrate to local authentication during the rolling upgrade process; either migrate before upgrading (and test thoroughly) or after the upgrade is complete.
The install_sfs package is a self-extracting archive and shell script, and contains the software packages for all SAN File System components, including the metadata server, the administrative server, and all clients. Note that the version string for the install_sfs-package might differ from the version strings of the individual packages, but this does not cause any problems with the installation.
Using Example 6-8 on page 238 as a reference, enter the number corresponding to the language to use for the installation (we entered 2 for English), press Enter to display the license agreement, and enter 1 to accept it. The process extracts the packages, then prompts you for the server configuration parameters. Accept the prompted entries if they are correct; otherwise, enter amended values. You should have saved this information in step 4 on page 232 of 6.2, "Preparing to upgrade the cluster" on page 231. Note particularly that you have to enter the TCP/IP address of your RSA II card where prompted (System Management IP). This is the address that you saved in step 1 on page 231 of 6.2, "Preparing to upgrade the cluster" on page 231, and recorded in step 4 on page 232 of the same section. If you are using LDAP authentication, you will also be prompted to enter these values: LDAP_SERVER, LDAP_USER, LDAP_PASSWD, LDAP_SECURED_CONNECTION, LDAP_BASEDN_ROLES, LDAP_ROLE_MEM_ID_ATTR, LDAP_USER_ID_ATTR, and LDAP_ROLE_ID_ATTR. You should also have saved these in step 4 on page 232 of 6.2, "Preparing to upgrade the cluster" on page 231. If you are already using local authentication, make sure to enter a valid locally defined user ID/password combination that is a member of the Administrator group at the CLI_USER/CLI_PASSWD prompts; otherwise, enter the LDAP user ID with the Administrator role.


Example 6-8 Upgrade cluster: Install SAN File System package part 1
tank-mds2:/media/cdrom/SLES8 # ./install_sfs-package-2.2.2-132.i386.sh --restore /usr/tank/server/DR/DRfiles-tank-mds1-20050819123200.tar.gz --sfsargs "-noldap"
Software Licensing Agreement
1. Czech
2. English
3. French
4. German
5. Italian
6. Polish
7. Portuguese
8. Spanish
9. Turkish
Please enter the number that corresponds to the language you prefer.
2
Software Licensing Agreement
Press Enter to display the license agreement on your screen. Please read the agreement carefully before installing the Program. After reading the agreement, you will be given the opportunity to accept it or decline it. If you choose to decline the agreement, installation will not be completed and you will not be able to use the Program.

International Program License Agreement
Part 1 - General Terms
BY DOWNLOADING, INSTALLING, COPYING, ACCESSING, OR USING THE PROGRAM YOU AGREE TO THE TERMS OF THIS AGREEMENT. IF YOU ARE ACCEPTING THESE TERMS ON BEHALF OF ANOTHER PERSON OR A COMPANY OR OTHER LEGAL ENTITY, YOU REPRESENT AND WARRANT THAT YOU HAVE FULL AUTHORITY TO BIND THAT PERSON, COMPANY, OR LEGAL ENTITY TO THESE TERMS. IF YOU DO NOT AGREE TO THESE TERMS,
- DO NOT DOWNLOAD, INSTALL, COPY, ACCESS, OR USE THE PROGRAM; AND
- PROMPTLY RETURN THE PROGRAM AND PROOF OF ENTITLEMENT TO
Press Enter to continue viewing the license agreement, or, Enter "1" to accept the agreement, "2" to decline it or "99" to go back to the previous screen.
1
Installing sfs-package-2.2.2-132.i386.rpm......
sfs-package ##################################################
sfs-package-2.2.2-132
Installing /usr/tank/packages/sfs.locale.linux_SLES8-2.2.2-8.i386.rpm......
sfs.locale.linux_SLES8 ##################################################
sfs.locale.linux_SLES8-2.2.2-8
Installing /usr/tank/packages/sfs.server.verify.linux_SLES8-2.2.2-91.i386.rpm......
sfs.server.verify.linux_SLES8##################################################
sfs.server.verify.linux_SLES8-2.2.2-91


Installing /usr/tank/packages/sfs.server.config.linux_SLES8-2.2.2-91.i386.rpm......
sfs.server.config.linux_SLES8##################################################
sfs.server.config.linux_SLES8-2.2.2-91
SAN File System CD mount point (CD_MNT)
=======================================
setupsfs needs to access the SAN File System CD to verify the license key and install required software. Enter the full path to the SAN File System CD's mount point.
CD's mount point [/media/cdrom]:
Truststore Password (TRUSTSTORE_PASSWD)
=======================================
Enter the password used to secure the truststore file. The password must be at least six characters.
Truststore Password [-]: ibmstore
CIMOM port (CIMOM_PORT)
=======================
The CIMOM port is the port used for secure administrative operations.
CIMOM port number [5989]:
CLI User (CLI_USER)
===================
Enter the user name that will be used to access the administrative CLI. This user must have an administrative role.
CLI User [-]: itsoadm
CLI Password (CLI_PASSWD)
=========================
Enter the password used to access the administrative CLI.
CLI Password [-]: xxxx
System Managment IP (SYS_MGMT_IP)
=================================
Enter the System Managment IP address. This is the address assigned to your RSAII card.
System Managment IP [-]: 9.82.22.176


3. The process continues installing the new packages on the MDS, as shown in Example 6-9.
Example 6-9 Upgrade cluster: Install SAN File System package part 2
.
Gathering required files
.HSTPV0035I Machine tank-mds2 complies with requirements of SAN File System version 2.2.2.91, build sv22_0001.
.
Installing:wsexpress-5.1.2-1.i386.rpm on 9.82.24.97
.
wsexpress-5.1.2-1
.
Installing:ibmusbasm-1.09-2.i386.rpm on 9.82.24.97
.
Found Product ID 4001 USB Service Processor. Installing the USB Service Processor driver.
ibmusbasm-1.09-2
.
Installing:sfs.admin.linux_SLES8-2.2.2-91.i386.rpm on 9.82.24.97
.
HSTWU0011I Installing the SAN File System console...
HSTWU0014I The SAN File System console has been installed successfully.
sfs.admin.linux_SLES8-2.2.2-91
.
Installing:sfs.server.linux_SLES8-2.2.2-91.i386.rpm on 9.82.24.97
.
sfs.server.linux_SLES8-2.2.2-91
Restoring configuration files on 9.82.24.97
.
Updating configuration file: /usr/tank/admin/config/cimom.properties
Starting the CIM agent on 9.82.24.97
.
Starting the SAN File System Console on 9.82.24.97
.
Configuration complete.

4. Check the status of the upgraded MDS with the lsserver command, as shown in Example 6-10.
Example 6-10 Check MDS status
tank-mds2:~ # sfscli lsserver
Name      State       Server Role Filesets Last Boot
==================================================================
tank-mds2 Not Running Subordinate 0        Jan 1, 1970 12:00:00 AM

5. Check to see what SAN File System packages have been installed (see Example 6-11). Compare it to Example 6-7 on page 236.
Example 6-11 Check new installed packages
tank-mds2:~ # rpm -qa|grep sfs
dosfstools-2.8-296
sfs-package-2.2.2-132
sfs.server.config.linux_SLES8-2.2.2-91
sfs.locale.linux_SLES8-2.2.2-8
sfs.server.verify.linux_SLES8-2.2.2-91
sfs.server.linux_SLES8-2.2.2-91
sfs.admin.linux_SLES8-2.2.2-91

6. Now we can start the upgraded MDS to run it at V2.2.2, as shown in Example 6-12.
Example 6-12 Start the upgraded server
tank-mds2:~ # /usr/tank/admin/bin/sfscli startserver tank-mds2
Are you sure you want to start the metadata server? Starting the metadata server might cause filesets to be reassigned to this metadata server in accordance with the fileset assignment algorithm. [y/n]:y
CMMNP5248I Metadata server tank-mds2 started successfully.

7. The MDS will rejoin the cluster. Check this on the master MDS, as shown in Example 6-13.
Example 6-13 Upgraded server rejoins the cluster
tank-mds1:~ # sfscli lsserver
Name      State  Server Role Filesets Last Boot
==============================================================
tank-mds1 Online Master      1        Aug 19, 2005 10:41:01 AM
tank-mds2 Online Subordinate 0        Aug 19, 2005 3:24:25 PM

8. Re-enable the automatic restart capability on the MDS that was just upgraded using the startautorestart command, as shown in Example 6-14.
Example 6-14 Re-enable autorestart
tank-mds2:~ # sfscli startautorestart tank-mds2
CMMNP5365I The automatic restart service for metadata server tank-mds2 successfully enabled

9. Repeat these steps for all subordinate MDS.

6.4 Special case: upgrading the master MDS


After upgrading all subordinate MDSs, the final step is to upgrade the master, tank-mds1 in our configuration. The process is almost the same; however, there are a few special considerations.
1. First, stop the SAN File System server on tank-mds1 and allow the master role to fail over to tank-mds2, as in Example 6-15.
Example 6-15 Stop master MDS and failover master role
tank-mds1:~ # sfscli lsserver
Name      State  Server Role Filesets Last Boot
==============================================================
tank-mds1 Online Master      1        Aug 19, 2005 10:41:01 AM
tank-mds2 Online Subordinate 0        Aug 19, 2005 3:24:25 PM
tank-mds1:~ # sfscli stopserver tank-mds1
Are you sure you want to stop the metadata server tank-mds1? This operation distributes this metadata server workload to the remaining metadata servers. [y/n]:y
CMMNP5252I Metadata server tank-mds1 stopped gracefully.
tank-mds1:~ # sfscli lsserver
Name      State       Server Role Filesets Last Boot
==================================================================
tank-mds1 Not Running Master      -
tank-mds2 Joining     Subordinate 0        Aug 19, 2005 3:24:25 PM
tank-mds1:~ # sfscli lsserver
Name      State       Server Role Filesets Last Boot
==================================================================
tank-mds1 Not Running Master      -
tank-mds2 Joining     Subordinate 0        Aug 19, 2005 3:24:25 PM
tank-mds1:~ # sfscli lsserver
Name      State       Server Role Filesets Last Boot
====================================================
tank-mds1 Not Running Subordinate -

2. Check on tank-mds2 that the master role is correctly assumed (see Example 6-16).
Example 6-16 Check master role failover
tank-mds2:~ # sfscli lsserver
Name      State       Server Role Filesets Last Boot
==================================================================
tank-mds1 Not Running Subordinate -
tank-mds2 Online      Master      1        Aug 19, 2005 3:24:25 PM

3. On the MDS that was shutdown, disable the automatic restart capability using the stopautorestart command, as shown in Example 6-17.
Example 6-17 Disable autorestart
tank-mds1:~ # sfscli stopautorestart tank-mds1
CMMNP5365I The automatic restart service for metadata server tank-mds1 successfully disabled

4. Now follow the same steps to upgrade the final MDS, as described in 6.3.2, "Upgrade MDS BIOS and RSA II firmware" on page 234, 6.3.3, "Upgrade the disk subsystem software" on page 235, 6.3.4, "Upgrade the Linux operating system" on page 236, and 6.3.5, "Upgrade the MDS software" on page 236.
5. After the upgrade is complete, check the status of tank-mds1 (see Example 6-18).
Example 6-18 Check upgraded MDS status
tank-mds1:~ # sfscli lsserver
Name      State       Server Role Filesets Last Boot
==================================================================
tank-mds1 Not Running Subordinate 0        Jan 1, 1970 12:00:00 AM

6. Check that the new SAN File System packages were installed (see Example 6-19).
Example 6-19 Check SAN File System packages upgraded
tank-mds1:~ # rpm -qa|grep sfs
dosfstools-2.8-296
sfs-package-2.2.2-132
sfs.server.config.linux_SLES8-2.2.2-91
sfs.locale.linux_SLES8-2.2.2-8
sfs.server.verify.linux_SLES8-2.2.2-91
sfs.server.linux_SLES8-2.2.2-91
sfs.admin.linux_SLES8-2.2.2-91

7. Start the MDS, tank-mds1 (see Example 6-20 on page 243).


Example 6-20 Start upgraded MDS
tank-mds1:~ # /usr/tank/admin/bin/sfscli startserver tank-mds1
Are you sure you want to start the metadata server? Starting the metadata server might cause filesets to be reassigned to this metadata server in accordance with the fileset assignment algorithm. [y/n]:y
CMMNP5248I Metadata server tank-mds1 started successfully.

8. The master role will remain on tank-mds2. On tank-mds2, check the status of both servers to make sure that the software on both servers has been upgraded, as in Example 6-21.
Example 6-21 Check all MDS are upgraded
tank-mds2:~ # sfscli lsserver -l
Name      State  Last State Change       Target State Last Target State Change Server Role Filesets Last Boot               Current Time            Most Current Software Version
===========================================================================================================================================================================
tank-mds2 Online Aug 19, 2005 4:24:03 PM Online                                Master      0        Aug 19, 2005 3:24:25 PM Aug 19, 2005 4:24:51 PM 2.2.2.91
tank-mds1 Online Aug 19, 2005 3:58:58 PM Online                                Subordinate 1        Aug 19, 2005 3:58:44 PM Aug 19, 2005 3:59:47 PM 2.2.2.91

6.5 Commit the cluster upgrade


After upgrading each MDS using the rolling procedure, the statcluster command will show a Software Version of 2.2.2.91 and a Committed Software Version of 2.2.1.32. This indicates that the cluster is still running the older SAN File System version, but is ready to commit the new version, V2.2.2. We commit the upgrade with the upgradecluster command on the master MDS, as in Example 6-22. Rerun the statcluster command to show that the commit has completed.
Example 6-22 Commit the upgrade
tank-mds2:~ # sfscli upgradecluster
Are you sure you want to upgrade the cluster software? [y/n]:y
CMMNP5210I Cluster upgrade successful.
tank-mds2:~ # sfscli statcluster
Name                             ATS_GBURG
ID                               61306
State                            Online
Target State                     Online
Last State Change                Aug 19, 2005 4:26:09 PM
Last Target State Change
Servers                          2
Active Servers                   2
Software Version                 2.2.2.91
Committed Software Version       2.2.2.91
Last Software Commit             Aug 19, 2005 4:26:06 PM
Software Commit Status           Not In Progress
Metadata Check State             Idle
Metadata Check Percent Completed 0 %
Installation Date                Aug 19, 2005 10:40:58 AM

9. Congratulations! Your cluster is now upgraded to V2.2.2 of SAN File System.
10. You may now disconnect the USB/RS-485 serial network interface on the RSA cards. This is not required, but the interface and connection are no longer used by SAN File System.


6.6 Upgrading the SAN File System clients


Although the V2.2.1 SAN File System clients will run with a cluster running at V2.2.2, we recommend upgrading the clients to the new V2.2.2 client code as soon as possible to take advantage of new features and performance improvements.

6.6.1 Upgrade SAN File System AIX clients


1. First, copy the new V2.2.2 AIX client software to a local directory on each AIX client. You can do this via secure FTP or using the SAN File System browser interface. Make sure to use the correct package corresponding to your version of AIX. See 5.3.4, "SAN File System AIX client installation" on page 169 for the package names.
2. Stop all applications using SAN File System on the AIX client. Check whether the SAN File System is in use, as shown in Example 6-23.
Example 6-23 Showing the mounted AIX File Systems
root@sanm80:/ > mount
  node     mounted         mounted over    vfs    date         options
-------- --------------- --------------- ------ ------------ ---------------
         /dev/hd4        /               jfs    Aug 17 16:27 rw,log=/dev/hd8
         /dev/hd2        /usr            jfs    Aug 17 16:27 rw,log=/dev/hd8
         /dev/hd9var     /var            jfs    Aug 17 16:27 rw,log=/dev/hd8
         /dev/hd3        /tmp            jfs    Aug 17 16:28 rw,log=/dev/hd8
         /dev/hd1        /home           jfs    Aug 17 16:29 rw,log=/dev/hd8
         /proc           /proc           procfs Aug 17 16:29 rw
         /dev/hd10opt    /opt            jfs    Aug 17 16:29 rw,log=/dev/hd8
         /dev/fslv00     /downloads      jfs2   Aug 17 16:29 rw,log=/dev/loglv00
root@sanm80:/ > fuser -cuxV /mnt/sanfs
/mnt/sanfs:

3. If there are any outstanding processes accessing the mount point, they should be terminated.
4. Stop the SAN File System client using the rmstclient command with the -noprompt option, as shown in Example 5-38 on page 174. This command will fail if the SAN File System is being accessed by the client, so make sure to stop all use of SAN File System on the AIX client, as described in the previous step.
5. Copy the current stclient.conf configuration file to a temporary location on the AIX client:
cp /usr/tank/client/config/stclient.conf /tmp

Note: If you did not choose to save the setup configuration when the AIX client was first installed, you may not have this file.
6. Remove the current SAN File System software from the AIX system, as described in "Uninstalling the AIX SAN File System client" on page 175. Then install the new client package, as described in 5.3.4, "SAN File System AIX client installation" on page 169. Make sure to specify the location of the new version package file where it was saved in the first step.
7. Copy your saved stclient.conf file back to the /usr/tank/client/config directory:
cp /tmp/stclient.conf /usr/tank/client/config

8. Reconfigure the SAN File System client, using the stored parameters in the stclient.conf file, as shown in "Configuring the AIX client to the SAN File System server" on page 172. Use the -noprompt option to have the setup run silently, using the values in the configuration file.

6.6.2 Upgrade Solaris/Linux clients


The procedure to upgrade Solaris and Linux clients is similar to that for AIX. Basically, obtain the new client code, back up the existing configuration file, remove the older version, install the new version, and re-run the configuration step. See 5.3.2, "SAN File System Linux client installation" on page 164, 5.3.3, "SAN File System Solaris installation" on page 168, and the IBM TotalStorage SAN File System Installation and Configuration Guide, GA27-4316, for details.

6.6.3 Upgrade SAN File System Windows clients


The following steps will help you upgrade the Windows 2000 and 2003 SAN File System clients. These should be performed at the actual console of the client; at the time of writing this redbook, it was not supported (and caused errors) to use a Terminal Services (including Windows Remote Desktop) session for installing the Windows clients. Check the Release Notes to see if this restriction still applies; if in doubt, install at the physical console.
1. Download the V2.2.2 Windows client software to your Windows client from an MDS. You can use secure FTP or the SAN File System GUI. To use the GUI, select Download Client Software from the main window, as shown in Figure 6-1.

Figure 6-1 SAN File System console

2. Scroll down to the Windows 2000 or 2003 client section and save the executable file to a temporary directory. It will be called sfs-client-WIN2K3-2.2.2-x.exe (where x is the release number).


3. Determine the current configuration information for the client, including:
- SAN File System master server IP address or host name
- SAN File System server port
- SAN File System preferred drive letter
- SAN File System client name
- SAN File System network connection type (TCP or UDP)
- SAN File System client critical error handling policy
4. Stop all applications using SAN File System on the Windows client.
5. Uninstall the current client version, as shown in "Removing the SAN File System Windows client" on page 157. Make sure you reboot the client following a successful de-installation.
6. Install the new version, as shown in "Windows client installation steps" on page 149, being sure to specify the new client package you just obtained from the MDS. Enter the saved configuration parameters.

6.7 Switching from LDAP to local authentication


You can switch an existing MDS cluster from LDAP to local authentication at any time (except during a SAN File System software upgrade). The simplest procedure is to define, for local authentication, user IDs and passwords identical to those already in use in the LDAP server. To change the administrative authentication/authorization method from LDAP to local, do the following:
1. Define a UNIX group for each role defined in the LDAP server by issuing the commands:
# groupadd Administrator
# groupadd Operator
# groupadd Backup
# groupadd Monitor

You must use these exact group names and define all of the groups.
2. For each LDAP user ID that was used for SAN File System, define a UNIX user ID, and specify the same password. When defining each user ID with the useradd command, specify the group that matches its LDAP role. You may decide to use different user IDs than were previously used in LDAP; if so, note the special steps required later in this section. In our case, we will preserve an existing ID, ITSOMon, but replace the previous ITSOAdmin ID with itsoadm (remember, user IDs, groups, and passwords are case sensitive).
# useradd -g Administrator itsoadm
# passwd itsoadm
(Specify a password when prompted.)
# useradd -g Monitor ITSOMon
# passwd ITSOMon
(Specify a password when prompted.)
Repeat for each user ID that was defined in LDAP or for any new user ID required. We recommend limiting UNIX user IDs to eight characters or fewer.
3. After making these definitions identically on each MDS, log in to each MDS using each user ID to verify the ID/password combination and to make sure a /home/userid directory structure exists. Create home directories if required (use the mkdir command). You can also list the contents of the /etc/passwd and /etc/group files to verify that the intended UNIX groups and user IDs were added to the MDSs.
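For example, a quick check on each MDS using standard Linux commands could be:

   grep -E '^(Administrator|Operator|Backup|Monitor):' /etc/group   # confirm the four required groups exist
   id itsoadm                                                       # confirm the user ID and its primary group
   ls -d /home/itsoadm                                              # confirm the home directory exists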


4. After making these definitions on every MDS, reconfigure the cluster to use local authentication. On each MDS, enter the following to stop the administrative agent:
# /usr/tank/admin/bin/stopCimom
5. Edit /usr/tank/admin/config/cimom.properties and change the line beginning with AuthModule to:
AuthModule=com.ibm.storage.storagetank.auth.SFSLocalAuthModule
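If you prefer to script this edit rather than use an editor, a portable sed substitution (a sketch only; keep a backup copy of the file) could look like this:

   cd /usr/tank/admin/config
   cp cimom.properties cimom.properties.bak
   sed 's/^AuthModule=.*/AuthModule=com.ibm.storage.storagetank.auth.SFSLocalAuthModule/' cimom.properties.bak > cimom.properties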

6. Re-start the administrative agent:


/usr/tank/admin/bin/startCimom

Repeat steps 4 to 6 on each MDS. You are now using local authentication and can delete the SAN File System definitions from the LDAP server, as they are no longer required. You will now log in to the CLI and GUI using a local user ID and password combination. You can only use user IDs that are members of one of the SAN File System standard groups; an attempt to use the CLI or GUI with a user ID that is not a member of one of these groups will fail.
You do not need to log in to the operating system as a SAN File System user ID to run the CLI; the CLI is just an application that runs after you log in. Therefore, you can log in to the MDS as any user, and then run the CLI as a SAN File System user ID. The user ID that will be used to run the CLI is specified in the .tank.passwd file in the home directory of the ID that logged in to the operating system. Check that the user IDs and passwords specified in any existing .tank.passwd files have been configured locally. If you have used different user IDs from the previous LDAP configuration (for example, in our case, to illustrate this, we had ITSOAdmin in LDAP but defined instead a user ID itsoadm; remember, these are case sensitive), you will need to update any .tank.passwd files to reflect the correct user ID and password combination. For example, in our case for root, our /root/.tank.passwd file previously contained:
ITSOAdmin:xxxxx

where xxxxx is the actual LDAP password. This indicates that when we log in to the MDS operating system as root and run the CLI, it runs with the privileges of the ITSOAdmin user ID. Since this ID no longer exists, we need to replace it with a valid SAN File System local user ID. Use the tankpasswd command, as shown in Example 6-24. Change to the home directory of the user that logged in to the MDS (root in this example), then update the .tank.passwd file to set the user ID to be used when logging in to the CLI. Repeat this process while logged in as any other user ID that has been accessing the CLI. You have to configure the .tank.passwd file even if you log in to the MDS operating system as the same user ID that will run the CLI.
Example 6-24 Update the CLI password
# cd ~
# cat .tank.passwd
ITSOAdmin:password
# /usr/tank/admin/bin/tankpasswd -u itsoadm -p password
# cat .tank.passwd
itsoadm:password
# sfscli ladmuser
tank-mds4:~ # sfscli lsadmuser
Name    User Role Authorization
===============================
ITSOMon Monitor   Not Current
itsoadm Admin     Current


The example also shows the lsadmuser command; this command displays the currently defined SAN File System user IDs, and shows that our current session ran under the user ID itsoadm (Authorization is Current). If you subsequently upgrade the SAN File System software, make sure to enter a valid locally defined user ID/password combination that is a member of the Administrator group at the CLI_USER/CLI_PASSWD prompts (as in Example 6-8 on page 238).


Part 3. Configuration, operation, maintenance, and problem determination


In this part of the book, we present detailed information for configuring, operating, protecting, and solving problems for the IBM TotalStorage SAN File System.


Chapter 7. Basic operations and configuration


In this chapter, we discuss the following topics:
- Administrative interfaces: CLI and Web interface
- Basic navigation and verifying the cluster setup
- Adding and removing LUNs and volumes (volume drain)
- Creating storage pools and filesets
- Expanding volumes
- Non-uniform SAN File System configurations
- Setting up SAN File System file placement policies
- Client operations: mounting file systems and file sharing (homogeneous/heterogeneous)


7.1 Administrative interfaces to SAN File System


The hardware servers that run the Metadata servers are generically known as engines. There are two methods for managing SAN File System: a command-line interface (CLI) and a graphical user interface (GUI), which is called the SAN File System console. The CLI is accessed either by logging in directly to the engine itself (at the KVM) or by using a Secure Shell (SSH) client to connect remotely to the engine. The console is accessed using a Web browser. This section describes how to access the CLI and the console.
SAN File System provides you with different levels of user access to perform administrative tasks. The user roles are defined on your LDAP server or locally within the Linux operating system of the MDSs. Therefore, you must use an appropriate user ID when accessing either the SAN File System GUI or CLI. We show you how to set up an LDAP environment for SAN File System in 4.1.2, "LDAP and SAN File System considerations" on page 101, with further instructions in Appendix A, "Installing IBM Directory Server and configuring for SAN File System" on page 565, and Appendix B, "Installing OpenLDAP and configuring for SAN File System" on page 589. We showed how to set up local authentication in 4.1.1, "Local authentication configuration" on page 100 and 6.7, "Switching from LDAP to local authentication" on page 246.

7.1.1 Accessing the CLI


The CLI is accessed using a secure shell (SSH). SSH is a client-server network application; the SAN File System cluster acts as the SSH server in this relationship. The SSH client provides a secure environment in which you connect to a remote machine, where data submitted between the client and the server is encrypted. In order to use the CLI for SAN File System, you must have an SSH client installed on the system that will access SAN File System, and an operating system (Linux) user ID and password on the MDS that you wish to log on to. There are several different SSH clients available, including PuTTY (which is shipped in the Master Console installation package) and Cygwin. Both of these clients are shareware and free to download. Here are the Web sites for the download packages:
- Cygwin: http://www.cygwin.com/setup.exe
- PuTTY: http://www.chiark.greenend.org.uk/~sgtatham/putty/
After installing the SSH client, you can connect to an MDS remotely. To do this, you need a Linux user ID and password for the MDS. We do not recommend using the root ID, for security reasons. To create additional users on the MDS, you can use the Linux command useradd. To start an SSH session with the MDS using Cygwin, at the prompt, type ssh userid@MDS-ipaddress, answer Yes to add it to the known hosts file, and enter the password for that user ID, as shown in Example 7-1.
Example 7-1 Connecting to master MDS using cygwin
$ ssh root@9.42.164.115
The authenticity of host '9.42.164.115 (9.42.164.115)' can't be established.
RSA key fingerprint is 20:09:40:d9:44:e9:ff:a2:0c:a2:80:df:b5:cc:19:b5.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '9.42.164.115' (RSA) to the list of known hosts.
root@9.42.164.115's password:
Last login: Mon May 17 11:53:46 2004 from 9.37.229.38
Welcome to SAN File System


mds1:~ #

If using PuTTY, start the PuTTY interface, and create a session for your MDS, as shown in Figure 7-1.

Figure 7-1 Create PuTTY ssh session.

SAN File System CLI password


You may have noticed that when running the SAN File System CLI (sfscli), you are not prompted for a user ID and password. Yet you have defined, either in LDAP or at the operating system, user ID/group (role)/password combinations to access SAN File System. You explicitly enter an authorized user ID and password when running the SAN File System GUI (see 7.1.2, "Accessing the GUI" on page 256), but not when running the CLI. How?
The answer is that when the CLI (sfscli) is run, SAN File System checks for a password file (called .tank.passwd) in the home directory of the user logged in to the operating system. If it exists, it checks whether it contains a valid user ID/password combination (either in the OS or LDAP, depending on the authentication method used). It then uses the privilege level (role/group) of the specified user ID to determine which SAN File System commands are or are not valid for execution. If the file does not exist, or if the user ID in it is not a member of one of the required SAN File System groups/LDAP roles (Backup, Administrator, Monitor, or Operator), or if the password is not correct for that user ID, you cannot run sfscli commands. The SAN File System installation automatically creates a .tank.passwd file in the home directory of the root user (/root). The .tank.passwd file contains the user ID and password specified by the CLI_USER and CLI_PASSWD parameters that are prompted for during the installation. This is why you can log in as root to an MDS and then run sfscli without further authentication.


If you want to run the CLI when logged in as a non-root user ID, you must manually create a password file specifying a valid SAN File System user ID/password combination in the home directory of the login user ID. Use the tankpasswd command with a SAN File System user ID and password combination, as shown in Example 7-2. In the example, the session is logged in as lxuser, but after the command is run, when lxuser runs sfscli, it will run with the privilege level of the user ID ITSOMon.
Example 7-2 Create .tank.passwd file for non-root users
lxuser@mds1:~> cd $HOME
lxuser@mds1:~> /usr/tank/admin/bin/tankpasswd -u ITSOMon -p password
lxuser@mds1:~> cat .tank.passwd
ITSOMon:password
lxuser@mds1:~>

SAN File System sfscli


Now you can start a SAN File System CLI (sfscli) session to run commands in interactive mode. This utility can also run a single command (by appending the specific command to sfscli) or run a set of commands from a script. The sfscli program is in the directory /usr/tank/admin/bin. To avoid having to use the whole path each time you enter sfscli, edit the PATH statement in the file .bashrc in your home directory. It should look similar to this:
export PATH=$JAVA_HOME/bin:$PATH:.:/usr/tank/admin/bin

If you have done this, you can now start a sfscli session by simply typing sfscli, as shown in Example 7-3. Most administrative tasks necessary to administer the cluster can be run using the CLI. A few tasks (for example, during installation) are executed outside of sfscli, that is, directly at the MDS operating system. For these, mostly standard Linux commands are used.
Example 7-3 Starting sfscli
mds1:~ # sfscli
sfscli>
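As noted above, sfscli can also be run non-interactively by appending a command, which is convenient for scripting; for example (illustrative only, using commands covered later in this chapter):

   sfscli lsserver                     # run a single command and return to the shell
   sfscli lsvol > /tmp/volumes.txt     # capture the output for later review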

Type help at the sfscli prompt to get a list of commands available (Example 7-4).
Example 7-4 Access help using sfscli
sfscli> help
activatevol        addprivclient      addserver          addsnmpmgr
attachfileset      autofilesetserver  builddrscript      catlog
catpolicy          chclusterconfig    chdomain           chfileset
chldapconfig       chpool             chvol              clearlog
collectdiag        detachfileset      disabledefaultpool dropserver
exit               expandvol          help               lsadmuser
lsautorestart      lsclient           lsdomain           lsdrfile
lsfileset          lsimage            lslun              lspolicy
lspool             lsproc             lsserver           lssnmpmgr
lstrapsetting      lsusermap          lsvol              mkdomain
mkdrfile           mkfileset          mkimage            mkpolicy
mkpool             mkusermap          mkvol              mvfile
quiescecluster     quit               rediscoverluns     refreshusermap
reportclient       reportfilesetuse   reportvolfiles     resetadmuser
resumecluster      reverttoimage      rmdomain           rmdrfile
rmfileset          rmimage            rmpolicy           rmpool
rmprivclient       rmsnmpmgr          rmusermap          rmvol
setdefaultpool     setfilesetserver   setoutput          settrap
startautorestart   startcluster       startmetadatacheck startserver
statcluster        statfile           statfileset        statldap
statpolicy         statserver         stopautorestart    stopcluster
stopmetadatacheck  stopserver         suspendvol         upgradecluster
usepolicy
sfscli>

To get more information about a specific command, type help clicommand (see Example 7-5). This will show the full reference for the selected command, including syntax and examples. You can also use help -s clicommand to display just the short description of a command.
Example 7-5 Help on specific CLI command
sfscli> help rmfileset
rmfileset
    Removes one or more empty, detached filesets and optionally the files in the filesets, including any FlashCopy(R) images.

    rmfileset [-? | -h | -help] [-quiet] [-f] fileset_name ...
<< information deleted >>

You can run sfscli on any MDS; however, many commands are valid for execution only at the master MDS. Also, some commands execute differently on the master and subordinate MDSs; for example, the lsserver command, when run on the master MDS, will list all MDSs in the cluster, but if issued from a subordinate MDS, it displays attributes only about the local MDS.
Tip: To display which of the cluster nodes is currently running as the master MDS, use the statcluster -netconfig command from the SAN File System command-line interface.
More information about command restrictions and operations can be found in the IBM TotalStorage SAN File System Administrator's Guide and Reference, GA27-4317. This will also tell you the privileges (Administrator, Backup, Monitor, or Operator) required for the various commands.


7.1.2 Accessing the GUI


The SAN File System console is a Web-based GUI for administering SAN File System. For a list of supported Web browsers to access the console, see 3.9, Client needs and application support on page 85. The SAN File System console includes a banner, task bar, work frame, work area, and a help assistant, as shown in Figure 7-3 on page 257. To access the console, open up a browser window and enter the URL https://mdsmasteripaddress:7979/sfs. It will redirect automatically to the master MDS if you try to connect to a subordinate engine. When you launch the console, the login window (Figure 7-2) displays.

Figure 7-2 SAN File System GUI login window

Enter your administrator ID and password to display the main window (Figure 7-3 on page 257).


Figure 7-3 GUI welcome window (callouts identify the task bar, the Refresh button, the Help Assistant icon, the My work frame, and the My work area)

The My work frame area on the left hand side contains links to the SAN File System administrative functions, consisting of a series of embedded menus.


Several user assistance resources are also available, including an embedded Help Assistant for window help information, as well as a more comprehensive SAN File System Information Center. To open the embedded Help Assistant, click the Help Assistant Icon in the top right corner. To access the Information Center, select one of the topics under the SAN File System Assistance section in the work area. The Information Center will then open up a new window, as shown in Figure 7-4.

Figure 7-4 Information Center

7.2 Basic navigation and verifying the cluster setup


Now we will introduce some of the basic tasks used in SAN File System to verify the cluster setup, following on from the installation. For informational purposes, the GUI window selections are shown as Selection → Selection. For brevity, GUI screen captures are not shown and CLI examples are given. Start an sfscli session on the master MDS, as shown in Example 7-3 on page 254.

7.2.1 Verify servers


GUI: Manage Servers and Clients → Servers. First, check that all your MDS servers are up and running. Use the lsserver command at the master MDS, as shown in Example 7-6 on page 259.


Example 7-6 List server
sfscli> lsserver
Name State  Server Role Filesets Last Boot
=========================================================
mds1 Online Master      2        May 14, 2004 2:47:31 AM
mds2 Online Subordinate 0        May 16, 2004 10:39:58 PM

In this output:
- Name: The name of the MDS.
- State: Indicates the state of the MDS. The possible states are: Failed Initialization, Fully Quiescent, Initializing, Joining, Not Added, Not Running, Offline, Online, Partly Quiescent, and Unknown.
- Server Role: Indicates whether the MDS is master or subordinate.
- Filesets: Indicates the number of filesets assigned to the MDS.
- Last Boot: Shows when the engine was last started.

7.2.2 Verify system volume


GUI: Manage Storage → Volumes. If all the engines are up and running, check that the system volume is available. This is the raw device specified when installing the master MDS (rvpatha, for example). Use lsvol, as shown in Example 7-7. The lsvol command lists all the volumes defined by SAN File System. At this time, we only have one volume defined, which is assigned to the System Pool.
Example 7-7 List volumes and check your system volume
sfscli> lsvol
Name   State     Pool   Size (MB) Used (MB) Used (%)
=======================================================================
MASTER Activated SYSTEM 2032      240       11

In this output:
- Name: The name of the volume; in this case, MASTER is a system-defined name for the initial system volume.
- State: Indicates whether the volume is active or not (a volume can be activated using the activatevol command).
- Pool: The pool that the volume is assigned to (SYSTEM in this case, indicating the System Pool).
- Size (MB): Size of the volume in MB.
- Used (MB): Amount of space being used in MB.
- Used (%): Percentage of the available size being used in the volume.

7.2.3 Verify pools


GUI: Manage Storage → Storage Pools. During installation, a System Pool (SYSTEM) and a default User Pool (called DEFAULT_POOL) are created. All the metadata resides in the System Pool. All files for which no specific policy rules exist will be placed in the default User Pool. The System Pool will include (at least) one volume and the default User Pool will be empty (no volumes assigned).

To verify that these two pools exist after install, use lspool, as shown in Example 7-8.
Example 7-8 Verify the system and default pool
sfscli> lspool
Name         Type         Size (MB) Used (MB) Used (%) Threshold (%) Volumes
============================================================================
SYSTEM       System       2032      240       11       80            1
DEFAULT_POOL User Default 0         0         0        80            0

In this output:
Name: The name of the pool.
Type: Indicates whether it is a System Pool or User Pool. The default pool is also indicated.
Size (MB): Total size of the pool in MB.
Used (MB): Amount of space being used in the pool, in MB.
Used (%): Percentage of the available size being used in the pool.
Threshold (%): Percentage of the storage pool's estimated capacity which, when reached or exceeded, causes the MDS to generate an alert.
Volumes: Number of volumes defined in the pool.
All storage pools should be monitored to ensure they do not run out of space, but it is crucial, in particular, to monitor the System Pool. If the System Pool fills, then no metadata can be written and the cluster will be unavailable to the clients.
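For routine monitoring, a minimal script-friendly sketch run on the master MDS (assuming sfscli is in the PATH; the field number is taken from the lspool layout shown above and may need adjusting to your output):

# sfscli lspool SYSTEM | tail -n 1 | awk '{print "System Pool used (%):", $5}'

A line like this can be run from cron and combined with a notification mechanism of your choice when the value approaches the pool threshold.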

7.2.4 Verify LUNs


GUI: Manage Storage → Metadata LUNs. To check that the SAN File System can see all the LUNs that you have allocated from the back-end storage device, use lslun, as shown in Example 7-9. Please note that the lslun command, if performed without the -client <client_name> parameter, will show only LUNs physically visible to an MDS.
Example 7-9 List LUNs example
sfscli> lslun
Lun ID                                     Vendor Product Size (MB) Volume State
=================================================================================================
VPD83NAA6=600507680188801B2000000000000002 IBM    2145    102400           Available
VPD83NAA6=600507680188801B2000000000000001 IBM    2145    102400           Available
VPD83NAA6=600507680188801B2000000000000000 IBM    2145    2048      MASTER Assigned

In this output:
Lun ID: The WWN of the LUN assigned from the back-end storage device.
Vendor: Indicates the vendor of the back-end storage device.
Product: The type (product ID in this case) of the back-end storage device.
Size: The size of the LUN presented from the back-end storage device.
Volume: Indicates if a volume name has been defined for the volume within SAN File System.
State: Indicates whether the LUN is assigned to a pool or not.


In the example, we can see there are three LUNs mapped: one of them has a state of Assigned (and is our System Pool volume) and the other two are in the Available state, ready to be assigned. To list the LUNs which are visible to a SAN File System client, use the lslun -client <client_name> command, as shown in Example 7-10. GUI: Manage Storage → Data LUNs.
Example 7-10 List LUNs for a particular client example
sfscli> lslun -client LIXPrague
Lun ID                                     Vendor Product Size (MB) Volume         State
===========================================================================================
VPD83NAA6=600507680188801B200000000000001C IBM    2145    40959     vol_lixprague1 Assigned

7.2.5 Verify administrators


GUI: Administer Access → Users. Administrator users and roles are defined on your LDAP server or locally, depending on how you configured your system. You can check the users and roles with the lsadmuser command (see Example 7-11). Any user that is currently logged in will show Authorization: Current.
Example 7-11 List users and roles defined for SAN File System access
sfscli> lsadmuser
Name     User Role Authorization
=================================
itsoadm  Admin     Current
itsooper Operator  Not Current
itsomon  Monitor   Not Current
itsoback Backup    Not Current

In this output:
Name: The user ID defined on the LDAP server.
User Role: The LDAP role of the user ID.
Authorization: Indicates if the user ID is currently authenticated with the MDS.

7.2.6 Basic commands using CLI


Here is a list of additional basic commands and their functions, which you might find useful for day-to-day administration of SAN File System. A short usage sketch follows the list.
Tip: Remember to type help clicommand to get additional help on each command.
lsclient: Displays a list of clients that are currently being served by one or more MDS in the cluster. GUI: Manage Servers and Clients → Client Sessions.
addprivclient: Grants privileged access to the SAN File System global namespace to the specified clients. GUI: Manage Servers and Clients → Client Sessions → Select Action: Grant Clients Root Privileges.
catlog: Displays the contents of the various log files maintained by the administrative server and the cluster. GUI: Monitor System → Cluster Log/Administrative Log/Audit Log/Security Log.


clearlog: Clears the audit log and cluster log files. GUI: Follow the preceding menu selection and then Select Action: Clear Log.
chclusterconfig: Modifies the cluster settings that do not require a restart when changed. GUI: Manage Servers and Clients → Cluster → Select Properties → Select Localization or Tuning.
quiescecluster: Changes the state of all MDSs in the cluster to one of three quiescent states. GUI: Manage Servers and Clients → Cluster → Select Change State.
resumecluster: Brings all MDSs in the cluster to the online state. GUI: Manage Servers and Clients → Cluster → Select Change State.
startcluster: Starts all MDSs in the cluster and brings them to the full online state. GUI: Manage Servers and Clients → Cluster → Select Start Online.
statcluster: Displays status, network, workload, and configuration information about the cluster. GUI: Manage Servers and Clients → Cluster → Select Properties.
stopcluster: Stops all MDSs in the cluster gracefully. GUI: Manage Servers and Clients → Cluster → Select Stop.
lsserver: Displays a list of all MDSs in the cluster and their attributes (if issued from the master MDS), or displays attributes about the local MDS if issued from a subordinate MDS. GUI: Manage Servers and Clients → Server.
startserver: Starts the specified MDS. GUI: Manage Servers and Clients → Server → Select Server → Select Action → Start.
statserver: Displays status, configuration, and workload information for a specific MDS in the cluster, if issued from the master MDS. Displays status, configuration, and workload information for the local MDS if issued from a subordinate. GUI: Manage Servers and Clients → Server → Select Server → Select Action → Properties.
stopserver: Shuts down a subordinate MDS gracefully. GUI: Manage Servers and Clients → Server → Select Server → Select Action → Stop.
setfilesetserver: Reassigns an existing fileset to be hosted by a different Metadata server. GUI: Manage Filing → Click on desired Fileset → Properties → Select General Settings → Server Assignment Method.
statfileset: Displays the number of started and completed transactions for the filesets being served by the local MDS. GUI: Monitor System → Filesets.
startmetadatacheck: Starts the utility that performs a consistency check on the metadata for the entire system or a set of filesets, generates reports in the cluster log, and optionally repairs inconsistencies in the metadata. GUI: Maintain System → Check Metadata.
stopmetadatacheck: Stops the metadata check utility that is currently in progress. GUI: Maintain System → Check Metadata → Select Stop.
lsautorestart: Displays a list of MDSs and the automatic-restart settings for each. GUI: Maintain System → Restart Service.
startautorestart: Enables the MDS to restart automatically if it is down. GUI: Maintain System → Restart Service → Select Server → Select Action → Enable Service.
stopautorestart: Disables the MDS from restarting automatically if it is down. GUI: Maintain System → Restart Service → Select Server → Select Action → Disable Service.
lsproc: Displays a list of long-running processes that are not yet complete and their attributes. GUI: Monitor System → Processes.


setdefaultpool: Designates a User Pool to be the default storage pool, and changes the previous default pool to a regular, nondefault User Pool. GUI: Manage Storage → Storage Pools → General Properties → Select Enable (Select a User Pool).
resetadmuser: Forces all administrative users to log in again. GUI: Administer Access → Users → Select Actions → Timeout All Authorizations.
suspendvol: Suspends one or more volumes so that the MDS cannot allocate new data on the volumes. GUI: Manage Storage → Volumes → Select Volume → Select Action → Suspend.
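As a quick illustration, here is a hedged sketch of a short health-check pass using three of the commands above, run without arguments in an interactive session (output omitted):

sfscli> statcluster    # overall cluster state, network, and workload information
sfscli> lsclient       # clients currently being served by the cluster
sfscli> lsproc         # long-running processes that have not yet completed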

7.3 Adding and removing volumes


In SAN File System terms, a volume is a Logical Unit Number (LUN) labeled by SAN File System for its use and associated with a storage pool. The term LUN is industry-standard: it is the unit of storage assignable by a SAN or other disk subsystem to SAN File System servers and clients.
Note: During startup, the MDS scans all LUNs that it can access, searching for the label that tells it that the LUN is a valid SAN File System volume. Only volumes that have been added to storage pools have this label.

7.3.1 Adding a new volume to SAN File System


Our first task is to add a volume to the default User Pool. Before clients can store files into SAN File System, volumes must be assigned to the storage pools. You should have at least one volume assigned to the default User Pool, because files that do not match a rule within a policy end up in the default pool. Other storage pools can be created as required.
Note: Depending on your configuration, you may actually disable the default storage pool. If you do this, then special considerations apply. More details of this possibility are in 7.8.8, Policy management considerations on page 328.
You can add new LUNs (volumes) to system and user pools. If they are already visible to the client or MDS operating system, simply use the mkvol command, as described in this section. If you are adding brand new LUNs, they must first be created and allocated in your back-end storage system and made available to the client(s) or MDS; consult your storage system documentation to do this. After the LUNs have been made available, you will need to run the rediscoverluns command to make them visible. You can use the rediscoverluns command with the -client parameter to rediscover new LUNs for a particular client, as shown in Example 7-12. If the new LUN has been made visible to multiple clients, the rediscoverluns command must be run specifying each client in turn. If you have made new LUNs for system storage, run the rediscoverluns command without the -client parameter; this rediscovers metadata LUNs. GUI: Manage Storage → Available Data LUNs → Select client → Rediscover.
Example 7-12 Rediscover LUNs
sfscli> rediscoverluns -client AIXRome
CMMNP5410I The LUNs have been rediscovered. Tip: Run lslun to view the LUNs.
sfscli>


GUI: Manage Storage → Data LUNs, select the client name from the drop-down, and refresh. Our lab setup is shown in Figure 7-5.

Figure 7-5 Basic SAN File System configuration

To make sure that the LUNs are visible to SAN File System, use the lslun command, as shown in Example 7-9 on page 260. To list LUNs visible to a particular client, run the lslun command with the -client <client_name> parameter. Once you have verified that each client and MDS can see all of the required LUNs, you can start defining volumes. Use the mkvol command, as shown in Example 7-13. When adding LUNs to a user pool (to the default pool in our example), you must use the -client parameter, and the client specified must be one that has access to the LUN being added. In our case, we specify client AIXRome to add this LUN. You can specify any client with this command, as long as that client can see the LUN being added. GUI: Manage Storage → Add Volumes.
Example 7-13 Add a volume to your default pool
sfscli> lslun -client AIXRome
Lun ID                                     Vendor Product Size (MB) Volume State
=====================================================================================================
VPD83NAA6=600507680188801B2000000000000001 IBM    2145    102399           Available
VPD83NAA6=600507680188801B200000000000001C IBM    2145    40959            Available
sfscli> mkvol -lun VPD83NAA6=600507680188801B2000000000000001 -pool DEFAULT_POOL -client AIXRome -activate yes vol01


CMMNP5426I Volume vol01 was created successfully.

In this command, you specify the following parameters:
-lun lun_identifier: Specifies the identifier of a LUN to make into a volume.
-client client_name: Name of a client that has visibility to the LUN. In order to create a volume in the user pool, the client must be active (it must appear in the client list when you run the lsclient command) and have access to that particular LUN (use the reportclient command to report active clients that can access the LUN). This parameter is only required for adding volumes to a User Pool; it is not used when adding volumes to the System Pool.
-pool pool_name: Name of the storage pool to which to add the new volumes. The storage pool is either a User Pool or the System Pool. If not specified, the volumes are added to the default User Pool.
-activate yes/no: Specifies whether to activate the volume. Data is only stored on activated volumes. The default is yes.
-f: Forces the MDS to add the volume and write a new label to it, even if the volume already has a valid SAN File System label. Note: You can use -f only if the volume is not assigned to another storage pool in the same cluster.
volume_name: Name(s) assigned to the added volume(s). This name must be unique within the storage pool, and can be up to 256 characters in length.
In Example 7-13 on page 264, a volume called vol01 is added to the default pool. To verify that the volume has been successfully added, use the lspool command, as shown in Example 7-14. Compare with the listing before we added the volume (Example 7-8 on page 260). GUI: Manage Storage → Storage Pools.
Example 7-14 List the pools to verify that the volume has been added to the default pool
sfscli> lspool
Name         Type         Size (MB) Used (MB) Used (%) Threshold (%) Volumes
============================================================================
SYSTEM       System       2032      240       11       80            1
DEFAULT_POOL User Default 102384    0         0        80            1

We see that the size of the default pool has increased, and that there is one volume listed under the Volumes column. Checking lsvol, the new volume appears, as shown in Example 7-15. GUI: Manage Storage → Volumes.
Example 7-15 List volumes
sfscli> lsvol
Name   State     Pool         Size (MB) Used (MB) Used (%)
==========================================================
MASTER Activated SYSTEM       2032      240       11
vol01  Activated DEFAULT_POOL 102384    0         0


Finally, with lslun, we can see that the state of the newly defined LUN has changed from Available to Assigned, as shown in Example 7-16. Please note the syntax of the lslun command: as you can see in Example 7-13 on page 264, we created the volume for client AIXRome. Because the LUN used to create this volume is visible to the AIXRome client only, we need to specify the -client parameter for the lslun command.
Example 7-16 Display LUNs for an SAN File System client
sfscli> lslun -client AIXRome
Lun ID                                     Vendor Product Size (MB) Volume State
====================================================================================
VPD83NAA6=600507680188801B2000000000000001 IBM    2145    102400    vol01  Assigned
VPD83NAA6=600507680188801B200000000000001C IBM    2145    40959            Available

Note: The lsvol command does not have a -client option; when lsvol is issued, all defined volumes are displayed. From the SAN File System MDS point of view, volumes are logical units belonging to particular storage pools. By contrast, when lslun is used, the -client parameter usually has to be specified (except when you want to list the system LUNs visible to an MDS node), because a LUN represents a physical object in the back-end storage that is visible to a particular client or MDS.

7.3.2 Changing volume settings


You can change the name or description of a volume with the chvol command. In Example 7-17, we changed the volume name from vol01 to volume1. To change the description, specify the -desc parameter. GUI: Manage Storage → Volumes.
Example 7-17 Change name on volume
sfscli> chvol -name volume1 vol01
CMMNP5133I Volume vol01 was modified successfully.

The change is reflected using the lsvol command, as shown in Example 7-18.
Example 7-18 List volumes to verify changes
sfscli> lsvol
Name    State     Pool         Size (MB) Used (MB) Used (%)
===========================================================
MASTER  Activated SYSTEM       2032      240       11
volume1 Activated DEFAULT_POOL 102384    0         0

7.3.3 Removing a volume


Before removing a volume from a storage pool, SAN File System moves (drains) the contents of the volume to other available volumes within the same storage pool (there must be at least one additional volume in the storage pool to enable this). Removing a volume can be done online without any interruptions to client applications using the rmvol command. If the storage pool does not have sufficient space available in other volumes to move all of the data contained in the specified volume, the removal of the volume fails. In that case, the volume is suspended, which means that new data cannot be allocated on that volume. Once a volume has been suspended, it must be manually re-activated using the activatevol command.


If one or more files cannot be accessed during the removal, for example, if there are bad sectors on the volume being removed, the volume removal will fail unless you specify the -f option. With this parameter, all files on the volume will be deleted, not copied to other volumes. Before removing a volume with the -f option, we recommend listing the files on the volume using the reportvolfiles command, as shown in Example 7-19. This command gives you the MDS perspective of what files are stored on that particular volume; it does not actually access the volume contents. Note that you cannot perform this operation at the GUI.
Example 7-19 reportvolfiles
sfscli> lsvol
Name    State     Pool         Size (MB) Used (MB) Used (%)
===========================================================
MASTER  Activated SYSTEM       2032      240       11
volume1 Activated DEFAULT_POOL 102384    16        0
sfscli> reportvolfiles volume1
ROOT:sanfs/test.txt
ROOT:sanfs/files/B4rFEm
ROOT:sanfs/files/B7rz7u
ROOT:sanfs/files/cfgvg.out
ROOT:sanfs/files/codcron
ROOT:sanfs/files/lslpplc.out
ROOT:sanfs/files/post_i.out
ROOT:sanfs/files/pre_rm.out
ROOT:sanfs/files/rc.net.out
ROOT:sanfs/files/rc.net.serial.out
ROOT:sanfs/files/rmTrace
ROOT:sanfs/files/rpcbind.file
ROOT:sanfs/files/sdd.temporary.file
ROOT:sanfs/files/sddsrv.out
ROOT:sanfs/files/sfs.client.aix51-opt
ROOT:sanfs/files/xlogfile

The reportvolfiles command tells you where the files are located on that volume within the global namespace. In the example, all the files on the volume volume1 are contained in the fileset called files.
Attention: Specifying the -f parameter with the rmvol command removes the files from the volume, and these files will have to be restored from an existing backup. We recommend using RAID disk (for example, RAID 5 or RAID 10) for user volumes, to minimize the possibility of volume corruption.
If the -f parameter is not specified, the MDS automatically moves the data off the volume to another volume within the pool. If you want to assign a volume to another storage pool, you must move all the files from it first using the rmvol command. The rmvol command requires a -client parameter in order to remove a volume from a user pool. The client specified must have access not only to the volume being removed, but to all other volumes in the storage pool. To verify this, use lsvol -pool <storage_pool> and lslun -client <name>, and crosscheck the results, as in the short sketch that follows. To remove a system volume, use the rmvol command without the -client parameter.
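For example, before removing a volume from the default pool via the client AIXRome, a quick crosscheck might look like this (a hedged sketch using the two commands named above; the pool and client names are from our lab setup):

sfscli> lsvol -pool DEFAULT_POOL    # every volume in the pool that the target volume belongs to
sfscli> lslun -client AIXRome       # every LUN that the chosen client can actually see

The client named with rmvol -client must be able to see all of the volumes reported by the first command.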


GUI: Manage Storage → Volumes → Select volume → Select action → Remove. Example 7-20 shows how to remove a user volume, volume1.
Example 7-20 Removing volumes in SAN File System
sfscli> lsvol
Name    State     Pool         Size (MB) Used (MB) Used (%)
===========================================================
MASTER  Activated SYSTEM       2032      240       11
volume1 Suspended DEFAULT_POOL 102384    16        0
sfscli> rmvol -client AIXRome volume1
Are you sure you want to delete Volume volume1? [y/n]:y
CMMNP5449E There is not enough space on other volumes to move the volume contents.
sfscli> lsvol
Name    State     Pool         Size (MB) Used (MB) Used (%)
===========================================================
MASTER  Activated SYSTEM       2032      240       11
volume1 Suspended DEFAULT_POOL 102384    16        0
sfscli> rmvol -client AIXRome -f volume1
Are you sure you want to delete Volume volume1? [y/n]:y
CMMNP5442I Volume volume1 was removed successfully.
sfscli> lsvol
Name   State     Pool   Size (MB) Used (MB) Used (%)
====================================================
MASTER Activated SYSTEM 2032      240       11

If you suspect a faulty disk and want to delete it gracefully from the storage pool, perform the following operations (a command-level sketch follows):
1. Attempt rmvol on the volume (without the -f option). This moves all the data that is accessible on the volume.
2. List the remaining contents of the volume (reportvolfiles). Keep this list.
3. Force remove the volume (rmvol -f). This deletes all traces of the remaining files and removes the volume from its storage pool.
4. Add additional volume(s) to the storage pool if space is required to replace the failing volume.
5. Restore the deleted files from a backup.
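A hedged sketch of this sequence from the MDS command line (volume and client names are taken from our lab setup; <new_lun_id> is a placeholder for the replacement LUN, and each rmvol asks for confirmation):

# sfscli rmvol -client AIXRome volume1                  # normal removal: drains whatever is still accessible
# sfscli reportvolfiles volume1 > /tmp/volume1.files    # keep a record of the files left on the volume
# sfscli rmvol -client AIXRome -f volume1               # forced removal: remaining files are deleted
# sfscli mkvol -lun <new_lun_id> -pool DEFAULT_POOL -client AIXRome newvol01   # replacement capacity, if required

Finally, restore the files listed in /tmp/volume1.files from backup.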

7.4 Storage pools


SAN File System organizes its volumes into storage pools. A storage pool (or pool for short) is a collection of volumes. Any volume can be assigned to any pool, but to make the best use of SAN File System's policy-based management features, it is expected that administrators will configure volumes and pools reflecting their particular needs for performance, security, availability, and so on. Storage pools should be carefully planned in advance, as discussed in Chapter 3, MDS system design, architecture, and planning issues on page 65.
The following rules apply when working with pools:
There must be LUNs available to the clients to create a new storage pool. If not, these need to be defined in the back-end storage device and made available to the clients. If you define new LUNs, run the rediscoverluns -client <client_name> command to make the new LUNs available to the clients, to be assigned as volumes.
Only one System Pool can exist. This is created by default at system installation; therefore, any new pools will be User Pools.

7.4.1 Creating a storage pool


Use the mkpool command to create a storage pool, as shown in Example 7-21. We confirm that the new pool, datapool, was added with the mkpool command, which has the following parameters:
partsize: Unit of space allocation for the pool. It can be either 16 (default), 64, or 256 MB.
allocsize: The size by which to create or extend files in the storage pool. It can be auto (default), 4 KB, or 128 KB. When set to auto, the system sets the size for each file allocation automatically.
thresh: Percentage of the pool's capacity which, when exceeded, generates an alert. This is a value between 0 and 100. The default is 80; if set to 0, no alerts are generated. The pool capacity is the sum of the volumes that are assigned to it. Therefore, when we first add the pool, it has size 0, as no volumes are assigned.
desc: Optional pool description.
pool_name: User-defined; must be unique in the SAN File System.
Attention: You cannot change either the allocation size or the partition size after it is defined. For the System Pool, the partition size is always 16 MB.
GUI: Manage Storage → Create a Storage Pool.
Example 7-21 Creating a storage pool
sfscli> lspool
Name         Type         Size (MB) Used (MB) Used (%) Threshold (%) Volumes
============================================================================
SYSTEM       System       2032      240       11       80            1
DEFAULT_POOL User Default 0         0         0        80            0
sfscli> mkpool -partsize 16 -thresh 87 -desc "pool_for_svc_disks" datapool
CMMNP5079I Storage Pool datapool was created successfully.
sfscli> lspool
Name         Type         Size (MB) Used (MB) Used (%) Threshold (%) Volumes
============================================================================
SYSTEM       System       2032      240       11       80            1
DEFAULT_POOL User Default 0         0         0        80            0
datapool     User         0         0         0        87            0


7.4.2 Adding a volume to a user storage pool


Once the pool is created, it is empty and ready for adding volumes, as described in 7.3.1, Adding a new volume to SAN File System on page 263. We added a volume to the pool using the mkvol command, as shown in Example 7-22. Note that the mkvol command requires the -lun and -client parameters. You can list LUNs visible to a particular client by running the lslun -client command, as shown in Example 7-16 on page 266. GUI: Manage Storage → Add Volumes.
Example 7-22 Adding a volume to datapool
sfscli> mkvol -lun VPD83NAA6=600507680188801B200000000000001C -pool datapool -client AIXRome -activate yes volume01
CMMNP5426I Volume volume01 was created successfully.
sfscli> lsvol
Name     State     Pool     Size (MB) Used (MB) Used (%)
========================================================
MASTER   Activated SYSTEM   2032      240       11
volume01 Activated datapool 40944     0         0

7.4.3 Adding a volume to the System Pool


You can expand the size of the System Pool dynamically, that is, without having to bring down the SAN File System cluster. This example will show you how to configure a newly created back-end storage LUN, then add it to the system pool. First, we will configure the LUN to be recognized by the MDS.

MDS recognizes the new LUN


1. As shown in Example 7-23, we currently have three LUNs visible to the SAN File System to use for metadata. We are using an SVC device for metadata storage. One of the three visible LUNs is already assigned to the System Pool, and the other two are available for use. GUI: Manage Storage → Metadata LUNs.
Example 7-23 List defined LUNs
# sfscli lslun
Lun ID                                     Vendor Product Size (MB) Volume State     Storage Device WWNN Port WWN
================================================================================
VPD83NAA6=600507680188801B2000000000000002 IBM    2145    102400    -      Available UNKNOWN        UNKNOWN,
VPD83NAA6=600507680188801B2000000000000001 IBM    2145    102400    -      Available UNKNOWN        UNKNOWN,
VPD83NAA6=600507680188801B2000000000000000 IBM    2145    10240     MASTER Assigned  UNKNOWN        UNKNOWN,

2. In Example 7-24 on page 271, we verify that SDD has been configured to use the three volumes that have been assigned to SAN File System. The output of the datapath query device command shows there are three devices that have been configured by SDD. The serial numbers match the LUN IDs reported in the previous example.


Example 7-24 Datapath query device
# datapath query device
Total Devices : 3

DEV#: 0  DEVICE NAME: vpatha  TYPE: 2145  POLICY: Optimized
SERIAL: 600507680188801b2000000000000000
============================================================================
Path#   Adapter/Hard Disk   State  Mode    Select  Errors
0       Host2Channel0/sdb   OPEN   NORMAL  154515  0
1       Host2Channel0/sde   OPEN   NORMAL  0       0
2       Host3Channel0/sdi   OPEN   NORMAL  155066  0
3       Host3Channel0/sdl   OPEN   NORMAL  0       0

DEV#: 1  DEVICE NAME: vpathb  TYPE: 2145  POLICY: Optimized
SERIAL: 600507680188801b2000000000000001
============================================================================
Path#   Adapter/Hard Disk   State  Mode    Select  Errors
0       Host2Channel0/sdc   CLOSE  NORMAL  0       0
1       Host2Channel0/sdf   CLOSE  NORMAL  96      0
2       Host3Channel0/sdh   CLOSE  NORMAL  0       0
3       Host3Channel0/sdk   CLOSE  NORMAL  75      0

DEV#: 2  DEVICE NAME: vpathc  TYPE: 2145  POLICY: Optimized
SERIAL: 600507680188801b2000000000000002
============================================================================
Path#   Adapter/Hard Disk   State  Mode    Select  Errors
0       Host2Channel0/sdd   CLOSE  NORMAL  0       0
1       Host2Channel0/sdg   CLOSE  NORMAL  70      0
2       Host3Channel0/sdj   CLOSE  NORMAL  0       0
3       Host3Channel0/sdm   CLOSE  NORMAL  101     0

3. At the SVC, we added a new LUN (vdisk) and made it available to the MDS host ports. See your storage device documentation for detailed instructions on how to do this. The next steps show how to have the MDS Linux operating system dynamically recognize the new LUN. You must perform the remaining steps in this section on every MDS before continuing to Adding volumes to system storage pool on page 274.
4. Force the HBA driver (QLogic for the MDS) to rescan the SAN fabric. Since we are using QLogic adapters, the commands are as shown in Example 7-25.
Example 7-25 Force a scan for new devices
# echo scsi-qlascan >/proc/scsi/qla2300/2
# echo scsi-qlascan >/proc/scsi/qla2300/3


5. This will update the two QLogic files in the /proc directory. View these files, as shown in Example 7-26 (we just show one of the files, /proc/scsi/qla2300/2, in the example, but you should check both of them). Scroll down to the SCSI LUN information. A * indicates a newly discovered LUN that has not yet been registered with the operating system. Take a note of the SCSI ID and LUN number from the left hand column of any entries marked with a *.
Example 7-26 View QLogic proc
# cat /proc/scsi/qla2300/2
QLogic PCI to Fibre Channel Host Adapter for QLA2342:
Firmware version: 3.02.24, Driver version 6.06.64
Entry address = c5000060
HBA: QLA2312 , Serial# F97353
Request Queue = 0x50e8000, Response Queue = 0x50d0000
Request Queue count= 128, Response Queue count= 512
Total number of active commands = 0
Total number of interrupts = 155823
Total number of IOCBs (used/max) = (0/600)
Total number of queued commands = 0
Device queue depth = 0x20
Number of free request entries = 27
Number of mailbox timeouts = 0
Number of ISP aborts = 0
Number of loop resyncs = 0
Number of retries for empty slots = 0
Number of reqs in pending_q= 0, retry_q= 0, done_q= 0, scsi_retry_q= 0
Host adapter:loop state= <READY>, flags= 0x8e0813
Dpc flags = 0x0
MBX flags = 0x0
SRB Free Count = 4096
Link down Timeout = 000
Port down retry = 030
Login retry count = 030
Commands retried with dropped frame(s) = 0

SCSI Device Information:
scsi-qla0-adapter-node=200000e08b09691d;
scsi-qla0-adapter-port=210000e08b09691d;
scsi-qla0-target-0=5005076801400364;
scsi-qla0-target-1=500507680140035a;

SCSI LUN Information:
(Id:Lun)  * - indicates lun is not registered with the OS.
( 0: 0): Total reqs 152325, Pending reqs 0, flags 0x0, 0:0:81,
( 0: 1): Total reqs 729, Pending reqs 0, flags 0x0, 0:0:81,
( 0: 2): Total reqs 0, Pending reqs 0, flags 0x0*, 0:0:81,
( 0: 3): Total reqs 763, Pending reqs 0, flags 0x0, 0:0:81,
( 1: 0): Total reqs 733, Pending reqs 0, flags 0x0, 0:0:82,
( 1: 1): Total reqs 833, Pending reqs 0, flags 0x0, 0:0:82,
( 1: 2): Total reqs 0, Pending reqs 0, flags 0x0*, 0:0:82,
( 1: 3): Total reqs 844, Pending reqs 0, flags 0x0, 0:0:82,

6. Add the collected SCSI ID and LUN numbers of the newly added LUNs to the /proc/scsi/scsi file. These are 0 2 and 1 2 in our example. To do this, for each controller number (2 and 3 are the controller numbers for the QLogic 2342 ports) and ID/LUN combination, enter the command echo "scsi add-single-device <controller> 0 <ID> <LUN>" >/proc/scsi/scsi at the system prompt, as shown in Example 7-27 on page 273.


Example 7-27 Add the new LUNs to /proc/scsi/scsi
# echo "scsi add-single-device 2 0 0 2" >/proc/scsi/scsi
# echo "scsi add-single-device 3 0 0 2" >/proc/scsi/scsi
# echo "scsi add-single-device 2 0 1 2" >/proc/scsi/scsi
# echo "scsi add-single-device 3 0 1 2" >/proc/scsi/scsi

Once the /proc directory has been updated, the LUNs should now be recognized by the operating system. Verify this by viewing the /proc/scsi/qla2300/2 and /proc/scsi/qla2300/3 files, as shown in Example 7-28. There are now no * entries in the list of LUNs, indicating the new LUN is recognized by the operating system.
Example 7-28 Verify that LUNs are now recognized by OS
# cat /proc/scsi/qla2300/2
QLogic PCI to Fibre Channel Host Adapter for QLA2342:
Firmware version: 3.02.24, Driver version 6.06.64
Entry address = c5000060
HBA: QLA2312 , Serial# F97353
Request Queue = 0x50e8000, Response Queue = 0x50d0000
Request Queue count= 128, Response Queue count= 512
Total number of active commands = 0
Total number of interrupts = 156227
Total number of IOCBs (used/max) = (0/600)
Total number of queued commands = 0
Device queue depth = 0x20
Number of free request entries = 121
Number of mailbox timeouts = 0
Number of ISP aborts = 0
Number of loop resyncs = 0
Number of retries for empty slots = 0
Number of reqs in pending_q= 0, retry_q= 0, done_q= 0, scsi_retry_q= 0
Host adapter:loop state= <READY>, flags= 0x8e0813
Dpc flags = 0x0
MBX flags = 0x0
SRB Free Count = 4096
Link down Timeout = 000
Port down retry = 030
Login retry count = 030
Commands retried with dropped frame(s) = 0

SCSI Device Information:
scsi-qla0-adapter-node=200000e08b09691d;
scsi-qla0-adapter-port=210000e08b09691d;
scsi-qla0-target-0=5005076801400364;
scsi-qla0-target-1=500507680140035a;

SCSI LUN Information:
(Id:Lun)  * - indicates lun is not registered with the OS.
( 0: 0): Total reqs 152704, Pending reqs 0, flags 0x0, 0:0:81,
( 0: 1): Total reqs 731, Pending reqs 0, flags 0x0, 0:0:81,
( 0: 2): Total reqs 8, Pending reqs 0, flags 0x0, 0:0:81,
( 0: 3): Total reqs 765, Pending reqs 0, flags 0x0, 0:0:81,
( 1: 0): Total reqs 735, Pending reqs 0, flags 0x0, 0:0:82,
( 1: 1): Total reqs 835, Pending reqs 0, flags 0x0, 0:0:82,
( 1: 2): Total reqs 9, Pending reqs 0, flags 0x0, 0:0:82,
( 1: 3): Total reqs 846, Pending reqs 0, flags 0x0, 0:0:82,


7. Force the Subsystem Device Driver (SDD), or equivalent driver, to rescan and map the new devices. For SDD, enter the /usr/sbin/cfgvpath command at the system prompt, as shown in Example 7-29.
Example 7-29 Force SDD to rescan and map the new devices
# /usr/sbin/cfgvpath
crw-r--r--   1 root root 253, 0 Sep 26 21:50 /dev/IBMsdd
major number 254 assigned to vpath (dev: vpathe)
Added vpathe 254 64
...

We can see that a new vpath, for the new LUN, called vpathe, was added. 8. Verify that SDD recognized the newly added LUN using the datapath query device command, as shown in Example 7-30.
Example 7-30 Verify using datapath query command
# datapath query device
Total Devices : 4

DEV#: 0  DEVICE NAME: vpatha  TYPE: 2145  POLICY: Optimized
SERIAL: 600507680188801b2000000000000000
============================================================================
Path#   Adapter/Hard Disk   State  Mode    Select  Errors
0       Host2Channel0/sdb   OPEN   NORMAL  155601  0
1       Host2Channel0/sde   OPEN   NORMAL  0       0
2       Host3Channel0/sdi   OPEN   NORMAL  156238  0
3       Host3Channel0/sdl   OPEN   NORMAL  0       0

DEV#: 1  DEVICE NAME: vpathb  TYPE: 2145  POLICY: Optimized
SERIAL: 600507680188801b2000000000000001
============================================================================
Path#   Adapter/Hard Disk   State  Mode    Select  Errors
0       Host2Channel0/sdc   CLOSE  NORMAL  0       0
1       Host2Channel0/sdf   CLOSE  NORMAL  96      0
2       Host3Channel0/sdh   CLOSE  NORMAL  0       0
3       Host3Channel0/sdk   CLOSE  NORMAL  75      0

DEV#: 2  DEVICE NAME: vpathc  TYPE: 2145  POLICY: Optimized
SERIAL: 600507680188801b2000000000000002
============================================================================
Path#   Adapter/Hard Disk   State  Mode    Select  Errors
0       Host2Channel0/sdd   CLOSE  NORMAL  0       0
1       Host2Channel0/sdg   CLOSE  NORMAL  70      0
2       Host3Channel0/sdj   CLOSE  NORMAL  0       0
3       Host3Channel0/sdm   CLOSE  NORMAL  101     0

DEV#: 3  DEVICE NAME: vpathe  TYPE: 2145  POLICY: Optimized
SERIAL: 600507680188801b200000000000002b
============================================================================
Path#   Adapter/Hard Disk   State  Mode    Select  Errors
0       Host3Channel0/sdq   CLOSE  NORMAL  0       0
1       Host2Channel0/sdn   CLOSE  NORMAL  0       0
2       Host3Channel0/sdo   CLOSE  NORMAL  0       0
3       Host2Channel0/sdp   CLOSE  NORMAL  0       0

Adding volumes to system storage pool


Now we will add the new volume to the System Pool. Perform these steps on the master MDS only.
1. Example 7-31 on page 275 shows there is one volume currently defined in the SYSTEM pool.


GUI: Manage Storage → Storage Pools.


Example 7-31 List SYSTEM pool
# sfscli lspool SYSTEM
Name   Type   Size (MB) Used (MB) Used (%) Threshold (%) Volumes
================================================================
SYSTEM System 10224     384       3        80            1

2. Use the lslun command to list LUNs that are available to the SAN File System, as shown in Example 7-32. As you can see, there are two unallocated LUNs. This matches the output we saw in Example 7-23 on page 270. Therefore, SAN File System has not yet detected our newly added LUN, ID 600507680188801b200000000000002b. GUI: Manage Storage → Metadata LUNs.
Example 7-32 List available LUNs
# sfscli lslun
Lun ID                                     Vendor Product Size (MB) Volume State     Storage Device WWNN Port WWN
================================================================================
VPD83NAA6=600507680188801B2000000000000002 IBM    2145    102400    -      Available UNKNOWN        UNKNOWN,
VPD83NAA6=600507680188801B2000000000000001 IBM    2145    102400    -      Available UNKNOWN        UNKNOWN,
VPD83NAA6=600507680188801B2000000000000000 IBM    2145    10240     MASTER Assigned  UNKNOWN        UNKNOWN,

3. In order for SAN File System to discover new LUNs, run the rediscoverluns command, as shown in Example 7-33. This will force SAN File System to rescan for new LUNs that are available and recognized by the operating system. GUI: Manage Storage → Metadata LUNs → Select Action → Rediscover LUNs.
Example 7-33 Rediscover new LUNs
# sfscli rediscoverluns
CMMNP5410I The LUNs have been rediscovered. Tip: Run lslun to view the LUNs

4. Rerun the lslun command to verify that the new LUN has been recognized, as shown in Example 7-34.
Example 7-34 List LUNs
# sfscli lslun
Lun ID                                     Vendor Product Size (MB) Volume State     Storage Device WWNN Port WWN
================================================================================
VPD83NAA6=600507680188801B2000000000000002 IBM    2145    102400    -      Available UNKNOWN        UNKNOWN,
VPD83NAA6=600507680188801B2000000000000001 IBM    2145    102400    -      Available UNKNOWN        UNKNOWN,
VPD83NAA6=600507680188801B2000000000000000 IBM    2145    10240     MASTER Assigned  UNKNOWN        UNKNOWN,
VPD83NAA6=600507680188801B200000000000002B IBM    2145    999       -      Available UNKNOWN        UNKNOWN,


5. You can now add the new LUN to the System Pool, using the mkvol command, as shown in Example 7-35. GUI: Manage Storage → Add Volumes.
Example 7-35 Add the new LUN to the SYSTEM pool
# sfscli mkvol -lun VPD83NAA6=600507680188801B200000000000002B -pool SYSTEM -desc "SYS VOLUME2" newsysvol
CMMNP5426I Volume newsysvol was created successfully.

6. Verify the new volume using the lsvol command, as shown in Example 7-36. GUI: Manage Storage → Volumes.
Example 7-36 List volumes
# sfscli lsvol
Name      State     Pool   Size (MB) Used (MB) Used (%)
=======================================================
MASTER    Activated SYSTEM 10224     384       3
newsysvol Activated SYSTEM 992       48        4
avol1     Activated poola  102384    1280      1
avol2     Activated poola  40944     1264      3
bvol1     Activated poolb  51184     1600      3
bvol2     Activated poolb  46064     1584      3

7. Finally, verify that the System Pool now includes the new volume using the lspool command, as shown in Example 7-37. Compare this with the previous pool listing, Example 7-31 on page 275. GUI: Manage Storage → Storage Pools.
Example 7-37 Verify SYSTEM pool
# sfscli lspool SYSTEM
Name   Type   Size (MB) Used (MB) Used (%) Threshold (%) Volumes
================================================================
SYSTEM System 11216     432       3        80            2

You have now successfully added a new LUN to the System Pool.

7.4.4 Changing a storage pool


The name, description, or threshold of an already defined pool can be modified with the chpool command, as shown in Example 7-38. GUI: Manage Storage → Storage Pools → Select storage pool → Select action → Properties → General Settings.
Example 7-38 Changing parameters on defined storage pool
sfscli> chpool -thresh 95 datapool
CMMNP5094I Storage Pool datapool was modified successfully.
sfscli> lspool
Name         Type         Size (MB) Used (MB) Used (%) Threshold (%) Volumes
============================================================================
SYSTEM       System       2032      240       11       80            1
DEFAULT_POOL User Default 0         0         0        80            0
datapool     User         102384    0         0        95            1


Tip: It is a good practice to rename or even remove the pool DEFAULT_POOL, since you will typically either create and assign another pool as the default pool, or even disable the default user pool entirely (see Disabling the default User Pool on page 328). In this case, it would be confusing to have a pool called DEFAULT_POOL, which is not, in fact, the default storage pool.
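As an illustration, one way to retire DEFAULT_POOL once it is empty is sketched below (a hedged sketch only; the argument form of setdefaultpool is an assumption, so confirm it with help setdefaultpool first, and rmpool is covered in 7.4.5):

sfscli> setdefaultpool datapool    # make another User Pool the default (assumed syntax)
sfscli> lspool                     # confirm DEFAULT_POOL reports zero volumes
sfscli> rmpool DEFAULT_POOL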

7.4.5 Removing a storage pool


If you need to remove a pool, consider the following rules:
The storage pool must be empty. You must remove all volumes from the storage pool before you can delete it. See 7.3.3, Removing a volume on page 266 for more information about how to remove a volume. Note that if there is data in the pool, you will have to use the -f option with the rmvol command to delete the volumes. This means the data is deleted and has to be restored from backup.
You cannot delete a storage pool that is referenced by the active policy; see 7.8, File placement policy on page 304.
Use the rmpool command to remove an empty, unreferenced pool, as shown in Example 7-39. GUI: Manage Storage → Storage Pools → Select storage pool → Select action → Delete.
Example 7-39 Removing a pool
sfscli> rmpool datapool
Are you sure you want to delete Storage Pool datapool? [y/n]:y
CMMNP5083I Storage Pool datapool was removed successfully.
sfscli> lspool
Name         Type         Size (MB) Used (MB) Used (%) Threshold (%) Volumes
============================================================================
SYSTEM       System       2032      240       11       80            1
DEFAULT_POOL User Default 0         0         0        80            0

7.4.6 Expanding a user storage pool volume


Expanding or increasing the size of data (user) volumes increases the capacity of the storage pool to which they belong. SAN File System can recognize expanded volumes in user storage pools. A prerequisite is that your back-end storage system must be capable of expanding a LUN. We will show an example using the IBM TotalStorage SAN Volume Controller (SVC), which has this capability. Depending on the operating system, some clients can detect the new volume capacity dynamically, that is, without requiring a reboot. At the time of writing, dynamic expansion of LUNs (without rebooting) is only supported on AIX 5L V5.2 and later clients. Other client types require a reboot in order to recognize the new LUN capacity.
Note: For nondisruptive addition and expansion of volumes, the client operating systems must support online replacement and online insertion (OLR/OLI) capability. If the clients do not have OLI/OLR capability, then the clients will not be able to discover the newly added LUNs without disruption. In this case, you must reboot the client machines to discover the LUNs.


When expanding a volume, you must make sure all systems with visibility to it have recognized the new capacity. When expanding a LUN in a user pool, make sure to validate the expansion on every client that has visibility to it. You can display which clients have visibility to a LUN using the reportclient command, as shown in 7.7.1, Display a list of clients with access to particular volume or LUN on page 304.

Expand the underlying LUN


First, we will expand the underlying disk at the storage device. We are using a SAN Volume Controller; for detailed information about configuring and administering this device, see the redbook IBM TotalStorage SAN Volume Controller, SG24-6423.
1. We want to check the host mapping of the disk and also the LUN ID for this disk. To do this, run the svcinfo lsvdiskhostmap command shown in Example 7-40 at the SAN Volume Controller Master Console. It will display information for the SVC disk aix52_SanFS.
Example 7-40 Display the host associations and LUN ID of the vdisk to expand
IBM_2145:admin>svcinfo lsvdiskhostmap -delim : aix52_SanFS
id:name:SCSI_id:host_id:host_name:wwpn:vdisk_UID
9:aix52_SanFS:3:3:aix52host:10000000C92855E1:6005076801848008C80000000000000A

We can see the vdisk is mapped to the port 10000000C92855E1, which corresponds to our AIX SAN File System client agent47, and has the LUN ID 6005076801848008C80000000000000A.
2. We will verify the size of this LUN at the MDS using the lslun command, as shown in Example 7-41. This command shows the current capacity (about 7 GB) of the LUN to be expanded, which is visible from client agent47. GUI: Manage Storage → Data LUNs.
Example 7-41 LUN size before expansion
sfscli> lslun -client agent47 VPD83NAA6=6005076801848008C80000000000000A
Lun ID                                      Vendor Product Size (MB) Volume        State    Storage Device WWNN Port WWN
=======================================================================================================================
VPD83NAA6=6005076801848008C80000000000000A IBM    2145    6999      svc-svcpool-2 Assigned Unknown        -

3. Example 7-41 shows that this LUN is available to SAN File System as the volume svc-svcpool-2. We can display the size of this volume (it matches the size of the LUN) using the lsvol command, as shown in Example 7-42. GUI: Manage Storage → Volumes.
Example 7-42 Volume size before expansion
sfscli> lsvol svc-svcpool-2
Name          State     Pool    Size (MB) Used (MB) Used (%)
===========================================================================
svc-svcpool-2 Activated svcpool 6999      0         0

4. This volume is assigned to the storage pool svcpool. The lspool command shows its current size of 9 GB (see Example 7-43 on page 279). GUI: Manage Storage → Storage Pools.


Example 7-43 Pool size before expansion
sfscli> lspool
Name         Type         Size (MB) Used (MB) Used (%) Threshold (%) Volumes
============================================================================
SYSTEM       System       64832     768       1        80            3
DEFAULT_POOL User Default 5696      3184      55       80            4
PolicyPool   User         944       32        3        80            1
svcpool      User         9031      112       1        80            2

5. Now we will expand the volume. At the SAN Volume Controller browser interface, display the Virtual Disks (vdisks), as shown in Figure 7-6. Select the check box for the vdisk to be expanded (aix52_SanFS). Check the type of the vdisk you are attempting to expand: if it is of type image, it cannot be expanded; if it is of type sequential, it will become a striped vdisk when it is expanded; a vdisk of type striped remains of this type when expanded. Select Expand a Vdisk from the drop-down menu and click Go.

Figure 7-6 Select expand vdisk


6. Select the managed disks (mdisks) to be used for the vdisk expansion and also the size to expand the vdisks by. We will expand the disk by 500 MB from its current 7 GB size. Click OK (Figure 7-7).

Figure 7-7 vdisk expansion window

7. Now you must verify the expansion on the client(s). You must do this on each SAN File System client that has visibility to the LUN.

Verify expansion on AIX


1. Stop each application that is using the SAN File System.
2. Stop the AIX client using the rmstclient command.
3. Run cfgmgr on the client.
4. Restart the AIX client using the setupstclient command.
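As an illustration, a minimal sketch of the four steps above on the AIX client (assuming the SAN File System client scripts rmstclient and setupstclient are in the PATH; stopping the applications is environment-specific):

# <stop applications that are using the SAN File System>
# rmstclient       # stop the SAN File System client on AIX
# cfgmgr           # have AIX rediscover devices and the expanded LUN capacity
# setupstclient    # restart the SAN File System client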

Configure the expansion


1. Verify that all clients can see the new volume size using the lslun command on the master MDS, as shown in Example 7-44 on page 281. In our case, we are showing the LUN visible to the AIX client (agent47). The size before expansion was 6999 MB (as in Example 7-41 on page 278) and we can see the new size of 7499 MB reported correctly in Example 7-44 on page 281. GUI: Manage Storage → Data LUNs.


Example 7-44 Validate new LUN size
sfscli> lslun -client agent47 VPD83NAA6=6005076801848008C80000000000000A
Lun ID                                      Vendor Product Size (MB) Volume        State    Storage Device WWNN Port WWN
=======================================================================================================================
VPD83NAA6=6005076801848008C80000000000000A IBM    2145    7499      svc-svcpool-2 Assigned Unknown        -

You can also do this from the GUI by selecting Manage Storage → Data LUNs from the left-hand window, selecting the client, and clicking Refresh. Figure 7-8 shows that our LUN, VPD83NAA6=6005076801848008C80000000000000A, is recognized at its new size.

Figure 7-8 Data LUN display

2. Now we need to expand the size of the SAN File System volume on the MDS. We know from Example 7-44 that our LUN is actually the volume svc-svcpool-2. Use the expandvol command, specifying the client agent47, as shown in Example 7-45. GUI: Manage Storage → Volumes → Select Volume → Select action → Properties → Size → Select client with visibility to the volume → Expand volume.
Example 7-45 Expand the volume
sfscli> expandvol -client agent47 svc-svcpool-2
CMMNP5389I Volume svc-svcpool-2 was expanded successfully.

3. Verify the size of this LUN at the MDS using the lslun command. Example 7-46 on page 282 shows the new capacity (about 7.5 GB) of the LUN is recognized and visible from client agent47. GUI: Manage Storage → Data LUNs.


Example 7-46 LUN size after expansion
sfscli> lslun -client agent47 VPD83NAA6=6005076801848008C80000000000000A
Lun ID                                      Vendor Product Size (MB) Volume        State    Storage Device WWNN Port WWN
=======================================================================================================================
VPD83NAA6=6005076801848008C80000000000000A IBM    2145    7499      svc-svcpool-2 Assigned Unknown        -

4. Verify that the volume has been expanded with the lsvol command. Compare the previous size (Example 7-42 on page 278) with the new size of 7488 MB (see Example 7-47). GUI: Manage Storage → Volumes.
Example 7-47 Volume size after expansion
sfscli> lsvol svc-svcpool-2
Name          State     Pool    Size (MB) Used (MB) Used (%)
===========================================================================
svc-svcpool-2 Activated svcpool 7488      0         0

5. Verify that the pool now reports the correct expanded size using lspool. Compare the previous size in Example 7-43 on page 279 to the size shown in Example 7-48. GUI: Manage Storage → Storage Pools.
Example 7-48 Pool size after expansion
sfscli> lspool
Name         Type         Size (MB) Used (MB) Used (%) Threshold (%) Volumes
============================================================================
SYSTEM       System       64832     768       1        80            3
DEFAULT_POOL User Default 5696      3184      55       80            4
PolicyPool   User         944       32        3        80            1
svcpool      User         9520      112       1        80            2

Disk expansion on a Windows client


To expand a disk on Windows, follow the steps in Expand the underlying LUN on page 278. After expanding the LUN, you can verify the new capacity on the Windows client, as shown below. Then follow the steps on the MDS in Configure the expansion on page 280. Before expanding, the disk (disk 15 in our example) was 4.88 GB, as seen in Figure 7-9 on page 283.


Figure 7-9 Disk before expansion


We expanded the vdisk by 500 MB in the SVC. Windows 2000 requires a reboot to detect the expanded volume. After the reboot, Disk Manager confirms that the disk has been expanded, as shown in Figure 7-10. It now has a capacity of 5.37 GB.

Figure 7-10 Disk after expansion

7.4.7 Expanding a volume in the system storage pool


Expanding or increasing the size of a system volume increases the capacity of the System Pool. SAN File System can recognize expanded volumes in the System Pool. A prerequisite is that your back-end storage system must be capable of expanding a LUN. We will show an example using the IBM TotalStorage SAN Volume Controller (SVC), which has this capability. Because the SUSE operating system cannot recognize expanded volumes dynamically, each MDS requires a reboot in order to recognize the new LUN capacity.
In this section, we will expand a LUN in the System Pool. As shown in Example 7-49 on page 285, there are four LUNs defined (use the lslun command to see them). We are going to expand the volume newsysvol, which is currently 999 MB and assigned to the SYSTEM pool. GUI: Manage Storage → Metadata LUNs.


Example 7-49 List LUNs
# sfscli lslun
Lun ID                                     Vendor Product Size (MB) Volume    State     Storage Device WWNN Port WWN
================================================================================
VPD83NAA6=600507680188801B200000000000002B IBM    2145    999       newsysvol Assigned  UNKNOWN        UNKNOWN,
VPD83NAA6=600507680188801B2000000000000002 IBM    2145    102400              Available UNKNOWN        UNKNOWN,
VPD83NAA6=600507680188801B2000000000000001 IBM    2145    102400              Available UNKNOWN        UNKNOWN,
VPD83NAA6=600507680188801B2000000000000000 IBM    2145    10240     MASTER    Assigned  UNKNOWN        UNKNOWN,

1. Expand the LUN using the procedures for your back-end storage system. Note that at the time of writing of this redbook, the SVC is the only supported metadata storage device that can expand an existing LUN.
2. After the expansion on the SVC, reboot each MDS in the cluster, one at a time, in a rolling fashion. The reboot is necessary for each MDS to recognize the expanded LUN. Make sure that each MDS has rejoined the cluster (using the lsserver command) before initiating a reboot of the next MDS; a scripted version of this check is sketched after Example 7-50. By rebooting each MDS individually, you maintain availability of the filesets to the clients.
3. Once every MDS has been rebooted, verify that all LUNs are still visible to the SAN File System, as shown in Example 7-50. As you can see, the LUN has been successfully expanded, and now shows the updated size of 1199 MB. GUI: Manage Storage → Metadata LUNs.
Example 7-50 List LUNs
# sfscli lslun
Lun ID                                     Vendor Product Size (MB) Volume    State     Storage Device WWNN Port WWN
================================================================================
VPD83NAA6=600507680188801B200000000000002B IBM    2145    1199      newsysvol Assigned  UNKNOWN        UNKNOWN,
VPD83NAA6=600507680188801B2000000000000002 IBM    2145    102400              Available UNKNOWN        UNKNOWN,
VPD83NAA6=600507680188801B2000000000000001 IBM    2145    102400              Available UNKNOWN        UNKNOWN,
VPD83NAA6=600507680188801B2000000000000000 IBM    2145    10240     MASTER    Assigned  UNKNOWN        UNKNOWN,
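A hedged sketch of the per-MDS check mentioned in step 2, run on the master MDS after rebooting a subordinate (mds2 here stands for whichever MDS was just rebooted; adjust the name and polling interval to your environment):

# until sfscli lsserver | grep mds2 | grep -q Online; do sleep 30; done

Only when the rebooted MDS reports the Online state should you proceed to reboot the next one.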


4. Even though the LUN has been expanded, SAN File System does not yet recognize the new capacity in the volume, as shown in Example 7-51. The associated volume, newsysvol, still shows the former capacity of 992 MB. GUI: Manage Storage → Volumes.
Example 7-51 List volumes
# sfscli lsvol
Name      State     Pool   Size (MB) Used (MB) Used (%)
=======================================================
MASTER    Activated SYSTEM 10224     384       3
newsysvol Activated SYSTEM 992       48        4
avol1     Activated poola  102384    1280      1
avol2     Activated poola  40944     1264      3
bvol1     Activated poolb  51184     1600      3
bvol2     Activated poolb  46064     1584      3

5. Use the expandvol command to expand the volume, as in Example 7-52, specifying the volume that was expanded, that is, newsysvol. GUI: Manage Storage → Volumes → Select Volume → Select action → Properties → Size → Expand volume.
Example 7-52 Expand volume
# sfscli expandvol newsysvol
CMMNP5389I Volume newsysvol was expanded successfully.

6. Verify that the volume has been successfully expanded using the lsvol command, as shown in Example 7-53. This shows that the capacity has increased. GUI: Manage Storage → Volumes.
Example 7-53 Show expanded volume size
# sfscli lsvol
Name      State     Pool   Size (MB) Used (MB) Used (%)
=======================================================
MASTER    Activated SYSTEM 10224     384       3
newsysvol Activated SYSTEM 1184      48        4
avol1     Activated poola  102384    1280      1
avol2     Activated poola  40944     1264      3
bvol1     Activated poolb  51184     1600      3
bvol2     Activated poolb  46064     1584      3

You have now successfully expanded a system volume.

7.5 Filesets
A fileset is a unit of workload, and is a subset of the SAN File System global namespace. Filesets are created by an administrator to divide up the namespace into a logical organization structure. The fileset is the unit for which FlashCopy images are created. We will differentiate between:
Dynamic fileset assignment
Static fileset assignment


When a fileset is created, it can be assigned to a specific MDS for management. This is known as a static fileset. You can also choose to allow the cluster to assign the fileset to a suitable MDS, using a simple load balancing algorithm. This is known as a dynamic fileset. Filesets can be changed from static to dynamic, and from dynamic to static, and a static fileset can also be rebound statically to another MDS.
In a balanced environment, each MDS should host at least one fileset, unless you choose to have an idle MDS with no filesets assigned available to provide failover functions. This is known as an N+1 configuration. We discuss SAN File System failover and its effect on fileset assignments in 9.5, MDS automated failover on page 413.
We recommend using either all dynamic or all static fileset assignments to avoid undesired excessive load on a specific MDS cluster node. Using all static filesets allows you to have more precise control of load balancing the SAN File System cluster. Dynamic filesets will be allocated to different MDSs to balance the load; however, the algorithm essentially only considers the number of filesets assigned to each MDS. It does not take into account that some filesets are busier than others. Therefore, if you know which filesets generate more transactions, you can use this knowledge to statically assign them in a balanced manner across the MDS cluster.
Tip: The ROOT fileset is treated like any other fileset. The only difference is that, because it is created by the system, it starts out as a static fileset assigned to the master MDS at creation time.
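As an illustration only, an existing fileset can be statically rebound to a different MDS with the setfilesetserver command described in 7.2.6. The option name below is an assumption, so confirm the real syntax first:

sfscli> help setfilesetserver                    # confirm the actual syntax for your code level
sfscli> setfilesetserver -server mds2 Projects   # hypothetical options: statically rebind fileset Projects to MDS mds2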

7.5.1 Relationship of filesets to storage pools


Filesets are not specifically related to storage pools, although each file in a fileset physically resides in blocks in a storage pool. This relationship is many-to-many; files in a fileset can have their data stored in multiple user pools, depending on the policy that has been defined (see 7.8, File placement policy on page 304). A storage pool can contain files from many filesets. However, all the data for a particular file (that is, all the file extents which comprise it) will be wholly contained within one storage pool.

These relationships are shown in Figure 7-11.

Figure 7-11 Relationship of fileset to storage pool (files of various types in Filesets 1, 2, and 3 are spread across Storage Pools A, B, C, and D)

Once a fileset has been created and attached to a particular location within the SAN File System, it appears as a regular directory or folder to the SAN File System clients. The clients can create files and directories in the fileset, permissions permitting. From a client perspective, a fileset looks like a normal directory within the SAN file system; clients only mount the single global namespace, and thereby have access to all the filesets (within security constraints). A client cannot move, rename, or delete a directory that is the root of a fileset. A client cannot create hard links across fileset boundaries. Figure 7-12 on page 289 shows the MDS and client perspective of filesets. There are five filesets shown: the root, Images, Install, UNIXfiles, and Winfiles. Some of these have subdirectories (for example, the folder Backup is a subdirectory on the root file system, and the fileset unixfiles has a subdirectory called data). The client, however, is not specifically aware which folders are filesets; they all appear as regular directories.

Figure 7-12 Filesets from the MDS and client perspective (the SAN FS global namespace contains the Root fileset (hosted by greenmds) with its Backup subdirectory, plus the Images, Install, Unixfiles, and Winfiles filesets, some of which have subdirectories)

7.5.2 Nested filesets


A fileset that is attached to another fileset is known as a nested fileset, or a child of a parent fileset. The filesets in Figure 7-12, Images, Install, and so on, are not considered nested filesets, because they are attached to the root of the SAN File System. Figure 7-13 shows an example of a nested fileset. The fileset, Website, is nested under the fileset Projects.

Figure 7-13 Nested filesets (the fileset Website is nested under Projects, which also contains the directory Dev and is attached to ROOT)

You should be careful when creating nested filesets for the following reasons:
You cannot access a child fileset if the MDS hosting the parent fileset is unavailable. In the example, if the MDS hosting the fileset Projects failed, then both the Projects fileset and the fileset Website, even if hosted by a different MDS, would be unavailable until the failed MDS workload was failed over.

A FlashCopy image is created at the individual fileset level; it does not include any nested filesets. Also, you cannot make a FlashCopy image of a fileset and any nested filesets in a single operation. This may be of concern if you must have a consistent image of a fileset and its nested filesets. Making FlashCopy images in multiple operations could potentially lead to ordering or consistency issues.
A FlashCopy image cannot be reverted when nested filesets exist within the fileset. You must manually detach the nested filesets before reverting the image. In the example, if you wanted to revert a FlashCopy image of the fileset Projects, you would first need to detach the fileset Website. You could reattach it after the fileset Projects was reverted.
If creating nested filesets, attach them only directly to other filesets. Do not attach filesets to client-created directories; doing so makes a large-scale restore more complex. In the example, Website is attached directly to the Projects fileset.
To be able to detach a fileset, you need to detach all its nested filesets first. In the example, if you needed to detach the fileset Projects, you would first need to detach the fileset Website.

7.5.3 Creating filesets


To create a fileset, use the mkfileset command. The parameters for this command are:
server: The MDS that will host the fileset. This parameter is optional; if used, the fileset will be created as a static fileset. If omitted, the fileset will be created as a dynamic fileset and will be assigned automatically to an initial MDS. Dynamic filesets will be evenly distributed across the MDS cluster so that each MDS will have the same or nearly the same number of filesets to serve.
quota: The maximum size, in MB, for the fileset, which, when exceeded, will cause the MDS to generate an alert. It can be from 0 (default) to 1 073 741 824 (1 PB, which is the current maximum size for a fileset). If set to 0, no alerts are sent.
thresh: The maximum percentage of the quota size, which, when exceeded, will cause the MDS to generate an alert (including an SNMP trap). It can be from 0 to 100. If set to 0, no alerts are sent. The default is 80. We strongly recommend using the threshold value and alerts so that you can allocate more storage capacity to the SAN File System before it reaches the fileset limits.
qtype: The quota type for the fileset. A hard quota produces a log message and SNMP trap when the quota is met, and denies client requests for more space. A soft quota (default) produces a log message and potential alert when the quota size is exceeded, but grants client requests for more space, providing that there is space available in the storage pool where the file is being created or modified.
attach: The existing directory path attach point for the fileset. This must include the root of the global namespace; this corresponds to the CLUSTER_NAME parameter defined when installing SAN File System (see Table 5-1 on page 147). The directory path must exist before running the mkfileset command. You should attach filesets to attach points that are themselves filesets.
dir: The actual directory name seen on the client. It must not exist before running the mkfileset command, as it will be created by the command. This will appear as a newly created subdirectory under the path specified in the attach parameter. For example, if you create a fileset with the attach parameter set at /sanfs (the root of the global namespace), and specify dir of myfileset, a client will see, in its file view, a new subdirectory called myfileset, created under /sanfs.
desc: Optional description for the fileset.

fileset_name: The name for the fileset. This is a logical name, internal to the MDS cluster; it is not visible to the clients. It need not be the same as the dir parameter, although you might choose to make it the same, for clarity.
Note: We strongly recommend attaching filesets either onto the root or to other filesets, and not to directories, as this will make restore easier if required. If you attach filesets to directories, then you have to re-create the directory on the client itself before you can restore the fileset.
Newly created filesets have owner and permissions set to the following: file permissions 000 (no access), owned by user ID/group ID 1000000/1000000 (no access), when viewed from UNIX-based clients; no access, and owned by SID S-1-0-0, when viewed from Windows-based clients. You need to set ownership and permissions to a suitable value once for each fileset on a privileged client, as described in 7.6, Client operations on page 296, to be able to use the new filesets.
An example of creating filesets is shown in Example 7-54. GUI: Manage Filing Create a Fileset.
Example 7-54 Creating filesets using the CLI
sfscli> mkfileset -attach sanfs -dir userhomes -desc "user home directories" userhomes
CMMNP5147I Fileset userhomes was created successfully.
sfscli> mkfileset -server mds1 -attach /sanfs/userhomes -dir user1 user1
CMMNP5147I Fileset user1 was created successfully.

We created two filesets, the first called userhomes and the second called user1. We attached the fileset userhomes to the root (sanfs, which is the name of the cluster) and we also named the directory userhomes. Since we did not specify the -server option, this fileset will be assigned dynamically to one of the MDS cluster nodes. The second fileset, user1 was attached to the newly created fileset /sanfs/userhomes, at the directory point user1, and was also statically assigned to mds1. To verify that the filesets were created, use the lsfileset command, as shown in Example 7-55 on page 292. The column Most Recent Image will list the date and time that the last FlashCopy image of the fileset was made; in this case, we have not made any FlashCopy images yet. The final column, Server, shows the server that is currently hosting the fileset. GUI: Manage Filing Filesets. Note: The directory point and the fileset name do not need to be the same, although they are in our example. The directory point is the directory that will be visible to the clients. The fileset name is the logical name of the fileset as displayed by the SAN File System administrator.
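Building on Example 7-54, the quota-related parameters described earlier can also be supplied at creation time. A minimal sketch with hypothetical names (the success message follows the same format as Example 7-54):

sfscli> mkfileset -attach sanfs -dir dbdir -quota 1000 -thresh 65 -qtype hard dbdir
CMMNP5147I Fileset dbdir was created successfully.

This creates a dynamic fileset with a 1000 MB hard quota and an alert threshold of 65 percent; the same settings can also be changed later with chfileset, as shown in 7.5.5.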

Example 7-55 Listing defined filesets
sfscli> lsfileset
Name      Fileset State Quota Type Quota (MB) Used (MB) Used (%) Threshold (%) Most Recent Image Server
===========================================================================================
userhomes Attached      Soft       0          0         0        80            -                 mds4
user1     Attached      Soft       0          0         0        80            -                 mds1

The -l flag on the lsfileset command will show more details of the fileset, including the hosting MDS, MDS state, number of FlashCopy images which exist for the fileset, attach point, directory name, and parent fileset. An example of this command is shown in Example 7-56. We can determine if a fileset is static or dynamic by looking in the Assigned Server column (to the left of the Attach Point). Fileset userhomes has a - (dash) in this field, indicating it is a dynamic fileset. The Server field for this fileset has the value mds4, since this is the MDS currently hosting the fileset. For fileset user1, the MDS mds1 is listed in both the Assigned Server and the Server columns, indicating it is a static fileset that is being hosted by its assigned server. GUI: Manage Filing Filesets Click on the fileset.
Example 7-56 Long listing of filesets
sfscli> lsfileset -l userhomes user1
Name      Fileset State Serving State Quota Type Quota (MB) Used (MB) Used (%) Threshold (%) Images Most Recent Image        Server Assigned Server Attach Point          Directory Name Directory Path  Parent    Children Description
=============================================================================================================================================================================================================================
userhomes Attached      Online        Soft       0          0         0        80            1      Jun 07, 2004 10:24:42 AM mds4   -               sanfs/userhomes       userhomes      sanfs           ROOT      1        user home directories
user1     Attached      Online        Soft       0          16        0        80            1      Jun 07, 2004 3:56:44 AM  mds1   mds1            sanfs/userhomes/user1 user1          sanfs/userhomes userhomes 0        -

Client view of filesets


Let us assume that we have also created the fileset USERS, attached directly to the root of the Global Namespace. Figure 7-14 shows the current fileset layout.

Figure 7-14 Nested filesets (userhomes and USERS are attached to ROOT, and user1 is nested under userhomes)

Now, to see the clients' view of the new filesets, we will show Windows Explorer from a Windows 2000 (or Windows 2003) client. For this client, drive S: was specified as the mount point for the SAN File System cluster. The userhomes and USERS filesets can be viewed on the client under S:, as shown in Figure 7-15. Note: The CLUSTER_NAME, sanfs, is shown as the disk label of the S: drive. This is the same as the name specified when installing the SAN File System cluster.


Figure 7-15 Windows Explorer shows cluster name sanfs as the drive label

To view the nested fileset user1 that was attached under sanfs\userhomes, expand its tree on the left-hand side, as shown in Figure 7-16.

Figure 7-16 List nested filesets

As you can see in Figure 7-16, the user1 is attached to sanfs\userhomes and the name of the directory is user1.

7.5.4 Moving filesets


If an MDS fails, the MDS cluster will automatically reassign the fileset to another MDS. This is true for both dynamically and statically defined filesets. See 9.5.2, Fileset redistribution on page 415 for more details of automatic fileset reassignment. To manually assign a fileset to another MDS, use the setfilesetserver command. Both the original hosting MDS and the new hosting MDS remain online throughout. If you perform this command on a dynamic fileset, it converts it to a static fileset and also assigns it to the specified MDS. Example 7-57 shows the usage of the setfilesetserver command; this assigns the fileset aixfiles statically to the MDS mds3. GUI: Manage Filing Filesets Click on the fileset Select action Properties General Settings Server Assignment Method Manual select Server.
Example 7-57 Reassign a fileset to another MDS cluster node
sfscli> setfilesetserver -server mds3 aixfiles
CMMNP5140I Fileset aixfiles assigned to Metadata server mds3.

While filesets are being moved, there will be a pause for clients that are accessing that fileset. Typically, this will simply be interpreted as an operation taking a little longer than usual to complete; the explicit behavior depends on the application. After the move, the clients can continue transparently; they do not need to re-start the application to recognize the new fileset host.

To convert a static fileset to a dynamic fileset, use the autofilesetserver command. This is shown in Example 7-58. We change the previously static fileset user1 to a dynamic fileset. After the command is issued, the Assigned Server column has a dash (-) in it, indicating a dynamic fileset. GUI: Manage Filing Filesets Click on the fileset Select action Properties General Settings Server Assignment Method Automatic.
Example 7-58 Change static fileset to a dynamic fileset
sfscli> autofilesetserver user1
CMMNP5402I Automatic Metadata server assignment for fileset user1 is enabled.
sfscli> lsfileset -l user1
Name  Fileset State Serving State Quota Type Quota (MB) Used (MB) Used (%) Threshold (%) Images Most Recent Image       Server Assigned Server Attach Point          Directory Name Directory Path  Parent    Children Description
=========================================================================================================================================================================================================================
user1 Attached      Online        Soft       0          16        0        80            1      Jun 07, 2004 3:56:44 AM mds1   -               sanfs/userhomes/user1 user1          sanfs/userhomes userhomes 0        -

7.5.5 Changing fileset characteristics


You can change certain fileset characteristics using the chfileset command. Example 7-59 shows setting a quota and threshold, and changing the quota type for an existing fileset. GUI: Manage Filing Filesets Click on the fileset Select action Properties Quota Options.
Example 7-59 Configuring quota on fileset
sfscli> lsfileset
Name      Fileset State Quota Type Quota (MB) Used (MB) Used (%) Threshold (%) Server
===========================================================================================
ROOT      Attached      Soft       0          16        0        0             - mds1
userhomes Detached      Soft       0          0         0        80            - mds1
user1     Attached      Soft       0          0         0        80            - mds1
dbdir     Detached      Soft       0          0         0        80            - mds2
sfscli> chfileset -quota 1000 -thresh 65 -qtype hard dbdir
CMMNP5166I Fileset dbdir was modified successfully.
sfscli> lsfileset
Name      Fileset State Quota Type Quota (MB) Used (MB) Used (%) Threshold (%) Server
===========================================================================================
ROOT      Attached      Soft       0          16        0        0             - mds1
userhomes Detached      Soft       0          0         0        80            - mds2
user1     Attached      Soft       0          0         0        80            - mds1
dbdir     Detached      Hard       1000       0         0        65            - mds2

7.5.6 Additional fileset commands


Here are some additional commands that you might find useful when working with filesets:
attachfileset: Attaches an existing fileset to a specific point in the global file system. GUI: Manage Filing Filesets Click on the fileset Select action Attach.
detachfileset: Detaches a fileset from the global file system. Detached filesets are unavailable for access until they are re-attached (using the attachfileset command); however, their contents are not affected. GUI: Manage Filing Filesets Click on the fileset Select action Detach.
rmfileset: Removes an empty, detached fileset (or, optionally, the files in the fileset, including any FlashCopy images). GUI: Manage Filing Filesets Click on the fileset Select action Delete.
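As an illustration only, removing a fileset would combine these commands roughly as follows; we assume here that detachfileset and rmfileset take the fileset name as their argument (check the command help for the exact options and confirmation prompts before relying on this sketch):

sfscli> detachfileset user1
sfscli> rmfileset user1

Remember that rmfileset requires the fileset to be empty and detached unless you explicitly request removal of its contents.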

7.5.7 NLS support with filesets


Beginning with SAN File System V2.2, MBCS characters as well as 8-bit ASCII can be used in the directory attachment point for a fileset. Figure 7-17 shows a fileset that has been attached to a directory whose name uses Japanese characters.

Figure 7-17 MBCS characters in fileset attachment directory

7.6 Client operations


We assume you have clients installed that have attached the SAN File System global namespace (as described in 5.3, SAN File System clients on page 149). At this stage, in order for the clients to access each fileset, there are important prerequisite steps. This section will give you a brief overview of how to take ownership and change permissions on filesets using Windows and UNIX-based clients. These steps are needed before any client can start accessing a fileset. Note: These steps need to be performed only ONCE for each fileset.

7.6.1 Fileset permissions


Newly created filesets are initially attached with a special dedicated user ID and group ID that lock out access to all clients. These are:
UNIX-based (including Linux): File permissions 000 (read/write/execute blocked for user/group/other), with user ID/group ID 1000000/1000000
Windows: Owner S-1-0-0


In order for clients to be able to access a fileset, a client must first take ownership of it by changing its owner to a valid user that will provide the required access. The take-ownership operation is only performed once for each fileset, and can only be done by a privileged client. The concept of root squashing (familiar from the NFS world) means that, by default, when client root or Administrator users are accessing the SAN File System, they do not have root privileges there, but instead only the equivalent of the everybody or other group. Therefore, in order to change the ownership and permissions on a fileset, one or more privileged clients must be created. It is recommended to have at least one privileged client of each client OS type (Windows and UNIX-based).
In the current release of SAN File System, we recommend arranging files so that they can be backed up by the correct client OS type. Since the two OS types use different security schemes, files created by a UNIX-based client must be backed up by an application running on a UNIX-based client in order to accurately capture all the security permissions and attributes. Similarly, files created on Windows clients must be backed up by an application running on a Windows client in order to accurately capture all the security permissions and attributes. One way to achieve this is to separate client files in filesets for each client OS type, that is, Windows clients create files only within designated filesets and UNIX-based clients create files only within another set of filesets. This is referred to as the primary allegiance of a fileset, that is, either Windows or UNIX-based. The different client platforms can, however, share files within filesets (read/write) if the permissions allow. Therefore, it is important to set up your ACLs on the clients to accomplish this goal.
To be able to take ownership and change permission on a new fileset, you need to turn off root squashing for the client, that is, enable it as a privileged client to SAN File System.

7.6.2 Privileged clients


A privileged client, in SAN File System terms, is a client that has root privileges in a UNIX environment or Administrator privileges in Windows environment. A root or Administrator user on a privileged SAN File System client will have full control over all file system objects in the filesets. A root or Administrator user on a non-privileged SAN File System client will not have full control over file system objects.

List current privileged clients


Issue the statcluster -config command to get the current privileged client list, as shown in Example 7-60. The output shows that there are no privileged clients currently configured. GUI: Manage Servers and Clients Privileged Clients.
Example 7-60 Get privileged client list
sfscli> statcluster -config
Name                            sanfs
ID                              60355
State                           Online
Target State                    Online
Last State Change               Sep 17, 2004 10:46:27 PM
Last Target State Change
Servers                         2
Active Servers                  2
Software Version                2.2.0.83
Committed Software Version      2.2.0.83
Last Software Commit            Sep 06, 2004 3:39:47 AM
Software Commit Status          Not In Progress
Installation Date               May 06, 2004 3:39:47 AM
===========User-Defined Configuration Settings============
Pool Space Reclamation Interval 60 minutes
Privileged Clients
RSA User                        USERID
RSA Password                    ********
===========Service-Defined Tuning Configuration===========
Master Server Buffer            2048 pages
Subordinate Server Buffer       200000 pages
Admin Process Limit             4
Server Workload Process Limit   20

Create privileged clients


There are two ways to add privileged clients to the MDS configuration. You can use either the addprivclient or chclusterconfig command. Please note that the addprivclient command preserves the list of already added clients, while chclusterconfig overwrites the whole list. We will create two privileged clients using the chclusterconfig command (see Example 7-61). GUI: Manage Servers and Clients Privileged Clients Enter client name Add.
Example 7-61 Add AIXRome and LIXPrague to the privileged client list
sfscli> chclusterconfig -privclient AIXRome,LIXPrague
Are you sure you want to change cluster configuration settings? [y/n]:y
CMMNP5336I The cluster was modified successfully.

Attention: chclusterconfig -privclient list replaces the entire list of current privileged clients. If you use this command to add an additional privileged client, you must specify both the current and new clients in the new list. The addprivclient command behaves differently (see Example 7-63). Re-issue the statcluster -config command to verify that the clients AIXRome and LIXPrague have been added to the privileged client list. This is shown in Example 7-62 on page 299.

Example 7-62 Verify privileged client list
sfscli> statcluster -config
Name                            sanfs
ID                              60355
State                           Online
Target State                    Online
Last State Change               Sep 27, 2004 4:52:46 AM
Last Target State Change
Servers                         2
Active Servers                  2
Software Version                2.2.0.83
Committed Software Version      2.2.0.83
Last Software Commit            Sep 15, 2004 4:41:21 PM
Software Commit Status          Not In Progress
Installation Date               Oct 14, 2003 12:04:25 PM
===========User-Defined Configuration Settings============
Pool Space Reclamation Interval 60 minutes
Privileged Clients              AIXRome,LIXPrague
RSA User                        USERID
RSA Password                    ********
===========Service-Defined Tuning Configuration===========
Master Server Buffer            2048 pages
Subordinate Server Buffer       200000 pages
Admin Process Limit             4
Server Workload Process Limit   20

The other method to add a privileged client and preserve the existing privileged clients list is to use the addprivclient command. Example 7-63 shows the output of the statcluster -config command after the client WINWashington has been added with the addprivclient command, confirming the modified list of privileged clients.
Example 7-63 Add new privileged client WINWashington using the addprivclient command
sfscli> statcluster -config
Name                            sanfs
ID                              60355
State                           Online
Target State                    Online
Last State Change               Sep 27, 2004 4:52:46 AM
Last Target State Change
Servers                         2
Active Servers                  2
Software Version                2.2.0.83
Committed Software Version      2.2.0.83
Last Software Commit            Sep 15, 2004 4:41:21 PM
Software Commit Status          Not In Progress
Installation Date               Oct 14, 2003 12:04:25 PM
===========User-Defined Configuration Settings============
Pool Space Reclamation Interval 60 minutes
Privileged Clients              AIXRome,LIXPrague, WINWashington
RSA User                        USERID
RSA Password                    ********
===========Service-Defined Tuning Configuration===========
Master Server Buffer            2048 pages
Subordinate Server Buffer       200000 pages
Admin Process Limit             4
Server Workload Process Limit   20
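To make the difference between the two commands explicit, either of the following would result in the three-client list shown above; the first must repeat the existing entries, while the second names only the new client (confirmation prompts and output messages are omitted here and may differ):

sfscli> chclusterconfig -privclient AIXRome,LIXPrague,WINWashington
sfscli> addprivclient WINWashington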

Remove a privileged client


If you need to remove a privileged client, you can do so using the rmprivclient command. In Example 7-64, we show how to remove a privileged client. GUI: Manage Servers and Clients Privileged Clients Select client Select action Revoke Root Privileges.
Example 7-64 Remove a privileged client
sfscli> rmprivclient LIXPrague
Are you sure you want to remove LIXPrague as a privileged client? [y/n]:y
CMMNP5380I Privileged client access successfully removed for LIXPrague.

7.6.3 Take ownership of filesets


Now you can take ownership of filesets on your UNIX-based and Windows privileged clients.

Take ownership of fileset: UNIX


Here we will take ownership of a new fileset called aixfiles, which was attached as aixfiles under the root in the global namespace. Notice that the CLUSTER_NAME, sanfs, shows as the root or base of the SAN File System on UNIX-based clients, and is appended to the directory where we chose to attach the global namespace when we installed the client (/sfs/sanfs).
1. Log in to the AIXRome client and view the directory listing to verify the default user ID, group ID, and permissions for the directory aixfiles, as shown in Example 7-65. At this stage, you cannot create any files in this fileset, as the directory is owned by a user ID and group ID that do not exist locally, and there are no read/write/execute permissions.
Example 7-65 Verify permissions on the new fileset aixfiles
# pwd
/sfs/sanfs
# ls -la
total 6
drwxr-xr-x  6 root    system  144 May 19 2004  .
drwxrwxrwx  3 root    system   72 May 19 14:25 ..
dr-xr-xr-x  2 root    system   48 May 19 14:27 .flashcopy
d---------  3 1000000 1000000  72 May 19 2004  aixfiles

2. Try to change to the aixfiles directory; as you do not have the correct permission to do this, an error will be displayed, as shown in Example 7-66.
Example 7-66 Verify no access to the aixfiles directory
# cd aixfiles
ksh: aixfiles: Permission denied.

Because you are on a privileged client, you can change these permissions. Use the chown, chgrp, and chmod commands to set the user ID, group ID, and permissions, and then verify the changes, as shown in Example 7-67 on page 301. Now you can change to the directory and create files there.

Example 7-67 Take ownership and set permissions on the fileset
# chown root.system aixfiles
# chmod 755 aixfiles
# ls -la
total 6
drwxr-xr-x  6 root    system  144 May 19 2004  .
drwxrwxrwx  3 root    system   72 May 19 14:25 ..
dr-xr-xr-x  2 root    system   48 May 19 14:27 .flashcopy
drwxr-xr-x  3 root    system   72 May 19 2004  aixfiles
# cd aixfiles
# ls -la
total 3
drwxr-xr-x  3 root    system   72 May 19 2004  .
drwxr-xr-x  6 root    system  144 May 19 2004  ..
d---------  2 1000000 1000000  48 May 19 2004  .flashcopy

Take ownership of fileset: Windows


The next example shows taking ownership and setting permissions of a fileset attached as Users on a privileged Windows client. 1. Ensure that you are logged in as a member of the administrator group on the privileged client and then open Windows Explorer. 2. Right-click S:\users and select Properties, as shown in Figure 7-18.

Figure 7-18 Select properties of fileset

3. Open the Security tab, and click Advanced to display the access control settings window.

4. Select the Owner Tab, as shown in Figure 7-19.

Figure 7-19 ACL for the fileset

5. The owner will be the default, S-1-0-0, which is the null security ID. Choose another owner, usually Administrator or Administrators. Make sure the box Replace owners on subcontainers and objects is checked. 6. Click Apply and then click OK. Acknowledge the warning given in Figure 7-20.

Figure 7-20 Verify change of ownership

Select Yes to activate the new settings. 7. Select the Security tab and set the permissions you want for the folder. Here we have given all privileges to the Administrators group (see Figure 7-21 on page 303). You should set the Everyone permissions according to your security requirements. If a UNIX-based client accesses this folder, it will do so with the permissions assigned to Everyone. Click OK to activate the changes.

Figure 7-21 Windows security tab

8. Verify that you can access the fileset by opening the USERS directory (in this example, S:\USERS). You can now create files in the fileset.

7.7 Non-uniform SAN File System configurations


In this section, we will introduce some considerations and new commands, introduced in SAN File System V2.1, that help to manage non-uniform SAN File System configurations. Refer to 3.3, SAN File System volume visibility on page 69 to understand our definition of a non-uniform SAN File System configuration. These new commands are:
reportclient: Displays a list of clients that have access to the specified volume or logical unit number (LUN).
reportfilesetuse: Displays the usage statistics for pools that currently store data for the specified fileset.
Many of the already existing commands now require a -client parameter, for example, mkvol. We will not cover these commands here, because we have already shown the usage of such commands in earlier parts of this chapter. By combining the output of these commands, and by checking it against the active policy set, you can determine which storage pools (and all of their volumes) a particular client needs to see in order to access all its files. Because this can become a complex administrative task, especially in larger-scale SAN File System environments, a sample script is provided in 9.7.1, Client validation sample script details on page 430 to help check and report any inconsistencies among client-volume-fileset relationships.

7.7.1 Display a list of clients with access to particular volume or LUN


To list all clients that have access to specified volumes or LUNs, use the reportclient command. You can identify the storage by using one of the following two options:
-volume <volume_name>: This option will list all clients with access to the specified volume. You can list your SAN File System volumes by using the lsvol command.
-lun <lun_id>: This option will list all clients with access to the specified LUN. You can list your SAN File System LUNs by using the lslun command with the appropriate -client parameter.
Example 7-68 shows the usage of the reportclient command with both command line options. GUI: Manage Storage Data LUNs Select LUN Select action Clients that can see the LUN.
Example 7-68 Example of reportclient command
sfscli> reportclient -lun VPD83NAA6=600507680188801B200000000000001B
Name
=======
AIXRome
sfscli> reportclient -vol vol_aixrome1
Name
=======
AIXRome

7.7.2 List fileset to storage pool relationship


To list all storage pools that are referenced by a particular fileset, use the reportfilesetuse command. This will list all storage pools that currently contain files for the designated fileset, as well as all storage pools that could potentially contain files in that fileset, according to the active policy. Example 7-69 shows that the fileset aixfiles currently has files stored in the aixrome storage pool and could in the future store files in the pool DEFAULT_POOL. The final column lists the number of policy rules in the active policy that reference that storage pool. Therefore, any client that wants to access the fileset aixfiles needs to have access to all the volumes in the storage pools aixrome and DEFAULT_POOL. GUI: Manage Filing Filesets Select Fileset Select action Details of the File Placements in Pools.
Example 7-69 Example of reportfilesetuse command
sfscli> reportfilesetuse aixfiles
Pool         Fileset Usage Total Rules That Enable Usage
========================================================
DEFAULT_POOL Not In Use    1
aixrome      In Use        1
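For a quick manual check, the two report commands can be combined from the shell, using the non-interactive sfscli invocation shown in Example 7-51. This sketch assumes hypothetical volume names; in practice you would take the volume list from lsvol for each pool reported by reportfilesetuse:

for v in avol1 avol2 bvol1; do
    echo "== clients that can access $v =="
    sfscli reportclient -vol $v
done

The full consistency check is automated by the sample script referenced in 9.7.1.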

7.8 File placement policy


SAN File System provides automatic file placement through the use of file placement policies that determine in which storage pool a file will be placed when created. There is another type of policy, file management policy, which is used for lifecycle management of files. This is discussed in 10.2, Lifecycle management with file management policy on page 441. For the rest of this chapter, when we discuss policy, we are referring to file placement policy.

We will first present general policy information, then show how to set up policy in the CLI (7.8.3, Create a policy and rules with CLI on page 309) and the GUI (7.8.4, Creating a policy and rules with GUI on page 311).

7.8.1 Policies and rules


Policies and the rules that they contain are used to assign files to specific storage pools.

Rules
A rule is an SQL-like statement that tells a SAN File System MDS to place the data for a file in a specific storage pool if the file meets a particular condition. A rule can apply to any file being created or only to files being created within a specific fileset.

Policies
A policy is a set of rules that determines where specific files are placed. An administrator can define any number of policies, but only one policy can be active at a time. If an administrator activates another policy, or makes changes to a policy, that action has no effect on existing files in the SAN File System. The new policy will be effective only on newly created files in the SAN File System.
Restriction: Please be aware that you cannot change rules in an active policy. You need to deactivate the policy first (by activating another policy), then edit the rules and activate the policy again. See 7.8.9, Best practices for managing policies on page 334 for our recommendations.
A policy can contain any number of rules; however, the entire policy cannot exceed a length of 1 MB (which includes any spaces used to delimit the rules and any comments). Rules in the active policy are effective on all the SAN File System MDSs (and therefore apply to all the clients). Rules can specify any of these conditions, which when matched, will cause that rule to be applied:
Fileset
File name or extension
Date and time when the file is created
User ID and Group ID on UNIX clients
SAN File System evaluates rules in the order that they appear in the active policy. When a client creates a file, SAN File System scans the list of rules in the active policy to determine which rule applies to the file. When a rule applies to the file, SAN File System stops processing the rules and assigns the file to the appropriate storage pool. If no rule applies, the file is assigned to the default storage pool.
Note: Rules in a policy are evaluated only when a file is being created. If an administrator switches from one policy to another, the rules in the new policy apply only to newly created files. Activating a new policy does not change the storage pool assignments for existing files. Moving or renaming a file does not cause a policy to be applied; however, restoring a file will cause it to be created in the storage pool required by the current policy.
At install time, a default null policy is created, which remains active until a new policy is created and activated. The null policy assigns all files to the default storage pool. Therefore, when you create new user pools, they will not be used until you create and activate a policy with rules that direct files to those pools.

Figure 7-22 shows how the policy rules operate to control how SAN File System allocates new files to the desired storage pools.

Figure 7-22 Policy rules based file placement (the filesets /HR, /CRM, /Finance, and /MFG are mapped by the rules to User Pool A (RAID-5), User Pool B (RAID-10), User Pool C (JBOD), and the Default Storage Pool (RAID-5))

The example shows four different storage pools available with volumes assigned to provide different qualities of service: User Pool A, User Pool B, User Pool C, and the Default Storage Pool. User Pool A and the Default Storage Pool both use RAID 5, User Pool B uses RAID 10, and User Pool C uses JBOD volumes. There are four filesets in the SAN File System: HR, CRM, Finance, and MFG. The figure shows how the active policy is applied to determine the placement of files, as created, in the available storage pools. The rules in the box specify the following actions:
All files in the fileset HR go to User Pool A.
Files with suffix .bak go to User Pool C.
Files containing the string DB2. in the file name go to User Pool B.
Other files that do not meet any rules go to the default storage pool (this is the default rule that is implicit in any policy).
Note: The figure does not show the exact syntax for rules, but is for illustration only. The order that the rules are listed in a policy determines the results (when a file is created, the rules are evaluated in order from top to bottom); the first rule that the file matches determines the file's placement. For example, although the file /HR/DB2.data matches both the first and third rule, the first rule takes precedence; therefore, it is placed in User Pool A when created.
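Using the rule syntax described in 7.8.2, the three explicit rules in the figure could be written roughly as follows (rule and pool names are illustrative only):

RULE 'hrRule'  SET STGPOOL 'UserPoolA' FOR FILESET(HR)
RULE 'bakRule' SET STGPOOL 'UserPoolC' WHERE NAME LIKE '%.bak'
RULE 'db2Rule' SET STGPOOL 'UserPoolB' WHERE NAME LIKE '%DB2.%'

No explicit rule is needed for the default storage pool, because any file that matches none of the rules is placed there automatically.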

7.8.2 Rules syntax


SAN File System rules use an SQL-like syntax:
RULE <rule_name> SET STGPOOL <pool_name> FOR FILESET <file_set_list> WHERE SQL_expression

In this syntax, the parameters are defined as follows:
rule_name        Optional identifier (name) for the rule.
pool_name        Storage pool where the files matching the rule should be stored.
file_set_list    One or more (comma-separated) filesets for which this rule applies.
SQL_expression   Narrows the file selection for which the rule applies.
The SQL_expression can be any combination of standard SQL-syntax expressions, except that Case expressions and compare-when clauses are not allowed. You can use many built-in functions in the SQL expression, for date and time manipulation, numeric manipulation, and string manipulation. These are listed next. Each rule must include either a FOR clause or a WHERE clause, or both a FOR clause and a WHERE clause. This will determine whether a rule is restricted in operation to files in a particular fileset or filesets. This concept is illustrated in 7.8.5, More examples of policy rules on page 322.

Note for SAN File System V1.1 clients: SAN File System V2.1 and higher still supports the use of the FOR CONTAINER clause in the rules, so existing policies do not need to be changed at this time. However, we recommend all future policies be written using the FOR FILESET clause.

Attributes
You can use the following file attributes in the WHERE clause:
NAME             Name of the file. % in the name represents one or more characters (wildcard), and _ (underscore) represents any single character. You can specify only the file name here, not a directory path.
CREATION_DATE    Date and time that the file is created.
GROUP_ID         Numeric group ID, only valid for UNIX clients.
USER_ID          Numeric user ID, only valid for UNIX clients.

String functions
These string-manipulation functions are available for file names and literals. Strings must be enclosed in single-quotation marks. A single-quotation mark can be included in a string by using two single-quotation marks (for example, 'a''b' represents the string a'b).
CHAR(x)                Converts an integer x to a string.
CHARACTER_LENGTH(x)    Determines the number of characters in string x.
CHAR_LENGTH(x)         Determines the number of characters in string x.
CONCAT(x,y)            Concatenates strings x and y.
HEX(x)                 Converts an integer x to hexadecimal format.
LCASE(x)               Converts string x to lowercase.

LOWER(x)               Converts string x to lowercase.
LEFT(x,y,z)            Left-justifies string x in a field of y characters, optionally padding with z.
LENGTH(x)              Determines the length of the data type of string x.
LTRIM(x)               Removes leading blanks from string x.
POSITION(x IN y)       Determines the position of string x in y.
POSSTR(x,y)            Determines the position of string x in y.
RIGHT(x,y,z)           Right-justifies string x in a field of y characters, optionally padding with z.
RTRIM(x)               Removes the trailing blanks from string x.
SUBSTR(x FROM y FOR z) Extracts a portion of string x, starting at position y, optionally for z characters.
SUBSTRING(x FROM y FOR z) Extracts a portion of string x, starting at position y, optionally for z characters.
TRIM(x)                Trims blanks from the beginning and end of string x.
TRIM(x FROM y)         Trims blanks that are x (LEADING, TRAILING, or BOTH) from string y.
TRIM(x y FROM z)       Trims character y that is x (LEADING, TRAILING, or BOTH) from string z.
UCASE(x)               Converts string x to uppercase.
UPPER(x)               Converts string x to uppercase.
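As a hypothetical illustration, a string function can make a file name match case-insensitive, so that files named song.MP3 and song.mp3 are placed in the same (illustrative) pool:

RULE 'mp3AnyCase' SET STGPOOL 'mp3pool' WHERE UCASE(NAME) LIKE '%.MP3'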

Numerical functions
These numeric-calculation functions are available for numerical parts of the file name, numeric parts of the current date, and UNIX-client user IDs or group IDs.
INT(x)        Converts number x to a whole number, rounding up fractions of .5 or greater.
INTEGER(x)    Converts number x to a whole number, rounding up fractions of .5 or greater.
MOD(x,y)      Determines x % y (the remainder of x divided by y).
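A hypothetical use of MOD, assuming it takes two arguments as its description suggests, is to spread the files of UNIX users across two pools by user ID parity (pool names are illustrative):

RULE 'evenUsers' SET STGPOOL 'userpool0' WHERE MOD(USER_ID, 2) == 0
RULE 'oddUsers'  SET STGPOOL 'userpool1' WHERE MOD(USER_ID, 2) == 1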

Date and time functions


These date-manipulation and time-manipulation functions are available for the creation date and current date.
CURRENT_DATE        Determines the current date on the MDS.
CURRENT_TIME        Determines the current time on the MDS.
CURRENT_TIMESTAMP   Determines the current date and time on the MDS.
DATE(x)             Creates a date out of x.
DAY(x)              Creates a day of the month out of x.
DAYOFWEEK(x)        Creates the day of the week out of date x, where the result is a number from 1 to 7 (Sunday=1).
DAYOFYEAR(x)        Creates the day of the year out of date x, where the result is a number from 1 to 366.
DAYS(x)             Determines the number of days since 0000-00-00.
DAYSINMONTH(x)      Determines the number of days in the month from date x.
DAYSINYEAR(x)       Determines the number of days in the year from date x.
HOUR(x)             Determines the hour of the day (a value from 0 to 23) of time or time stamp x.
MINUTE(x)           Determines the minutes from date x.
MONTH(x)            Determines the month of the year from date x.
QUARTER(x)          Determines the quarter of year from date x, where the result is a number from 1 to 4.
SECOND(x)           Returns the seconds portion of time x.
TIME(x)             Displays x in a time format.
TIMESTAMP(x,y)      Creates a time stamp (date and time) from a date x and, optionally, a time y.
WEEK(x)             Determines the week of the year from date x.
YEAR(x)             Determines the year from date x.

7.8.3 Create a policy and rules with CLI


To create a policy with the CLI, log in to the master MDS, and use the text editor vi to create a file containing the rules, using the SQL syntax shown. When creating a rule file:
Every policy file must start with the string VERSION 1.
The policy file may use MBCS or ASCII (7 or 8 bit) for file pattern matching strings or file names. All fileset names and storage pool names can only be specified using 8-bit ASCII, that is, no MBCS characters. SAN File System will not allow you to use MBCS when defining these types of administrative objects.
A policy is not required to contain any rules, in which case it would be equivalent to the default policy (all files stored in the default storage pool).
The maximum size of a policy is 32 KB.
To add comments to the policy, use the delimiters /* and */ (for example, /* This is a comment */).
Example 7-70 shows a very simple rule file, containing two rules.
Example 7-70 Sample rule file
VERSION 1 /* Do not remove or change this line! */
rule 'stgRule1' set stgpool 'mp3pool' where NAME like '%.mp3'
rule 'stgRule2' set stgpool 'DBpool' where NAME like '%DB2.%'

Save the file and note the name used. We saved our file with the name /home/admin/sample_policy.txt.

Now that you have created the rule file, you need to create a policy for it within SAN File System. Your rule file will be checked for valid SQL syntax during this step.

Create a policy
Use the mkpolicy command to create a policy containing the rule file, as shown in Example 7-71. Notice that the -file parameter is used to specify the name of the file containing the rules, as created in the previous step. We also specify a name for the policy (Sample_Policy) and enter a description (optional).
Example 7-71 Create a policy
mds1:/usr/tank/admin/bin # sfscli
sfscli> mkpolicy -file /usr/tank/admin/bin/sample_policy.txt -desc "Sample Policy for Typical File Handling" sample_policy
CMMNP5193I Policy sample_policy was created successfully.

List policies and rules in a policy


To display the policies currently available in the SAN File System, use the lspolicy command, as shown in Example 7-72.
Example 7-72 List the policies
sfscli> lspolicy
Name           State    Last Active             Modified                Description
===================================================================================
DEFAULT_POLICY active   May 06, 2004 3:40:05 AM May 06, 2004 3:40:05 AM Default policy set (assigns all files to default storage pool)
sample_policy  inactive                         May 14, 2004 3:14:22 AM Sample Policy for Typical File Handling

In this example, DEFAULT_POLICY is active as the default configuration, and the newly imported policy Sample_Policy is inactive.

List policy contents


You can display the rules within a policy with the catpolicy command (Example 7-73).
Example 7-73 List contents of a policy
sfscli> catpolicy DEFAULT_POLICY
DEFAULT_POLICY:
VERSION 1
/* Default Policy Set
   Assign all files to default storage pool.
   When no rule applies to a file, default storage pool is assigned. */
sfscli> catpolicy sample_policy
sample_policy:
VERSION 1 /* Do not remove or delete this line! */
rule 'stgRule1' set stgpool 'mp3pool' where NAME like '%.mp3'
rule 'stgRule2' set stgpool 'DBpool' where NAME like '%DB2.%'
sfscli>

Activate the new policy


To activate the new policy (replacing the previously active DEFAULT_POLICY), use the usepolicy command, as shown in Example 7-74. You will be prompted to confirm your choice to activate this policy, and thereby deactivate the current policy.
Example 7-74 Activate a policy
sfscli> usepolicy sample_policy
Are you sure you want to use this policy? New files should be allocated to a pool that is accessible to the clients where the file is needed. [y/n]:y
CMMNP5189I Policy sample_policy is now the active policy.

Rerun the lspolicy command to see that the new policy is now active, as shown in Example 7-75.
Example 7-75 List the policies
sfscli> lspolicy
Name           State    Last Active             Modified                Description
===================================================================================
DEFAULT_POLICY inactive May 14, 2004 3:16:24 AM May 06, 2004 3:40:05 AM Default policy set (assigns all files to default storage pool)
sample_policy  active   May 14, 2004 3:16:24 AM May 14, 2004 3:14:22 AM Sample Policy for Typical File Handling
sfscli>

When an administrator activates the policy, the master MDS checks all references to filesets and storage pools. If a rule in the policy references a non-existent storage pool, or a non-existent or unattached fileset, an error is returned and the policy is not activated.

Updating a policy
To update a policy, retrieve it using the catpolicy command, as shown in List policy contents on page 310. You can capture this output to a file; make the necessary changes with a text editor. Then create a policy with the mkpolicy command, and activate it with the usepolicy command. Note: Simply editing your original rules file will not update the policy, since once the policy is created, the rules in the file are imported into the SAN File System configuration, and there is no preserved link back to the original text file. Also, you cannot be sure that the original text file has not been tampered with. Therefore you should always retrieve the stored version of the policy, as described in this section. You can also modify policies using the GUI, as shown in the next section.
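Putting the CLI steps together, one possible update cycle looks like the following sketch (policy and file names are hypothetical, and sfscli is invoked non-interactively as in Example 7-51; note that the catpolicy output starts with a policy-name header line, which must be removed from the captured file before it is reused):

sfscli catpolicy sample_policy > /tmp/sample_policy_v2.txt
vi /tmp/sample_policy_v2.txt
sfscli mkpolicy -file /tmp/sample_policy_v2.txt -desc "Updated sample policy" sample_policy_v2
sfscli usepolicy sample_policy_v2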

7.8.4 Creating a policy and rules with GUI


If you do not want to use a text editor to enter the rules, or are not familiar with SQL, creating a policy and rules may be more easily performed using the SAN File System Console (GUI). With the GUI, you enter rules using drop-down menus to select the functions required, and the SQL is automatically generated, which will be syntactically correct. Unlike the CLI, with the GUI, you can return later and edit the existing rules or create new ones.

Here is a summary of the steps required to create a policy and rules with the SAN File System Console (GUI):
1. List currently defined policies, and determine the active policy.
2. Create a policy (with high-level settings).
3. Add rules for the policy.
4. Edit rules (if necessary).
5. Activate the policy.

List policies
Select Manage Filing Policies to display the Policies window (Figure 7-23), showing the currently defined policies (Initially, DEFAULT_POLICY, which is active and was created at installation).

Figure 7-23 Policies in SAN File System Console (GUI)

Create a policy
To create a new policy, click Create a Policy, or select Create from the drop-down menu and click Go. The Introduction window (Figure 7-24) will be displayed.

Figure 7-24 Create a New Policy

This window shows the three major steps to create a new policy. Click Next to start Step 1. The High-Level Settings window displays. Enter a name for the policy and a description, as shown in Figure 7-25. You can also select to clone or copy an existing policy. We will create a new policy by selecting New Policy. Click Next to continue.

Figure 7-25 New Policy: High Level Settings sample input

Add rules
The Add Rules to Policy window displays. Here you enter a description for the rule, select the destination storage pool in the Storage Pool Assignment, and specify one or more conditions to apply in the Conditions fields. In Figure 7-26 on page 315, the rule specifies that files ending in the extension .mp3 are to be stored in the storage pool svcpoolA. You can optionally limit the rule to apply only to files in a certain fileset by checking the Fileset box and making a selection from the drop-down menu. Notice that all the SQL functions described above are easy to select here by making the appropriate choice in the different pull-down menus. Also, you can use any individual condition, or any combination of conditions, to define the scope of the rule.

Figure 7-26 Add Rules to Policy

If you have more rules to specify, click New Rule at the bottom and repeat this step. We added another rule for DB2 files, to store them in DEFAULT_POOL, as shown in Figure 7-27.

Figure 7-27 New rule created

When you have specified all the rules you want in the policy, click Next at the bottom of this window (button not shown).

Edit rules
The Edit Rules for Policy window (Figure 7-28 on page 317) will be displayed. This shows the SQL corresponding to the rules entered, and allows you to make further edits if required.

Figure 7-28 Edit Rules for Policy

You can edit and modify the rules or add new ones as you wish. Notice that if you click Back at this stage, any editing changes will be lost.

When finished editing, click Finish. The Policies window (Figure 7-29) will be displayed, showing our newly created policy, Sample_Policy. It is inactive, and the DEFAULT_POLICY is still active.

Figure 7-29 List of defined policies

Activate the policy


To activate the new policy, check the box beside the policy, select Activate... from the drop-down menu, and click Go, as shown in Figure 7-30.

Figure 7-30 Activate Policy

You are prompted to confirm your choice to activate this policy, and thereby deactivate the current policy, as shown in Figure 7-31. If you are satisfied with this action, click OK.

Figure 7-31 Verify Activate Policy

The Policy window (Figure 7-32) will be displayed again, reflecting the new active policy.

Figure 7-32 New Policy activated

Updating a policy
You can use the GUI to update the rules in a policy, but it must be inactive, that is, you cannot update the active policy. Therefore, to change the active policy, first activate another policy, then select the now inactive policy. Select Properties from the drop-down menu, then click the Rules entry on the left hand side. This will display the current rules in a text box which you can edit to add/remove/modify rules as required. After making all the changes you want, activate the edited policy. As a best practice, after activating a new policy, create an additional copy of the policy (using the Clone policy option). The policy copy created will have exactly the same rules as the currently active policy, but it will be inactive, and can therefore be edited, then activated, whenever you want to change the policy rules.

Deleting a policy
You can delete a policy from the GUI, but its status must be inactive. To make a policy inactive, activate another existing policy, then delete the required inactive policy. The policy window lists all the active and inactive polices. As shown in Figure 7-33, select a policy and, from the drop-down menu, select Delete, and then click Go.

Figure 7-33 Delete a Policy

The verify window will be displayed (Figure 7-34 on page 321).

Figure 7-34 Verify - Delete Policy Window

Click OK to confirm your delete. Once you have confirmed your deletion, you will be returned to the List Policy window (Figure 7-35).

Figure 7-35 List Policies

7.8.5 More examples of policy rules


Example 7-76 shows some examples of more complex rules:
Rules using the LIKE statement for the file name in the WHERE clause.
Rules assigning files based on filesets only.
Rules based on UNIX user ID. Notice that user IDs and group IDs are specified with their numeric equivalent, not their actual name.
Rules based on built-in functions evaluated against a file's attributes.
Example 7-76 Additional rules
VERSION 1 /* Do not remove or change this line!*/
RULE 'stgRule1' SET STGPOOL DBpool WHERE NAME LIKE '%db2%'
RULE 'stgRule2' SET STGPOOL ImagePool WHERE NAME LIKE '%.jpeg'
RULE 'stgRule4' SET STGPOOL UNIXUsers FOR FILESET(User1,User2,User3,User4)
RULE 'stgRule3' SET STGPOOL UNIXSysPool FOR FILESET(UnixSys) WHERE USER_ID <= 100
RULE 'DoW_Sun' SET STGPOOL Sunday FOR FILESET(fileset1) WHERE DAYOFWEEK(CREATION_DATE)==1
RULE 'DoW_Web' SET STGPOOL Wednesday FOR FILESET(fileset1) WHERE DAYOFWEEK(CREATION_DATE)==4

7.8.6 NLS support with policies


Beginning with SAN File System V2.2, MBCS characters as well as 8-bit ASCII characters can be used in file pattern matching strings in a policy set. To illustrate this, we created a policy called MBCS_Policy. We added a rule to the policy that matches Japanese characters in the file name, as shown in Figure 7-36 on page 323.


Figure 7-36 MBCS characters in policy rule


Figure 7-37 shows the generated SQL for this rule.

Figure 7-37 Generated SQL for MBCS characters in policy rule

7.8.7 File storage preallocation


Before SAN File System V2.2.2, space for a file was allocated only as it was needed to write the file. SAN File System V2.2.2 and higher adds policy-based file preallocation, so that any file matching a given rule has additional space allocated to it in advance. This reduces the number of storage allocations required when writing files, since larger blocks are reserved up front. To allocate storage to a file in advance, create preallocation rules in the file placement policy. Preallocation rules are always processed before the normal file placement rules, so you can write them anywhere in the policy text; however, for clarity, we recommend that you group the preallocation rules together, or group them by fileset. When a file is created, the preallocation value from the policy is used instead of the default allocation of one block. The preallocation value is rounded up to the number of blocks required for the specified amount. For example, a value of 1 byte can be specified, but SAN File System will allocate one 4-kilobyte block. The maximum preallocation value is 128 MB. While space is preallocated to a file, it is not available for other uses in the SAN File System; however, unused blocks of storage are returned to free space when the file is closed.

Preallocating storage to files can provide the following benefits:
Allows different files in the same storage pool to have different space allocation behavior.
Reduces the response time for writing new files, especially files up to 128 MB in size.
Reduces the number of Metadata server transactions needed to write new files.
Reduces the number of allocation messages flowing between client and server when new files are written.

Important: At the time of the writing of this redbook, preallocation has no effect on write performance for files larger than 1 MB. This is expected to change in a future release of SAN File System. Preallocation rules can still be written without error, specifying a value of up to 128 MB; however, we recommend at this time not creating rules for files expected to grow larger than 1 MB. When writing files larger than 1 MB, the MDS automatically allocates enough storage to cover the actual write size requested.

Preallocation policy rule syntax


The preallocation policy rule syntax is as follows:
RULE rule_name SET PREALLOC value FOR FILESET (file_set_list) WHERE SQL_expression

The rule_name parameter is optional. The FOR FILESET clause is also optional; if used, it restricts the rule to files in the specified fileset or filesets. The SQL_expression is a file matching specification of the same form as in a file placement rule. The PREALLOC value can be specified in bytes, kilobytes, or megabytes using the following units: BYTE, BYTES, KB, KILOBYTE, KILOBYTES, MB, MEGABYTE, and MEGABYTES. Both uppercase and lowercase are valid, and white space is allowed between the number and the unit. For example, 1 mb and 1 MB are both valid preallocation values.
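For illustration (the rule name and file name pattern here are examples only), a preallocation rule following this syntax might look like the following. It preallocates 128 MB to any file whose name starts with bigfile in the fileset aixfiles, similar to the rule we configure through the GUI later in this section:

RULE 'preallocBigfile' SET PREALLOC 128 MB FOR FILESET (aixfiles) WHERE NAME LIKE 'bigfile%'

A separate file placement rule still determines which storage pool such a file is placed in; the preallocation rule only controls how much space is reserved when the file is created.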

Configuring preallocation policies


To insert a preallocation rule in a policy, select an inactive policy and edit it. If you want to modify the existing active policy, you first need to copy it to another policy, modify the copy, and then activate it. To do this, select Manage Filing → Create a Policy. Select the Clone Policy radio button, and then select the policy that you want to copy from the Existing Policy drop-down list. You can then modify the newly created inactive policy to insert any preallocation rules.


In Figure 7-38, the currently active policy, DEFAULT_POLICY, has been copied to a new inactive policy called common_policy. Click the check box for the new policy, select Properties from the drop-down, and click Go.

Figure 7-38 Select a policy

Click Rules on the left hand side to display the contents of the policy, stgRule1 and stgRule2, as in Figure 7-39.

Figure 7-39 Rules for selected policy


You can now edit in this text box to add a preallocation rule. We will add a rule to set a 128 MB preallocation for any file named bigfile in the fileset aixfiles, as shown in Figure 7-40. Click OK when you are finished editing the policy.

Figure 7-40 Edited rule for Preallocation

Returning to the list of policies, check the policy you just edited, and select Activate (see Figure 7-41).

Figure 7-41 Activate new policy

Your new preallocation is now in effect.


7.8.8 Policy management considerations


There are some points to consider for applying and managing the policy.

Policy evaluation
The master MDS evaluates the policy as follows:
When an administrator creates a new policy, the master MDS checks the basic syntax of all the rules in the policy.
When an administrator activates the policy, the master MDS checks all references to filesets and storage pools. If a rule in the policy references a non-existent storage pool, or a non-existent or unattached fileset, an error is returned and the policy is not activated.
After the policy is successfully activated, the rules in the policy are evaluated in order whenever a file is subsequently created in the SAN File System. At this stage, if an error is detected in the policy, an entry is made in the SAN File System log file, and the file is stored in the default User Pool.

FOR FILESET clause


If you specify the FOR FILESET clause, the rule is operational only for files actually in that fileset, that is, it does not apply to files in any other filesets, including nested filesets. If the FOR FILESET clause is not specified, the rule applies to all files in all filesets.
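As a minimal sketch of this scoping behavior (the rule names, pool names, and fileset names here are invented for illustration), the first rule below applies only to files created in the fileset proj1, while the second applies to matching files in every fileset:

RULE 'scopedRule' SET STGPOOL proj1pool FOR FILESET (proj1) WHERE NAME LIKE '%.dat'
RULE 'globalRule' SET STGPOOL logpool WHERE NAME LIKE '%.log'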

Assign and manage enough spare capacity for the default User Pool


All files that do not match any rules in the active policy are stored in the default User Pool. Also, if the policy has an error, as just described, other files may also be stored there. You will want to have enough storage capacity for the default User Pool, and monitor its utilization.

Disabling the default User Pool


If you are using a non-uniform SAN File System configuration and the setup is such that there is no User Pool that is accessible by all clients, then you need to disable the default User Pool. If you have to do this, be careful about how your policies will work. If you are still using the default policy, then no files can be created in SAN File System, because the default policy says to store all files in the default User Pool, which no longer exists. Example 7-81 on page 331 shows the error that occurs when trying to create a file that, according to the policy, must be stored in a non-existent default User Pool.

If you have configured another policy, make sure that the default rule is never invoked; that is, make sure your policy explicitly covers all files that could be created. If the default rule has to be invoked because the policy falls through to it, the file create will fail, since SAN File System will try to put the file in a non-existent pool. Whenever a file cannot be placed because the policy dictates that it goes into the (disabled) default storage pool, an error is returned to the respective client's operating system, and SAN File System also logs errors to indicate the problem.

Therefore, special attention is required when creating a policy for a non-uniform SAN File System configuration where there is no common, default User Pool. You will need to write policy rules that explicitly catch all files. The simplest way to do this is to use the FOR FILESET clause on each rule, and to make sure that all filesets are included in at least one rule. If you do not use the FOR FILESET clause, then the rule will apply to all filesets, and might then try to place files in pools that are not accessible. If you create new filesets, then the policy must be altered to include rules for each new fileset.


For example, suppose you have three filesets called:
Personnel
Development
Manufacturing
You have defined three User Pools with volumes:
Personnel_Pool
Development_Pool
Manufacturing_Pool
You have three clients, and you have configured the LUNs for the volumes in the User Pools so that each client has access to only one of the pools. Therefore, you want to confine each fileset to using only one pool. A simple policy, which ensures that no files fall through the policy (that is, an explicit rule applies to every file), is shown in Example 7-77.
Example 7-77 Simple complete policy when no default User pool
VERSION 1 /* Do not remove or change this line!*/
RULE 'stgRule1' SET STGPOOL Personnel_Pool FOR FILESET Personnel
RULE 'stgRule2' SET STGPOOL Development_Pool FOR FILESET Development
RULE 'stgRule3' SET STGPOOL Manufacturing_Pool FOR FILESET Manufacturing

If you added another fileset, for example, Test, and another pool Test_Pool, you could include another similar rule so that the policy would now be as shown in Example 7-78.
Example 7-78 Simple complete policy with extra fileset and pool when no default User pool
VERSION 1 /* Do not remove or change this line!*/
RULE 'stgRule1' SET STGPOOL Personnel_Pool FOR FILESET Personnel
RULE 'stgRule2' SET STGPOOL Development_Pool FOR FILESET Development
RULE 'stgRule3' SET STGPOOL Manufacturing_Pool FOR FILESET Manufacturing
RULE 'stgRule4' SET STGPOOL Test_Pool FOR FILESET Test

If you had additional pools, you could enhance the policy, while still including the catch-all rules (so that an explicit rule applies to every file), as shown in Example 7-79 on page 330. Note that we qualify each rule with the FOR FILESET clause so that we know exactly which filesets will be using each rule. In this example, we assume four filesets: Personnel, Development, Manufacturing, and Test. There are six pools: Personnel_Pool, Development_Pool, Manufacturing_Pool, Test_Pool, DB2_Pool, and Notes_Pool. The LUNs/volumes in the pools are made visible to the three clients as follows:
clientA: Personnel_Pool, Notes_Pool. We want this client to be able to access the fileset Personnel.
clientB: Development_Pool, Test_Pool, DB2_Pool. We want this client to be able to access the filesets Development and Test.
clientC: Manufacturing_Pool, DB2_Pool, Notes_Pool. We want this client to be able to access the fileset Manufacturing.


There are no common pools, so we have disabled the default User Pool. We need a policy that ensures that files in each fileset will only be stored in pools that are accessible by the client that we have declared needs access to that fileset. Example 7-79 shows one such sample policy that meets this requirement.
Example 7-79 Simple complete policy with extra fileset and pool when no default User pool
VERSION 1 /* Do not remove or change this line!*/
RULE 'stgRule1' SET STGPOOL DB2_Pool FOR FILESET Development WHERE NAME like %DB2%
RULE 'stgRule2' SET STGPOOL DB2_Pool FOR FILESET Manufacturing WHERE NAME like %DB2%
RULE 'stgRule3' SET STGPOOL Notes_Pool FOR FILESET Personnel WHERE NAME like %.nsf
RULE 'stgRule4' SET STGPOOL Notes_Pool FOR FILESET Manufacturing WHERE NAME like %.nsf
RULE 'stgRule1' SET STGPOOL Personnel_Pool FOR FILESET Personnel
RULE 'stgRule2' SET STGPOOL Development_Pool FOR FILESET Development
RULE 'stgRule3' SET STGPOOL Manufacturing_Pool FOR FILESET Manufacturing
RULE 'stgRule4' SET STGPOOL Test_Pool FOR FILESET Test

Of course, there are many possible ways to write your policy, but the important thing to remember is to walk through the policy to check that:
The policy meets your requirements for file storage.
The implicit default rule (which assigns any file that is not explicitly matched to the default User Pool) will never be invoked.

Creating a file in a non-existent Default User pool


The following examples show what happens when you try to assign a file that has to go to the default pool, which has been disabled. In Example 7-80, we first list the policies, showing that the policy created in 7.8.4, Creating a policy and rules with GUI on page 311 is active. This policy has placement rules only for DB2 and MP3 files; therefore, any other files will match the default rule (store in the default storage pool). We now disable the default pool with the disabledefaultpool command. Note that we get a warning before disabling it, and a message indicating that we must now have explicit policy rules for all files.
Example 7-80 Default storage pool is disabled
mds1:~ # sfscli
sfscli> lspolicy
Name           State    Last Active              Modified                 Description
=======================================================================================================
DEFAULT_POLICY inactive May 21, 2004 5:20:36 AM  May 06, 2004 3:40:05 AM  Default policy set (assigns all files to default storage pool)
Example_Policy active   May 21, 2004 5:22:17 AM  May 19, 2004 11:17:31 PM Example_Policy rules for handling *.mp3 and *DB2.* files
Test_Policy    inactive May 21, 2004 5:22:17 AM  May 20, 2004 2:02:08 AM  For testing purpose
sfscli> disabledefaultpool
Are you sure you want to disable the default storage pool? [y/n]:y
CMMNP5412I The default storage pool is now disabled. Files must match a policy rule to be created or saved.
sfscli>

Next, we try to create the file testDefaultPool.txt11 into the SAN File System on the client Rome. Since this file does not match any of the rules in our active policy, it must go to the default pool. However, we have disabled the default pool. The client cannot write the file in a non-existent pool, so the file create fails, as shown in Example 7-81.


Example 7-81 Client creates a file
Rome:/usr/local >cp testDefaultPool.txt11 /sfs/sanfs
cp: /sfs/sanfs/testDefaultPool.txt11: There is not enough space in the file system.
Rome:/usr/local >

We can show what happened by looking at the SAN File System event log, /usr/tank/server/log/log.std. More details about the server error logs are in 13.3, Logging and tracing on page 521. Example 7-82 shows the relevant messages for the failed file create operation.
Example 7-82 Extract from log file showing file creation failure
2004-05-21 05:31:25 WARNING HSTCM0935W N mds1 No storage pool has been assigned to file 'testDefaultPool.txt11' in fileset 3 (ROOT) since no policy rule applied and there is no default storage pool.
2004-05-21 05:31:25 ERROR HSTSC0527E N mds1 Unable to create file 'testDefaultPool.txt11' in fileset 3 (ROOT) because no storage pool was assigned to it.
2004-05-21 05:31:25 WARNING HSTSC0551W E mds1 ALERT: No storage pool assigned during file creation in fileset 3 (ROOT), error occurred 1 time(s) since the last alert.

If you disable the default pool using the GUI, an extra warning is also displayed before you commit the operation, as shown in Figure 7-42. To disable the default storage pool or set another pool as default, select Manage Storage Pools, select the pool of interest, click General Properties from the drop-down menu, and click Go. From there, you can either disable the pool as default (if the previous default pool was selected), or select another storage pool to be the new default.

Figure 7-42 Disable default pool with GUI

Using User ID and Group ID as conditions in rules


If you have rule conditions based on UNIX user IDs or group IDs, be aware of the way in which many UNIX backup/restore commands and applications (for example, the tar command) restore or unarchive files. Let us assume that your policy contains a rule that stores files owned by the user ID fred in a particular storage pool. Now suppose you have a tar archive, including files owned by user fred, which you want to restore to the SAN File System, and you extract the files from the archive while you are logged in as the root user. As it extracts each file, the tar command (like many other similar commands) creates the file with the UID/GID of the user actually performing the operation (that is, the numeric equivalent of the root user). Only when the extract is complete does it change the owner/group back to the original (fred). However, by that time, the files have already been created and placed in SAN File System as though they were owned by the root user. Therefore, even though the files are now correctly owned by fred, the policy rule for files owned by fred was not applied, and they will probably not be in the expected storage pool. Since many similar commands work this way, we do not recommend using user IDs or group IDs in policy rules.
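The following sketch illustrates the behavior (the archive name, directory, and file name are invented for illustration):

# whoami
root
# cd /sfs/sanfs/userdata
# tar -xvf /tmp/fred_files.tar
(each file is created with root's UID/GID, so placement rules for fred do not match)
# ls -l restored_file.dat
(ownership now shows fred, but the file was already placed in the pool chosen while it belonged to root)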

Checking the placement of files according to policy


To check how your rules are being applied, you can query the contents of a specified volume using the CLI command reportvolfiles, as shown in Example 7-83. You cannot use this command to query the contents of the System Pool; it only works on User volumes. The command only runs on the master MDS. The command displays the fileset and fully qualified directory and file name of each file found on the specified volume. In the example, we assume the policy Example_Policy, shown in Figure 7-28 on page 317, is active. The example shows that svcdiskA, which is in the mp3pool, does indeed contain files with the mp3 extension, and svcdiskD, in DBpool, contains DB2 files.
Example 7-83 Reporting files in a storage pool
sfscli> lsvol
Name     State     Pool         Size (MB) Used (MB) Used (%)
============================================================
MASTER   Activated SYSTEM       4192      304       7
svcdisk8 Activated DEFAULT_POOL 4080      1408      34
svcdiskA Activated mp3pool      4080      32        0
svcdiskD Activated DBpool       8176      16        0
sfscli> reportvolfiles svcdiskA
home.user1:user1/music/MyMusic.mp3
home.user2:user2/test/sample.mp3
...
sfscli> reportvolfiles svcdiskD
proj.CRM:CRM/db2/ProjDB2.dat
...

Policy statistics
Another aid in checking the execution of your policies is the statpolicy command, which displays policy statistics. These statistics are maintained for each fileset and are reset when the following actions occur:
The SAN File System cluster is stopped and restarted.
A new policy is activated.
An MDS is stopped, started, added, or dropped.
A fileset is moved to another MDS or detached.
You can manually reset the counters for the statistics by reactivating the current policy. There are two options that can be specified with the statpolicy command: the -rule and -pool parameters. When the -rule parameter is used, the results display the following for each rule in the active policy:
Rule Name, Position: The rule name and the ordinal position of the rule in the policy.
Evaluation Errors: Number of times that a rule has caused an error while being evaluated, not including syntax errors.
Evaluations Not Applied: Number of times the rule was evaluated but not applied.
Applied Evaluations: Number of times the rule was evaluated and applied.
Last Applied: Date and time the rule was last applied.


When the -pool parameter is used, the results display the following for each user pool:
Storage pool name
Number of times a file was placed into this storage pool
Last time a file was placed into this storage pool
Example 7-84 shows the results of the statpolicy command.
Example 7-84 Statpolicy results
sfscli> statpolicy -rule
Rule Name Position Evaluation Errors Evaluations Not Applied Applied Evaluations Last Applied
=========================================================================================================
stgRule1  1        0                 0                       0
stgRule2  2        0                 0                       0
stgRule3  3        0                 0                       12                  Jun 05, 2004 11:16:42 AM
Default   0        0                 0                       651                 Jun 04, 2004 11:39:19 PM
sfscli> statpolicy -pool
Pool Name     Files Placed Last File Placed
=================================================
DEFAULT_POOL  651          Jun 04, 2004 11:39:19 PM
aixrome       12           Jun 05, 2004 11:16:42 AM
lixprague     0
winwashington 0            -

You can also show policy statistics using the GUI. Select Manage Filing → Policy Statistics. Then you can pick either the rule or the pool option on the left hand side, as shown in Figure 7-43.

Figure 7-43 Display policy statistics


7.8.9 Best practices for managing policies


As explained in Updating a policy on page 320, it is not possible to modify rules in an activated policy. If you want to modify the activated policy, you first need to make it inactive by activating another policy. Then you can make your required changes to the newly inactive policy and re-activate it. How can you manage this process? A simplistic approach would be to temporarily activate the default policy while you make your changes to your real policy. But this means that while the default policy is active (and your enterprise policy is not), all files will be stored in the default pool, which is probably not what you want. Furthermore, in a non-uniform SAN File System configuration, there may not even be a default pool, as explained in Disabling the default User Pool on page 328, so activating the default policy would risk I/O errors on your clients. Therefore, we propose the following practice for managing policies. When creating a policy, always make two copies of it, one named, for example, non-unif and the other non-unif_standby, as shown in Example 7-85. We use the same source file, non-unif.txt, for both policies; therefore, their contents are identical. To do this in the SAN File System GUI, after creating the first policy, you would create a new policy but use the Clone Policy option on the screen shown in Example 7-25 on page 314.
Example 7-85 When creating policies, always make two copies
sfscli> mkpolicy -file non-unif.txt non-unif
CMMNP5193I Policy non-unif was created successfully.
sfscli> mkpolicy -file non-unif.txt non-unif_standby
CMMNP5193I Policy non-unif_standby was created successfully.

Then, when you activate the policy non-unif, you know that the non-unif_standby policy is identical. If you later need to make any changes to your policy non-unif:
1. Activate the non-unif_standby policy with the usepolicy command.
2. Edit the file non-unif.txt to update the rules.
3. Run the mkpolicy command with the -f option to apply the changes to the non-unif policy.
4. Re-activate the non-unif policy.
5. Propagate the updated policy to the standby policy, using the mkpolicy command with -f.

In this way, you always have your actual policy in effect. Example 7-86 shows the whole procedure.
Example 7-86 Activate the standby policy if you need to make any changes to the active policy
sfscli> usepolicy non-unif_standby
Are you sure you want to use this policy? New files should be allocated to a pool that is accessible to the clients where the file is needed. [y/n]:y
CMMNP5189I Policy non-unif_standby is now the active policy.
sfscli> quit
mds4:/usr/tank/admin/bin # vi non-unif.txt
mds4:/usr/tank/admin/bin # sfscli
sfscli> mkpolicy -file non-unif.txt -f non-unif
CMMNP5193I Policy non-unif was created successfully.
sfscli> usepolicy non-unif
Are you sure you want to use this policy? New files should be allocated to a pool that is accessible to the clients where the file is needed. [y/n]:y
CMMNP5189I Policy non-unif is now the active policy.


sfscli> mkpolicy -file non-unif.txt -f non-unif_standby
CMMNP5193I Policy non-unif_standby was created successfully.

You can use this procedure also when updating policies from the SAN File System GUI; the only difference when working with the GUI is that you do not use a rules definition file, but modify the rules directly in the GUI interface. You can see examples of how to manage policies using the GUI in 7.8.4, Creating a policy and rules with GUI on page 311.


Chapter 8.

File sharing
This chapter describes the file sharing features of SAN File System. After reading this chapter, the SAN File System administrator should have a better understanding of file sharing and how it can be used within SAN File System. In this chapter, we discuss the file sharing capabilities of SAN File System, including these topics:
Overview: Homogeneous and heterogeneous file sharing
Basic heterogeneous file sharing
  Sample implementation
Advanced heterogeneous file sharing
  Overview: Components and commands
  Configuration
  Sample implementation


8.1 File sharing overview


SAN File System enables file sharing between multiple clients. File sharing is classified as either homogeneous or heterogeneous. Homogeneous means sharing or accessing files between SAN File System clients of either the same operating system or the same operating system group (for example, multiple Windows SAN File System clients, or multiple UNIX-based SAN File System clients). Heterogeneous means sharing or accessing files between unlike SAN File System clients, that is, between Windows SAN File System clients and UNIX-based SAN File System clients. For the purposes of this discussion, a UNIX-based SAN File System client includes Solaris, AIX, and Linux, and a Windows SAN File System client includes Windows 2000 and Windows 2003.

Homogeneous file sharing


In homogeneous file sharing, the permissions are all of one type and are managed within the Windows or UNIX domain as appropriate. Therefore, permissions propagate to all the sharing clients. Full support is provided for UNIX-based and Windows standard file access permissions; however, AIX extended ACLs are currently not supported. To facilitate homogeneous file sharing, the UIDs/GIDs (UNIX) or SIDs (Windows) must be consistent across your operating system domains. For example, UID 2000 on one UNIX-based system (AIX, Solaris, or Linux) must correspond to the same user with UID 2000 on every other UNIX-based system, and similarly for SIDs (security IDs) on Windows. To achieve this, a common ID management system is required for each domain (Windows and UNIX). For example, Active Directory or LDAP could be used for Windows, and Network Information Services (NIS) or LDAP for UNIX. Manual synchronization of ID files is another option for coordinating homogeneous file sharing. No matter which method is used, the purpose is to ensure that permissions granted on one client map directly to the other clients.
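As a quick sanity check (the user and group names here are invented for illustration), you can compare the numeric IDs that each UNIX-based client resolves for the same user; they should be identical on every client that shares files:

client1 # id appuser
uid=2000(appuser) gid=2000(appgroup) groups=2000(appgroup)
client2 # id appuser
uid=2000(appuser) gid=2000(appgroup) groups=2000(appgroup)

If the numeric values differ between clients, the permissions seen on shared files will not be consistent, which is why a common directory service (or careful manual synchronization) is needed.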

Heterogeneous file sharing


In heterogeneous file sharing, SAN File System clients have the capability to access files that were not created on their own operating system platform. For example, Windows based clients could access UNIX files and UNIX based clients could access Windows files. Since Windows and UNIX file permissions are maintained and managed differently, SAN File System provides two methods for implementing heterogeneous file sharing, thus allowing flexible use of the global namespace: basic and advanced heterogeneous file sharing.

Basic heterogeneous file sharing


Basic heterogeneous file sharing is the default method of inter-platform file sharing within SAN File System. If the administrator wants to allow access to UNIX files from a Windows Client or vice versa, then at a minimum, basic heterogeneous file sharing must be implemented. In basic heterogeneous file sharing, to access a file or directory created by a Windows client on a UNIX-based client, you need to set the permissions accordingly for Everyone on Windows, because there is no default mapping of UID/GID on UNIX to SID on Windows. A UNIX-based client, when accessing a file or directory created by a Windows client, gets the permissions granted to Everyone. To access a file or directory created by a UNIX-based client on a Windows client, you need to set the permissions accordingly for the Other group within UNIX. A Windows client, when accessing a file or directory created by a UNIX-based client, gets the permissions granted to the Other group. Since the specific permissions do not match exactly between the two operating systems, translation is required. Table 8-1 on page 339 shows the mapping of permission types


between UNIX and Windows. For example, if a file created on UNIX has write permission for the Other entity, the Windows client will see permissions (for Everyone) of both Write data and Append data. Conversely, if it is required for a UNIX client to be able to write to a directory created by a Windows client, then the Everyone entity for that folder must have all three permissions: Create file, Create folders, and Delete sub-folders/files. The permissions or ownership can only be changed on the client type (that is, Windows or UNIX) where the file/directory was created, that is, a Windows client cannot change any security metadata on a UNIX-created file, and a UNIX client cannot change any security metadata on a Windows-created file. This is referred to as the primary allegiance for a fileset. For files created on UNIX-based clients, SAN File System stores the actual UID/GID numbers and shares them across all UNIX-based clients, but they all appear as SID S-1-0-0 on Windows. For files created on Windows, SAN File System stores the actual SID and shares it across Windows clients, but they all appear as 999999/999999 on UNIX-based clients. UID/GID/SID are all mapped by the client to user/group/owner according to whatever scheme is in use on the client. Please see 8.2, Basic heterogeneous file sharing on page 340 for detailed information about setting up this form of file sharing within SAN File System.
Table 8-1 Windows and UNIX permissions mapping
UNIX Permission   Windows File Permission        Windows Directory Permission
Read              Read data                      List folder
Write             Write data and Append data     Create files and Create folders and Delete sub-folders/files
Execute           Execute data                   Traverse folder
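As a simple illustration of this mapping (the path is invented for illustration), granting read and write permission to Other on a UNIX-created file is what gives Everyone on Windows clients the Read data, Write data, and Append data permissions on that file:

# chmod o+rw /sfs/sanfs/aixfiles/shared.dat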

Advanced heterogeneous file sharing


Advanced heterogeneous file sharing is the second method of inter-platform file sharing within SAN File System. It allows more flexibility and security when sharing files across UNIX and Windows environments, through the use of user map entries that are created and maintained by the SAN File System Administrator. To implement and use advanced heterogeneous file sharing, an Active Directory Domain Controller must be used to store the Windows user and group information, and either an LDAP or a NIS Domain Controller must be used to store the user and group information of the UNIX based servers. Each MDS is then linked with the UNIX and Windows domain controllers, and user map entries are created that specify a UNIX and Windows domain-qualified user to be treated as equivalents during cross-platform file sharing. We will show detailed instructions for setting up advanced heterogeneous file sharing in 8.3, Advanced heterogeneous file sharing on page 347. Since the specific permissions do not match exactly between the two operating systems, translation is required. Table 8-1 shows the mapping of permission types between UNIX and Windows. For example, if a file created on UNIX has write permission for a particular user, the Windows client will see permissions for that mapped user for both Write data and Append data. Conversely, if it is required for a user on a UNIX client to be able to write to a directory created by a Windows client, then the Windows user to which the UNIX user is mapped must have all three permissions: Create file, Create folders, and Delete sub-folders/files.


As with basic heterogeneous file sharing, the permissions or ownership can only be changed on the client type (that is, Windows or UNIX) where the file/directory was created. That is, a Windows client cannot change any security metadata on a UNIX-created file, and a UNIX client cannot change any security metadata on a Windows-created file.

8.2 Basic heterogeneous file sharing


Through the use of the global namespace, SAN File System is designed to allow heterogeneous file sharing between Windows and UNIX based clients. In order to utilize this capability, each fileset that is created must have primary allegiance to a client type (Windows or UNIX) and the responsible client type must be used to set the appropriate permissions on the fileset.

8.2.1 Implementation: Basic heterogeneous file sharing


Two filesets have been created, aixfiles and winfiles, as shown in Example 8-1. We will designate the fileset winfiles to have primary allegiance to Windows (that is, only Windows clients will create files here). Similarly, the fileset aixfiles will have primary allegiance to UNIX.
Example 8-1 List filesets defined for sharing
sfscli> lsfileset
Name     Container State    Quota Type Quota (MB) Used (MB) Used (%) Threshold (%) Most Recent Image
===========================================================================================
ROOT     Attached  Soft     0          0          0         0
winfiles Attached  Soft     0          0          0         80
aixfiles Attached  Soft     0          0          0         80        -

When the filesets are first created, they appear to a UNIX client (in this case, AIX) with no access, and UID/GID 1000000/1000000, as shown in Example 8-2. Any attempt to change to either the winfiles or aixfiles directory would fail.
Example 8-2 View UNIX-type permissions on newly created fileset on an AIX client
# ls -la
total 6
drwxr-xr-x   6 root    system     144 Oct 07 04:47 .
drwxrwxrwx   3 root    system      72 Oct 04 22:30 ..
dr-xr-xr-x   2 root    system      48 Oct 04 22:30 .flashcopy
d---------   3 1000000 1000000     72 Oct 07 2003  aixfiles
d---------   3 1000000 1000000     72 Oct 07 04:47 winfiles

On the Windows clients, the owner of each fileset is SID S-1-0-0, as shown in Figure 8-1 on page 341.


Figure 8-1 View Windows permissions on newly created fileset

In this example, we are going to share the winfiles fileset between Windows and UNIX. First, you need to take ownership of the winfiles fileset, as described in Take ownership of fileset: Windows on page 301. After this has been done, we will set the permissions so that the UNIX clients will be able to read and list the winfiles fileset and Windows Administrator users will get full access to the directory. To enable the UNIX permissions, you need to set the permissions on Windows for the Everyone group, as shown in Figure 8-2.

Figure 8-2 Set permissions for Everyone group


The permissions for the Everyone group have been set to allow Read & Execute, List Folder Contents, and Read. Click Advanced to display the Access Control Settings and click View/Edit. Now we can see, in Figure 8-3, that the required permissions to allow a UNIX user to read and execute in the directory are set. The permissions were given in Table 8-1 on page 339. We must have List folder and Traverse folder to translate to UNIX permissions of Read and Execute. Since none of the write permissions from the table are given, the UNIX client will not be able to write to the folder.

Figure 8-3 Advanced permissions for Everyone

Next, verify that the Administrator group has the Windows permission set to Full control, by clicking the Administrators group, as shown in Figure 8-4 on page 343.


Figure 8-4 Set permissions on Administrator group to allow Full control

Figure 8-5 summarizes the current permissions for the winfiles fileset: Members of the Windows Administrator group have Full control to the fileset and members of the Everyone group (which will be used by UNIX clients) have only Read & Execute.

Figure 8-5 View Windows permissions on winfiles fileset


On the AIX client, we can see that the other group now has r-x permissions (Example 8-3), and the UID/GID is now set to 999999/999999. This indicates that a Windows client has taken ownership of the fileset, as it has changed from the original 1000000.
Example 8-3 List AIX permissions after changing Windows permissions
# ls -la
total 6
drwxr-xr-x   6 root    system     144 Oct 07 04:47 .
drwxrwxrwx   3 root    system      72 Oct 04 22:30 ..
dr-xr-xr-x   2 root    system      48 Oct 04 22:30 .flashcopy
d---------   3 1000000 1000000     72 Oct 07 2003  aixfiles
d------r-x   3 999999  999999      72 Oct 07 04:47 winfiles

The permissions set mean that the AIX client can now change to the winfiles directory and list its contents, as shown in Example 8-4. It could also view the PDF file. However, it cannot write to the directory because the appropriate permissions are not set for Everyone.
Example 8-4 Read winfiles fileset on AIX client
# cd winfiles
# pwd
/mnt/tank/SFS1/winfiles
# ls -la
total 20483
d------r-x   3 999999 999999      96 Oct 07 16:29 .
drwxr-xr-x   6 root   system     144 Oct 07 04:47 ..
d------r-x   2 999999 999999      48 Oct 07 04:47 .flashcopy
-------r-x   1 999999 999999 9753683 Sep 25 07:39 TSM Implementation guidesg245416.pdf

Next, we will share UNIX files with Windows clients in the fileset aixfiles. We assume the AIX client has already taken ownership of the fileset, as described in Take ownership of fileset: UNIX on page 300. To allow Windows clients to read and execute files within the aixfiles fileset, set the UNIX permissions to 755, which translates as shown in Example 8-5. The crucial thing is to set the permissions appropriately for Other; in this case, they are read and execute. This translates into the correct Windows folder permissions, as shown in Table 8-1 on page 339.
Example 8-5 Set UNIX permission on aixfiles
# chmod 755 aixfiles
# ls -la
total 6
drwxr-xr-x   6 root   system    144 Oct 07 04:47 .
drwxrwxrwx   3 root   system     72 Oct 04 22:30 ..
dr-xr-xr-x   2 root   system     48 Oct 04 22:30 .flashcopy
drwxr-xr-x   3 root   system     72 Oct 07 2003  aixfiles
d------r-x   3 999999 999999     96 Oct 07 16:29 winfiles

The UNIX permission 755 basically means that the owner (root) has read, write, and execute permissions, members of the system group have read and execute permissions, and everyone else (including Windows clients) will have read and execute permissions. On the Windows client, the owner SID shows as S-1-0-0, as shown in Figure 8-6 on page 345.


Figure 8-6 View Windows permissions on fileset

On the AIX client, we created a file in the fileset, aixfordummies.txt. It has been set to allow everyone to read the file, as shown in Example 8-6.
Example 8-6 View AIX permissions for text document within aixfiles fileset
# cd aixfiles
# pwd
/mnt/tank/SFS1/aixfiles
# ls -la
total 11
drwxr-xr-x   3 root    system     96 Oct 07 17:10 .
drwxr-xr-x   6 root    system    144 Oct 07 04:47 ..
d---------   2 1000000 1000000    48 Oct 07 2003  .flashcopy
-rw-r--r--   1 root    system     17 Oct 07 17:11 aixfordummies.txt


We confirm on the Windows client that it can read the file (since it inherits the permissions for the other category on UNIX) in Figure 8-7.

Figure 8-7 Read permission for Everyone group

On the Windows client, we confirm read access to the file by listing it in the directory and opening it in the Notepad application, as shown in Example 8-7. Note that any attempt to update the file would fail because we do not have write access.
Example 8-7 List aixfiles directory on Windows client
C:\Documents and Settings\Administrator>s:
S:\>dir
 Volume in drive S is SFS1
 Volume Serial Number is 0000-905A
 Directory of T:\
10/07/2003  05:47a    <DIR>          .
10/07/2003  05:29p    <DIR>          winfiles
10/07/2003  06:10p    <DIR>          aixfiles
               0 File(s)              0 bytes
               4 Dir(s)  12,079,595,520 bytes free
S:\>cd aixfiles
S:\aixfiles>dir
 Volume in drive S is SFS1
 Volume Serial Number is 0000-905A
 Directory of S:\aixfiles
10/07/2003  06:10p    <DIR>          .
10/07/2003  05:47a    <DIR>          ..
10/07/2003  06:11p                17 aixfordummies.txt
               1 File(s)             17 bytes
               2 Dir(s)  12,079,595,520 bytes free
S:\aixfiles>notepad aixfordummies.txt

You have now successfully shared files between UNIX and Windows clients using the basic heterogeneous file sharing capabilities of SAN File System.


8.3 Advanced heterogeneous file sharing


SAN File System Version 2.2 introduced more advanced file sharing configurations that improve the flexibility, performance, and security of heterogeneous file sharing. It allows the administrator to set up cross-environment access checking such that files created on Windows can be accessed by authorized users on UNIX, and vice versa. To control cross-environment authorization, the administrator manages a set of user map entries using the administrative CLI or GUI. Each user map entry specifies a UNIX domain-qualified user and a Windows domain-qualified user that are to be treated as equivalent for the purpose of checking file access permission in a cross-platform file access situation. The SAN File System MDS cluster accesses the client's UNIX and Windows directory services, as needed, to obtain user ID and group membership information. Through the MDS, the client translates the effective user ID of the user trying to access the file from the platform where access is attempted, into the platform where the file was created. User and group level access is then checked using the ownership permissions and semantics in effect on the platform of creation. This flow is shown in Figure 8-8. After initial creation of a file, subsequent permission changes can only be made by clients of the same platform base (either Windows or UNIX-based); however, a file can be copied to allow a permission change.

Figure 8-8 SAN File System user mapping

In order to enable advanced heterogeneous file sharing capabilities, additional components must be configured to work with the SAN File System, in addition to the basic heterogeneous file sharing configuration. If the additional components are not configured and if no user mappings are defined, the file sharing will occur according to the basic heterogeneous file sharing concept and configuration, as described in 8.2, Basic heterogeneous file sharing on page 340. In the sections below, we will summarize the configuration and setup procedure for advanced heterogeneous file sharing, then work through a sample setup.


8.3.1 Software components


The following components must be obtained by the administrator to enable the heterogeneous file sharing feature:
A single NIS or LDAP directory service for UNIX
A single Active Directory service for Windows
Winbind (a subcomponent of Samba that can be obtained from http://www.samba.org)
Each directory service must be configured by the administrator and will be used to store the user information for the UNIX and Windows users, respectively. Each directory service controller will be configured to talk to the SAN File System MDS cluster through the use of winbind (see 8.3.5, MDS configuration on page 355).

8.3.2 Administrative commands


After basic configuration of the directory services and winbind, the administrator must use either the SAN File System CLI or GUI to properly link the components through SAN File System and establish the user map entries. The administrator must declare one UNIX and one Windows directory server reference as a domain for use within SAN File System. The domains can be created and managed using the following CLI commands (or via the GUI using the Administer Access menu option):
mkdomain
rmdomain
chdomain
lsdomain
Once the domains are created, the administrator must map the UNIX and Windows entries that are to be treated as equivalents. The user mappings can be created and managed using the following CLI commands:
mkusermap
rmusermap
lsusermap
refreshusermap
All user map operations can be issued against domain-qualified user names or user IDs.

8.3.3 Configuration overview


Here is a summary of the steps required to configure the advanced heterogeneous file sharing feature. Specific configuration instructions and a sample implementation follow.
1. Configure an Active Directory server as a domain controller to manage Windows user IDs, or identify an existing Active Directory server for this purpose.
2. Configure either an NIS or LDAP server as a domain controller to manage UNIX user IDs, or identify an existing NIS or LDAP server for this purpose.
3. Add SAN File System Windows user IDs to the Active Directory domain.
4. Add SAN File System UNIX user IDs to the LDAP or NIS domain.
5. Configure each MDS to access the directory server instances.
6. Declare the UNIX and Windows domain controllers to the SAN File System MDS.
7. Create the user map entries on the SAN File System MDS.


8.3.4 Directory server configuration


A Windows (Active Directory) and a UNIX (LDAP or NIS) directory server are required to store the effective user IDs for advanced heterogeneous file sharing. Either new or existing directory servers can be used.

Note: The same LDAP server used for the SAN File System's administrative authentication can be used to store UNIX client user IDs for heterogeneous file sharing. A separate LDAP instance can also be used.

Once the domain controllers (directory servers) are configured, you can add the appropriate user IDs for the SAN File System clients to the relevant domain. You do not need to add all your user IDs to the domain - only those that you want to translate cross-domain need be added.
User IDs for SAN File System Windows clients (Windows 2000 or 2003) must be added to the Active Directory domain.
User IDs for SAN File System UNIX clients (AIX, SUSE, Red Hat, Solaris) must be added to the UNIX LDAP or NIS domain.

Active Directory and LDAP configuration summary


As previously stated, an Active Directory server and either an NIS or LDAP server are required to serve the Windows and UNIX user IDs, respectively. You can use existing instances or configure new servers. As a guide, we provide a sample configuration here. Bear in mind that the configuration of Active Directory, LDAP, and NIS are complex topics, and a full explanation of these is beyond the scope of this redbook. The environment shown here represents a minimum setup; policies and expertise within individual organizations will dictate more complex configurations.


Our lab setup is shown in Figure 8-9. We will use this diagram as a reference for the rest of the configuration and implementation information contained in this chapter. We used an LDAP server for the UNIX directory service.



Figure 8-9 Sample configuration for advanced heterogeneous file sharing

Active Directory
We installed the Windows 2000 system goku as the Active Directory Server. It was configured as the Active Directory Domain Controller and nameserver (DNS) for the domain sanfsdom.net. See your Windows documentation for detailed instructions for setting up Active Directory. The Active Directory confirmation window for the domain controller and the domain is shown in Figure 8-10 on page 351.


Figure 8-10 Created Active Directory Domain Controller and Domain: sanfsdom.net

We created a User called sanfsuser within the domain, as shown in Figure 8-11.

Figure 8-11 User Creation Verification in Active Directory


Figure 8-12 shows that we added our SAN File System Windows 2000 client (jacob) to the Active Directory Domain. This means that the Windows sanfsuser ID created can now be used to log into our SAN File System Windows client, jacob.

Figure 8-12 SAN File System Windows client added to Active Directory domain

LDAP
For our LDAP server, we used OpenLDAP running on a Red Hat Linux server called enoch. This LDAP server is also being used for our SAN File System administrator user authentication functionality. We added additional schema entries to enable advanced heterogeneous file sharing. You can use the same LDAP server as already used for SAN File System as we have done here, or use a different LDAP server. To extend the LDAP server, we added additional entries to our LDAP schema. Figure 8-13 shows the LDAP entries that were added below the top level, which in this case is SANFSBase.


Figure 8-13 Sample heterogeneous file sharing LDAP diagram


We created a new branch in our existing LDAP tree for our new domain for UNIX user IDs. The domain is called SANFSdom. Within this domain, we created containers for Users and Groups to contain our UNIX user ID and Group information, respectively. Within those containers, we created a user called sanfsuser and a group called sanfsgroup. The user created is linked to the group it belongs to through the attributes set during its definition. This can be seen in the sample LDIF file used to create the additional LDAP structure, listed in Example 8-8. To deploy a similar setup in your environment, attach the new container (SANFSdom, in our example) under your existing LDAP Directory base. Then you should import the LDIF file to your configuration to add the additional entries. Note: The LDIF file below uses o=SANFSBase as the root of the LDAP Directory tree, which differs from the LDAP example in Figure 4-1 on page 102. You should use your appropriate organization name entry.
Example 8-8 Sample LDAP LDIF file for file sharing
# Information for the File Sharing Domain (SANFSdom)
# SANFSdom, SANFSBase
dn: ou=SANFSdom,o=SANFSBase
ou: SANFSdom
objectClass: organizationalunit

# Users, SANFSdom, SANFSBase
dn: ou=Users,ou=SANFSdom,o=SANFSBase
ou: Users
objectClass: organizationalunit

# Groups, SANFSdom, SANFSBase
dn: ou=Groups,ou=SANFSdom,o=SANFSBase
ou: Groups
objectClass: organizationalunit

# sanfsgroup, Groups, SANFSdom, SANFSBase
dn: cn=sanfsgroup,ou=Groups,ou=SANFSdom,o=SANFSBase
cn: sanfsgroup
objectClass: posixGroup
gidNumber: 1000
memberUid: 1

# sanfsuser, Users, SANFSdom, SANFSBase
dn: cn=sanfsuser,ou=Users,ou=SANFSdom,o=SANFSBase
uid: sanfsuser
gidNumber: 1000
objectClass: posixAccount
objectClass: account
cn: sanfsuser
userPassword:: ZG9udGdpdmVvdXQ=
homeDirectory: /tmp
uidNumber: 6000
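If you save these entries to a file (the file name here is an example; the bind DN is the one used in our lab configuration), they can be loaded into OpenLDAP with the ldapadd command, for example:

# ldapadd -x -D "cn=Manager,o=SANFSBase" -W -f sanfsdom.ldif

The -W option prompts for the bind password. Some administrators prefer to stop the LDAP server and use slapadd for bulk loads instead.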


Configure UNIX clients to use LDAP user IDs


Consult your UNIX system documentation for details of how to configure a UNIX SAN File System client to use LDAP services for authenticating user IDs. As an example, we will use our AIX 5L V5.2 SAN File System client, agent47. We will configure it to join our LDAP domain, SANFSdom, so that we can use the UNIX user ID sanfsuser, which we created on the LDAP server, to log in to the SAN File System AIX client.
1. Verify that the AIX LDAP client packages are installed on the system:
   ldap.client.adt
   ldap.client.dmt
   ldap.client.java
   ldap.client.rte
   If they are not installed, install them now from the AIX 5L V5.2 distribution CD.
2. Edit the configuration file for LDAP, /etc/security/ldap/ldap.cfg, as shown in Example 8-9, specifying the host name of your LDAP server, administrator ID, password, and suffixes.
Example 8-9 Sample /etc/security/ldap/ldap.cfg file
# cat ldap.cfg
. . .
# Comma separated list of ldap servers this client talks to
#ldapservers:myldapserver.ibm.com
ldapservers:enoch.tucson.ibm.com

# LDAP server bindDN
#ldapadmin:cn=admin
ldapadmin:cn=Manager,o=SANFSBase

# LDAP server bindDN password
#ldapadmpwd:secret
ldapadmpwd:fakepwd

# Whether to use SSL to communicate with the LDAP server. Valid value
# is either "yes" or "no". Default is "no".
# Note: you need a SSL key and a password to the key to enable this.
#useSSL: no
useSSL:no
. . .
# Base DN where the user and group data are stored in the LDAP server.
# e.g., if user foo's DN is: username=foo,ou=aixuser,cn=aixsecdb
# then the user base DN is: ou=aixuser,cn=aixsecdb
#userbasedn:ou=aixuser,cn=aixsecdb,cn=aixdata
#groupbasedn:ou=aixgroup,cn=aixsecdb,cn=aixdata
#idbasedn:cn=aixid,ou=system,cn=aixsecdb,cn=aixdata
#hostbasedn:ou=hosts,cn=nisdata,cn=aixdata
#servicebasedn:ou=services,cn=nisdata,cn=aixdata
#protocolbasedn:ou=protocols,cn=nisdata,cn=aixdata
#networkbasedn:ou=networks,cn=nisdata,cn=aixdata
#netgroupbasedn:ou=netgroup,cn=nisdata,cn=aixdata
#rpcbasedn:ou=rpc,cn=nisdata,cn=aixdata
userbasedn:ou=Users,ou=SANFSdom,o=SANFSBase
groupbasedn:ou=Groups,ou=SANFSdom,o=SANFSBase

# LDAP class definitions.
#userclasses:aixaccount,ibm-securityidentities
userclasses:account,posixaccount,shadowaccount
#groupclasses:aixaccessgroup
groupclasses:posixgroup
. .
# LDAP server port. Default to 389 for non-SSL connection and
# 636 for SSL connection
#ldapport:389
ldapport:389
#ldapsslport:636
. .
#

3. Use the mksecldap and secldapclntd commands to configure LDAP and start the LDAP daemons, as shown in Example 8-10. These allow the AIX client to recognize and access the LDAP server. The format of the mksecldap command is:
# mksecldap -c -a <ldapadmin base dn> -p <password> -h <LDAP server IP address>

Example 8-10 mksecldap and secldapclntd
# mksecldap -c -a cn=Manager,o=SANFSBase -p fakepwd -h enoch.tucson.ibm.com
# secldapclntd

4. You can now use the user IDs defined in your LDAP server (sanfsuser in our example) to log in to your AIX client.

NIS configuration
We do not show NIS configuration here; however, if you are using NIS rather than LDAP to serve your UNIX IDs, you would enter the User IDs and groups into the NIS server, then configure your UNIX clients to use NIS for user logins.

8.3.5 MDS configuration


The SAN File System MDS must be configured to make queries to the Directory Services used to store the effective Windows and UNIX IDs. It is required to install the winbind component from the Samba application, as well as Heimdal. Winbind is used to allow the MDS to communicate with the Active Directory server and Heimdal is used for authentication of the MDS with the Active Directory server. The winbind and Heimdal packages are not included with the SAN File System software; they must be downloaded from their appropriate locations. Specific steps for installing these components on the MDS as well as other configuration requirements are described below. These steps must be completed on each MDS in the cluster - we recommend following the complete sequence on each MDS before proceeding to the next MDS. The process does not require interruption to SAN File System services.

File sharing installation scripts


Scripts are provided to assist in the MDS configuration for advanced heterogeneous file sharing. The hetsec_prereqs.tar file is included on the SAN File System installation CD in the common directory - extract and install it on each MDS as follows: 1. Copy the hetsec_prereqs.tar from the SAN File System installation CD to a temporary directory, for example, /tmp.

2. Change to the /usr/tank directory. 3. Extract the tar file using the tar xvf /tmp/hetsec_prereqs.tar command, as shown in Example 8-11.
Example 8-11 Extract hetsec_prereqs.tar output
# cd /tmp
/tmp # cp /media/cdrom/common/hetsec_prereqs.tar /tmp
/tmp # cd /usr/tank
/usr/tank # tar xvf /tmp/hetsec_prereqs.tar
./hetsec_prereqs/
./hetsec_prereqs/krb5.conf.template
./hetsec_prereqs/install-winbind.sh
./hetsec_prereqs/build-winbind.sh
./hetsec_prereqs/smb.conf.template
./hetsec_prereqs/INSTALL
./hetsec_prereqs/build-heimdal.sh
./hetsec_prereqs/sanfswinbind
/usr/tank #

Heimdal and winbind installation


Before proceeding, verify that the MDS can ping the Active Directory and the LDAP or NIS Servers by their IP Addresses and Fully Qualified Domain Names. You must be logged on to the MDS as the root user.
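For example, using the host names from our lab setup, a quick connectivity check from the MDS might look like this:

# ping -c 1 goku.tucson.ibm.com
# ping -c 1 enoch.tucson.ibm.com

Both names must resolve (through DNS or /etc/hosts) and respond before you continue with the installation.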

Install and build Heimdal package


1. Obtain and build the Heimdal package. Check the SAN File System release notes to verify the latest supported version; at the time of writing, this is Version 0.6.3.
   a. Go to the Web site http://www.pdc.kth.se/heimdal/.
   b. Click ftp://ftp.pdc.kth.se/pub/heimdal/src/.
   c. Download the heimdal-0.6.3.tar.gz package.
   Tip: The direct download link is ftp://ftp.pdc.kth.se/pub/heimdal/src/heimdal-0.6.3.tar.gz.

2. Create a heimdal directory in /usr/local:


# mkdir /usr/local/heimdal

3. Copy the downloaded heimdal-0.6.3.tar.gz package to the created directory /usr/local/heimdal. 4. From the /usr/local/heimdal directory, execute the following command to build and install heimdal:
bash /usr/tank/hetsec_prereqs/build-heimdal.sh

Successful completion is shown in Example 8-12 on page 357. As the script is executed, a lot of output will be produced. This process may take a few minutes.


Example 8-12 Build heimdal
/usr/local/heimdal # bash /usr/tank/hetsec_prereqs/build-heimdal.sh
.
.
=== HEIMDAL for SANFS install step complete. ===
=== HEIMDAL for SANFS ready for use

Install and build winbind


1. Obtain the Samba distribution. Check the SAN File System release notes to verify the latest supported version; at the time of the writing of this redbook, this is Version 3.0.7.
   a. Go to http://www.samba.org.
   b. Download the samba-3.0.7.tar.gz package.
2. Create a winbind directory in /usr/local:
# mkdir /usr/local/winbind

3. Copy the downloaded samba-3.0.7.tar.gz package to the newly created directory /usr/local/winbind.
4. From the /usr/local/winbind directory, execute the following command to build winbind:
bash /usr/tank/hetsec_prereqs/build-winbind.sh

Successful completion is shown in Example 8-13. As the script is executed, a lot of output will be produced. This process may take a few minutes.
Example 8-13 Build winbind
# bash /usr/tank/hetsec_prereqs/build-winbind.sh
.
.
=== SAMBA for SANFS install step complete. ===
=== SAMBA for SANFS ready for system installation ===

Configure MDS to allow queries to Active Directory


We have to configure the MDS so it can query the Active Directory domain controller.

Configure Kerberos
The Kerberos configuration file, /etc/krb5.conf, allows the MDS to authenticate to the Active Directory server.
1. Use /usr/tank/hetsec_prereqs/krb5.conf.template as a template, replacing the fields shown in bold in Example 8-14 on page 358 with the values corresponding to your Active Directory server and domain. Our domain is SANFSDOM.NET and the Active Directory server is goku.tucson.ibm.com.
2. Save the edited file as /etc/krb5.conf.

Attention: The krb5.conf file is case-sensitive. Make sure that the case of the updated entries matches the case of the krb5.conf.template file.


Example 8-14 Sample krb5.conf file
/etc # cat krb5.conf
#
# Edit this file to reflect your Windows domain:
#
# 1) Replace YOURDOMAIN.NET with your domain name.
# 2) Replace the DNS addresses with the ones for your server.
# 3) Remove the line at the top of the file containing "EDITED".
#
# THIS FILE MUST BE EDITED!
[libdefaults]
        default_realm = SANFSDOM.NET
        default_etypes = des-cbc-crc des-cbc-md5
        default_etypes_des = des-cbc-crc des-cbc-md5
[realms]
        SANFSDOM.NET = {
                # DNS address for your domain controller or ADS
                kdc = goku.tucson.ibm.com
                kpasswd_server = goku.tucson.ibm.com
        }
[domain_realm]
        yourdomain.net = SANFSDOM.NET
        .yourdomain.net = SANFSDOM.NET

Configure winbind
1. Use /usr/tank/hetsec_prereqs/smb.conf.template as a template, updating it to show your Active Directory domain name, as shown in Example 8-15.
2. Save the edited file as /usr/local/winbind/install/lib/smb.conf.

Tip: We recommend setting the security parameter to security=domain.
Example 8-15 Sample smb.conf file
/usr/local/winbind/install/lib # cat smb.conf
#
# 1) Change the lines containing "YOURDOMAIN" to reflect your
#    Windows domain name.
#
# 2) Uncomment one of the "security" lines.
#    If you are using a Windows NT domain controller, use
#    the "domain" form. If you are using an Active Directory
#    server, use the "ADS" form. If you are uncertain,
#    or you have trouble, try the "domain" form.
#
# 3) Remove the line at the top containing "EDITED".
#
# THIS FILE MUST BE EDITED!
[global]
# Your Windows domain name
workgroup = SANFSDOM
# The Kerberos "realm" for your domain.
# This should be the way it appears in your /etc/krb5.conf file.
realm = SANFSDOM.NET
# Which kind of directory server you have (choose one):
security = domain
# security = ADS
# How to find the Kerberos server (default)
password server = *
# What should winbind use between domain name and user name
# As shown here, users would be listed as YOURDOMAIN+username
winbind separator = +
# How long to cache material from the ADS
winbind cache time = 10
# How to create temporary "proxy" Unix users for Windows users.
# A user/group ID will be assigned to Windows users from
# this range by the winbind server, but SANFS does not use them.
idmap uid = 20000-400000
idmap gid = 20000-400000
template shell = /bin/bash
template homedir = /home/%D/%U

Path definitions
You must set path definitions to use the Heimdal packages. Execute the following command:
# export LD_LIBRARY_PATH=/usr/local/heimdal/install/lib:${LD_LIBRARY_PATH}

We also recommend using the following PATH statements to simplify running upcoming configuration steps with the commands:
# PATH=/usr/local/heimdal/install/bin:$PATH
# PATH=/usr/local/winbind/install/bin:$PATH

Running these commands from the command line will only set the PATH variables for the current session. To ensure that the path variables are set upon a subsequent reboot or logon to the machine, add these statements to the .bashrc file, as shown in Example 8-16.
Example 8-16 Sample .bashrc file
# cat .bashrc
....
PATH=$PATH:/usr/local/heimdal/install/bin:/usr/local/winbind/install/bin
export LD_LIBRARY_PATH=/usr/local/heimdal/install/lib:${LD_LIBRARY_PATH}
...
#
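A quick way to confirm that the path and library settings are in effect for the current session (a sketch):

# type kinit klist net
# echo $LD_LIBRARY_PATH

The commands should be reported from the /usr/local/heimdal and /usr/local/winbind installation directories, and the LD_LIBRARY_PATH output should include /usr/local/heimdal/install/lib.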


Kerberos login test


1. From the MDS, log in to your Active Directory domain controller as a user that is authorized to add (join) machines to the domain (for example, the administrator account) using the kinit command. Replace <id> with an appropriately authorized user ID, and YOURDOMAIN.NET with your Active Directory domain. You must specify the Active Directory domain name using all upper case, regardless of how it is actually defined. Example 8-17 shows the execution in our environment.
#/usr/local/heimdal/install/bin/kinit <id>@YOURDOMAIN.NET

Tips:
- If you have not changed your Administrator password since it was created, you may need to change it to enable the use of encryption methods compatible with Heimdal. If you are unable initially to authenticate with kinit, change your password and try kinit again.
- If you receive the message kinit: krb5_get_init_creds: Clock skew too great, you must update the date and time on your MDS using the date command and then rerun kinit.
- If you set the PATH variable as described in Path definitions on page 359, you do not need to specify the full path name of these commands.
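Kerberos typically tolerates only a few minutes of clock skew, so it is worth synchronizing the MDS clock with the domain controller before retrying kinit. The following is a sketch; it assumes ntpdate is installed on the MDS and that your domain controller (goku.tucson.ibm.com in our environment) answers time queries. Otherwise, set the clock manually with the date command.

# ntpdate goku.tucson.ibm.com
# date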
Example 8-17 Successful kinit output
# kinit administrator@SANFSDOM.NET
administrator@SANFSDOM.NET's Password:
manny: NOTICE: ticket renewable lifetime is 1 week
#

2. Verify that your login was successful with the klist command, as shown in Example 8-18.
Example 8-18 Login verification output using klist -v
# klist -v
Credentials cache: FILE:/tmp/krb5cc_0
        Principal: administrator@SANFSDOM.NET
        Cache version: 4

Server: krbtgt/SANFSDOM.NET@SANFSDOM.NET
Ticket etype: des-cbc-crc
Auth time:  Oct 12 16:13:31 2004
End time:   Oct 13 02:09:10 2004
Renew till: Oct 19 16:13:31 2004
Ticket flags: renewable, initial, pre-authenticated
Addresses: IPv4:9.11.209.148
manny:~ #

Add MDS to the Active Directory Domain


This step creates an account that allows the MDS to make queries to the Active Directory service. The password for this account is stored in the file secrets.tdb within Samba, and the winbind service automatically maintains it. The account and password are created when the MDS is joined to the Active Directory domain, using the following command (Example 8-19 shows the output in our environment):
# /usr/local/winbind/install/bin/net ads join


Example 8-19 MDS joins the Active Directory domain
# net ads join
Using short domain name -- SANFSDOM
Joined 'MANNY' to realm 'SANFSDOM.NET'
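You can re-check the join at any time with the net utility built earlier; a sketch (output wording may vary by Samba level):

# /usr/local/winbind/install/bin/net ads testjoin

A response indicating that the join is OK confirms that the MDS account in Active Directory is still valid.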

Winbind installation and startup


Next, we need to install the winbind library in the /lib directory, and set the winbind service to start at boot time, by copying a script to the /etc/init.d directory. The install-winbind.sh script in /usr/tank/hetsec_prereqs performs these tasks automatically. Example 8-20 shows sample output.

Attention: If you are prompted to overwrite an existing file in the /lib directory, enter Yes.
Example 8-20 Install Winbind output
# bash /usr/tank/hetsec_prereqs/install-winbind.sh
The SANFS version of libnss_winbind.so must be installed in /lib.
This is the version to be installed:
-rwxr-xr-x  1 root root 20238 Oct 12 17:20 /usr/local/winbind/install/lib/libnss_winbind.so
There is a version already installed in /lib:
-rwxr-xr-x  1 root root 15549 Oct 27  2003 /lib/libnss_winbind.so.2
A copy of the original will be saved:
-rwxr-xr-x  1 root root 15549 Oct 27  2003 /lib/libnss_winbind.original
You will be asked whether you want to replace this file.
cp: overwrite `/lib/libnss_winbind.so.2'? y
#

Once this script has completed, the winbind service will start automatically upon reboot; however, to start the service immediately, run the command shown in Example 8-21.
Example 8-21 Starting Winbind
# /etc/init.d/sanfswinbind start
Starting WINBIND                                                     done
#
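Once winbind is running, the wbinfo utility (also built as part of the Samba package) gives a quick health check; a sketch:

# /usr/local/winbind/install/bin/wbinfo -t
# /usr/local/winbind/install/bin/wbinfo -u

The -t option verifies the trust secret with the domain controller, and -u lists the Windows users visible through winbind, for example SANFSDOM+sanfsuser.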


MDS configuration to allow queries to LDAP or NIS


The following steps configure the MDS to query either the LDAP or NIS server for UNIX User IDs.

Configure NSS Switch


The /etc/nsswitch.conf file must be edited to allow the MDS to recognize and access the LDAP or NIS server used for storing the UNIX user ID information.
1. Edit the file /etc/nsswitch.conf, as in Example 8-22, changing the entries:
passwd: compat
group:  compat

to:
passwd: compat nis ldap winbind
group:  compat nis ldap winbind

Example 8-22 Sample nsswitch.conf file
/etc # cat nsswitch.conf
#
# /etc/nsswitch.conf
#
# An example Name Service Switch config file. This file should be
# sorted with the most-used services at the beginning.
#
# The entry '[NOTFOUND=return]' means that the search for an
# entry should stop if the search in the previous entry turned
# up nothing. Note that if the search failed due to some other reason
# (like no NIS server responding) then the search continues with the
# next entry.
#
# Legal entries are:
#
#       compat                  Use Libc5 compatibility setup
#       nisplus                 Use NIS+ (NIS version 3)
#       nis                     Use NIS (NIS version 2), also called YP
#       dns                     Use DNS (Domain Name Service) for IPv4 only
#       dns6                    Use DNS for IPv4 and IPv6
#       files                   Use the local files
#       db                      Use the /var/db databases
#       [NOTFOUND=return]       Stop searching if not found so far
#
# For more information, please read the nsswitch.conf.5 manual page.
#
# passwd: files nis
# shadow: files nis
# group:  files nis

passwd:         compat nis ldap winbind
group:          compat nis ldap winbind

hosts:          files dns
networks:       files dns

services:       files
protocols:      files
rpc:            files
ethers:         files
netmasks:       files
netgroup:       files
publickey:      files

bootparams:     files
automount:      files nis
aliases:        files

Restart the nameservice cache daemon


Restart the nameservice cache daemon to pick up the changes just made in /etc/nsswitch.conf, as shown in Example 8-23.
Example 8-23 Restart nameservice cache daemon
# /etc/init.d/nscd restart
Shutting down Name Service Cache Daemon                              done
Starting Name Service Cache Daemon                                   done
#

Add MDS to LDAP domain


Attention: These steps need to be followed only if an LDAP domain has been configured to store the effective user IDs. They are not required if you are using NIS.

The MDS has to be linked with the LDAP directory service in order to make directory service queries for user information. For an unsecured LDAP configuration, edit the BASE and URI entries in the file /etc/openldap/ldap.conf to reflect your environment. Our LDAP server is enoch and our BASE suffix is SANFSBase, as shown in Example 8-8 on page 353. Example 8-24 shows the ldap.conf file for our unsecured LDAP configuration.
Example 8-24 Sample /etc/openldap/ldap.conf file
/etc/openldap # cat ldap.conf
# $OpenLDAP: pkg/ldap/libraries/libldap/ldap.conf,v 1.9 2000/09/04 19:57:01 kurt Exp $
#
# LDAP Defaults
#
# See ldap.conf(5) for details
# This file should be world readable but not world writable.

#BASE   dc=example, dc=com
#URI    ldap://ldap.example.com ldap://ldap-master.example.com:666

BASE    ou=SANFSdom,o=SANFSBase
URI     ldap://enoch.tucson.ibm.com/

#SIZELIMIT      12
#TIMELIMIT      15
#DEREF          never
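To confirm that the MDS can actually query the LDAP server with these settings, an ldapsearch against the configured base is a quick test; a sketch, assuming the OpenLDAP client tools are installed on the MDS:

# ldapsearch -x -H ldap://enoch.tucson.ibm.com -b "ou=SANFSdom,o=SANFSBase" "(uid=sanfsuser)"

The entry for sanfsuser should be returned.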

For a secured LDAP configuration, set the BASE and URI entries as for an unsecured LDAP configuration. Also:
1. Ensure the ldap.cert file is in the /etc/openldap directory.


2. Specify the following additional line after the URI entry in the /etc/openldap/ldap.conf file:
TLS_CACERT /etc/openldap/ldap.cert

Note: If your LDAP server is an AIX SecureWay or IBM Directory Server LDAP server that was initiated using AIX's mksecldap command, or if it is being used on AIX 5L V5.1 or V5.2, please edit the settings in /usr/share/doc/packages/nss_ldap/ldap.conf in order to correctly map the attributes. The guidelines mentioned above still apply. If in doubt, edit both ldap.conf files with the same information.

Add MDS to NIS Domain


Attention: These steps need to be followed only if an NIS domain has been configured to store the effective UNIX user IDs. Do not perform them if you are using LDAP for the UNIX directory server.

The MDS has to be linked with the NIS directory service in order to make queries to NIS for UNIX user information.
1. Edit the /etc/yp.conf file to contain your actual NIS domain name and server name, as shown in Example 8-25. Create the file if it does not exist.
Example 8-25 Sample yp.conf file
/etc # cat yp.conf
domain sanfsdom server fvt2-drv1.bvnssg.net
ypserver fvt2-drv1.bvnssg.net

2. Set the domain name in the /etc/defaultdomain file, as shown in Example 8-26.
Example 8-26 Sample /etc/defaultdomain file
/etc # cat defaultdomain
sanfsdom

3. Set the NIS domain name to allow the MDS to immediately recognize and access the NIS Directory Service, by running the following command:
# domainname sanfsdom

4. Run the following commands to allow the MDS to link to the NIS Domain upon reboot and to start it immediately:
# chkconfig ypbind on
# /etc/init.d/ypbind start
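A quick check that the MDS is now bound to the NIS domain and can read the user maps (a sketch):

# ypwhich
# ypcat passwd | grep sanfsuser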

Verify MDS can access Active Directory and LDAP/NIS domains


Your MDS is now configured for advanced heterogeneous file sharing. To ensure that the MDS can properly access the Active Directory and LDAP or NIS domains, run the getent passwd command. There will be many entries, but if your MDS is correctly accessing the Directory Servers, you will see entries for the users created on Active Directory and LDAP/NIS, as shown in Example 8-27 on page 365. In our example, the entry:
sanfsuser:x:6000:1000:sanfsuser:/tmp

shows the UNIX user from LDAP, and the entry:


SANFSDOM+sanfsuser:x:20003:20000:sanfsuser:/home/SANFSDOM/sanfsuser:/bin/bash


shows our Windows user. The Windows users will be prefixed with the Active Directory domain, while the UNIX users that are being served from LDAP will appear just as normal UNIX IDs in this output.
Example 8-27 Output of getent passwd
# getent passwd
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/bin/bash
.
.
.
sanfsuser:x:6000:1000:sanfsuser:/tmp:
SANFSDOM+Administrator:x:20000:20000::/home/SANFSDOM/Administrator:/bin/bash
SANFSDOM+Guest:x:20001:20000::/home/SANFSDOM/Guest:/bin/bash
SANFSDOM+krbtgt:x:20002:20000::/home/SANFSDOM/krbtgt:/bin/bash
SANFSDOM+sanfsuser:x:20003:20000:sanfsuser:/home/SANFSDOM/sanfsuser:/bin/bash
#
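The id command gives a similar per-user check, and is a convenient way to confirm that both directories are reachable before you define the SAN File System user maps; a sketch using our user names:

# id sanfsuser
# id SANFSDOM+sanfsuser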

8.3.6 Implementation of advanced heterogeneous file sharing


This section will show the implementation and examples of heterogeneous file sharing. We continue with our very simple environment showing a single UNIX user in LDAP, sanfsuser, and a single Windows user in the Active Directory domain, also called sanfsuser. We will map these users so that they will be considered equivalent for the purpose of determining permissions when sharing files and directories cross-platform. These commands should be run on the master MDS.

Create domains and user maps in SAN File System


Use the steps below to create domains and user maps from within SAN File System.
1. Make sure you have successfully verified access to Active Directory and LDAP/NIS, as shown in the section Verify MDS can access Active Directory and LDAP/NIS domains on page 364.
2. Now we need to create the two domains necessary for mapping users on the MDS. We will create one domain called Windows for our Active Directory users and another domain called Unix for our OpenLDAP users. Use the mkdomain command, as shown in Example 8-28 on page 366. This command must be run on the master MDS. You can select your own domain names, but the type parameter must be set to win_ad, unix_ldap, or unix_nis, as appropriate. In our example, we have one domain of type win_ad and one domain of type unix_ldap. You always require a domain of type win_ad, and your other domain must be one of the UNIX types.

Restriction: The current version of SAN File System supports only one domain of each type - Windows and UNIX.


Example 8-28 Create the user domains
# sfscli mkdomain -type win_ad Windows
CMMNP5469I Domain Windows was created successfully.
# sfscli mkdomain -type unix_ldap Unix
CMMNP5469I Domain Unix was created successfully.
# sfscli lsdomain
Name    Type
==================
Unix    UNIX LDAP
Windows Windows AD

3. Now we can map the users from the different domains to each other. In our case, we are mapping the user sanfsuser from the Active Directory domain to the user sanfsuser from the LDAP domain using the mkusermap command, as shown in Example 8-29. The src and tgt parameters are in the form user@domain, where user is an existing user ID, and domain is the appropriate domain name created in the previous step.

Tip: In a typical environment, you will have many user mappings to create. You can automate this by scripting the mkusermap commands; a minimal sketch follows Example 8-30. You would need to extract the users to map from Active Directory and from LDAP or NIS. You might choose an organizational standard for mapping IDs, for example, that the UNIX user IDs have the same name as the Windows IDs, as we have shown here in our simple example.
Example 8-29 Create user map
# sfscli mkusermap -src SANFSDOM+sanfsuser@Windows -tgt sanfsuser@Unix
Are you sure that you want to create the user map? [y/n]:y
CMMNP5490I The user mapping for SANFSDOM+sanfsuser@Windows with sanfsuser@Unix was created successfully.

4. Repeat the mkusermap command to map all appropriate users. When the user mapping is done, you can display the current mapping with the lsusermap command, as shown in Example 8-30.
Example 8-30 Display user map
# sfscli lsusermap
Unix      Windows
===================
sanfsuser SANFSDOM+sanfsuser
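As mentioned in the tip for step 3, large numbers of mappings are easier to create with a small script. The following is a minimal sketch under our assumptions: a file called userlist.txt (our name) contains one pair of Windows and UNIX user names per line, the domains are named Windows and Unix as above, and the confirmation prompt is assumed to accept a y piped from standard input:

#!/bin/bash
# Create a SAN File System user map for each "windowsuser unixuser" pair in userlist.txt
while read WINUSER UNIXUSER; do
    echo y | sfscli mkusermap -src "SANFSDOM+${WINUSER}@Windows" -tgt "${UNIXUSER}@Unix"
done < userlist.txt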

Now that we have successfully mapped our users, we are ready to show an example of how advanced heterogeneous file sharing will work.

Advanced heterogeneous file sharing on SAN File System in action


In this section, we will show examples of how advanced heterogeneous file sharing works with SAN File System. In our examples below, we created a fileset called svcfileset6 and modified the permissions of this fileset so that only administrator and sanfsuser have full permissions of the fileset. We will show this in more detail below.

UNIX to Windows
1. Our AIX SAN File System client, agent47, has been made a privileged client, as shown in Example 8-31 on page 367. The AIX client is also configured to authenticate users with


our LDAP server, as we showed in Configure UNIX clients to use LDAP user IDs on page 354.
Example 8-31 List of clients, showing the privileged clients
emily:~ # sfscli lsclient
Client  Session ID State   Server Renewals Privilege
====================================================
jacob   1          Current emily  351      Root
agent47 4          Current emily  47609    Root
jacob   2          Current manny  350      Root
agent47 5          Current manny  47666    Root

2. To set up, we log into our AIX client, agent47, as root and take ownership (including set permissions) of the fileset svcfileset6, as shown in Example 8-32. See Take ownership of fileset: UNIX on page 300 for more information about the take ownership operation. We will make the UNIX user ID sanfsuser the owner of the fileset and give full permissions for that fileset to sanfsuser and other members of the group sanfsgroup. Intrinsically, root also has full permissions. Everyone else will have read/execute permissions only. Currently, the only member of the group sanfsgroup is sanfsuser. The fileset svcfileset6 is attached at the directory /mnt/sanfs/sanfs/svcfileset6.

Note: UNIX only displays the first eight characters of user and group IDs in directory listings. This is why they display in our output as sanfsuse and sanfsgro, respectively.
Example 8-32 Show fileset permission and ownership change
# whoami
root
# lsuser sanfsuser
sanfsuser id=6000 pgrp=sanfsgroup groups=sanfsgroup home=/tmp login=true su=true
rlogin=true daemon=true admin=false sugroups=ALL admgroups= tpath=nosak ttys=ALL
expires=0 auth1=SYSTEM auth2=NONE umask=22 registry=LDAP SYSTEM=compat
logintimes= loginretries=0 pwdwarntime=0 account_locked=false minage=0 maxage=0
maxexpired=-1 minalpha=0 minother=0 mindiff=0 maxrepeats=8 minlen=0 histexpire=0
histsize=0 pwdchecks= dictionlist= fsize=2097151 cpu=-1 data=262144 stack=65536
core=2097151 rss=65536 nofiles=2000 roles=
# lsgroup sanfsgroup
sanfsgroup id=1000 users=1,sanfsuser registry=LDAP
# cd /mnt/sanfs/sanfs
# chown sanfsuser:sanfsgroup svcfileset6
# chmod 775 svcfileset6
# ls -ld svcfileset6
drwxrwxr-x   4 sanfsuse sanfsgro       144 Oct 04 16:29 svcfileset6


3. Now, still at the AIX client, agent47, we will create a file in the fileset using our LDAP user, sanfsuser. We su to sanfsuser and change to the directory svcfileset6. We create a new file called unixfile.txt, as shown in Example 8-33. Note the default permissions on this file: only sanfsuser has write permission.
Example 8-33 Show example of file creation with sanfsuser
# su - sanfsuser
$ cd svcfileset6
$ vi unixfile.txt
i created this file
with sanfsuser on
unix sfs client with
chown sanfsuser:sanfsgroup svcfileset6
and
chmod 775 svcfileset6
~
"unixfile.txt" 6 lines, 125 characters
# cat unixfile.txt
i created this file
with sanfsuser on
unix sfs client with
chown sanfsuser:sanfsgroup svcfileset6
and
chmod 775 svcfileset6
$ ls -l
total 17
-rw-r--r--   1 sanfsuse sanfsgro         6 Oct 04 16:14 junk.txt
d---------   2 1000000  1000000         48 Oct 04 16:01 lost+found
-rw-r--r--   1 sanfsuse sanfsgro       125 Oct 04 16:46 unixfile.txt

4. We will now test advanced heterogeneous file sharing by attempting to open and edit the same file as the Active Directory sanfsuser from a Windows SAN File System client. Since we mapped this user to the ID sanfsuser in UNIX (in Create domains and user maps in SAN File System on page 365), it should have the same file permissions.
5. First, we log onto the SANFSDOM domain at the Windows SAN File System client jacob, using the user ID sanfsuser (Figure 8-14).

Figure 8-14 Log on as sanfsuser


6. We can explore the svcfileset6 directory and see the file just created - unixfile.txt, as shown in Figure 8-15. Since our Windows User ID is mapped to sanfsuser on UNIX, we have read and write permission, as shown in Figure 8-16. The permission boxes are grayed out, because the Windows client cannot change file security attributes, including permissions. File permissions and security attributes can only be altered by clients of the same type as the file creator, that is, only other UNIX-based clients can change the permissions of a file created on a UNIX-based system.

Figure 8-15 Contents of svcfileset6

Figure 8-16 unixfile.txt permissions


7. Because of the user mapping done at the MDS, we should be able to update and save this file. In Figure 8-17, we add some more lines of text to unixfile.txt and save it over the original file, because the mapped file permissions give us full read and write access.

Figure 8-17 Edit the file in Windows as sanfsuser and save it

8. After saving the file, we return to the AIX client, agent47, and verify the updated content, as shown in Example 8-34.
Example 8-34 Show unixfile.txt file on AIX SAN File System client
# cd /mnt/sanfs/sanfs
# su sanfsuser
$ cd svcfileset6
$ ls
junk.txt      lost+found    unixfile.txt
$ cat unixfile.txt
i created this file
with sanfsuser on
unix sfs client with
chown sanfsuser:sanfsgroup svcfileset6
and
chmod 775 svcfileset6

Now i logged into the Windows
SAN File System client with sanfuser
and I opened the file for editing.
It should save since I am sanfsuser.

Windows to UNIX
1. Now we will show the mapping in reverse. We will create a file from the Windows SAN File System client as sanfsuser in the Active Directory domain, and verify read/write access to it from the AIX client, as the LDAP sanfsuser user ID. In Figure 8-18 on page 371 and Figure 8-19 on page 371, we have created the file winfile.txt.


Figure 8-18 Create the file on the Windows client as sanfsuser

Figure 8-19 Show file contents in Windows as sanfsuser

2. Now we will open and try to edit the file on the AIX client as sanfsuser. We can do this, since sanfsuser on UNIX has been mapped to sanfsuser on Windows (see Example 8-35). Note that the owner of winfile.txt is displayed as sanfsuser (we display the mapped UNIX user). The group is not translated, since groups are not directly mapped by SAN File System, and therefore displays as the default 999999. Group membership is checked, however.
Example 8-35 Show attempt to edit file with AIX client as sanfsuser
# cd /mnt/sanfs/sanfs
# su sanfsuser
$ cd svcfileset6
$ ls -l
total 26
-rw-r--r--   1 sanfsuse sanfsgro         6 Oct 04 16:14 junk.txt
d---------   2 1000000  1000000         48 Oct 04 16:01 lost+found
-rw-r--r--   1 sanfsuse sanfsgro       269 Oct 04 17:10 unixfile.txt
-rwx--xr-x   1 sanfsuse 999999          71 Oct 06 14:11 winfile.txt
$ vi winfile.txt
I created this^M
file on my ^M
Windows client ^M
logged on as ^M
sanfsuser.
Now I am editing
the file on a AIX client
machine logged on as
sanfsuser. Saving the file
should be successfull.
~
"winfile.txt" 12 lines, 187 characters
# cat winfile.txt
I created this
file on my
Windows client
logged on as
sanfsuser.
Now I am editing
the file on a AIX client
machine logged on as
sanfsuser. Saving the file
should be successfull.
$ ls -l
total 26
-rw-r--r--   1 sanfsuse sanfsgro         6 Oct 04 16:14 junk.txt
d---------   2 1000000  1000000         48 Oct 04 16:01 lost+found
-rw-r--r--   1 sanfsuse sanfsgro       269 Oct 04 17:10 unixfile.txt
-rwx--xr-x   1 sanfsuse 999999         187 Oct 06 14:16 winfile.txt

3. We have now shown the operation of the user mapping in SAN File System in both directions.

Accessing files with non-mapped user IDs


What happens if you try to access a file created on one platform (for example, Windows) using an ID on the other platform (UNIX) where that ID is not mapped? In this case, the behavior is as we have already described in 8.2, Basic heterogeneous file sharing on page 340, that is, the Everyone/Other permissions apply. To show this, on the AIX client, we su to a non-mapped user, jacuna. Since no mapping for this user has been created on the MDS, the default behavior is to use the permissions granted to Other or Everyone. The file winfile.txt has only read and execute permissions for Everyone, as shown in Figure 8-20 on page 373. The permission boxes are grayed out, because it is not possible to change security attributes for files created on another OS platform. Since there are no write permissions on the file, the user jacuna cannot update this file, as shown in Example 8-36 on page 373. Its contents remain the same as in Example 8-35 on page 371.


Figure 8-20 winfile.txt permissions from Windows

Example 8-36 Show how editing the file with user jacuna will fail
# su - jacuna
$ cd svcfileset6
$ echo "Attempt to update by a non-mapped user" >> winfile.txt
The file access permissions do not allow the specified action.
ksh: winfile.txt: 0403-005 Cannot create the specified file.
$ cat winfile.txt
I created this
file on my
Windows client
logged on as
sanfsuser.
Now I am editing
the file on a AIX client
machine logged on as
sanfsuser. Saving the file
should be successfull.


Chapter 9. Advanced operations

In this chapter, we cover the following topics:
- FlashCopy operations
- Data migration: planning and implementing
- Adding and removing an MDS from the cluster
- Monitoring and gathering performance statistics
- MDS failover
- Validating non-uniform SAN File System configurations


9.1 SAN File System FlashCopy


The IBM TotalStorage SAN File System has a FlashCopy function, which creates a point-in-time copy, or image, of a fileset. The created image is a read-only, space-efficient image of the contents of a SAN File System fileset at the time it was taken. You can use backup applications or utilities on SAN File System clients to back up the contents of FlashCopy images, rather than the actual fileset. Doing this avoids any issues with open files that might cause problems when backing up live data. FlashCopy images are file-based; therefore, the SAN File System clients can see all the files and directories in the image. This means they can use the image for quick restore of parts of the fileset, if required, by simply copying the required files and folders back to the actual fileset. Finally, the entire fileset can be quickly reverted from a FlashCopy image by the SAN File System administrator.

9.1.1 How FlashCopy works


In this section, we discuss the underlying operation of the FlashCopy function.

Space for FlashCopy images


FlashCopy images consume space on the same volumes as the original fileset. Because FlashCopy uses a space-efficient method to make the image, it is not possible to predict the amount of space that FlashCopy images will use. In the worst case (all blocks in the fileset change), the image will take up the same amount of space currently occupied by the non-FlashCopy objects within the fileset. In the best case (if nothing in the fileset changes), the FlashCopy images take up virtually no space (just pointers to the real fileset data). It is not possible to determine how much space is being occupied by a particular FlashCopy image at any particular time. Therefore, when planning space requirements, you need to include space for FlashCopy images. Obviously, the more images you make and maintain (up to 32 are possible at any one time), the more space needs to be available. You should carefully monitor the User Pool space threshold. Be aware that the FlashCopy images also occupy space in the containing fileset and count towards its quota.
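The space used by FlashCopy images therefore shows up simply as additional usage of the user storage pools and of the fileset quota. As a sketch of the kind of monitoring we have in mind (we assume the sfscli lspool and lsfileset listing commands on the MDS; check the command reference for the exact option names in your release):

sfscli> lspool -l
sfscli> lsfileset -l

Watch the used-space and quota columns as images accumulate.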

Copy on write
Immediately after the FlashCopy operation, the original fileset files (Source Data) and the FlashCopy images (Copy Data) of the files in the fileset share the same data blocks, that is, nothing is actually copied, making the operation space efficient, as shown in Figure 9-1 on page 377.


Figure 9-1 Make FlashCopy (the FlashCopy image is a set of pointers back to the original fileset data)

As soon as any updates are made to the actual fileset contents (for example, a client adds or deletes files, or updates contents of files), the fileset is updated by an operation called copy on write. This means that (only) the changed blocks in the fileset are written to a new location on disk. The FlashCopy image continues to point to the old blocks, while the actual fileset will be updated over time to point to the new blocks (see Figure 9-2).

Figure 9-2 Copy on write (modified data is written to a new location, representing the current fileset state; the FlashCopy pointers still point to the original data)


In this case, two blocks were changed (S and E), one block was deleted (T), and a new block was written (P) in the actual fileset. The new blocks are written as shown, and the FlashCopy image continues to point to the original blocks, preserving the point-in-time copy. Therefore, any access to the FlashCopy image accesses the data blocks as they existed when the FlashCopy image was created, and any access to the fileset itself accesses the new data blocks.

9.1.2 Creating, managing, and using the FlashCopy images


FlashCopy images can be created, managed, and used with both the CLI and the GUI.

Note: This is a SAN File System MDS function, that is, these operations are initiated from the MDS, not the SAN File System clients.

In this section, we present the basic FlashCopy operations:
- Create FlashCopy images
- List FlashCopy images
- Restore / revert FlashCopy images
- Delete FlashCopy images

General characteristics of FlashCopy functions


These are some basic considerations regarding FlashCopy:
- FlashCopy images for each fileset are stored in a special subdirectory called .flashcopy under the fileset's attachment point. The .flashcopy directory is a hidden directory, so by default, it will not appear in Windows Explorer on a SAN File System client. Figure 9-3 on page 379 shows the .flashcopy directories of several filesets, as viewed from a Windows client that has been enabled to show hidden files and folders.


Figure 9-3 The .flashcopy directory view

- A FlashCopy image is simply an image of an entire fileset as it exists at a specific point in time.
- While a FlashCopy image is being created, all data remains online and available to users and applications.
- The FlashCopy image operation is performed individually for each fileset, that is, you can create only one FlashCopy image at a time.
- FlashCopy images are full images; you cannot create incremental FlashCopy images.
- A fileset can have up to 32 read-only FlashCopy images.
- Once a FlashCopy image is created, its name cannot be changed.
- You can use a FlashCopy image for backing up files instead of the original Source Data. This will guarantee a consistent image of the files, since the files in a FlashCopy image are read-only. A backup sketch is shown below.
- Clients have file-level access to FlashCopy images, to access older versions of files, or to copy individual files back to the real fileset, if required.
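For example, a client-side backup taken from an image rather than from the live fileset could look like the following sketch (the attachment point and image name are from our lab; any backup tool can be pointed at the same path):

# cd /mnt/sanfs/sanfs/svcfileset6/.flashcopy/Image1
# tar -cvf /backup/svcfileset6-Image1.tar .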


Create FlashCopy image with CLI


Use the mkimage command to make a FlashCopy of a fileset. Specify the name of the fileset to make the image of, a directory name to store it in (which will appear as a subdirectory in the .flashcopy directory of the fileset), and a name for the image.

Tip: If you are making FlashCopy images of several filesets, you can use the image name parameter to indicate that these images were made at the same time, for example, by using the current date and time in the directory and image name. This will make it easier to identify these images later as belonging together. A scripted sketch of this approach follows Example 9-1.

You cannot modify the name, description, or directory of an existing FlashCopy image. While the mkimage command is in progress, you can view files in the fileset but you cannot modify them. Also, you cannot make a new image of a fileset while a previous image is being reverted (see Revert / restore FlashCopy images with CLI on page 384) on the same fileset.

Note: A FlashCopy image is made of a single fileset; it does not include nested filesets. If you have nested filesets, you need to create FlashCopy images for each of them separately.

Example 9-1 shows creating several FlashCopy images for the fileset asad. After creating the images, we used the lsimage command to display them.
Example 9-1 Create FlashCopy image using mkimage command
sfscli> mkimage -fileset asad -dir Image4 Image4
CMMNP5168I FlashCopy image image4 on fileset asad was created successfully.
sfscli> mkimage -fileset asad -dir Image1 Image1
CMMNP5168I FlashCopy image Image1 on fileset asad was created successfully.
sfscli> mkimage -fileset asad -dir Image2 Image2
CMMNP5168I FlashCopy image Image2 on fileset asad was created successfully.
sfscli>
sfscli> lsimage
Name   Fileset Directory Name Date
=====================================================
Image4 asad    Asad1          May 17, 2004 1:12:28 AM
Image1 asad    Image1         May 17, 2004 1:18:28 AM
Image2 asad    Image2         May 17, 2004 1:18:52 AM
sfscli>
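If you take images of several filesets at the same time, as suggested in the tip above, a small script saves typing. This is a minimal sketch under our assumptions (the fileset names are from our lab; adjust the list to your environment):

#!/bin/bash
# Create a same-named FlashCopy image of each listed fileset, using the current
# date and time so that related images are easy to identify later.
STAMP=$(date +%Y%m%d-%H%M)
for FS in asad svcfileset6; do
    sfscli mkimage -fileset "$FS" -dir "img-$STAMP" "img-$STAMP"
done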

Creating a FlashCopy image with the GUI


For the GUI, select Manage Copies → FlashCopy Images and select Create FlashCopy Images from the pull-down menu, as shown in Figure 9-4 on page 381. Then click Go.


Figure 9-4 Create FlashCopy image GUI

The next window (Figure 9-5) starts the 3-step wizard: Select Filesets, Set Properties, and Verify Setup.

Figure 9-5 Create FlashCopy wizard


Click Next to start the wizard. On the next window (Figure 9-6), select the fileset to make the image of. We chose fileset asad.

Figure 9-6 Fileset selection

Now specify the properties of the image: the image name, image directory, and description. Figure 9-7 shows the default settings.

Figure 9-7 Set Flashcopy image properties

Tip: A maximum of 32 FlashCopy images can be maintained for any fileset. If 32 images already exist and you try to create the 33rd image, the operation will fail unless you check the Force Image Creation box (or specify the -f flag on the mkimage command). In that case, the oldest image will be deleted to make room for the new image when it is created.

Finally, verify the properties, as shown in Figure 9-8 on page 383, and click Next.


Figure 9-8 Verify FlashCopy image properties

This completes the process, and the new image (Image-115 of asad) is created, as shown in Figure 9-9.

Figure 9-9 FlashCopy image created


Listing FlashCopy images with CLI


The lsimage command lists the names of the image files, the filesets they belong to, the Directory name, and time/date of creation, as shown in Example 9-2. The most-recently created FlashCopy images are listed first.
Example 9-2 lsimage command output
sfscli> lsimage
Name      Fileset Directory Name Date
=========================================================
Image-12  asad    Image-12       May 13, 2004 12:25:09 AM
Image-13  asad    Image-13       May 13, 2004 12:26:02 AM
Image-14  asad    Image-14       May 13, 2004 12:26:21 AM
Image-114 asad    Image-114      May 13, 2004 12:26:52 AM
Image-115 asad    Image-115      May 13, 2004 12:54:42 AM
sfscli>

Use the -l option to include the image description.

Listing FlashCopy images with GUI


Select Maintain Copies → FlashCopy Images, as shown in Figure 9-10.

Figure 9-10 List of FlashCopy images using GUI

Revert / restore FlashCopy images with CLI


The reverttoimage command reverts or restores the current fileset to a specified FlashCopy image, replacing it completely with the point-in-time copy. When you revert a fileset to a specified FlashCopy image, the reverted FlashCopy image and all subsequent FlashCopy images of that fileset are deleted. The target FlashCopy image becomes the primary image for the fileset and no longer appears as an image listed in the .flashcopy directory.


Tip: Because the specified FlashCopy image is deleted after you issue the reverttoimage command, it is recommended that you keep a secondary backup of the image before using the command for future use or disaster recovery.

Attention: If nested filesets exist within a fileset that you want to revert, you must manually detach all nested filesets before running the reverttoimage command. After the FlashCopy image of the parent fileset is reverted, reattach the nested filesets.

Depending on the age of the specified FlashCopy image and the amount of unique file data in the image tree, the revert operation could result in significant background activity to clean up the file system objects that are no longer referenced.

In Example 9-3, we revert the image Image-14 for the fileset asad. When we re-issue the lsimage command, we see that the reverted image, Image-14, has automatically been deleted, since its contents are now active in the fileset. Note too that images Image-114 and Image-115 have also been deleted, since they were created after Image-14 and are therefore invalid.
Example 9-3 reverttoimage command
sfscli> lsimage
Name      Fileset Directory Name Date
=========================================================
Image-12  asad    Image-12       May 13, 2004 12:25:09 AM
Image-13  asad    Image-13       May 13, 2004 12:26:02 AM
Image-14  asad    Image-14       May 13, 2004 12:26:21 AM
Image-114 asad    Image-114      May 13, 2004 12:26:52 AM
Image-115 asad    Image-115      May 13, 2004 12:54:42 AM
sfscli>
sfscli> reverttoimage -fileset asad Image-14
Are you sure you want to revert to FlashCopy image Image-14 for fileset asad? [y/n]:y
CMMNP5182I The FlashCopy image Image-14 successfully reverted.
sfscli>
sfscli> lsimage
Name      Fileset Directory Name Date
=========================================================
Image-12  asad    Image-12       May 13, 2004 12:25:09 AM
Image-13  asad    Image-13       May 13, 2004 12:26:02 AM
sfscli>


Note on reverting FlashCopy images: When you revert to a FlashCopy image that is not the most recently made image, any images that were made subsequent to the image being reverted will automatically be deleted as a part of the revert process. This is because of the way the images are maintained by SAN File System in order to keep the overhead to a minimum. Conceptually, you can think of the images as a set of sequential pointers at one fixed point in time. The set of pointers terminates at the active (and therefore changing) data. In simple terms, once you have rolled back to an image, you cannot then roll forward to an intervening image as it will be removed. This is shown in Figure 9-11. You should therefore be careful when reverting to an image because of this restriction. If in doubt, remember you can always copy the data in an older FlashCopy image to a separate directory structure, instead of reverting it to the primary image. If you do that, you will still maintain all the images (within the restriction of maximum of 32 images per fileset).

Figure 9-11 List of FlashCopy images before and after a revert operation (reverting to Image-2 rolls the data back to 9:20am; the intervening Image-3 and Image-4 are deleted)

Revert / restore FlashCopy images with GUI


Select Manage Copies → FlashCopy Images and select Revert from the drop-down menu. Select the image to be restored, for example, Image-115, as shown in Figure 9-12 on page 387.


Figure 9-12 Select image to revert

You will be asked to confirm the revert and the operation will proceed exactly as described in the previous section.

Remove / Delete FlashCopy images using CLI


The rmimage command deletes or removes one or more FlashCopy images for a specific fileset. Depending on the age of the FlashCopy image and the amount of unique file data in the image tree, the delete operation might result in significant background activity to clean up the file system objects that are no longer referenced. Example 9-4 shows the removal of the image Image-13.
Example 9-4 rmimage command
sfscli> lsimage
Name     Fileset Directory Name Date
========================================================
Image-12 asad    Image-12       May 13, 2004 12:25:09 AM
Image-13 asad    Image-13       May 13, 2004 12:26:02 AM
sfscli> rmimage -fileset asad Image-13
Are you sure you want to delete FlashCopy image Image-13 for fileset asad? [y/n]:y
CMMNP5176I FlashCopy image Image-13 for fileset asad successfully deleted.
sfscli> lsimage
Name     Fileset Directory Name Date
========================================================
Image-12 asad    Image-12       May 13, 2004 12:25:09 AM
sfscli>


Remove / Delete FlashCopy images using GUI


Select Manage Copies → FlashCopy Images and select Delete from the drop-down menu. In Figure 9-13, we selected Image-12.

Figure 9-13 Delete Image selection

Verify the action (Figure 9-14). If we had selected the Delete option (equivalent to the -f option on the rmimage command), then any open files in the image would also be deleted. This could cause application errors because of unexpected file removal; therefore, this option should be used with caution.

Figure 9-14 Delete Image verification


Image-12 is now removed from the list of images, as shown in Figure 9-15.

Figure 9-15 Delete image complete

9.2 Data migration


This section gives an overview of migrating data from existing storage to SAN File System. Any large data migration should be thoroughly planned in advance in order to reduce the risk of error and minimize downtime to the organization. The overview steps for data migration are as follows:
- Plan migration:
  - Prerequisites check
  - Create / update policy and rules
  - Time estimation for data migration
- Perform migration
- Verify migration


Figure 9-16 shows an overview of the data flow in the data migration from non-SAN File System to SAN File System. You will need an installed SAN File System client that has access to the source data, as well as, obviously, access to the SAN File System global namespace. As the data is migrated (or copied) to SAN File System, it is split: the data blocks go into User Pools and the metadata is generated and stored in the System Pool.

Figure 9-16 Data migration to SAN File System: data flow

Tools for data migration


For data migration, you can use the standard copy commands or utilities provided by the operating system, for example, cp, cpio, or tar on AIX, and xcopy or Explorer on Windows. You could also use backup applications to restore data from the latest backup into the SAN File System as the destination. A simple copy-based sketch is shown below. For large data migrations, SAN File System provides a data migration utility, migratedata, that has special functions suited for that purpose, including these:
- A plan phase to estimate in advance the time that the migration operation should take.
- A copy phase, where the actual data is copied.
- A verify phase, which checks that the data was successfully migrated.
- Transaction-based logging and checkpoints to provide re-startable data migration.

In the rest of this section, we address migration using the SAN File System migratedata command.
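For a small directory tree, a straightforward copy with standard tools is often enough; a sketch on an AIX client, using paths from our lab (the cpio flags preserve directories, permissions, and modification times):

# cd /home/testdata
# find . -print | cpio -pdmuv /sfs/sanfs/aixfiles/cb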

9.2.1 Planning migration with the migratedata command


Data migration has to be well prepared and planned. It will probably be a lengthy and resource-intensive process, depending on the amount of data being migrated. Applications that use the data for updates must be stopped during the data migration in order to keep the whole set of data exactly consistent. Therefore, planning is very important to minimize the duration of data unavailability and ensure a smooth migration.

Prerequisites check
Here are the basic prerequisites and factors that must be taken into account in general cases:
- Windows and UNIX data must be migrated separately, by the appropriate client.
- The SAN File System cluster, with storage pools, filesets, policies, and security, as well as clients, must be properly configured.


- The client that performs the data migration must be able to access all the source file systems.
- You must have superuser privileges (for UNIX clients) or administrator privileges (for Windows clients) to migrate data. The client must be a privileged client, as described in 7.6.1, Fileset permissions on page 297.
- All applications that modify the data being migrated must be stopped until the migration completes to guarantee the data integrity.
- At least twice the space of the data should be available for migration; this includes the space occupied by the original data (X) and the space occupied by the data once migrated to SAN File System (X), therefore, 2X is required. The data migration utility does not verify that there is enough space in the storage pool where data is being migrated. A quick space check is sketched after this list.
- Files in NTFS compressed drives will be expanded, and sparse files will become dense, or full, during data migration. Sufficient space must be available in the SAN File System to store the expanded files.
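A quick way to compare the source size against the free space the client sees in the global namespace (a sketch; the paths are from our lab):

# du -sm /home/testdata
# df -m /sfs/sanfs

Remember that roughly twice the source size must be available overall while both copies exist.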

Policy for data placement


You have to plan data placement carefully before performing data migration. In your SAN File System installation, you will typically have several storage pools, depending on your applications' requirements and quality of service. According to those requirements, you create the policy rules that place the migrated data in the appropriate storage pools.

Estimate time for data migration


The command migratedata has an estimate function to estimate the time that will be required for the data migration. When executed in the PLAN phase, it gathers information about the available system resources, copies some sample files from the source directory into SAN File System to estimate transfer rates, and reports an estimated time for the migration of the data set. More details are in the following section.

9.2.2 Perform migration


You will use the migratedata command to actually migrate the data.

migratedata utility
The SAN File System data migration utility migratedata executes in three different phases, as specified in a parameter:

plan      Estimates the time that it will take to migrate data with the available resources. Estimation is done by copying sample files from the source directory to the destination.
migrate   Performs the data migration.
verify    Verifies the integrity of the migrated data and metadata (such as owner, permission, and last modified time stamp).

The migratedata command is part of the SAN File System client, and is installed in /usr/tank/migration/bin/migratedata for UNIX, or <SYSTEM_DRIVE>:\Program Files\IBM\Storage Tank\Migration\migratedata.exe for Windows. The command syntax is as follows:
migratedata -log log_file (-f) -phase [ migrate | plan | verify ] -checkpoint blocks -resume -data -destdir dest_dir source_path


Where:
-log log_file       Specifies the log file in which migration activities are logged. When used with -phase migrate -resume, this log file is used for resuming after the last completed block or file.
-f                  Specifies that the migration should continue even if there is an error with a file. If not specified, an error results in the entire migration being stopped at that point.
-phase              Specifies the migration phase to run, selected from:
                    plan     Gathers information about the available system resources (memory, CPUs, size of source tree, and space available on the destination file system), copies some sample files from the source directory to estimate transfer rates, and provides an estimated time for the migration.
                    migrate  Migrates the specified data in the source path to the destination directory. This is the default phase.
                    verify   Verifies the integrity of the migrated data, as well as consistency of the metadata (such as owner, modification time stamp, and permissions).
-checkpoint blocks  Number of blocks of file data migrated at which the checkpoint is written.
-resume             Resumes the migration from the last completed block or file as logged in the log file specified by the -log parameter.
-data               Verifies every block of source data (file data and metadata) with the migrated data.

                    Note: Verifying all data with this option is very time consuming, and can take as long as the migration itself.

-destdir dest_dir   Specifies the name of the destination directory for the migrated data. The destination directory must already exist, with appropriate permissions set.
source_path         Specifies one or more paths of directories or files to migrate.

While migrating, consider the following requirements:
- You can specify more than one phase; for example, to plan, migrate, and verify the data, specify -phase plan -phase migrate -phase verify. Although you can specify the phases in any order, the command always executes them in this order: plan, migrate, and verify. A combined invocation is sketched below.
- This tool does not provide data locking for the data being migrated. You have to stop applications that may modify the data during migration.
- This tool does not verify if there is enough space. You should have at least twice the space of the source data available in the destination (as capacity of fileset quota and storage pool capacity).
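For example, a single invocation that plans, migrates with checkpoints, and then verifies could look like this sketch (log file, checkpoint value, and paths are from our lab):

# /usr/tank/migration/bin/migratedata -log /var/tmp/migrate_all.log \
    -phase plan -phase migrate -phase verify -checkpoint 100 \
    -destdir /sfs/sanfs/aixfiles/cb /home/testdata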


migratedata execution examples


Now we will show how the migratedata command works.

-phase plan
Example 9-5 shows migratedata -phase plan on an AIX system. The final line of output gives the estimated time to migrate the data. In this example, we will migrate data from the /home/testdata directory into the SAN File System at the location specified by -destdir. The plan phase works by copying some of the data into a temporary directory and calculating the I/O rate.
Example 9-5 migratedata -phase plan
# /usr/tank/migration/bin/migratedata -log /var/tmp/migrate_plan.log -phase plan -destdir /sfs/sanfs/aixfiles/cb /home/testdata
PLAN: Source directory: /home/testdata
PLAN: Number of filesystem objects to migrate: 410
PLAN: Destination directory: /sfs/sanfs/aixfiles/cb/_tmp8867_
PLAN: On destination space required: 651.093750 MB, available: 339776 MB
PLAN: Number of CPUs: 4, Available Memory: 1455 MB, IO Blocksize: 3 MB
PLAN: Copy rate 5.264906 MB/sec, Estimated time: 0h:2m:3s
#

-phase migrate
Example 9-6 shows the same command with -phase migrate. Now the data will be physically copied. A checkpoint will be taken after every 100 file blocks are written to allow the command to be re-started from the last checkpoint if the original command fails or is interrupted. Notice that the actual data rate is close to the estimated rate; however, we are only copying a small amount of data.
Example 9-6 migratedata -phase migrate
# /usr/tank/migration/bin/migratedata -log /var/tmp/migrate_do.log -phase migrate -checkpoint 100 -destdir /sfs/sanfs/aixfiles/cb /home/testdata
#PLAN: Source directory: /usr/local
PLAN: Source directory: /home/testdata
PLAN: Number of filesystem objects to migrate: 410
PLAN: Destination directory: /sfs/sanfs/aixfiles/cb
PLAN: On destination space required: 651.093750 MB, available: 339648 MB
MIGRATE: Number of CPUs: 4, Available Memory: 1453 MB, IO Blocksize: 3 MB
MIGRATE: COPY STARTED
MIGRATE: Copy rate 6.174036 MB/sec, Estimated time: 0h:1m:45s
MIGRATE: COPY COMPLETE: 648.280576 MB copied at 5.857630 MB/sec
# ls -l /sfs/sanfs/aixfiles/cb
0 drwxr-xr-x   3 root     system        72 Jun 05 11:32 _tmp21595_/
0 drwxr-xr-x   3 root     system        72 Jun 05 11:27 testdata/
#


-phase verify
Example 9-7 shows the migratedata -phase verify execution. We give it the same log file as we specified in the migrate phase.
Example 9-7 migratedata -phase verify
# /usr/tank/migration/bin/migratedata -log /var/tmp/migrate_do.log -phase verify -destdir /sfs/sanfs/aixfiles/cb /home/testdata
#PLAN: Source directory: /home/testdata
PLAN: Destination directory: /sfs/sanfs/aixfiles/cb /home/testdata
VERIFY: Comparing files started.
VERIFY: SUCCEEDED: Comparing files completed with 0 errors and 0 resets
#

Note: You have to specify the same log file that was produced during the migrate phase (/var/tmp/migrate_do.log, in this case). The log file produced by the migration and verification phases looks like Example 9-8. In the migration phase, each object is logged with a time stamp, its attribute flags, and the result.
Example 9-8 migratedata log
SAN FILE SYSTEM DATA MIGRATION (Version 1.1): Sat Jun 5 11:41:02 2004
11:41:02 PLAN: Source directory: /home/testdata
11:41:02 PLAN: Number of filesystem objects to migrate: 410
11:41:02 PLAN: Destination directory: /sfs/sanfs/aixfiles/cb
11:41:02 PLAN: On destination space required: 651.093750 MB, available: 339648 MB
11:41:02|/home/testdata|d|0|0|00000000000000000000000000000000|00000000000000000000000000000000|DONE
11:41:02|/home/testdata/inst.images|d|0|0|00000000000000000000000000000000|00000000000000000000000000000000|DONE
11:41:02|/home/testdata/inst.images/lost+found|d|0|0|00000000000000000000000000000000|00000000000000000000000000000000|DONE
11:41:02|/home/testdata/inst.images/sdd|d|0|0|00000000000000000000000000000000|00000000000000000000000000000000|DONE
11:41:02|/home/testdata/inst.images/sfs|d|0|0|00000000000000000000000000000000|00000000000000000000000000000000|DONE
11:41:02|/home/testdata/inst.images/fixes|d|0|0|00000000000000000000000000000000|00000000000000000000000000000000|DONE
11:41:03|/home/testdata/inst.images/fixes/U497868.bff|f|0|0|2004-06-05 11:36:32.000000-05:00|00000000000000000000000000000000|DONE
11:41:03|/home/testdata/inst.images/tsmcli|d|0|0|00000000000000000000000000000000|00000000000000000000000000000000|DONE
11:41:03|/home/testdata/inst.images/tsmcli/IP22727.README.32bit|f|0|0|2004-06-05 11:35:02.000000-05:00|00000000000000000000000000000000|DONE
11:41:03|/home/testdata/inst.images/tsmcli/IP22727.README.FTP|f|0|0|2004-06-05 11:35:02.000000-05:00|00000000000000000000000000000000|DONE
***** lots of files deleted ****
11:41:03 MIGRATE: Number of CPUs: 4, Available Memory: 1453 MB, IO Blocksize: 3 MB
11:41:05 MIGRATE: COPY STARTED
11:41:24 MIGRATE: Copy rate 6.174036 MB/sec, Estimated time: 0h:1m:45s
***** lots of files deleted ****
11:42:54 MIGRATE: COPY COMPLETE: 648.280576 MB copied at 5.857630 MB/sec

SAN FILE SYSTEM DATA MIGRATION (Version 1.1): Sat Jun 5 11:48:59 2004
11:48:59 PLAN: Source directory: /home/testdata
11:48:59 PLAN: Destination directory: /sfs/sanfs/aixfiles/cb
11:48:59 VERIFY: Comparing files started.
11:48:59 VERIFY: SUCCEEDED: Comparing files completed with 0 errors and 0 resets
#

Resuming the migratedata operation


If the migration operation failed or was interrupted, you can restart it from the last checkpoint written by specifying the -resume parameter. You need to specify the same log file as was created during the incomplete migration.
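As a minimal sketch (assuming the same destination, source, and log file as in Example 9-6, and that -resume is simply added to the migrate-phase invocation), the restart might look like this:

# /usr/tank/migration/bin/migratedata -log /var/tmp/migrate_do.log -phase migrate -resume /sfs/sanfs/aixfiles/cb /home/testdata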

9.2.3 Post-migration steps


In order to check the integrity of the migrated data, you can use the SAN File System migratedata utility with the verify phase, as outlined above. If data belonging to a specific third-party application was migrated, consult your application provider's specialists for the appropriate data consistency checks to run before bringing the application back online. In such a case, we recommend testing the application against test data migrated to SAN File System before cutting over to production. You will also need to check and modify any scripts, environment variables, or other parameters for each application that might point to the old data location; these must be updated to reflect the new path or location of the data.
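If an application cannot easily be reconfigured, one common approach on UNIX clients is to replace the old directory with a symbolic link to the new location in the global namespace. The following is only a sketch, using the paths from our examples and assuming the old directory is first renamed out of the way; check that your application supports symbolic links before relying on this:

mv /home/testdata /home/testdata.premigration
ln -s /sfs/sanfs/aixfiles/cb/testdata /home/testdata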


9.3 Adding and removing Metadata servers


Over the course of time, you may want to add or remove an MDS from your SAN File System cluster.

9.3.1 Adding a new MDS


If you get new engines, you can add them into the existing cluster with the addserver command. Let us assume we have a three node MDS SAN File System cluster, as shown in Example 9-9.
Example 9-9 Display cluster nodes using lsserver command
sfscli> lsserver
Name State  Server Role Filesets Last Boot
=========================================================
mds1 Online Master      4 Jun 07, 2004 4:52:02 AM
mds3 Online Subordinate 4 Jun 07, 2004 9:23:11 AM
mds2 Online Subordinate 3 Jun 09, 2004 10:46:04 PM

You need to install and configure the MDS:
1. Install the correct version of SUSE (it must be the same as on the other nodes in the cluster), patches, and the basic configuration, as in 5.2.1, Pre-installation setting and configurations on each MDS on page 127, 5.2.2, Install software on each MDS engine on page 127, 5.2.3, SUSE Linux 8 installation on page 128, 5.2.4, Upgrade MDS BIOS and RSA II firmware on page 135, and 5.2.5, Install prerequisite software on the MDS on page 135. This includes configuring the RSA card TCP/IP address and Ethernet bonding.
2. Make sure the SSH keys are set up between the new MDS and all existing MDSs (as in step 5 on page 136 of 5.2.5, Install prerequisite software on the MDS on page 135).
3. Mount the SAN File System CD in the CD-ROM drive (for example, at /media/cdrom).
4. Remove any previously installed Java version (run rpm -qa | grep IBMJava | xargs rpm -e), and install the correct version of Java from the CD (run rpm -Uvh /media/cdrom/common/IBMJava2-142-ia32-JRE-1.4.2-1.0.i386.rpm).
5. Generate a configuration file by running the installation script with the --genconfig option:
/media/cdrom/SLES8/install_sfs-package-2.2.2-130.i386.sh --genconfig /tmp/sfs.conf

This creates a template file, /tmp/sfs.conf. Edit it to include the correct SAN File System configuration parameters for your environment, as listed in Table 5-1 on page 147.
6. Now you can run the actual installation, using the configuration file as input. Note that the option is now -loadserver. Also, include the -noldap option as shown if you are using local authentication in your cluster; if not, do not include this option. If using local authentication, you must have defined the SAN File System user IDs and groups identically to the existing MDSs, as shown in 4.1.1, Local authentication configuration on page 100. Run the following command:
/media/cdrom/SLES8/install_sfs-package-<version>.sh -loadserver -sfsargs -f /tmp/sfs.conf -noldap

7. The installation will proceed as shown in Example 5-19 on page 139 and following. The output will be slightly different since we are installing one MDS rather than the entire cluster.


8. After the installation script completes, run the CLI lsserver command on the newly added MDS to show the server state. It should be Not Added, Subordinate, as shown in Example 9-10.
Example 9-10 Check new server status
# sfscli lsserver
Name State     Server Role Filesets Last Boot
==========================================================
mds4 Not Added Subordinate 0 Sep 10 2005 6:14:07 AM

9. Now add this MDS to the existing cluster, as shown in Example 9-11, using the addserver command on the master MDS.
Example 9-11 Add a new node to the SAN File System cluster
sfscli> addserver 9.42.164.113
CMMNP5205I Metadata server 9.42.164.113 on port 1737 was added to the cluster successfully.

10. Now issue the lsserver command again (see Example 9-12). It shows that the new node mds4 has been added and started. You could now keep this as a spare MDS, or assign filesets to it. Note that in this configuration all filesets were static; therefore, none were automatically moved to the new MDS. If there were dynamic filesets, we would expect some to be moved to the new MDS after the cluster detected a new member, to balance the workload.
Example 9-12 New node mds4 is added to the cluster and started
sfscli> lsserver
Name State  Server Role Filesets Last Boot
=========================================================
mds1 Online Master      4 Sep 07, 2004 4:52:02 AM
mds3 Online Subordinate 4 Sep 07, 2004 9:23:11 AM
mds2 Online Subordinate 3 Sep 09, 2004 10:46:04 PM
mds4 Online Subordinate 0 Sep 10, 2004 6:14:07 AM

11.You should also verify the RSA connectivity to the new MDS, as described in 13.5.1, Validating the RSA configuration on page 538.

9.3.2 Removing an MDS


If you need to remove an MDS from the existing SAN File System cluster, you can do so by using the dropserver command. This completely removes all records of the MDS.
Note: Do not use this procedure to temporarily shut down an MDS (for example, for maintenance) when you intend to bring it back online within a short period of time. If you need to temporarily take an MDS out of the cluster, use the stopserver command, and then the startserver command to restart it, as sketched below.
Be aware that any static filesets assigned to this MDS have to be reassigned to another MDS first, using the setfilesetserver command described in 7.5.4, Moving filesets on page 294. Dynamic filesets will fail over automatically, as described in 9.5, MDS automated failover on page 413.
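For a temporary maintenance outage, the sequence is simply a stop followed later by a start, issued on the master MDS. This is only a sketch; we assume here that stopserver and startserver take the server name as their argument, in the same way as the dropserver command shown below:

sfscli> stopserver mds3
(perform the maintenance)
sfscli> startserver mds3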


Example 9-13 shows how to remove a node from the cluster using the dropserver command. You can drop any server, except for the last remaining server in the cluster.
Example 9-13 Removing cluster node
sfscli> dropserver mds4
Are you sure you want to drop Metadata server mds4? Filesets automatically assigned to this Metadata server will be reassigned to the remaining Metadata servers. You must reassign any statically assigned filesets manually. [y/n]:y
CMMNP5214I Metadata server mds4 dropped from the cluster.

Attention: The addserver command takes the IP address of the node to be added, while the dropserver command takes the node name as its parameter.

9.3.3 Adding an MDS after previous removal


You can add a previously removed MDS back to the cluster. Note that this is not required if you are temporarily bringing down a server for scheduled maintenance; in that case, all you need to do is stop and start the server. If you do need to add an MDS that was previously removed from the cluster, use the following procedure:
1. From the master MDS, remove the desired node using the dropserver command (if you have not already done so).
2. If you already have a configuration file (for example, /tmp/sfs.conf, created when the node was installed as in step 5 on page 396), check its parameters, modify it if necessary, and use it in the following steps. Otherwise, create a new one as in step 5 on page 396 and modify it as necessary.
3. Run setupsfs -f /tmp/sfs.conf -noldap -noprompt on the MDS that is rejoining. As before, if you are using LDAP, omit the -noldap parameter. If using local authentication, check that the user IDs and passwords on the rejoining MDS match those on the other MDSs in the cluster.
4. Now you can add the node back to the cluster from the master MDS using the addserver command. You cannot re-add the node until you have run the setupsfs command on it. Example 9-14 shows the error message you get if you do not first run setupsfs.
Example 9-14 Addserver will fail if you try to re-add a previously removed node without setupsfs
sfscli> addserver 9.42.164.113
CMMNP5456E Error sending message to Metadata server. Tip: make sure the Metadata server is installed and is running.

9.4 Monitoring and gathering performance statistics


The performance of SAN File System will vary according to real-life workloads. Some of the factors that can affect performance are:
- Number of clients actively accessing a fileset
- Number of client applications actively accessing a fileset
- Number of applications accessing multiple filesets
- Number of servers within the same SAN File System cluster


SAN File System provides utilities for monitoring the cluster and Metadata traffic. These utilities can be invoked either from the SAN File System console (GUI) or the CLI and allow you to monitor the SAN File System by displaying the statistics, status, and logs. It is recommended to regularly monitor the cluster so that you can anticipate potential bottlenecks.

9.4.1 Gathering and analyzing performance statistics


In this section, we will show both CLI and GUI methods.

MDS CLI monitoring commands


Here is a summary of the SAN File System MDS monitoring utilities:
statfileset                        Shows statistics per fileset.
statserver -workstats servername   Shows workload statistics for a specified MDS.
statcluster -workstats             Shows statistics for the master MDS workload.
statfile                           Shows statistics about specified files.
lsclient -l                        Shows statistics per client on a server, or cluster-wide.

statfileset
For gathering statistics related to filesets, use statfileset at the CLI, as shown in Example 9-15. The output shows various statistics for the filesets, which might help you in balancing the fileset workload among the MDSs; you can easily see which filesets are more active than others. You can choose to display the statistics for one or more specific filesets by using the -fileset parameter (for example, statfileset -fileset ROOT aixfiles). The output also shows which filesets are currently associated with each MDS.
Tip: If statfileset is executed at the master MDS, statistics for all filesets are displayed. If it is executed at a subordinate MDS, then only statistics for filesets associated with that MDS are displayed.
Example 9-15 statfileset
sfscli> statfileset
Name        Server Current Transactions Stopped Retried Started Completed
=========================================================================
ROOT        mds1   0                    725     12      39826   39101
testfileset mds1   0                    3       0       17020   17017
lixfiles    mds1   0                    12      0       43760   43748
user1       mds1   0                    5       0       16942   16937
USERS       mds4   0                    17      0       16431   16414
userhomes   mds4   0                    5       0       16457   16452
aixfiles    mds3   0                    3       0       48613   48610
asad        mds3   0                    4       0       51      47
dbdir       mds2   0                    3       0       40      37
winhome     mds2   0                    3       0       32886   32883

This command displays how many transactions have started and completed on each fileset hosted by the MDS where the command is executed. The counters are reset each time the MDS is rebooted.


statserver
To check the workload on a specific MDS, use the CLI command statserver -workstats servername. In Example 9-16, you can see statistics gathered from the MDS tank-mds1.
Example 9-16 statserver -workstats
tank-mds1:~ # sfscli statserver -workstats
Name                          tank-mds1
Server Role                   Master
Most Current Software Version 2.2.2.91
===========Workload Statistics===========
Updates            11889
Total Transactions 11889
Dirty Buffers      0
Clean Buffers      1422
Free Buffers       8578
Total Buffers      10000
Session Locks      2
Data Locks         5
Byte Range Locks   0

This command shows the locks, buffers, and transactions on the MDS, so you can see which servers are most active. The transaction counters are reset each time the MDS is started; therefore, to track them over a given time period, issue the command periodically and calculate the difference in values between successive iterations.
Tip: If statserver is executed at the master MDS, you can choose any MDS. If it is executed at a subordinate MDS, then it only displays statistics for that MDS.
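A simple way to collect these periodic samples is a small shell loop on the MDS. This is only a sketch; it assumes sfscli is in the PATH and appends the raw output to a hypothetical log file for later comparison:

while true; do
  date >> /tmp/mds_workload.log
  sfscli statserver -workstats >> /tmp/mds_workload.log
  sleep 300     # sample every 5 minutes
done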

statcluster
To display cluster statistics, use the command statcluster -workstats, as shown in Example 9-17.
Example 9-17 statcluster -workstats
tank-mds1:~ # sfscli statcluster -workstats
Name                             ATS_GBURG
ID                               42999
State                            Online
Target State                     Online
Last State Change                Sep 8, 2005 7:34:54 AM
Last Target State Change
Servers                          2
Active Servers                   2
Software Version                 2.2.2.91
Committed Software Version       2.2.2.91
Last Software Commit             Sep 6, 2005 10:30:15 AM
Software Commit Status           Not In Progress
Metadata Check State             Active
Metadata Check Percent Completed 72 %
Installation Date                Sep 6, 2005 10:30:15 AM
============Master Server Workload Statistics=============
System Updates            5
Total System Transactions 6
Clean Buffers             26
Dirty Buffers             0
Free Buffers              486
Total Buffers             512

The statcluster command gives you additional information about the cluster, such as the software version, cluster state, and buffer statistics. In addition, it includes the status of the metadata checker function. It also includes a section on the system metadata workload, so you can calculate what proportion of the total workload is made up of the master MDS working with the System pool.
There are several other options for the statcluster command; one of the most useful is the -netconfig parameter. This option can be executed on any MDS and returns (among other things) the IP address of the master MDS. Example 9-18 shows a typical output.
Example 9-18 statcluster -netconfig
mds2:~ # sfscli statcluster -netconfig
Name               sanfs
IP                 9.42.164.114
Cluster Port       1737
Heartbeat Port     1738
Client-Server Port 1700
Admin Port         1800
Command issued from subordinate server

statfile
The statfile command displays metadata information about specified file(s), including the storage pool and fileset where the file is stored, the MDS to which the fileset is associated, and its size. There is also a verbose mode (-v on), which includes information such as date and time of creation and last access. It can only be run from the master MDS. You can use this command to check if policies are being applied as you want by seeing which storage pool a particular file is stored in. Example 9-19 shows a typical output. Note that you cannot use wildcards in the file specification; each file must be named in full.
Example 9-19 statfile
sfscli> statfile sanfs/aixfiles/aixhome/cd/README.GUID
Name                                  Pool    Fileset  Server Size (B) File Modified
==============================================================================================
sanfs/aixfiles/aixhome/cd/README.GUID aixrome aixfiles mds3   571      Jun 05, 2004 1:47:32 PM
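Because statfile does not accept wildcards, checking a whole set of files means issuing one call per file. A minimal sketch (the file names here are hypothetical) is a simple shell loop run on the master MDS:

for f in README.GUID README.FTP install.log; do
  sfscli statfile sanfs/aixfiles/aixhome/cd/$f
done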


lsclient
To display the current client workload, use lsclient -l, as shown in Example 9-20. This command gives statistics per client per MDS. For brevity in the example, we have limited it to a specific client; however, if you do not specify a client, the results for all clients will be shown.
Example 9-20 lsclient
sfscli> lsclient -l
Client Session ID State Server Renewals Last Renewal Next Renewal (secs) Privilege Client IP Port Client OS File System Version Transactions Started Transactions Complete Session Locks Data Locks Byte Range Locks
===========================================================================================================
sanbs1-8 18 Current tank-mds1 23829 Sep 9, 2005 10:03:19 AM 19 Root 9.82.23.18 2235 Windows 2.2.1.54 5 5 1 2 0
san350-1 17 Current tank-mds1 23832 Sep 9, 2005 10:03:19 AM 19 Root 9.82.22.137 2824 Windows 2.2.2.82 81 81 1 2 0
sanm80 16 Current tank-mds1 23846 Sep 9, 2005 10:03:20 AM 20 Root 9.82.24.19 1021 AIX 2.2.2.82 1 1 0 1 0
sanm80 17 Current tank-mds2 23826 Sep 9, 2005 10:29:31 AM 17 Root 9.82.24.19 1023 AIX 2.2.2.82 0 0 0 0 0
sanbs1-8 19 Current tank-mds2 23827 Sep 9, 2005 10:29:34 AM 20 Root 9.82.23.18 2237 Windows 2.2.1.54 0 0 0 0 0
san350-1 18 Current tank-mds2 23826 Sep 9, 2005 10:29:32 AM 18 Root 9.82.22.137 2833 Windows 2.2.2.82 1 1 0 0 0

You can view client-specific statistics, such as transactions, locks, and leases, for each client.
Note: SAN File System only keeps statistics for clients currently accessing the global namespace; it has no static view of all the clients, only the active clients.

Using the SAN File System console to monitor performance


You can also monitor SAN File System performance and create reports using the SAN File System console. To monitor performance, select Monitor System → System Overview, as shown in Figure 9-17 on page 403.


Figure 9-17 SAN File System overview

This shows a snapshot of overall system performance, including the state (online/offline, and so on) of each MDS, the number of filesets assigned to each MDS, and the number of transactions per minute. Recent error messages are shown at the bottom of the display, and you can use the filtering pull-downs to limit or expand the data displayed. You can tell which MDS is the current master by the small server stack to its left in the table. You can set the display to refresh automatically at a designated interval by selecting a time period from the Refresh Interval drop-down. Click the link for any MDS to show the properties for that server, or click any link in the Filesets column to show the filesets currently assigned to that server.
You can also view statistics on individual components, such as Servers, Client Sessions, Containers, Storage Pools, Volumes, LUNs, and Engines. To view statistics for specific SAN File System components:
1. Select Monitor System → Statistics, as shown in Figure 9-20 on page 405.
2. Click the link on the left-hand side for the component for which you want to view statistics. You can select between the following components: Servers, Client Sessions, Containers, Storage Pools, Volumes, LUNs, and Engines. In this example, we have chosen to show statistics about client sessions and storage pools.


Figure 9-18 shows view statistics for the active client sessions, including locks and sessions, both active and expired.

Figure 9-18 View statistics: client sessions

The storage pool report provides statistics, such as volume size and usage, as shown in Figure 9-19. It also shows how many storage pools have reached their alert threshold.

Figure 9-19 Statistics: Storage Pools

3. Click Close to close the Statistics window.


SAN File System reports


To create a report of component statistics:
1. Select Monitor System → Statistics, as shown in Figure 9-20.

Figure 9-20 Console Statistics

2. Click Create Report (see Figure 9-21).

Figure 9-21 Create report


3. Select the components for which you want to include statistics in the report, then click Create Report (see Figure 9-22).

Figure 9-22 View report

4. Now you can view and print the report using the print function in your Web browser. 5. Click Close to close the report.

Monitoring MDS HBA performance


To monitor MDS FC adapter performance, if using SVC or ESS for your system metadata, you can use the SDD command datapath query adaptstats on the MDS. To execute the command, log in to the MDS and enter datapath query adaptstats at the Linux prompt, as shown in Example 9-21.
Example 9-21 datapath query adaptstats on MDS
mds1:~ # datapath query adaptstats
Adapter #: 0
=============
           Total Read Total Write Active Read Active Write Maximum
I/O:       23         2           0           0            1
SECTOR:    23         2           0           0            1

Adapter #: 1
=============
           Total Read Total Write Active Read Active Write Maximum
I/O:       2109       430899      0           0            512
SECTOR:    2109       430899      0           0            512

The above example shows statistics, including the number of reads and writes, for each HBA.


Displaying LUN statistics


To display statistics on each individual LUN that is available to the MDS (via SDD), type datapath query devstats at the Linux prompt, as shown in Example 9-22. Note the datapath command is only available if using SDD-supported metadata storage.
Example 9-22 datapath query devstats on MDS
mds1:~ # datapath query devstats
Total Devices : 5

Device #: 0
=============
                Total Read Total Write Active Read Active Write Maximum
I/O:            2039       430895      0           0            512
SECTOR:         2039       430895      0           0            512
Transfer Size:  <= 512     <= 4k       <= 16K      <= 64K       > 64K
                432934     0           0           0            0

Device #: 1
=============
                Total Read Total Write Active Read Active Write Maximum
I/O:            24         1           0           0            1
SECTOR:         24         1           0           0            1
Transfer Size:  <= 512     <= 4k       <= 16K      <= 64K       > 64K
                25         0           0           0            0

Device #: 2
=============
                Total Read Total Write Active Read Active Write Maximum
I/O:            23         1           0           0            1
SECTOR:         23         1           0           0            1
Transfer Size:  <= 512     <= 4k       <= 16K      <= 64K       > 64K
                24         0           0           0            0

Device #: 3
=============
                Total Read Total Write Active Read Active Write Maximum
I/O:            23         2           0           0            1
SECTOR:         23         2           0           0            1
Transfer Size:  <= 512     <= 4k       <= 16K      <= 64K       > 64K
                25         0           0           0            0

Device #: 4
=============
                Total Read Total Write Active Read Active Write Maximum
I/O:            23         2           0           0            1
SECTOR:         23         2           0           0            1
Transfer Size:  <= 512     <= 4k       <= 16K      <= 64K       > 64K
                25         0           0           0            0

This command shows read and write counts for each LUN.


To get a list of commands available with datapath, simply type datapath at the Linux prompt, as shown in Example 9-23.
Example 9-23 Available parameters with datapath
mds1:~ # datapath
Invalid command
Usage: datapath query adapter [n]
       datapath query device [n]
       datapath set adapter <n> online/offline
       datapath set device <n> path <m> online/offline
       datapath set device [n]/([n] [n]) policy rr/fo/lb/df
       datapath query adaptstats [n]
       datapath query devstats [n]
       datapath open device <n> path <n>

MDS operating system commands


The MDS operating system also includes standard Linux utilities, including top and vmstat. The top command shows (in real time) the most active CPU processes on a system, including its process ID (PID), memory usage, swap file usage, and uptime (how long it has been running). The vmstat command provides real-time performance information summarized at the system level, including number of processes, paging information, and % CPU activity broken down by user, system, and idle time. These commands do not require root access to execute.
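For example, to take a quick sample of CPU, memory, and paging activity on an MDS, you could run the standard Linux commands shown below (generic invocations, not specific to SAN File System):

vmstat 5 5                # five samples, five seconds apart
top -b -n 1 | head -20    # one batch-mode snapshot of the busiest processes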

SAN File System client monitoring tools


The SAN File System client provides a status collector command, stfsstat, which shows statistics for that particular client. (This function is invoked with sanfs_ctl stats on Solaris clients.) The command is intended mainly for use by IBM service, and detailed descriptions of the output are beyond the scope of this redbook. For more information, see IBM TotalStorage SAN File System Maintenance and Problem Determination Guide, GA27-4318. Here are some examples of using stfsstat. Example 9-24 shows network statistics per MDS with which this client has a lease.
Example 9-24 SAN File System client network statistics root@sanm80:/usr/tank/client/bin > ./stfsstat -net -mount /mnt/sanfs Date: 2005-10-05 21:18:17 STFS Client Version:2.2.2.82 STFS stats since 2005-09-20 15:18:33

Network metrics for server ServerNo=0, ipAddress=9.82.24.96, port=1700 Connection Time = 0 Network Send metrics MsgSent AvgSize 181 176 Unreliable Acks 220136 152 Network Recv metrics MinSize 0 NAcks 0 MaxSize 1055 Attempts 0 AvgRTT 0 MinRTT 0 MaxRTT 0


MsgRecv 152

AvgSize 247.34

MinSize 0

MaxSize 1450

Dropped 0

Unreliable Acks 220127 181

NAcks 0

Network metrics for server ServerNo=1, ipAddress=9.82.24.97, port=1700 Connection Time = 0 Network Send metrics MsgSent AvgSize 104 138 Unreliable Acks 212132 73 Network Recv metrics MsgRecv 73 AvgSize 184.03 MinSize 0 MaxSize 1094 Dropped 0 Unreliable Acks 212119 104 NAcks 0 MinSize 0 NAcks 0 MaxSize 244 Attempts 0 AvgRTT 0 MinRTT 0 MaxRTT 0

--------------------------------------------------------------------------------

Example 9-25 shows Transaction Manager (TM) statistics per client related to TM data structures, including the number of messages sent and received (per message type), the maximum lengths and average lengths of the transaction queue, and the number of transactions, messages, and leases lost. The statistics also include the number of transactions within certain time ranges, or buckets, for each transaction type.
Example 9-25 SAN File System transaction manager statistics root@sanm80:/usr/tank/client/bin > ./stfsstat -tm -mount /mnt/sanfs Date: 2005-10-05 21:19:41 STFS Client Version:2.2.2.82 STFS stats since 2005-09-20 15:18:33 TM Metrics ServNo 0 0 0 0 Messages Queues Xmit Queue Std Proc Queue DDL Proc Queue RRL Proc Queue Sent 187 Queues Xmit Queue Std Proc Queue DDL Proc Queue RRL Proc Queue Sent 107 MaxLen 1 1 1 0 BatchSz 1.00 MaxLen 1 1 1 0 BatchSz 1.00 AvgLen 0.0000 1.0000 1.0000 0.0000 MsgSize 144.15 AvgLen 0.0000 1.0000 1.0000 0.0000 MsgSize 105.58 Txns Enq 181 126 22 0 maxBatchSz 15 Txns Enq 104 44 27 0 maxBatchSz 15 Txns Deq 104 44 27 0 Txns Deq 181 107 22 0

ServNo 1 1 1 1 Messages


Transactions 117

Retry 1 Outstand 0

Forward 0 Max Outstand 1

Del/Fail 0

Abandon 0

Blind 0

Last Lease Thread Schedule 2005-10-05 21:19:41 Loss of Leases 3 IdentifyAttmpts 9 Reasserts 4 Avg #Obj/Reassert 0 Message Buffers Sent 294 Received 432329 Sent Bucket4 10-100ms AvgLen 130.87 AvgLen 40.10 MinLen 52 MinLen 40 MaxLen 1023 MaxLen 1450 Bucket <100mu

Messages Bucket2

Bucket3

100mu-1ms 1-10ms

AvgTime MinTime MaxTime Bucket5 Bucket6 (Time in Microsec) 100ms-1s >1s

Identify CreateFile 1 5 0 LookupName 30 0 0 RemoveName 1 5 0 SetAccessCtlAttr 3 0 0 ReadDir 1 0 0 ReadDirPlus 24 0 0 UpdateAccessTime AcquireSessionLock 18 2 0 DowngradeSessionLock DenySessionLock DiscardDirectory DiscardObjAttr AcquireDataLock 9 6 0 DowngradeDataLock DeferredDowngradeDataLock BlkDiskUpdate 0 6 0

9 6 0 30 0 6 0 3 0 1 0 26 2 34 20 0 6 5 11 1 15 0 10 40 6 0

3614 0 364 0 1919 0 553 0 471 0 16120 0 487 0

665 262 483 398 471 261

7240 498 2596 862 471 206916

0 0 0 0 0 0

302

1486

969 0

384

1554

1143 0

1027

1383

Messages RenewLease IdentifyResp ReportTxnStatus CreateFileResp LookupNameResp RemoveNameResp SetAccessCtlAttrResp ReadDirResp ReadDirPlusResp

Received 432113 5 11 6 25 6 3 1 26


AcquireSessionLockResp DemandSessionLock InvalidateDirectory InvalidateObjAttr PublishBasicObjAttr AcquireDataLockResp DemandDataLock BlkDiskUpdateResp PublishClusterInfo PublishLoadUnitInfo Report Txn Status Subcodes

20 11 11 1 7 13 49 6 6 4

stpTxnRC_Success stpTxnRC_Name_Already_Exists stpTxnRC_Name_Not_Found stpTxnRC_Object_Not_Found stpTxnRC_Lock_Denied stpTxnRC_Wrong_Object_Type stpTxnRC_No_Space stpTxnRC_Object_Not_Empty stpTxnRC_Different_Container stpTxnRC_Invalid_Parameter stpTxnRC_Read_Only_Directory stpTxnRC_Change_Name_To_Self stpTxnRC_Range_Deadlock stpTxnRC_Range_Not_Available stpTxnRC_No_Conflicting_Range stpTxnRC_Retry_Required stpTxnRC_Internal_Error stpTxnRC_Others The RootClientFlag for this client: 1

0 0 5 0 1 0 0 0 0 0 0 0 0 0 0 1 0 4

--------------------------------------------------------------------------------

Example 9-26 shows metadata cache statistics for the client.


Example 9-26 SAN File System metadata cache statistics
root@sanm80:/usr/tank/client/bin > ./stfsstat -mc -mount /mnt/sanfs
Date: 2005-10-05 21:25:06
STFS Client Version:2.2.2.82
STFS stats since 2005-09-20 15:18:33
MC Metrics
CacheMgr Threads                      1
Last CacheMgr Schedule                2005-10-05 21:25:02
Avg Schedule/Thread                   263496
Evictions/Thread                      73
Emergency Evictions/Thread            0
ZeroRef Evictions/Thread              4
ZeroRef Dir Evictions/Thread          61
ZeroRef Evictions/Thread (Time)       65
ZeroRef Evictions/Thread (Capacity)   0
ZeroRef Evictions/Thread (Deleted)    8
ZeroRef Releases/Thread               323
Deleted Releases/Thread               6
Batch Lock Downgrade:                 AVG 1.7381  MAX 3  MIN 1
Desired Metadata Cache Size           268435456
Metadata Cache Grace Period           1800
Object Hash Table Buckets             64003
Name Hash Table Buckets               64003
Total Client Memory                   2119234
Total Client Memory(MW Overhead)      1984
Memory Allocs                         727175
Memory Frees                          726927
Current Memory Waits                  0
Total Memory Waits                    0
Total File Objects                    0
Total Dir Objects                     2
Total Symlink Objects                 0
Total Objects                         1
Total Shadow Objects                  0
Total Inconsistent Objects            1
Total Names                           0
Total Segments                        0
Total Attributes                      0
Access Time Updates                   37
Translation Discards                  0
Directory Name Discards               59989
Object Hit Ratio                      92.81 %
Name Hit Ratio                        88.18 %
MakeInconsistentObject operations     1
MakeMRUObject operations              60320
MakeZeroRefObject operations          61

To monitor general client performance, you can use SDD commands, such as datapath query adaptstats/devstats, as well as operating system specific utilities, such as top (UNIX), perfmon (Windows), and vmstat (UNIX). To view statistics on the HBAs on the client, use datapath query adaptstats, as shown in Example 9-27.
Example 9-27 Viewing HBA statistics on client
C:\Program Files\IBM\Subsystem Device Driver>datapath query adaptstats
Adapter #: 0
=============
        Total Read Total Write Active Read Active Write Maximum
I/O:    40         20          0           0            2
SECTOR: 75         11          0           0            9

Adapter #: 1
=============
        Total Read Total Write Active Read Active Write Maximum
I/O:    0          8           0           0            0
SECTOR: 0          13          0           0            0

The command shows read and write figures for each HBA on that system.


For statistics on the disk device, use datapath query devstats, as shown in Example 9-28 on page 413.
Example 9-28 Viewing disk statistics on client
C:\Program Files\IBM\Subsystem Device Driver>datapath query devstats
Total Devices : 5

Device #: 0
=============
                Total Read Total Write Active Read Active Write Maximum
I/O:            8          118832      0           0            20
SECTOR:         8          950656      0           0            160
Transfer Size:  <= 512     <= 4k       <= 16K      <= 64K       > 64K
                8          118832      0           0            0

Device #: 1
=============
                Total Read Total Write Active Read Active Write Maximum
I/O:            8          0           0           0            1
SECTOR:         8          0           0           0            1
Transfer Size:  <= 512     <= 4k       <= 16K      <= 64K       > 64K
                8          0           0           0            0

Device #: 2
=============
                Total Read Total Write Active Read Active Write Maximum
I/O:            8          0           0           0            1
SECTOR:         8          0           0           0            1
Transfer Size:  <= 512     <= 4k       <= 16K      <= 64K       > 64K
                8          0           0           0            0

Device #: 3
=============
                Total Read Total Write Active Read Active Write Maximum
I/O:            346        645713      0           0            20
SECTOR:         2712       5165704     0           0            160
Transfer Size:  <= 512     <= 4k       <= 16K      <= 64K       > 64K
                8          646051      0           0            0

Device #: 4
=============
                Total Read Total Write Active Read Active Write Maximum
I/O:            8          0           0           0            1
SECTOR:         8          0           0           0            1
Transfer Size:  <= 512     <= 4k       <= 16K      <= 64K       > 64K
                8          0           0           0            0

9.5 MDS automated failover


SAN File System provides automated fail-over and controlled fail-back functions. If an MDS fails and the MDS autorestart service cannot bring it back into the cluster, or if an MDS is manually stopped, SAN File System automatically and nondisruptively fails over the MDS workload by redistributing its filesets and, if necessary, reassigning the master role to another active MDS. SAN File System also detects rogue MDSs via the autorestart service. A rogue MDS is one that is not reachable from the cluster, fails to respond to requests, and might be running or


have latent queued I/O. If a rogue MDS is detected, one of the other MDSs shuts it down, via the RSA adapter, before failing over its workload. Figure 9-23 summarizes the various failure possibilities and the actions that are taken in each instance.
The possible faults and operations, and the action taken in each case, are:
- Manually move a fileset between servers: Only filesets served by the destination MDS are affected; other MDSs continue processing without pause.
- Manually stop a subordinate: Filesets are automatically moved.
- Manually stop the master: Filesets and the master role are automatically moved.
- Manually start a subordinate: Filesets may be failed back (depending on configuration).
- Recoverable Metadata server software fault, subordinate: Server is automatically restarted; filesets are not moved.
- Recoverable Metadata server software fault, master: Server is automatically restarted; filesets and master role are not moved.
- Non-recoverable software or hardware fault, subordinate: Engine is automatically shut down and its workload moved to other servers.
- Non-recoverable software or hardware fault, master: Engine is automatically shut down and its workload moved to other servers.
- SAN File System client hardware or software fault: Any individual SAN File System files or directories locked by a client that fails are ordinarily released after 20 seconds, if the client has not recovered by that time.

Figure 9-23 SAN File System failures and actions

9.5.1 Failure detection


The administrative server provides an optional MDS restart service (autorestart) that runs on each engine. The service monitors the MDS software processes and restarts them in the event of a failure. The MDS then tries to rejoin the cluster. If the restart and rejoin are successful, no fileset failover is necessary. If they are not successful, the MDS is ejected from the cluster and its workload is failed over (as explained in 9.5.2, Fileset redistribution on page 415). If an MDS experiences an operating system crash, it automatically receives a reboot request through its TCP/IP RSA interface. The MDS restart service is then started automatically and in turn restarts the MDS processes on the engine.
The restart service is enabled by default. The status of the service can be checked using the lsautorestart sfscli command on the master MDS, as shown in Example 9-29.
Example 9-29 Verifying restart service status
tank-mds1:~ # sfscli lsautorestart
Name      Service State Last Probe State Last Probe              Probes
=======================================================================
tank-mds1 Running       Live Server      Sep 9, 2005 10:24:52 AM 23385
tank-mds2 Running       Live Server      Sep 9, 2005 10:51:04 AM 23668

Note that when an MDS is enabled to restart automatically, an SNMP trap is not sent when the MDS is restarted. Manually stopping an MDS or cluster disables the MDS restart service for that MDS or cluster. Manually starting the MDS or cluster reenables the MDS restart service for that MDS or cluster.


The MDS restart service can be started manually using the startautorestart sfscli command.
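A minimal sketch of re-enabling and then checking the service on a single MDS follows; we assume here that startautorestart takes the server name as its argument, in the same way as the other per-server sfscli commands shown in this chapter:

sfscli> startautorestart mds3
sfscli> lsautorestart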

9.5.2 Fileset redistribution


When an MDS is missing because of either a failure or a manual operation (for example, scheduled maintenance), SAN File System automatically and nondisruptively redistributes the workload of the missing server to the remaining MDSs, based on a distribution algorithm. Only the filesets on the failed or missing MDS are moved; filesets served by the running Metadata servers are not affected.
The distribution algorithm first attempts to redistribute all static filesets (if any) to a spare, idle MDS that is set aside for failover (N+1 configuration). A spare MDS is one that has no static filesets assigned to it. If more than one spare exists, all static filesets assigned to the failed MDS are distributed among the spares in a round-robin fashion.
Important: A spare MDS is a server that does not have any assigned (static or dynamic) filesets. The reason we say all static filesets are redistributed to the spare MDS, rather than all filesets, is that the only way to meaningfully use an N+1 configuration is when all the filesets are static, that is, when you can guarantee that the spare MDS will never normally have filesets to host.
If there is no spare MDS, the static filesets are treated as dynamic filesets. The dynamic filesets are then distributed in a round-robin fashion to the MDSs with the fewest assigned filesets. The number of filesets per MDS is the only criterion taken into account for load balancing during redistribution.
Tip: Because of the way the fileset distribution failover algorithm works, we recommend that you configure your environment with either all static or all dynamic filesets, rather than a mixture of both types. This results in more predictable failover behavior.
The failover is temporary for static filesets. A static fileset is one that you manually assigned to a specific MDS (using the mkfileset or setfilesetserver command). These filesets are assigned back to their statically assigned MDS when that MDS rejoins the cluster. Dynamic filesets that are failed over are not automatically failed back after the MDS recovers; however, one or more dynamic filesets will be redistributed across the cluster to reinstate load balancing.


Failover
In Figure 9-24, we have a 4 MDS cluster.

Figure 9-24 List of MDS in the cluster

We can see that among these four servers, mds4 is acting as a spare server, since no fileset is explicitly assigned to it (value of 0 in the Filesets column). Note also in the list of filesets, shown in Figure 9-25, mds4 does not appear in the Assigned Server column.

Figure 9-25 List of filesets

In this configuration (only static filesets assignment and one server as spare), if any MDS fails, all of its filesets will automatically move to the spare MDS; in this case, mds4. We simulate a failure on mds3 by disconnecting both its Ethernet connections. We have to disconnect both, since the Ethernet bonding configuration would automatically failover to the other NIC if only one was down. After 10 seconds (by default in SAN File System V2.2.2), the failed MDS is removed from the cluster. It displays with an unknown state (the field State reports - ), as shown in Figure 9-26 on page 417. Note that in the case of a planned outage, the MDS should first be stopped gracefully using stopserver; in this case, it would then report State as Not Running.


Figure 9-26 Metadata server mds3 missing

In the meantime, all filesets that were assigned to MDS mds3 (user1 and asad) are sequentially and automatically reassigned to the spare MDS mds4. We can see the result of the failover in Figure 9-27. The value in the Server column for these filesets still shows mds3, as they are statically assigned there. The time taken to complete the failover varies according to the number of filesets requiring failover and how active they are, but it is generally no more than about one minute, and often less.

Figure 9-27 Filesets list after failover

Tip: The failover process can be monitored using the catlog sfscli command, which displays error messages logged in the MDS.
Filesets asad and user1 are statically assigned to MDS mds3, but are currently assigned to MDS mds4 because of the failover.


Failback
In our case, we were able to fix the network problem without rebooting, that is, by reconnecting the Ethernet cables on mds3. This did not automatically restart the SAN File System processes. The MDS could report its presence to the other servers, but its state shows as Not running, as shown in Figure 9-28. Therefore, failback is not triggered.

Figure 9-28 Metadata server mds3 not started automatically

To trigger the failback, we have to restart the MDS processes on mds3. Select mds3 and choose Start... in the Select Action drop-down menu. The message indicates that bringing up the server will cause all static filesets to be reassigned to their original server, as shown in Figure 9-29. In our case, this causes filesets asad and user1 to fail back to MDS mds3.

Figure 9-29 Failback warning

Once the server is started, the static filesets are reassigned to the original MDS. If the failure had caused the MDS to reboot, it would have restarted SAN File System automatically, and therefore failback would have been automatic.
From a client perspective, the fileset movement (either manual or during failover) is not disruptive and therefore will not usually return an error to the calling application. It will typically cause a pause in any reads or writes during the fail-over and fail-back processes. Some applications might experience a timeout; however, if this occurs, the client can retry the operation. Note that there are no administrative tasks necessary on the client side as part of the failover or failback.

9.5.3 Master MDS failover


If the master MDS fails, two processes are triggered:
- A new MDS takes on the master role, including running the administrative Web interface.
- Filesets assigned to the master are failed over.
The master role is reassigned to another MDS according to a quorum algorithm. This algorithm makes use of a quorum disk and a majority voting procedure to assign the master role to an MDS that is a member of the largest active, mutually-connected group of MDSs that all have access to the system storage pool. The quorum algorithm does not take into account the network connectivity between the MDSs and the clients; if a network partition separates the clients from the MDSs, the chosen master might not be ideal.
The new master MDS selected is essentially random; you cannot designate a particular MDS for takeover. Therefore, you should reserve some capacity (around 5%) for the master role workload on each MDS in the cluster. For planning purposes, a conservative figure for 100% utilization on an MDS would be 1200 transactions per second. We showed some methods for measuring MDS performance in 9.4.1, Gathering and analyzing performance statistics on page 399, so you can either monitor your MDSs in a test environment or, preferably, use pre-installation capacity planning to predict the likely workload of your servers. In that way, you can choose to configure additional server capacity to be used in the event of failover.
The master role does not fail back; that is, once the master role has moved to another MDS, it will not move again until or unless the new master MDS fails. As well as failing over the master workload, the filesets assigned to that MDS are also failed over, using the process described in 9.5.2, Fileset redistribution on page 415.


Figure 9-30 shows that the master MDS is currently mds1. We gracefully stop the master MDS by selecting mds1 and Stop in the Select Action drop-down menu. The server then reports as Not running.

Figure 9-30 Graceful stop of the master Metadata server

At this moment, the cluster is in the process of re-electing a new master and failing over all filesets assigned to the former master MDS mds1. As described in 9.5.2, Fileset redistribution on page 415, MDS mds4 will be assigned these filesets, since it is a spare. As part of the master failover, the SAN File System Web interface is no longer hosted by mds1. Any current browser sessions will hang and must be closed and re-opened, pointing to any of the remaining MDS IP addresses. In our example, we re-opened the browser to https://mds3:7979/sfs/. After the initial login window, the console is automatically redirected to the Web interface hosted by the new master MDS. In Figure 9-31, we see that mds2 has assumed the master role.

Figure 9-31 Metadata server mds2 as new master


Note that the new master MDS can also be determined from any running server using the statcluster command with the netconfig option, as shown in Example 9-30.
Example 9-30 statcluster command used to determine master Metadata server
mds3:~ # sfscli statcluster -netconfig
Name               sanfs
IP                 9.42.164.115
Cluster Port       1737
Heartbeat Port     1738
Client-Server Port 1700
Admin Port         1800
Command issued from subordinate server

The line labeled IP gives us the IP address of the master Metadata server. In our case, the address 9.42.164.115 corresponds to server mds2. Later on, we can restart server mds1. This will trigger static fileset movements as part of failback, but will not affect the master role assignment.
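If you need to extract the master address in a script, a minimal sketch (assuming the output format shown in Example 9-30 and standard awk on the MDS) is:

sfscli statcluster -netconfig | awk '/^IP/ {print $2}'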

9.5.4 Failover monitoring


The failover processes described above can be monitored in two ways:
- Using the catlog -log event sfscli command: On each MDS, all events are reported to internal logs that are accessible using catlog. An extract from an execution of this command is shown in Example 9-31.
Example 9-31 View server logs with catlog command
HSTGS0192I Info Normal mds2 May 31, 2004 12:19:27 AM Subordinate Metadata server mds2 has lost the master Metadata server mds1. Attempting to become the new master.
HSTGS0194I Info Normal mds2 May 31, 2004 12:19:27 AM A cluster transition has been initiated
HSTTM0001I Info Normal mds1 May 31, 2004 12:19:51 AM Client name WINWashington identified as client ID 22 from IP address 9.42.164.127, port 2477.
HSTTM0144I Info Normal mds1 May 31, 2004 12:19:51 AM Client WINWashington (client ID 22) is using NLS converter UTF-16LE.
HSTTM0151I Info Normal mds1 May 31, 2004 12:19:51 AM New cluster information published to client. ClientId= (22).
HSTHA0074I Info Normal mds2 May 31, 2004 12:20:07 AM Launching failover script: /usr/tank/server/bin/stopengine.pl mds1 >> /usr/tank/server/log/log.stopengine 2>&1

- Using the SNMP capabilities of SAN File System: In each phase of the failover process, SAN File System triggers SNMP traps. These SNMP traps can be sent to IBM Director on the Master Console, if deployed, or to any other SNMP manager available in your environment. We will show you how to send SNMP traps to IBM Director on the Master Console.

Configuring SAN File System to send SNMP traps to IBM Director


First, we need to set up SAN File System for SNMP. We will use the GUI in this section; see 13.6, Simple Network Management Protocol on page 543 for the corresponding sfscli commands.
1. Log on to the administrative console and select Monitor System → SNMP Properties → SNMP Managers.


2. Check SNMP Manager, enter the Master Console IP address, and choose V2C for the SNMP version. Leave the default SNMP port and community as is, as shown in Figure 9-32.

Figure 9-32 Configuring SANFS for SNMP

Click Apply to save the changes.
3. Click SNMP Events under Monitor System → SNMP Properties to select the severity of events that will be sent as a trap to the SNMP Managers, as shown in Figure 9-33.

Figure 9-33 Selecting the event severity level that will trigger traps

We choose to send all except information level events. Click Apply to save the changes. SAN File System is now configured for SNMP.


4. Next, compile the SAN File System MIB on IBM Director and verify the traps. The SAN File System MIB is located on each MDS in the directory /usr/share/snmp/mibs/. Copy the MIB from this directory on the MDS, as shown in Example 9-32. Then save it on the Master Console as IBM-SANFS-MIB.mib.
Example 9-32 Copy the IBM-SANFS-MIB.txt using scp in cygwin
$ scp root@9.42.164.114:/usr/share/snmp/mibs/IBM-SANFS-MIB.txt .
root@9.42.164.114's password:
IBM-SANFS-MIB.txt 100% |*****************************| 10872 00:00

5. Log into IBM Director on the Master Console, as shown in Figure 9-34.

Figure 9-34 Log into IBM Director Console

6. In the Tasks menu, select Discover Systems → SNMP Devices, as shown in Figure 9-35.

Figure 9-35 Discover SNMP devices


7. In the Groups window on the left side of the window, expand the All Groups group, right-click the SNMP Devices group, and then select Compile a new MIB, as shown in Figure 9-36.

Figure 9-36 Compile a new MIB

8. A window opens, prompting you to select the location of the new MIB. Select the IBM-SANFS-MIB.mib file that you saved in step 4 on page 423, and click OK, as shown in Figure 9-37.

Figure 9-37 Select the MIB to compile

9. The Status Messages window displays, as shown in Figure 9-38 on page 425.


Figure 9-38 MIB compilation status windows

10.To test the environment, you can log onto the master MDS and use the snmptrap operating system command, as shown in Example 9-33. Note this is not part of the SAN File System CLI; this will simply send a test trap to the SNMP manager specified.
Example 9-33 Sending test trap with snmptrap
mds1:~ # snmptrap -v 2c -c public 9.42.164.160 '' IBM-SANFS-MIB:sanfsGenericTrap

11.Once the trap is sent, go into IBM Director and right-click All Events and select Open... under Event Log in the Tasks section, as shown in Figure 9-39.

Figure 9-39 Viewing all events in IBM Director


12. The trap appears, as shown in Figure 9-40.

Figure 9-40 Viewing the test trap in IBM Director

IBM Director will now receive traps sent by SAN File System. For example, Figure 9-41 shows a trap that is sent when shutting down an MDS.

Figure 9-41 Trap sent when an MDS is shutdown


In the trap details, we can see that the server is mds1. It is moving from State 1 (Online) to State 0 (Down). Each time an MDS changes state or the cluster state changes, a similar trap is sent to the specified SNMP Managers. Therefore, the entire fail-over and fail-back processes can be monitored using these traps. It is possible to create filters in IBM Director that will report only the SAN File System traps. In this way, you can have a fast overview of all SANFS related events. Consult the IBM Director documentation for more details on doing this task.

9.5.5 General recommendations for minimizing recovery time


Although SAN File System provides high availability, this section lists some recommendations for minimizing recovery time from failures:
- Workload balancing: A more highly loaded MDS will take longer to fail over; therefore, it is important to balance the total workload across all the MDSs. The main factors to consider for load balancing are the total number of filesets and the workload (number of transactions) per fileset.
- Spare capacity: In order to support the load induced by fileset redistribution or master MDS failover, some spare capacity should be left on each MDS. If possible, we recommend an N+1 configuration, where you have one more MDS than is actually required for the planned workload. The extra MDS has no static filesets assigned to it and can therefore serve as a spare to ensure there is always enough capacity in case of failure. Note that you can only effectively exploit an N+1 configuration if using static filesets, since if you use dynamic filesets, you cannot prevent the spare MDS from having filesets assigned to it as they are created.
- System LUNs: In certain failure situations, the MDSs need to perform a scan of the system LUNs. Reducing the total number of system LUNs (that is, having fewer, larger LUNs rather than more, smaller LUNs for the same storage capacity) can potentially lessen recovery time.
- Planned outage: In the case of a planned outage, move the affected filesets manually, using the setfilesetserver sfscli command, before shutting down the MDS.
- Static fileset reassignment: After failover, static assignment of the filesets to the remaining Metadata servers (using the setfilesetserver sfscli command) can provide gradual and controlled fileset redistribution upon failback.

9.6 How SAN File System clients access data


We have already explained that SAN File System works by splitting metadata from the actual file data. When a client creates a file in a SAN File System fileset, the metadata for this object is stored in the MDS System pool, and the file data itself is stored in a User pool on one or more volumes. When a client performs an operation on a file system object, either a file or a directory in the SAN File System global namespace, this action may or may not require access to the User pool volume where the file is stored. This depends on whether the operation needs to access the file data itself, or only the metadata for that file system object (file/directory). Therefore, we can differentiate between:
- Actions that access metadata only: These are fulfilled by the MDS and do not require any I/O to LUNs in a User pool.
- Actions that access both metadata and the file data: These are fulfilled by the MDS as well as I/O to volumes in a User pool.


A typical example of metadata-only access is listing files in a directory, such as with the ls command on UNIX (including Linux) systems. If you then look at the contents of a file, you do require access to the volume that the file is stored on, since reading a file requires both metadata and file data access. So what happens in a non-uniform SAN File System configuration where a client has attached the global namespace but only has access to some of the volumes? In particular, what kind of visibility does the client have to SAN File System objects that are stored on volumes associated with LUNs that are not accessible by that client?
To answer this question, we configured an environment with a fileset attached at the directory /sfs/sanfs/lixfiles/linuxhome. Assume our policy directs all files in this fileset to the same pool. A quick test of this is to use the statfile command to find the storage pool in which one of the files in the fileset is stored; we find that it is stored in pool lixprague. Next, we list the volumes in that pool using the lsvol command. Finally, we confirm that the client AIXRome has no access to the volume vol_lixprague1, using the reportclient command. This sequence is shown in Example 9-34.
Example 9-34 Verifying client AIXRome has no access to the volume
sfscli> statfile sanfs/lixfiles/linuxhome/test.txt
Name                              Pool      Fileset  Server Size (B) File Modified
============================================================================================
sanfs/lixfiles/linuxhome/test.txt lixprague lixfiles mds1   46       Jun 09, 2004 3:06:32 PM
sfscli> lsvol -pool lixprague
Name           State     Pool      Size (MB) Used (MB) Used (%)
===============================================================
vol_lixprague1 Activated lixprague 40944     8560      20
sfscli> reportclient -vol vol_lixprague1
Name
=========
LIXPrague

Now, let us try an operation on the client AIXRome requiring metadata-only access. In Example 9-35, you can see that you can list files in the /sfs/sanfs/lixfiles/linuxhome directory using the ls -l command on this client, even though the LUN where the files are stored is inaccessible. This is because this operation only accesses metadata and is fulfilled by the MDS hosting the fileset.
Example 9-35 You can list file system objects with ls commands, even if data LUN is inaccessible
[root@rome linuxhome]# ls -l
total 4096035
-rw-r--r--    1 root     root            336 Jun  4 09:57 dsmerror.log
-rw-r--r--    1 root     root            302 Jun  4 09:59 dsmj.log
-rw-r--r--    1 root     root            993 Jun  4 09:58 dsmwebcl.log
-rw-r--r--    1 root     root     4194304000 Jun  1 15:28 hugefile.log
drwxrwxrwx    2 root     root              6 Jun  4 10:00 install
d---------    2 1000000  1000000           2 May 26 01:14 lost+found
-rw-r--r--    1 root     root             13 Jun  9 14:50 next.txt
drwxr-xr-x   19 root     root             20 Jun  1 17:30 sysfiles
-rw-r--r--    1 root     root             27 Jun  9 14:50 test.txt
-rw-r--r--    1 root     root          10217 Jun  4 10:01 tsm_restore.gif

However, when we try a command that requires access to the actual volume where the files are stored, it fails. Example 9-36 on page 429 shows a failed attempt to display the contents of the test.txt file; the command returns an I/O error because the client cannot access the volume where the file is stored.
Example 9-36 However, you cannot read the content of the file if data LUN is inaccessible
[root@rome linuxhome]# cat test.txt
cat: test.txt: Input/output error
[root@rome linuxhome]#

How can we restrict a SAN File System client from even seeing metadata about files, such as their names, last access dates, permissions, and so on? The answer is standard operating system security measures: file and directory permissions. For our next example, on a privileged client, we set the permissions for the directory linuxhome to 700, which grants access only to root users on privileged clients. Now on our client AIXRome, which is not a privileged client, we can see in Example 9-37 that we cannot list or change to the secured directory even if we are logged in as the root user.
Example 9-37 Root user on a non-privileged client cannot list files in the linuxhome directory
On MDS
mds4:~ # sfscli statcluster -config |grep Privileged
Privileged Clients LIXPrague,WINWashington

On non-privileged client, AIXRome
[root@rome lixfiles]$ ls -l
total 5
drwx------    6 root     root           13 Jun  9 14:50 linuxhome
[root@rome lixfiles]# cd linuxhome
[root@rome linuxhome]$ ls -ls
ls: .: Permission denied
[root@rome linuxhome]$

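For reference, the 700 permission itself was applied beforehand with an ordinary operating system command on one of the privileged clients. The following is a minimal sketch only; the /sfs/sanfs mount point and the LIXPrague client name match the Linux setup used earlier in this chapter, but your mount point may differ.

# On a privileged client (for example, LIXPrague), as root:
# restrict the directory so that only root on privileged clients can enter it
chmod 700 /sfs/sanfs/lixfiles/linuxhome
ls -ld /sfs/sanfs/lixfiles/linuxhome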
Therefore, we have demonstrated how client access works for both metadata and data access, as well as giving a brief example of how standard operating system methods can be used to prevent even metadata access to parts of the SAN File System.

9.7 Non-uniform configuration client validation


In a non-uniform configuration, SAN File System clients can access only a subset of the volumes available for user storage. This means that clients can access only some of the filesets. To access a fileset, a client must have access to all the pools used by that fileset, and therefore to all the volumes in those pools. This section provides a way to validate that the volumes you have made visible to your clients provide consistent access to the required filesets.


We will assume the following configuration: Fileset F has files stored in pools A and B. Pool A contains volumes 1 and 2, and pool B contains volumes 3 and 4. If a client NT1 needs access to fileset F, the SAN configuration and the disk subsystem configuration must be set up so that the client has visibility to volumes 1, 2, 3, and 4. If you add additional volumes to either pool A or pool B, these must also be made visible to client NT1. Figure 9-42 shows this configuration.

Figure 9-42 Example of required client access

When a client first initiates contact with the SAN File System, for example, when you boot the client and start the client processes, the MDS checks that the client does not have incomplete access to any storage pool; that is, the client must either access all the volumes in a storage pool, or none of them. In the example above, if it is determined that the client has visibility to volume 3, but not volume 4, this is detected at startup and the server logs will contain the following message:
HSTCM0954W Client NT1 does not have access to volume 4 with diskID abcdef in storage pool B.

The client will still operate, but will give I/O errors if it tries to read data from volumes that are not visible to it. This is shown in 9.6, How SAN File System clients access data on page 427. If a client has only partial access to a storage pool (that is, visibility to only some volumes), then writes will still succeed, since the write will always be directed to a volume that the client can access. If, however, the client does not have access to ANY volumes in the required storage pool, the write will fail with an I/O error.

9.7.1 Client validation sample script details


To enhance client access checking, we provide a sample script intended to report the validity of a client's access against a given set of filesets. This script gives two sets of results:
It lists the pools the client is able to access correctly and, where access is incomplete, which LUNs are missing.
It validates the access of the client to a given set of filesets.


The sample script provided must be run on the master MDS. It takes, as entry parameters, the client name and the list of filesets to validate.

Summary of script logic


1. It builds the list of all LUNs available to the client using the lslun -client sfscli command. This list is stored in an array named client_luns.
2. It builds the list of all pools defined in SAN File System (except the SYSTEM pool) and stores it in an array named pools.
3. For each of the pools in the array, the script retrieves its corresponding volumes using the lsvol -pool sfscli command. The volumes are stored in the following format: <vol1> <vol2> : <vol2> : <vol1> <vol4> : . : . The character : separates series of volumes. Each series of volumes corresponds to a pool, and the series are organized in the same order in which the pools array is built. The character . designates an empty pool.
4. For each of the pools, the sample script then checks access from the client. The result of this step is a list of pools the client can correctly access (that is, pools for which it is able to access all volumes). This list is stored in an array named client_avail_pools.
5. For each of the filesets specified as entry parameters, the list of required pools is built using the reportfilesetuse sfscli command. This list is then compared against client_avail_pools.
The content of the sample script is listed in Appendix C, Client configuration validation script on page 597.
Important: This script is provided AS A SAMPLE ONLY. It is not part of the supported SAN File System software distribution. It requires careful testing to verify its results in your environment. You may want to modify it to include additional error checking or other functions. The script represents only the current state of the system. If you add a new storage pool, add new volumes to a storage pool, or change the policy so that filesets can use different storage pools, you need to re-check client access.
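To illustrate the core idea, the fragment below is a much-simplified sketch of steps 1 through 4 (the pool and volume access check only). It is not the supported sample script from Appendix C, and the output parsing (skipping two header lines, matching volume names with grep) is an assumption that may need adjusting to the exact sfscli output format of your release.

#!/bin/bash
# Simplified sketch of the pool-access check. Run on the master MDS.
# Usage: ./check_pools.sh <client_name>    (script name is hypothetical)
client=$1

# Step 1: capture the report of LUNs visible to the client
lun_report=$(sfscli lslun -client "$client")

# Steps 2-4: for every user pool, verify that each of its volumes appears in the report
for pool in $(sfscli lspool | awk 'NR>2 && $1 != "SYSTEM" {print $1}'); do
    missing=0
    for vol in $(sfscli lsvol -pool "$pool" | awk 'NR>2 {print $1}'); do
        if ! echo "$lun_report" | grep -qw "$vol"; then
            echo "WARNING: client $client cannot see volume $vol in pool $pool"
            missing=1
        fi
    done
    [ "$missing" -eq 0 ] && echo "INFO: client $client has complete access to pool $pool"
done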

9.7.2 Using the client validation sample script


Example 9-38 on page 432 shows a sample invocation of the script. The screen output summarizes the validity of access for the given filesets; the script also produces a log file containing details. The log is written to the file check_fileset_access.sh.log in the directory from which you invoked the script. The script syntax is:
check_fileset_access.sh <client_name> <fileset> {fileset} {fileset} ...

You must give one client name and one or more fileset names to check access. In this example, the access of the client AIXRome to the volumes in the storage pools either currently or potentially used by filesets aixfiles, winhome, lixfiles, and dbdir is checked.


Example 9-38 Using the client validation sample script
mds4:/tmp/eric # ./check_fileset_access.sh AIXRome aixfiles winhome lixfiles dbdir
Gathering information from SANFS - please wait...
Listing luns on client AIXRome...
Listing pools on SANFS...
CMMCI9006E No Volume instances found that match criteria: pool = empty_pool.
Now checking filesets access...
INFO - Client AIXRome has correct access to fileset aixfiles
WARNING - Client AIXRome does not have correct access to fileset winhome
INFO - Client AIXRome has correct access to fileset lixfiles
INFO - Client AIXRome has correct access to fileset dbdir
Please refer to ./check_fileset_access.sh.log for details.

Example 9-39 shows the corresponding log file.


Example 9-39 Log details from validation check
mds4:/tmp/eric # more check_fileset_access.sh.log
####### Start checking fileset access for client AIXRome at Tue Jun  8 10:51:04 EDT 2004 #########
INFO Building list of luns acccessible by client AIXRome
INFO Building list of pools defined on SANFS
INFO Building the list of volumes for each pool in SANFS
INFO Checking access to pool DEFAULT_POOL...
INFO access to pool DEFAULT_POOL - OK
INFO Checking access to pool Test_Pool1...
INFO access to pool Test_Pool1 - OK
INFO Checking access to pool lixprague...
INFO access to pool lixprague - OK
INFO Checking access to pool winwashington...
WARNING client AIXRome does not have access to volume vol_winwashington1 in pool winwashington
WARNING client AIXRome does not have access to volume vol_winwashington2 in pool winwashington
WARNING client AIXRome has incomplete access to pool winwashington
INFO Checking access to pool aixrome...
INFO access to pool aixrome - OK
INFO Checking access to pool empty_pool...
INFO The pool empty_pool does not contain any volume
WARNING client AIXRome has incomplete access to pool empty_pool
INFO Checking access to pool small_pool...
WARNING client AIXRome does not have access to volume small_pool-small_pool-0 in pool small_pool
WARNING client AIXRome has incomplete access to pool small_pool
INFO Checking fileset aixfiles...
INFO - Client AIXRome has correct access to fileset aixfiles
INFO Checking fileset winhome...
WARNING Pool winwashington is missing
WARNING - Client AIXRome does not have correct access to fileset winhome
INFO Checking fileset lixfiles...
INFO - Client AIXRome has correct access to fileset lixfiles
INFO Checking fileset dbdir...
INFO - Client AIXRome has correct access to fileset dbdir
####### Checking for client AIXRome finished successfully ############

From this log, we can see that client AIXRome does not have correct access to fileset winhome, because it does not have access to volumes vol_winwashington1 and vol_winwashington2 from pool winwashington, as well as the volume in pool small_pool.


Chapter 10. File movement and lifecycle management


This chapter describes the file lifecycle management capabilities of SAN File System, including:
Manually moving a single file
Manually moving multiple files
Defragmentation of files
Automatically moving or deleting files using a file management policy


10.1 Manually move and defragment files


SAN File System V2.2 introduced a file movement command that can be used for single or multiple files. The mvfile command moves one or more files from their current storage pool to a different, specified storage pool. You can also use this command to defragment a file rather than move it, by specifying the same storage pool as its current storage pool. This section describes how to perform those operations. The operation can be initiated with minimal disruption, meaning:
No noticeable disruption to clients reading the file. Clients reading the file access the source copy while the move is in progress.
File system I/O calls to the file by clients writing the file or with exclusive access to the file are delayed until the move is complete.
A file may be opened while it is being moved. If it is opened for shared read access, the access is to the source copy while the move is in progress. If it is opened for write or exclusive access, the open is delayed until the move is complete.
File system calls to the file will not return errors due to the move.
When the move is complete, all clients accessing the file are transparently switched to use the destination copy. The space occupied by the original source is returned to free space.

10.1.1 Move a single file using the mvfile command


The mvfile command transfers the contents of a file from the source to the destination pool with minimal disruption to clients. Any FlashCopy images of the file are also moved automatically. Typically, applications do not need to be quiesced; however, the effect on the application depends on the properties of the application. Delays in I/O calls can cause failures in some applications. The benefit of this feature is the ability to change the class of storage to match the business value of a file, and to change the pool-dependent settings used by a file to improve performance. It can also be used to redistribute a file to stripe across newly added volumes in a storage pool, or to correct the outdated or unintended effect of a file placement policy rule. In this example, we show how to move a single file using the mvfile command. Example 10-1 shows three storage pools defined: SYSTEM, poola, and poolb. Note that the used capacity in poola, which is the default pool, is about 2432 MB.
Example 10-1 List storage pools
# sfscli lspool
Name   Type         Size (MB) Used (MB) Used (%) Threshold (%) Volumes
======================================================================
SYSTEM System       10224     384       3        80            1
poola  User Default 143328    2432      1        80            2
poolb  User         97248     0         0        80            2

Within the storage pools, we have defined volumes as shown in Example 10-2 on page 437. There are five volumes defined to the three storage pools. Data has been striped across both of the volumes that are associated with the Default User pool (poola).

436

IBM TotalStorage SAN File System

Example 10-2 List volumes
# sfscli lsvol
Name   State     Pool   Size (MB) Used (MB) Used (%)
====================================================
MASTER Activated SYSTEM 10224     384       3
avol1  Activated poola  102384    1216      1
avol2  Activated poola  40944     1216      2
bvol1  Activated poolb  51184     0         0
bvol2  Activated poolb  46064     0         0

The active policy is shown in Example 10-3. This policy assigns all files to the default user storage pool. This means that everything that is copied onto the SAN File System namespace will end up in poola.
Example 10-3 List policy
# sfscli lspolicy
Name           State  Last Active            Modified               Description
================================================================================
DEFAULT_POLICY active Sep 9, 2004 9:25:14 PM May 6, 2004 3:40:05 AM Default policy set (assigns all files to default storage pool)

There are three filesets defined, as shown in Example 10-4.


Example 10-4 List fileset
# sfscli lsfileset
Name      Fileset State Quota Type Quota (MB) Used (MB) Used (%) Threshold (%) Most Recent Image Server
========================================================================================================
ROOT      Attached      Soft       0          0         0        0             -                 mds2
homefiles Attached      Soft       0          128       0        80            -                 mds1
workfiles Attached      Soft       0          2304      0        80            -                 mds1

We have copied files into the homefiles fileset (which is attached to the directory homefiles). Figure 10-1 shows the view of these files on a Windows client.

Figure 10-1 Windows-based client accessing homefiles fileset

The files are striped across the volumes of poola. Example 10-5 shows the list of contents (using the reportvolfiles command) of the two volumes in poola, avol1 and avol2.
Example 10-5 List contents of avol1 volume
# sfscli reportvolfiles avol1
homefiles:homefiles/readme.doc
homefiles:homefiles/dontreadme.doc
homefiles:homefiles/instructions.txt
homefiles:homefiles/SANFS Admin Guide.pdf
homefiles:homefiles/SANFS Maint&PD Guide.pdf
homefiles:homefiles/SANFS_InstallGuide.pdf
homefiles:homefiles/sfs-package-2.1.0-7.i386.rpm
homefiles:homefiles/StatusReport.doc
workfiles:workfiles/inst.images/PMP3/U482893.bff
workfiles:workfiles/inst.images/PMP3/520003.tar
workfiles:workfiles/inst.images/PMP3/U482895.bff
workfiles:workfiles/inst.images/PMP3/U485143.bff
workfiles:workfiles/inst.images/PMP3/U485155.bff
workfiles:workfiles/inst.images/PMP3/U485162.bff
workfiles:workfiles/inst.images/PMP3/U485186.bff
workfiles:workfiles/inst.images/PMP3/U485191.bff
workfiles:workfiles/inst.images/PMP3/U485371.bff
workfiles:workfiles/inst.images/PMP3/U485401.bff
***etc etc***
# sfscli reportvolfiles avol2
workfiles:workfiles/inst.images/PMP3/U497873.bff
workfiles:workfiles/inst.images/PMP3/U497902.bff
workfiles:workfiles/inst.images/PMP3/U497904.bff
workfiles:workfiles/inst.images/PMP3/U497905.bff
workfiles:workfiles/inst.images/PMP3/U497906.bff
workfiles:workfiles/inst.images/devices.fcp.disk.ibm2145.rte
workfiles:workfiles/inst.images/new/bos.adt.syscalls.5.2.0.30.bff
workfiles:workfiles/inst.images/new/bos.diag.rte.5.2.0.30.bff
workfiles:workfiles/inst.images/new/bos.diag.util.5.2.0.30.bff
workfiles:workfiles/inst.images/new/devices.chrp.pci.rte.5.2.0.30.bff
workfiles:workfiles/inst.images/new/devices.common.IBM.disk.rte.5.2.0.30.bff
workfiles:workfiles/inst.images/new/devices.common.IBM.ethernet.rte.5.2.0.30.bff
***etc etc***

Now we will use the mvfile command to manually move a single file from one pool to another. If any FlashCopy images contain this file, the file within those images will also be moved from the original storage pool to the destination. You must be logged into the operating system on the engine hosting the master MDS to run this command. The command accepts the following parameters:
-f Forces the MDS to move the file even if the file is open, that is, being accessed by a client.
-pool pool_name Specifies the name of the storage pool to which to move the file. To defragment a file, rather than move it, specify the file's current storage pool.
-client client_name Specifies the name of a SAN File System client to perform the move or defragmentation of the file. The client must have access to all the volumes contained in the current and target storage pools. To list all active clients that can access a volume, use the reportclient -vol command. To list the volumes in a storage pool, use the lsvol -pool command.
path Specifies the fully qualified names of one or more files to move or defragment. A fully qualified name means the full directory path, for example, cluster-name/fileset-name/file-name or cluster-name/file-name. This parameter does not support wildcard characters in directory or file names.
- Specifies that you want to read the names of one or more files to move or defragment from stdin (for example, - << /work/files_list.txt).
Example 10-6 shows moving the file readme.doc from poola to poolb, using the client AIXRome to initiate the move.
Example 10-6 Move one file
mds2:~ # sfscli mvfile -pool poolb -client AIXRome /sanfs/homefiles/readme.doc
CMMNP5463I File /sanfs/homefiles/readme.doc was moved successfully.

In Example 10-7, we use the reportvolfiles command to verify that the readme.doc file was successfully moved to the bvol1 volume on poolb.
Example 10-7 Verify that file moved to bvol1
mds2:~ # sfscli reportvolfiles bvol1
homefiles:homefiles/readme.doc

10.1.2 Move multiple files using the mvfile command


The mvfile command can also move multiple files using a list of files piped into it (stdin). First, you need to create a file containing the names of the files that you want to move. In this example, we created a file called test1, as shown in Example 10-8. Make sure the files are listed in the format as shown.
Example 10-8 List of files to move
# cat test1 |more
/sanfs/homefiles/dontreadme.doc
/sanfs/homefiles/instructions.txt
/sanfs/homefiles/SANFS Admin Guide.pdf
/sanfs/homefiles/SANFS Maint&PD Guide.pdf
/sanfs/homefiles/SANFS_InstallGuide.pdf
/sanfs/homefiles/sfs-package-2.1.0-7.i386.rpm
/sanfs/homefiles/StatusReport.doc
/sanfs/workfiles/inst.images/PMP3/U482893.bff
/sanfs/workfiles/inst.images/PMP3/520003.tar
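One way to build such an input list, sketched here under the assumption that a UNIX or Linux client mounts the global namespace at /sfs/sanfs (as in the Chapter 9 setup), is to generate it on the client and then copy it to the master MDS:

# On a Linux or AIX SAN File System client: select candidate files and rewrite
# the client mount prefix (/sfs/sanfs/...) into the /sanfs/... form used by mvfile.
find /sfs/sanfs/homefiles -type f -name '*.pdf' | sed 's|^/sfs||' > /tmp/test1

# Copy /tmp/test1 to the master MDS (for example, with scp), then run there:
#   sfscli mvfile -pool poolb -client AIXRome - < /tmp/test1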


To move all the files contained in this input file, use the mvfile command format as shown in Example 10-9. This reads in the list of files contained in test1, and uses it as standard input (stdin) to the command.
Example 10-9 Move a stack of files using the mvfile command
# sfscli mvfile -pool poolb -client AIXRome - < test1
CMMNP5463I File /sanfs/homefiles/dontreadme.doc was moved successfully.
CMMNP5463I File /sanfs/homefiles/instructions.txt was moved successfully.
CMMNP5463I File /sanfs/homefiles/SANFS Admin Guide.pdf was moved successfully.
CMMNP5463I File /sanfs/homefiles/SANFS Maint&PD Guide.pdf was moved successfully.
CMMNP5463I File /sanfs/homefiles/sfs-package-2.1.0-7.i386.rpm was moved successfully.
CMMNP5463I File /sanfs/homefiles/StatusReport.doc was moved successfully.
CMMNP5463I File /sanfs/workfiles/inst.images/PMP3/U482893.bff was moved successfully.
CMMNP5463I File /sanfs/workfiles/inst.images/PMP3/520003.tar was moved successfully.

Important: The actual I/O will be performed by the specified client to and from the target and source volumes. This should be considered when selecting a client to perform the move. We recommend selecting a less loaded client (for example, one with spare CPU and I/O capacity) or scheduling the move to occur at a less busy time, in order to avoid a performance impact on an already heavily loaded client.
Once the files have been moved, verify that the file movement was successful using the reportvolfiles command. In Example 10-10, we verify that all the files have moved off the avol1 volume, since it is now empty.
Example 10-10 Verify files moved from avol1 volume
# sfscli reportvolfiles avol1
CMMNP5122I No files were found on Volume avol1.

Finally, we use the reportvolfiles command to verify that the moved files are distributed across the volumes in poolb, bvol1 and bvol2, as shown in Example 10-11.
Example 10-11 Verify that files moved to bvol1 volume
# sfscli reportvolfiles bvol1
homefiles:homefiles/readme.doc
homefiles:homefiles/dontreadme.doc
homefiles:homefiles/instructions.txt
homefiles:homefiles/SANFS Admin Guide.pdf
homefiles:homefiles/SANFS Maint&PD Guide.pdf
homefiles:homefiles/SANFS_InstallGuide.pdf
homefiles:homefiles/sfs-package-2.1.0-7.i386.rpm
workfiles:workfiles/inst.images/PMP3/U482893.bff
workfiles:workfiles/inst.images/PMP3/520003.tar
workfiles:workfiles/inst.images/PMP3/U482895.bff
#
# sfscli reportvolfiles bvol2
homefiles:homefiles/sfs-package-2.1.0-7.i386.rpm
homefiles:homefiles/StatusReport.doc
workfiles:workfiles/inst.images/PMP3/520003.tar
workfiles:workfiles/inst.images/PMP3/U485891.bff
workfiles:workfiles/inst.images/PMP3/U485893.bff
workfiles:workfiles/inst.images/PMP3/U485975.bff
workfiles:workfiles/inst.images/PMP3/U485976.bff
workfiles:workfiles/inst.images/PMP3/U485979.bff
workfiles:workfiles/inst.images/PMP3/U485986.bff
workfiles:workfiles/inst.images/PMP3/U485993.bff


10.1.3 Defragmenting files using the mvfile command


The mvfile command can also be used to defragment files. Defragmentation and redistribution mean that blocks are moved into contiguous extents. The partitions are used round-robin across the volumes with available space, with the result that files will be contiguous in at least extent-sized pieces, and at most partition-sized pieces. Example 10-12 shows moving a file within the same storage pool, poolb, which defragments the file. Note that there is no indication in the command output that the file has been defragmented.
Example 10-12 Defragment a file
# sfscli mvfile -pool poolb -client AIXRome /sanfs/homefiles/StatusReport.doc
CMMNP5463I File /sanfs/homefiles/StatusReport.doc was moved successfully.

10.2 Lifecycle management with file management policy


This section describes how to automate file movement using a file management policy. This feature provides lifecycle management capabilities to SAN File System. A file management policy is a set of rules that specify conditions for either automatically moving files from one storage pool to another, or for deleting them entirely from the storage pool. The file management policy is created as a text file, which is then used as input to the lifecycle management script to actually perform the file moves or deletes. The policy can be executed ad hoc, or scheduled using the cron daemon on the master MDS. One or more policy rule files can be created and executed as desired. Here is a summary of the tasks for setting up and executing lifecycle management on SAN File System:
1. Create the file management policy file.
2. Run the lifecycle management script in plan phase, specifying the file management policy file as input. This phase scans the SAN File System global namespace, and produces an output file listing the files to be moved or deleted, according to the policy.
3. Run the lifecycle management script in execute phase, specifying the plan file from the previous step as the input. This phase actually moves or deletes the files as required.
The policy rules are kept in one or more flat files that are created using a text editor. The lifecycle management script executes externally, that is, it is invoked at the UNIX command prompt on the master MDS. It cannot be initiated via the GUI, CLI, or CIM. Since only SAN File System clients actually have access to the user storage pools, an appropriate client with access to the source pool and target pool must be designated to perform the actual move or delete operations.


10.2.1 File management policy syntax


The parameters for the file management policy rules are:
RULE Initiates the rule statement.
rule-name Identifies the rule. This parameter is optional.
MIGRATE-FROM-POOL or DELETE-FROM-POOL Specifies whether to move or delete files.
source_pool_name Identifies the source storage pool of files to be moved or deleted.
target-pool_name Identifies the target storage pool of files to be moved. This parameter is not used if DELETE-FROM-POOL is used.
FOR FILESET (fileset_name) Specifies one or more filesets in which the file resides. This parameter is optional.
WHERE Compares the file attributes specified in the rule with the attributes of the file to determine whether the file should be moved or deleted.
AND Used to specify a compound of the following conditions:
AGE operator integer DAYS Age of a file, specified as less than (<), less than or equal (<=), greater than (>), or greater than or equal (>=) to a number of days since the file was last accessed.
SIZE operator integer KB | MB | GB Size of a file, specified as less than (<), less than or equal (<=), greater than (>), or greater than or equal (>=) to a number of kilobytes, megabytes, or gigabytes.
You can specify an AGE qualifier, a SIZE qualifier, or both.

10.2.2 Creating a file management policy


Our example policy will move files in the homefiles fileset that are larger than 300 KB. The policy will also delete files that are older than 365 days from this pool. Example 10-13 shows that the file called sfs-package-2.1.0-7.i386.rpm is larger than 300 KB and resides in poolb.
Example 10-13 Show large file in poolb
# sfscli statfile /sanfs/homefiles/sfs-package-2.1.0-7.i386.rpm
Name                                          Pool  Fileset   Server Size (B)  File Modified
=============================================================================================
/sanfs/homefiles/sfs-package-2.1.0-7.i386.rpm poolb homefiles mds1   110770205 Mar 24, 2004 1:53:51 PM

On the Windows SAN File System client, we can see several files that are larger than 300 KB, as shown in Figure 10-2 on page 443.


Figure 10-2 Verify file sizes in homefiles fileset

Now we need to create our policy. You need to do this with a text editor; there is no facility in the SAN File System CLI or GUI to do this. Save the rules to an output file; we have called it lcmscrpt.txt. Example 10-14 shows the contents of our sample policy. There are two rules:
The first specifies that files in the homefiles fileset that are larger than 300 KB and which are in poolb will be moved to poola.
The second specifies that any files in poolb which have not been accessed for 365 days or more will be deleted.
Example 10-14 Sample file management policy
mds2:/tmp # cat lcmscrpt.txt
RULE LrgFiles MIGRATE FROM POOL poolb TO POOL poola FOR FILESET (homefiles) WHERE SIZE >= 300 KB
RULE oldFiles DELETE FROM POOL 'poolb' WHERE AGE >= 365 days

10.2.3 Executing the file management policy


To execute a file management policy, you need to run the lifecycle management policy script. This is a Perl script called sfslcm.pl in the /usr/tank/server/bin directory. This script can be used to plan and execute the file management policy.


The following options are available for the lifecycle management policy script:
--verbose Print detailed output while executing.
--log <logdir> Log execution of the script into time/date stamped files in <logdir>.
--client <client> Preferred client(s) to move or delete files. The client must have access to all the volumes in all source and target storage pools specified in any move and delete operations.
--rules <rulesfile> File name of the previously created file management policy.
--plan <file> File name for the plan file. This file will be created if running in plan phase, or must already have been created if running in execute phase.
--phase {plan | execute} Specifies whether to run the script in plan or execute phase. The script must first be run in plan phase, then in execute phase. In the plan phase, the script scans through the namespace, selecting files that match the criteria in the policy. It produces a human-readable output file, which lists the file names matching the policy that are to be moved or deleted, as well as the appropriate storage pools. In execute phase, this output file is then used as input to perform the designated actions on the selected files.
In this example, we are using the rules file (lcmscrpt.txt) that we just created. The result of the plan phase will be written to /tmp/lcmoutput, as shown in Example 10-15. It is highly recommended to include the --verbose parameter, as this will display any errors when planning or executing the file management policy. Note that you do not need to specify the --client parameter in this phase, since the plan phase simply scans the metadata to determine which files match the policy; no access to the user storage pools is required.
Example 10-15 Plan phase
mds2:/usr/tank/server/bin # ./sfslcm.pl --verbose --rules /tmp/lcmscrpt.txt --plan /tmp/lcmoutput --phase plan
2004-09-28 01:48:19 : HSTHS0009I Beginning plan phase
2004-09-28 01:48:19 : HSTHS0010I Reading rules from file /tmp/lcmscrpt.txt
2004-09-28 01:48:19 : HSTHS0020I Rules summary: 1 pools were found in /tmp/lcmscrpt.txt
2004-09-28 01:48:19 : HSTHS0021I Pool poolb has 2 rules
2004-09-28 01:48:19 : HSTHS0022I End of rules summary report
2004-09-28 01:48:19 : HSTHS0011I Beginning to create plan for rules.
2004-09-28 01:48:19 : HSTHS0012I Running report of files in pool poolb
2004-09-28 01:48:21 : HSTHS0016I Adding plan records for pool poolb
2004-09-28 01:48:21 : HSTHS0017I Added 6 records for pool poolb
2004-09-28 01:48:21 : HSTHS0018I Finished creating plan. 6 records were created for 1 pools.
2004-09-28 01:48:21 : HSTHS0023I End of plan phase

After completing the plan phase, we recommend that you examine the output file to ensure that the file management policy that you created will be executed as expected. Example 10-16 on page 445 lists the contents of our plan file. It shows that six files will be migrated from poolb to poola, as they match the criteria in the rule file. You can delete entries from the plan file if you decide you do not want to migrate or delete certain files. Or, you might choose to split the file into pieces, and execute the pieces concurrently on different clients, or on the same client, in order to improve performance. This option is discussed further in 10.2.4, Lifecycle management recommendations and considerations on page 446. If you edit the plan file, be careful not to delete important data in the records.


In this case, we will execute the plan file as is.


Example 10-16 View plan file output
mds2:/usr/tank/server/bin # cat /tmp/lcmoutputm
MIGRATE:poolb:poola:homefiles:45:33:0:20040928103721:20040928103734:20040928103721:20040604140250:2453504:/sanfs/homefiles/readme.doc
MIGRATE:poolb:poola:homefiles:45:38:0:20040928104004:20040920122726:20040928104004:20040324135703:2867200:/sanfs/homefiles/SANFS Admin Guide.pdf
MIGRATE:poolb:poola:homefiles:45:39:0:20040928104005:20040920122726:20040928104005:20040324135808:2322432:/sanfs/homefiles/SANFS Maint&PD Guide.pdf
MIGRATE:poolb:poola:homefiles:45:40:0:20040928104006:20040920122726:20040928104006:20040324135536:1134592:/sanfs/homefiles/SANFS_InstallGuide.pdf
MIGRATE:poolb:poola:homefiles:45:41:0:20040928104007:20040920122727:20040928104007:20040324135351:110772224:/sanfs/homefiles/sfs-package-2.1.0-7.i386.rpm
MIGRATE:poolb:poola:homefiles:45:37:0:20040928120110:20040928120114:20040928120110:20040928120114:3620864:/sanfs/homefiles/StatusReport.doc
mds2:/usr/tank/server/bin #

Now we will execute the plan file by running the script in the execute phase, as shown in Example 10-17. We need to specify the --client parameter, specifying a SAN File System client with access to both poola and poolb, since these are referenced in the plan file.
Example 10-17 Execute phase
mds2:/usr/tank/server/bin # ./sfslcm.pl --verbose --client AIXRome --plan /tmp/lcmoutputm --phase execute
2004-09-28 01:55:08 : HSTHS0024I Beginning execute phase
2004-09-28 01:55:08 : HSTHS0025I Executing plan from /tmp/lcmoutputm
2004-09-28 01:55:08 : HSTHS0026I Beginning operations on pool poolb, plan record 1
2004-09-28 01:55:21 : HSTHS0027I End of operations on pool poolb: 6 migrations, 0 deletes, 0 errors, 0 operations skipped due to errors
2004-09-28 01:55:21 : HSTHS0030I End execute phase

We can quickly check the execution of the script by rerunning the statfile command. In Example 10-13 on page 442, we saw that this file was in poolb. Now, as shown in Example 10-18, this file has moved from poolb to poola.
Example 10-18 Verify large file moved from poolb to poola
# sfscli statfile /sanfs/homefiles/sfs-package-2.1.0-7.i386.rpm
Name                                          Pool  Fileset   Server Size (B)  File Modified
=============================================================================================
/sanfs/homefiles/sfs-package-2.1.0-7.i386.rpm poola homefiles mds1   110770205 Mar 24, 2004 1:53:51 PM
mds2:/usr/tank/server/bin #

You have now successfully automated file movement using a file management policy.


10.2.4 Lifecycle management recommendations and considerations


The lifecycle management feature is designed for repeated, automated execution. You can achieve this by scheduling the plan and execute phases using the cron daemon. As with manual file movement, consider the impact of the lifecycle management operations on the client specified to execute them. While this feature is designed to minimize the I/O impact on the client that is performing the moves, if you have the choice, you should consider using a less loaded client or scheduling the execution to run at a less busy time. I/O during the file movement phase is serialized on the client, and is executed in 1 MB pieces. In this way, the I/O is gated, and avoids overwhelming a particular client to the detriment of its regular workload.
Informal testing has observed scan performance (during the plan phase) of around 10 million files per hour using ordinary lab equipment. This figure is intended only as a guideline; your equipment and setup may be different, giving different results.
Only one scan (plan phase) can be executed at a time in the cluster; however, file moves or deletes (execute phase) can be parallelized to achieve higher aggregate throughput. To do this, split the plan file into pieces (using a text editor) according to fileset, then execute each smaller plan file separately with the lifecycle management script (a sketch of this approach appears at the end of this section). You can specify different clients, or even the same client, for the multiple executes. You should ensure there is adequate bandwidth on the client(s) and the storage device to support this.
When a file is moved, clients with read access are not delayed; however, writes to the file are delayed while the file is moved. It is therefore highly recommended to schedule the file moves to be executed during off-hours.
If the execute phase of the script terminates prematurely for any reason, we recommend generating a new plan file and running it; this makes sure the plan phase correctly identifies any files still matching the rules. If any entry in the plan file causes an error, the script terminates immediately. The script must be manually restarted in case of any errors; there is no automatic restart capability.
Finally, you should try to execute the plan as soon as possible after generating the plan file. If execution of the plan is delayed, normal client activity against the file system could cause changes that quickly invalidate the plan file.
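The following fragment sketches the parallel approach; the file names, fileset names, and client names are taken from the examples in this chapter and are only illustrative. The split works because the fileset name is one of the colon-delimited fields in each plan record.

# Split the plan file by fileset, then run two execute phases in parallel,
# each on a different client that has access to the pools involved.
grep ':homefiles:' /tmp/lcmoutput > /tmp/plan_homefiles
grep ':workfiles:' /tmp/lcmoutput > /tmp/plan_workfiles

cd /usr/tank/server/bin
./sfslcm.pl --verbose --client AIXRome   --plan /tmp/plan_homefiles --phase execute &
./sfslcm.pl --verbose --client LIXPrague --plan /tmp/plan_workfiles --phase execute &
wait

# For repeated, automated runs, the plan and execute phases could be scheduled
# with cron on the master MDS, for example (the times and days are arbitrary):
#   0 1 * * 6 /usr/tank/server/bin/sfslcm.pl --rules /tmp/lcmscrpt.txt --plan /tmp/lcmoutput --phase plan
#   0 3 * * 6 /usr/tank/server/bin/sfslcm.pl --client AIXRome --plan /tmp/lcmoutput --phase execute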


Chapter 11. Clustering the SAN File System Microsoft Windows client


This chapter describes how to use Microsoft Cluster Server (MSCS) on Windows 2003 with the SAN File System client to provide high availability access to the SAN File System global namespace. It summarizes the installation of MSCS, installation of the SAN File System MSCS Enablement software, configuration of the SAN File System global namespace as a cluster resource, and finally, setting up a CIFS client so that it can access the SAN File System global namespace in a fail-over configuration. MSCS is also supported for SAN File System on Windows 2000; the setup should be very similar to the procedures given in this chapter for Windows 2003. For more information about setting up SAN File System with MSCS, see Microsoft Cluster Server Enablement Installation and Users Guide, GC30-4115.


11.1 Configuration overview


Since Version 2.2.1, SAN File System supports the use of SAN File System Windows clients with Microsoft Cluster Server (MSCS). The cluster systems can run either:
Microsoft Windows 2000 Advanced Server
Microsoft Windows Server 2003 Enterprise Edition
A maximum of two nodes in the MSCS cluster is supported. Our lab configuration consists of the following components:
A SAN File System metadata server cluster with two nodes.
Two Windows 2003 systems with the SAN File System client code installed. They access the global namespace as the T drive. The Windows 2003 host names are sanbs1-8 and sanbs1-9. Each system has exclusive access to one LUN, which will be configured with a storage pool and a policy so that all files created by these systems will use only that pool.
A Windows system that will access a portion of the SAN File System global namespace as a CIFS share. It will access this through the cluster, which will define the share as a cluster resource.
Non-overlapping subdirectory portions of the SAN File System namespace are managed resources that are accessible on only one node in a given MSCS cluster at a time.
Note: The Microsoft Cluster Server model is a shared-nothing cluster implementation. Each resource is, at any point in time, exclusively owned by one cluster node or the other. Therefore, in order to map this model onto a SAN File System configuration, it is necessary to divide the SAN File System global namespace into non-overlapping portions that can be allocated to MSCS cluster nodes.
The configuration is shown in Figure 11-1.

Figure 11-1 MSCS lab setup


11.2 Cluster configuration


First, two Windows 2003 systems, which are already SAN File System clients, are installed with Microsoft Cluster Server (MSCS). We used a very simple installation; for more information about this topic, see the Microsoft documentation, as detailed instructions for installing MSCS are beyond the scope of this redbook.

11.2.1 MSCS configuration


The basic configuration on the Cluster Administrator, showing the initially defined resources, is shown in Figure 11-2.

Figure 11-2 Basic cluster resources


The defined network interfaces are shown in Figure 11-3. There are two networks: a public and a private.

Figure 11-3 Network Interfaces in the cluster

Figure 11-4 shows the default cluster resource types. After configuration with SAN File System, we will see a new cluster resource type defined.

Figure 11-4 Cluster Resources

11.2.2 SAN File System configuration


The cluster nodes have the SAN File System client code installed, and are accessing the global namespace as drive T, as shown in Figure 11-5.


Figure 11-5 SAN File System client view of the global namespace

Both MSCS nodes have access to a LUN on the SVC. We have configured this in the SAN File System so that only these nodes can write to this disk. First, we created a new storage pool, CLUSTERPOOL, as shown in Example 11-1.
Example 11-1 Storage pool for use by the MSCS
tank-mds2:~ # sfscli lspool CLUSTERPOOL
Name        Type Size (MB) Used (MB) Used (%) Threshold (%) Volumes
===================================================================
CLUSTERPOOL User 2032      0         0        80            1

We can see the ID of the LUN that is visible to the Windows 2003 servers using the lslun command, as shown in Example 11-2. This confirms that they both see the same LUN.
Example 11-2 Show the LUN visible to the clustered nodes
sfscli> lslun -client sanbs1-9
Lun ID                                     Vendor Product Size (MB) Volume State Storage Device WWNN Port WWN
==============================================================================================================
VPD83NAA6=600507680184001AA800000000000077 IBM 2145 2047 Available UNKNOWN UNKNOWN
sfscli> sfscli lslun -client sanbs1-8
Lun ID                                     Vendor Product Size (MB) Volume State Storage Device WWNN Port WWN
==============================================================================================================
VPD83NAA6=600507680184001AA800000000000077 IBM 2145 2047 Available UNKNOWN UNKNOWN

The LUN was then defined as a volume in the CLUSTERPOOL storage pool, using the mkvol command, as in Example 11-3.
Example 11-3 Add the shared LUN to the storage pool
tank-mds1:~ # sfscli mkvol -lun VPD83NAA6=600507680184001AA800000000000077 -client sanbs1-8 -pool CLUSTERPOOL clustervol1
CMMNP5426I Volume clustervol1 was created successfully.


Example 11-4 shows the newly added volume, using the lsvol command. We have other volumes, pools and clients; however, we are setting up a dedicated pool and fileset only for our MSCS configuration.
Example 11-4 List SAN File System volumes
tank-mds1:~ # sfscli lsvol
Name                   State     Pool         Size (MB) Used (MB) Used (%)
==========================================================================
MASTER                 Activated SYSTEM       2032      224       11
ITSO_SYS_POOL-SYSTEM-0 Activated SYSTEM       2032      64        3
SVC-DEFAULT_POOL-0     Activated DEFAULT_POOL 2032      16        0
clustervol1            Activated CLUSTERPOOL  2032      0         0

Now we want to add a fileset, called cluster_fs1, to reside in the storage pool. We use the mkfileset command, as in Example 11-5.
Example 11-5 Create a fileset for use by MSCS
tank-mds1:~ # sfscli mkfileset -attach ATS_GBURG -dir cluster_dir cluster_fs1
CMMNP5147I Fileset cluster_fs1 was created successfully.
tank-mds1:~ # sfscli lsfileset -l
Name Fileset State Serving State Quota Type Quota (MB) Used (MB) Used (%) Threshold (%) Images Most Recent Image Server Assigned Server Attach Point Directory Name Directory Path Parent Children Description
===============================================================================================================================================================================================================
ROOT        Attached Online Soft 0      0  0 0  0 tank-mds1 tank-mds1 ATS_GBURG             ATS_GBURG   2 Root fileset
aixfiles    Attached Online Soft 200000 16 0 80 0 tank-mds1 tank-mds1 ATS_GBURG/aixfiles    aixfiles    ATS_GBURG ROOT 0
cluster_fs1 Attached Online Soft 0      0  0 80 0 tank-mds1 ATS_GBURG/cluster_dir cluster_dir ATS_GBURG ROOT 0 -

We make sure that our MSCS nodes have root privileges to the SAN File System, using the addprivclient command, as shown in Example 11-6. For more information about privileged clients, see 7.6.2, Privileged clients on page 297.
Example 11-6 Create privileged clients
tank-mds1:~ # sfscli addprivclient sanbs1-8 sanbs1-9
Are you sure you want to add sanbs1-8 as a privileged client? [y/n]:y
Are you sure you want to add sanbs1-9 as a privileged client? [y/n]:y
CMMNP5378I Privileged client access successfully granted for sanbs1-8.
CMMNP5378I Privileged client access successfully granted for sanbs1-9.

Now we want to create a policy so that all files in the fileset we created will be stored in the CLUSTERPOOL storage pool. This makes sure that the MSCS nodes, and only those nodes among the SAN File System clients, will have access to the LUN. We create a text file, called /tmp/policy.txt, with the rule shown in Example 11-7 on page 453. The rule directs any file in the cluster_fs1 fileset to the storage pool CLUSTERPOOL. All other files will go into the designated default storage pool. In our configuration, all the other clients have access to another pool, which has been designated as the default pool. We discussed how to set up policy for non-uniform storage configurations like this in 7.8.8, Policy management considerations on page 328.

Example 11-7 Contents of policy input file
VERSION 1 /* Do not remove or change this line! */
rule 'stgRule1' set stgpool 'CLUSTERPOOL' for FILESET('cluster_fs1')

Currently, we have the default policy active. To create a new policy, we use the mkpolicy command, referencing our text file. The new policy is called cluster_policy, as shown in Example 11-8.
Example 11-8 Create a new policy
tank-mds1:~ # sfscli mkpolicy -file /tmp/policy.txt cluster_policy
CMMNP5193I Policy cluster_policy was created successfully.

Now we will activate the new policy, using the usepolicy command, as shown in Example 11-9.
Example 11-9 Activate the new policy
tank-mds1:~ # sfscli lspolicy
Name           State    Last Active             Modified                 Description
=====================================================================================
DEFAULT_POLICY active   Aug 19, 2005 4:26:10 PM Aug 19, 2005 10:41:19 AM Default policy set (assigns all files to default storage pool)
cluster_policy inactive -                       Aug 23, 2005 4:34:55 PM  -
tank-mds1:~ # sfscli catpolicy cluster_policy
cluster_policy:
VERSION 1 /* Do not remove or change this line! */
rule 'stgRule1' set stgpool 'CLUSTERPOOL' for FILESET('cluster_fs1')
tank-mds1:~ # sfscli usepolicy cluster_policy
Are you sure you want to use this policy? New files should be allocated to a pool that is accessible to the clients where the file is needed. [y/n]:y
CMMNP5189I Policy cluster_policy is now the active policy.
tank-mds1:~ # sfscli lspolicy
Name           State    Last Active              Modified                 Description
======================================================================================
DEFAULT_POLICY inactive Aug 26, 2005 1:34:55 PM  Aug 19, 2005 10:41:19 AM Default policy set (assigns all files to default storage pool)
cluster_policy active   Aug 25, 2005 2:15:50 PM  Aug 23, 2005 4:34:55 PM  -


The client can now see the directory corresponding to the fileset, which is cluster_dir, as shown in Figure 11-6.

Figure 11-6 Fileset directory accessible

We have to take ownership and set permissions on the directory for the fileset, cluster_dir. These were set as shown in Figure 11-7. We set the owner for this directory to be Administrator, and gave full control to the Administrator and to the Cluster Services account. Full permissions for the Cluster Services account are required if you will create a clustered CIFS share, as we are doing.

Figure 11-7 Show permissions and ownership


Now, we check that the permissions and definitions are correct by creating a file on the volume, as shown in Figure 11-8.

Figure 11-8 Create a file on the fileset.

Finally, to verify our policy is correct, we use the reportvolfiles command, as in Example 11-10. It confirms the newly created file has been stored in the volume corresponding to the LUN that is visible in the Microsoft cluster.
Example 11-10 Confirm the file is stored in the correct volume
tank-mds1:~ # sfscli reportvolfiles clustervol1
cluster_fs1:cluster_dir/test.file

Now that we have our basic setup, the next step is to install the SAN File System Microsoft Cluster Server Enablement package.

11.3 Installing the SAN File System MSCS Enablement package


The SAN File System MSCS Enablement package, IBM-SFS-MSCS-Enablement-version.exe, is installed in the package repository of the MDS with the SAN File System software package. It is located in /usr/tank/packages on the MDS. You can either use secure copy (for example, scp) to copy it down to each node in the MSCS cluster, or you can access it by opening the browser interface to the MDS and selecting Download Client Software. You need to transfer the enablement package to each node in the MSCS cluster and install it on each node in turn. We will start with sanbs1-8.
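As an illustration only, the package can be pulled from the MDS with an scp client such as PuTTY's pscp on each Windows node; the MDS host name (tank-mds1) and the exact package file name will differ in your installation.

rem On each MSCS node, from a command prompt with pscp (PuTTY) available:
rem the host name and package version below are examples only.
pscp root@tank-mds1:/usr/tank/packages/IBM-SFS-MSCS-Enablement-WIN2K-2.2.2.93.exe C:\temp\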


1. To start the installation, run the executable; ours was called IBM-SFS-MSCS-Enablement-WIN2K-2.2.2.93.exe. On the first window, shown in Figure 11-9, choose the installation language.

Figure 11-9 Choose the installation language

2. You will see the license agreement, as in Figure 11-10. Click Yes.

Figure 11-10 License Agreement

3. Now enter the options shown in Figure 11-11 on page 457. The User Name and Company Name can be any string appropriate for your environment, and for the Serial Number, enter IBM-SFS-MSCS-100. Note that the User Name does not have to be an operating system user ID.


Figure 11-11 Complete the client information

4. Choose the installation directory for the package. We accepted the default, as shown in Figure 11-12.

Figure 11-12 Choose where to install the enablement software


5. Finally, you can confirm the installation parameters (see Figure 11-13).

Figure 11-13 Confirm the installation parameters

6. After the installation completes, you are prompted to reboot the system. After rebooting sanbs1-8, we repeated the same installation steps on sanbs1-9 so that the cluster enablement software is available on each system. Now we need to define the SAN File System into the cluster.

11.4 Configuring SAN File System for MSCS


After installing the enablement software, a new resource type, SANFS, is automatically created, as shown in Figure 11-14 on page 459. This was not there before, as we note in Figure 11-4 on page 450.


Figure 11-14 New SANFS resource is created

Now we will create a cluster group for our SAN File System. Right-click an existing cluster group and select New Group, as shown in Figure 11-15.

Figure 11-15 Create a new cluster group


Enter the properties for your group - first you give it a name and optional description, as shown in Figure 11-16. Our group is called ITSOSFSGroup. Click Next to continue.

Figure 11-16 Name and description for the group

On the next window, you can specify preferred owners for the group. We left this blank (see Figure 11-17).

Figure 11-17 Specify preferred owners for group

Click Next to create the group. If it was created successfully, a window similar to Figure 11-18 on page 461 will display.


Figure 11-18 Group created successfully

The group now appears in the main cluster display (see Figure 11-19). Note that it is offline.

Figure 11-19 ITSOSFSGroup displays


Now we need to add a resource to the group for the actual SAN File System. Click Resource Types, right-click the SANFS resource, and then select New Resource, as shown in Figure 11-20.

Figure 11-20 Create new resource

Give your resource a name and optional description. Make sure SANFS is selected as the Resource type, and that the resource is in the group that you just defined (ITSOSFSGroup in our case, as shown in Figure 11-21). Click Next to continue.

Figure 11-21 New resource name and description

On the next window, Figure 11-22 on page 463, all nodes should be included in the Possible owners window. If they are not, move them by selecting each in turn, and clicking Add. Click Next.

Figure 11-22 Select all nodes as possible owners

Next you can enter in any resource dependencies. We had none eligible to select, as shown in Figure 11-23. Click Next.

Figure 11-23 Enter resource dependencies


In Figure 11-24, enter the parameters for the SANFS resource. Specifically, enter the fileset that you want to define as a cluster resource.

Figure 11-24 SAN File System resource parameters

Click Browse; the drop-down list displays all first-level directories in the SAN File System global namespace, as in Figure 11-25. In our configuration, we have two directories that are actually fileset attachment points, T:\aixfiles and T:\cluster_dir, as well as the first-level directory, t:\lost+found, which is part of the ROOT fileset. You may only select first-level directories (of the format T:\directoryname) as resources here, regardless of whether they are fileset attachment points. Any directory that is below the first level (for example, T:\aixfiles\subdir1, T:\cluster_dir\subdir2\subdir3) is not available for definition as a cluster resource. We selected the fileset that we know is available to the MSCS configuration (cluster_dir).

Figure 11-25 Display filesets

The Parameters window re-displays with the fileset path shown, as in Figure 11-26 on page 465.


Figure 11-26 Fileset for cluster resource selected

Click Finish. You will see a pop-up similar to Figure 11-27 that states that the new resource was successfully created.

Figure 11-27 Cluster resource created successfully


If you click Resources, you will see the newly created resource. Note that it is offline, as we have not brought it online yet (see Figure 11-28).

Figure 11-28 New resource in Resource list

Now we will bring the group and resource online. Right-click the group (ITSOSFSGroup) and select Bring Online, as shown in Figure 11-29.

Figure 11-29 Bring group online

The display changes to indicate that the group, and its resources, are online (see Figure 11-30 on page 467). It is currently owned by the system where we are doing the configuration, in this case, sanbs1-9.


Figure 11-30 Group and resource are online

To check the initial failover of the resource, we shut down the owning node, sanbs1-9. We expect this resource to move to the other node, sanbs1-8. We start Cluster Administrator on sanbs1-8, and confirm that the ownership has transferred correctly. Note sanbs1-9 is showing as down (see Figure 11-31).

Figure 11-31 Resource moves ownership on failures.


After rebooting sanbs1-9, the ownership of the SAN File System resource stays with sanbs1-8, since we did not specify a preferred owner (see Figure 11-32).

Figure 11-32 Resource stays with current owner after rebooting the original owner

Now we will show how to set up the MSCS configuration so that the SAN File System resource is shared to TCP/IP attached clients via CIFS.

11.4.1 Creating additional cluster groups


You can share additional filesets (if they are defined with the appropriate access and policy as described in this chapter). Each additional fileset should correspond to a cluster group, that is, you would define an additional group using the same process as we have just shown, and select another fileset, as in Figure 11-25 on page 464. Since each group corresponds to a different fileset, this ensures that no part of the SAN File System namespace will be shared by more than one cluster node at a time. This is required for consistency.
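For administrators who prefer to script this, the same group and resource definitions can also be created with the cluster.exe command-line tool included with Windows Server 2003. This is a minimal sketch only: the group and resource names are examples, and the SANFS resource parameter (the fileset path) still needs to be set in Cluster Administrator as shown earlier, because we do not document the private property names here.

REM Define an additional group and a SANFS resource for another fileset
cluster group "ITSOSFSGroup2" /create
cluster resource "SANFS_fileset2" /create /group:"ITSOSFSGroup2" /type:"SANFS"
REM Set the fileset parameter in Cluster Administrator (as in Figure 11-24),
REM then bring the group online
cluster group "ITSOSFSGroup2" /online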

11.5 Setting up cluster-managed CIFS share


We used the following Microsoft article for the basic procedure. Note, however, that this article is referring to sharing a physical disk via CIFS. Since we are using SAN File System, the steps we follow are slightly different:
http://www.microsoft.com/technet/prodtechnol/windowsserver2003/library/ServerHelp/e59d826b-c1c7-4022-ad3a-cfc5656202c9.mspx

First, remember we had to set appropriate permissions on the directory corresponding to the fileset to be shared for the cluster services account, as in Figure 11-7 on page 454. We now need to define some additional resources in the group ITSOSFSGroup. First, we need an IP address to be associated with the CIFS share. This is the IP address that CIFS clients will use to access the share. Right-click the group and select New Resource. Give the resource a name, CIFSShareIP in our case, an optional description, select Resource type of IP Address, and the correct group (ITSOSFSGroup in our case), as in Figure 11-33 on page 469. Click Next.


Figure 11-33 Create IP Address resource

Specify the properties. Here are the properties we used (displayed after we created the resource and brought it online). In the General tab, both cluster nodes were selected as possible owners (see Figure 11-34).

Figure 11-34 IP address resource: General properties


We used the defaults for the Dependencies and Advanced tabs. On the parameters window (Figure 11-35), we specified a TCP/IP address to be associated with the CIFS share, 9.82.23.49, which was on the public LAN. For our environment, we obtained a new TCP/IP address; however, your environment may differ.

Figure 11-35 IP address resource: Parameters

Next, we created a Network Name resource by selecting New Resource. We selected Network Name as the resource type, and ITSOSFSGroup for the group. We named the resource ITSOWinNetwork. Figure 11-36 on page 471 shows the General properties for this resource.


Figure 11-36 Network Name resource: General properties

Under Dependencies (Figure 11-37), we specified the SANFS and IPAddress resources to be brought online first.

Figure 11-37 Network Name resource: Dependencies


In the Parameters tab (Figure 11-38), we gave it a name, ITSOWINSHARE.

Figure 11-38 Network Name resource: Parameters

Finally, we create the File Share resource. Select New Resource, enter a name (ITSOWinShare in our case), select File Share as the resource type, and ITSOSFSGroup for the group. Figure 11-39 shows the General properties for this resource.

Figure 11-39 File Share resource: General properties


Under the Dependencies tab (Figure 11-40), we selected the SANFS, IP Address, and Network Name resources.

Figure 11-40 File Share resource: dependencies

Figure 11-41 shows the Parameters tab. The Share name specified, ITSOWinShare, is the name by which the CIFS clients will reference it when they map the network drive.

Figure 11-41 File Share resource: parameters


We bring all the newly created resources online by right-clicking each one in turn and selecting Bring Online (see Figure 11-42). The resources are now online and owned by sanbs1-8, since this is where we initially configured them.

Figure 11-42 All file share resources online

Now we can access the share from a TCP/IP-attached client. To access the share, we select Tools → Map Network Drive. We enter drive letter M, and \\9.82.23.49\ITSOWinShare for the Folder. This information matches the parameters defined for the IP Address resource in Figure 11-35 on page 470, and for Figure 11-41 on page 473. You may need to specify a different user name, depending on how your user authentication is configured.

Figure 11-43 Designate a drive for the CIFS share
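The same mapping can be done from a Windows command prompt with the standard net use command. This is a sketch only; the drive letter, IP address, and share name are the ones from our example configuration, and the credentials line is an assumption for environments that require them:

REM Map the clustered CIFS share to drive M:
net use M: \\9.82.23.49\ITSOWinShare
REM Supply alternate credentials if required by your authentication setup:
REM net use M: \\9.82.23.49\ITSOWinShare /user:DOMAIN\username *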

Figure 11-44 on page 475 shows that the client can see the same files as are visible on the SAN File System client (compare with Figure 11-8 on page 455). We copied another file, SFS22_PDGuide.pdf, to the share to show it can be written to by the client. The other file in the directory, arb.sfs, is automatically installed when we created the SAN File System resource, and is used by the MSCS nodes to arbitrate who is the owner of the resource.


Figure 11-44 CIFS client access SAN File System via clustered SAN File System client

To test the behavior in a failover, we initiated a long copy operation from the client to the CIFS share (see Figure 11-45).

Figure 11-45 Copy lots of files onto the share


At this time, sanbs1-8 owned the share and associated resources. We shut down this system. At the time of the failure, the drive became inaccessible to the client (see Figure 11-46).

Figure 11-46 Drive not accessible

The other node, sanbs1-9, took over the resource within seconds, and the drive again became accessible to the client. We could resume the copy operation. This was similar to the behavior that would be observed in a non-SAN File System environment, if a temporary network glitch caused a regular CIFS share to become unavailable. As another test, we copied some additional PDF files to the share, and opened one of them in Acrobat. We then shut down the cluster node owning the share resource. We tried to open a different PDF file, but got the same message that the drive was inaccessible, until the other node took over the resource (estimated less than 10 seconds). We could then open a new PDF file.


Chapter 12. Protecting the SAN File System environment


In this chapter, we discuss backup and restore techniques for the SAN File System, including the following topics:
- Introduction to backup and recovery operations
- File/file system recovery
- Disaster recovery (DR) backup and restore
- Metadata backup and restore
- SAN File System FlashCopy functions
- Detailed examples with IBM Tivoli Storage Manager


12.1 Introduction
Data protection is the process of making extra copies of data, so that it can be restored in the event of various types of failure. The type of data protection (or backup) done depends on the kinds of failure that you wish to avoid. Various failures might require restore of a single file, an older version of a file, a directory, a LUN, or an entire system. Various methods for protecting the SAN File System are available, including these:
- SAN File System FlashCopy
- Ability to back up SAN File System files with third-party backup/restore applications (for example, IBM Tivoli Storage Manager, Legato NetWorker, and VERITAS NetBackup)
- Ability to use storage system-based protection methods (for example, FlashCopy and PPRC functions of the ESS and SVC), also known as LUN-based backup
- Ability to save the SAN File System cluster configuration and restore/execute it

SAN File System FlashCopy provides a space-efficient image of the contents of part of the SAN File System global namespace at a particular moment. SAN File System supports the use of backup tools that may already be present in your environment. For example, if your enterprise currently uses a storage management product, such as IBM Tivoli Storage Manager, SAN File System clients can use the functions and features of that product to back up and restore files that reside in the SAN File System global namespace. Another option is LUN-based backup, which uses the hardware-based instant copy features available in the underlying storage subsystems supported by SAN File System, such as FlashCopy in SVC and ESS. Finally, you can use a SAN File System command to back up the system metadata. This will create a file that can then be converted into scripts that will automatically re-create the SAN File System metadata before restoring all of the user data.

When backing up files stored in SAN File System, you must save both the actual files themselves and the file metadata. Our examples will show some approaches for this. For SAN File System, an administrator must also back up the system metadata, which includes information about fileset attachment points, storage pools, volumes, and file placement policies. This backup data is used to re-create the cluster state if necessary.

12.1.1 Types of backup


In SAN File System, backup and restore can be broadly classified into two types:
- File-based
- LUN-based

In a file-based backup, the smallest unit of restore is an individual file. For file-based backup, there are two basic methods:
- SAN File System FlashCopy, which backs up at the fileset level, but provides the ability to restore parts of the fileset, such as directories, groups of files, or individual files. See 12.4, File recovery using SAN File System FlashCopy function on page 493 for more information.
- Operating system utilities and vendor-provided backup/recovery tools. These include commands such as tar, cpio, xcopy, Windows Backup, IBM Tivoli Storage Manager, VERITAS NetBackup, and Legato NetWorker. All these should be able to access the SAN File System global namespace exactly as they would a local drive.

An example of using Tivoli Storage Manager for file-based backup is shown in 12.5, Back up and restore using IBM Tivoli Storage Manager on page 502.

When using a file-based backup method, it is important to be aware of the associated file metadata backup (this includes all the permissions and extended attributes of the files). This file metadata for Windows-created files can only be backed up completely from a Windows backup client or utility. Similarly, file metadata for UNIX (including Linux) files can only be backed up completely from another UNIX-based backup client or utility. Therefore, if it is important to preserve full file attribute information, we recommend creating separate filesets by primary allegiance, that is, you would have certain filesets that will only contain Windows-created files, and other filesets that will only contain UNIX-created files. In this way, you can back up these filesets from the appropriate client OS.

In a LUN-based approach, the administrator can use the instant copy features that exist in the storage subsystems that SAN File System supports. See 12.2, Disaster recovery: backup and restore on page 479 for more details.

12.2 Disaster recovery: backup and restore


We will define a disaster as a situation in which the SAN File System is completely destroyed (all MDSs and attached storage). Recovery of the SAN File System clients is beyond the scope of this discussion.

12.2.1 LUN-based backup


We recommend using a LUN-based backup, or instant copy feature of the supported storage subsystem, like FlashCopy for ESS and SVC. This will make a volume backup of all the LUNs in the SAN File System, both system and user pools.

Advantages of LUN-based backup


These are some of the advantages:
- It is done at the storage subsystem layer, so the SAN File System engines are not involved in the backup process.
- It deals with data at the byte level, and has the ability to back up and restore the entire SAN File System global namespace in a single operation.
- The backup and restore of the complete file system (meaning both metadata and user data) is done together, because it happens at the LUN level.

Limitations of LUN-based backup


These are some of the limitations:
- It is not granular, and does not provide individual file or LUN restore capability.
- You must save and restore all the SAN File System LUNs, both metadata and file data. The LUNs in the System Pool and the User Pools form a consistency group (that is, they must be backed up and restored together).
- It requires the MDS to be fully quiesced.


12.2.2 Setting up a LUN-based backup


To back up the system using the LUN-based backup method, all the LUNs in the System Pool and User Pools must be in a static, consistent state, to ensure a static state of the SAN File System LUNs both for the metadata and the user data. The following steps are used:
1. Stop or pause all SAN File System client applications. Since this task is application-specific, you will need to follow the application documentation for details on performing this step.
2. Quiesce the SAN File System MDSs using the quiescecluster command on the master MDS (see Example 12-1). This ensures that the SAN File System MDSs and all clients have completed all active transactions, and flushes their data to disk.
Example 12-1 quiescecluster command
sfscli> quiescecluster -state full
CMMNP5229I Cluster successfully in quiescent state.

This procedure will also lock out any subsequent new I/O from the clients or MDSs.
3. Copy the following critical system configuration files from each of the MDSs to offline media, such as tape or another system, for example, the Master Console. These files will be different on each MDS, so ensure that you copy them for each MDS separately. Example 12-2 shows part of the copy operation for our lab setup. You will have to use a secure copy (scp) or secure FTP (sftp) utility, such as those provided by Cygwin or PuTTY. The files are:
- /etc/init.d/boot.local
- /etc/sysconfig/network/routes
- /etc/sysconfig/network/ifcfg-eth0
- /etc/HOSTNAME
- /etc/hosts
- /etc/resolv.conf
- /root/.tank.passwd
- /usr/tank/admin/truststore
- /usr/tank/admin/config/cimom.properties
- /usr/tank/server/config/Tank.Bootstrap
- /usr/tank/server/config/Tank.Config

Once copied to another location, you could use a third-party backup application or OS utility to save the files to removable media, or use tar on the MDS to back up these files. However, installing or running a third-party backup application on the MDS is not supported.
Example 12-2 Copying system config files
$ scp truststore root@9.42.164.114:/usr/tank/admin/truststore
root@9.42.164.114's password:
truststore                100% 1901
$ scp cimom.properties root@9.42.164.114:/usr/tank/admin/config/cimom.properties
root@9.42.164.114's password:
cimom.properties          100% 2450
$ scp Tank.Bootstrap root@9.42.164.114:/usr/tank/server/config/Tank.Bootstrap
root@9.42.164.114's password:
Tank.Bootstrap            100%   60
$ scp Tank.Config root@9.42.164.114:/usr/tank/server/config/Tank.Config
root@9.42.164.114's password:
Tank.Config               100%   79
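Rather than copying the files one at a time, a small script can loop over the list. This is a minimal sketch only, assuming password- or key-based ssh access from the MDS to a backup host; the backup host name and target directory are placeholders, not part of the product:

#!/bin/sh
# Copy the critical MDS configuration files to a backup host.
# BACKUP_HOST and DEST are hypothetical; adjust for your environment.
BACKUP_HOST=backuphost.example.com
DEST=/backup/sanfs-config/$(hostname)

FILES="/etc/init.d/boot.local /etc/sysconfig/network/routes \
 /etc/sysconfig/network/ifcfg-eth0 /etc/HOSTNAME /etc/hosts /etc/resolv.conf \
 /root/.tank.passwd /usr/tank/admin/truststore \
 /usr/tank/admin/config/cimom.properties \
 /usr/tank/server/config/Tank.Bootstrap /usr/tank/server/config/Tank.Config"

# Create the target directory once, then copy each file
ssh root@$BACKUP_HOST "mkdir -p $DEST"
for f in $FILES; do
    scp "$f" root@$BACKUP_HOST:"$DEST"/
done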

As an alternative to backing up the files manually, you can use an option on the setupsfs command to create an archive of the files. Run setupsfs -backup on each MDS, as shown in Example 12-3. This command creates an archive of the critical files needed for recovery of the MDS(s) and is known as a DRfile. You can copy the archive file to diskette, or onto another server for an external copy.
Example 12-3 Create DRfile archive
tank-mds1:/usr/tank/admin/bin # setupsfs -backup
/etc/HOSTNAME
/etc/tank/admin/cimom.properties
/etc/tank/server/Tank.Bootstrap
/etc/tank/server/Tank.Config
/etc/tank/server/tank.sys
/etc/tank/admin/tank.properties
/usr/tank/admin/truststore
/var/tank/server/DR/TankSysCLI.auto
/var/tank/server/DR/TankSysCLI.volume
/var/tank/server/DR/TankSysCLI.attachpoint
/var/tank/server/DR/After_upgrade_to_2.2.1-13.dump
/var/tank/server/DR/After_upgrade_to_2.2.1.13.dump
/var/tank/server/DR/Before_Upgrade_2.2.2.dump
/var/tank/server/DR/drtest.dump
/var/tank/server/DR/Moved_to_ESSF20.dump
/var/tank/server/DR/SFS_BKP_After_Upgrade_to_2.2.0.dump
/var/tank/server/DR/Test_051805.dump
/var/tank/server/DR/ATS_GBURG.rules
/var/tank/server/DR/ATS_GBURG_CLONE.rules
Created file: /usr/tank/server/DR/DRfiles-tank-mds1-20050912114651.tar.gz
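Once the archive has been created, copy it off the MDS so that it survives a loss of the engine. A minimal sketch, where the target host name and path are examples only:

# Copy the newest DRfile archive off the MDS (host name and path are examples)
LATEST=$(ls -t /usr/tank/server/DR/DRfiles-*.tar.gz | head -1)
scp "$LATEST" root@masterconsole.example.com:/backup/sanfs-drfiles/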

4. Begin the storage subsystem copy service according to its specific procedures. In our lab setup, we are using FlashCopy on an IBM TotalStorage SAN Volume Controller, Model 2145. Figure 12-1 shows the FlashCopy setup. We created Source and Target vdisks for all the User Pool and System Pool LUNs. The User Pool vdisks and the System Pool vdisks are then configured in a consistency group called sanfs_group.

Figure 12-1 SVC FlashCopy relationships and consistency group

5. After the storage subsystem copy completes, re-enable the SAN File System MDS using the resumecluster command on the master MDS, as shown in Example 12-4.
Example 12-4 resumecluster
sfscli> resumecluster
CMMNP5233I Cluster successfully returned to the online state.

6. Restart the client applications using the specific procedures for those applications.

Important: In order to ensure consistency of the restore in the event of a disaster, you need a copy of the configuration files from each MDS, which should be labeled to match the LUN copy (FlashCopy) image.

12.2.3 Restore from a LUN-based backup


In this section, we discuss two scenarios for restore:
- MDS failure
- Loss of the back-end storage (System Pool and User Pools)

Restore from an MDS failure


In this scenario, the operating system of the MDS is corrupted and cannot be recovered; however, the storage (System Pool and User Pools) and the SAN are working and available with no corruption on the physical LUNs. We assume that a backup of the system configuration files was made, as described in 12.2.2, Setting up a LUN-based backup on page 480. The recovery process has the following steps:
1. Stop all the application servers (SAN File System clients), preferably by shutdown. For an AIX or Linux client, you could unload the SAN File System driver using the rmstclient command. For a Solaris client, unmount the SAN File System. A Windows client needs to be shut down to unload the SAN File System.
2. If possible, stop the SAN File System cluster with stopcluster from the master MDS, as shown in Example 12-5, and shut down all the MDSs.

Note: If your MDSs are not operational, this step is omitted.
Example 12-5 Stop the MDS cluster
sfscli> stopcluster
Are you sure you want to shutdown the cluster? [y/n]:y
CMMNP5242I Cluster shutdown successfully.

3. Power on each of the MDSs, and re-install the operating system, as described in 5.2.2, Install software on each MDS engine on page 127.
4. Now copy back the saved system configuration files on each MDS:
- /root/.tank.passwd
- /usr/tank/admin/truststore
- /usr/tank/admin/config/cimom.properties
- /usr/tank/server/config/Tank.Bootstrap
- /usr/tank/server/config/Tank.Config


5. Log in to each MDS and run the following commands:

# /usr/tank/server/bin/device_init.sh
# /usr/tank/admin/bin/startCimom
# /usr/tank/admin/bin/startConsole

6. On the master MDS, run startcluster, and verify that all the MDSs are running using the lsserver command, as shown in Example 12-6.
Example 12-6 Start cluster and verify
sfscli> startcluster
CMMNP5236I Cluster started successfully.
sfscli> lsserver
Name State  Server Role  Filesets Last Boot
========================================================
mds3 Online Master       0        May 21, 2004 9:19:27 AM
mds4 Online Subordinate  1        May 21, 2004 5:17:07 AM
sfscli>

7. Now start the SAN File System client systems.

Restore from a back-end storage failure


In this scenario, the MDS servers and the SAN itself have no error; however, there has been corruption in the storage system (the physical LUNs). We assume that a LUN-level backup of all the SAN File System LUNs (System Pool and all User Pools) has been done, and also that a single storage system is used. The recovery process has the following steps:
1. Stop all the application servers (SAN File System clients), preferably by shutdown. For AIX and Solaris clients, you can unload the SAN File System driver using the rmstclient command.
2. If possible, stop the SAN File System cluster with stopcluster from the master MDS, as shown in Example 12-5 on page 482.
3. Initiate a FlashCopy restore from the storage subsystem and ensure that the disks are assigned to the MDS. Refer to the respective storage system copy services guide for more details.
4. Reboot each MDS and run the command cat /proc/scsi/scsi to verify that the LUNs are visible to the MDS, as shown in Example 12-7.
Example 12-7 Query the LUNs
mds1:/ # cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: LSILOGIC Model: 1030 IM          Rev: 1000
  Type:   Direct-Access                    ANSI SCSI revision: 02
Host: scsi0 Channel: 00 Id: 08 Lun: 00
  Vendor: IBM      Model: 32P0032a S320  1 Rev: 1
  Type:   Processor                        ANSI SCSI revision: 02
Host: scsi2 Channel: 00 Id: 00 Lun: 00
  Vendor: IBM      Model: 2145             Rev: 0000
  Type:   Direct-Access                    ANSI SCSI revision: 03
Host: scsi2 Channel: 00 Id: 00 Lun: 01
  Vendor: IBM      Model: 2145             Rev: 0000
  Type:   Direct-Access                    ANSI SCSI revision: 03
Host: scsi2 Channel: 00 Id: 00 Lun: 03
  Vendor: IBM      Model: 2145             Rev: 0000
  Type:   Direct-Access                    ANSI SCSI revision: 03
Host: scsi2 Channel: 00 Id: 01 Lun: 00
  Vendor: IBM      Model: 2145             Rev: 0000
  Type:   Direct-Access                    ANSI SCSI revision: 03
Host: scsi2 Channel: 00 Id: 01 Lun: 01
  Vendor: IBM      Model: 2145             Rev: 0000
  Type:   Direct-Access                    ANSI SCSI revision: 03
Host: scsi2 Channel: 00 Id: 01 Lun: 03
  Vendor: IBM      Model: 2145             Rev: 0000
  Type:   Direct-Access                    ANSI SCSI revision: 03
Host: scsi3 Channel: 00 Id: 00 Lun: 00
  Vendor: IBM      Model: 2145             Rev: 0000
  Type:   Direct-Access                    ANSI SCSI revision: 03
Host: scsi3 Channel: 00 Id: 00 Lun: 01
  Vendor: IBM      Model: 2145             Rev: 0000
  Type:   Direct-Access                    ANSI SCSI revision: 03
Host: scsi3 Channel: 00 Id: 00 Lun: 03
  Vendor: IBM      Model: 2145             Rev: 0000
  Type:   Direct-Access                    ANSI SCSI revision: 03
Host: scsi3 Channel: 00 Id: 01 Lun: 00
  Vendor: IBM      Model: 2145             Rev: 0000
  Type:   Direct-Access                    ANSI SCSI revision: 03
Host: scsi3 Channel: 00 Id: 01 Lun: 01
  Vendor: IBM      Model: 2145             Rev: 0000
  Type:   Direct-Access                    ANSI SCSI revision: 03
Host: scsi3 Channel: 00 Id: 01 Lun: 03
  Vendor: IBM      Model: 2145             Rev: 0000
  Type:   Direct-Access                    ANSI SCSI revision: 03

5. On the Master MDS, run startcluster, and verify that all the MDSs are running with lsserver, as shown in Example 12-8.
Example 12-8 startcluster and verify
sfscli> startcluster
CMMNP5236I Cluster started successfully.
sfscli> lsserver
Name State  Server Role  Filesets Last Boot
=========================================================
mds1 Online Master       6        May 25, 2004 11:13:06 PM
mds2 Online Subordinate  2        May 25, 2004 10:58:53 PM

6. Now start the application servers and the SAN File System clients.

12.3 Backing up and restoring system metadata


The SAN File System System Pool contains both file metadata and system metadata. File-based backup protects the file metadata (permissions, ACLs, and so on), but it is also vital to back up the system metadata (which includes information about fileset attachment points, storage pools, volumes, and policies).

12.3.1 Backing up system metadata


SAN File System provides a tool to create a dump file that contains a backup copy of the system metadata. It can be invoked in two ways:
- From the SAN File System Console (GUI)
- With the mkdrfile command, using the CLI interface


Once created, the metadata dump file is stored in the directory /usr/tank/server/DR on the master Metadata server's local disk, and contains all information required to re-create the metadata. The metadata dump file can then be used to completely restore the global namespace tree (that is, all the fileset attach points) and the MDS server configuration, before using a client-based backup application to restore the actual data in the global namespace.

Creating the metadata dump file using the GUI


Here are the steps to follow:
1. Start the browser interface and log in using an ID with Backup, Administrator, or Operator privileges.
2. Select Maintain System → Disaster Recovery, select Create from the drop-down menu, and select Go, as shown in Figure 12-2.

Figure 12-2 Metadata dump file creation start


Enter a name for the file, select Create → Create new recovery file, and click OK. In Figure 12-3, we created a metadata dump file called SANFS_05-27-04. It is a good practice to include a date stamp in the file name.

Figure 12-3 Metadata dump file name

3. Finally, select the file name by checking it and selecting Create, as shown in Figure 12-4.

Figure 12-4 DR file creation final step


Deleting/removing the metadata dump file using the GUI


1. Log in to the Console using an ID with Backup, Administrator, or Operator privileges.
2. Select Maintain System → Disaster Recovery, select Delete from the drop-down menu, and click Go, as shown in Figure 12-5. In this case, we selected the file name SANFS_05-27-04.

Figure 12-5 Delete/remove the metadata dump file

3. Figure 12-6 shows the confirmation window with the name of the file to be deleted. Click Delete to complete the operation.

Figure 12-6 Verify deletion of the metadata dump file


Creating the metadata dump file using the CLI


The mkdrfile command is used to create the metadata dump file with the CLI, as shown in Example 12-9. Specify a name for the metadata dump file (we chose SANFS_10-22-04_1). To list all the metadata dump files, use the lsdrfile command.
Example 12-9 mkdrfile and lsdrfile commands example
mds1:/ # sfscli
sfscli> mkdrfile SANFS_10-22-04_1
CMMNP5359I Disaster recovery file SANFS_10-22-04_1 was created successfully.
sfscli> lsdrfile
Name             Date and Time           Size (KB)
==================================================
SANFS_10-22-04_1 Oct 22, 2004 2:10:02 AM 4
SANFS_05-27-04   May 27, 2004 2:04:02 AM 4

Deleting/removing the metadata dump file using the CLI


To delete a metadata dump file, use rmdrfile, as shown in Example 12-10. Specify the name of the metadata dump file to delete (we deleted the file SANFS_05-27-04).
Example 12-10 rmdrfile command example
sfscli> rmdrfile SANFS_05-27-04
Are you sure you want to delete disaster recovery file SANFS_05-27-04? [y/n]:y
CMMNP5362I Disaster recovery file SANFS_05-27-04 was removed successfully.
sfscli> lsdrfile
Name             Date and Time           Size (KB)
================================================
SANFS_10-22-04_1 Oct 22, 2004 2:10:02 AM 4

12.3.2 Restoring the system metadata


To restore the system metadata, the information contained in the metadata dump file is processed using the builddrscript command (CLI only). The builddrscript command converts the metadata dump file into a set of recovery scripts that are then used to rebuild the metadata. The builddrscript command creates three scripts that the administrator must review and edit appropriately to produce a restore procedure. The scripts are then run to re-create the SAN File System configuration. The scripts are stored in the directory /usr/tank/server/DR.

Note: This command will overwrite any files in the /usr/tank/server/DR directory that were created by a previous run of this command. If you want to preserve the existing files, copy them to another directory.

Execute builddrscript, specifying the name of the appropriate metadata dump file, as shown in Example 12-11 on page 489.

Tip: For easier identification of metadata dump files, we recommend incorporating a date and time stamp in the file name.


Example 12-11 builddrscript command
sfscli> builddrscript SANFS_10-22-04
CMMNP5363I Disaster recovery script files for SANFS_10-22-04 were built successfully.

Note: Backup, Operator, or Administrator privileges are required to run builddrscript.

Example 12-12 shows the three created script files:
- TankSysCLI.auto: Commands to create Storage Pools, Filesets, and Policies
- TankSysCLI.volume: Commands to add Volumes to Storage Pools
- TankSysCLI.attachpoint: Commands to attach filesets
Example 12-12 script files
# cd /usr/tank/server/DR
mds1:/usr/tank/server/DR # ls -l
total 32
drwxr-xr-x 2 root root  312 Oct 10 02:37 .
drwxr-xr-x 4 root root   96 Oct  5 03:37 ..
-rw-rw-rw- 1 root root 4023 Oct 22 02:04 SANFS_10-22-04.dump
-rw-r--r-- 1 root root 1372 Oct 22 02:34 TankSysCLI.attachpoint
-rw-r--r-- 1 root root 1719 Oct 22 02:34 TankSysCLI.auto
-rw-r--r-- 1 root root 4366 Oct 22 02:34 TankSysCLI.volume

Important: The mkdrfile and builddrscript commands should be run frequently enough to ensure that any configuration changes are reflected in the output of these commands (at least whenever you make a change to the MDS configuration). You can use a backup utility to copy the dump file to an alternate location, or to tape, and so on.
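One way to keep the recovery files current is to generate them on a schedule on the master MDS. The following is a minimal sketch only, using the sfscli -script mode shown elsewhere in this chapter; the decision to drive it from a scheduler, the file naming, and the backup host name and path are assumptions for illustration, not a documented procedure:

#!/bin/sh
# Create a date-stamped metadata dump file and its recovery scripts on the
# master MDS, then copy the DR directory to another host (hypothetical name).
STAMP=$(date +%Y%m%d%H%M)
CMDFILE=/tmp/drbackup.$STAMP

cat > $CMDFILE <<EOF
mkdrfile SANFS_$STAMP
builddrscript SANFS_$STAMP
EOF

sfscli -script $CMDFILE
scp -r /usr/tank/server/DR root@backuphost.example.com:/backup/sanfs-dr/
rm -f $CMDFILE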

Tip: These files can also be used as documentation of the configuration, and can be used to selectively re-create entities, such as policies, in case these are inadvertently deleted.

To restore the metadata from these script files, run the scripts in the order shown. Notice that these scripts are designed to be run on a new SAN File System installation in order to re-create the system metadata from scratch. Therefore, before running these scripts, you should have re-installed and configured each MDS, as described in 5.2, SAN File System MDS installation on page 126. Verify that all MDSs in the cluster are online and that the cluster is running (using the lsserver command), as shown in Example 12-13.
Example 12-13 Check online server state
sfscli> lsserver -state online
Name State  Server Role  Filesets Last Boot
=========================================================
mds1 Online Master       6        Oct 19, 2004 11:13:06 PM
mds2 Online Subordinate  2        Oct 19, 2004 10:58:53 PM

1. Run the script TankSysCLI.auto:


# sfscli -script /usr/tank/server/DR/TankSysCLI.auto


Example 12-14 shows the contents of our lab setup TankSysCLI.auto file.
Example 12-14 TankSysCLI.auto file contents
mds1:/usr/tank/server/DR # cat /usr/tank/server/DR/TankSysCLI.auto
################################################################################
# CLI Commands to create Storage Pools, Filesets, Service Classes and
# Policy Sets.
# These commands need NO manual intervention.
################################################################################
# SMDR Version: 2.2.0.90
# Time of Backup: Oct 22 02:03:49
################################################################################
# Backup Master Node: 9.42.164.114:1737
################################################################################
# Cluster Id: 60355
# Installation Id: 8361388714337296178
# DiskEpoch: 0
################################################################################
chpool -thresh 80 -desc "Default storage pool" DEFAULT_POOL
mkpool -partsize 16 -allocsize 4 -thresh 80 -desc "This is a test pool" Test_Pool1
mkpool -partsize 16 -thresh 80 lixprague
mkpool -partsize 16 -thresh 80 winwashington
mkpool -partsize 16 -thresh 80 aixrome
setdefaultpool -quiet DEFAULT_POOL
mkfileset -server mds2 -thresh 80 -desc "user home directories" userhomes
mkfileset -server mds1 -thresh 80 user1
mkfileset -server mds2 -quota 1000 -qtype hard -thresh 65 dbdir
mkfileset -server mds1 -thresh 80 aixfiles
mkfileset -thresh 80 asad
mkfileset -server mds1 -quota 1000 -qtype soft -thresh 90 winhome
mkfileset -server mds1 -thresh 80 lixfiles
mkpolicy -file /usr/tank/server/DR/Example_Policy.rules -desc "Example_Policy rules for handling *.mp3 and *DB2.* files" Example_Policy
mkpolicy -file /usr/tank/server/DR/Test_Policy.rules -desc "For testing purpose" Test_Policy
mkpolicy -file /usr/tank/server/DR/non-unif.rules non-unif
usepolicy -quiet non-unif

2. Edit /usr/tank/server/DR/TankSysCLI.volume and modify it to match your current SAN settings, if there have been changes since the last creation of the metadata dump file and the scripts. Run the script TankSysCLI.volume with the command:
# sfscli -script /usr/tank/server/DR/TankSysCLI.volume

Example 12-15 shows the contents of our lab setup TankSysCLI.volume file.
Example 12-15 TankSysCLI.volume file contents mds1:/usr/tank/server/DR # cat TankSysCLI.volume ################################################################################ # CLI Commands using client-side information. # These commands need manual intervention. # # The first section of this file is a set of commands to add the volumes back # into the SAN File System. # The device names were as they appeared during backup on the master server. # The lun names were as they appeared during backup. # The clients listed for each volume are those that had a valid lease and # had SAN access to the volume at the time of the backup. # Please make sure that the client specified in the mkvol command is active.

490

IBM TotalStorage SAN File System

# Please make sure that the lun names appearing here actually exist and # have correct sizes and if not edit the lun names to correct values. # The System MASTER volume has to be specified in tank.properties or via # setupsfs and therefore has no corresponding CLI. # The other System Volumes can either be specified in tank.properties or # added using the CLI command, which appears inside comments for this reason. # # This file also contains commands to restore root privileges for any clients. # Any clients which had root privileges at the time of the backup have # addprivclient commands after the mkvol commands. Please uncomment lines or # change the client names as appropriate. ################################################################################ # SMDR Version: 2.2.0.90 # Time of Backup: Oct 22 02:03:49 ################################################################################ # Backup Master Node: 9.42.164.114:1737 ################################################################################ # Cluster Id: 60355 # Installation Id: 8361388714337296178 # DiskEpoch: 0 ################################################################################ ################################################################################ # MASTER System Disk # VolumeName= MASTER : Size= 2130706432 : Old GID= 74099EBC3308DF32 # Device= /dev/rsde # Lun= "???"

################################################################################ # User Volume # VolumeName= volume1 : Size= 107357405184 : Old GID= 740A862A20C0C3BF # Device= (Unavailable) # Client= AIXRome, Device Path= /dev/rvpath0 # Lun= "VPD83NAA6=600507680188801B2000000000000001" mkvol -lun "VPD83NAA6=600507680188801B2000000000000001" -client AIXRome -pool DEFAULT_POOL -f volume1 ################################################################################ # User Volume # VolumeName= Test_Pool1-Test_Pool1-0 : Size= 104840822784 : Old GID= 740AC37A24431F1C # Device= (Unavailable) # Client= AIXRome, Device Path= /dev/rvpath2 # Lun= "VPD83NAA6=600507680188801B200000000000000C" mkvol -lun "VPD83NAA6=600507680188801B200000000000000C" -client AIXRome -pool Test_Pool1 -f -desc "VPD83NAA6=600507680188801B200000000000000C" Test_Pool1-Test_Pool1-0 ################################################################################ # User Volume # VolumeName= vol_lixprague1 : Size= 32195477504 : Old GID= 740B45FF17B8F7C9 # Device= (Unavailable) # Client= LIXPrague, Device Path= /dev/sdc # Lun= "VPD83NAA6=600507680188801B200000000000001C" mkvol -lun "VPD83NAA6=600507680188801B200000000000001C" -client LIXPrague -pool lixprague -f vol_lixprague1 ################################################################################ # User Volume # VolumeName= vol_winwashington1 : Size= 32195477504 : Old GID= 740B464898206D9C # Device= (Unavailable)

Chapter 12. Protecting the SAN File System environment

491

# Client= WINWashington, Device Path= \??\VPATH#Disk&Ven_IBM&Prod_2145#1&1a681225&2&01#{53f56307-b6bf-11d0-94f2-00a0c91efb8b} # Lun= "VPD83NAA6=600507680188801B200000000000001D" mkvol -lun "VPD83NAA6=600507680188801B200000000000001D" -client WINWashington -pool winwashington -f vol_winwashington1 ################################################################################ # User Volume # VolumeName= vol_aixrome1 : Size= 32195477504 : Old GID= 740B465F78ED1865 # Device= (Unavailable) # Client= AIXRome, Device Path= /dev/rvpath4 # Lun= "VPD83NAA6=600507680188801B200000000000001B" mkvol -lun "VPD83NAA6=600507680188801B200000000000001B" -client AIXRome -pool aixrome -f vol_aixrome1 # # # # # addprivclient addprivclient addprivclient addprivclient addprivclient -quiet -quiet -quiet -quiet -quiet AIXRome WINcli WINCli LIXPrague WINWashington

3. Edit /usr/tank/server/DR/TankSysCLI.attachpoint to verify the settings, and run the script TankSysCLI.attachpoint with the following command. This is done only for setups where all filesets are attached only to the root directories of other filesets (as recommended). If you have filesets that are attached to directories, you will have to re-create those directories at a client, then reattach those filesets manually:
# sfscli -script /usr/tank/server/DR/TankSysCLI.attachpoint

Example 12-16 shows the contents of our lab setup TankSysCLI.attachpoint file.
Example 12-16 TankSysCLI.attachpoint file content
mds1:/usr/tank/server/DR # cat TankSysCLI.attachpoint
################################################################################
# CLI Commands to attach filesets.
# These commands need manual intervention.
# All the "mkdir" and "attachfileset" commands should be run in the order
# given.
# The "mkdir" command should be run on a client to recreate the directory path
# before running the following "attachfileset" CLI commands.
################################################################################
# SMDR Version: 2.2.0.90
# Time of Backup: Oct 22 02:03:49
################################################################################
# Backup Master Node: 9.42.164.114:1737
################################################################################
# Cluster Id: 60355
# Installation Id: 8361388714337296178
# DiskEpoch: 0
################################################################################
# Root Fileset Attachpoint Name : sanfs
################################################################################
#mkdir -p sanfs/aixfiles
attachfileset -attach sanfs/aixfiles -dir aixhome aixfiles
#mkdir -p sanfs/lixfiles
attachfileset -attach sanfs/lixfiles -dir linuxhome lixfiles
#mkdir -p sanfs/userhomes
attachfileset -attach sanfs/userhomes -dir user1 user1
#mkdir -p sanfs/winhome


attachfileset -attach sanfs/winhome -dir win2kfiles winhome

Example 12-17 summarizes all the steps described above to create and restore the metadata using the metadata recovery dump file commands.
Example 12-17 Complete example of metadata recovery
sfscli> mkdrfile SANFS_10-22-04_1
CMMNP5359I Disaster recovery file SANFS_10-22-04_1 was created successfully.
sfscli> lsdrfile
Name             Date and Time           Size (KB)
==================================================
SANFS_10-22-04_1 Oct 22, 2004 3:00:37 AM 4
SANFS_05-27-04   May 27, 2004 2:04:02 AM 4
sfscli> builddrscript SANFS_10-22-04_1
CMMNP5363I Disaster recovery script files for SANFS_10-22-04_1 were built successfully.
sfscli> quit
mds1:/ # cd /usr/tank/server/DR
mds1:/usr/tank/server/DR # ls -l
total 36
drwxr-xr-x 2 root root  352 Oct 10 03:00 .
drwxr-xr-x 4 root root   96 Oct  5 03:37 ..
-rw-rw-rw- 1 root root 4023 Oct 22 02:04 SANFS_05-27-04.dump
-rw-rw-rw- 1 root root 4023 Oct 22 03:00 SANFS_10-22-04_1.dump
-rw-r--r-- 1 root root 1372 Oct 22 03:00 TankSysCLI.attachpoint
-rw-r--r-- 1 root root 1719 Oct 22 03:00 TankSysCLI.auto
-rw-r--r-- 1 root root 4366 Oct 22 03:00 TankSysCLI.volume

mds1:/usr/tank/server/DR # sfscli -script /usr/tank/server/DR/TankSysCLI.auto
mds1:/usr/tank/server/DR # sfscli -script /usr/tank/server/DR/TankSysCLI.volume
mds1:/usr/tank/server/DR # sfscli -script /usr/tank/server/DR/TankSysCLI.attachpoint

After you have re-created your SAN File System configuration from scratch, you need to restore all the client files from a backup taken with a file-based application (such as Tivoli Storage Manager).

12.4 File recovery using SAN File System FlashCopy function


If only some user files have been lost, but the overall SAN File System remains healthy, we can use the FlashCopy image function in SAN File System to recover the files. FlashCopy images can be created, and then backed up later. A FlashCopy image contains read-only copies of the files in a fileset as they exist at a specific point in time. FlashCopy images are stored in a special subdirectory named .flashcopy under the fileset's root attachment point. You can keep these images in the SAN File System and restore directly from them (using the reverttoimage command), or use standard backup tools on a SAN File System client to back up the files in the FlashCopy image. In this section, we demonstrate creating a FlashCopy image from the GUI and the CLI, deleting some SAN File System user files, then restoring them by reverting the FlashCopy images.

Tip: Individual files can be copied from an available image in the .flashcopy directory back to the fileset itself if only some files in the fileset need to be restored.
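From an AIX or Linux SAN File System client, such a copy is just an ordinary file copy from the read-only image directory back into the live fileset. A minimal sketch, where the image name (Image-1) and the file name are examples only; the fileset path is the one used in our lab:

# Copy one file from a read-only FlashCopy image back into the live fileset
cp /sfs/sanfs/aixfiles/aixhome/.flashcopy/Image-1/inst.images/510005.v2.tar \
   /sfs/sanfs/aixfiles/aixhome/inst.images/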

12.4.1 Creating FlashCopy image


1. In the GUI, select Maintain System → FlashCopy Images, as shown in Figure 12-7.

Figure 12-7 FlashCopy option window GUI

2. Select Create from the drop-down menu and click Go. The Introduction window, listing the next three processes, displays, as shown in Figure 12-8 on page 495. Click Next to execute the Select Containers step.


Figure 12-8 FlashCopy Start GUI window

3. We chose all the filesets (Figure 12-9). Click Next to go to the Set Properties step.

Attention: When creating FlashCopy images, an administrator specifies each fileset to be included; the FlashCopy image feature does not automatically include nested filesets. The FlashCopy image operation is performed individually for each fileset.

Figure 12-9 Select Filesets


4. Now we specify the image name, image directory, and description for each FlashCopy image. The default name is Image followed by a sequence number. Figure 12-10 shows the default properties for our lab setup. Click Next to continue to the Verify Settings step.

Tips: Although we selected all the filesets, the images are created individually, one at a time. A fileset can have as many as 32 read-only FlashCopy images.

Figure 12-10 Set Properties of FlashCopy images

5. Check the selections, as shown in Figure 12-11 on page 497, and click Next.


Figure 12-11 Verify FlashCopy settings

6. This completes the process, and the images are created, as shown in Figure 12-12.

Figure 12-12 FlashCopy images created

Note: All images are full backups; it is not possible to create incremental FlashCopy images.


From the client, we can see the FlashCopy images for all the filesets. Note that by default the .flashcopy directory is hidden on Windows. Figure 12-13 shows Windows Explorer on a Windows SAN File System client. We can see the FlashCopy images just created, including the contents of Image-8, and a directory called INSTALL and its subdirectories, reflecting the directories in the actual fileset.

Figure 12-13 Windows client view of the FlashCopy images

12.4.2 Reverting FlashCopy images


We will simulate a fileset loss by deleting the contents of the fileset winhome. This fileset contains one folder, called INSTALL. Figure 12-14 on page 499 shows that the INSTALL folder is gone from the primary fileset; however, it still appears in the FlashCopy Image-8 subdirectory. We will revert the FlashCopy image of the fileset winhome using the SAN File System GUI.



Figure 12-14 Client file delete

1. From the SAN File System GUI, select Manage Copies → FlashCopy Images. We select the image Image-8 and the Revert to... action from the drop-down menu, as shown in Figure 12-15.

Figure 12-15 FlashCopy image revert selection


2. Verify and confirm the Image revert, as shown in Figure 12-16, and click OK to continue.

Figure 12-16 Image restore / revert verification and restore

Image-8 is now reverted, as shown in Figure 12-17 on page 501, replacing the current contents of the fileset.


Figure 12-17 Remaining FlashCopy images after revert

Now, if we view the files from the Windows client, we see our directory INSTALL and its contents, as shown in Figure 12-18.

Figure 12-18 Client data restored


12.5 Back up and restore using IBM Tivoli Storage Manager


In this section, we discuss using a backup/recovery application, such as IBM Tivoli Storage Manager, with the SAN File System clients, to perform a file-based backup of files in the SAN File System global namespace.

12.5.1 Benefits of Tivoli Storage Manager with SAN File System


Because SAN File System provides a global namespace (the files are visible to all clients), the files can be backed up from any SAN File System client. Therefore, you can back up those files, either directly from the filesets or from a SAN File System FlashCopy image, on a completely separate SAN File System client from the client that normally runs any applications on these files, thus giving application server-free backup. This eliminates the application servers themselves from the data path of the backup and frees them from expending any CPU cycles on the backup process. If you back up the files from a FlashCopy image, this effectively almost eliminates the backup window, that is, a period of outage of the application to clients, since you create an online consistent copy of the data that is then backed up. The application then proceeds uninterrupted while the backup is executed against the FlashCopy image. This principle is shown in Figure 12-19.

Figure 12-19 Exploitation of SAN File System with Tivoli Storage Manager

The following procedure provides application server-free backup:

1. Create FlashCopy images of the filesets that you want to protect. This requires minimal disruption to the SAN File System clients that are performing a production workload (Web servers, application servers, database servers, and so on).


2. Now you can back up these FlashCopy images using a file-based backup application like Tivoli Storage Manager, where the Tivoli Storage Manager client is installed on a separate SAN File System client. It still sees all the files, but the backups run independently of the production SAN File System clients. To keep all file attributes, if you have both Windows and UNIX (including Linux)-created data in your SAN File System environment, it should be separated by fileset. Then you should run two separate Tivoli Storage Manager clients in this instance: a Windows Tivoli Storage Manager/SAN File System client to back up Windows files, and an AIX Tivoli Storage Manager/SAN File System client to back up UNIX (including Linux) files. You can also run multiple instances of these if required to improve backup performance. The Tivoli Storage Manager server can be on any supported Tivoli Storage Manager server platform, and only needs to be SAN and LAN attached. It does not need to be a SAN File System client.
3. If you have implemented a non-uniform SAN File System configuration, such that not all filesets are visible to all clients, you will need additional backup clients to ensure that all filesets can be backed up by a client that has visibility to it.
4. You can use the LAN-free backup client to also back up these files directly over the SAN to a SAN-attached library as shown, rather than using the LAN for backup data traffic. Therefore, we have LAN-free and (application) server-free backup capability.

Note: Tivoli Storage Manager, in backing up the files in SAN File System, automatically also backs up the associated file metadata.

Tivoli Storage Manager also supports restoring files to the same or a different location, and even to a different Tivoli Storage Manager client. This means you could restore files backed up from SAN File System not only to a different SAN File System environment, but also (as in a disaster recovery situation) to a local file system on another UNIX or Windows Tivoli Storage Manager client that is not a SAN File System client, that is, you could still restore these files from a Tivoli Storage Manager backup, even if you do not have a SAN File System environment to restore them to. After all, they are just files to Tivoli Storage Manager; the metadata will be handled appropriately for the restore platform, depending on whether the restore destination is a directory in the SAN File System global namespace or a local file system.

12.6 Backup/restore scenarios with Tivoli Storage Manager


We present the following scenarios for restore:
- Back up user data in Windows filesets using the Tivoli Storage Manager client for Windows:
  - Restore a selected file to its original location
  - Restore a file to a different location from a FlashCopy image backup
- Back up user data in UNIX filesets using the Tivoli Storage Manager client for AIX:
  - Back up and restore files using data in an actual fileset
  - Back up and restore SAN File System FlashCopy images using the -snapshotroot TSM option

In our lab, we installed the Tivoli Storage Manager server on a Windows 2000 machine and two clients on the following platforms:
- AIX 5L Version 5.2, Maintenance Level 03, 32-bit version
- Windows 2000 Service Pack 4


Both Tivoli Storage Manager server and client code versions used in our lab were at V5.2.2.0. Please note that in order to back up SAN File System data from AIX and Windows SAN File System clients, you need Tivoli Storage Manager client V5.1 or higher. To back up SAN File System data from Linux and Solaris clients, you need Tivoli Storage Manager client V5.2.3.1 or higher. All these clients are also SAN File System clients. In the following sections, we will introduce sample backup/restore scenarios for both Windows and UNIX SAN File System filesets.

12.6.1 Back up Windows data using Tivoli Storage Manager Windows client
First, we will back up the files with the Tivoli Storage Manager client:
1. To start the GUI, select Start → Programs → Tivoli Storage Manager → Backup-Archive GUI, and select the Backup function. Select the files to back up, as shown in Figure 12-20. Notice that the SAN File System drive and filesets appear as a Local drive in the Backup-Archive client.

Figure 12-20 User files selection

2. Start the backup by clicking Backup. The files will be backed up to the Tivoli Storage Manager server. Note that we have selected for our backup not only the actual content of the INSTALL directory, but also its SAN File System FlashCopy image, which resides in folder .flashcopy/Image-8. If you make a FlashCopy image each day (using a different directory) and back it up, Tivoli Storage Manager incremental backup will back up all the files each time. In 12.6.3, Backing up FlashCopy images with the snapshotroot option on


page 510, we will show you how to back up SAN File System FlashCopy images incrementally using the Tivoli Storage Manager -snapshotroot option.
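The same selection can also be backed up from the Windows Backup-Archive command-line client instead of the GUI. This is a sketch only; the drive letter S: and the paths are the ones used in our lab, so substitute your own:

REM Back up the live INSTALL directory and its FlashCopy image copy
dsmc selective "S:\winhome\win2kfiles\INSTALL\*" -subdir=yes
dsmc selective "S:\winhome\win2kfiles\.flashcopy\Image-8\*" -subdir=yes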

Restore user data using Tivoli Storage Manager client for Windows
Having backed up both actual data and its FlashCopy image, we can execute our restore scenarios.

Scenario 1: Restore selected file to original destination


Here we will restore files from the Tivoli Storage Manager backup of the actual fileset. We deleted the INSTALL directory, which we now will restore using Tivoli Storage Manager. We are just showing the restore of one folder for demonstration purposes, but Tivoli Storage Manager can restore multiple files/folders or an entire file system.
1. Start the Tivoli Storage Manager Backup/Archive client and select Restore. In Figure 12-21, we chose to restore the folder.

Figure 12-21 Restore selective file selection


2. We chose to restore to the original location, as shown in Figure 12-22. Click Restore to start the restore.

Figure 12-22 Select destination of restore file(s)

3. The deleted file is restored.

Scenario 2: Restore FlashCopy image to a different destination


In this scenario, we will restore the files backed up from the FlashCopy image to the real fileset location.
1. Start the Tivoli Storage Manager Backup/Archive client. Select Restore.
2. Select the files to restore, as shown in Figure 12-23. We are restoring the Image-8 folder from the FlashCopy image.

Figure 12-23 Restore files selection for FlashCopy image backup


3. Select the destination to restore the files to. We will restore the folder to the win2kfiles fileset in S:\winhome\win2kfiles\testfolder, as shown in Figure 12-24. Click Restore to start the restore. Note that we could not (and it would not make sense to) restore the files to the .flashcopy directory, because FlashCopy images, and therefore their directories, are read-only.

Figure 12-24 Restore files destination path selection

The restore of the FlashCopy files is now complete; the original folder is restored.

Tip: Regular periodic FlashCopy images are highly recommended. They are the most efficient method for quickly backing up and restoring files in scenarios where the metadata is still available.

12.6.2 Back up user data in UNIX filesets with TSM client for AIX
In this section, we introduce the following backup/restore scenarios:
- Back up and restore files using data in an actual fileset
- Back up and restore SAN File System FlashCopy images using the -snapshotroot TSM option

Back up and restore files using data in an actual fileset


In this scenario, we will back up sample files in the filesets aixfiles and lixfiles, as shown in Example 12-18. Our example will use the Tivoli Storage Manager command-line interface.
Example 12-18 Files to back up using Tivoli Storage Manager AIX client
Rome:/sfs/sanfs/lixfiles/linuxhome/install >ls -l
total 2048
-rw-rw-rw-   1 root system    696679 Jun 01 11:07 TIVguid.i386.rpm
Rome:/sfs/sanfs/lixfiles/linuxhome/install >ls -l ../../../aixfiles/aixhome/inst.images
total 48897
-rw-r--r--   1 root system         0 May 26 17:47 .toc
-rw-r--r--   1 root system  25034752 May 26 17:46 510005.v2.tar
drwxr-x---   2 root system        48 May 26 17:47 lost+found/


1. Now we back up the files with the Tivoli Storage Manager client. Example 12-19 shows the output.
Example 12-19 Backing up files using Tivoli Storage Manager AIX command line client
Rome:/sfs/sanfs >dsmc selective "/sfs/sanfs/aixfiles/aixhome/inst.images/*" "/sfs/sanfs/lixfiles/linuxhome/install/*"
IBM Tivoli Storage Manager
Command Line Backup/Archive Client Interface - Version 5, Release 2, Level 2.0
(c) Copyright by IBM Corporation and other(s) 1990, 2003. All Rights Reserved.

Node Name: AIXROME
Session established with server NPSRV2: Windows
  Server Version 5, Release 2, Level 2.0
  Server date/time: 06/02/04 09:51:00  Last access: 06/02/04 09:48:37

Selective Backup function invoked.

Directory-->                  72 /sfs/sanfs/aixfiles/aixhome/inst.images [Sent]
Directory-->                 312 /sfs/sanfs [Sent]
Directory-->                  96 /sfs/sanfs/aixfiles [Sent]
Directory-->                 144 /sfs/sanfs/aixfiles/aixhome [Sent]
Normal File-->        27,673,600 /sfs/sanfs/aixfiles/aixhome/inst.images/IP22727.tivoli.tsm.client.ba.32bit [Sent]
Selective Backup processing of '/sfs/sanfs/aixfiles/aixhome/inst.images/*' finished without failure.

Directory-->                  72 /sfs/sanfs/lixfiles/linuxhome/install [Sent]
Directory-->                 312 /sfs/sanfs [Sent]
Directory-->                  72 /sfs/sanfs/lixfiles [Sent]
Directory-->                 192 /sfs/sanfs/lixfiles/linuxhome [Sent]
Normal File-->           696,679 /sfs/sanfs/lixfiles/linuxhome/install/TIVguid.i386.rpm [Sent]
Selective Backup processing of '/sfs/sanfs/lixfiles/linuxhome/install/*' finished without failure.

Total number of objects inspected:       10
Total number of objects backed up:       10
Total number of objects updated:          0
Total number of objects rebound:          0
Total number of objects deleted:          0
Total number of objects expired:          0
Total number of objects failed:           0
Total number of bytes transferred:    27.05 MB
Data transfer time:                    2.32 sec
Network data transfer rate:        11,909.43 KB/sec
Aggregate data transfer rate:       9,186.13 KB/sec
Objects compressed by:                    0%
Elapsed processing time:           00:00:03

2. In Example 12-20, we simulate data loss in the filesets backed up in step 1. We will delete the directories /sfs/sanfs/aixfiles/aixhome/inst.images and /sfs/sanfs/lixfiles/linuxhome/install.
Example 12-20 Simulating the loss of data by deleting directories that we backed up in step 1
Rome:/sfs/sanfs >rm -rf /sfs/sanfs/lixfiles/linuxhome/install
Rome:/sfs/sanfs >rm -rf /sfs/sanfs/aixfiles/aixhome/inst.images

3. Now we will restore our files using the Tivoli Storage Manager AIX command-line client from the backup created in step 1, as shown in Example 12-21 on page 509.

Example 12-21 Restoring files from Tivoli Storage Manager AIX client backup
dsmc restore "/sfs/sanfs/aixfiles/aixhome/inst.images/*";dsmc restore "/sfs/sanfs/lixfiles/linuxhome/install/*"
IBM Tivoli Storage Manager
Command Line Backup/Archive Client Interface - Version 5, Release 2, Level 2.0
(c) Copyright by IBM Corporation and other(s) 1990, 2003. All Rights Reserved.

Restore function invoked.

Node Name: AIXROME
Session established with server NPSRV2: Windows
  Server Version 5, Release 2, Level 2.0
  Server date/time: 06/02/04 09:59:47  Last access: 06/02/04 09:56:34

ANS1247I Waiting for files from the server...
Restoring          72 /sfs/sanfs/aixfiles/aixhome/inst.images [Done]
Restoring  27,673,600 /sfs/sanfs/aixfiles/aixhome/inst.images/IP22727.tivoli.tsm.client.ba.32bit [Done]

Restore processing finished.

Total number of objects restored:        2
Total number of objects failed:          0
Total number of bytes transferred:   26.39 MB
Data transfer time:                  20.45 sec
Network data transfer rate:       1,321.14 KB/sec
Aggregate data transfer rate:     1,174.53 KB/sec
Elapsed processing time:            00:00:23

IBM Tivoli Storage Manager
Command Line Backup/Archive Client Interface - Version 5, Release 2, Level 2.0
(c) Copyright by IBM Corporation and other(s) 1990, 2003. All Rights Reserved.

Restore function invoked.

Node Name: AIXROME
Session established with server NPSRV2: Windows
  Server Version 5, Release 2, Level 2.0
  Server date/time: 06/02/04 10:00:10  Last access: 06/02/04 09:59:47

ANS1247I Waiting for files from the server...
Restoring          72 /sfs/sanfs/lixfiles/linuxhome/install [Done]
Restoring     696,679 /sfs/sanfs/lixfiles/linuxhome/install/TIVguid.i386.rpm [Done] < 680.40 KB> [ - ]

Restore processing finished.

Total number of objects restored:        2
Total number of objects failed:          0
Total number of bytes transferred:  680.40 KB
Data transfer time:                   0.36 sec
Network data transfer rate:       1,877.42 KB/sec
Aggregate data transfer rate:       135.87 KB/sec
Elapsed processing time:            00:00:05


4. Now we check if the files have been restored to their original locations in Example 12-22.
Example 12-22 Check if files have been successfully restored
Rome:/sfs/sanfs >ls -l /sfs/sanfs/lixfiles/linuxhome/install
total 2048
-rw-rw-rw-   1 root   system     696679 Jun 01 13:26 TIVguid.i386.rpm
Rome:/sfs/sanfs >ls -l /sfs/sanfs/aixfiles/aixhome/inst.images
total 55296
-rw-r-----   1 root   system   27673600 Jun 01 14:38 IP22727.tivoli.tsm.client.ba.32bit

12.6.3 Backing up FlashCopy images with the snapshotroot option


Before we introduce the actual backup/restore scenario that uses snapshotroot (the Tivoli Storage Manager option for backing up data), we will briefly explain the purpose of this option. Note that this section assumes some in-depth knowledge of Tivoli Storage Manager concepts, which is beyond the scope of this redbook. If you need additional information about Tivoli Storage Manager, refer to the Tivoli Storage Manager product manuals and the Redbooks IBM Tivoli Storage Management Concepts, SG24-4877 and IBM Tivoli Storage Manager Implementation Guide, SG24-5416.

The snapshotroot option can be used with Tivoli Storage Manager incremental and selective backups as well as archives. It associates snapshot data created by an application with native FlashCopy (snapshot) capability, such as SAN File System FlashCopy, with the corresponding file space data stored on the Tivoli Storage Manager server. Be aware that snapshotroot does not itself take a FlashCopy (snapshot) image; it only helps to manage data that has already been created by FlashCopy-capable software.

How does snapshotroot work? To explain the benefit of using this option for SAN File System FlashCopy images, consider the following example.

Important: This section only introduces our example, highlighting the necessary considerations and steps. The actual cookbook-style description of how we set up both our SAN File System and Tivoli Storage Manager environments to use this approach is in Setting up the environment for snapshotroot-based backup on page 513.

Assume that we have a fileset called aixfiles, attached to directory /sfs/sanfs/aixfiles/aixhome. When we create a SAN File System FlashCopy image for this fileset, a subdirectory is created in the /sfs/sanfs/aixfiles/aixhome/.flashcopy directory. That subdirectory holds the snapshot of the actual files and directories stored in /sfs/sanfs/aixfiles/aixhome and its subdirectories. Example 12-23 shows two FlashCopy images that we created for the purpose of this scenario.
Example 12-23 SAN File System FlashCopy images in /sfs/sanfs/aixfiles/aixhome/.flashcopy directory
Rome:/sfs/sanfs/aixfiles/aixhome/.flashcopy >pwd
/sfs/sanfs/aixfiles/aixhome/.flashcopy
Rome:/sfs/sanfs/aixfiles/aixhome/.flashcopy >ls -l
total 2
d---------   5 root   system   120 Jun 01 22:32 Image06-01-2004/
d---------   5 root   system   120 Jun 01 22:33 Image06-02-2004/

Now, in order to back up the SAN File System FlashCopy image using the Tivoli Storage Manager client, you would normally run the following command:
dsmc incr "/sfs/sanfs/aixfiles/aixhome/.flashcopy/Image06-01-2004/*" -subdir=yes


In this case, the Tivoli Storage Manager client processes the data in the /sfs/sanfs/aixfiles/aixhome/.flashcopy/Image06-01-2004/ directory and its subdirectories. With snapshotroot, we are able to base the backup on the SAN File System FlashCopy image, while still preserving (from the Tivoli Storage Manager server point of view) the actual absolute directory structure and file names from which that particular FlashCopy image originates.

However, the main reason to consider a snapshotroot-based backup approach is that it gives you the ability to back up SAN File System FlashCopy images using the Tivoli Storage Manager incremental method. This requires you to add virtual mount point definitions to the Tivoli Storage Manager client's dsm.sys configuration file for:
- All the filesets you plan to back up
- Each and every SAN File System FlashCopy image you create for any of those filesets

Example 12-24 shows how we defined virtual mount points in our dsm.sys configuration file.
Example 12-24 Virtual mount point definitions example
virtualmountpoint /sfs/sanfs/aixfiles/aixhome
virtualmountpoint /sfs/sanfs/aixfiles/aixhome/.flashcopy/Image06-01-2004
virtualmountpoint /sfs/sanfs/aixfiles/aixhome/.flashcopy/Image06-02-2004

This is because, without virtual mount point definitions, the Tivoli Storage Manager server would store all SAN File System related backups in a single file space (in our example, the /sfs file space of node AIXROME), as shown in Example 12-25.
Example 12-25 q filespace command: no virtual mount point definitions
tsm: NPSRV2>q filesp

              Node Name: AIXROME
        File space Name: /sfs
                   FSID: 5
               Platform: AIX
        File space Type: SANFS
 Is File space Unicode?: No
          Capacity (MB): 294,480.0
               Pct Util: 3.7


If, however, we define a virtual mount point for our aixfiles fileset and also for all of our SAN File System FlashCopy images, and then run a Tivoli Storage Manager backup, the file space layout on the Tivoli Storage Manager server (output of the q filesp command) will look as shown in Example 12-26.
Example 12-26 q filespace command: With virtual mount point definitions
tsm: NPSRV2>q filesp

              Node Name: AIXROME
        File space Name: /sfs
                   FSID: 5
               Platform: AIX
        File space Type: SANFS
 Is File space Unicode?: No
          Capacity (MB): 294,480.0
               Pct Util: 3.7

              Node Name: AIXROME
        File space Name: /sfs/sanfs/aixfiles/aixhome
                   FSID: 6
               Platform: AIX
        File space Type: SANFS
 Is File space Unicode?: No
          Capacity (MB): 304,688.0
               Pct Util: 3.6

              Node Name: AIXROME
        File space Name: /sfs/sanfs/aixfiles/aixhome/.flashcopy/Image06-01-2004
                   FSID: 7
               Platform: AIX
        File space Type: SANFS
 Is File space Unicode?: No
          Capacity (MB): 304,688.0
               Pct Util: 3.6

              Node Name: AIXROME
        File space Name: /sfs/sanfs/aixfiles/aixhome/.flashcopy/Image06-02-2004
                   FSID: 8
               Platform: AIX
        File space Type: SANFS
 Is File space Unicode?: No
          Capacity (MB): 304,688.0
               Pct Util: 3.6

So far, we have explained the purpose of the snapshotroot option and outlined the role of the Tivoli Storage Manager client's virtual mount points. Now we will describe how to actually back up SAN File System data using the snapshotroot option.

Using the snapshotroot option to back up SAN File System filesets


As mentioned earlier in this section, you could back up a SAN File System FlashCopy image using a dsmc command with a syntax similar to this:
dsmc incr "/sfs/sanfs/aixfiles/aixhome/.flashcopy/Image06-01-2004/*" -subdir=yes


But with the snapshotroot backup approach, you do not in fact run the backup against the SAN File System FlashCopy image directory, but rather against the actual data directory. The SAN File System FlashCopy directory is then specified as the value of the snapshotroot option, as shown here:
dsmc incr /sfs/sanfs/aixfiles/aixhome/ -snapshotroot=/sfs/sanfs/aixfiles/aixhome/.flashcopy/Image06-02-2004

Now that we have explained the concept behind backing up SAN File System data from its FlashCopy images using the snapshotroot Tivoli Storage Manager client option, we can show a step-by-step scenario for this type of backup in our lab environment.

Setting up the environment for snapshotroot-based backup


In this section, we describe all the necessary steps to configure snapshotroot-based Tivoli Storage Manager backup of the SAN File System data. The snapshotroot option for SAN File System is supported, at the time of writing, on Tivoli Storage Manager V5.2.3_1 and higher clients for AIX, Windows, Solaris, and Linux. 1. Make sure you have a virtual mount point defined for your fileset in the Tivoli Storage Manager dsm.sys file; if not, create the definition:
virtualmountpoint /sfs/sanfs/aixfiles/aixhome

2. Create the SAN File System FlashCopy image. You can use either the SAN File System graphical console or the command-line interface. In our example, we use the command-line interface:
sfscli>mkimage -fileset aixfiles -dir Image06-01-2004 aixfiles_fcopy1

3. Add a new virtual mount point definition in the dsm.sys file for the newly created SAN File System FlashCopy image in step 2:
virtualmountpoint /sfs/sanfs/aixfiles/aixhome/.flashcopy/Image06-01-2004

4. Make sure that the .flashcopy directory is excluded from normal Tivoli Storage Manager backups by adding the appropriate exclude.dir option into the dsm.sys file:
exclude.dir /.../.flashcopy

5. This step is for AIX only. If you are configuring this example on a client other than AIX, skip this step. Add the testflag option to the dsm.sys file in order to prevent undesired object updates due to AIX LVM inode number differences between the actual and FlashCopy data:
testflag ignoreinodeupdate

6. Example 12-27 shows the completed dsm.sys file for our environment.
Example 12-27 Example of the dsm.sys file in our environment
Rome:/usr/tivoli/tsm/client/ba/bin >cat dsm.sys
SErvername config1
   COMMmethod          TCPip
   TCPPort             1500
   TCPServeraddress    9.42.164.126
   Nodename            AIXRome
   Passwordaccess      generate
   ***** added for SAN File System *****
   testflag            ignoreinodeupdate
   virtualmountpoint   /sfs/sanfs/aixfiles/aixhome
   virtualmountpoint   /sfs/sanfs/aixfiles/aixhome/.flashcopy/Image06-01-2004
   virtualmountpoint   /sfs/sanfs/aixfiles/aixhome/.flashcopy/Image06-02-2004


7. Perform the Tivoli Storage Manager incremental, selective or archive backup operation. In our case, we performed an incremental backup of the fileset, with the snapshotroot based on Image06-02-2004:
dsmc incr /sfs/sanfs/aixfiles/aixhome/ -snapshotroot=/sfs/sanfs/aixfiles/aixhome/.flashcopy/Image06-02-2004

8. Now that we have backed up the SAN File System data incrementally using the FlashCopy image, we can now delete that FlashCopy image on MDS using the command-line interface:
rmimage -fileset aixfiles aixfiles_fcopy1

In the next section, we introduce the backup scenario based on the snapshotroot option, which will demonstrate how the Tivoli Storage Manager incremental backup using snapshotroot really works.

Scenario: backup using the snapshotroot option


Assume that we have a fileset named aixfiles and that no SAN File System FlashCopy images have been created yet. The fileset's directory initially contains a file named file1.exe.
1. First, we create a SAN File System FlashCopy image for fileset aixfiles, as shown in Example 12-28.
Example 12-28 Create SAN File System FlashCopy image
sfscli> mkimage -fileset aixfiles -dir aixfiles-image-1 aixfiles-image-1
CMMNP5168I FlashCopy image aixfiles-image-1 on fileset aixfiles was created successfully.

2. Next, we will add the virtual mount point definition to our DSM.SYS configuration file and run an incremental backup of the fileset's data using the snapshotroot option, as shown in Example 12-29.
Example 12-29 Run Tivoli Storage Manager backup of the data
Rome:/sfs/sanfs/aixfiles/aixhome >dsmc incr /sfs/sanfs/aixfiles/aixhome/ -snapshotroot=/sfs/sanfs/aixfiles/aixhome/.flashcopy/aixfiles-image-1
IBM Tivoli Storage Manager
Command Line Backup/Archive Client Interface - Version 5, Release 2, Level 2.0
(c) Copyright by IBM Corporation and other(s) 1990, 2003. All Rights Reserved.

Node Name: AIXROME
Session established with server NPSRV2: Windows
  Server Version 5, Release 2, Level 2.0
  Server date/time: 06/08/04 14:29:57  Last access: 06/08/04 14:29:05

Incremental backup of volume '/sfs/sanfs/aixfiles/aixhome/'
Directory-->                  48 /sfs/sanfs/aixfiles/aixhome/lost+found [Sent]
Normal File-->         5,495,760 /sfs/sanfs/aixfiles/aixhome/file1.exe [Sent]
Successful incremental backup of '/sfs/sanfs/aixfiles/aixhome/*'

Total number of objects inspected:        2
Total number of objects backed up:        2
Total number of objects updated:          0
Total number of objects rebound:          0
Total number of objects deleted:          0
Total number of objects expired:          0
Total number of objects failed:           0
Total number of bytes transferred:     5.24 MB
Data transfer time:                    0.44 sec
Network data transfer rate:        11,973.98 KB/sec
Aggregate data transfer rate:       1,775.06 KB/sec
Objects compressed by:                    0%
Elapsed processing time:               00:00:03
Rome:/sfs/sanfs/aixfiles/aixhome >

The file /sfs/sanfs/aixfiles/aixhome/file1.exe has been backed up by Tivoli Storage Manager using the SAN File System image in /sfs/sanfs/aixfiles/aixhome/.flashcopy/aixfiles-image-1/file1.exe.
3. Next, we will add a new file named file2.exe into the /sfs/sanfs/aixfiles/aixhome directory.
4. Now we create a new SAN File System FlashCopy image in the /sfs/sanfs/aixfiles/aixhome/.flashcopy/aixfiles-image-2 directory, as shown in Example 12-30.
Example 12-30 Creating a new FlashCopy image
sfscli> mkimage -fileset aixfiles -dir aixfiles-image-2 aixfiles-image-2
CMMNP5168I FlashCopy image aixfiles-image-2 on fileset aixfiles was created successfully.

5. Now we will add a new virtual mount point for the new SAN File System FlashCopy image aixfiles-image-2 (Example 12-31).
Example 12-31 Adding a new virtual mount point definition into DSM.SYS and run new backup
Rome:/sfs/sanfs/aixfiles/aixhome >cat /usr/tivoli/tsm/client/ba/bin/dsm.sys
SErvername config1
   COMMmethod          TCPip
   TCPPort             1500
   TCPServeraddress    9.42.164.126
   Nodename            AIXRome
   Passwordaccess      generate
   ***** added for SAN File System *****
   testflag            ignoreinodeupdate
   virtualmountpoint   /sfs/sanfs/aixfiles/aixhome
   virtualmountpoint   /sfs/sanfs/aixfiles/aixhome/.flashcopy/aixfiles-image-1
   virtualmountpoint   /sfs/sanfs/aixfiles/aixhome/.flashcopy/aixfiles-image-2


6. Now, run the backup again, using the snapshotroot option pointing to the latest FlashCopy image, aixfiles-image-2 (Example 12-32).
Example 12-32 Run backup again, this time using the aixfiles-image-2 image
Rome:/sfs/sanfs/aixfiles/aixhome >dsmc incr /sfs/sanfs/aixfiles/aixhome/ -snapshotroot=/sfs/sanfs/aixfiles/aixhome/.flashcopy/aixfiles-image-2
IBM Tivoli Storage Manager
Command Line Backup/Archive Client Interface - Version 5, Release 2, Level 2.0
(c) Copyright by IBM Corporation and other(s) 1990, 2003. All Rights Reserved.

Node Name: AIXROME
Session established with server NPSRV2: Windows
  Server Version 5, Release 2, Level 2.0
  Server date/time: 06/08/04 14:45:17  Last access: 06/08/04 14:29:57

Incremental backup of volume '/sfs/sanfs/aixfiles/aixhome/'
Normal File-->         5,495,760 /sfs/sanfs/aixfiles/aixhome/file2.exe [Sent]
Successful incremental backup of '/sfs/sanfs/aixfiles/aixhome/*'

Total number of objects inspected:        3
Total number of objects backed up:        1
Total number of objects updated:          0
Total number of objects rebound:          0
Total number of objects deleted:          0
Total number of objects expired:          0
Total number of objects failed:           0
Total number of bytes transferred:     5.24 MB
Data transfer time:                    0.40 sec
Network data transfer rate:        13,141.99 KB/sec
Aggregate data transfer rate:       1,771.75 KB/sec
Objects compressed by:                    0%
Elapsed processing time:               00:00:03

As you can see, the /sfs/sanfs/aixfiles/aixhome directory has been backed up incrementally, this time using the aixfiles-image-2 image. Therefore, we just backed up the newly added file, file2.exe.
7. Now we will create the file3.exe file in the /sfs/sanfs/aixfiles/aixhome directory.
8. Next, we will make a SAN File System FlashCopy image named aixfiles-image-3, as shown in Example 12-33.
Example 12-33 Making another SAN File System FlashCopy image
sfscli> mkimage -fileset aixfiles -dir aixfiles-image-3 aixfiles-image-3
CMMNP5168I FlashCopy image aixfiles-image-3 on fileset aixfiles was created successfully.

9. Next, we add another file named file4.exe.
10. Finally, we will run a backup, pointing the snapshotroot to the aixfiles-image-3 SAN File System FlashCopy image, as shown in Example 12-34 (do not forget to add a new virtual mount point for the aixfiles-image-3 image to the DSM.SYS configuration file). In this case, only file3.exe is backed up and file4.exe is ignored. Why? Because we added file4.exe to the actual file system directory and did not generate a new SAN File System FlashCopy image afterwards. The FlashCopy image aixfiles-image-3 does not contain an image of file4.exe, because the file was created after the image was taken. Therefore, file4.exe is not backed up. This is how the snapshotroot option works: in each case, the fileset is backed up incrementally, using the specified FlashCopy image as a base.
Example 12-34 Final backup
Rome:/sfs/sanfs/aixfiles/aixhome >dsmc incr /sfs/sanfs/aixfiles/aixhome/ -snapshotroot=/sfs/sanfs/aixfiles/aixhome/.flashcopy/aixfiles-image-3
IBM Tivoli Storage Manager
Command Line Backup/Archive Client Interface - Version 5, Release 2, Level 2.0
(c) Copyright by IBM Corporation and other(s) 1990, 2003. All Rights Reserved.

Node Name: AIXROME
Session established with server NPSRV2: Windows
  Server Version 5, Release 2, Level 2.0
  Server date/time: 06/08/04 15:07:56  Last access: 06/08/04 14:45:17

Incremental backup of volume '/sfs/sanfs/aixfiles/aixhome/'
Normal File-->         5,495,760 /sfs/sanfs/aixfiles/aixhome/file3.exe [Sent]
Successful incremental backup of '/sfs/sanfs/aixfiles/aixhome/*'

Total number of objects inspected:        4
Total number of objects backed up:        1
Total number of objects updated:          0
Total number of objects rebound:          0
Total number of objects deleted:          0
Total number of objects expired:          0
Total number of objects failed:           0
Total number of bytes transferred:     5.24 MB
Data transfer time:                    0.40 sec
Network data transfer rate:        13,115.34 KB/sec
Aggregate data transfer rate:       1,768.19 KB/sec
Objects compressed by:                    0%
Elapsed processing time:               00:00:03

As you can see, our assumption that only the file3.exe file would be backed up was right. The Tivoli Storage Manager backup client searches the actual data directory /sfs/sanfs/aixfiles/aixhome for existing objects to be backed up, but for the backup itself, it uses the SAN File System FlashCopy directory specified by the snapshotroot option, in our case, /sfs/sanfs/aixfiles/aixhome/.flashcopy/aixfiles-image-3.


The above scenario also explains the role of the virtual mount point entries in the DSM.SYS configuration file. As you can see in Example 12-31 on page 515, there is one virtual mount point created for the /sfs/sanfs/aixfiles/aixhome directory. This entry tells the Tivoli Storage Manager server to create and use a new, separate file space for the /sfs/sanfs/aixfiles/aixhome directory. See the output of the q filesp command from the Tivoli Storage Manager command-line interface shown in Example 12-35.
Example 12-35 Query filesp command output from Tivoli Storage Manager CLI interface
tsm: NPSRV2>q filesp

              Node Name: AIXROME
        File space Name: /sfs/sanfs/aixfiles/aixhome
                   FSID: 10
               Platform: AIX
        File space Type: SANFS
 Is File space Unicode?: No
          Capacity (MB): 352,800.0
               Pct Util: 8.1

Simply put, if you did not specify a virtual mount point for the /sfs/sanfs/aixfiles/aixhome directory (which is also the attach point of the SAN File System fileset aixfiles), and then ran a backup, the TSM file space name would be /sfs only (as shown in Example 12-25 on page 511) and you would not be able to run incremental backups using the snapshotroot option. So, why do we need virtual mount point entries in DSM.SYS for all of our SAN File System FlashCopy images? The reason is that you can only specify a mount point to the snapshotroot option, not a directory. If the virtualmountpoint entry for the aixfiles-image-3 image was not made, and you tried to run a backup with the snapshotroot option pointing to the /sfs/sanfs/aixfiles/aixhome/.flashcopy/aixfiles-image-3 directory, the Tivoli Storage Manager client would generate an error message, as shown in Example 12-36 below.
Example 12-36 Need for dsm.sys virtual mountpoint entries for SAN File System FlashCopy images
Rome:/sfs/sanfs/aixfiles/aixhome >dsmc incr /sfs/sanfs/aixfiles/aixhome/ -snapshotroot=/sfs/sanfs/aixfiles/aixhome/.flashcopy/aixfiles-image-3
ANS7533E The specified drive '/sfs/sanfs/aixfiles/aixhome/.flashcopy/aixfiles-image-3' does not exist or is not a local drive.

Conclusions
Using the snapshotroot option with your Tivoli Storage Manager backup client gives you the ability to use SAN File System FlashCopy images to back up data while still using the incremental backup method. In order to use the snapshotroot option, you need to add a virtualmountpoint entry for the actual fileset and also for each and every SAN File System FlashCopy image generated for that particular fileset.

You can avoid manual modification of your DSM.SYS file (to add a specific virtualmountpoint entry for each new image) by choosing a standard naming convention for your SAN File System FlashCopy images (Image-1, Image-2, Image-3, and so on). Since SAN File System supports a maximum of 32 FlashCopy images, you can predefine all of your virtual mount points in your DSM.SYS configuration file and then automate the backup process using scripts.
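To illustrate how such automation might look, the following korn shell script is a minimal sketch only. It combines the mkimage, dsmc -snapshotroot, and rmimage steps from the scenario above; the MDS host name, the sfscli path, the use of ssh to reach the MDS, and the fixed image name are all assumptions made for this example, not requirements of the product.

#!/bin/ksh
# Sketch: incremental TSM backup of a SAN File System fileset from a fresh
# FlashCopy image, reusing a fixed image name that already has a matching
# virtualmountpoint entry in dsm.sys.
FILESET=aixfiles                           # fileset to protect
ATTACH=/sfs/sanfs/aixfiles/aixhome         # fileset attach point on this client
IMAGE=Image-1                              # predefined image directory name
MDS=tank-mds1                              # hypothetical master MDS host name
SFSCLI=/usr/tank/admin/bin/sfscli          # assumed sfscli location on the MDS

# 1. Create the FlashCopy image on the master MDS (assumes ssh key authentication).
ssh root@${MDS} "${SFSCLI} mkimage -fileset ${FILESET} -dir ${IMAGE} ${FILESET}-${IMAGE}" || exit 1

# 2. Back up the live directory incrementally, reading the data from the image.
dsmc incremental ${ATTACH}/ -snapshotroot=${ATTACH}/.flashcopy/${IMAGE}
rc=$?

# 3. Remove the image so that the same name can be reused on the next run.
ssh root@${MDS} "${SFSCLI} rmimage -fileset ${FILESET} ${FILESET}-${IMAGE}"
exit ${rc}

Reusing a single image name in this way keeps the list of virtualmountpoint entries in dsm.sys short; the alternative, as described above, is to predefine entries for a rotating set of image names.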


Chapter 13. Problem determination and troubleshooting


In this chapter, we cover the following topics:
- Remote Access Support
- Event handling
- Data collection
- Remote Supervisor Adapter II
- Simple Network Management Protocol (SNMP)
- Hints and tips
- Log/trace message format

Detailed information about this topic can also be found in the manual IBM TotalStorage SAN File System Maintenance and Problem Determination Guide, GA27-4318.


13.1 Overview
This chapter covers the different features and capabilities for performing problem determination (PD) for SAN File System. We have previously described the major functions and features of SAN File System. There are several possibilities and layers to aid in monitoring the system and providing problem determination support. This chapter will describe these items and show how they can be used.

13.2 Remote access support


Remote access to the engines in a SAN File System configuration is a feature provided to enhance the effectiveness and responsiveness of IBM support and service personnel when addressing SAN File System problems. With remote access, an IBM support representative can diagnose problems while not physically located at the client site. Full secure access to the Master Console is provided. Remote access (RA) support is built upon a collection of tools and technologies, some common to the computing industry in general, and others specific to the Master Console and the IBM Storage Software family. This section provides an overview of the feature, its uses, and the underlying technology. Note that you will only get the remote access feature if you deploy the Master Console.

How does the IBM support representative gain remote access to a SAN File System configuration? The support engineer uses a connection ID and a Master Console account to establish a secure connection over the VPN tunnel. To set this up:
1. The client logs into the Master Console.
2. From the Master Console, the client initiates a secure connection to the IBM Virtual Private Network (VPN) server in the IBM demilitarized (yellow) zone (DMZ). The application on the Master Console that initiates the VPN connection is called the IBM Connection Manager, as shown in Figure 13-1.

Figure 13-1 IBM Connection Manager


Note: If a firewall is used, be aware that many firewalls disable VPN connections by default (for example, the Cisco PIX firewall). Consult your firewall documentation on how to enable VPN traffic.

3. The support engineer obtains the Customer Connection ID for the newly established secure VPN connection from the client.
4. The support engineer establishes a secure connection to the VPN server.
5. Using the connection ID and an account on the Master Console, the support representative establishes secure access to the Master Console over the VPN tunnel.
6. The support representative connects to the SAN File System MDS using SSH.

This process is shown in Figure 13-2.

Figure 13-2 Steps for remote access (support desktop PC, through the IBM VPN gateway and the VPN tunnel, to the customer Master Console and then to the SAN File System Metadata server cluster)

13.3 Logging and tracing


SAN File System provides various logging and tracing mechanisms for use with the MDS and clients. Logging messages provide a trail of routine system activities, that is, operations that occur in normal day-to-day use of the product; as such, they are of interest to clients and administrators, as well as to support and services personnel. The contents of the logs, together with the observed symptoms, will be used as a basis to begin isolating problems. For problems that seem to be related to:
- Administrative user access: See 13.3.3, Administrative and security logs on page 528.
- Cluster, Metadata servers, or metadata: See 13.3.2, Metadata server logs on page 525.
- Clients: See 13.3.5, Client logs and traces on page 530.


13.3.1 SAN File System Message convention


The format of the log message received will help you determine the type of error. The message format is shown in Figure 13-3. The XXX, YY, and Z fields are alphabetic, and the nnnn field is a 4-digit number. Table 13-3 on page 547 shows the detailed list of values for the Component and Sub Component fields, which can be used to decode the message. For example, HSTAD001I would be an informational Basic Administration Message from the SAN File System Administration Service.

Figure 13-3 SAN File System message format: XXX = Component, YY = Sub Component, nnnn = Message Number, Z = Severity Level (I=Informational, W=Warning, E=Error, S=Severe)
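Because the severity is encoded in the final character of the message identifier, you can also filter the raw log files on an MDS engine by severity with standard tools. The following command is only a sketch, based on the message format above and the server log location described in 13.3.2; adjust the pattern if your message identifiers differ:

# Show only Error (E) and Severe (S) messages in the raw server log
grep -E '[A-Z]{5}[0-9]{4}[ES] ' /usr/tank/server/log/log.std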

General log structure


Each of the MDS logs has a maximum size of 250 MB. Once a log file reaches its maximum size, it is renamed to a .old extension, for example, a full log.std file is renamed log.std.old. An existing .old file is overwritten when a subsequent .old file of the same name is created. The log.std file is then cleared and used for new messages of the indicated type. In this way, 500 MB of each type of log data is maintained. The contents of the log files persist on each server restart. If you access these logs through the master MDS, you will see a consolidated view of the logs from each of the MDSs in the cluster. If you access these logs through a subordinate MDS, you will only see the logs for that particular MDS. To view the consolidated server log, start sfscli and type catlog. To get help on what parameters and options are available to use with catlog, type help catlog, as shown in Example 13-1.
Example 13-1 Options to use with catlog sfscli> help catlog catlog Displays the contents of the various log files maintained by the administrative server and the cluster.

>>-catlog--+--------+--+--------------------+-------------------> +- -?----+ '- -entries--+-25--+-' +- -h----+ +-50--+ '- -help-' +-75--+ '-100-'


>--+---------------------+--+--------------------+--------------> | .-cluster--. | '- -date--YYYY-MM-DD-' '- -log--+-admin----+-' +-audit----+ +-event----+ '-security-' .-,--------. V | >--+---------------------+-- -level----+-info-+-+-------------->< '- -order--+-newest-+-' +-err--+ Press Enter To Continue... '-oldest-' +-warn-+ '-sev--'

Parameters -? | -h | -help Displays a detailed description of this command, including syntax, parameter descriptions, and examples. If you specify a help option, all other command options are ignored. -entries Specifies the number of log entries to show at a time, from oldest to newest. Valid values are 25, 50, 75, or 100. If not specified, this command shows the entire log. -log Displays entries in the specified log, ordered by time stamp starting with the most recent entry. The default is cluster. Displays entries in the administrative log, which maintains a history of messages created by the administrative server. Press Enter To Continue... admin

audit

Displays entries in the audit log, which maintains a history of all commands issued by any administrator for all metadata servers in the cluster. Displays entries in the cluster log, which maintains a history of messages created by all Metadata servers in the cluster. Displays event entries in the event log, which maintains a history of event messages issued by all Metadata servers in the cluster.

cluster

event

security Displays entries in the security log, which maintains a history of administrative-user login activity. -date Specifies the date at which you want the displayed log entries to start. The date must be in the format YYYY-MM-DD, where YYYY is the year, MM is the month, and DD is the day. This date must be the current date or older. Future dates are not acceptable.


Press Enter To Continue... -order Specifies the direction of the displayed log entries. You can specify one of the following values: newest Displays the log entries form newest to oldest. This is the default value if the -date parameter is not specified. Displays the log entries form oldest to newest. This is the default value if the -date parameter is specified.

oldest

-level info | err | warn | sev Specifies the severity level of the displayed log entries. If not specified, all severity levels are displayed. You can specify one or more levels, separate by a comma and no space (for example, -level info,warn,err)

Description If you run this command from an engine hosting a subordinate metadata server, logs for only the local engine are displayed. If you run this command from the engine hosting the master Metadata server, logs for the entire cluster are displayed. Press Enter To Continue...

If there are log entries that have not been displayed, you are prompted to press Enter to display the next set of entries or to type exit and press Enter to stop. This command displays the following information for the specified log: * * * * * * Message identifier. Severity level (Info, Error, Warning, Severe). Message type (Normal or Event). Name of the Metadata server that generated the message. Date and time the message was generated. Message description.

Tip: It is important that the date and time are correct on the metadata servers so messages are logged correctly and log entries are displayed correctly when using the -time parameter.

Example Display the event entries in the cluster log The following example displays the error messages in the event log that occurred on or after Press Enter To Continue... January 4, 2003. sfscli> catlog -log event -date 2003-01-04 -level err ID Level Type Server Date and Time


============================================================================================
HSTSS0009E Error Event ST3 Jan 2, 2003 8:39:15 PM  The Metadata server rpm is not installed.
HSTNL0019E Error Event ST2 Jan 2, 2003 8:40:46 PM  Unable to extract boot record when server is running.

13.3.2 Metadata server logs


This section describes where the SAN File System metadata server logs are stored. The following logs for the metadata server are stored on each MDS engine (see Table 13-1).
Table 13-1 MDS log files

Log               File name        Location               Maximum file size
Audit log         log.audit        /usr/tank/server/log   250 MB
Dump log          log.dmp          /usr/tank/server/log   -
Server log        log.std          /usr/tank/server/log   250 MB
Trace log         log.trace        /usr/tank/server/log   250 MB
RSA command log   log.stopengine   /usr/tank/server/log   -

Although the Audit Log (log.audit), Trace Log (log.trace), and Server Log (log.std) have a maximum file size of 250 MB, SAN File System actually stores 500 MB of data for each of these logs. When any of these logs reaches its maximum size, it is renamed to include the extension .old. If a file by that name already exists, SAN File System overwrites the existing file. Then the log is cleared so that it can start accepting new messages again. The log.dmp file starts over when the metadata server restarts (for example, after a server crash), at the start of each day, or when the file reaches a size of 1 MB.

When you display these logs from the master metadata server using either the administrative command-line interface or the SAN File System console, you see a consolidated view of all the logs from each engine in the cluster. The consolidated view of the server message log is called the Cluster log.

Note: You can also display the Event Log. This log is actually a subset of the messages stored in the Cluster Log. It contains only messages with a message type of event.
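A quick way to see how much of this log history is currently held on an engine is simply to list the log directory shown in Table 13-1; for example:

# Check the sizes of the current MDS logs and any .old predecessors
ls -lh /usr/tank/server/log/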


Event log
The event log, /usr/tank/server/log/log.std (Example 13-2), records normal and event messages (including error messages) for the SAN File System MDS. These messages capture routine server activity and error conditions, and are always enabled. This log file always exists for each running instance of a SAN File System MDS.
Example 13-2 Sample log message written to log.std
2004-06-03 05:51:40 INFORMATIONAL HSTPG0022I N mds1 Now running as the cluster master.

You can view the log in sfscli using catlog -log event, as shown in Example 13-3.
Example 13-3 View the server log sfscli> catlog -log event -date 2004-06-04 ID Level Server Date and Time Message =========================================================================================== ============================ HSTCM0395W Warning mds4 Jun 04, 2004 3:20:01 AM Alert. The server state has changed from Online(10) to NotRunning(0). HSTCM0396E Error mds3 Jun 04, 2004 7:59:57 AM Alert. The server state has changed from Online(10) to Joining(5). HSTCM0394I Info mds3 Jun 04, 2004 7:59:57 AM Alert. The server state has changed from Joining(5) to Online(10).

Note that the catlog sfscli command, when run from the master metadata server, returns cluster-wide logs. When run from a subordinate MDS, catlog only returns logs for the local server.

Audit log
The audit log /usr/tank/server/log/log.audit (Example 13-4) contains administrative audit messages. Audit messages are generated in response to operations performed by the SAN File System administrative server. It does not capture every administrative operation, but records all commands that modify system or user metadata, including commands that would have made such a change but failed. The file also records the user ID issuing the command, along with time stamp and completion status of the requested operation. The file does not record simple query operations; such operations do not alter metadata, and since they are likely to be more numerous than those that do, their presence could easily overwhelm logging and interpretation of more meaningful operations. Audit logging is always enabled, and the log file always exists for each running instance of an MDS.
Example 13-4 Sample audit message written to log.audit 2004-06-03 21:19:01 INFORMATIONAL HSTAD0019I A mds1 User Name: ITSOAdmin Command Name: ServerServiceStopService Parameters: SYSTEMCREATIONCLASSNAME=STC_ComputerSystem SYSTEMNAME=mds4 CREATIONCLASSNAME=STC_TankService NAME=TankService . Command Succeeded.

You can use OS utilities (for example, cat or vi) to view the actual file, or within sfscli, use catlog -log audit, as shown in Example 13-5 on page 527.


Example 13-5 View the audit log sfscli> catlog -log audit -date 2004-06-03 ID Level Server Date and Time Message

=========================================================================================== HSTAD0019I Info mds4 Jun 03, 2004 2:56:54 AM User Name: ITSOAdmin Command Name: Filesetlistassociatedpools Parameters: NAME=user1 . Command Succeeded.

Trace log
The /usr/tank/server/log/log.trace file receives trace messages. Because a minimal amount of tracing is always enabled to support first-failure data capture, this file (Example 13-7 on page 528) will always exist. However, the number of messages and the level of detail the messages convey is highly dependent on the current trace settings for the server in question. The default level of tracing active at all times is 0, which sends only the most important messages and is useful for providing initial first-failure data capture (FFDC) information.

Trace messages provide details about the execution of internal code paths: a look inside the black box. Tracing is therefore of interest primarily to IBM support, service, and development, and typically clients would only change settings at their direction. Higher levels of tracing can generate significant CPU activity; therefore, its use should be limited to where necessary. Tracing can be enabled via the GUI, or through the CLI using the trace command. You can control:
- When tracing begins and ends
- The MDS components for which tracing will occur
- The level of detail (verbosity) to show during tracing

To get help on the parameters available with the trace command, enter legacy trace from the sfscli session, as shown in Example 13-6.
Example 13-6 Trace options sfscli> legacy trace trace: Trace Command Help -----------------trace enable [ module ] - if module is omitted, the enabled modules are displayed - if module is given, the module will emit messages trace disable [ module ] - if module is omitted, the disabled modules are displayed - if module is given, the module will stop emitting messages trace list - displays a list of trace modules in the server trace verbosity [0 - 9] - if value is omitted, the current verbosity is printed - if value is given, it sets the volume of tracing output (0 = min, 9 = max) trace emit "string" - emits the specified string to the trace log NOTE: Module names can specified with wildcard (* or ?) characters.
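For example, based on the help text above, IBM support might ask you to raise the verbosity and enable a particular trace module; the commands below are a sketch of what that could look like (the module name is purely a placeholder):

sfscli> legacy trace list
sfscli> legacy trace verbosity 3
sfscli> legacy trace enable <module_name>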


An example trace log is in Example 13-7.


Example 13-7 Sample trace message written to log.trace 2003-04-16 12:55:02:169400 TID:1024 INFORMATIONAL HSTPG0075I T PGM:PGMREP mdsnode.ibm.com First-fail data capture tracing has been started and is functional.

13.3.3 Administrative and security logs


For problems that seem to be related to administrative user access, use the security log and the administrative log to determine the cause of the problem. The following logs for the administrative server are stored on the engine hosting those servers (see Table 13-2).
Table 13-2 Administrative logs

Log                  File name      Location              Maximum file size
Administrative log   cimom.log      /usr/tank/admin/log   -
Console log          console.log    /usr/tank/admin/log   -
Security log         security.log   /usr/tank/admin/log   -
Standard error       stderr.log     /usr/tank/admin/log   -
Standard out         stdout.log     /usr/tank/admin/log   -

Administrative log
The administrative log (/usr/tank/admin/log/cimom.log) contains messages generated by the Administrative server. If, from the master metadata server, you display the administrative log from either the administrative command-line interface or the SAN File System console, all administrative logs on all engines in the cluster are consolidated into a single view (see Example 13-8).
Example 13-8 Cimom log 2005-08-26 07:17:48-08:00 I CMMOM0203I **** CIMOM Server Started **** 2005-08-26 07:17:48-08:00 I CMMOM0204I CIMOM Version: 1.2.0.21 2005-08-26 07:17:48-08:00 I CMMOM0205I CIMOM Build Date: 06/13/05 Build Time: 03:51:40 PM 2005-08-26 07:17:48-08:00 I CMMOM0206I OS Name: Linux Version: 2.4.21-231-smp 2005-08-26 07:17:48-08:00 I CMMOM0200I SSG/SSD CIM Object Manager 2005-08-26 07:17:51-08:00 I CMMOM0410I Authorization is active 2005-08-26 07:17:51-08:00 I CMMOM0400I Authorization module = com.ibm.storage.storagetank.auth.SFSLocalAuthModule 2005-08-26 07:17:51-08:00 I CMMOM0901I IndicationProcessor started 2005-08-26 07:17:51-08:00 I CMMOM0906I No pre-existing indication subscriptions 2005-08-26 07:17:51-08:00 I CMMOM0404I Security server starting on port 5989

Security log
The security log (/usr/tank/admin/log/security.log) displays the administrative user login activity for the Administrative server. If you display these logs from either the CLI or the SAN File System console, all administrative and security logs on all engines in the cluster are consolidated into a single view. To view this consolidated log using the CLI, use catlog -log security for the security log (see Example 13-9 on page 529).


Example 13-9 Security log sfscli> catlog -log security ID Level Type Server Date and Time Message ========================================================================================== CIMOM[com.ibm.http.HTTPServer.SecurityServer(HTTPServer.java:430)]: Info tank-mds3 Aug 26, 2005 2:17:51 PM The creation date of KeyStore is Fri Aug 26 07:16:49 PDT 2005 CIMOM[com.ibm.http.TrustStoreThread.run(TrustStoreThread.java:78)]: Info tank-mds3 Aug 26, 2005 2:17:51 PM The current date is Fri Aug 26 07:17:51 PDT 2005 CMMOM0302I Info tank-mds3 Aug 26, 2005 2:20:16 PM User (null) on client localhost could not be authenticated CIMOM[com.ibm.http.TrustStoreThread.run(TrustStoreThread.java:78)]: Info tank-mds3 Aug 27, 2005 2:17:51 PM The current date is Sat Aug 27 07:17:51 PDT 2005 CIMOM[com.ibm.http.TrustStoreThread.run(TrustStoreThread.java:78)]: Info tank-mds3 Aug 28, 2005 2:17:51 PM The current date is Sun Aug 28 07:17:51 PDT 2005

Use catlog -log admin for the administrative log, as shown in Example 13-10.
Example 13-10 Consolidated administrative log from the CLI sfscli> catlog -log admin ID Level Type Server Date and Time Message ========================================================================================== CMMOM0203I Info Normal tank-mds3 Aug 26, 2005 2:17:48 PM **** CIMOM Server Started **** CMMOM0204I Info Normal tank-mds3 Aug 26, 2005 2:17:48 PM CIMOM Version: 1.2.0.21 CMMOM0205I Info Normal tank-mds3 Aug 26, 2005 2:17:48 PM CIMOM Build Date: 06/13/05 Build Time: 03:51:40 PM CMMOM0206I Info Normal tank-mds3 Aug 26, 2005 2:17:48 PM OS Name: Linux Version: 2.4.21-231-smp CMMOM0200I Info Normal tank-mds3 Aug 26, 2005 2:17:48 PM SSG/SSD CIM Object Manager CMMOM0410I Info Normal tank-mds3 Aug 26, 2005 2:17:51 PM Authorization is active CMMOM0400I Info Normal tank-mds3 Aug 26, 2005 2:17:51 PM Authorization module = com.ibm.storage.storagetank.auth.SFSLocalAuthModule CMMOM0901I Info Normal tank-mds3 Aug 26, 2005 2:17:51 PM IndicationProcessor started CMMOM0906I Info Normal tank-mds3 Aug 26, 2005 2:17:51 PM No pre-existing indication subscriptions CMMOM0404I Info Normal tank-mds3 Aug 26, 2005 2:17:51 PM Security server starting on port 5989 CMMOM0402I Info Normal tank-mds3 Aug 26, 2005 2:17:51 PM Platform is Unix CMMOM0901I Info Normal mds1 May 06, 2004 3:19:58 AM IndicationProcessor was started CMMOM0906I Info Normal mds1 May 06, 2004 3:19:58 AM No preexisting indication subscriptions CMMOM0404I Info Normal mds1 May 06, 2004 3:19:58 AM Security server starting on port 5989 CMMOM0402I Info Normal mds1 May 06, 2004 3:19:58 AM Platform is Unix


13.3.4 Consolidated server message logs


The consolidated view of the server message log is called the cluster log. To view it, use catlog -log cluster, as shown in Example 13-11 (extract).
Example 13-11 Cluster log sfscli> catlog -log cluster ID Level Type Server Date and Time Message =========================================================================================== HSTOP0021W Warning Normal mds1 May 05, 2004 5:23:55 AM Warning. The -servername option was specified more than once. HSTOP0031W Warning Normal mds1 May 05, 2004 5:23:55 AM The -servername option has already been set. Further changes are not permitted. HSTEV0005I Info Normal mds1 May 05, 2004 5:23:55 AM The machine's serial number is 23RG610. The machine's model number is 41461RX. HSTPG0041I Info Normal mds1 May 05, 2004 5:23:55 AM ****************************SERVER STARTED****************************

13.3.5 Client logs and traces


Logging and tracing is provided on each SAN File System client to help diagnose client-related problems. There is also a client dump facility to help with data collection in the event that IBM Support requests this information.

Windows client logging


Windows client log messages are written to the standard Event Log (Figure 13-4 on page 531). To view this log, select Start → Programs → Administrative Tools → Event Viewer.


Figure 13-4 Event viewer on Windows 2000

You can use the IBM eGatherer to collect all necessary logs needed for IBM Technical Support. It is available from:
http://www.ibm.com/pc/support/site.wss/document.do?lndocid=MIGR-4R5VKC

Windows client tracing


Windows client trace messages are written to the file C:\Program Files\IBM\Storage Tank\Client\log\sanfs.log. Tracing has to be enabled first (use the client command stlog to enable and control Windows client tracing; see the IBM TotalStorage SAN File System Maintenance and Problem Determination Guide, GA27-4318 for more details). You can view the trace log using a standard text editor, for example, WordPad. Example 13-12 shows some sample output from the Windows trace log.
Example 13-12 SAN File System client logs on Windows #E 6142412640|889BF4E0 TIcCreate::OpenRootDir:1763 B8F667A4 \ Throwing status: STATUS_FILE_IS_A_DIRECTORY (C00000BA) #E 6144423406|88C7A520 csmLease TStreamSocket::Send:1710 88D56B28 CheckStatus failed: STATUS_IO_TIMEOUT (C00000B5) #E 6144423406|88C79880 csmXmit TStreamSocket::Send:1710 88D56B28 CheckStatus failed: STATUS_IO_TIMEOUT (C00000B5) #E 6144423406|88C77CC0 csmRecv TStreamSocket::Receive:1787 88D56B28 CheckStatus failed: STATUS_IO_TIMEOUT (C00000B5) #E 6144424406|887397A0 Reassert TStreamSocket::Disconnect:1864 88D56B28 CheckStatus failed: STATUS_CONNECTION_INVALID (C000023A)


Windows client dump


You can configure Windows to generate a dump file if the Windows SAN File System client terminates abnormally. By default, the file is C:\WINNT\memory.dmp; however, this is configurable. If the SAN File System client hangs, you can force the creation of a dump file, but you must first configure the system to allow it. To do this:
1. Make sure that the system is configured to generate a dump file. Select Start → Settings → Control Panel → System → Advanced → Startup and Recovery → System Failure to verify the settings.
2. Use the registry editor (regedit) to modify the following registry setting (a .reg file sketch follows):
   Hive: HKEY_LOCAL_MACHINE
   Key: System\CurrentControlSet\Services\i8042prt\Parameters
   Name: CrashOnCtrlScroll
   Data Type: REG_DWORD
   Value: 1
   A value of 1 enables the feature.
Once this is correctly set up, to create a dump file, press and hold the right Ctrl key while pressing ScrLk twice on the keyboard. The dump file is generated in the specified location the next time you power on the machine.
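Rather than editing the registry interactively, the same value can be applied by importing a .reg file; the following is a sketch of the setting described above:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\i8042prt\Parameters]
"CrashOnCtrlScroll"=dword:00000001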

AIX client logging and tracing


Use the stfsdebug command and the syslog facility to enable tracing and logging.

syslog facility
The SAN File System client for AIX generates both log and trace messages, which are routed through the syslog facility on the AIX operating system. The syslog facility captures log and trace output from the kernel as well as other operating system services. By default, the syslog facility discards all kernel output. However, you can configure the syslog facility to specify a destination for the messages by modifying /etc/syslog.conf.

Specifying a file as the destination: You can specify a file to receive kernel messages, such as /var/adm/ras/messages, for example. To specify that file, perform the following steps (a consolidated sketch follows at the end of this section):
1. Create /var/adm/ras/messages if it does not already exist. You can use the AIX touch command to create an empty file.
2. Edit /etc/syslog.conf and insert this line:
kern.debug /var/adm/ras/messages

3. Restart the syslogd daemon:
   a. Run kill -hup syslogd_PID.
   b. Refer to the AIX Commands Reference for more information about the syslogd daemon.

Specifying the console as the destination:

Note: If you specify the console as the destination, messages are also written to /var/spool/mqueue/syslog.

To specify the console as the destination for kernel messages, perform the following steps:
1. Edit /etc/syslog.conf and insert the line kern.debug /dev/console using vi, for example.

2. Restart the syslogd daemon:


kill -hup syslogd_PID

3. Refer to the AIX Commands Reference for more information about the syslogd daemon.
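As a convenience, the file-destination setup above can be scripted. The following is only a sketch and assumes that the syslogd process ID can be located with ps; you could equally refresh syslogd through the AIX SRC if it is managed there:

# Sketch: route kernel (and therefore SAN File System client) messages to a file on AIX
touch /var/adm/ras/messages
printf 'kern.debug\t/var/adm/ras/messages\n' >> /etc/syslog.conf
kill -HUP $(ps -eo pid,comm | awk '$2 == "syslogd" {print $1}')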

stfsdebug command
You can use the stfsdebug command to enable tracing for an AIX client. In addition, you can specify which components (called classes) are traced as well as the level of detail to include. You can also use stfsdebug to query the current status of all trace classes. The stfsdebug command requires the full path name of the SAN File System kernel module loaded on the client machine, which you can find by viewing the client configuration file (stclient.conf). The trace output enabled by stfsdebug is sent to the syslog facility.

AIX client dump


You can initiate a kernel dump of an AIX SAN File System client if it is still running but no longer responding to commands. You can initiate a dump in two ways:
- sysdumpstart: If you can get to a system prompt via telnet or SSH, issue the sysdumpstart command to take a kernel dump. Refer to the AIX documentation for more information.
- At the system: If you cannot establish remote access to the client, and the console also does not respond, you can initiate a kernel dump by pressing the system reset button.

Linux client logging and tracing


Configure the syslog facility and select one or more SAN File System classes to enable tracing and logging on the SAN File System Linux client.

syslog facility
The SAN File System client for Linux generates both log and trace messages, which are routed through the syslog facility on the Linux operating system. The syslog facility captures log and trace output from the kernel as well as other operating system services. By default, the syslog facility discards all kernel output. However, you can configure the syslog facility to specify a destination for the messages by modifying /etc/syslog.conf.

Specifying a file as the destination: You can specify a file to receive kernel messages, such as /var/log/messages, for example. To specify that file, perform the following steps (a scripted sketch follows at the end of this section):
1. Create /var/log/messages if it does not already exist. You can use the Linux touch command to create an empty file.
2. Edit /etc/syslog.conf and insert this line:
kern.debug /var/log/messages

3. Restart the syslogd daemon:


/sbin/service syslog restart

4. Refer to the Linux man page for syslogd for more information about the syslogd daemon.

Specifying the console as the destination: To specify the console as the destination for kernel messages, perform the following steps:
1. Edit /etc/syslog.conf and insert (or uncomment) the line kern.debug /dev/console.


2. Restart the syslogd daemon:


/sbin/service syslog restart

3. Refer to the Linux man page for syslogd for more information about the syslogd daemon.
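The equivalent one-time setup on a Linux client can also be scripted; this sketch assumes a Red Hat-style service command, as used in the steps above:

# Sketch: route kernel (and therefore SAN File System client) messages to /var/log/messages
grep -q '^kern\.debug' /etc/syslog.conf || \
    printf 'kern.debug\t/var/log/messages\n' >> /etc/syslog.conf
/sbin/service syslog restart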

13.4 SAN File System data collection


SAN File System provides a utility to collect logs and system information, known as the One-Button Data Collection Toolset for SAN File System (OBDC) utility, or first-failure data capture. The main goal of the OBDC utility is to reduce support cost and allow client problems to be resolved as quickly as possible. In general, the OBDC utility would be run either by the IBM service organization, or with the assistance of IBM service. You can invoke the OBDC utility in one of the following ways:

For an engine in the cluster:
- From the SAN File System console (browser), select Maintain System → Collect Diagnostic Data (see Figure 13-5).

Figure 13-5 OBDC from GUI

- From the CLI, run /usr/tank/server/bin/obdc to collect the default data, or add additional parameters to customize the data collection (Example 13-13).
Example 13-13 OBDC from CLI tank-mds3:/tmp # /usr/tank/server/bin/obdc One-Button Data Collection Toolset for SAN File System This program will gather relevant system information for the purpose of diagnosing SANFS failures. All collected data will be stored in an archive which you can then examine before returning it to IBM for analysis. No private data will be transmitted by this program without your permission.


You can type 'obdc --help' for more options. OUTPUT DIRECTORY: /usr/tank/OBDC/ TEMPORARY DIRECTORY: /tmp/ SANFS HOME DIRECTORY: /usr/tank/ Do you want to continue? [yes/no] yes 1. Collecting Administrative Configuration Files (0%) 2. Collecting Administrative Log Files (1%) 3. Collecting SANFS Administrative Server Version (3%) 4. Collecting Administrative Legacy Overflow File (5%) 5. Collecting Attached Storage (7%) 6. Collecting HBA Devices (8%) 7. Collecting Network Devices (10%) 8. Collecting PCI Devices (12%) 9. Collecting SCSI Devices (14%) 10. Collecting Network ARP Table (16%) 11. Collecting Network Device Configuration (17%) 12. Collecting Network Connections (19%) 13. Collecting Network Routing Table (21%) 14. Collecting Disk Usage (23%) 15. Collecting Operating System Environment (25%) 16. Collecting Operating System /etc/inittab file (26%) 17. Collecting Memory Usage Statistics (28%) 18. Collecting Operating System Loadable Modules Configuration (30%) 19. Collecting Disk Mounts (32%) 20. Collecting Operating System Processes (33%) 21. Collecting Security logs, specifically, /usr/local/winbind/install/var/log.winbindd. (35%) 22. Collecting Installed Software (37%) 23. Collecting System Log Files (39%) 24. Collecting Operating System Version (41%) 25. Collecting SAN Adapter Statistics (42%) 26. Collecting SAN VPATH Mappings (44%) 27. Collecting SAN Device Statistics (46%) 28. Collecting SAN SDD Kernel Statistics (48%) 29. Collecting SAN SDD Driver Version (50%) 30. Collecting Server Bootstrap File (51%) 31. Collecting Server Configuration Files (53%) 32. Collecting SANFS Administrator List (55%) 33. Collecting SANFS Autorestart Statistics (57%) 34. Collecting SANFS Client List (58%) 35. Collecting SANFS Disaster Recovery File List (60%) 36. Collecting SANFS RSA Card Information (62%) 37. Collecting SANFS LUN List (64%) 38. Collecting SANFS Cluster Server List (66%) 39. Collecting Server Log Files (67%) 40. Collecting Server SHOW Command (69%) 41. Collecting Server SHOW Command (71%) 42. Collecting Server SHOW Command (73%) 43. Collecting Server SHOW Command (75%) 44. Collecting Server SHOW Command (76%) 45. Collecting Server SHOW Command (78%) 46. Collecting Server SHOW Command (80%) 47. Collecting Server SHOW Command (82%) 48. Collecting Server SHOW Command (83%) 49. Collecting Server SHOW Command (85%) 50. Collecting Server SHOW Command (87%) 51. Collecting Server SHOW Command (89%) 52. Collecting Server SHOW Command (91%) 53. Collecting SANFS Server Version (92%) 54. Collecting WAS Configuration Files (94%)

55. Collecting WAS Installed Applications (SANFS) (96%) 56. Collecting WAS Server Log Files (98%) obdc: The collection was successfully stored in /usr/tank/OBDC/OBDC-083105-0355-6264.tar.gz tank-mds3:/tmp #

From a UNIX client (including AIX, Solaris, and Linux), log in and, from a shell prompt, run /usr/tank/client/bin/obdc to collect the default data or add additional parameters to customize the data collection. From a Windows client, log in and, from a command prompt, run C:\Program Files\IBM\Storage Tank\client\bin\obdc.exe (Example 13-14) to collect the default data.
Example 13-14 OBDC on Windows client C:\>"C:\Program Files\IBM\Storage Tank\Client\bin\obdc.exe" One-Button Data Collection Toolset for SAN File System This program will gather relevant system information for the purpose of diagnosing SANFS failures. All collected data will be stored in an archive which you can then examine before returning it to IBM for analysis. No private data will be transmitted by this program without your permission. You can type 'obdc --help' for more options. OUTPUT DIRECTORY: TEMPORARY DIRECTORY: SANFS HOME DIRECTORY: C:\Documents and Settings\Administrator\Application ... C:\DOCUME~1\ADMINI~1\LOCALS~1\Temp\ C:\Program Files\IBM\Storage Tank\

Do you want to continue? [yes/no] yes 1. Collecting Client Configuration Files (0%) 2. Collecting Client Log Files (4%) 3. Collecting Client Log Files (8%) 4. Collecting SANFS Client Version (12%) 5. Collecting Attached Storage (16%) 6. Collecting HBA Devices (20%) 7. Collecting Network Devices (25%) 8. Collecting PCI Devices (29%) 9. Collecting SCSI Devices (33%) 10. Collecting Network ARP Table (37%) 11. Collecting Network Device Configuration (41%) 12. Collecting Network Connections (45%) 13. Collecting Network Routing Table (50%) 14. Collecting Disk Usage (54%) 15. Collecting Operating System Environment (58%) 16. Collecting Memory Usage Statistics (62%) 17. Collecting Disk Mounts (66%) 18. Collecting Operating System Processes (70%) 19. Collecting Installed Software (75%) 20. Collecting System Log Files (79%) 21. Collecting Operating System Version (83%) 22. Collecting SAN Adapter Statistics (87%) 23. Collecting SAN VPATH Mappings (91%) 24. Collecting SAN Device Statistics (95%) obdc: The collection was successfully stored in C:\Documents and Settings\Administrator\Application Data\IBM\Storage Tank\OBDC\OBDC-060804-1422-2104.tar.gz

13.5 Remote Supervisor Adapter II


The Remote Supervisor Adapter II is a third-generation system management adapter for IBM eServer xSeries servers. It is based on the IBM PowerPC 405 32-bit RISC processor operating at 200 MHz. It is a half-length PCI adapter running at 66 MHz/32-bit speed. It comes as a standard feature in the xSeries 365 and is preinstalled in PCI slot 1. For many other servers, it is available as an optional feature. For the xSeries 346, it is a slimline card.

Figure 13-6 Remote Supervisor Adapter II (callouts: 1 video connector, 2 Ethernet connector, 3 external power supply connector, 4 mini-USB connector, 5 ASM/serial breakout connector, 6 system-management connector)

Video connector (1 in Figure 13-6): The RSA II contains an additional video subsystem on the adapter. If you install the RSA II in a server, it automatically disables the onboard video, so connect the server's monitor to the RSA II video connector.
10/100 Ethernet connector (2): For connection to a 10 Mbps or 100 Mbps Ethernet-based client LAN or management LAN.
Power connector (3): With the external power supply (supplied when the adapter is purchased as an option), you can still access the RSA II when the server is powered down. Connect the power supply to a different power source than the server (for example, a separate UPS).
Mini-USB connector (4): Provides remote keyboard and mouse support when using the remote control feature. Connect this to a USB port of the server.
Breakout connector (5): Used when the RSA II is the focal point of an ASM network. Before SAN File System V2.2.2, access to a remote MDS's RSA card was through a dedicated RS-485 serial network. In V2.2.2 and beyond, all access to a remote RSA card is over the IP network, which makes it even more critical that the IP network have redundancy. Designing for network redundancy was discussed in 3.8.5, Network planning on page 84. The RSA TCP/IP connection is used to shut down a rogue MDS, as described in Fencing through remote power management (RSA) on page 82. When this happens, it is logged in the file /usr/tank/server/log/log.stopengine.

13.5.1 Validating the RSA configuration


Setting up the RSA configuration is part of the installation process, described in Verifying boot drive and setting RSA II IP configuration on page 127. Since this function is critical, correct configuration should be validated. The following should be checked before bringing a SAN File System system into production:
Each RSA card should have its service processor named the same as the MDS name of the server that the RSA card is installed in.
The IP address of the RSA card should be reachable from the network interfaces of each MDS. The IP address of the RSA card in a given MDS is specified as the "SYS_MGMT_IP" parameter when installing SAN File System (see Table 5-1 on page 147).
The RSA card firmware and server BIOS should be kept in sync and up to date on each MDS.
The sfscli legacy "lsengine" command should return the correct power state for each MDS in the cluster, from all MDSs in the cluster (master and subordinate), as shown in Example 13-15. You can only run this command after SAN File System is installed.
Example 13-15 Verify RSA connectivity
tank-mds3:~ # sfscli legacy lsengine
lsengine: Power state for tank-mds4 at sysmgmtip 9.82.22.174 is ON.
Power state for tank-mds3 at sysmgmtip 9.82.22.173 is ON.
tank-mds3:~ #
tank-mds4:~ # sfscli legacy lsengine
lsengine: Power state for tank-mds4 at sysmgmtip 9.82.22.174 is ON.
Power state for tank-mds3 at sysmgmtip 9.82.22.173 is ON.
tank-mds4:~ #
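In addition to checking the power state, it is worth confirming basic IP reachability of each RSA card from every MDS. The following is a minimal sketch only, run from a shell on one MDS; the RSA addresses shown are the ones used in our example configuration and must be replaced with your own.

# check that each RSA card answers on the IP network (example addresses)
for rsa in 9.82.22.173 9.82.22.174
do
    ping -c 2 $rsa > /dev/null && echo "RSA $rsa reachable" || echo "RSA $rsa NOT reachable"
done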

13.5.2 RSA II management


The RSA cards in each MDS support remote hardware-specific operations such as system monitoring, shutdown, and startup. A Java runtime is required to access the management interface of the RSA II card; it can be downloaded from:
http://www.java.com/en/download/manual.jsp

To connect to the RSA II card, open a Web browser and point it to the IP address of the card, as shown in Figure 13-7 on page 539.

Figure 13-7 RSAII interface using Internet Explorer

Once logged in to the RSA II card, you can, for example, reboot or shut down the server. To restart the server, click Power/Restart in the Tasks section, as shown in Figure 13-8.

Figure 13-8 Accessing remote power using RSAII

You can also view the BIOS log by selecting Event Log under Monitors (see Figure 13-9 on page 541).

Figure 13-9 Access BIOS log using RSAII

To completely manage servers from a remote location, you need more than just keyboard-video-mouse (KVM) redirection. For example, to install the operating system or patches, you need remote media support to connect a CD-ROM or diskette to the server; otherwise, someone must physically load the installation media in the CD-ROM or diskette drive.

When you launch a remote console for the first time in your browser, a security warning window will pop up. This warning comes from the Java applets that remote control uses. It is quite usual to see these warnings, and you can trust this certificate from IBM and click Yes or Always (see Figure 13-10).

Figure 13-10 Java Security Warning

In the remote control window, a set of buttons simulates specific keystrokes and also shows the video speed selector, as in Figure 13-11. The slider is used to limit the bandwidth that is devoted to the remote console display on your computer.

Figure 13-11 RSA II: Remote control buttons

Reducing the video speed can improve the rate at which the remote console display is refreshed by limiting the video data that must be displayed. You can reduce, or even stop, video data to allow more bandwidth for remote disk, if desired. Move the slider left or right until you find the bandwidth that achieves the best results. Now we are able to manage an MDS server remotely, as shown in Figure 13-12 on page 543. This displays the boot messages appearing on the actual console.

Figure 13-12 ASM Remote control

More information about the capabilities of the RSA II card is in the Remote Supervisor Adapter II User's Guide, 88P9243, available at:
http://www.ibm.com/pc/support/site.wss/document.do?lndocid=MIGR-57091

13.6 Simple Network Management Protocol


Simple Network Management Protocol (SNMP) is typically used to monitor network performance and hardware as well as to find and solve network problems. SNMP consists of two main components:
SNMP agents: Software components that reside on managed devices and collect management information (using Management Information Bases, or MIBs). SNMP agents issue traps when SNMP events occur. These traps are sent through User Datagram Protocol (UDP) to an SNMP manager.
SNMP manager: Usually a network management application, which monitors and controls devices on which SNMP agents are running and can receive SNMP traps. IBM Tivoli NetView is an example of an SNMP manager.

13.6.1 SNMP and SAN File System


In SAN File System, each MDS generates log messages in response to many faults and cluster operations. A subset of these log messages can be delivered as SNMP traps from the Metadata server if the cluster has a target SNMP manager configured.

In a SAN File System environment with strong high availability requirements, you should configure an SNMP manager and specify the target IP address according to the instructions in the manual IBM TotalStorage SAN File System Installation and Configuration Guide, GA27-4316. This allows a SAN File System administrator to receive critical alerts and respond if the SAN File System cluster encounters faults. In most cases, the system handles the fault automatically, but the administrator will want to repair any inoperable system components (such as failed engines or switches) and investigate the cause of all unexpected faults. When a specified event type occurs, SAN File System sends the SNMP trap and logs the event in the cluster log.
Note: SAN File System supports asynchronous monitoring through traps, but does not support SNMP GETs or PUTs for active management; that is, an SNMP manager cannot manage SAN File System.
Examples of events that might generate SNMP trap messages include the following:
MDS executes a change in state.
MDS detects that another MDS is not active.
The size of a fileset reaches a specified percentage of its capacity.

Configuring RSA II for SNMP


To configure SNMP in the Remote Supervisor Adapter II, we can connect via GUI to the RSA II IP address and go to Network Protocols (see Figure 13-13):

Figure 13-13 SNMP configuration on RSA II

SNMP agent: Use this field to specify whether you want to forward alerts to SNMP communities on your network or to allow an SNMP manager to query the SNMP agent. To allow alerts to be sent to SNMP communities, click the drop-down button and select Enabled.

Note: To enable the SNMP agent, the following criteria must be met:
- ASM contact is specified.
- ASM location is specified.
- At least one community name is specified.
- At least one valid IP address is specified for that community.
Alert recipients whose notification method is SNMP will not receive alerts unless both SNMP traps and the SNMP agent are enabled.

SNMP traps: Use this field to convert all alert information into the ASM MIB SNMP format so that those alerts can be sent to an SNMP manager. To allow conversion of alerts to SNMP format, click the drop-down button and select Enabled.
Note: Alert recipients whose notification method is SNMP will not receive alerts unless both SNMP traps and the SNMP agent are enabled.

Communities: Use these fields to define the administrative relationship between SNMP agents and SNMP managers. You must define at least one community; each community definition consists of three parameters. To set up a community:
a. In the Name field, enter a name, or authentication string, that corresponds to the desired community.
b. Enter the IP addresses of this community in the corresponding IP address fields for this community.
c. Click Save to store your new community configuration.
d. On the System page, enter your Contact and Location information. If these are already configured, skip the next step.
e. Click Save to store your new Contact and Location.
f. Go to the Network Protocols page and choose Enabled in the SNMP agent drop-down list.
g. Select the desired entry in the SNMP traps drop-down.
h. Click Save to enable your new community configuration.
Note: If an error message window appears, make the necessary adjustments to the fields listed in the error window, then click Save to store your corrected information. You must configure at least one community in order to enable the SNMP agent.

Configuring SAN File System for SNMP


To configure SNMP, do the following:
1. Add SNMP managers. You can add up to two SNMP managers to SAN File System. To configure an SNMP manager, use the addsnmpmgr command on the master Metadata server, as shown in Example 13-16.
Example 13-16 Add SNMP manager using the CLI
sfscli> addsnmpmgr -ip 9.42.164.160 -port 162 -ver v2c -community public
CMMNP5339I SNMP manager was added successfully.

2. Set traps. Here you specify what kinds of events you want to trigger an SNMP trap. SAN File System messages have four severity levels: informational, warning, error, and severe. Use the settrap command (Example 13-17), choosing the event severity levels desired. If you specify all, then all events trigger an SNMP trap; if you specify none, then no SNMP traps are sent. Neither of these options can be combined with any other setting. Any other single level, or combination of levels, is valid.
Example 13-17 Set alerts using CLI
sfscli> settrap -event sev,warn,err
CMMNP5338I SNMP trap event level was set successfully.

The following commands are available when configuring SNMP:
lssnmpmgr: Displays a list of SNMP managers and their attributes.
lstrapsetting: Displays a list of event types that currently generate an SNMP trap.
rmsnmpmgr: Removes an SNMP manager.
settrap: Specifies whether an SNMP trap is generated and sent to all SNMP managers when a specific type of event occurs on the MDS.
addsnmpmgr: Adds an SNMP manager to receive SNMP traps.
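After configuring a manager and the trap level, you can review the settings from the same sfscli session. The following is only a sketch using the listing commands named above; we assume here that both can be invoked without arguments to display the current configuration, and their output is not reproduced.

sfscli> lssnmpmgr
sfscli> lstrapsetting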

13.7 Hints and tips


Here are some hints and tips you may find useful:
If there are any problems creating the cluster, first verify its configuration, using the /usr/tank/server/bin/tank lsconfig command, /usr/tank/admin/config/cimom.properties, and /etc/tank/Tank.Config on all MDSs. Make sure they are the same except for the MDS-specific parameters, for example, SERVER_NAME (a quick way to compare them is sketched after Example 13-18).
If using LDAP, verify that your LDAP server is operational, using the process described in 4.1.2, LDAP and SAN File System considerations on page 101.
The minimum size for the initial system volume for SAN File System is 2 GB.
Check the logs as described in 13.3, Logging and tracing on page 521:
/usr/tank/server/log/log.std: For MDS installation problems
/usr/tank/admin/log/cimom.log: For admin server problems
/usr/tank/admin/log/console.log: For console/GUI issues
If you cannot log in to the GUI because of user name or password problems, or a message says it cannot contact the CIMOM agent, check that the console is running on your MDS.
To start it, use /usr/tank/admin/bin/startConsole and /usr/tank/admin/bin/startCimom. Also, verify that your LDAP server is running. To verify that your MDS can communicate with the LDAP server, start an sfscli session and type lsserver. If the command returns an error, as shown in Example 13-18, verify that your LDAP server is up and running, as discussed in 4.1, Security considerations on page 100. Another possibility is that you have logged in with an ID that has insufficient privileges to run the specified command. The manual IBM TotalStorage SAN File System Administrator's Guide and Reference, GA27-4317 has information about the privileges required to run each SAN File System command.
Example 13-18 MDS cannot communicate with LDAP server
sfscli> lsserver
CMMUI9901E User access to command "lsserver" denied. Tip: Contact Technical Support for assistance.
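As a convenience, the configuration files mentioned in the first tip can be compared between the master and a subordinate MDS over ssh. This is only a sketch, assuming the MDSs can reach each other with ssh; tank-mds4 is an example host name, and the differences reported should be limited to the MDS-specific parameters such as SERVER_NAME.

# run on the master MDS; compare its configuration files with those on a subordinate MDS
ssh tank-mds4 cat /etc/tank/Tank.Config | diff /etc/tank/Tank.Config -
ssh tank-mds4 cat /usr/tank/admin/config/cimom.properties | diff /usr/tank/admin/config/cimom.properties -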

13.8 SAN File System Message conventions


SAN File System messages can be decoded using Table 13-3. For example, HSTAD001I would be an informational Basic Administration Message from the SAN File System Administration Service.
Table 13-3 Message prefix convention Component XXXYYnnnnZ Sub-component XXXYYnnnnZ Sub-component description SERVER sub-components: Server catalog (Single catalog for all platforms) HST HST HST HST HST HST HST HST HST HST HST HST HST HST HST HST AD BT CC CK CM DB DP FC FS GI GM GR GS HA IL IO Administration Service Basic Administration Messages B-Tree Index Manager Collection Classes Server fsck Cluster Manager Database Action Dispatcher Foundation Classes Free Space Map Global Disk I/O Manager Global Memory Manager Global Root Directory Manager Group Services High Availability Manager Server RPM Direct Local Disk I/O Manager

Component XXXYYnnnnZ HST HST HST HST HST HST HST HST HST HST HST HST HST HST HST HST HST HST HST HST

Sub-component XXXYYnnnnZ IP LM LP LV MG NC NE NL NS OM OP PC PG SC SM TM TP UC VC WA

Sub-component description Internet Protocol Services Lock/Lease Manager LALR Parser Generator Logical Volume Manager Message Formatter National Language Compiler Net Server Program messages, default catalog nlsmsg National Language Support Object Meta-Data Manager Run-Time Options Processor Policy Server Program Standard Container Schema Manager Administration Session Manager Protocol Transaction Manager Storage Tank Protocol Utility Version Control Manager Write Ahead Log Administrative server

HST HST HST HST

AP AS NP WU

Provider messages Script messages SAN File System CIMOM providers (error messages) SAN File System UI Console Scripts Administrative agent

CMM CMM CMM CMM CMM

CI NP NW OM UI

CLI, CIM, and common errors to CLI and GUI SAN File System UI Console and CLI SAN File System Console Object Manager Administrative agent UI Framework

Component XXXYYnnnnZ

Sub-component XXXYYnnnnZ

Sub-component description CLIENT sub-components (User Level)

HST HST HST HST HST HST

CL CU DI, DR, ST, UM IA MO OP

stfsclient Client Common User (common to all) Client AIX User (only) Client AIX lpp install scripts stfsmount Command Line Option Parser CLIENT sub-components (Kernel Level)

HST HST HST HST

AK CS CW SM

Client AIX Kernel Client Setup perl script Client Windows Client State Manager

Part 4. Exploiting the SAN File System


In this part of the redbook, we explain in detail how IBM TotalStorage SAN File System provides benefits in a DB2 UDB environment.

Chapter 14. DB2 with SAN File System


In this chapter, we explain in detail how IBM DB2 UDB exploits IBM TotalStorage SAN File System, including these topics:
Policy placement
Storage management
Load balancing
Direct I/O support
FlashCopy
Database path considerations

14.1 Introduction to DB2


IBM DB2 Universal Database (UDB) is the industry's first multimedia, Web-ready relational database management system, strong enough to meet the demands of large corporations and flexible enough to serve medium-sized and small e-businesses. DB2 Universal Database combines integrated power for business intelligence, content management, and e-business with industry-leading performance and reliability. The IBM TotalStorage SAN File System strengthens the solution by providing a highly available computing environment with centrally managed storage. This chapter describes how SAN File System can enhance a DB2 environment through policy management, storage management, load balancing, direct I/O support, and improved availability. We assume an environment where DB2 is installed on SAN File System clients.

14.2 Policy placement


SAN File System policy allows control over the storage pools used for data, based on the attributes of the files themselves. SAN File System uses rules based on different attributes that automatically determine which storage pool to use for that data. Thus, even if two different files reside in the same directory, the policy may determine that the two files belong in different storage pools. This lends itself ideally to a DB2 environment where you may want different expected I/O response times for your index data as opposed to your large object (LOB) data. Also, you could choose to put the DB2 logs on more robust storage pools (for example, with RAID-5 protection), and store temporary data on JBOD. SAN File System policy is an effective mechanism to provide a class of service for different types of data, even if that same data resides in the same directory. An important point to note is that the physical data placement is handled by the SAN File System Metadata server (MDS) and is therefore transparent to the DB2 database. File placement is determined at the time of initial file creation. DB2 supports System Managed Storage (SMS) tablespaces and Database Managed Storage (DMS) tablespaces. SMS tablespaces are ideally suited to a SAN File System environment; however, DMS tablespace can also exploit many of the features within SAN File System.

14.2.1 SMS tablespaces


SMS tablespaces are stored in files managed by the operating system. SMS tablespaces allocate space as needed using the directories defined in the create tablespace command. In each directory for an SMS tablespace container, DB2 will automatically store different types of data for each table into different files with different file name extensions. Table 14-1 on page 555 illustrates the DB2 file naming convention for some of the data in SMS tablespaces.

Table 14-1 Naming convention for objects within a DB2 SMS table space container
SQLTAG.NAM: Table space container tag to verify consistency
SQLxxxxx.DAT: All table rows except LONG VARCHAR, LONG VARGRAPHIC, BLOB, CLOB, or DBCLOB data
SQLxxxxx.LF: LONG VARCHAR or LONG VARGRAPHIC data
SQLxxxxx.LB: BLOB, CLOB, or DBCLOB data
SQLxxxxx.LBA: Contains allocation and free space information
SQLxxxxx.INX: Index data for a table
SQLxxxxx.IN1: Index data for a table
SQLxxxxx.BKM: Dimension block index for a multidimensional clustered table
SQLxxxxx.TDA: Temporary regular data
SQLxxxxx.TIX: Temporary index data
SQLxxxxx.TLB: Temporary LOB data
SQLxxxxx.LOG: DB2 transaction logs

With traditional storage, all files in an SMS container would reside on the same set of devices, since all the files would reside in the same directory. Since SAN File System provides the ability to choose a storage pool based on the file extension, the different types of DB2 data stored in an SMS container may be placed into different storage pools. Thus, within an SMS table space container, we can now put regular data on different devices than index data and Long/LOB data using SAN File System. This ability has traditionally been limited to DMS table spaces. The ability of SAN File System to automatically place different files in different storage pools can significantly reduce file placement tasks that may otherwise be required of the database administrator.
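For illustration, an SMS table space that places its containers on SAN File System simply names directories in the global namespace; the file-extension-based policy then decides the storage pool for each file DB2 creates in those directories. The following is a minimal sketch; the database name and container paths are examples only.

db2 connect to SAMPLEDB
db2 "CREATE TABLESPACE APP_SMS MANAGED BY SYSTEM USING ('/mnt/sanfs/db2data/app_sms1', '/mnt/sanfs/db2data/app_sms2')"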

14.2.2 DMS tablespaces


With DMS tablespaces, the database manages the storage: a list of all files and their sizes is defined in advance in the create tablespace command, and the full size of the tablespace is allocated when the tablespace is created. With tables stored in DMS tablespaces, the user can explicitly put Long/LOB data, index data, and regular data into three different DMS tablespaces. Thus, the user can choose to distribute the different types of data to different devices using traditional storage or by implementing a policy strategy within a SAN File System environment.
Note: DMS raw table spaces cannot be used within a SAN File System environment.
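As a sketch of the DMS approach, the following statements create separate file-based DMS table spaces and then direct a table's regular and index data to them (long/LOB data can similarly be directed with the LONG IN clause to a long table space). All names, paths, and sizes (in pages) are examples only.

db2 connect to SAMPLEDB
db2 "CREATE TABLESPACE TS_DATA MANAGED BY DATABASE USING (FILE '/mnt/sanfs/db2data/dms/data01.dbf' 25600)"
db2 "CREATE TABLESPACE TS_IDX MANAGED BY DATABASE USING (FILE '/mnt/sanfs/db2data/dms/idx01.dbf' 12800)"
db2 "CREATE TABLE SALES (ID INT, AMOUNT DECIMAL(10,2)) IN TS_DATA INDEX IN TS_IDX"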

14.2.3 Other data


You should also consider the log files used by DB2 UDB when determining a data placement strategy, as shown in Table 14-2.
Table 14-2 Naming convention for objects with DB2 UDB
SQLxxxxxxx.LOG: DB2 transaction logs

14.2.4 Sample SAN File System policy rules


Example 14-1 is a sample of SAN File System rules that could be used for DB2 configured with SMS tablespaces. This will separate the different types of DB2 data to different User Pools based on the file extension.
Example 14-1 Sample rules for DB2
RULE 'stgRule1' SET STGPOOL 'log' WHERE NAME LIKE '%.LOG'
RULE 'stgRule2' SET STGPOOL 'data' WHERE NAME LIKE '%.DAT'
RULE 'stgRule3' SET STGPOOL 'index' WHERE NAME LIKE '%.INX'
RULE 'stgRule4' SET STGPOOL 'lob' WHERE NAME LIKE '%.LB'
RULE 'stgRule6' SET STGPOOL 'lob' WHERE NAME LIKE '%.LBA'
RULE 'stgRule7' SET STGPOOL 'lob' WHERE NAME LIKE '%.LFA'
RULE 'stgRule8' SET STGPOOL 'temp' WHERE NAME LIKE '%.TDA'

Note: If no matching rule is found for a particular file, it will be placed in the default storage pool.

Figure 14-1 illustrates how these rules would cause DB2 data to be assigned.

Figure 14-1 Example storage pool layout for DB2 objects (*.LOG files go to the log storage pool, *.DAT to data, *.INX to index, *.LB, *.LBA, and *.LF to lob, and *.TDA to the temp storage pool)

14.3 Storage management


SAN File System policies allow data to be automatically placed in the appropriate storage pools based on a predetermined set of requirements. It is essential to also manage the space available in these storage pools to avoid reaching a disk full condition. With DMS tablespaces, the database administrator defines the amount of space that should be allocated for the tablespace in advance. If that space limit is reached, the database administrator can dynamically extend the size of the tablespace with DB2 as long as additional disk space is available. With SMS tablespaces, DB2 will allocate the space as required and as permitted by the OS and file system. Thus, with SMS tablespaces, much of the storage management functionality is now pushed outside the database, as the availability of free space is dependent on what is available in the file system.

Threshold alerts
SAN File System provides an alert mechanism based on the amount of allocated space within a storage pool. This is significant within a DB2 environment using SMS tablespaces, since space is acquired as needed. If an alert is triggered, indicating that a storage pool has reached its configured threshold, the SAN File System administrator can choose to add additional storage to the storage pool. Thus, storage can be added before any database users experience any "out of disk space" error codes.

Free space management


In traditional file system environments, each machine is assigned a certain amount of free space to be used for such things as temporary storage. This can lead to a significant amount of overall wasted storage, as it is likely that not all this free space will be consumed at any given point in time. SAN File System provides an infrastructure to pool all this free space into one pool, available to all machines. For example, if the maximum amount of free space used at any given point in time is only 60% of the total storage space allocated for temporary storage, then we have saved the other 40%. Since DB2 also uses temporary space, this could result in significant saved disk space in an environment running multiple DB2 servers.

14.4 Load balancing


The SAN File System Metadata servers (MDS) manage such things as file locations, file permissions, and locking. Only the metadata information travels over the IP network, minimizing the data transfer on the IP network. The SAN File System clients still access the regular data over the SAN and thus can benefit from the high performance a SAN can provide. For ideal performance, it is beneficial to evenly balance the workload across the MDSs. Each fileset is assigned to an MDS, and thus the filesets can be balanced across the MDS servers in the cluster. You can choose to balance the workload generated by DB2 UDB across all MDSs or assign the filesets used by DB2 UDB to a subset of MDSs and filesets being used by other applications to the remaining MDSs. SAN File System provides administrative commands that can be used to monitor transaction rates on each MDS. Transaction rate parity across all MDSs will provide better SAN File System performance. File set assignment can be changed from one MDS to another to balance transaction rates within the SAN File System environment. You should plan the filesets to be used based on expected I/O transaction rates, since this (rather than file size or storage space consumption) will drive load on the MDS. Figure 14-2 on page 558 illustrates the workload distribution of filesets.

Figure 14-2 Workload distribution of filesets for DB2 (DB2 partitions 0-3 access their data over the SAN; the filesets /db2inst1/NODE0000 and /db2inst1/NODE0001 are assigned to MDS1, and /db2inst1/NODE0002 and /db2inst1/NODE0003 to MDS2)
The diagram in Figure 14-2 shows a DB2 instance with four DB2 partitions stored on a 2-node SAN File System cluster. The DB2 instance is called db2inst1, and it has DB2 partitions 0, 1, 2, and 3. On the create database command, all the database files, by default, will go under a db2inst1 directory with a correlating subdirectory for each partition. If the NODE0000, NODE0001, NODE0002, and NODE0003 directories represent SAN File System filesets, then they can be evenly distributed across the MDS servers. For example, we could assign the NODE0000 and NODE0001 filesets to MDS 1 and the NODE0002 and NODE0003 filesets to MDS 2.

14.5 Direct I/O support


Direct I/O is a mechanism used to bypass the file system cache. Caching is a critical part of performance, as it is much more costly to retrieve data from disk than it is to retrieve that data from memory. However, applications such as DB2 do their own caching, so by default we would have DB2 caching data in the DB2 bufferpools and the operating system caching data in the file system cache. Figure 14-3 shows this double caching occurring.

Figure 14-3 Default data caching (data is cached both in the DB2 buffer pool and in the file system cache before reaching disk)

Since DB2 is already caching its own data, the extra caching occurring in the file system cache is not only a potentially inefficient use of memory, but it also requires additional processing to perform reads and writes into the file system cache. If the file system cache is being used ineffectively, turning on direct I/O will not only save system resources, but may also yield performance gains.
Important: Enabling direct I/O does not guarantee performance enhancements. Performance gains will be achieved with direct I/O only if the file system cache is not being used effectively, and the potential gains will vary depending on the nature of the workload.
Direct I/O is already supported on Windows and can be enabled via the DB2NTNOCACHE DB2 profile variable:
db2set DB2NTNOCACHE=ON

Direct I/O is supported in DB2 V8.1 FP4 for AIX and can be enabled via the DB2_DIRECT_IO DB2 profile variable. This will enable direct I/O for SMS containers, excluding long data, LOB data, and temporary data:
db2set DB2_DIRECT_IO=ON
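As a quick check, you can list the registry settings currently in effect and restart the instance so the change is picked up. This is a sketch only; whether a restart is strictly required can depend on the DB2 level, so treat the db2stop/db2start step as an assumption to verify for your environment.

db2set -all
db2stop
db2start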

14.6 High availability clustering


SAN File System provides a global namespace that lends itself well to high availability clustering solutions. If a machine encounters a fatal error and cannot be recovered, all the data inside the database can still be made available to a secondary machine: DB2 must be installed on the secondary machine and the database cataloged on the path where it resides, and the database is then available. Solutions such as HACMP can be used to automate the failover of DB2 UDB in the event of situations such as a server outage: HACMP fails the DB2 partition over to the backup machine, where the partition then accesses its data. Since SAN File System provides a global namespace, the data is automatically and immediately available on the backup server. Thus, the SAN File System configuration can help reduce failover time and further reduce any impact to the DB2 clients.

14.7 FlashCopy
SAN File System provides the ability to take point-in-time copies of your data using its FlashCopy capability. If, for example, an application logic error occurs that requires you to go back to an earlier copy of the data, a FlashCopy image taken previously can be reverted, and a rollforward then performed to a point in time just before the application logic error occurred.
Note: Before a FlashCopy image is created, you must first suspend DB2 write activity using the DB2 suspend (set write suspend) command.
The ability to create FlashCopy images, and the use of a global namespace, provide the capability to offload a point-in-time file level backup to other machines. To do this:
1. Suspend the database using the DB2 suspend command. This suspends all write operations to a DB2 UDB database partition (that is, tablespaces and log files). Read operations are not suspended and can therefore continue. Applications can continue to process insert, update, and delete operations that use the DB2 buffer pools.
2. After the suspend is completed, take a FlashCopy image to create a point-in-time file level copy of the database.
3. As soon as the FlashCopy image is created, the database can be resumed and normal processing can continue with virtually no impact to the clients.
4. A secondary machine can then be used to access the FlashCopy image and perform any necessary file system backups, avoiding the extra consumption of resources on the initial machine and providing application server-free backup.
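A minimal command sketch of steps 1 and 3 follows. The database name is an example, and the FlashCopy step is shown only as a placeholder comment, because the exact SAN File System command depends on how your filesets are defined.

db2 connect to SALESDB
db2 set write suspend for database
# ... create the SAN File System FlashCopy image of the fileset(s) that hold SALESDB ...
db2 set write resume for database
db2 connect reset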

14.8 Database path considerations


While running in a SAN File System environment with a global namespace, it is important to ensure that multiple clients do not unintentionally contend for the same directories or files. Therefore, you need to understand where DB2 places the database files. The DB2 create database command allows the user to specify a path location on UNIX and a drive letter for Windows. The database files will then be placed at the path/directory location specified under a subdirectory with an identical name to the instance.

In our example (Figure 14-4), we will make the following assumptions:
We have one DB2 server machine with two DB2 instances, INSTANCEA and INSTANCEB.
The path location for the create database command on UNIX is /mnt/sanfs/mydir.
The drive letter for the create database command on Windows is T.
A database called DatabaseA is created under INSTANCEA, and a database called DatabaseB is created under INSTANCEB.

Figure 14-4 Directory structure information (on the UNIX DB2 server, the databases are placed in /mnt/sanfs/mydir/INSTANCEA and /mnt/sanfs/mydir/INSTANCEB; on the Windows DB2 server, they are placed in T:\INSTANCEA and T:\INSTANCEB)

With one DB2 server creating the databases with the locations used above, everything will work successfully. However, if the environment consisted of two DB2 servers and they chose to use the same instance names and path/drive locations, they would be competing for the same directory. On UNIX, one potential workaround is to ensure that each unique DB2 server machine specifies a different path for the create database command. Alternatively, another mechanism to ensure path uniqueness is to have unique DB2 instance names across the environment. In that case, the same path can be chosen for each create database command, and it is the unique instance name that will guarantee no contention for the same directories. With DB2 for Windows, only the drive letter can be specified on the create database command. So, this cannot be used to avoid directory contention, since each SAN File System client will see the same drive. However, a unique instance name convention can be used to avoid contention for the same directories.
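As an illustration of the UNIX workaround, each DB2 server can simply be given its own path on the create database command. The database name and paths below are examples only.

# on the first DB2 server
db2 create database SALESDB on /mnt/sanfs/db2srv1
# on the second DB2 server (a different path avoids directory contention even if the instance names match)
db2 create database SALESDB on /mnt/sanfs/db2srv2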

Part 5. Appendixes
In this part of the redbook, we provide the following supplementary information:
Appendix A, Installing IBM Directory Server and configuring for SAN File System on page 565
Appendix B, Installing OpenLDAP and configuring for SAN File System on page 589
Appendix C, Client configuration validation script on page 597
Appendix D, Additional material on page 603

Appendix A. Installing IBM Directory Server and configuring for SAN File System
In this appendix, we discuss the following topics:
Installing IBM Directory Server V5.1
Creating the LDAP database
Configuring IBM Directory Server V5.1 for SAN File System
Starting the LDAP Server and configuring the Admin Server
Example of the LDIF file used in our configuration

Installing IBM Tivoli Directory Server V5.1


This section shows how to install IBM Tivoli Directory Server V5.1. This software can be downloaded from the Web site:
http://www.software.ibm.com/webapp/download/search.jsp?rs=ldap&go=y

Here are the steps to follow:
1. To start the installation, run setup.exe from the directory IDS_SMP. You will first be prompted for a language to use for the install. We selected English. Click OK.
2. Next you will see the Welcome window. Click Next to continue.
3. The license agreement now appears. Select the button to indicate that you accept the terms, and click Next.
4. Select a directory to install IBM Directory Server. The default is C:\Program Files\IBM\LDAP, as in Figure A-1. Accept this or enter an alternative, and click Next.

Figure A-1 Select location where to install

5. Select the language for IBM Directory Server, as shown in Figure A-2 on page 567, and click Next.

Figure A-2 Language selection

6. Select the setup type (Figure A-3). We chose Typical. Click Next.

Figure A-3 Setup type

7. Accept the defaults in the features window (Figure A-4) and click Next.

Figure A-4 Features to install

8. Specify a user ID and password for DB2, as in Figure A-5. DB2 is used as the underlying repository, and is installed automatically. You can specify a new user ID or an existing one. If the ID exists, the password must be correct. We chose to create a new user ID, db2admin. Click Next.

Figure A-5 User ID for DB2

9. You will see a summary of the installation options selected, as in Figure A-6. If everything is correct, click Next.

Figure A-6 Installation summary

10. You will see a pop-up indicating that DB2 will install in the background (see Figure A-7). This may take up to 20 minutes. Click OK to continue.
11. After some time, the GSKit pop-up appears. GSKit will install in the background, and may take up to five minutes. The IBM Global Security Toolkit (GSKit) provides a Secure Sockets Layer (SSL) with encryption strengths up to Triple DES.

Figure A-7 GSKit pop-up

12. After some time, the WebSphere Application Server - Express pop-up appears. This will install in the background, and may take up to 10 minutes. WebSphere provides the application environment for IBM Directory Server.
13. After this is complete, the IBM Directory Server client README file displays. Review it and click Next to continue.
14. The server README displays. Review it and click Next to continue.

15.You will be prompted to restart your system now or later. A reboot is required to complete the installation. Select Yes, restart my system and click Next. The Installation Complete window opens, as shown in Figure A-8. Click Finish to continue, and reboot the server.

Figure A-8 Installation complete

Creating the LDAP database


After the system has rebooted, the IBM Directory Server configuration tool will start automatically, as shown in Figure A-9. If it does not start, launch it using Start → Programs → IBM Directory Server 5.1 → Directory Configuration.

Figure A-9 Configuration tool

1. Set the Administrator DN and password. We used the following values:
Administrator DN: cn=Manager,o=ITSO
Password: password
Click OK to continue.
2. You will see confirmation that the Administrator DN and password have been successfully set, as shown in Figure A-10. Click OK.

Figure A-10 User ID pop-up

3. In the next window, click Configure database in the left column. In the Configure Database window, select Create a new database and click Next. You will be prompted for the user ID for the DB2 database that was specified during the installation (see step 8 on page 568). We entered db2admin and our specified password, as shown in Figure A-11. Click Next.

Figure A-11 Enter LDAP database user ID

4. Select a name for your LDAP database. This database will be created in DB2. We chose SFSLDAP, as in Figure A-12. Click Next.

Figure A-12 Enter the name of the database

5. In Figure A-13, you specify the codepage for your DB2 database. Select the default Create a universal DB2 database and click Next.

Figure A-13 Select database codepage

6. Select the drive letter to create the LDAP database, as in Figure A-14. The default is the C partition. Click Next.

Figure A-14 Database location

7. Figure A-15 summarizes the entries made. Verify these for correctness and click Finish to create the database.

Figure A-15 Verify database configuration

8. Figure A-16 shows the output messages for creating the database. When it is complete, click Close.

Figure A-16 Database created

Configuring IBM Directory Server for SAN File System


IBM Directory Server V5.1 has now been installed and the database has been created. You now need to create and import your configuration. LDAP configurations are specified in a format known as LDAP Data Interchange Format (LDIF). This is a text-based file, which stores information in object-oriented hierarchies of entries. LDIF is used to import and export directory information between LDAP-based directory servers, or to describe a set of changes that are to be applied to a directory. We showed the tree structure for our environment in Figure 4-1 on page 102. See Sample LDIF file used on page 587 for the actual LDIF file we created corresponding to this tree. You can use this file as a base to modify for your environment. Save the file with an extension of .ldif (we used ITSOLDAP.ldif).
1. Start the IBM Directory Server configuration tool by selecting Start → Programs → IBM Directory Server 5.1 → Directory Configuration.

2. Select Manage suffixes on the left hand side. Enter a string corresponding to your organization attribute, for example, o=ITSO, and click Add, as shown in Figure A-17.

Figure A-17 Add organizational attribute

3. You will see your new attribute listed under Current suffix DNs.
4. Click Import LDIF data from the left column. Enter the file name of your saved LDIF configuration file.
Tip: IBM Directory Server expects a c:\tmp directory on your system drive when importing an LDIF file. Make sure that you have this directory; if it does not exist, create it.

Click Import to start the import of the LDIF file, as shown in Figure A-18.

Figure A-18 Browse for LDIF

5. The file will import, displaying status messages, as shown in Figure A-19.

Figure A-19 Start the import

6. When the import completes, close the configuration tool. Your LDAP server is now configured for SAN File System.

Starting the LDAP Server and configuring Admin Server


1. Now you want to start the LDAP server. From a command prompt on the LDAP server system, enter ibmslapd, as shown in Example A-1.
Example: A-1 Start the Directory Server C:\Documents and Settings\Administrator>ibmslapd Server starting. Plugin of type EXTENDEDOP is successfully loaded from libevent.dll. Plugin of type EXTENDEDOP is successfully loaded from libtranext.dll. Plugin of type EXTENDEDOP is successfully loaded from libldaprepl.dll. Plugin of type PREOPERATION is successfully loaded from libDSP.dll. Plugin of type EXTENDEDOP is successfully loaded from libevent.dll. Plugin of type EXTENDEDOP is successfully loaded from libtranext.dll. Plugin of type AUDIT is successfully loaded from C:/Program Files/IBM/LDAP/bin/l ibldapaudit.dll. Plugin of type EXTENDEDOP is successfully loaded from libevent.dll. Plugin of type EXTENDEDOP is successfully loaded from libtranext.dll. Plugin of type DATABASE is successfully loaded from C:/Program Files/IBM/LDAP/bi n/libback-rdbm.dll. Plugin of type REPLICATION is successfully loaded from C:/Program Files/IBM/LDAP /bin/libldaprepl.dll. Plugin of type EXTENDEDOP is successfully loaded from libevent.dll. Plugin of type DATABASE is successfully loaded from C:/Program Files/IBM/LDAP/bi n/libback-config.dll. Error code -1 from odbc string:" SQLConnect " SFSLDAP . SQL1063N DB2START processing was successful. Plugin of type EXTENDEDOP is successfully loaded from libloga.dll. Non-SSL port initialized to 389. IBM Directory (SSL), Version 5.1 Server started.

Once the Directory Server has started, leave this window open in the background. If you close the window, the Directory Server will stop.
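To confirm that the directory is accepting connections, you can check that the non-SSL port (389, as reported in the startup messages) is listening. This is a quick sketch using standard Windows commands from another command prompt:

netstat -an | findstr ":389"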

2. Now you need to start the admin server. From another command prompt, enter startserver.bat server1 from the directory Program Files\IBM\ldap\appsrv\bin\, as shown in Example A-2.
Example: A-2 Start admin server C:\Program Files\IBM>cd ldap\appsrv\bin\ C:\Program Files\IBM\LDAP\appsrv\bin>startserver.bat server1 ADMU0116I: Tool information is being logged in file C:\Program Files\IBM\LDAP\appsrv\logs\server1\startServer.log ADMU3100I: Reading configuration for server: server1 ADMU3200I: Server launched. Waiting for initialization status. ADMU3000I: Server server1 open for e-business; process id is 1916

Once the admin server has started, you can close this command prompt.
3. To check that the admin server is working, point your Web browser to http://localhost:9080/IDSWebApp/IDSjsp/Login.jsp (the IBM Directory Server Web Administration Tool). If this does not respond, replace localhost with the actual host name. The Administration login page should display, as shown in Figure A-20.

Figure A-20 IBM Directory Server login

4. Enter the default user name superadmin and password secret and click Login. The main administrator console should appear, as in Figure A-21 on page 579.

Figure A-21 IBM Directory Server Web Administration Tool

5. To change the default administrator login password, select Change console administrator login from the left column, as shown in Figure A-22. Enter a new password and click OK.

Figure A-22 Change admin password

6. Click Manage console servers. Here you add the host name of your local machine, as shown in Figure A-23.

Figure A-23 Add host

Click Add. This will bring up the Add server window, as shown in Figure A-24.

Figure A-24 Enter host details

Enter the host name (shortname is fine) and leave the other options as default. Click OK.

7. The window shown in Figure A-25 appears, confirming that the local host is now added.

Figure A-25 Verify that host has been added

8. You can test that it has been added correctly by logging out and then re-logging in, using your host name and LDAP user name, as shown in Figure A-26.

Figure A-26 Login to local host name

Select the local host name in the LDAP host name drop-down list. Enter the Username and Password as defined in the LDIF file that you imported. In this example, the user name is cn=Manager,o=ITSO and the password is password. Click Login.

9. The default admin console should now display, as shown in Figure A-27.

Figure A-27 Admin console

Verifying LDAP entries


Once the IBM Directory Server has been installed and configured for SAN File System, it is recommended that you log in to the IBM Directory Server Administration tool and verify that you can browse the entries that were imported from the LDIF file. To do this, log in to the IBM Directory Server Web Administration Tool, as shown in Figure A-26 on page 584.

Select Directory Management and then Manage Entries, as shown in Figure A-28.

Figure A-28 Manage entries

Select the Object that you want to browse and click Expand. In this example, we are selecting the o=itso object. Select the objectclass that you want to browse and click Expand. In this example, we are selecting the ou=Users, as shown in Figure A-29.

Figure A-29 Expand ou=Users

Verify that the users, which you specified in the LDIF file that was imported to the LDAP server, exist. In this example, we can see the Administrator, Backup, Monitor, and Operator user accounts.
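The same verification can be done from a command line with the ldapsearch client that is installed with IBM Directory Server (or with the OpenLDAP client described in Appendix B). This is a sketch only; the host name below is a placeholder for your LDAP server.

ldapsearch -h ldapserver.example.com -D "cn=Manager,o=ITSO" -w password -b "ou=Users,o=ITSO" "(objectclass=inetOrgPerson)" cn uid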

You have now verified that the LDIF file has been imported correctly and IBM Directory Server is now installed and ready for use by SAN File System. See the manual IBM Tivoli Directory Server Administration Guide, SC32-1339 for more information about how to use and configure IBM Directory Server. This can be found at:
http://www.ibm.com/software/sysmgmt/products/support/IBMDirectoryServer.html

Sample LDIF file used


Example A-3 shows the LDIF configuration file we used in our setup. You could use this as is, by replacing the string ITSO with an appropriate value for your company or department, or make additional changes if desired.
Example: A-3 Sample LDIF file
version: 1
# ITSO
dn: o=ITSO
objectClass: organization
o: ITSO
# Manager, ITSO
dn: cn=Manager,o=ITSO
objectClass: organizationalRole
cn: Manager
# Users, ITSO
dn: ou=Users,o=ITSO
objectClass: organizationalUnit
ou: Users
# ITSOAdmin Administrator, Users, ITSO
dn: cn=ITSOAdmin Administrator,ou=Users,o=ITSO
objectClass: inetOrgPerson
cn: ITSOAdmin Administrator
sn: Administrator
uid: ITSOAdmin
userPassword: password
# ITSOMon Monitor, Users, ITSO
dn: cn=ITSOMon Monitor,ou=Users,o=ITSO
objectClass: inetOrgPerson
cn: ITSOMon Monitor
sn: Monitor
uid: ITSOMon
userPassword: password
# ITSOBack Backup, Users, ITSO
dn: cn=ITSOBack Backup,ou=Users,o=ITSO
objectClass: inetOrgPerson
cn: ITSOBack Backup
sn: Backup
uid: ITSOBack
userPassword: password
# ITSOOper Operator, Users, ITSO
dn: cn=ITSOOper Operator,ou=Users,o=ITSO
objectClass: inetOrgPerson
cn: ITSOOper Operator
sn: Operator
uid: ITSOOper
userPassword: password
# Roles, ITSO
dn: ou=Roles,o=ITSO
objectClass: organizationalUnit
ou: Roles
# Administrator, Roles, ITSO
dn: cn=Administrator,ou=Roles,o=ITSO
objectClass: organizationalRole
cn: Administrator
roleOccupant: cn=ITSOAdmin Administrator,ou=Users,o=ITSO
# Monitor, Roles, ITSO
dn: cn=Monitor,ou=Roles,o=ITSO
objectClass: organizationalRole
cn: Monitor
roleOccupant: cn=ITSOMon Monitor,ou=Users,o=ITSO
# Backup, Roles, ITSO
dn: cn=Backup,ou=Roles,o=ITSO
objectClass: organizationalRole
cn: Backup
roleOccupant: cn=ITSOBack Backup,ou=Users,o=ITSO
# Operator, Roles, ITSO
dn: cn=Operator,ou=Roles,o=ITSO
objectClass: organizationalRole
cn: Operator
roleOccupant: cn=ITSOOper Operator,ou=Users,o=ITSO

Appendix B. Installing OpenLDAP and configuring for SAN File System


In this appendix, we discuss the following topics:
Introduction to OpenLDAP 2.0.x on Red Hat Linux
Installation of OpenLDAP RPMs
Configuration of OpenLDAP Client
Configuration of OpenLDAP Server
Configuration of OpenLDAP for SAN File System

Introduction to OpenLDAP 2.0.x on Red Hat Linux


This section briefly describes the steps to install OpenLDAP 2.0.x on Red Hat Linux. Specific versions of OpenLDAP are required, depending on the version of Red Hat Linux. Table B-1 outlines the Red Hat versions and the corresponding OpenLDAP build versions.
Table B-1 OpenLDAP build versions
Red Hat Linux release: OpenLDAP build version
AS2.1: 2.0.21
7.3: 2.0.23
8.0: 2.0.25
9.0: 2.0.27

We used Red Hat Linux 9; however, the installation should be substantially similar for other releases. SUSE Linux will also be very similar.

Installation of OpenLDAP packages


1. Determine the Red Hat Linux release number. The release number is stored in the file /etc/redhat-release. In Example B-1, we display the contents of this file, confirming that we are at Release 9.
Example: B-1 Verify the Linux release
# cat /etc/redhat-release
Red Hat Linux release 9 (Shrike)

(If you run SUSE Linux, the release number will be in /etc/SUSE-release.)
2. Determine the version of OpenLDAP that is currently installed. Enter rpm -qa | grep openldap at the Linux prompt, as shown in Example B-2.
Example: B-2 Determine the version of LDAP installed
# rpm -qa | grep openldap
openldap-2.0.27-8
openldap-clients-2.0.27-8
openldap-servers-2.0.27-8

If a default Red Hat Linux installation was used, there should be at least one OpenLDAP RPM installed. If you have the right version installed, as in Table B-1, skip to Configuration of OpenLDAP client on page 591. If no OpenLDAP RPMs are installed, or an invalid version is installed, you will need to install them. The required RPMs for an LDAP server on Red Hat Linux are openldap-2.0.xx-F, openldap-clients-2.0.xx-F, and openldap-servers-2.0.xx-F, where 2.0.xx-F corresponds to Table B-1.
3. The LDAP RPMs can either be found on your Red Hat CD or downloaded from one of the following RPM download sources:
http://www.rpmfind.net. Search on openldap and select based on the distribution.
http://www.redhat.com. Select Download, then search on openldap. Note that other distributions may not be listed here.

Note: You only need to download RPMs that are not installed. For example, if you have openldap-2.0.xx and openldap-clients-2.0.xx installed but not openldap-servers-2.0.xx, then you only need to download the openldap-servers-2.0.xx package.
4. After downloading the RPMs to your Linux server, change to the download directory and start the installation using the rpm command, as shown in Example B-3.
Example: B-3 Install OpenLDAP
# rpm -ivh openldap*

The RPMs will be installed, with a hash-mark progress bar. If the RPMs do not install due to missing prerequisite RPMs, find those RPMs using step 3 on page 590. If, however, the RPMs do not install due to prerequisite files or mismatched file versions, the RPM version selected is not appropriate for the Red Hat Linux installation; you will need to investigate the specific files in conflict and confirm which OpenLDAP RPM version matches those files.
5. Verify that the OpenLDAP RPMs have been installed with rpm -qa | grep openldap at the Linux prompt, as shown in Example B-4.
Example B-4   Verify that the OpenLDAP packages have been installed
# rpm -qa | grep openldap
openldap-2.0.27-8
openldap-clients-2.0.27-8
openldap-servers-2.0.27-8

Now, the three RPMs should be installed: the base, the clients, and the servers.
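If you want to see exactly which files a package installed (for example, where the slapd daemon and its configuration file were placed), the standard rpm query option can help. This is an optional check that we added, not part of the original procedure; the exact file list depends on your OpenLDAP build:

# rpm -ql openldap-servers | grep slapd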

Configuration of OpenLDAP client


This section explains how to configure the OpenLDAP client. This component is not needed for SAN File System, but is included for reference if you need to extend your LDAP configuration.

Customize the client configuration file /etc/openldap/ldap.conf. Edit the ldap.conf file using your favorite text editor, such as vi. This file contains information for the LDAP clients; it holds default values so that they do not have to be specified on the command line. The values to customize for your installation are defined in Table B-2.

Note: The BASE parameter should reflect the root of your LDAP tree; we showed our configuration in Figure 4-1 on page 102.
Table B-2   ldap.conf parameters

Parameter   Description
HOST        Set to the IP address of the host that the local LDAP clients will be using.
BASE        This is the base DN for any searches. Searches, such as those performed by ldapsearch, will be restricted to this DN by default. In our example, it is o=ITSO.


The first part of the ldap.conf file is shown in Example B-5.


Example B-5   ldap.conf
# $OpenLDAP: pkg/ldap/libraries/libldap/ldap.conf,v 1.4.8.6 2000/09/05 17:54:38 kurt Exp $
#
# LDAP Defaults
#
# See ldap.conf(5) for details
# This file should be world readable but not world writable.

#BASE        dc=example, dc=com
#URI         ldap://ldap.example.com ldap://ldap-master.example.com:666

#SIZELIMIT   12
#TIMELIMIT   15
#DEREF       never
HOST 127.0.0.1
BASE o=ITSO

After editing the file, save and quit from the editor. For a more detailed description of this file, refer to the manual page (man ldap.conf).
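Because HOST and BASE are now set, the local LDAP client tools pick them up as defaults, so -h and -b can usually be omitted. As a quick optional check (our addition, not part of the original steps), the following should behave the same as the fully qualified ldapsearch commands used later in this appendix:

# ldapsearch -x '(objectclass=*)'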

Configuration of OpenLDAP server


This section shows how to initially configure the OpenLDAP server. The server is known as the Stand-alone LDAP daemon (or slapd).
1. Edit the server's configuration file /etc/openldap/slapd.conf. The entries to edit are shown in Table B-3. We are using the configuration described in Figure 4-1 on page 102.
Table B-3   The slapd.conf parameters

Parameter   Description
suffix      Set this to the base suffix specified in ldap.conf in Configuration of OpenLDAP client. In our example, it is o=ITSO.
rootdn      This is the DN of the LDAP root user. While this can have any hierarchy, it is most easily placed under the suffix. In our example, it is cn=Manager,o=ITSO.
rootpw      This will be set to a shielded (not encrypted) password in the next step. For now, for your convenience, set it to @rootpw@.

The slapd.conf file used in our example is shown in Example B-6.


Example B-6   slapd.conf
#suffix         "dc=my-domain,dc=com"
suffix          "o=ITSO"
#rootdn         "cn=Manager,dc=my-domain,dc=com"
rootdn          "cn=Manager,o=ITSO"
#
# Cleartext passwords, especially for the rootdn, should
# be avoided. See slappasswd(8) and slapd.conf(5) for details.
# Use of strong authentication encouraged.
#
# rootpw        secret
rootpw          @rootpw@


After editing the file, save and quit.
2. Create a shielded password for the root DN. Enter the command shown in Example B-7.
Example B-7   Create a shielded password for the root DN
# export SLAPPW=`slappasswd`

Note: The slappasswd command is enclosed by back-quotes (`). When prompted, enter the same password twice. It will be concealed like any UNIX password input.

3. When you return to the prompt, the SLAPPW variable contains the shielded string that is needed for the slapd.conf file. Put the value of this variable into the slapd.conf file, as shown in Example B-8. Be careful to enter this command exactly, especially if you are not familiar with Linux command syntax.
Example B-8   Add the shielded password to slapd.conf
# sed -e "s/@rootpw@/$SLAPPW/" slapd.conf > slapd.conf.1
# mv slapd.conf.1 slapd.conf
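As an optional sanity check that we added (not part of the original procedure), you can confirm that the @rootpw@ placeholder was replaced with a hashed value before starting the server. The hash shown here is only a placeholder; yours will differ:

# grep "^rootpw" slapd.conf
rootpw          {SSHA}xxxxxxxxxxxxxxxxxxxxxxxxxxxx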

The basic configuration of your LDAP server is now complete, and you are ready to start it.
4. Start the LDAP server using the service command at the Linux prompt, as shown in Example B-9.
Example B-9   Start the LDAP server
# service ldap start

You should receive a green OK. If not, check /var/log/messages for error messages that relate to slapd, and then try again.
5. We recommend that you configure the LDAP server to start automatically at boot using the chkconfig command, as shown in Example B-10.
Example B-10   Configure the LDAP server to start automatically
# chkconfig --level 235 ldap on
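To confirm the runlevel settings took effect, you can list the service with chkconfig. This check is our addition; the output shown is what we would expect after the command above, and may vary slightly on your system:

# chkconfig --list ldap
ldap            0:off   1:off   2:on    3:on    4:off   5:on    6:off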

6. Make sure that your LDAP server is running and responding to queries using ldapsearch, as shown in Example B-11.
Example B-11   Verify that LDAP responds to queries
# ldapsearch -h localhost -x -b <base_suffix> '(objectclass=*)'


No entries should be returned, but you should get a positive response from the LDAP server, as shown in Example B-12.
Example B-12   Verify that the LDAP server responds to queries
# ldapsearch -h localhost -x -b o=ITSO '(objectclass=*)'
version: 2
# filter: (objectclass=*)
# requesting: ALL
# search result
search: 2
result: 32 No such object
# numResponses: 1
#

If the LDAP server responded correctly to the query, you are now ready to configure your LDAP server to work with SAN File System.

Configure OpenLDAP for SAN File System


In this example, we will assume a base suffix of o=ITSO and a root DN of cn=Manager,o=ITSO.
1. Enter the stdin input mode of ldapadd, as shown in Example B-13.
Example B-13   Enter the input mode of ldapadd
# ldapadd -x -W -h localhost -D "cn=Manager,o=ITSO"
Enter LDAP Password: (<=== HERE INPUT PASSWORD)

When prompted, enter the root DN password that you set in step 2 on page 593 of Configuration of OpenLDAP server on page 592. If you entered your password correctly, you will not see a prompt; this indicates that ldapadd is waiting for you to type input at the keyboard. While in this mode, add the entry for the base suffix, as shown in Example B-14. Once the base suffix has been entered, press Enter a second time to indicate the end of the entry, then type Ctrl+D to exit from the input mode.
Example B-14   Add the base suffix
# ldapadd -x -W -h localhost -D "cn=Manager,o=ITSO"
Enter LDAP Password: (<=== HERE INPUT PASSWORD)
dn: o=ITSO
objectClass: organization
o: ITSO
(<=== 2ND ENTER)
adding new entry "o=ITSO"
(<=== PRESSED Ctrl+D)
#
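If you prefer not to type the entry interactively, the same base suffix can be loaded from a small LDIF file instead. This is an equivalent sketch that we added for convenience; the file name base.ldif is our own choice, and the ldapadd options are the same ones used above:

# cat base.ldif
dn: o=ITSO
objectClass: organization
o: ITSO

# ldapadd -x -W -h localhost -D "cn=Manager,o=ITSO" -f base.ldif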

2. Use ldapsearch to verify that the entry was added to the LDAP database, as shown in Example B-15.
Example B-15   Verify that the entry was successfully added to the LDAP database
# ldapsearch -x -h localhost -b o=ITSO '(objectclass=organization)'

3. Prepare your LDAP configuration file for import with ldapadd. This is a file with a .ldif suffix (LDIF stands for LDAP Data Interchange Format). We used the file ITSOLDAP.ldif shown in Sample LDIF file used on page 587. At a minimum, you will want to edit this file to modify the base suffix (ITSO in our case). The value here should match the organization name that you wish to use, and also match the entry made in the slapd.conf file (Example B-6 on page 592). You may also want to modify users and passwords according to your requirements. Save the file, noting the file name, such as sfsbase.ldif.
4. Import the entries in the file with ldapadd, as shown in Example B-16. When prompted, enter your root DN password, which is the same one you entered in step 2 on page 593 of Configuration of OpenLDAP server on page 592. Make sure to use the right o=xxxxx parameter on the ldapadd command for your environment.
Example B-16   Import the LDIF file
# ldapadd -x -W -h localhost -D "cn=Manager,o=ITSO" -f sfsbase.ldif
Enter LDAP Password:
adding new entry "cn=Manager,o=ITSO"
adding new entry "ou=Users,o=ITSO"
adding new entry "cn=ITSOAdmin Administrator,ou=Users,o=ITSO"
adding new entry "cn=ITSOMon Monitor,ou=Users,o=ITSO"
adding new entry "cn=ITSOBack Backup,ou=Users,o=ITSO"
adding new entry "cn=ITSOOper Operator,ou=Users,o=ITSO"
adding new entry "ou=Roles,o=ITSO"
adding new entry "cn=Administrator,ou=Roles,o=ITSO"
adding new entry "cn=Monitor,ou=Roles,o=ITSO"
adding new entry "cn=Backup,ou=Roles,o=ITSO"
adding new entry "cn=Operator,ou=Roles,o=ITSO"

5. Use ldapsearch again to verify the objects, as described previously in Example B-15 on page 594.
6. The LDAP directory (ldbm) files reside under the directory /var/lib/ldap/ by default. You can list them to check that they exist, as shown in Example B-17.
Example B-17   Verify that the ldbm files exist
# ls -lt /var/lib/ldap/
total 56
-rw-------    1 ldap    ldap    8192 Sep 21 16:41 cn.dbb
-rw-------    1 ldap    ldap    8192 Sep 21 16:41 dn2id.dbb
-rw-------    1 ldap    ldap    8192 Sep 21 16:41 id2entry.dbb
-rw-------    1 ldap    ldap    8192 Sep 21 16:41 nextid.dbb
-rw-------    1 ldap    ldap    8192 Sep 21 16:41 objectClass.dbb
-rw-------    1 ldap    ldap    8192 Sep 21 16:41 sn.dbb
-rw-------    1 ldap    ldap    8192 Sep 21 16:41 uid.dbb

Tip: If you want to re-configure the LDAP directory from scratch, stop slapd, remove the ldbm files, start slapd, then re-do the steps in this section.
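A minimal sketch of that reset sequence, assuming the default file locations used in this appendix (adjust the path if your ldbm files are elsewhere):

# service ldap stop
# rm /var/lib/ldap/*.dbb
# service ldap start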


Appendix C. Client configuration validation script


This appendix contains the listing of the script discussed in 9.7.1, Client validation sample script details on page 430.


Sample script listing


Example C-1 shows the listing of the script. Instructions to get a softcopy of this script are available in Appendix D, Additional material on page 603.
Example C-1   Client configuration validation sample script listing

#!/bin/bash
########################################################################
#
# Script name : check_fileset_acces.sh
# Author      : ITSO Redbook team
# Purpose     : This script checks that a given SANFS client has complete
#               access to all required pools for given filesets according
#               to the output of the reportfilesets command
# IMPORTANT:    This script is provided AS AN UNSUPPORTED SAMPLE ONLY.
#               It is not part of the supported SAN File System software
#               distribution. It will require careful testing to verify
#               the accuracy of the results in your environment. You may
#               want to modify it to include additional error checking or
#               other functions.
#
########################################################################

#let's first make sure the script is correctly invoked
if [ $# -lt 2 ]
then
   echo "Please specify a client name and at least one fileset name"
   echo "Syntax: check_fileset_acces.sh <client_name> <fileset1> <fileset2> ...."
   exit 0
fi

#Store the client name
client=$1

#initialize the array that will host the list of pools the client can
#fully access
client_avail_pools=( )
nb_lun=0
nb_pool=0
nb_vol=0
log_file="$0.log"

echo "" >> $log_file
echo "####### Start checking fileset access for client $1 at `date` #########" >> $log_file
echo "" >> $log_file
echo "Gathering information from SANFS - please wait..."

#build the list of luns for the given client
echo "INFO Building list of luns accessible by client $client" >> $log_file
#the lslun -client command returns the list of volumes accessible by the client
#we filter out with awk to keep only volume names (5th column)
echo "Listing luns on client $client... "
client_luns=( `sfscli lslun -client $client -hdr off |awk '{print $5}' |grep -v '^-' ` )

#just in case...
if [ ${#client_luns[*]} -eq 0 ]
then
   echo "The specified client cannot access any LUN or does not exist"
   exit 0
fi

#build the list of pools available in the system
echo "INFO Building list of pools defined on SANFS" >> $log_file
echo "Listing pools on SANFS..."
default_pool=( `sfscli lspool -hdr off -type default | awk '{print $1}'` )
user_pools=( `sfscli lspool -hdr off -type user| awk '{print $1}'` )
pools=( ${default_pool[@]} ${user_pools[@]} )
nb_pools=${#pools[@]}

#just in case....
if [ "$nb_pools" -le 0 ]
then
   echo "No pool found on SANFS !"
   echo "No pool found - stopping now." >> $log_file
   exit 0
fi

#build the list of vol for each pool
echo "INFO Building the list of volumes for each pool in SANFS" >> $log_file
index=0
while [ "$index" -lt "$nb_pools" ]
do
   #we build the list of volumes for each pool
   current_vol=( `sfscli lsvol -hdr off -pool ${pools[$index]} | awk '{print $1}' 2>>$log_file` )
   if [ "${#current_vol[*]}" -ne 0 ]
   then
      #it's not empty - let's concatenate it
      pool_vol=( ${pool_vol[@]} ${current_vol[@]} )
   else
      #if it's empty, we put "."
      pool_vol[${#pool_vol[*]}]="."
   fi
   #now we place the ":" delimiter
   pool_vol[${#pool_vol[*]}]=":"
   index=$((index + 1))
done

#We now check which pools the client can correctly (entirely) access
pool_vol_cur_index=0
for p in ${pools[@]}
do
   echo "" >> $log_file
   echo "INFO Checking access to pool $p..." >> $log_file
   access=1
   end_of_pool=0
   while [ $end_of_pool -ne 1 ]
   do
      pool_vol_cur=${pool_vol[$pool_vol_cur_index]}
      pool_vol_cur_index=$((pool_vol_cur_index+1))
      if [ $pool_vol_cur = ":" ]
      then
         end_of_pool=1
         #echo "Finished checking pool $p - moving to next one" >> $log_file
      else
         if [ $pool_vol_cur = "." ]
         then
            access=0
            echo "INFO The pool $p does not contain any volume" >> $log_file
         else
            vol_found=0
            client_luns_index=0
            while [ $vol_found -eq 0 -a $client_luns_index -lt "${#client_luns[*]}" ]
            do
               if [ "$pool_vol_cur" = "${client_luns[$client_luns_index]}" ]
               then
                  vol_found=1
               fi
               client_luns_index=$((client_luns_index+1))
            done
            if [ $vol_found -eq 0 ]
            then
               access=0
               echo "WARNING client $client does not have access to volume $pool_vol_cur in pool $p" >> $log_file
            fi
         fi
      fi
   done
   if [ "$access" -eq 1 ]
   then
      echo "INFO access to pool $p - OK" >> $log_file
      client_avail_pools[${#client_avail_pools[*]}]=$p
   else
      echo "WARNING client $client has incomplete access to pool $p" >> $log_file
   fi
done

echo "Now checking filesets access..."

#let's now see the specified filesets
until [ -z "$2" ]
do
   cur_fileset="$2"
   echo "" >> $log_file
   echo "INFO Checking fileset $cur_fileset..." >> $log_file
   fileset_pools=( `sfscli reportfilesetuse -hdr off $cur_fileset | awk '{print $1}'` )
   fileset_access=1
   fileset_pool_index=0
   #if the fileset does not require access to any pool, consider it an error or a wrong fileset name
   if [ ${#fileset_pools[@]} -eq 0 ]
   then
      echo "Fileset $cur_fileset is invalid"
      fileset_access=0
   fi
   while [ $fileset_access -eq 1 -a $fileset_pool_index -lt "${#fileset_pools[@]}" ]
   do
      # variable "avail_pools_index" is the index used to go through the list of pools for the client
      avail_pools_index=0
      # variable "pool_found" is set to 1 if the searched pool is within the client pool list
      pool_found=0
      while [ $pool_found -eq 0 -a $avail_pools_index -lt "${#client_avail_pools[@]}" ]
      do
         if [ "${fileset_pools[fileset_pool_index]}" = "${client_avail_pools[avail_pools_index]}" ]
         then
            # We found the pool
            pool_found=1
         fi
         avail_pools_index=$((avail_pools_index+1))
      done
      if [ $pool_found -eq 0 ]
      then
         fileset_access=0
         echo "WARNING Pool ${fileset_pools[fileset_pool_index]} is missing" >> $log_file
      fi
      fileset_pool_index=$((fileset_pool_index+1))
   done
   if [ "$fileset_access" -eq 0 ]
   then
      echo "WARNING - Client $client does not have correct access to fileset $cur_fileset"
      echo "WARNING - Client $client does not have correct access to fileset $cur_fileset" >> $log_file
   else
      echo "INFO - Client $client has correct access to fileset $cur_fileset"
      echo "INFO - Client $client has correct access to fileset $cur_fileset" >> $log_file
   fi
   #move to the next fileset
   shift
done

echo ""
echo "Please refer to $log_file for details."
echo "####### Checking for client $client finished successfully ############" >>$log_file
exit 0
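As a usage illustration (our own example; the client name and fileset names are placeholders for objects in your environment), the script is made executable and run on the master MDS. Summary messages go to the screen, and the detailed INFO and WARNING records go to a log file named after the script:

# chmod +x check_fileset_acces.sh
# ./check_fileset_acces.sh sanfs-client1 fileset1 fileset2
# cat ./check_fileset_acces.sh.log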


Appendix D. Additional material
This redbook refers to additional material that can be downloaded from the Internet as described below.

Locating the Web material


The Web material associated with this redbook is available in softcopy on the Internet from the IBM Redbooks Web server. Point your Web browser to:
ftp://www.redbooks.ibm.com/redbooks/SG247057

Alternatively, you can go to the IBM Redbooks Web site at:


ibm.com/redbooks

Select Additional materials and open the directory that corresponds to the redbook form number, SG247057.

Using the Web material


The additional Web material that accompanies this redbook includes the following file:

File name       Description
SG247057.zip    check_fileset_access.sh - SAN File System validation script

System requirements for downloading the Web material


The following system configuration is recommended:

Hard disk space:    1 MB minimum
Operating System:   Any that can unzip a file
Processor:          No minimum
Memory:             No minimum


How to use the Web material


Create a subdirectory (folder) on your workstation, and unzip the contents of the Web material zip file into this folder. Then use secure file transfer to copy the shell script to a directory on the master MDS.
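For example, using scp (the MDS host name and target directory shown here are placeholders for your own environment):

# scp check_fileset_access.sh root@mds1:/root/scripts/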


Abbreviations and acronyms


AIX       Advanced Interactive Executive
API       Application Programming Interface
CIFS      Common Internet File System
CIM       Common Information Model
CIMOM     Common Information Model Object Model
CIO       Chief Information Officer
CLI       Command Line Interface
DBCS      Double Byte Character Set
DFS       Distributed File System
DFSMS     Data Facility System Managed Storage
DMS       Database Managed Storage
DMZ       Demilitarized Zone
ESS       Enterprise Storage Server
FAT       File Allocation Table
FFDC      First-Failure Data Capture
GBIC      Gigabit Interface Converter
Gbps      Gigabits Per Second
GUI       Graphical User Interface
HBA       Host Bus Adapter
IBM       International Business Machines Corporation
IP        Internet Protocol
iSCSI     Internet Small Computer System Interface
ISL       Inter-Switch Link
ITSO      International Technical Support Organization
JBOD      Just a Bunch Of Disks
JFS       Journalled File System
LAN       Local Area Network
LDAP      Lightweight Directory Access Protocol
LOB       Large Object
LUN       Logical Unit Number
LVM       Logical Volume Manager
MDS       Metadata Server
NAS       Network Attached Storage
NFS       Network File System
NTFS      New Technology File System
OBDC      One Button Data Collection
OS        Operating System
RA        Remote Access
RAID      Redundant Array of Inexpensive Disk
RAS       Reliability, Availability, and Serviceability
RSA       Remote Supervisor Adapter
SAN       Storage Area Network
SAN FS    SAN File System
SDD       Subsystem Device Driver
SID       Security ID
SMS       System Managed Storage
SNIA      Storage Networking Industry Association
SP        Service Pack
SSH       Secure Shell
SSL       Secure Sockets Layer
SVC       SAN Volume Controller
TCO       Total Cost of Ownership
TCP/IP    Transmission Control Protocol/Internet Protocol
UDB       Universal Database
VPN       Virtual Private Network


Related publications
The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this redbook.

IBM Redbooks
For information about ordering these publications, see How to get IBM Redbooks on page 611. Note that some of the documents referenced here may be available in softcopy only.

- Designing and Optimizing an IBM Storage Area Network, SG24-6419
- DS4000 Best Practices and Performance Tuning Guide, SG24-6363
- Getting Started with zSeries Fibre Channel Protocol, REDP-0205
- Get More out of your SAN with IBM Tivoli Storage Manager, SG24-6687
- IBM SAN Survival Guide, SG24-6143
- IBM Tivoli Storage Management Concepts, SG24-4877
- IBM Tivoli Storage Manager Implementation Guide, SG24-5416
- IBM TotalStorage Enterprise Storage Server: Implementing the ESS in Your Environment, SG24-5420
- IBM TotalStorage SAN Volume Controller, SG24-6423
- IBM TotalStorage Enterprise Storage Server Implementing ESS Copy Services in Open Environments, SG24-5757
- Implementing Systems Management Solutions using IBM Director, SG24-6188
- Understanding the IBM TotalStorage Open Software Family, SG24-7098
- Understanding LDAP - Design and Implementation, SG24-4986
- Virtualization in a SAN, REDP-3633

Other publications
These publications are also relevant as further information sources:

- IBM Tivoli Directory Server Administration Guide, SC32-1339
- IBM TotalStorage FAStT Storage Manager Version 8.4x Installation and Support Guide for AIX, HP-UX, and Solaris, GC26-7622
- IBM TotalStorage FAStT Storage Manager Version 8.4x Installation and Support Guide for Intel-based Operating System Environments, GC26-7621
- IBM TotalStorage FAStT Storage Manager Version 9 Installation and Support Guide for AIX, HP-UX, Solaris, and Linux on POWER, GC26-7648
- IBM TotalStorage FAStT Storage Manager Version 9 Installation and Support Guide for Intel-based Operating System Environments, GC26-7649
- IBM TotalStorage Master Console for SAN File System and SAN Volume Controller Installation and User's Guide Version 3 Release 1, GC30-4090


- IBM TotalStorage SAN File System Administrator's Guide and Reference, GA27-4317
- IBM TotalStorage SAN File System Maintenance and Problem Determination Guide, GA27-4318
- IBM TotalStorage SAN File System Planning Guide, GA27-4344
- IBM TotalStorage SAN File System Installation and Configuration Guide, GA27-4316
- Microsoft Cluster Server Enablement Installation and User's Guide, GC30-4115
- Remote Supervisory Adapter User's Guide, 88P9243, found at:
http://www.ibm.com/pc/support/site.wss/MIGR-4TZQAK.html

Online resources
These Web sites and URLs are also relevant as further information sources:

Distributed Management Task Force
http://www.dmtf.org

Download Cygwin
http://www.cygwin.com http://www.cygwin.com/setup.exe

Download IBM Subsystem Device Driver


http://www.ibm.com/support/dlsearch.wss?rs=540&tc=ST52G7&dc=D430

Download Java plug-ins


http://www.java.sun.com/products/plugin

Download Java software from Sun Microsystems


http://www.java.com/en/download/manual.jsp

Download OpenSSH
http://www.openssh.com

Download PuTTY
http://www.putty.nl

Download RPM images


http://www.rpmfind.net

Heimdal Kerberos 5
http://www.pdc.kth.se/heimdal

Heimdal download site


ftp://ftp.pdc.kth.se/pub/heimdal/src/heimdal-0.6.3.tar.gz

Host Bus Adapters support search tool


http://knowledge.storage.ibm.com/HBA/HBASearchTool

IBM Director Home Page


http://www.ibm.com/servers/eserver/xseries/systems_management/xseries_sm.html

IBM Directory Server V5.1 download


http://www.software.ibm.com/webapp/download/search.jsp?rs=ldap&go=y


IBM eGatherer Home Page


http://www.ibm.com/pc/support/site.wss/document.do?lndocid=MIGR-4R5VKC

IBM eServer xSeries 345 Flash BIOS update (Linux update package)


http://www.ibm.com/pc/support/site.wss/document.do?lndocid=MIGR-54484

IBM Personal computing support - Flash BIOS update (Linux update package) - IBM eServer xSeries 345
http://www.ibm.com/pc/support/site.wss/document.do?lndocid=MIGR-54484

IBM Personal computing support - Flash BIOS Update (Linux package) - IBM eServer xSeries 346
http://www.ibm.com/pc/support/site.wss/document.do?sitestyle=ibm&lndocid=MIGR-57356

IBM Personal computing support - Flash BIOS update (Linux update package) - IBM eServer xSeries 365
http://www.ibm.com/pc/support/site.wss/document.do?sitestyle=ibm&lndocid=MIGR-60101

IBM Personal computing support - IBM FAStT Storage Manager for Linux - TotalStorage
http://www.ibm.com/pc/support/site.wss/document.do?lndocid=MIGR-60591

IBM Personal computing support - Remote Supervisor Adapter II Firmware Update - IBM eServer xSeries 345
http://www.ibm.com/pc/support/site.wss/document.do?lndocid=MIGR-46489

IBM Personal computing support - Remote Supervisor Adapter II Firmware Update for Linux - IBM eServer xSeries 346
http://www.ibm.com/pc/support/site.wss/document.do?sitestyle=ibm&lndocid=MIGR-56759

IBM Personal computing support - Remote Supervisor Adapter II Firmware update - IBM eServer xSeries 365
http://www.ibm.com/pc/support/site.wss/document.do?sitestyle=ibm&lndocid=MIGR-53861

IBM SAN File System: Interoperability - IBM TotalStorage Open Software Family
http://www.ibm.com/servers/storage/software/virtualization/sfs/interop.html

IBM Tivoli Director


http://www.ibm.com/software/sysmgmt/products/support/IBMDirectoryServer.html

IBM TotalStorage Enterprise Storage System Technical Support


http://www.ibm.com/servers/storage/support/disk/2105.html

IBM TotalStorage DS4x00 (FAStT) Linux RDAC Software Package - Fibre Channel Solutions
http://www.ibm.com/support/docview.wss?rs=593&uid=psg1MIGR-54973&loc=en_US:

IBM TotalStorage DS4x00 (FAStT) Technical Support


http://www.ibm.com/storage/support/fastt http://www.ibm.com/servers/storage/support/fastt/index.html

IBM TotalStorage SAN File System Technical Support


http://www.ibm.com/storage/support/sanfs

IBM TotalStorage SAN Volume Controller Technical Support


http://www.ibm.com/servers/storage/support/virtual/2145.html

IBM TotalStorage Storage services


http://www.storage.ibm.com/services/software.html


IBM TotalStorage: Subsystem Device Driver for Linux


http://www.ibm.com/support/docview.wss?rs=540&context=ST52G7&uid=ssg1S4000107&loc=en_US&cs=utf-8&lang=en

IBM TotalStorage support: Downloads


http://www.ibm.com/servers/storage/support/download.html

IBM TotalStorage support: DS4500 Midrange Disk System FC-2 HBA current downloads:
http://www.ibm.com/servers/storage/support/disk/ds4500/hbadrivers1.html

IBM TotalStorage support: DS4500 Midrange Disk System Storage Manager current level downloads
http://www.ibm.com/servers/storage/support/disk/ds4500/stormgr1.html

IBM TotalStorage support: Search for host bus adapters, firmware and drivers
http://www.ibm.com/servers/storage/support/config/hba/index.wss

IBM TotalStorage support: TotalStorage Multipath Subsystem Device Driver Downloading:


http://www.ibm.com/servers/storage/support/software/sdd/downloading.html

IBM TotalStorage: Support for DS4100 Midrange Disk System Troubleshooting


http://www.ibm.com/servers/storage/support/disk/ds4100

IBM TotalStorage: Support for DS4300 Midrange Disk System


http://www.ibm.com/servers/storage/support/disk/ds4300

IBM TotalStorage: Support for DS4400 Midrange Disk System


http://www.ibm.com/servers/storage/support/disk/ds4400

IBM TotalStorage: Support for DS4500 Midrange Disk System


http://www.ibm.com/servers/storage/support/disk/ds4500

IBM TotalStorage: Support for TotalStorage DS6800 Troubleshooting


http://www.ibm.com/servers/storage/support/disk/ds6800

IBM TotalStorage: Support for TotalStorage DS8100 Troubleshooting


http://www.ibm.com/servers/storage/support/disk/ds8100

IBM TotalStorage: Support for TotalStorage DS8300 Troubleshooting


http://www.ibm.com/servers/storage/support/disk/ds8300

IBM TotalStorage: Support for Subsystem Device Driver


http://www.ibm.com/servers/storage/support/software/sdd

Microsoft: Create a cluster-managed file share: Server Clusters (MSCS)


http://www.microsoft.com/technet/prodtechnol/windowsserver2003/library/ServerHelp/e59d826b-c1c7-4022-ad3a-cfc5656202c9.mspx

Microsoft Windows Update


http://v4.windowsupdate.microsoft.com/catalog/en/default.asp

PuTTY: a free telnet/ssh client


http://www.chiark.greenend.org.uk/~sgtatham/putty

QLogic
http://www.qlogic.com

QLogic HBA driver


http://www.qlogic.com/support/oem_detail_all.asp?oemid=22


Qlogic Support: OEM Download driver


http://www.qlogic.com/support/ibm_page.html

Rpmfind.net
http://rpmfind.net

RSA II Users Guide


http://www.ibm.com/pc/support/site.wss/document.do?lndocid=MIGR-57091

Red Hat Linux Home Page


http://www.redhat.com

Samba home page


http://www.samba.org

SNIA - Storage Networking Industry Association: SNIA's Mission


http://www.snia.org/news/mission/

SNMP Home Page


http://www.snmp.com

Storage Networking Industry Association


http://www.snia.org

Sun Microsystems
http://www.sun.com

SUSE Linux Home Page


http://www.suse.com

Virtual I/O Server


http://www.software.ibm.com/webapp/set2/sas/f/vios/documentation/faq.html http://www.software.ibm.com/webapp/set2/sas/f/vios/documentation/home.html

How to get IBM Redbooks


You can search for, view, or download Redbooks, Redpapers, Hints and Tips, draft publications and Additional materials, as well as order hardcopy Redbooks or CD-ROMs, at this Web site:
ibm.com/redbooks

Help from IBM


IBM Support and downloads:
ibm.com/support

IBM Global Services:


ibm.com/services


Index
Symbols
.tank.passwd 247, 253 /etc/defaultdomain 364 /etc/krb5.conf 357 /etc/nsswitch.conf 362 /etc/openldap/ldap.conf 363 /etc/resolv.conf 130 /etc/security/ldap/ldap.cfg 354 /etc/sysconfig/network/routes 130 /etc/yp.conf 364 /usr/local/winbind/install/lib/smb.conf 358 cpio 390 fuser 174 installp 170 lsdev 112, 115 lslpp 114 mkdir 172 mount 174, 182 sysdumpstart 533 tar 331, 390 touch 532533 varyoffvg 116 varyonvg 117 AIx commands mksecldap 364 alert 260, 269, 546 an 15 API 9 application availability 22 application server consolidation 34 application support 87 asymmetric virtualization 13 audit log 526 authentication 100 authorization 100 automatic failover 61 automatic restart 60 autorestart service 82, 413414

A
Access Control Lists 26 ACL 26, 57, 297 Active Directory 54, 78, 338339, 348, 350 l ogin 360 add server 396 addprivclient 299 addserver 398 administration 42 SAN File System 252 administrative log 529 Administrator 74, 102 Adobe Acrobat 195 advanced heterogeneous file sharing 78 AIX 28, 51, 169 client configuration file 533 configure SAN File System 175 configure SDD 115 DB2 559 expand disk 280 HACMP 52 install SDD 113 LVM 85 RDAC 120 SAN File System client dump 533 SAN File System client logging 532 SAN File System configuration 172 SecureWay 364 SMIT 113, 170 stclient.conf 533 stfsdebug 532533 syslog 532533 system logging 532533 take fileset ownership 300 unmount SAN File System 174, 182 upgrade client 244 virtual I/O 53 AIX commands cfgmgr 170, 280 chmod 300 chown 300 cp 390 Copyright IBM Corp. 2003, 2004, 2006. All rights reserved.

B
Backup 74, 102 backup and recovery 4 BIOS 135, 234235, 540 block aggregation 9 block subsystems 9 browser interface 256 bufferpools 558

C
cache 94 caching 53, 558 catpolicy 311 change validation 20 CHKDSK 57 CIFS 78, 468 CIM 6, 10, 31, 40, 45 agent 8 object manager 8 Object Manager see CIM/OM CIM/OM 8 CIMOM 186 cimom.log 528 CIM-XML 7 client data access 427


monitor 412 privileged 298 show volume access 304 validation script 430 client dump 532533 client logging 530, 532533 client tracing 531 client-server 51 cluster log 530 clustering 87 Common Information Model 31, 45 Connection Manager 46 consolidated logs 530 consolidation 34 logical storage consolidation 5 physical 5 copy on write 376 copy services 16 copy-on-write 80 Create a shielded password 593 create database 558 create storage pool 269 cron 441 Customer Connection ID 521 cygwin 252, 480

D
data migration 8889, 389 offline 89 online 90 data migration phases 393 data replication functions 5 data sharing 11, 14 database DB2 554 database location 560 DB2 199, 554, 568 AIX 559 and SAN File System 554 bufferpools 558 database location 560 Database Managed Storage 554 direct I/O 558559 FlashCopy 560 free space 557 global namespace 560 index data 555 large object data 554 logs 554 policy placement 554 SAN File System rules 556 space consumption 557 storage management 557 System Managed Storage 554 transaction logs 555 Windows 559 DB2 commands create database 558, 560 create tablespace 554555 suspend 560

DB2_DIRECT_IO 559 DB2NTNOCACHE 559 default storage pool 263, 277 default User Pool 79 default user pool 49, 328 delete server 397 demilitarized zone 520 device candidate list 185 device management 23 DFSMS 11, 16 direct I/O 53, 87, 558559 directory server 348349 disable default pool 328 Disabling the default User Pool 328 disk consolidation 5 disk performance management 23 DMS 554 DMTF 6, 10 DMZ 520 DNS 130, 350 DRfile 481 dropserver 398 DS4000 34, 79, 119 DS8000 10 DTMF 6 dual hardware components 59 dump 532533 dynamic 287 dynamic fileset 415

E
engine 37, 40, 252 enterprise class shared disk array 5 ESS 24, 34, 66, 7879, 235, 406 CIM/OM 8 PPRC (Metro Mirror) 24 SDD 69 Ethernet bonding 60, 67, 84 SAN Fiile System Ethernet bonding 131 event log 331, 526, 530 expand SVC disk 279

F
fabric management 20 failback 418 fail-over 413 failover 61, 233, 419 failover monitoring 421 failover time 427 FAStT 34, 45, 66, 6970, 79 RDAC 69, 119 FAT 28 fencing 82 FFDC 527 file metadata 34, 390 file level virtualization 17 file lifecycle management 441


file management policy 50, 441 create 442 execute 443 syntax 442 file metadata 41 file permission mapping 338339 file placement policy 49, 80, 304 file preallocation 324 file sharing 78, 338 administrative commands 348 directory server 348349 heterogeneous 338 homogeneous 338 homogenous 338 user domain 347 file system cache 558 definition 25 free space 557 LAN 28 local 28 permissions 26 SAN 28 security 26 file/record subsystems 9 fileset 41, 4647, 286287, 300, 415, 557 assign MDS 294 attach 296 change characteristics 295 convert static to dynamic 295 delete 296 detach 296 dynamic 287 failover 415 link to storage pool 304 metadata 427 permissions 291, 340 primary allegiance 297, 340 quota 290 redistribution 415 static 287, 415 statistics 399 storage pool relationship 304 take ownership 367 threshold 290 filesets 415 firewall 189, 521 first-failure data capture 527 FlashCopy 42, 47, 55, 58, 80, 291, 376, 478, 560 considerations 378 copy on write 376 create 380 directory 378 list images 380, 384 remove image 387 revert image 384 space 376 flexible SAN 69 forecasting 21 free space 557

G
GBIC 66 getent passwd 364 global namespace 34, 41, 46, 54, 560 grace period 41 groups 72 GSKit 569

H
HACMP 52, 87, 560 hard quota 48 hardware element management 12 hardware faults 60 HBA 37, 40, 271 HBA performance 406 heartbeat 60 Heimdal 355356 heterogeneous file sharing 54, 78, 338 high availability 11 homogenous file sharing 54, 338 HTTP 7 hwclock 129130

I
IBM Director 45, 421 IBM Directory Server 566 configuration 570 configuring 574 create LDAP database 570 DB2 568 GSKit 569 install 566 start admin server 578 IBM Global Security Toolkit 569 IBM services SAN File System IBM services 90 IBM Tivoli Storage Manager 48 IBM Tivoli Storage Manager see ITSM IBM Tivoli Storage Manager see Tivoli Storage Manager IBM Tivoli Storage Resource Manager 21 IBM TotalStorage Multiple Device Manager 22 IBM TotalStorage Open Software Family 14 IBM TotalStorage Productivity Center 19 IBM TotalStorage SAN File System 16, 30 IBM TotalStorage SAN File System see SAN File System IBM TotalStorage SAN Volume Controller see SVC IBM TotalStorage Virtualization Family 31 ifcfgeth0 130 ifconfig 131 IFS 52 IIS 57 implementation services 90 in-band virtualization 1315 install MDS 138, 237 install SAN File System 164 installable file system 36 instant copy 58 interoperability 10 Index


iSCSI 4, 40 ITSM 502 ITSRM 21

J
Java 38, 187188 JBOD 79, 554 JFS 28 JRE 187

K
Kerberos 357 kinit 360 klist 360

L
LAN 28 LAN file system 28 LDAP 37, 7374, 78, 100, 102, 186, 261, 339 client 591 configure for SAN File System 594 configure OpenLDAP server 592 Data Interchange Format 574 database 74, 102, 570 DN 74 IBM Directory Server 566 install OpenLDAP 590 LDIF 102, 574 OpenLDAP 590 slapd 592 start admin server 578 start OpenLDAP server 593 start server 578 switch to local authentication 246 User 74, 102 user ID 74 userPassword 74 verify entriesLDAP browse directory 585 WebSphere 569 LDAP commands ibmslapd 577 ldapadd 594 ldapsearch 103104, 593595 mksecldap 355 secldaplcntd 355 slappasswd 593 startserver 578 LDIF 102, 574 leases 41 Legato NetWorker 478 LI 254 license agreement 138 life cycle management 441 lifecycle management 50 create policy 442 execute policy 443 recommendations 446

Linux 164 change password 129 direct I/O 53 Ethernet bonding 60 install OpenLDAP 590 kernel upgrade 128 LDAP 590 RDAC 121 Red Hat 590 SAN File System client logging 533 service pack 128129 SUSE 3738, 121, 128129, 590 syslog 533 system logging 533 zSeries 178 Linux client setup 165 Linux commands fuser 182 passwd 129 rpm 591 service 593 shutdown 131 top 408, 412 useradd 252 vmstat 408, 412 List policy contents 310 list user map 366 load balancing 557 LOB 554, 559 local authentication 72, 100, 186 change from LDAP 246 local file system 28 locks 41 log.audit 526 log.std 331, 526 log.trace 527 logging 331, 521, 530, 532533 client 530 SAN File System 331, 526528, 530 SAN File System clients 530 logical consolidation 5 logs DB2 554 LPAR 53, 178 LUN 20, 263 LUN expansion 70 LUN masking 67 LUN statistics 407 LUN-based backup 478479 LVM 85

M
Management Information Base 543 master console 38, 520 and firewall 189 installation 187 SVC 187 master failover 419 master MDS 41


failover 419 identifying 255 MBCS 56, 296, 322 MDM 22 MDM Replication Managerl 24 MDS 17, 36, 40 Active Directory queries 357 verifySDD 118 MDS autorestart 414 memory 94 memory.dmp 532 Metadata 14 metadata 14, 34, 47, 78, 88, 94, 427 Metadata server 17, 36, 40, 67 RSA 71 metadata server see MDS Metro Mirror 24 MIBs 543 Microsoft Management Console 157 migratedata 89 migration services 90 mkpolicy 311, 453 MMC 157 MOF 6 Monitor 74, 102 monitoring failover 421 mprivclient 300 MSCS 53, 88, 447449 Multi-pathing device driver 69 Multiple Device Manager 2223 Multiple Device Manager Replication Manager 24

configure for SAN File System 594 configure server 592 install 590 slapd 592 start server 593 Operator 74, 102 Oplocks 162 out-of-band 14 out-of-band virtualization 13 out-of-space condition 22

P
PD 520 performance management 23 physical consolidation 5 planning worksheets 95 policy 49, 304305, 334 policy best practices 334 policy rule file 309 policy rule syntax 307 policy statistics 332 policy-based automation 11 pool 268 POSIX 53 PPRC 24, 78 preallocation 324 prerequisite software 135 primary allegiance 297, 340, 479 privileged client 86, 297298, 366, 391, 429 problem determination 520 proxy model 8 PuTTY 198, 252, 480

N
N+1 415, 427 name resolutioin 130 nameservice cache 363 NAS 4 NAT 189 nested fileset 48, 289 nested filesets 83 NetView 213 network bonding 60, 84 NFS 297 NIS 54, 78, 338339, 348, 355, 364 NLS 322 non-uniform 69, 79, 303, 328, 334, 428429 Now we backup the files with 508 NTFS 28, 54, 57

Q
QLogic 135, 271 quorum disk 419 quota 48

R
RAID 5, 66 RAID-5 78 RAM 94 RDAC 38, 69, 85, 119, 136, 235 re-add server 398 recovery time 427 Red Hat 51 Red Hat Linux 590 OpenLDAP 590 Redbooks Web site 611 Contact us xxiii regedit 532 reliability 59 Remote Aaccess 520 Remote Access 46 remote mirroring 5 Remote Supervisor Adapter 60 remove server 397 remove volume 70

O
ODBC 534 ODBC see one-button data collection offline data migration 89 one-button data collection 534 online data migration 90 onsistency group 479 open standards 10 OpenLDAP 352, 590 client 591

Index

617

replication 14 rogue MDS 413 rogue server 82 ROI 21 rolling upgrade 233 root squashing 297 RS-485 67, 243 RSA 38, 46, 71, 135, 231, 235, 414, 538 fencing 82 logs 537 RSA II 60 rule file 309

S
Samba 78, 355, 357 winbind 348, 357358 SAN 22, 66 distance 5 fencing 82 non-uniform configuration 69 uniform configuration 69 zoning 108 SAN FIle System administration 42 uniform SAN configuration 69 SAN File System 14, 17, 28, 31, 252, 256, 269, 558559 .tank.passwd 247, 253 access to LUNs 67 activate policy 311 activate volume 266 Active Directory 349350 Active Directory query 357 active policy 305 add MDS 396 add privileged client 261, 298299 add SNMP manager 546 add volume 263264, 270 administration 37, 252, 256, 557 administrative log 529 administrative roles 72 administrators 261 advanced file sharing 347 AIX client 169 AIX client configuration file 172 AIX client dump 533 AIX client logging 532 alert 260, 269, 290, 546, 557 allocation size 269 and DB2 554 and firewall 189 application support 87 assign fileset server 294 attach fileset 296 audit log 526 authentication 72, 100, 186 authorization 100 automatic failover 61, 233 automatic restart 60 autorestart 262, 414 autorestart service 82, 413414

backup 478 balanced workload 80 browser access 252 cache 94, 558 caching 53 catpolicy 310 change cluster configuration 298 change fileset 295 change storage pool 276 change volume 266 check metadata 262 CIMOM 186 cimom.log 528 clear log files 262 CLI 37, 42, 46, 74, 102, 228, 252 CLI password 253254 CLI user 247 CLI_USER 254 client 51, 94 client access validation 430 client configuration file 533 client data access 427 client dump 532533 client installation 149 client log 530 client logging 530, 532533 client monitoring 412 client operations 296 client properties 157 client tracing 531 client validation script 430 clients 261 cluster configuration 262 cluster log 530 cluster name 293 cluster statistics 400 cluster status 262, 298 clustering 52, 87 commands 44 configuration files 481 configure AIX client 175 configure OpenLDAP 594 consolidated logs 530 convert static to dynamic 295 copy on write 376 create file management policy 442 create fileset 290 create FlashCopy 380 create policy 309, 311 create user domain 365 create user map 366 create volume 264 data access 427 data migration 8889, 389 decrease cluster 397 default storage pool 263, 277, 556 default user pool 49, 79, 328 defragment files 441 deinstall client 157 delete fileset 296

618

IBM TotalStorage SAN File System

delete MDS 397 delete storage pool 277 delete volume 266 detach fileset 296 detect new LUNs 263 device candidate list 185 direct I/O 53, 87 directory server 348349 disable autorestart 262 disable default pool 330 discover LUNs 269 disk accessl SAN File System LUN access 67 disk acess 69 display cluster status 262 display engines 262 display LUNs 264 display policy rules 310 domain 365 drain volume 266 DRfile 481 DS4000 119 dual hardware components 59 dump 532533 dynamic 287 dynamic fileset 415 enable autorestart 262 engine 37, 40, 252 engine status 262 error log 331 Ethernet bonding 60, 67, 84 event logging 331, 526 execute file management policy 443 expand system volume 284 expand user volume 277 failback 418 fail-over 413 failover 233, 419 failover monitoring 421 failover time 427 FAStT support 119 faulty disk 268 fencing 82 fiileset 46 file defragmentation 441 file lifecycle management 441 file management policy 50, 441 file metadata 41 file metadata information 401 file movement 436 file permission mapping 338339 file placement policy 49, 80, 304 file sharing 78, 297, 338 fileset 41, 4647, 286, 415, 557 fileset assignment 287 fileset failover 415 fileset hard quota 48 fileset ownership 300 fileset permissions 291 fileset quota 290

fileset redistribution 415 fileset soft quota 48 fileset statistics 399 fileset status 262 fileset threshold 290 filesets 415 first-filure data capture 527 FlashCopy 42, 47, 55, 58, 80, 291, 376, 478, 560 FlashCopy considerations 378 flexible SAN 69 global namespace 34, 41, 46, 54 grace period 41 groups 72 GUI 252, 256 GUI monitoring 402 GUI Web server 43 hard quota 48 hardware faults 60 hardware validation 105 HBA 37 heartbeat 60 Heimdal 356 helper service 156 heterogeneous file sharing 78, 338 heterogenous file sharing 54 high availability 81, 413 homogenous file sharing 54, 338 IBM Director 421 identifying master MDS 255 implementation services 90 increase cluster 396, 398 install AIX client 169 install client 149 install Master Console 187 install Solaris client 168 installation 126, 138, 237 instant copy 58 iSCSI 40 Kerberos 357 kernel extension 172 LDAP 73, 100, 186, 252, 348, 566, 590 leases 41 license agreement 138 lifecycle management 50, 441 lifecycle management recommendations 446 Linux client 164 Linux client logging 533 Linux kernel 128 list administrators 261 list clients 261, 402 list engines 262 list filesets 291 list FlashCopy images 380, 384 list logs 261 list LUNs 260, 264 list mapped user IDs 364 list policy 310 list pools 265 list server 262 list servers 258

Index

619

list storage pools 260 list user map 366 list volume contents 267 list volumes 259 listing logs 522 load balancing 557 local authentication 72, 100, 138, 186, 246 locks 41 log files 261, 331 log.audit 526 log.std 331, 526 log.trace 527 logging 331, 521, 526528, 530, 532533 logs 417, 522 LUN 263 LUN expansion 70 LUN-based backup 478479 LUNs 260 make fileset 290 make FlashCopy 380 make user domain 365 make user map 366 make volume 264, 270 map users 347 Master Console 38, 45, 520 master console installation 187 master failover 419 master metadata server 41 MBCS 56, 322 MDS 17, 36, 40 MDS installation 138, 237 MDS performance 400 MDS validation 105 message format 522 metadata 47, 78, 88, 94 Metadata server 3637, 67 metadata size 92 migrate data 391 migrate to local authentication 246 migration services 90 MMC 157 modify fileset 295 modify storage pool 276 modify volume 266 monitor clients 412 monitor server performance 400 monitoring 398 monitoring failover 421 move file 436 MSCS 88 multi-path device driver 119 N+1 415, 427 nested fileset 48, 289 nested filesets 83 network bonding 60, 67, 84 network infrastructure 71 new LUNs 263 NIS 355, 364 NLS 322 non-uniform configuration 69, 79, 303, 328, 334,

428429 NTFS restrictions 57 ODBC 534 offline data migration 89 one-button data collection 534 online data migration 90 Oplocks 162 package installation 138, 237 partition size 269 password 254 performance monitoring 398 permission mapping 338339 planning worksheets 95 policy 49, 304305, 334, 554 policy best practices 334 policy evaluation 328 policy rule 305, 556 policy rule examples 322 policy rule syntax 307 policy rules 310 policy statistics 332 pool 268 preallocation 324 prerequisite software 135 primary allegiance 297, 340, 479 privileged client 86, 261, 291, 297298, 366, 391, 429, 452 problem determination 520 processes 262 QLogic driver 135 quiesce cluster 262 quota 290 RDAC 121, 235 re-add MDS 398 reassign fileset 262, 294 recovery time 427 rediscover LUNs 263, 269 reliability 59 Remote Access 46, 520 remove faulty disk 268 remove fileset 296 remove FlashCopy image 387 remove MDS 397 remove privileged client 300 remove storage pool 277 remove volume 266 volume drain 70 report files in volume 332 report fileset use 304 reports 405 resume cluster 262 revert FlashCopy image 384 rogue server 82, 413 rolling upgrade 67, 233 root access 86, 297 root squashing 297 RSA 38, 71, 135, 231, 235, 538 RSA card 46 RSA fencing 82 RSA II 44, 60

620

IBM TotalStorage SAN File System

RSA logs 537 rule file 309 Samba 78, 357 SAN fencing 82 SDD 118, 235 secure shell 136, 228 security log 528 server log 526 server statistics 262, 400 server status 258, 262 set alerts 546 set default User Pool 263 set hardware clock 129130 setup local authentication 100 sfscli 44, 74, 102 show cluster status 262 show filesets 291 show LUN access 304 show pools 265 show user map 366 show volume access 304 sizing 9192 Snap-in 158 SNMP 421, 543 soft quota 48 software 39 software faults 60 Solaris client 168 spare MDS 415, 427 ssh 228 ssh keys 136 start cluster 262 start server 262 statfileset 399 static fileset 287, 415 statserver 400 stclient.conf 172 stop cluster 262 stop server 262 storage pool 4849, 78, 268, 555 storage pool design 78 storage pool threshold 269 supported clients 85 suspend volume 263 system metadata 41 System Pool 49, 78, 91 system time 130 system volume size 92 take ownership of fileset 300 TankSysCLI.attachpoint 492 TankSysCLI.auto 490 TankSysCLI.volume 490 threshold 269 time zone 130 TMVT 105, 146 trace properties 163 tracing 527, 531 transaction rate 557 uniform configuration 69 UNIX-based client 338

update policy 311 upgrade 233 upgrade AIX client 244 upgrade kernel 128 upgrade to local authentication 246 upgrade Windows client 245 upgrading 230 use local authentication 100 usepolicy 311 user domain 347, 365 user ID synchronization 338 user map entries 54, 339, 347 User Pools 49, 79, 92 user volumes 40 validate RSA 538 verify servers 258 verify volumes 259 viewing logs 522 virtual I/O 53 volume 263 volume contents 267 volume drain 70, 266 volume expansion 277, 284 volume files report 332 volume visibility 69 volumes 42 VPN 46, 520 winbind 348, 357358 Windows client 149 Windows client dump 532 Windows client logging 530 Windows client tracing 531 Windows driver 156 workload 79 workload balancing 80, 427 workload unit 286 zSeries client 178 SAN File system Metadata server 40 SAN File System client AIX 51 Linux 51 pSeries 52 Red Hat Linux 51 statistics 408 SuSE Linux 52 VMWare 51 Windows 2000 51 Windows 2003 51 zSeries 52 SAN File System client commands stfsdebug 532533 stlog 531 SAN FIle System commands stopautorestart 242 SAN File System commands 299300, 453 activatevol 259, 266 addprivclient 261, 298, 452 addserver 396398

Index

621

addsnmpmgr 546 attachcontainer 296 attachfileset 296 autofilesetserver 295 builddrscript 488489 catlog 261, 417, 421, 522, 526, 528530 chclusterconfig 262, 298 chfileset 295 chpool 276 chvol 266 clearlog 262 datapath query adaptstat 406, 412 detachfilieset 296 disabledefaultpool 330 dropserver 397398 expandvol 281, 286 hwclock 129130 ldapadd 595 legacy trace 527 lsadmuser 248, 261 lsautorestart 82, 262, 414 lsclient 261, 402 lscluster 489 lsdrfile 488 lsengine 262 lsfileset 291292 lsimage 380, 384385 lslun 260, 264, 266267, 270, 275, 278, 281, 284, 304, 431, 451 lspolicy 310 lspool 260, 265, 276, 278, 282 lsproc 262 lsserver 230, 234, 258, 262, 285, 397, 483484 lssnmpmgr 546 lstrapsetting 546 lsusermap 366 lsvol 259, 265267, 276, 278, 282, 286, 304, 428, 431, 438, 452 migratedata 89, 390391, 393 mkdomain 365 mkdrfile 484, 488 mkfileset 290, 415, 452 mkimage 380, 382 mkpolicy 310, 334 mkpool 269 mkusermap 366 mkvol 70, 263264, 270, 276, 451 mvfile 436, 438, 440441 pmf 534 quiescecluster 262, 480 rediscoverluns 263, 269, 275 reportclient 278, 303304, 428, 438 reportfilesetuse 303304, 431 reportvolfiles 267268, 332, 438440, 455 resetadmuser 100, 263 resumecluster 262, 482 reverttoimage 384385, 493 rmdrfile 488 rmfileset 296 rmimage 387388

rmpool 277 rmsnmpmgr 546 rmstclient 174, 182, 244, 280, 482483 rmvol 70, 266268, 277 sanfs_ctl disk 186 setdefaultpool 263 setfilesetserver 262, 294, 397, 415, 427 settrap 546 setupsfs 231, 481 setupstclient 165, 172, 174, 182, 280 sfscli 258 startautorestart 82, 241, 262, 415 startcluster 262, 483484 startmetadatacheck 262 startserver 262, 397 statcluster 236, 255, 262, 298, 400, 421 statfile 401, 428, 445 statfileset 262 statpolicy 332 statserver 262, 400 stfsclient 172 stfsdisk 186 stfsdriver 172 stfsmount 172 stfsumount 174 stopautorestart 234, 262 stopcluster 262, 482483 stopmetadatacheck 262 stopserver 234, 262, 397, 416 suspendvol 263 tankpasswd 254 tmvt 105 trace 527 upgradecluster 230, 243 usepolicy 334, 453 SAN Volume Controller 12, 1416, 38, 187 SCSI 40 SDD 38, 69, 85, 109, 136, 407 configure for AIX 115 install on AIX 113 install on Windows 2000 110 upgrade driver 235 SDD commands cfgvpath 274 datapath 408 datapath query adapter 112, 118 datapath query device 111, 116, 118, 270, 274, 483 datapath query devstats 407 hd2vp 117 vp2hd 117 secure ftp 480 secure LDAP 74 secure shell 228 security log 528 server consolidation 34 server log 526 Service Location Protocol 40 services implementation 90 migration 90

622

IBM TotalStorage SAN File System

setupstclient 172 sfscli 44, 258, 490 script option 489 sfscli -script 489 sfslcm.pl 443 shared disk capacity 5 shared nothing 448 Simple Network Management Protocol 543 sizing 9192 slapd 592 slapd.conf 592 SLP 23 SMI 67 SMI-S 2223 SMIS 10 SMIT 113 SMS 554 SNIA 67, 910, 2223, 31 SNIA Storage Model 9 SNMP 189, 414, 421, 543 add manager 546 set traps 546 snmptrap 425 soft quota 48 software faults 60 Solaris 51, 168 RDAC 120 space consumption 557 spare MDS 415, 427 Specify a user and password 568 SSH 136, 252, 521 ssh 228, 233 SSL 74, 569 certificate 74 standards organizations 6 static 287 static fileset 415 stclient.conf 172, 175, 533 stfsclient 172 stfsdebug 532533 stfsdriver 172 stfsmount 172 stfsstat 408 storage administration costs 4 costs 21 forecasting 21 growth 21 return on investment 21 space consumption 557 standards 10 standards organizations 6 TCO 4, 10 virtualization 11, 31 storage consolidation 5, 34 storage level virtualization 11 storage management costs 21 storage model SNIA 9

storage partitioning 67 storage pool 4849, 78, 268 alert 269 allocation size 269 partition size 269 threshold 269 Storage Resource Management ROI 21 Subsystem Device Driver see SDD Sun Cluster 52 SUSE 3738, 51, 121, 128129, 590 SVC 10, 1516, 34, 45, 66, 7071, 187, 235, 270, 277, 284, 406 expand disk 279 LUN id 278 vdisk 271 SVC see SAN Volume Controller svcinfo lsvdiskhostmap 278 symbolic link 46 symmetric virtualization 13 sysdumpstart 533 syslog 532533 syslog.conf 532533 syslogd 534 system metadata 41 System Pool 49, 78, 91 add volume 274 expand volume 284 system-managed storage 16

T
take ownership 300 take ownership of fileset 367 tankpasswd 247 TCO 4, 10, 16 The LDAP RPMs can 590 threshold 269, 290 Tivoli 12 Tivoli Bonus Pack for SAN Management 22 Tivoli Storage Manager 478 Tivoli Storage Manager see ITSM Tivoli Storage Resource Manager 21 TMVT 105, 146 TotalStorage Open Software Family 14 TotalStorage Productivity Center 19 TotalStorage Productivity Center for Data 21 TotalStorage Productivity Center for Disk 22 TotalStorage Productivity Center for Fabric 20 TotalStorage Productivity Center for Replication 24 touch 532533 TPC 19 TPC for Data 21 TPC for Disk 22 device management 23 Device Manager 23 performance management 23 TPC for Fabric 20, 22, 45 TPC for Replication 24 trace 163 trace log 527 Index

623

tracing 527, 531 transaction rate 557 TSM see Tivoli Storage Manager TSRM 21 tunnel 520

W
watchdog 60 WBEM 6, 10 WebSphere 569 winbind 348, 355, 357358 Windows Active Directory 54, 78, 338339, 348350 DB2 559 Directory Change Notification 57 Event Log 530 expand LUN 282 MSCS 53 privileged client 301 registry editor 532 SAN File System client dump 532 SAN File System client logging 530 SAN File System client tracing 531 short names 162 take fileset ownership 301 upgrade client 245 Wordpad 531 Windows 2000 51 install SDD 110 RDAC 119 Windows 2003 51, 110 Windows commands perfmon 412 xcopy 390 Wordpad 531 workload balancing 427

U
UDP 543 uniform configuration 69 United Linux 128129 UNIX device candidate list 185 privileged client 300 take fileset ownership 300 UNIX-based client 338 upgrade SAN File System 230 UPS 60 usepolicy 311 user domain 347, 365 user ID 74 user map entries 54, 339, 347, 366 User Pools 49, 79, 92 add volume 270 expand volume 277 userPassword 74

V
validate 538 validation 430 vdisk 271 VERITAS 48, 85 VERITAS NetBackup 478 VFS 52 VIO 53 virtual disks 15 virtual file system 36 virtual I/O 53 Virtual Private Network 520 virtual volumes 16 virtualization 11, 31 asymmetric 13 fabric level 11 file level 17 in-band 1315 network level 11 out-of-band 13 server level 11 storage level 11 symmetric 13 VM 11 volume 263 create 270 list contents 332 volume drain 266 volume visibility 69 VPN 46, 520521 VSAN 107

X
xSeries 37

Z
z/VM 178 zones 20 zoneShow 108 zoning 67, 108 zSeries 178


Back cover

IBM TotalStorage SAN File System


New! Updated for Version 2.2.2 of SAN File System

Heterogeneous file sharing

Policy-based file lifecycle management
This IBM Redbook is a detailed technical guide to the IBM TotalStorage SAN File System. SAN File System is a robust, scalable, and secure network-based file system designed to provide near-local file system performance, file aggregation, and data sharing services in an open environment. SAN File System helps lower the cost of storage management and enhance productivity by providing centralized management, higher storage utilization, and shared access by clients to large amounts of storage. We describe the design and features of SAN File System, as well as how to plan for, install, configure, administer, and protect it. This redbook is for all who want to understand, install, configure, and administer SAN File System. It is assumed the reader has basic knowledge of storage and SAN technologies.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE


IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

For more information: ibm.com/redbooks


SG24-7057-03 ISBN 0738496391
