HP StorageWorks

Enterprise Backup Solution design guide

Part number: 5697-7309


Tenth edition: April 2009
© Copyright 2003-2009 Hewlett-Packard Development Company, L.P.
Hewlett-Packard Company makes no warranty of any kind with regard to this material, including, but not limited to, the implied warranties of
merchantability and fitness for a particular purpose. Hewlett-Packard shall not be liable for errors contained herein or for incidental or consequential
damages in connection with the furnishing, performance, or use of this material.
This document contains proprietary information, which is protected by copyright. No part of this document may be photocopied, reproduced, or
translated into another language without the prior written consent of Hewlett-Packard. The information is provided “as is” without warranty of any
kind and is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements
accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for
technical or editorial errors or omissions contained herein.
Adobe® and Acrobat® are trademarks of Adobe Systems Incorporated.
Intel and Itanium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.
Microsoft, Windows, and Windows XP are U.S. registered trademarks of Microsoft Corporation.
Oracle® is a registered U.S. trademark of Oracle Corporation, Redwood City, California.
UNIX® is a registered trademark of The Open Group.



Contents

About this guide . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9


Intended audience . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Related documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Document conventions and symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Rack stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
HP technical support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
HP-authorized reseller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Helpful websites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Supported components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Supported topologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Serial-attach SCSI (SAS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Direct-attach SCSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Direct-attach SAS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Point-to-point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Switched fabric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Platform and operating system support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Use of native backup programs and commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

2 Hardware setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Components. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
HP StorageWorks Secure Key Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
The Secure Key Manager features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Configuration preparation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Getting help . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
HP StorageWorks ESL E-Series tape libraries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
SCSI or FC connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Setting the bar code length . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Creating multi-unit ESL E-Series tape libraries using the Cross Link Kit . . . . . . . . . . . . . . . . . . . . . . . . . 23
Check connectivity and performance with L&TT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
HP StorageWorks EML E-series tape libraries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Cabling and configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Setting the bar code front panel display and host reporting configuration . . . . . . . . . . . . . . . . . . . . . . 31
Check connectivity and performance with L&TT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
HP StorageWorks MSL5000 and MSL6000 Series tape libraries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Connecting the MSL5000/6000 tape library . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
MSL5000/6000 Series library with Fibre Channel routers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
MSL5000/6000 Series library with direct-attached SCSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Creating a multi-stack unit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Setting the bar code length . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Check connectivity and performance with L&TT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
HP StorageWorks MSL2024, MSL4048, and MSL8096 Series Fibre Channel (FC) tape libraries . . . . . . . . 38
Back panel overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Configuring drive information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Viewing drive status information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
SAN connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Native Fibre Channel drives (NFC). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Network Storage Routers (NSR) N1200-320 4Gb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Setting the bar code length . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Check connectivity and performance with L&TT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
The HP StorageWorks 6000 Virtual Library System (VLS). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44



Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Setting the bar code. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
VLS6105 and VLS6109 rack order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
VLS6200, and VLS6500 disk array rack mounting order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
VLS6600 disk array rack mounting order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
VLS6840 and VLS6870 rack order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
VLS6105, VLS6109, VLS6218, VLS6227, VLS6510, VLS6518 cabling . . . . . . . . . . . . . . . . . . . . . . . 48
VLS6600 cabling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
VLS6840 and VLS6870 cabling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Check connectivity and performance with L&TT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Operating system LUN requirements and restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
HP StorageWorks 9000–series Virtual Library System (VLS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
VLS9000-series components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Installing VLS9000 cables. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Setting the bar code. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Check connectivity and performance with L&TT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
HP StorageWorks 12000/300 Virtual Library System Gateway. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
System status monitoring. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Redundancy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
VLS12000/300 components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Prepare the EVA for the VLS12000 or VLS300 Gateway . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
EVA design considerations (VLS12000 or VLS300 Gateway) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Setting the bar code. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Check connectivity and performance with L&TT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
HP StorageWorks 1000i Virtual Library System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Important concepts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Internet SCSI (iSCSI) protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Disk-to-Disk-to-Tape (D2D2T) backup capabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Redundant Array of Independent Disks (RAID) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Emulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Retention planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Setting the bar code. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Check connectivity and performance with L&TT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
HP StorageWorks D2D Backup Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Features and benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
HP dynamic deduplication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
How it works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
The HP approach to deduplication - D2D and VLS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
HP Dynamic deduplication. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
What deduplication ratio can I expect? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Tape drives and performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Tape drives. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
WORM technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
File (data) compression ratio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Ultrium performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
HP StorageWorks Interface Manager and Command View TL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Library partitioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Mixed media support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Command View TL/Secure Manager mapping algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Mapping requirements lead to rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Interface Manager modes: automatic and manual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Basic vs. Advanced Secure Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Interface Manager discovery. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79

Secure Manager mapping rules. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Single Fibre Channel port example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Basic Secure Manager and manual mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Interface Manager card problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Fibre Channel interface controller and Network Storage Router . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Common configuration settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Indexed maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Port 0 device maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Auto assigned maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
SCC maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Network Storage Routers have limited initiators for single- and dual-port routers . . . . . . . . . . . . . . . . . 95
Fibre Channel switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
HP StorageWorks 4/8 SAN Switch and HP StorageWorks 4/16 SAN Switch—file system full resolution . . . 97
EBS and the multi-protocol router . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Fibre Channel host bus adapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
HBAs and performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Third-party Fibre Channel HBAs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
HP StorageWorks 3Gb SAS BL Switch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Important tips . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
RAID array storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
RAID arrays and performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
Third-party RAID array storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
EBS power on sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
Using HP StorageWorks Library and Tape Tools (L&TT) to verify disk system data performance . . . . . . . . 101

3 Zoning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Emulated private loop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Increased security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Optimized resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Customized environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
Zoning components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
EBS zoning recommendations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104

4 Configuration and operating system details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105


Basic storage domain configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
Nearline configuration information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Setting up routers in the SAN environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
About Active Fabric and SCC LUNs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Configuring the router for systems without the Interface Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Rogue applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
HP-UX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Initial requirement (HP-UX 11.23 on IA-64 and PA-RISC). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
Initial requirement (HP-UX 11.31 on IA-64 and PA-RISC). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
HP-UX 11.31 can experience poor I/O performance on VxFS file systems due to memory blocking during high
system memory usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Poor I/O performance resolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
HP-UX 11.23 and HP-UX 11.31 large LUNs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
HP-UX 11.23 EMS tape polling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
HP-UX 11.23: Disabling rewind-on-close devices with st_san_safe. . . . . . . . . . . . . . . . . . . . . . . . 114
Configuring the SAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
Final host configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
Installation checklist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Windows Server and Windows Storage Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Configuring the SAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Installing the HBA device driver (Windows Server 2008/2003) . . . . . . . . . . . . . . . . . . . . . . . . . 116
Storport considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Installation checklist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Windows 2003 known issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Interop issues with Microsoft Windows persistent binding for tape LUNs . . . . . . . . . . . . . . . . . . . 118



Tape drive polling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Disabling RSM polling for LTO tape driver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Disabling RSM polling for SDLT tape driver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
SCSIport driver issue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Library slot count/Max Scatter Gather List issue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Tape.sys block size issue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Removable Storage Manager issue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Emulex SCSIport driver issue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
FC interface controller device driver issue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
Not Enough Server Storage is Available to Process this Command—network issue . . . . . . . . . . . . 121
Updating the HP Insight Management Agents for Microsoft Windows using the ProLiant Support Pack
Version 7.70 (or later). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
NAS and ProLiant Storage Server devices using Microsoft Windows Storage Server 2003 . . . . . . . . . . . 123
Known issues with NAS and ProLiant Storage Servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Tru64 UNIX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
Backup software patch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
Configuring the SAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
Confirming mapped components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Installed and configured host bus adapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Visible target devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Configuring switch zoning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Installation checklist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Red Hat and SUSE Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Operating system notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Installing HBA drivers and tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Additional SG device files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
Installation checklist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
Linux known issues. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Rewind commands being issued by rebooted Linux hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Tape devices not discovered and configured across server reboots . . . . . . . . . . . . . . . . . . . . . . . . . 129
Sparse files causing long backup times with some backup applications . . . . . . . . . . . . . . . . . . . . . . 129
Novell NetWare . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
NetWare environment considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
Heterogeneous Windows and NetWare environment limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
FCA2214 configuration settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
Configuring switch zoning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Installation checklist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Sun Solaris . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
Configuring the SAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
Sun native driver configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
Troubleshooting with the cfgadm utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
QLogic driver configuration for QLA2340 and QLA2342 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
Emulex driver configuration for LP10000 and LP10000DC . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Configuring Sun Servers for tape devices on SAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
Configuring switch zoning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
Installation checklist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
Solaris known issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
ESL E-Series or EML E-Series library power cycle. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
HP LTO3 tape drive not recognized when MPxIO is enabled on Solaris 10 . . . . . . . . . . . . . . . . . 138
IBM AIX. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
Configuring the SAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
IBM 6228, 6239, 5716, or 5759 HBA configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
Configuring switch zoning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
Installation checklist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
Installing backup software and patches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141

5 Backup and recovery of Virtual Machines. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143


HP StorageWorks EBS VMware backup and recovery strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
HP Integrity Virtual Machines (Integrity VM) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144

6 Management tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
HP Storage Essentials Storage Resource Management Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
Features and benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
HP Systems Insight Manager (HP SIM). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
Features:. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
Key benefits: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
Management agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Known issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Recommendations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147

7 Encryption in an EBS environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149


Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
EBS support and encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
FIPS compliance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
Compression and encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
Deduplication and encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149

8 High availability. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151


Disk array multi-path (MPIO) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
Highlights . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
Benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
EBS Support of failover versus non-failover applications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
EBS clustering with failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
EBS clustering with no failover support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
HP-UX MC/ServiceGuard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159

9 Performance: Finding bottlenecks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161


Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
Process for identifying bottlenecks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
Test environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
Enterprise storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
Performance tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
1. Evaluate the tape subsystem’s WRITE performance. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
Tape WRITE performance observations in the example SAN test environment.. . . . . . . . . . . . . 164
2. Evaluate the tape subsystem’s READ performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
Tape READ performance observations in the example SAN test environment . . . . . . . . . . . . . . 165
3. Evaluate the disk subsystem’s WRITE performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
Disk WRITE performance observations in the example SAN test environment. . . . . . . . . . . . . . 166
4. Evaluate the disk subsystem’s READ performance. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
Disk READ performance observations in the example SAN test environment . . . . . . . . . . . . . . 168
5. Evaluate the backup and restore application’s effect on disk and tape performance . . . . . . . . . . . . 169
Backup application performance observations in the example SAN test environment. . . . . . . . . . . 169
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
Suggestions for improving the tape subsystem's performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
Suggestions for improving disk performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
Suggestions for improving the data protection application performance . . . . . . . . . . . . . . . . . . . . . . 170
Other factors to consider . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
For more performance information: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171

10 Library and Tape Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173


Tape drive troubleshooting using L&TT benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
Tape drive performance questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
Good maintenance comes first . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
Troubleshooting with L&TT starts here . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
Is your drive connected correctly? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
Is the drive working as expected? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
Is the firmware up to date? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
Is the drive performing as expected? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175



Is the media in good condition?. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
Basic L&TT operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
Installing L&TT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
Running L&TT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
Checking and updating FW revision . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
Checking installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
Checking drive health . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
Checking performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
Checking media . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
Sending a support ticket to HP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
HP-UX, Tru64, or Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
Uninstalling previous versions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
Installing the latest version . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
If you have trouble using L&TT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
HP technical support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
More information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181

About this guide
This guide provides information to design an HP StorageWorks Enterprise Backup Solution (EBS).

Intended audience
This guide is intended for system administrators implementing an EBS who have experience with the
following:
• Tape backup technologies, tape libraries, and backup software
• SAN environments
• Fibre Channel

Prerequisites
Before installing the EBS hardware, consider the items below:
• Review the HP Enterprise Backup Solutions Compatibility Matrix located at http://www.hp.com/go/ebs
to ensure that the components selected are listed
• Knowledge of the operating system(s)
• Knowledge of the EBS hardware components listed in Chapter 1
• Knowledge of switch zoning and selective storage presentation

Related documentation
In addition to this guide, HP provides the following related documentation:
• Implementation matrix for supported backup applications
• Installation guides for EBS hardware components

Document conventions and symbols


Table 1 Document conventions

Convention                            Element

Medium blue text: Figure 1            Cross-reference links and e-mail addresses

Medium blue, underlined text          Website addresses
(http://www.hp.com)

Bold font                             • Key names
                                      • Text typed into a GUI element, such as into a box
                                      • GUI elements that are clicked or selected, such as menu and
                                        list items, buttons, and check boxes

Italics font                          Text emphasis

Monospace font                        • File and directory names
                                      • System output
                                      • Code
                                      • Text typed at the command line

Monospace, italic font                • Code variables
                                      • Command-line variables

Monospace, bold font                  Emphasis of file and directory names, system output, code, and
                                      text typed at the command line



WARNING! Indicates that failure to follow directions could result in bodily harm or death.

CAUTION: Indicates that failure to follow directions could result in damage to equipment or data.

IMPORTANT: Provides clarifying information or specific instructions.

NOTE: Provides additional information.

TIP: Provides helpful hints and shortcuts.

Rack stability

WARNING! To reduce the risk of personal injury or damage to equipment:


• Extend leveling jacks to the floor.
• Ensure that the full weight of the rack rests on the leveling jacks.
• Install stabilizing feet on the rack.
• In multiple-rack installations, secure racks together.
• Extend only one rack component at a time. Racks may become unstable if more than one component is
extended.

HP technical support
Telephone numbers for worldwide technical support are listed on the HP support website:
http://www.hp.com/support/.
Collect the following information before calling:
• Technical support registration number (if applicable)
• Product serial numbers
• Product model names and numbers
• Applicable error messages
• Operating system type and revision level
• Detailed, specific questions
For continuous quality improvement, calls may be recorded or monitored.
HP strongly recommends that customers sign up online using the Subscriber's Choice website:
http://www.hp.com/go/e-updates.
• Subscribing to this service provides e-mail updates of the latest product enhancements, drivers, and
firmware documentation updates.
• Signing up allows easy access to your products by selecting Business support and then Storage under
Product Category.

HP-authorized reseller
For the name of the nearest HP-authorized reseller:
• In the United States, call 1-800-345-1518.
• Elsewhere, see the HP website: http://www.hp.com. Click Contact HP to find locations and telephone
numbers.

Helpful websites
For support and product information, see the following HP websites:
• http://www.hp.com
• http://www.hp.com/go/storage
• http://www.hp.com/support/
• http://www.docs.hp.com
• http://www.hp.com/support/tapetools
• http://www.hp.com/support/manuals

1 Overview
The HP StorageWorks Enterprise Backup Solution (EBS) is an integration of data protection software and
industry-standard hardware, providing a complete enterprise class solution. HP has joined with leading
software companies to provide software solutions that support the backup and restore processes of
homogeneous and heterogeneous operating systems in a shared storage environment.
EBS software partners' data protection solutions incorporate database protection, storage management
agents, and options for highly specialized networking environments.
Data protection software focuses on data backup and restore using an automated LTO Ultrium, SuperDLT,
or DLT tape library and Fibre Channel technology. The EBS combines the functionality and management of
Fibre Channel Storage Area Network (SAN), data protection software, and scaling tools to integrate tape
and disk storage subsystems in the same SAN environment.
Enterprise data backup and restore can be accomplished with different tape devices in various
configurations, using a variety of transport methods such as the corporate communication network, a
server SCSI bus, or a Fibre Channel infrastructure. EBS uses a storage area network (SAN) that provides
dedicated bandwidth independent of the local area network (LAN). This independence allows single or
multiple backup or restore jobs to run without placing data protection traffic on the LAN.
Depending on the backup software used, submitted jobs are run locally on the backup server to which the
job was submitted. Data, however, is sent over the SAN to the tape library rather than over the LAN. This
achieves greater speed and reduces network traffic. Jobs and devices can be managed and viewed from
the primary server or from any server or client connected within the EBS that has the supported data
protection software installed. All servers within the EBS server group can display the same devices.
To implement an Enterprise Backup Solution:
1. Consult the HP Enterprise Backup Solutions Compatibility Matrix available at:
http://www.hp.com/go/ebs.
2. Consult the EBS design guide for EBS hardware configurations currently supported and how to
efficiently provide shared tape library backup in a heterogeneous SAN environment.
3. Install and configure the backup application or backup software. Recommendations for individual
backup applications and software may be found in separate implementation guides.
For more information about EBS, go to http://www.hp.com/go/ebs.

Supported components
For complete EBS configuration support information, refer to the HP Enterprise Backup Solutions
Compatibility Matrix located at: http://www.hp.com/go/ebs.

Supported topologies
A Fibre Channel SAN supports several network topologies, including point-to-point and switched fabric.
These configurations are constructed using switches and routers.
Serial-attach SCSI (SAS)
The Serial-attach SCSI (SAS) interface is the successor technology to the parallel SCSI interface, designed
to bridge the gap in performance, scalability, and affordability. SAS combines high-end features from Fibre
Channel (such as multi-initiator support and full-duplex communication) and the physical interface
leveraged from SATA (for better compatibility and investment protection), with the performance, reliability,
and ease-of-use of traditional SCSI technology.
Direct-attach SCSI
Direct-attach SCSI (DAS) is the most common form of attachment to both disk and tape drives. Direct-attach
SCSI allows a single server to communicate directly to the given target device over a SCSI cable. These
configurations do not allow for multi-hosting a single target device, because the target device is dedicated
to the server. These configurations are not covered in this document.



Direct-attach SAS
Direct-attach SAS (Serial-attach SCSI) is another form of attachment to both disk and tape drives using the
SAS interface. This allows a single server to communicate directly to the given target device over a SAS
cable. SAS configurations also do not allow for multihosting a single target device because the target
device is dedicated to the server. Direct-attach SAS configurations are not covered in this document.
Point-to-point
Point-to-point, or direct-attach fibre (DAF), connections are direct Fibre Channel connections made between
two nodes, such as a server and an attached tape library. This configuration requires no switch to
implement. It is very similar to a SCSI bus model, in that the storage devices are dedicated to a server.
Switched fabric
A switched fabric topology allows nodes to talk directly to each other through temporarily established
direct connections. This provides simultaneous dedicated bandwidth for each communication between
nodes.
Because of this, switched fabric topologies provide significantly more performance and scalability than
arbitrated loop topologies.
Also, switched fabric topologies are not susceptible to I/O interruptions caused by errors, resets, or power
failures on third-party nodes. Because communications are established directly between nodes,
interruption events are isolated by the fabric environment.
Finally, because many nodes never need to communicate with each other, such as between two hosts,
interoperability issues are significantly reduced in a fabric topology as compared to loops. Nodes need
only interoperate with the switch and the target node instead of every node on the loop or fabric.
Switched fabric configurations are implemented with Fibre Channel switches. Switches may be cascaded
or meshed together to form larger fabrics.

NOTE: See Figure 49, Figure 50, and Figure 51 for an example of basic switched fabric, point-to-point,
and direct-attached SCSI configurations.

Platform and operating system support


Library sharing in a heterogeneous environment is supported. All platforms may be connected through one
or more switches to a tape library. The switches do not need to be separated by operating system type, nor
do they have to be configured with separate zones for each operating system.
The host server needs to detect all of the tape and robotic devices intended to be used; shared access to
tape drives is handled by the backup application software running on each host.
While some operating systems found in enterprise data centers may not be supported on the storage
network by EBS, these servers can still be backed up as supported LAN clients.
See Figure 47 for a diagram that includes LAN client connections. See the ISV compatibility matrix for
more information.
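
As an illustration only, the following host-side checks show one way to confirm that the tape drives and
robotic devices are detected by the operating system; this is a minimal sketch for HP-UX and Linux, and the
device file names are examples that will differ per installation:

    # HP-UX: list the tape devices the host has claimed
    ioscan -fnC tape

    # Linux: list the SCSI targets the host can see, including tape
    # drives and the library medium changer (robot)
    cat /proc/scsi/scsi

    # Query the status of an individual tape drive (example device file)
    mt -f /dev/st0 status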

Use of native backup programs and commands


A limited number of backup programs and commands that are native to a particular operating system are
verified for basic functionality with SCSI direct-attached tape drives and autoloaders only. Tape libraries
and virtual library systems are not tested. These programs and commands are limited in their ability to
handle complicated backups and restores in multi-host storage area networks (SANs). They are not
guaranteed to provide robust error handling or performance throughput. Using these programs and/or
commands in a user-developed script with tape libraries in an Enterprise Backup Solution shared storage
environment is not recommended. Refer to the HP Enterprise Backup Solutions Compatibility
Matrix at http://www.hp.com/go/ebs for a list of tested and supported applications that are specifically
designed for backup and restore operations.

The following table shows the native utilities tested on each Operating System:

Utilities supported                  HP-UX    Linux    Windows

Tape drive commands
  Tar                                Yes      Yes      No
  DD (dump)                          Yes      Yes      No
  Pax                                Yes      Yes      No
  Mt                                 Yes      Yes      No
  Make tape recovery (BFT)           Yes      No       No
  NT backup                          No       No       Yes

Library and autochanger commands
  Mc                                 Yes      No       No
  Mtx                                No       Yes      No
  RSM                                No       No       Yes

HP-UX 11.11 or higher, Linux Red Hat EL 2.1 or higher, Linux SUSE SLES 8 or higher, and Windows 2000
Server or higher are tested. Current tape drive and autoloader (library) drivers are located at
http://www.hp.com.
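
For illustration of the basic, direct-attach usage these native utilities are verified for (and not as a substitute
for a supported backup application in a shared EBS environment), the following Linux sequence writes and
verifies a small tar archive on a standalone drive. The device file /dev/st0 and the directory /data are
examples only and will differ per system:

    # Rewind the tape and confirm the drive is ready
    mt -f /dev/st0 rewind
    mt -f /dev/st0 status

    # Write a directory to tape with tar
    tar -cvf /dev/st0 /data

    # Rewind and list the archive contents to verify the write
    mt -f /dev/st0 rewind
    tar -tvf /dev/st0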

2 Hardware setup
Components
Table 2 provides a description of the key components comprising a SAN backup solution.

NOTE: For a complete listing of supported servers and hardware, refer to the HP Enterprise Backup
Solutions Compatibility Matrix at http://www.hp.com/go/ebs.

For more information on HP StorageWorks libraries, see http://www.hp.com/go/tape.

Table 2 SAN backup components

Component                            Description

Host bus adapter                     Host bus adapters (HBAs) are used to connect servers to Fibre
                                     Channel topologies. They provide a similar function to SCSI host
                                     bus adapters or network interface cards (NICs).
                                     The device driver for an HBA is typically responsible for providing
                                     support for any of the Fibre Channel topologies—point-to-point,
                                     loop, or fabric. In most cases, the device driver also provides a
                                     translation function, presenting Fibre Channel targets as SCSI
                                     devices to the operating system. This provides compatibility with
                                     existing storage applications and file systems that were developed
                                     for SCSI devices.

Switch                               Switches are the Fibre Channel infrastructure component used to
                                     construct fabrics. Switches may be cascaded together to configure
                                     larger fabrics.
                                     Switches typically have an Ethernet port for managing them over
                                     the network. This port provides status and configuration for the
                                     switch and individual ports.

Tape library/VLS                     The tape library or VLS provides the nearline storage for backup on
                                     the SAN. The tape library provides automated tape handling, which
                                     becomes a key requirement when consolidating backup across
                                     multiple servers.

Fibre Channel interface controller   The controller (also referred to as a bridge or router) provides the
                                     connection between Fibre Channel networks and SCSI tape and
                                     robotic devices. This device is similar to a Fibre Channel disk
                                     controller for RAID subsystems. The controller acts as an interface
                                     to the SCSI device, and can send or receive SCSI commands through
                                     encapsulated Fibre Channel frames.

Fibre Channel interface manager      The Interface Manager card, in conjunction with HP StorageWorks
                                     Command View TL software, provides remote management of the
                                     library via a serial, telnet, or web-based GUI interface.

Cables and SFPs                      Three types of cables exist to connect Fibre Channel devices
                                     together—copper cables, short-wave or multi-mode optical cables,
                                     and long-wave or single-mode optical cables. Each type of cable
                                     provides different maximum lengths, as well as cost.
                                     Fibre Channel devices have ports which either require a specific
                                     type of cable, or require a separate module referred to as an SFP
                                     (Small Form-factor Pluggable). An SFP-based port allows the
                                     customer to use any type of cable by using the appropriate type of
                                     SFP with it. For example, Fibre Channel ports use fibre-optic SFP
                                     modules with LC connectors.

Data protection software             Data protection software is deployed on each of the hosts on a SAN
                                     that will perform backup. This typically requires installing
                                     server-type licenses and software on each of these hosts. Many of
                                     these backup applications also provide a separate module or
                                     option, which enables software to manage shared access to the tape
                                     drives on a SAN. This may need to be purchased in addition to the
                                     typical software licenses.

SAN management software              SAN management software is used to manage resources, security, and
                                     functionality on a SAN. This can be integrated with host-based
                                     device management utilities or embedded management functionality
                                     such as switch Ethernet ports.

HP StorageWorks Library and          L&TT is a robust diagnostic tool for tape storage and
Tape Tools (L&TT)                    magneto-optical storage products. L&TT provides functionality for
                                     firmware downloads, verification of device operation, maintenance
                                     procedures, failure analysis, corrective service actions, and a
                                     range of utility functions.
                                     Performance tools assist in troubleshooting backup and restore
                                     issues in the overall system. L&TT also provides seamless
                                     integration with HP's hardware support organization by generating
                                     and e-mailing support tickets. It is ideal for customers who want
                                     to verify their installation, ensure product reliability, perform
                                     self-diagnostics, and obtain faster resolution of tape device
                                     issues.
                                     Ensure that L&TT is installed on the backup servers and is ready to
                                     use, should there be a need to contact HP support.

HP StorageWorks Secure Key Manager
The HP StorageWorks Secure Key Manager reduces your risk of a costly data breach and reputation
damage while improving regulatory compliance with a secure centralized encryption key management
solution for HP LTO4 enterprise tape libraries. The Secure Key Manager automates key generation and
management based on security policies for multiple libraries. This occurs transparently to ISV backup
applications. The Secure Key Manager is a hardened server appliance delivering secure identity-based
access, administration, and logging with strong auditable security designed to meet the rigorous Federal
Information Processing Standard (FIPS) 140-2 security standards. Additionally, the Secure Key Manager
provides reliable lifetime key archival with automatic multi-site key replication, high availability clustering
and failover capabilities.
The HP StorageWorks Secure Key Manager provides centralized key management for HP StorageWorks
Enterprise Storage Libraries (ESL) E-Series Tape Libraries and HP StorageWorks Enterprise Modular Library
(EML) E-Series Tape Libraries. In addition to the clustering capability, the Secure Key Manager provides
comprehensive backup and restore functionality for keys, as well as redundant device components and
active alerts. The Secure Key Manager supports policy granularity ranging from a key per library partition
to a key per tape cartridge while featuring an open, extensible architecture for emerging standards that
allows additional client types needing key management services to be added in the future. These clients may include
other storage devices, switches, operating systems and applications. Keep your confidential data secure
yet highly available with automated single point of management for your encryption keys using the HP
Secure Key Manager, a member of the "HP Secure Advantage" portfolio.

The Secure Key Manager features

• Reduce risk of a data breach—Mitigate your risk of data exposure. Keep your tape-encrypted data private and protect your company’s reputation with HP Secure Key Manager while improving regulatory compliance and avoiding the financial consequences of a breach. Avoid situations requiring disclosure of unauthorized access to unencrypted private information.
• Centralized with automatic policy-based key generation—HP Secure Key Manager reduces the complexity of managing encryption keys across a distributed infrastructure with a single point of management. Independent of tape drive count, multiple ESL/EML LTO4 tape libraries are supported per node, which further boosts investment protection. Only network connectivity is required.
• Transparent to ISV applications—Minimize impact to existing backup and recovery processes. Key management and data encryption occur transparently to the backup application. The data can be decrypted on an HP Secure Key Manager library client that has permission to access the key. Check the EBS matrix for ISV support of the LTO-4 drive.
• Extensible to emerging open standards—The HP Secure Key Manager architecture and plans support future encryption clients beyond HP ESL and EML tape libraries. It is the platform HP is using to build infrastructure-wide centralized key management for information protection across the enterprise.
• Hardened server appliance—The HP Secure Key Manager features a closed Linux kernel, a dual locking bezel with durable pick-resistant locks, and tamper-evident enclosure seals to provide platform security substantially beyond a general-purpose server key repository.
• Secure identity-based access, administration, and digitally signed logs—The HP Secure Key Manager also provides a trusted infrastructure for enforcement of internal security policies and controls, and a trusted audit trail of encryption and key management activities as evidence for compliance and audit verifications.
• Designed for FIPS 140-2 security standards validation—The HP Secure Key Manager is appropriate for stringent cryptographic installations and supports AES-256 key generation. FIPS 140-2 Level 2 validation is pending. The Federal Information Processing Standard (FIPS) Publication 140-2 is a U.S. government standard used to validate cryptographic modules.
• Automatic multi-site key replication and failover—High availability and reliability are paramount because keys must be retained for the life of the data, which may be decades. The HP Secure Key Manager delivers high availability of archived keys for same-site or multi-site coverage. Key replication and failover occur automatically in a clustered configuration.
• Comprehensive backup and restore functionality for keys—For more availability options, the HP Secure Key Manager can automatically generate additional copies of the keys, policies, certificates, and configuration, even in a clustered installation.
• Redundant device components and active alerts—For improved overall reliability, the HP Secure Key Manager has redundant dual fans, power supplies, and disk drives (RAID 1 mirroring), along with active alerts and health checks to maximize uptime.

Configuration preparation
To prepare to configure the system, have ready all information listed on the pre-install survey. This
information was gathered by your site Security Officer and the HP installation team before the system was
shipped; if it has been lost, obtain the form from the appendix of the HP StorageWorks Secure Key
Manager users guide and complete it. If portions of this information are inaccurate or unknown, the
installation will be incomplete and data encryption cannot occur.
See the HP StorageWorks Secure Key Manager installation and replacement guide for complete details on
the configuration and installation of the SKM. Also, check the EBS Compatibility Matrix at
http://www.hp.com/go/ebs for compatibility and any additional notes when using the SKM with data
protection application software.

Getting help
If you cannot find the information that you need in this overview, there are several other resources that you
can use to get more detailed information.
• The HP StorageWorks Secure Key Manager users guide on the documentation CD
• The HP website, http://www.hp.com
• Your nearest HP authorized reseller (locations and telephone numbers of these resellers are given on the
HP website)
• HP technical support telephone numbers:
• In North America, 1–800–633–3600
• For other regions, telephone numbers are given on the HP website.

HP StorageWorks ESL E-Series tape libraries
The HP StorageWorks ESL E-Series enterprise tape library scales up to 712 LTO or 630 SDLT cartridge slots
in a single library frame, and up to 3546 LTO slots or 3138 SDLT slots in a multi-frame library. Offered with
Ultrium 1840, Ultrium 960, Ultrium 460, SDLT 600, and SDLT 320 tape technologies, the ESL E-Series
offers storage density of up to 56.8 terabytes per square foot of floor space.
Each single library frame may contain up to 24 tape drives (four drives per drive cluster) as shown in
Figure 1. Each library frame must contain at least one drive cluster. In a dual frame library, where two
frames are joined into a single library using the Cross-Link Mechanism, the first frame may contain up to
20 drives, and the second frame may contain up to 24 drives. In libraries with more than two frames, see
Figure 4 for details on numbers of drives and slots supported.

Figure 1 Drive cluster numbering



A drawing showing the cluster numbering and drive positioning is located inside the rear door. See
Figure 2. Drive clusters are numbered starting at the top of the cabinet beginning with 0.

Figure 2 Cluster and drive numbering

SCSI or FC connections

NOTE: Many of the figures in this chapter show a library with SCSI drives and e2400-160 interface
controllers. Your library may look different and could exclude the e1200–160 robotics controller card,
include Ultrium 460-FC and 960-FC drives, and include e2400-FC 2G and 4G interface controllers.

Setting the bar code length
To change the bar code length:
1. Access the front panel display and select Menu.
2. Select Setup.
3. Enter the password.
4. Scroll down to bar code length and enter the desired length. The default length is 6.

Creating multi-unit ESL E-Series tape libraries using the Cross Link Kit
HP StorageWorks ESL Cross Link Kit connects up to five 712e or 630e tape libraries together as a single
tape library to scale up to 3546 LTO slots (1418 TB) or 3138 SDLT slots (941 TB) and up to 44 tape drives.
The ESL E-Series Cross Link Kit requires specific software and firmware supplied on CD with the Cross Link
Kit. Software licenses are required only on the first or primary library frame.
Small count tape libraries (322e, 286e) are only supported if upgraded to a fully populated tape library,
and only if they are the first or primary tape library.
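
As a rough cross-check of the capacity figures quoted above, the totals are consistent with native (uncompressed) cartridge capacities of about 0.4 TB for LTO-3 and 0.3 TB for SDLT 600; these per-cartridge values are an assumption used only for illustration, not taken from this guide.

    # Worked example (Python): slot counts come from the text above; the
    # per-cartridge native capacities are assumed values for illustration.
    lto_slots, sdlt_slots = 3546, 3138
    print(round(lto_slots * 0.4))    # ~1418 TB for a fully cross-linked LTO library
    print(round(sdlt_slots * 0.3))   # ~941 TB for a fully cross-linked SDLT library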

NOTE: The Cross Link Kit requires removal of one cluster of either tape drives or slots along the back wall
of the first ESL cabinet. In an LTO library, the back wall clusters hold 14 slots. In an SDLT library, the back
wall clusters hold 12 slots.

Figure 3 ESL E-Series tape libraries using the Cross Link Kit



Figure 4 Multi-frame (Cross Link) ESL E-Series guidelines

Check connectivity and performance with L&TT


• Install L&TT on the server and run it to scan and find the tape devices. See “Library and Tape Tools” on
page 173 for more information.
• Check to ensure that the topology is as expected. L&TT will show the connectivity and topology for each
host HBA and tape device connected.
• Run the L&TT Device Performance test on each device to verify that the host to device data path is
configured for full speed operation. This step requires a write-enabled tape cartridge.

NOTE: HP recommends leaving L&TT installed on the host so it is readily available if there is a need to run
diagnostics if directed by HP support at a later date. It has no background services and no impact on the
host or other applications.

HP StorageWorks EML E-series tape libraries
The HP StorageWorks EML E-series library is available in two models—the 103e, which is a 12U
rack-mounted design containing up to four tape drives and 103 LTO tape slots, and the 245e, which is a
24U rack-mounted design holding up to eight drives and 245 LTO slots. Upgrade kits are available to fill
out a 42U rack with a configuration that can contain up to 442 LTO slots and 16 tape drives.
The Fibre-Channel LTO drives are connected to a SAN fabric via e2400-FC 2G/4G interface controllers,
or directly to the SAN in the case of Ultrium 1840 LTO4 drives, which are in turn managed by the HP
StorageWorks Interface Manager card. HP StorageWorks Command View TL is used to communicate with
the Interface Manager via web browser for configuration.

Figure 5 Enterprise Modular Library, model 103e


1 Base module
2 Robotics unit (parked)
3 Viewing window
4 Operator control panel
5 Load port (5-cartridge capacity)
6 Redundant module power supply (optional)
7 Module primary power supply
8 Customer reserved space (2U)
9 Library main power switch
10 Library fans
11 Base module card cage
12 Tape drives
13 Cable management features
14 Extension bars (power strips)
15 Power distribution units




Figure 6 Enterprise Modular Library, model 245e

1 See model 103e (Figure 5)
2 Tape drive expansion module
3 Card cage expansion module
4 Viewing window
5 Load port (10-cartridge capacity)
6 Module primary power supply
7 Redundant module power supply (optional)
8 Tape drives
9 Cable management features
10 Card cage module fans
11 Redundant card cage power supplies


Figure 7 Example of a fully expanded Enterprise Modular Library

1 Tape drive expansion module
2 Capacity expansion module



Figure 8 Example of front view of the Enterprise Modular Library

1 Switch for the internal network of the LTO4 tape drive
2 Base module
3 Tape drive expansion module
4 Card cage expansion module
5 Capacity expansion module
6 Robotics unit
7 Viewing windows
8 Operator control panel (OCP)
9 5-cartridge load port
10 4U blank covers
11 10-cartridge load ports


Figure 9 Example of front view of the Enterprise Modular Library

1 Reserved space
2 Switch for the internal network of the LTO4 tape drive
3 Base module
4 Tape drive expansion module
5 Card cage expansion module
6 Capacity expansion module
7 Main power switches
8 Base module card cage (e2400-FC 2Gb interface controller shown)
9 Tape drives (LTO3 tape drives shown)
10 Cable management features
11 Fans
12 Power supplies
13 Power strips
14 Power distribution unit (PDU)



Cabling and configuration

Figure 10 EML cabling


Figure 10 shows the back of an EML 103e library with the robotic controller card at the top, the Interface
Manager (IM) card just below, the interface controller (IC) card, and the tape drive enclosures at the
bottom. The tape drives are connected to the interface controller via fiber optic cable. The top-most drive
(drive 0) should be connected to the tape drive 0 (TD 0) port on the IC, with the rest following in sequence.
The Ethernet port on each IC is connected to one of the FC controller ports on the IM card. For better organization,
connect the ICs to the IM Ethernet ports in sequential order, although this is not required.
The IM is connected to the robotic controller via Ethernet from the IM Cascade port to the Public port on the
robotic controller card.
After cabling the library as described above, the library can be powered on. When the initialization
process is complete, do the following:
1. Access the library via the front touch panel to configure the network settings.
2. Select the Configuration tab.
3. Select Library Configuration and then Change Network Settings.
4. Enter the library administration password.
5. Choose either DHCP or Manual in the Address Config field. If Manual was selected, touch the IP
address and change the address to the desired setting. Repeat this for the subnet mask, gateway, and
DNS server, if applicable.
6. When complete, click Save. The EML is now accessible via the Command View TL browser interface for
further configuration and monitoring.

Setting the bar code front panel display and host reporting configuration
Bar code reporting on HP StorageWorks EML E-Series tape libraries can be configured as six to eight
characters, left or right aligned. If six characters with left alignment is chosen, any characters after the
sixth are truncated. With six characters and right alignment, only the last six characters are shown with the
beginning characters truncated.
The LTO labels have L1, L2, L3, and L4 as the media identifiers for the respective LTO1, LTO2, LTO3, and
LTO4 cartridges. All cleaning cartridges should use the format CLNxxxL1 type of label, where xxx is a
number between 000 and 999 for all types of LTO drives. WORM tape cartridges have media identifiers of
LT for LTO3 and LU for LTO4. The length and justification of the bar code reporting format, as sent to the
host and as viewed on the front panel, can be configured through the front panel configuration section.
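
As an illustration of the truncation rules described above, the following sketch (Python, not part of the library firmware) shows how an eight-character label would be reported at a six-character setting; the label value is hypothetical.

    def reported_barcode(label, length=6, align="left"):
        # Keep the first <length> characters for left alignment,
        # or the last <length> characters for right alignment.
        return label[:length] if align == "left" else label[-length:]

    print(reported_barcode("ABC123L4", 6, "left"))   # "ABC123" (media identifier truncated)
    print(reported_barcode("ABC123L4", 6, "right"))  # "C123L4" (leading characters truncated)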
To set the bar code front panel display and host reporting configuration on the EML library:
1. Access the EML front panel top display and select the Configuration tab.
2. Select the Library Configuration tab and enter the password (the default password for EML is 112233).
3. Select the Configure bar code reporting format tab.
4. Select Format for Front Panel and configure according to the requirements listed above.
5. Select Format for Host Reporting and configure according to the requirements listed above.

Check connectivity and performance with L&TT


• Install L&TT on the server and run it to scan and find the tape devices. See “Library and Tape Tools” on
page 173 for more information.
• Check to ensure that the topology is as expected. L&TT will show the connectivity and topology for each
host HBA and tape device connected.
• Run the L&TT Device Performance test on each device to verify that the host-to-device data path is
configured for full speed operation. This step requires a write-enabled tape cartridge.

NOTE: HP recommends leaving L&TT installed on the host so it is readily available if there is a need to run
diagnostics if directed by HP support at a later date. It has no background services and no impact on the
host or other applications.



HP StorageWorks MSL5000 and MSL6000 Series tape libraries
The HP StorageWorks MSL5000 and MSL6000 tape libraries provide a high-performance backup and
restore solution utilizing Ultrium LTO or SDLT tape drives in compact 5U or 10U form factors. This solution
can be used for either SAN or direct-SCSI-attached environments and provides scalability for up to 16 tape
drives and 240 slots. The tape libraries are accessed through an intuitive GUI control panel or an
integrated remote web interface for both on-site and remote management.

Connecting the MSL5000/6000 tape library

WARNING! Make sure the power to each component is off and the power cords are unplugged before
making any connections.

NOTE: Read the documentation included with each component for additional operating instructions
before installing.

MSL5000/6000 Series library with Fibre Channel routers


Connect the library to the router using SCSI interface cables. One router supports up to two drives and one
robotic device. See Figure 11 for cabling the two-drive and four-drive libraries.
For more detailed instructions, refer to the HP StorageWorks MSL5000 and 6000 Series Tape Library user
guide for interface cable specifications.

Figure 11 Two- and four-drive libraries


1 SCSI cable
2 Terminator
3 Fibre cable

MSL5000/6000 Series library with direct-attached SCSI
Figure 12 shows a typical SCSI cable configuration for a library with two tape drives installed using
multi-host systems or multiple SCSI HBAs.

Figure 12 MSL6030/MSL6026 library SCSI cable configuration (two tape drives)

1 SCSI terminator
2 0.5 m cable
3 Host cable (Bus 1, to host system)
4 Host cable (Bus 0, to host system)

Four tape drives


Figure 13 shows a typical SCSI cable configuration for a library with four tape drives installed using
multi-host systems or multiple SCSI HBAs.

Figure 13 MSL6060/MSL6052 library SCSI cable configuration (four tape drives)


1 SCSI terminator
2 0.5 m cable
3 Host cable (Bus 1, to host system)
4 Host cable (Bus 3, to host system)
5 Host cable (Bus 2, to host system)
6 Host cable (Bus 0, to host system)



NOTE: Daisy-chaining Ultrium 460, Ultrium 960, and SDLT 600 drives is not recommended due to
degraded performance. However, two drives per bus is an acceptable configuration for SDLT 1 and 2, as
well as for Ultrium 230.

Setting up serial port communications


Before supplying power to the interface card, HP recommends setting up serial port communications with
your host computer, unless serial I/O was previously established and is currently running.
The interface card is designed to communicate with a terminal or any operating system utilizing a terminal
emulator. For example, most Windows operating systems can use a terminal. Be sure the baud rate, data
bits, stop bits, parity, and flow control are set correctly.
To set up serial communications with the interface card:
1. Plug the serial cable into one of the host computer’s serial ports (COM1 or COM2), and then plug the
other end of the serial cable into the interface card’s serial port.
2. Start the terminal emulator.
3. Set the terminal emulator to use the appropriate COM port.
4. Specify the following settings for the port:

• Baud rate: 9600, 19200, 38400, 57600, or 115200 (Autobaud only recognizes these baud rates)
• Data bits: 8
• Stop bits: 1
• Parity: None
• Flow control: None or XON/XOFF

NOTE: Before initially applying power to the library, make sure all the FC devices are powered on first,
and that they have finished performing individual self tests. This helps to ensure that device discovery works
correctly.

5. Apply power to the tape library. The power-on process can take up to 90 seconds. Once complete, the
main menu should be accessible.
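
Any terminal emulator that supports these settings can be used. Purely as an illustration, the sketch below opens the serial port with the same parameters using the third-party pyserial package; the port name and the wake-up carriage return are assumptions, not requirements of the interface card.

    import serial  # third-party pyserial package

    port = serial.Serial(
        port="COM1",                     # or /dev/ttyS0 on a Linux host
        baudrate=115200,                 # any of the supported autobaud rates
        bytesize=serial.EIGHTBITS,       # data bits: 8
        stopbits=serial.STOPBITS_ONE,    # stop bits: 1
        parity=serial.PARITY_NONE,       # parity: none
        xonxoff=False,                   # or True for XON/XOFF flow control
        timeout=5,
    )
    port.write(b"\r")                              # prompt the interface card CLI
    print(port.read(256).decode(errors="replace"))
    port.close()
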
SCSI bus configuration
The interface card provides the capability to reset SCSI buses during the interface card boot cycle. This
allows the devices on a SCSI bus to be set to a known state. Configuration provides for the SCSI bus reset
feature to be enabled or disabled.
The interface card negotiates for the maximum values for transfer rates and bandwidth on a SCSI bus. If an
attached SCSI device does not allow the full rates, the interface card uses the best rate it can negotiate for
that device. Negotiation is on a device specific basis, so the unit can support a mix of SCSI device types
on the same SCSI bus.
FC port configuration
By default, the configuration of the FC port on the interface card is set to N_Port mode.
FC arbitrated loop addressing
On a Fibre Channel Arbitrated Loop, each device appears as an Arbitrated Loop Physical Address
(AL_PA). To obtain an AL_PA, two addressing methods, called soft and hard addressing, can be used by
the interface card. Soft addressing is the default setting. For hard addressing, the user specifies the AL_PA
of the interface card.

Soft addressing
When acquiring a soft address, the interface card acquires the first available loop address, starting from
address 01 and moving up the list of available AL_PAs from 01 to EF. In this mode, the
interface card obtains an available address automatically and then participates on the FC loop, as long as
there is at least one address available on the loop connected to the interface card. Fibre Channel supports
up to 126 devices on an Arbitrated Loop.
Hard addressing
When acquiring a hard address, the interface card attempts to acquire the AL_PA value specified by the
user in the configuration settings. If the desired address is not available at loop initialization time, the
interface card comes up on the FC loop using an available soft address. This allows both the loop and the
unit to continue to operate. An example of this scenario would be when another device on the Arbitrated
Loop has acquired the same address as that configured on the interface card.
Hard addressing is recommended for FC Arbitrated Loop environments where it is important that the FC
device addresses do not change. Device address changes can affect the mapping represented by the host
operating system to the application, and have adverse effects. An example of this would be a tape library
installation, where the application configuration requires fixed device identification for proper operation.
Hard addressing ensures that the device identification to the application remains constant.
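
The following sketch (Python, illustration only) models the behavior described above: a configured hard address is used if it is free, otherwise the card falls back to the first available soft address. Real loops use only the 126 valid AL_PA values between 01 and EF, which this simplified example does not enumerate exactly.

    def acquire_al_pa(in_use, hard_al_pa=None):
        # Honor the configured hard address when it is not already claimed.
        if hard_al_pa is not None and hard_al_pa not in in_use:
            return hard_al_pa
        # Otherwise fall back to soft addressing: first free address from 01 up.
        for candidate in range(0x01, 0xF0):
            if candidate not in in_use:
                return candidate
        raise RuntimeError("no AL_PA available on the loop")

    # The configured hard address 0x2A is already taken by another device,
    # so the interface card comes up on a soft address instead:
    print(hex(acquire_al_pa({0x01, 0x2A}, hard_al_pa=0x2A)))  # 0x2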
FC switched fabric addressing
When connected to a Fibre Channel switch, the interface card is identified to the switch as a unique device
by the factory programmed World Wide Name (WWN) and the World Wide Port Names (WWPN),
which are derived from the WWN.
Creating a multi-stack unit
The MSL5000 and MSL6000 series libraries can be stacked in a scalable combination with additional
MSL5000 or MSL6000 series libraries to form a multi-unit, rack-mounted configuration. Through use of a
pass-thru mechanism (PTM), all multi-unit libraries in the stack can operate together as a single virtual
library system. Stacked units are interconnected through their rear panel Ethernet connections and an
external Ethernet router mounted to the rack rails, or through an internal Ethernet router installed in a
library expansion slot.
The external Ethernet hub also provides an additional connector when libraries are combined in their
maximum stacked height.
• A maximum of eight libraries can be connected in this manner.
• Any combination of libraries, not exceeding 40U in total stacked height, can also be used.
• A multi-unit library system appears to the host computer system and library control software as a single
library.
• For multi-unit applications, the top library becomes the primary master unit and all other lower libraries
are slave units.

NOTE: The PTM continues to function even if a slave library is physically removed from the rack
configuration during normal library operation.



The library robotics can pick and place tape cartridges into a movable elevator that encompasses the full
length of the PTM. In this manner, individual tapes can be passed up or down between the libraries
contained in the stack. Robotic access to the PTM is located at the rear of the unit, between the tape drives
and the power supply. The PTM drive motor source power is relay-switched from the original primary
master library to the newly assigned secondary master library.

Figure 14 Multi-unit library configuration with the external router

Figure 15 shows how to connect a multi-unit library configuration using an embedded router card.

Figure 15 Multi-unit library configuration with embedded router

Setting the bar code length
1. Access the front panel and select Menu.
2. Select Edit Options.
3. Select Library.
4. Enter password.
5. Scroll down to bar code options and enter the desired values. The default values are as follows:
• bar code label size = 8
• bar code alignment = left Align
• bar code label check digit = Disabled
• bar code reader = Retries enabled

Check connectivity and performance with L&TT


• Install L&TT on the server and run it to scan and find the tape devices. See “Library and Tape Tools” on
page 173 for more information.
• Check to ensure that the topology is as expected. L&TT will show the connectivity and topology for each
host HBA and tape device connected.
• Run the L&TT Device Performance test on each device to verify that the host to device data path is
configured for full speed operation. This step requires a write-enabled tape cartridge.

NOTE: HP recommends leaving L&TT installed on the host so it is readily available if there is a need to run
diagnostics if directed by HP support at a later date. It has no background services and no impact on the
host or other applications.



HP StorageWorks MSL2024, MSL4048, and MSL8096 Series Fibre
Channel (FC) tape libraries
HP StorageWorks MSL2024, MSL4048, and MSL8096 Fibre Channel (FC) tape libraries allow the
installation of the tape library into a SAN environment. They are compatible with most operating systems
and environments that support the FC interface. However, the library requires direct support from the
operating system or a compatible backup application to take full advantage of its many features. To verify
compatibility, see http://www.hp.com/go/ebs.
The HP StorageWorks MSL2024, MSL4048, and MSL8096 Fibre Channel libraries are available in the
following configurations:
MSL2024 tape library
• MSL2024 tape library with one full-height Ultrium LTO Fibre Channel drive
MSL4048 tape library
• MSL4048 tape library with one or two full-height Ultrium LTO Fibre Channel drives
MSL8096 tape library
• MSL8096 tape library with one to four full-height Ultrium LTO Fibre Channel drives
All tape libraries have a bar code reader, removable magazines, and are customer expandable with
exchangeable drives. Customer replaceable parts are the drives, chassis, and magazines. Replaceable
parts that are specific to the MSL4048 and MSL8096 tape library models are the library controller and
power supply.

Back panel overview


Figure 16 Back panel view of the MSL2024 tape library

Figure 17 Back panel view of the MSL4048 tape library

Figure 18 Back panel view of the MSL8096 tape library

Number Description
1 Fibre Channel ports A and B, left to right
2 Fan vent
3 Power connector
4 Tape drive(s)
5 Ethernet port
6 Serial port (For HP Service use only)
7 USB port
8 Pull-out tab containing the serial number and other product information

Configuring drive information


MSL4048 and MSL8096: Configuration > Drive Configuration
MSL2024: Home > Configuration > Change Drive (1/2)
This screen helps configure the FC ports for each drive in the tape library. Each drive has an A port and a
B port.

IMPORTANT: It is important to configure the FC ports correctly before using the tape library.

NOTE: HP recommends cabling Port A only and configuring Port B for Auto Detect on Fibre Speed and
Port Type.



For Port A:
• Select Fibre Speed: Auto Detect, 1Gb/s, 2Gb/s, or 4Gb/s
• Select Port Type: Fabric (N) if connecting to a fibre switch, or Loop (NL) if connecting directly to an HBA
or fibre hub.
• Selecting Loop (NL) requires setting the Loop Mode: Soft, Hard or Hard Auto-Select.
• Selecting Hard requires setting the ALPA address.
For Port B:
• Select Fibre Speed: Auto Detect
• Select Port Type: Auto Detect
Viewing drive status information
• MSL4048 and MSL8096: Info > Status > Drive
This screen displays the status of the selected drive. It shows the drive status, source slot, tape bar code,
error code (if appropriate), drive temperature, status of cooling fan and drive activity.
• MSL2024: Home > Status/Information > Drive 1/2 Information
This screen displays the serial number, drive type, firmware revision and error log for the selected drive.
Additional Fibre Channel drive information:
• The link status of each port may be: No Light, Logged In, Logged Out, ALPA Conflict, or Negotiation
Link. No Light or ALPA Conflict indicates an error condition.
• The speed for each port: Auto Detect, 1Gb/s, 2Gb/s, or 4Gb/s.

SAN connectivity
HP StorageWorks MSL2024, MSL4048, and MSL8096 tape libraries may be connected to the SAN in
two ways:
1. Native Fibre Channel tape drives integrated into the library for direct SAN connectivity.
2. SCSI tape drives in the library, with a Network Storage Router for SAN connectivity.
This section describes how to decide which option to use when attaching an MSL2024, MSL4048, or
MSL8096 tape library to a SAN.
One of the most important considerations in data protection is the reliability of the backup and restore
jobs. How the backup SAN is implemented can affect the completion rate and performance of the backup
and restore jobs.
Native Fibre Channel drives (NFC)
Native Fibre Channel drives allow for direct connection into a SAN without an intermediate Network
Storage Router. The NFC drive based library offers the best price-to-performance ratio for integration into a
SAN. The NFC drives can be configured directly using the library Remote Management Utility (RMU). The
RMU allows the administrators to set Port Speed, Port Topology, Addressing Mode and Arbitrated Loop
Physical Address (ALPA).

NOTE: HP supports switched fabric and point-to-point (P2P) topologies, but does not support arbitrated
loop configurations.

The NFC tape drive library is configured into the backup software in the same way as any other HP tape
library by performing a device scan from the backup software. NFC drives and SCSI drives with an
attached NSR can be used together within the same library.
Since Native Fibre Channel drives allow direct connection into a SAN, it is important to be mindful of the
size and/or scope of the SAN in which the library and drives are being attached.

IMPORTANT: HP strongly recommends connecting HP Native Fibre Channel tape libraries into a SAN
only where hosts are relatively homogeneous and grouped within smaller private SANs. Information
regarding component compatibility can be found on the HP Enterprise Backup Solutions Compatibility
Matrix available on the EBS website at: www.hp.com/go/ebs. The matrix is updated monthly.

With this topology, the NFC tape drives are connected directly into the SAN but are not isolated from SAN
traffic such as Target Resets or rogue applications or servers. In a complex or poorly implemented SAN,
those items can cause backup and restore jobs to abort, requiring manual intervention to restart the job
and ensure it completes.

Figure 19 Native Fibre Channel connectivity

Network Storage Routers (NSR) N1200-320 4Gb


The external Network Storage Router is used to connect HP SCSI based tape library drives into a Storage
Area Network (SAN). The Network Storage Router is a proven and reliable technology designed for MSL
series libraries which provides flexibility and intelligence for better storage manageability when connecting
into a SAN. The NSR includes functionality such as Host Device Configuration, Logical Unit Management,
and Device Discovery Modes. These features provide additional flexibility and isolation between the SAN
environment and the SCSI tape library.
The NSR is not embedded into the library and requires an additional 1U of rack space. The router provides
two LVD SCSI ports for drive connection and a single fibre port used to connect into the SAN or fibre host.
The external NSR can be configured and managed using multiple user interfaces (UIs). The NSR supports
the following user interfaces: Visual Manager (a web browser-based interface), Serial, Telnet, and FTP.
Because the NSR is not embedded in the MSL2024 or MSL4048, at least two IP addresses are required: one
IP address for the library and one IP address for each NSR.



With this topology, the SCSI tape drives are connected into the SAN with the NSR. Because of the
intelligence inherent in the NSRs, they will isolate the tape drives from undesirable traffic such as Target
Resets and traffic from “rogue” servers or applications. This isolation greatly improves backup and restore
reliability in large or complex SAN environments, but its benefits can be seen even in smaller SANs.

Figure 20 Network storage router connectivity

Summary
• Use native FC tape libraries for best price performance in connecting tape to a SAN.
• Use SCSI-based tape libraries with a Network Storage Router when more flexibility and isolation of
tape devices in the SAN is required.

                              Native Fibre drives        SCSI drives with NSR
Cost                          Lowest                     Higher
Reliability                   High                       Highest
Configurability               Lower                      Higher
Manageability                 Lower                      Higher
Fibre switch ports required   One per drive              One per NSR (up to 2 drives)
Environment                   Small, controlled SAN      Heterogeneous, larger SAN

Setting the bar code length
1. Access the front panel.
2. Scroll over to Configuration.
3. Scroll down to bar code Reporting.
4. Enter password (if requested).
5. Scroll down to bar code options and enter the desired values. The default values are as follows:
• bar code label size = 8
• bar code alignment = left align

Check connectivity and performance with L&TT


• Install L&TT on the server and run it to scan and find the tape devices. See “Library and Tape Tools” on
page 173 for more information.
• Check to ensure that the topology is as expected. L&TT will show the connectivity and topology for each
host HBA and tape device connected.
• Run the L&TT Device Performance test on each device to verify that the host to device data path is
configured for full speed operation. This step requires a write-enabled tape cartridge.

NOTE: HP recommends leaving L&TT installed on the host so it is readily available if there is a need to run
diagnostics if directed by HP support at a later date. It has no background services and no impact on the
host or other applications.



The HP StorageWorks 6000 Virtual Library System (VLS)
Integrating seamlessly into existing backup applications and processes, the HP StorageWorks 6000 Virtual
Library System (VLS 6000) accelerates backup performance in complex SAN environments while
improving overall reliability. Emulating popular tape libraries and tape drives, the VLS 6000 matches the
existing data protection environment, removing the need to change backup software or monitoring policies.
By emulating multiple tape drives simultaneously, more backup jobs can be done in parallel resulting in
reduced backup times. Additionally, because the data resides on disk, single file restores are exceptionally
fast.
The HP StorageWorks 6000 Virtual Library System (VLS) is a RAID 5, Serial ATA disk-based SAN backup
device that emulates HP's physical tape libraries. This configuration allows disk-to-virtual tape (disk-to-disk)
backups to be performed using existing backup application(s).

Features
• Emulates popular tape drives and libraries
• Up to 16 libraries and 64 drives emulated simultaneously within a single virtual library system
• Certified with popular backup software packages through HP StorageWorks Enterprise Backup Solution
• Over 575 MB/s throughput
• Scales capacity and performance
• Data compression
• Hot Swap array drives
• Redundant array power supplies and cooling
• RAID 5
• Mounts in a standard 19-inch rack

Setting the bar code


With the HP VLS6000-series virtual library, bar code templates are created for use with the virtual
cartridges on which data is stored. These bar codes can have as few as 2 characters or as many as 99,
and they are completely configurable by the administrator.
When configuring the bar code templates, follow the requirements (if any) for bar code prefixes and length
per the backup application. In addition, it is a good idea to use a bar code prefix that differs from any
physical cartridge bar codes in a tape library to which data may be migrated behind the VLS. That way,
physical and virtual cartridges can be easily recognized from within the backup application.
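
For example, a simple naming scheme might give virtual cartridges a prefix that no physical cartridge uses. The prefixes and label layout below are hypothetical and only illustrate the recommendation; follow the backup application's own bar code requirements.

    def make_labels(prefix, count, media_id="L4", digits=4):
        # Generate sequential labels such as VT0001L4, VT0002L4, ...
        return [f"{prefix}{n:0{digits}d}{media_id}" for n in range(1, count + 1)]

    print(make_labels("VT", 3))   # virtual cartridges: VT0001L4, VT0002L4, VT0003L4
    print(make_labels("PH", 3))   # physical cartridges: PH0001L4, PH0002L4, PH0003L4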

The following diagrams show the rack order and cabling configuration of the various VLS systems.

VLS6105 and VLS6109 rack order

Figure 21 VLS6105 and VLS6109 rack order

1 Node
2 Disk array 1
3 Disk array 2

VLS6200 and VLS6500 disk array rack mounting order

Figure 22 VLS6200 and VLS6500 disk array rack mounting order

1 Disk array 3
2 Disk array 2
3 Node
4 Disk array 0
5 Disk array 1



VLS6600 disk array rack mounting order

Figure 23 VLS6600 disk array rack mounting order

1 Disk array 7
2 Disk array 6
3 Disk array 5
4 Disk array 4
5 Node
6 Disk array 0
7 Disk array 1
8 Disk array 2
9 Disk array 3

VLS6840 and VLS6870 rack order


Figure 24 VLS6840 and VLS6870 rack order


1 Disk array 15
2 Disk array 14
3 Disk array 13
4 Disk array 12
5 Disk array 11
6 Disk array 10
7 Disk array 9
8 Disk array 8
9 Node
10 Disk array 0
11 Disk array 1
12 Disk array 2
13 Disk array 3
14 Disk array 4
15 Disk array 5
16 Disk array 6
17 Disk array 7



VLS6105, VLS6109, VLS6218, VLS6227, VLS6510, VLS6518 cabling

Figure 25 VLS6105, VLS6109, VLS6218, VLS6227, VLS6510, VLS6518 cabling

1 VHDCI connector B2, connect to disk array 0


2 VHDCI connector B1, connect to disk array 1
3 VHDCI connector A1, connect to disk array 2
4 VHDCI connector A2, connect to disk array 3

VLS6600 cabling


Figure 26 VLS6600 — Connecting the VHDCI connectors to disk arrays

1 VHDCI connector slot 5, A1, connect to disk array 0


2 VHDCI connector slot 5, A2, connect to disk array 1
3 VHDCI connector slot 5, B1, connect to disk array 2
4 VHDCI connector slot 5, B2, connect to disk array 3
5 VHDCI connector slot 4, A1, connect to disk array 4
6 VHDCI connector slot 4, A2, connect to disk array 5
7 VHDCI connector slot 4, B1, connect to disk array 6
8 VHDCI connector slot 4, B2, connect to disk array 7



VLS6840 and VLS6870 cabling


Figure 27 VLS6840 and VLS6870 cabling

1 VHDCI connector slot 8, A1, connect to disk array 0
2 VHDCI connector slot 8, A2, connect to disk array 1
3 VHDCI connector slot 8, B1, connect to disk array 2
4 VHDCI connector slot 8, B2, connect to disk array 3
5 VHDCI connector slot 7, A1, connect to disk array 4
6 VHDCI connector slot 7, A2, connect to disk array 5
7 VHDCI connector slot 7, B1, connect to disk array 6
8 VHDCI connector slot 7, B2, connect to disk array 7
9 VHDCI connector slot 6, A1, connect to disk array 8
10 VHDCI connector slot 6, A2, connect to disk array 9
11 VHDCI connector slot 6, B1, connect to disk array 10
12 VHDCI connector slot 6, B2, connect to disk array 11
13 VHDCI connector slot 5, A1, connect to disk array 12
14 VHDCI connector slot 5, A2, connect to disk array 13
15 VHDCI connector slot 5, B1, connect to disk array 14
16 VHDCI connector slot 5, B2, connect to disk array 15

Check connectivity and performance with L&TT
• Install L&TT on the server and run it to scan and find the tape devices. See “Library and Tape Tools” on
page 173 for more information.
• Check to ensure that the topology is as expected. L&TT will show the connectivity and topology for each
host HBA and tape device connected.
• Run the L&TT Device Performance test on each device to verify that the host to device data path is
configured for full speed operation. This step requires a write-enabled tape cartridge.

NOTE: HP recommends leaving L&TT installed on the host so it is readily available if there is a need to run
diagnostics if directed by HP support at a later date. It has no background services and no impact on the
host or other applications.

Operating system LUN requirements and restrictions


The VLS automatically assigns a logical unit number (LUN) to each virtual library and tape drive created on
the VLS in the order in which they are created by you, starting with LUN0 and increasing incrementally by
one as each new virtual library or tape drive is created on a FC host port (LUN1, LUN2 and so on). By
default, the VLS allows all hosts connected to the VLS through the SAN to access all virtual devices
configured on the VLS.
The operating system may not detect some of the virtual devices when it scans the SAN for new hardware:
• More LUNs on the FC host ports than the operating system is configured to see. By default, Windows
and HP-UX hosts can see a maximum of 8 LUNs per FC host port.
• A gap exists in the LUN numbering on the FC host port. Most operating systems will stop looking for a
virtual device on a FC host port once a gap in the LUN numbering is detected.
The VLS offers LUN mapping and LUN masking to overcome these operating system LUN limitations. For
implementation details, refer to the “LUN Management” section of the “HP StorageWorks Virtual Library
System user guide” for the VLS12000/300 or VLS6000.
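
A minimal sketch of the discovery behavior described above (Python, illustration only; the eight-LUN default comes from the text, the device names are hypothetical):

    MAX_LUNS_SEEN = 8    # default limit per FC host port for Windows and HP-UX hosts

    def discovered(devices_by_lun):
        # Scan LUNs in order and stop at the first gap or at the host limit,
        # mimicking how many operating systems walk a FC host port.
        found = []
        for lun in range(MAX_LUNS_SEEN):
            if lun not in devices_by_lun:
                break
            found.append((lun, devices_by_lun[lun]))
        return found

    # LUN 2 is missing, so the virtual drive at LUN 3 is never detected:
    port = {0: "virtual library", 1: "virtual drive 1", 3: "virtual drive 2"}
    print(discovered(port))   # [(0, 'virtual library'), (1, 'virtual drive 1')]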



HP StorageWorks 9000–series Virtual Library System (VLS)
Features
The HP StorageWorks 9000 virtual library system (VLS) is a RAID disk-based SAN backup device that
emulates physical tape libraries, allowing you to perform disk-to-virtual tape (disk-to-disk) backups using
your existing backup application(s). The many benefits of performing data backups to a VLS instead of
physical tape are described in Benefits.
The VLS emulates a variety of physical tape libraries, including the tape drives and cartridges inside the
libraries. You determine the number and types of tape libraries a VLS emulates, and the number and type
of tape drives and cartridges included in each tape library to meet the needs of your environment. You
configure the size of the virtual cartridges in your VLS, which provides even more flexibility.
The VLS automigration features allow you to establish data pools to create and manage mirror (echo copy)
or snapshot (smart copy) replication of data for additional protection against data loss.
The VLS accommodates mixed IT platform and backup application environments, allowing all your servers
and backup applications to access the virtual media simultaneously. You can specify which servers are
allowed to access each virtual library and tape drive you configure. You can change the default LUNs
assigned to the virtual library and tape drives for each host as needed to accommodate different operating
system requirements and restrictions.
Data stored on a VLS is easily cloned to physical tape for off-site disaster protection or long-term archival
using a backup application.

Benefits
Integrating a VLS into your existing storage and backup infrastructure delivers the following benefits:
• Faster backups—Backup speeds are limited by the number of tape drives available to the SAN
hosts. The VLS emulates many more tape drives than are available in physical tape libraries, allowing
more hosts to run backups concurrently. The VLS is optimized for backups and delivers faster
performance than a simple disk-to-disk solution.
• Faster single file restores—A single file can be restored much faster from disk than tape.
• Lower operating costs—Fewer physical tape drives and cartridges are required as full backups to
tape are eliminated. Also, fewer cartridges are required as small backups stored on multiple virtual
cartridges can be copied to one physical cartridge.
• More efficient use of storage space—Physical tape libraries cannot share storage space with
other physical tape libraries, and physical cartridges cannot share storage space with other physical
cartridges. This unused storage space is wasted.
Storage space is not wasted in a VLS, because VLS storage space is dynamically assigned as it is used.
Storage space is shared by all the libraries and cartridges configured on a VLS.
• Reduced risk of data loss and aborted backups—RAID-based storage is more reliable than
tape storage.
Aborted backups caused by tape drive mechanical failures are eliminated.

VLS9000-series components
A VLS9000-series system consists of at least one VLS9000 node, at least one VLS9000 array (one base
disk array enclosure and three expansion disk array enclosures), and one VLS9000 connectivity kit (two
Ethernet switches for internal inter-node connections and two Fibre Channel (FC) switches for disk storage
connections). See the drawing of racked system below.
Each VLS9000 node contains hardware data compression, dual processors, one 4 Gb quad port FC HBA,
two 2048 MB memory modules, and two 60 GB SATA hard drives.


Figure 28 Example of VLS9000-series system

1 Node 0, primary node


2 FC switch #1
3 FC switch #2
4 Ethernet Switch 2524 (100 Mb)
5 Ethernet Switch 2824 (1 Gb)
6 Base disk array enclosure
7 Expansion disk array enclosure
8 Expansion disk array enclosure
9 Expansion disk array enclosure

You can install either the VLS9000 20-port or 32-port connectivity kit in your VLS9000-series system. The
20-port connectivity kit includes two 10-port FC switches and two Ethernet switches. The 32-port
connectivity kit includes two 16-port FC switches and two Ethernet switches.
The 32-port connectivity kit allows you to install more VLS9000 nodes and VLS9000 arrays in your
VLS9000-series system than the 20-port connectivity kit. Two FC ports (one FC port on each FC switch) are
required for each VLS9000 node or VLS9000 array installed in a VLS9000-series system. See VLS9030
capacity for configuration options.

NOTE: For maximum performance, install one VLS9000 array for every VLS9000 node installed.
For maximum capacity, install two VLS9000 arrays for every VLS9000 node installed.



Up to two VLS9000 arrays may be installed for every VLS9000 node in a VLS9000-series system. To add
a VLS9000 array, purchase a VLS9000 capacity kit. A VLS9000 capacity kit includes one VLS9000 array
and one capacity license for the VLS9000 array. Adding nodes and arrays increases the VLS9000-series
storage capacity as shown in VLS9030 capacity. Adding nodes also increases the performance if there is
more than one array installed for every node (see note above). See the HP StorageWorks VLS9000 Virtual
Library System Quickspec on the HP web site at
http://h18004.www1.hp.com/storage/disk_storage/disk_to_disk/vls/index.html?jumpid=reg_R1002_USEN
for performance data.

Table 3 VLS9030 capacity (with 2:1 data compression)

Nodes   20-port connectivity kit           32-port connectivity kit
1       60 TB — 120 TB (1 — 2 arrays)      60 TB — 120 TB (1 — 2 arrays)
2       120 TB — 240 TB (2 — 4 arrays)     120 TB — 240 TB (2 — 4 arrays)
3       180 TB — 360 TB (3 — 6 arrays)     180 TB — 360 TB (3 — 6 arrays)
4       240 TB — 360 TB (4 — 6 arrays)     240 TB — 480 TB (4 — 8 arrays)
5       300 TB (5 arrays)                  300 TB — 600 TB (5 — 10 arrays)
6       NA                                 360 TB — 600 TB (6 — 10 arrays)
7       NA                                 420 TB — 540 TB (7 — 9 arrays)
8       NA                                 480 TB (8 arrays)
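
The ranges in Table 3 can be reproduced from the rules above: each array contributes 60 TB (at the assumed 2:1 compression), each node or array consumes one port on each FC switch, and up to two arrays may be installed per node. A sketch of that arithmetic (Python, illustration only; the one-array-per-node minimum mirrors the table's low end):

    TB_PER_ARRAY = 60    # per VLS9000 array, assuming 2:1 data compression

    def capacity_range(nodes, kit_ports):
        ports_per_switch = kit_ports // 2               # 10-port or 16-port FC switches
        max_arrays = min(2 * nodes, ports_per_switch - nodes)
        if max_arrays < nodes:                          # not enough ports for one array per node
            return None
        return nodes * TB_PER_ARRAY, max_arrays * TB_PER_ARRAY

    print(capacity_range(4, 20))   # (240, 360): 240 TB - 360 TB with 4 - 6 arrays
    print(capacity_range(4, 32))   # (240, 480): 240 TB - 480 TB with 4 - 8 arrays
    print(capacity_range(6, 20))   # None: not supported with the 20-port kit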

Installing VLS9000 cables


To install VLS9000 cables, follow the instructions below.
1. On the primary node:

Figure 29 Primary node port cabling


1 FC host port, port 0
2 FC host port, port 1
3 FC storage port, port 2, on primary node connects to port 0 of Fibre Channel switch #2
4 FC storage port, port 3, on primary node connects to port 0 of Fibre Channel switch #1
5 USB connector, on primary node connects to USB/Ethernet adapter, then to port 1 of
switch 2524
6 Serial connector to access CLI
7 Video connector
8 Keyboard connector
9 NIC 1, on primary node only connects to the customer-provided external network
10 NIC 2, on primary node connects to port 1 of switch 2824

11 Power supply 1
12 Power supply 2

a. Connect one end of a USB cable to the USB port. Connect the other end to the USB/Ethernet
adapter. Connect a 1 meter Ethernet cable to the adapter, then connect the Ethernet cable to port 1
of Ethernet Switch 2524 (see Figure 32).
b. Secure the USB/Ethernet adapter to the upper left inside rack brace.
c. Connect a 1 meter Ethernet cable to NIC2. Connect the other end of the cable to port 1 of Ethernet
Switch 2824 (see Figure 33).
d. Connect one end of an Ethernet cable (not included) to NIC1. Connect the other end of the cable to
the existing external network.
e. Connect one end of an FC cable (not provided) to host port 0. Connect the other end to an external
FC switch/fabric that connects to your tape backup hosts.
f. If desired, connect one end of an FC cable (not provided) to host port 1. Connect the other end to
an external FC switch/fabric that connects to your tape backup hosts. Otherwise, connect a
loopback plug to host port 1.
g. Connect one end of an FC cable to storage port 3. Connect the other end to port 0 of Fibre
Channel switch #1 after inserting a transceiver in the port (see Figure 34 or Figure 35).
h. Connect one end of an FC cable to storage port 2. Connect the other end to port 0 of Fibre
Channel switch #2 after inserting a transceiver in the port (see Figure 36 or Figure 37).

IMPORTANT: Do not touch the fibre channel cable tips.


Do not secure fibre channel cables with cable ties.

i. Connect to the serial port (cable is provided) to access the command-line user interface at initial
configuration. Also connect to this during debug activities. Disconnect from this port during normal
operations.

NOTE: You must connect to the keyboard and video connectors when performing Quick Restore
(keyboard and monitor not included).

j. Connect a black power cable to the power supply on the left.


k. Route the black power cable through the left side of the rack and plug it into a PDM.
l. Connect a gray power cable to the power supply on the right.
m. Route the gray power cable through the right side of the rack and plug it into a PDM.
n. Begin routing the cables through the cable ties that shipped with the racks.
2. On the secondary node(s):

NOTE: Use this procedure to install any secondary nodes — node 1, 2, 3, and so on.




Figure 30 Secondary node port cabling


1 FC host port, port 0
2 FC host port, port 1
3 FC storage port, port 2, on secondary node connects to the next available port of Fibre
Channel switch #2
4 FC storage port, port 3, on secondary node connects to the next available port of Fibre
Channel switch #1
5 USB port, on secondary node connects to USB/Ethernet adapter, then to the next
available port of switch 2524
6 NIC 2, on secondary nodes connects to the next available port of switch 2824
7 Power supply 1
8 Power supply 2

a. Connect one end of a USB cable to the USB port. Connect the other end of the cable to the
USB/Ethernet adapter. Connect a 1 meter Ethernet cable to the adapter, then connect the Ethernet
cable to the next available port of Switch 2524 (see Figure 32).
b. Secure the USB/Ethernet adapter to the upper left inside rack brace.
c. Connect a 1 meter Ethernet cable to NIC2. Connect the other end of the cable to the next available
port of Switch 2824 (see Figure 33).
d. Connect one end of an FC cable (not provided) to host port 0. Connect the other end to an external
FC switch/fabric that connects to your tape backup hosts.
e. If desired, connect one end of an FC cable (not provided) to host port 1. Connect the other end to
an external FC switch/fabric that connects to your tape backup hosts. Otherwise, connect a
loopback plug to host port 1.
f. Connect one end of an FC cable to storage port 3. Connect the other end to the next available port
of Fibre Channel switch #1 after inserting a transceiver in the port (see Figure 34 or Figure 35).
g. Connect one end of an FC cable to storage port 2. Connect the other end to the next available port
of Fibre Channel switch #2 after inserting a transceiver in the port (see Figure 36 or Figure 37).

IMPORTANT: Do not touch the fibre channel cable tips.


Do not secure fibre channel cables with cable ties.

h. Connect a black power cable to the power supply on the left.


i. Route the black power cable through the left side of the rack and plug it into a PDM.
j. Connect a gray power cable to the power supply on the right.
k. Route the gray power cable through the right side of the rack and plug it into a PDM.
l. Continue routing the cables through the cable ties that shipped with the racks.
3. On each VLS9000 array:


Figure 31 Disk array enclosure SAS port cabling

1 SAS cable, SAS port of RAID controller 0 of base disk array enclosure connects to SAS input port of expansion controller 0 of expansion disk array enclosure 0
2 SAS cable, SAS port of RAID controller 1 of base disk array enclosure connects to SAS input port of expansion controller 1 of expansion disk array enclosure 0
3 SAS cable, SAS output port of expansion controller 0 of expansion disk array enclosure 0 connects to SAS input port of expansion controller 0 of expansion disk array enclosure 1
4 SAS cable, SAS output port of expansion controller 1 of expansion disk array enclosure 0 connects to SAS input port of expansion controller 1 of expansion disk array enclosure 1
5 SAS cable, SAS output port of expansion controller 0 of expansion disk array enclosure 1 connects to SAS input port of expansion controller 0 of expansion disk array enclosure 2
6 SAS cable, SAS output port of expansion controller 1 of expansion disk array enclosure 1 connects to SAS input port of expansion controller 1 of expansion disk array enclosure 2

Remove the tape and end caps from the SAS cables before installing.
a. Verify that both power switches are off for each disk array enclosure in the rack.
b. Connect one end of a SAS cable to RAID controller 0 of the base disk array enclosure. Connect the
other end to SAS input port of expansion controller 0 of expansion disk array enclosure 0.
c. Connect one end of a SAS cable to RAID controller 1 of the base disk array enclosure. Connect the
other end to SAS input port of expansion controller 1 of expansion disk array enclosure 0.
d. Connect one end of a SAS cable to SAS output port of expansion controller 0 of the expansion disk
array enclosure 0. Connect the other end to SAS input port of expansion controller 0 of expansion
disk array enclosure 1.
e. Connect one end of a SAS cable to SAS output port of expansion controller 1 of the expansion disk
array enclosure 0. Connect the other end to SAS input port of expansion controller 1 of expansion
disk array enclosure 1.
f. Connect one end of a SAS cable to SAS output port of expansion controller 0 of the expansion disk
array enclosure 1. Connect the other end to SAS input port of expansion controller 0 of expansion
disk array enclosure 2.
g. Connect one end of a SAS cable to SAS output port of expansion controller 1 of the expansion disk
array enclosure 1. Connect the other end to SAS input port of expansion controller 1 of expansion
disk array enclosure 2.
h. Connect black power cables to power modules on the left.
i. Route the black power cables through the left side of the rack and plug them into a PDM.

j. Connect gray power cables to power modules on the right.
k. Route the gray power cables through the right side of the rack and plug them into a PDM.
l. Repeat this procedure for each array.
m. Secure the SAS cables for each array with a Velcro tie.


Figure 32 Ethernet Switch 2524 port cabling


1 Ethernet cable from USB adapter on primary node
2 Ethernet cable from USB adapter on 2nd node
3 Ethernet cable from USB adapter on 3rd node (if present)
4 Ethernet cable from USB adapter on 4th node (if present)
5 Ethernet cable from USB adapter on 5th node (if present)
6 Ethernet cable from USB adapter on 6th node (if present)
7 Ethernet cable from USB adapter on 7th node (if present)
8 Ethernet cable from USB adapter on 8th node (if present)
9 Ethernet cable from RAID controller 0 of 8th array (if present)
10 Ethernet cable from RAID controller 0 of 7th array (if present)
11 Ethernet cable from RAID controller 0 of 6th array (if present)
12 Ethernet cable from RAID controller 0 of 5th array (if present)
13 Ethernet cable from RAID controller 0 of 4th array (if present)
14 Ethernet cable from RAID controller 0 of 3rd array (if present)
15 Ethernet cable from RAID controller 0 of 2nd array (if present)
16 Ethernet cable from RAID controller 0 of 1st array
17 Ethernet cable from Ethernet port of Fibre Channel switch #1
18 Ethernet cable from port 24 of switch 2824

a. Ensure that the power cable is connected to the switch, as described in the racking instructions.
b. Connect one end of a 2 meter Ethernet cable to the Ethernet port of Fibre Channel switch #1.
Connect the other end of the Ethernet cable to port 23 of Ethernet Switch 2524.
c. Connect one end of an Ethernet cable to port 24 of Ethernet Switch 2824. Connect the other end of
the Ethernet cable to port 24 of Ethernet Switch 2524.
d. Ensure that the Ethernet cables from the NIC2 ports of each node are firmly set in the appropriate
ports on the switch.
e. Connect one end of an Ethernet cable to port 16 of Ethernet Switch 2524. Connect the other end of
the Ethernet cable to RAID controller 0 on the base disk array enclosure of the first array (array 0).
f. Working backwards from port 16 on Ethernet Switch 2524, connect one end of an Ethernet cable to
the next available Ethernet port on Ethernet Switch 2524. Connect the other end of the Ethernet
cable to RAID controller 0 on the base disk array enclosure, of the second array (array 1).
g. Repeat step f for the next array.
h. Secure Ethernet cables with a Velcro tie to the right side of the rack.
4. On Ethernet Switch 2824:


Figure 33 Ethernet Switch 2824 port cabling

1 Ethernet cable from NIC2 of primary node


2 Ethernet cable from NIC2 of 2nd node (if present)
3 Ethernet cable from NIC2 of 3rd node (if present)
4 Ethernet cable from NIC2 of 4th node (if present)
5 Ethernet cable from NIC2 of 5th node (if present)
6 Ethernet cable from NIC2 of 6th node (if present)
7 Ethernet cable from NIC2 of 7th node (if present)
8 Ethernet cable from NIC2 of 8th node (if present)
9 Ethernet cable from RAID controller 1 of 8th array (if present)
10 Ethernet cable from RAID controller 1 of 7th array (if present)
11 Ethernet cable from RAID controller 1 of 6th array (if present)
12 Ethernet cable from RAID controller 1 of 5th array (if present)
13 Ethernet cable from RAID controller 1 of 4th array (if present)
14 Ethernet cable from RAID controller 1 of 3rd array (if present)
15 Ethernet cable from RAID controller 1 of 2nd array (if present)
16 Ethernet cable from RAID controller 1 of 1st array
17 Ethernet cable from Ethernet port of Fibre Channel switch #2
18 Ethernet cable from port 24 of switch 2524

a. Ensure that the power cable is connected to the switch, as described in the racking instructions.
b. Ensure that the Ethernet cables from the NIC2 ports of each node are firmly set in the appropriate
ports.
c. Connect one end of a 2 meter Ethernet cable to the Ethernet port of Fibre Channel switch #2.
Connect the other end of the Ethernet cable to port 23 of Switch 2824.
d. Connect one end of an Ethernet cable to port 16 of Ethernet Switch 2824. Connect the other end of
the Ethernet cable to RAID controller 1 on the base disk array enclosure of the first array (array 0).
e. Working backwards from port 16 on Ethernet Switch 2824, connect one end of an Ethernet cable to
the next available Ethernet port on Ethernet Switch 2824. Connect the other end of the Ethernet
cable to RAID controller 1 on the base disk array enclosure, of the second array (array 1).
f. Repeat step e for the next array.
g. Secure Ethernet cables with a Velcro tie to the right side of the rack.
5. On Fibre Channel switch #1:


Figure 34 Fibre Channel switch #1 port cabling (20-port connectivity kit shown)

1 FC cable from FC port 3, of primary node


2 FC cable from FC port 3, of 2nd node (if present)
3 FC cable from FC port 3, of 3rd node (if present)
4 FC cable from FC port 3, of 4th node (if present)
5 FC cable from FC port 3, of 5th node (if present) (or FC port 0, of RAID controller 0 of 6th
array)
6 FC cable from FC port 0, of RAID controller 0 of 5th array (if present)
7 FC cable from FC port 0, of RAID controller 0 of 4th array (if present)
8 FC cable from FC port 0, of RAID controller 0 of 3rd array (if present)
9 FC cable from FC port 0, of RAID controller 0 of 2nd array (if present)
10 FC cable from FC port 0, of RAID controller 0 of 1st array
11 Ethernet cable from port 23 of switch 2524


Figure 35 Fibre Channel switch #1 port cabling (32-port connectivity kit shown)

1 Ethernet cable from port 23 of switch 2524


2 FC cable from FC port 3, of primary node
3 FC cable from FC port 3, of 2nd node (if present)
4 FC cable from FC port 3, of 3rd node (if present)
5 FC cable from FC port 3, of 4th node (if present)
6 FC cable from FC port 3, of 5th node (if present)
7 FC cable from FC port 3, of 6th node (if present)
8 FC cable from FC port 3, of 7th node (if present) (or FC port 0, of RAID controller 0 of
10th array)
9 FC cable from FC port 3, of 8th node (if present) (or FC port 0, of RAID controller 0 of 9th
array)
10 FC cable from FC port 0, of RAID controller 0 of 8th array (if present)

11 FC cable from FC port 0, of RAID controller 0 of 7th array (if present)
12 FC cable from FC port 0, of RAID controller 0 of 6th array (if present)
13 FC cable from FC port 0, of RAID controller 0 of 5th array (if present)
14 FC cable from FC port 0, of RAID controller 0 of 4th array (if present)
15 FC cable from FC port 0, of RAID controller 0 of 3rd array (if present)
16 FC cable from FC port 0, of RAID controller 0 of 2nd array (if present)
17 FC cable from FC port 0, of RAID controller 0 of 1st array

a. Ensure that the Fibre Channel cables from FC port 3, of each node are firmly set in the appropriate
ports.
b. Connect one end of a Fibre Channel cable to port 9 (if 10-port switch) or port 15 (if 16-port switch)
of Fibre Channel switch #1 after inserting a transceiver in the port. Connect the other end of the
Fibre Channel cable to Fibre Channel port 0 of RAID controller 0 of the first array (array 0).
c. Working backwards from the last Fibre Channel port on Fibre Channel switch #1, connect one end
of a Fibre Channel cable to the next available Fibre Channel port on Fibre Channel switch #1 after
inserting a transceiver in the port. Connect the other end of the Fibre Channel cable to Fibre
Channel port 0, of RAID controller 0, of the second array (array 1).
d. Repeat step c for the next array.
e. Remove the transceivers from any Fibre Channel ports that are not connected to another device, or
insert a plug in each unused transceiver.
Each unconnected transceiver generates connection failure notifications.
f. Connect a power cable to the switch. Then, route the AC power cable through the holes in the rack
to the back of the rack and connect the cable to a PDM.

NOTE: If there are two power supplies:


1. Connect a power cable to the power supply on the left.
2. Plug the power cable into a PDM on the left side of the rack.
3. Connect a power cable to the power supply on the right.
4. Plug the power cable into a PDM on the right side of the rack.

g. Secure the FC cables and the Ethernet cables installed in the previous steps together with Velcro ties.
Route them to the right side of the rack.
6. On Fibre Channel switch #2:


Figure 36 Fibre Channel switch #2 port cabling (20-port connectivity kit shown)

1 FC cable from FC port 2, of primary node


2 FC cable from FC port 2, of 2nd node (if present)
3 FC cable from FC port 2, of 3rd node (if present)
4 FC cable from FC port 2, of 4th node (if present)
5 FC cable from FC port 2, of 5th node (if present) (or FC port 0, of RAID controller 1 of 6th
array)

6 FC cable from FC port 0, of RAID controller 1 of 5th array (if present)
7 FC cable from FC port 0, of RAID controller 1 of 4th array (if present)
8 FC cable from FC port 0, of RAID controller 1 of 3rd array (if present)
9 FC cable from FC port 0, of RAID controller 1 of 2nd array (if present)
10 FC cable from FC port 0, of RAID controller 1 of 1st array
11 Ethernet cable from port 23 of switch 2824


Figure 37 Fibre Channel switch #2 port cabling (32-port connectivity kit shown)

1 Ethernet cable from port 23 of switch 2824


2 FC cable from FC port 2, of primary node
3 FC cable from FC port 2, of 2nd node (if present)
4 FC cable from FC port 2, of 3rd node (if present)
5 FC cable from FC port 2, of 4th node (if present)
6 FC cable from FC port 2, of 5th node (if present)
7 FC cable from FC port 2, of 6th node (if present)
8 FC cable from FC port 2, of 7th node (if present) (or FC port 0, of RAID controller 1 of
10th array)
9 FC cable from FC port 2, of 8th node (if present) (or FC port 0, of RAID controller 1 of 9th
array)
10 FC cable from FC port 0, of RAID controller 1 of 8th array (if present)
11 FC cable from FC port 0, of RAID controller 1 of 7th array (if present)
12 FC cable from FC port 0, of RAID controller 1 of 6th array (if present)
13 FC cable from FC port 0, of RAID controller 1 of 5th array (if present)
14 FC cable from FC port 0, of RAID controller 1 of 4th array (if present)
15 FC cable from FC port 0, of RAID controller 1 of 3rd array (if present)
16 FC cable from FC port 0, of RAID controller 1 of 2nd array (if present)
17 FC cable from FC port 0, of RAID controller 1 of 1st array

a. Ensure that the Fibre Channel cables from FC port 2, of each node are firmly set in the appropriate
ports.
b. Connect one end of a Fibre Channel cable to port 9 (if 10-port switch) or port 15 (if 16-port switch)
of Fibre Channel switch #2 after inserting a transceiver in the port. Connect the other end of the
Fibre Channel cable to Fibre Channel port 0 of RAID controller 1 of the first array (array 0).
c. Working backwards from the last Fibre Channel port on Fibre Channel switch #2, connect one end
of a Fibre Channel cable to the next available Fibre Channel port on Fibre Channel switch #2 after
inserting a transceiver in the port. Connect the other end of the Fibre Channel cable to Fibre
Channel port 0, of RAID controller 1, of the second array (array 1).
d. Repeat step c for the next array.

e. Remove the transceivers from any Fibre Channel ports that are not connected to another device, or
insert a plug in each unused transceiver.
Each unconnected transceiver generates connection failure notifications.
f. Connect a power cable to the switch. Then, route the AC power cable through the holes in the rack
to the back of the rack and connect the cable to a PDM.

NOTE: If there are two power supplies:


1. Connect a power cable to the power supply on the left.
2. Plug the power cable into a PDM on the left side of the rack.
3. Connect a power cable to the power supply on the right.
4. Plug the power cable into a PDM on the right side of the rack.

g. Secure the Fibre Channel cables and the Ethernet cables installed in the previous steps together with
Velcro ties. Route them to the right side of the rack.
The VLS9000 system hardware installation is complete. Continue the installation by configuring the identities
of each node.

Setting the bar code


With the HP VLS9000-series virtual library, bar code templates are created for use with the virtual
cartridges on which data is stored. These bar codes can have as few as 2 characters or as many as 99,
and they are completely configurable by the administrator. When configuring the bar code templates,
follow the requirements (if any) for bar code prefixes and length per the backup application. In addition, it
is a good idea to use a bar code prefix that differs from any physical cartridge bar codes in a tape library
to which data may be migrated behind the VLS. That way, physical and virtual cartridges can be easily
recognized from within the backup application.

Check connectivity and performance with L&TT


• Install L&TT on the server and run it to scan and find the tape devices. See ”Library and Tape Tools” on
page 163 for more information.
• Check to ensure that the topology is as expected. L&TT will show the connectivity and topology for each
host HBA and tape device connected.
• Run the L&TT Device Performance test on each device to verify that the host to device data path is
configured for full speed operation. This step requires a write-enabled tape cartridge.

NOTE: HP recommends leaving L&TT installed on the host so it is readily available to run diagnostics if
directed by HP support.

HP StorageWorks 12000/300 Virtual Library System Gateway
The HP StorageWorks 12000/300 Virtual Library System Gateway (VLS12000/300) is a RAID disk-based
SAN backup device that emulates physical tape libraries, allowing disk-to-virtual tape (disk-to-disk) backups
using existing backup application(s). The many benefits of performing data backups to a VLS12000/300
instead of physical tape are described in Benefits.
The VLS12000/300 emulates a variety of physical tape libraries, including the tape drives and cartridges
inside the libraries. With the VLS12000/300, you can:
• Determine the number and types of tape libraries a VLS emulates, and the number and type of tape
drives and cartridges included in each tape library to meet the needs of your environment.
• Configure the size of the virtual cartridges in your VLS12000/300, which provides even more
flexibility. The VLS12000/300 accommodates mixed IT platform and backup application environments,
allowing all your servers and backup applications to access the virtual media simultaneously.
• Specify which servers are allowed to access each virtual library and tape drive you configure.
• Change the default LUNs assigned to the virtual library and tape drives for each host as needed to
accommodate different operating system requirements and restrictions.
• Clone data stored on a VLS12000/300 to physical tape for off-site disaster protection or long-term
archival using a backup application.

Benefits
Integrating a VLS12000/300 into your existing storage and backup infrastructure delivers the following
benefits:
• Fast data restoration and backup performance: The HP StorageWorks 12000 Virtual Library
System EVA Gateway is a multi-node gateway solution for the EVA that easily scales in performance
and capacity to 4800 MB/sec and 1080 TB of useable storage, with hardware compression.
Accelerated deduplication retains up to 50 times more data readily available on disk.
• Cost-effective data protection and on-going management: The VLS12000 EVA Gateway
is deployed, managed and operated just like a tape library minimizing disruptions to your environment.
Emulations include the HP StorageWorks ESL and MSL series tape libraries as well as HP 1/8 G2
Autoloader and all HP Ultrium Tape Drives as well as DLT7000, DLT8000 and SDLT 320 Tape Drives.
• Reliability: While the VLS12000 EVA Gateway contains reliable hardware featuring hot plug disk
drives, standard redundant power supplies and fans, the real reliability is for your data protection
process. Simplifying the process by which storage is shared means fewer errors occur.
• Easy operation: The VLS12000 EVA Gateway is deployed, managed and operated just like a tape
library minimizing disruptions to your environment. Emulations include the HP StorageWorks ESL and
MSL series tape libraries as well as HP 1/8 G2 autoloaders and all HP Ultrium Tape Drives and
DLT7000, DLT8000 and SDLT 320 Tape Drives.
• Automigration: The HP Virtual Library Systems support Automigration, which allows the VLS to move
data to a physical library or another VLS. Smart copy will be further enhanced when HP delivers
low-bandwidth replication.
• Accelerated deduplication: The VLS12000 EVA Gateway now supports capacity licensing for
Accelerated deduplication. The data deduplication capacity LTU will be licensed by the number of EVA
LUNs presented to the VLS. One license per LUN (T9709A) is required. Accelerated deduplication
retains up to 50 times more data readily available on disk.

NOTE: For more information on deduplication with the VLS12000, refer to the Data Deduplication page
found at the HP Data Storage web site: http://welcome.hp.com/country/us/en/prodserv/storage.html.

System status monitoring


VLS hardware, environmental, and virtual device (library, tape drive, cartridge) status is constantly
monitored by the VLS software and displayed on the VLS web user interface, Command View VLS.

A notification alert is generated by the VLS software when a hardware or environmental failure is detected
or predicted. VLS notification alerts are displayed on Command View VLS, and can also be sent as mail to
the mail addresses you specify and/or SNMP traps to the management consoles you specify.
For more information about viewing VLS hardware status, and/or receiving VLS notification alerts by mail
or as SNMP traps, see the HP StorageWorks 12000/300 Virtual Library System user guide, Monitoring
chapter.

Redundancy
The VLS12000/300 includes some important redundancy features:
• Redundant fans
Each node includes redundant fans. If a fan fails in a node (head unit), the remaining fans run at a
faster speed, temporarily providing enough cooling.
• Redundant power supply
Each node includes a redundant power supply. With redundant power supplies, if one power supply
fails in a node, the remaining functional power supply provides enough power for the node to function.
HP recommends that the primary power supplies be connected to a separate power source at the site
from the redundant power supplies.

CAUTION: Replace a failed fan or power supply as soon as possible to maximize the life expectancy of
the remaining fan(s) or power supply and to maintain redundancy.

• Redundant system disks


Each VLS12000/300 node (head unit) contains two system hard drives configured into a RAID 1
(mirrored) volume. This provides dual boot capability and quick recovery if one of the system hard
drives fails.
For more information about VLS features, see the HP website: http://www.hp.com.

VLS12000/300 components
A VLS12000/300 consists of at least two nodes (one primary node and between one and seven
secondary nodes) and dual LAN switches for internal inter-node connections. See the drawing of racked
nodes below. Each node contains dual processors, two dual port FC HBAs, four 512 MB memory modules,
and two 80 GB hard drives. No external storage is included with the VLS12000/300; instead, the
gateway uses external storage in existing arrays.


Figure 38 VLS12000/300 components

1 Ethernet switch 2824 (1 Gb)


2 Ethernet switch 2524 (100 Mb)

3 Node 0, primary node
4 Node 1, secondary node
The two nodes include a base license to configure up to 25 LUNs—ten LUNs per gateway node plus five
2 TB upgrade licenses—which gives the gateway up to 50 TB capacity.
Up to six nodes can be added to a VLS12000/300 for a total of eight nodes in a single gateway. Each
additional node adds licenses for up to ten more LUNs and increases maximum external capacity by up to
20 TB. Capacity can also be increased by purchasing capacity bundles, each of which adds licensing for
one additional external array LUN and increases maximum external capacity by up to 2 TB.

NOTE: Maximum capacity for each LUN on the VLS12000/300 is 2 TB.

Adding nodes and licenses increases the VLS12000/300 storage capacity as shown in Table 4. Adding
nodes also increases the performance. See the HP StorageWorks VLS12000/300 Virtual Library System
Quickspec on the HP website (http://h18006.www1.hp.com/products/storageworks/6000vls) for
performance data.
Table 4 VLS12000/300 capacity

Nodes Maximum capacity without expansion LTUs


2 50 TB
3 70 TB
4 90 TB
5 110 TB
6 130 TB
7 150 TB
8 170 TB
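
As a cross-check of the figures in Table 4, the capacity arithmetic can be sketched in a few lines of Python. The constants below (2 TB per licensed LUN, ten LUN licenses per node, and five bundled 2 TB upgrade licenses in the base configuration) come from the preceding paragraph; the helper itself is illustrative only, not an HP tool.

```
# Illustrative sketch of the VLS12000/300 capacity arithmetic behind Table 4.
# Assumptions (from the text above): 2 TB per licensed LUN, ten LUN licenses per
# node, and five additional 2 TB upgrade licenses bundled with the two-node base.

TB_PER_LUN = 2          # maximum capacity per external array LUN
LUNS_PER_NODE = 10      # LUN licenses added with each gateway node
BASE_UPGRADE_LUNS = 5   # extra upgrade licenses bundled with the base configuration

def max_capacity_tb(nodes: int) -> int:
    """Maximum usable capacity (TB) without additional expansion LTUs."""
    if not 2 <= nodes <= 8:
        raise ValueError("A VLS12000/300 gateway has between 2 and 8 nodes")
    licensed_luns = nodes * LUNS_PER_NODE + BASE_UPGRADE_LUNS
    return licensed_luns * TB_PER_LUN

if __name__ == "__main__":
    for n in range(2, 9):
        print(f"{n} nodes: {max_capacity_tb(n)} TB")   # 2 -> 50 TB ... 8 -> 170 TB
```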

Prepare the EVA for the VLS12000 or VLS300 Gateway


Arrays that will be connected to the VLS Gateway must already be set up with the appropriate
configuration as described in the solutions guide, including:
• Command View EVA is installed, at firmware revision 5100 or later, and functioning properly.
• There are either two external FC switches/fabrics or two zones on an external FC switch/fabric so that
there are two (high availability) data pathways from the VLS Gateway to the EVA.
• All of the VRaid LUNs required for the VLS have been created on the EVA according to the design
guidelines. (For example, each LUN is roughly the same size, with 2 TB preferred. The LUNs cannot be
read-only. RAID 5 is recommended. Path failover is balanced across both EVA controllers.)

NOTE: Minimum capacity for EVA LUNs is 100 GB. Ensure that all EVA LUNs attached to the Gateway
meet this requirement.

EVA design considerations (VLS12000 or VLS300 Gateway)


Before you connect the VLS12000 or VLS300 Gateway to the external EVA, consider these concepts and
perform the related tasks:
• The VLS12000 and VLS300 attach to a properly functioning EVA running firmware v. 5.0 or higher.
Ensure that your EVA meets these requirements before connecting the VLS12000 or VLS300. See the
HP Enterprise Backup Solutions Compatibility Matrix at www.hp.com/go/ebs for latest version
supported.

• Before installing the VLS12000 or VLS300, ensure that there are either two external FC switches/fabrics
or two zones on an external FC switch/fabric. This is required so that there are two data pathways from
the VLS to the EVA. Dual pathing provides path balancing and transparent path failover on the
VLS12000 and VLS300.

Figure 39 VLS12000 EVA Gateway connected to an EVA


• Ensure that the zoning configuration is complete and that storage ports 2 and 3 on each VLS node
connect to different switches/fabrics or zones. The EVA controllers must also be connected to both
switches/fabrics or zones.
• VLS nodes can be configured through the EVA to have dual FC paths to the back-end disk arrays. With
dual paths, if one path fails the node can still access each LUN through the remaining path(s). To get
the best performance from the EVA, you must have balanced performance across the array controllers.
To address these requirements:
• VLS software will automatically recognize the multiple paths to the disk array LUNs. If it does not,
revisit your configuration and make changes as necessary so that the software does recognize all
paths upon rebooting the system.
• Path failure is reported through the array information presented in the graphical user interface.

NOTE: Path failure can reduce throughput for backup and restore operations.

• Balance data traffic across both controllers of the array (called A and B in Command View VLS). To do
this, ensure that the preferred path for half of the VRaids in the array is set to Path A-Failover only, and
the preferred path for the other half is set to Path B-Failover only (a simple balancing sketch follows this list).

• The EVA LUNs must be fully initialized and ready for use before attaching the VLS to the EVA. It can
take several hours to initialize an entire EVA; therefore, HP advises you to create the LUNs in advance
of the installation.
• Ensure that all VRaid LUNs required for the VLS have been created on the EVA such that the LUNs are
RAID 5, are not configured as read-only, and are roughly the same size (preferably 2 TB each).
• The VLS12000 EVA Gateway supports a maximum of 1024 total paths to its array LUNs and a maximum
of 256 LUNs. To maintain the maximum number of LUNs without exceeding the maximum number of LUN
paths, establish no more than four paths per array LUN (256 LUNs × 4 paths = 1024 paths).
• It is possible to share an EVA between the VLS12000 or VLS300 and other applications. If the EVA is
shared, VLS traffic may overwhelm the array and significantly reduce performance across all
applications sharing the array. Alternately, the other applications may slow down the performance of
the VLS. If this happens, manual load-balancing between applications is required (that is, you will have
to schedule the VLS traffic only when all other applications are inactive). For this reason, HP
recommends using a dedicated EVA for the VLS12000 or VLS300.
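
The controller balancing guideline above can be expressed as a simple round-robin split. The sketch below is plain Python for illustration only; it is not an EVA or Command View interface, and the LUN names and the returned strings are made up. The actual preferred-path setting is made per VRaid in Command View.

```
# Illustrative only: splits a list of VRaid LUN names evenly between the two EVA
# controllers so that backup traffic is balanced, mirroring the "half on Path
# A-Failover only, half on Path B-Failover only" guideline above.

def balance_preferred_paths(vraid_luns):
    """Return {lun: preferred_path} alternating between controllers A and B."""
    plan = {}
    for index, lun in enumerate(sorted(vraid_luns)):
        preferred = "Path A-Failover only" if index % 2 == 0 else "Path B-Failover only"
        plan[lun] = preferred
    return plan

# Hypothetical LUN names, for demonstration only.
print(balance_preferred_paths(["VLS_LUN_01", "VLS_LUN_02", "VLS_LUN_03", "VLS_LUN_04"]))
```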

Setting the bar code


With the HP VLS12000/300-series virtual library, bar code templates are created for use with the virtual
cartridges on which data is stored. These bar codes can have as few as 2 characters or as many as 99,
and they are completely configurable by the administrator. When configuring the bar code templates,
follow the requirements (if any) for bar code prefixes and length per the backup application. In addition, it
is a good idea to use a bar code prefix that differs from any physical cartridge bar codes in a tape library
to which data may be migrated behind the VLS. That way, physical and virtual cartridges can be easily
recognized from within the backup application.

Check connectivity and performance with L&TT


• Install L&TT on the server and run it to scan and find the tape devices. See ”Library and Tape Tools” on
page 173 for more information.
• Check to ensure that the topology is as expected. L&TT will show the connectivity and topology for each
host HBA and tape device connected.
• Run the L&TT Device Performance test on each device to verify that the host to device data path is
configured for full speed operation. This step requires a write-enabled tape cartridge.

NOTE: HP recommends leaving L&TT installed on the host so it is readily available to run diagnostics if
directed by HP support.

HP StorageWorks 1000i Virtual Library System
The HP StorageWorks 1000i Virtual Library System (VLS1000i) is a RAID 5, serial ATA disk-based LAN
backup device that emulates standalone HP LTO-2 drives and HP Autoloader 1/8 with LTO-2 physical tape
drives, allowing you to perform disk-to-virtual tape (disk-to-disk) backups using your existing backup
applications.
The VLS1000i emulates the HP Autoloader 1/8 with LTO-2 physical tape libraries, including the tape drives
and cartridges inside the libraries. You determine the number of tape libraries a VLS1000i emulates, and
the number of tape drives and cartridges included in each tape library to meet the needs of your
environment. You configure the size of the virtual cartridges in your VLS1000i, which provides even more
flexibility. The VLS1000i emulates up to 6 tape libraries, 12 tape drives, and 180 cartridges.
The VLS1000i accommodates mixed IT platform and backup application environments, allowing all your
servers and backup applications to access the virtual media simultaneously. You specify which servers are
allowed to access each virtual library and tape drive you configure.

Benefits
Integrating a VLS1000i into your existing storage and backup infrastructure delivers the following benefits:

• Faster backups
The VLS1000i is optimized for backups and delivers faster performance than a simple disk-to-disk
solution. The VLS1000i emulates many more tape drives than are available in physical tape libraries,
allowing more hosts to run backups concurrently.
• Faster single file restores
A single file can be restored much faster from disk than tape.
• Lower operating costs
Fewer physical tape drives and cartridges are required as full backups to tape are eliminated. Also,
fewer cartridges are required as small backups stored on multiple virtual cartridges can be copied to
one physical cartridge.
• More efficient use of storage space
Physical tape libraries cannot share storage space with other physical tape libraries, and physical
cartridges cannot share storage space with other physical cartridges. This unused storage space is
wasted.
Storage space is not wasted in a VLS, because VLS storage space is dynamically assigned as it is used.
Storage space is shared by all the libraries and cartridges configured on a VLS1000i.
• Reduced risk of data loss and aborted backups
RAID 5-based storage is more reliable than tape storage.
Aborted backups caused by tape drive mechanical failures are eliminated.

Important concepts
To understand the configuration of the backup network and how it fits into the local-area network (LAN),
review the following sections.
Internet SCSI (iSCSI) protocol
Internet SCSI (iSCSI) is a standard protocol for universal access to shared storage devices over standard,
Ethernet-based transmission control protocol/Internet protocol (TCP/IP) networks. The connection-oriented
protocol transports SCSI commands, data, and status across an IP network.
The iSCSI architecture is based on a client-server model. The client is a host system that issues requests to
read or write data. iSCSI refers to a client as an initiator. The server is a resource that receives and
executes client requests. iSCSI refers to a server as a target.
File servers, which store the programs and data files shared by users, normally play the role of server. With
the VLS1000i, the application and backup servers within your network act as clients or initiators and the
VLS1000i acts as a server or target. The initiators can be either software iSCSI initiators or host bus
adapters (HBAs) on the server that is being backed up.

Disk-to-Disk-to-Tape (D2D2T) backup capabilities
The VLS1000i is a storage resource used by a single backup server or shared by multiple backup servers
using an Ethernet network. By using standard backup software, you can copy backup data that resides on
the VLS1000i to physical tape for long-term data retention.
The following illustration shows application servers sending backup data over a Gigabit Ethernet (GbE)
LAN to backup servers sharing VLS D2D storage over GbE.

NOTE: The connection from the Client — Tape can be either FC or direct attached SCSI.

Figure 40 Disk-to-Disk-to-Tape backup


In addition to being part of the LAN, the backup servers and the VLS1000i are part of the GbE backup
LAN.
Redundant Array of Independent Disks (RAID)
RAID provides convenient, low-cost, reliable storage by saving data on more than one disk drive
simultaneously. If one disk drive in a RAID 5 configuration becomes unavailable, the others continue to
work in a degraded state, thus avoiding downtime for users.
Emulation
The VLS1000i can emulate:
• A standalone tape drive, with a 1:1 relationship between cartridges and drives
• A library, with multiple cartridges and 1 or more drives
Both emulations are based on LTO-2 drive technology. When you use emulation, the disk drives on the
VLS1000i appear to your backup software as LTO-2 tape cartridges, which simplifies the setup process
while simultaneously providing data compression and the attributes of backing up data to disk.

NOTE: Data compression can be used, but it reduces the data transfer speed significantly.

Retention planning
Retention planning and sizing go hand in hand. How long do you need to keep data on disk? How many
full backups do you want to keep on disk? How many incremental backups? How do you want to optimize
retention times of the VLS1000i? Retention policies help you recycle virtual media. Bear the following
considerations in mind as you plan retention policies:
• If the data’s useful life is too short to warrant backup to tape, you might choose to keep it on disk.
• Once the retention period expires, the virtual media is automatically recycled (remember that you never
remove tapes from a virtual library so you want the backup application to keep re-using the same
virtual tapes based on their retention recycling periods).
• In your backup application you should set the tape expiration dates (that is, when the tape is marked as
worn out) high because virtual media does not wear out.
• Backup-job retention time is for virtual media.
• Copy-job retention time is for physical media.
• When copying through the backup application, the virtual and physical pieces of media are tracked
separately and the retention times should be considered and set individually.

Setting the bar code


Selecting Bar Code Seed tells the VLS to specify the first bar code label and to automatically generate
subsequent labels. Deselecting Bar Code Seed allows you to use your own naming convention. Type the
name (up to six alphanumeric characters) in the text box that appears.
The sequencing of the bar codes begins with the last character listed before the L2 suffix. For example, if you
entered a seed of AAAAAA, the next code is AAAAAB. When the code reaches AAAAAZ, it increments the
second-to-last character and starts over (AAAABA, AAAABB, and so on). If you entered a seed of
111111, the next number would automatically be 111112.
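
The sequencing rule can be sketched as a simple increment-with-carry. This is an illustration of the behavior described above, assuming each character position cycles through a single alphabet (A-Z for letters, 0-9 for digits) and that the appliance appends the L2 media suffix itself; it is not the VLS implementation.

```
# Minimal sketch of the bar code sequencing rule: increment the rightmost
# character; when it wraps, carry to the character on its left.
import string

def next_bar_code(seed: str) -> str:
    chars = list(seed)
    i = len(chars) - 1
    while i >= 0:
        alphabet = string.ascii_uppercase if chars[i].isalpha() else string.digits
        pos = alphabet.index(chars[i])
        if pos + 1 < len(alphabet):
            chars[i] = alphabet[pos + 1]
            return "".join(chars)
        chars[i] = alphabet[0]   # roll over and carry to the next character on the left
        i -= 1
    return "".join(chars)        # seed exhausted (for example ZZZZZZ)

print(next_bar_code("AAAAAA"))   # AAAAAB
print(next_bar_code("AAAAAZ"))   # AAAABA
print(next_bar_code("111111"))   # 111112
```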

IMPORTANT: If there is more than one VLS1000i on the same subnet, HP strongly recommends setting the
bar code seed manually for each appliance to avoid bar code conflicts.

Check connectivity and performance with L&TT


• Install L&TT on the server and run it to scan and find the tape devices. See ”Library and Tape Tools” on
page 173 for more information.
• Check to ensure that the topology is as expected. L&TT will show the connectivity and topology for each
host HBA and tape device connected.
• Run the L&TT Device Performance test on each device to verify that the host to device data path is
configured for full speed operation. This step requires a write-enabled tape cartridge.

NOTE: HP recommends leaving L&TT installed on the host so it is readily available to run diagnostics if
directed by HP support.

HP StorageWorks D2D Backup Systems
Overview
The HP StorageWorks D2D Backup Systems provide disk-based data protection for small or medium-sized
data centers and distributed environments. With 2.25 to 18 TB of useable capacity and speeds of over
540 GB/hour, you can significantly reduce your backup window. The D2D Backup Systems feature HP
Dynamic deduplication, allowing you to retain up to 50x more data on the same disk for efficient
storage and fast restore, and enabling low-bandwidth replication over a WAN for cost-effective off-site
backup and recovery. Integrating into your existing environment, a D2D Backup System works with your
backup software to automate and consolidate the backup of 6 to 24 servers onto a single, intelligent,
rack-mountable device. It reduces errors caused by media handling, and the D2D4112 adds proven
RAID 6 technology and a hot spare.

Features and benefits


• The HP StorageWorks D2D Backup Systems offer instant access to backups for rapid restores.
• The D2D Backup Systems offer high performance backup speeds of more than 540 GB/hour to a
disk-based system instead of sequentially to a tape drive or tape autoloader, meaning that you can
substantially reduce your backup window. A choice of iSCSI and 4 Gb Fibre Channel interfaces is
available.
• HP StorageWorks D2D Backup Systems include hardware-based RAID 5 or 6 to reduce the risk of data
loss due to disk failure, plus an additional hot spare in the D2D4112.
• HP StorageWorks D2D Backup Systems can be directly attached to stand alone tape drives, tape
autoloaders and tape libraries to provide a complete disk-to-disk-to-tape (D2D2T) data protection
solution.
• HP Dynamic deduplication reduces the disk space required to store backup data sets by up to 50x
without impacting backup performance. Retaining more backup data on disk for longer, enables
greater data accessibility for rapid restore of lost or corrupt files and reduces business productivity
impact.
• HP D2D Backup Systems with HP Dynamic deduplication reduce the network bandwidth needed for
remote data replication. With more cost-effective replication, these systems allow automated,
centralized backup of remote sites, and deliver a practical disaster recovery solution for data centers.
• The D2D4112 allows you to scale up from 9 to 18 TB of useable capacity as your data storage
requirements grow, using a simple and cost-effective capacity upgrade as a lower-cost alternative to
purchasing additional systems.
• HP StorageWorks D2D Backup Systems are either 1U or 2U and are easily rack-mounted in standard
racks for efficient use of space in the data center. Supported by all leading backup applications, this
allows the device to be installed and used without additional investment.
• HP StorageWorks D2D Backup Systems can back up from 6 to 24 servers using a standard Ethernet or
Fibre Channel network simultaneously to a disk-based solution at speeds of more than 540 GB per
hour instead of sequentially to a tape drive or autoloader, meaning that you can substantially reduce
your backup window.
• HP StorageWorks D2D Backup Systems have an intuitive web-based browser interface allowing you to
monitor your D2D Backup System, locally or remotely, to view results or change settings. This
self-managing device also reduces your routine maintenance.

HP dynamic deduplication
Data deduplication is a method of reducing storage needs by eliminating redundant data so that over time
only one unique instance of the data is actually retained on disk. As a result, up to 50x more backup data
can be retained in the same disk footprint.
Adding data deduplication to disk-based backup delivers a number of benefits:
• A cost effective way of keeping your backup data on disk for a number of weeks or even months. More
efficient use of disk space effectively reduces the cost-per-gigabyte of storage and the need to purchase
more disk capacity.
• Making file restores fast and easy from multiple available recovery points. By extending data retention
periods on disk, your backup data is more accessible for longer periods of time, before archiving to

tape. In this way lost or corrupt files can be quickly and easily restored from backups taken over a
longer time span.
• Ultimately, data deduplication makes the replication of backup data over low bandwidth WAN links
viable (providing offsite protection for backup data) as only changed data is sent across the connection
to a second device (either a second identical device or one that comes from this product family).
How it works
Deduplication works by examining the data stream as it arrives at the storage appliance, checking for
blocks of data that are identical and eliminating redundant copies. If duplicate data is found, a pointer is
established to the original set of data as opposed to actually storing the duplicate blocks, removing or
"deduplicating" the redundant blocks from the volume. The key here is that the data deduplication is
done at the block level to remove far more redundant data than deduplication done at the file level,
where only duplicate files are removed.
Data deduplication is especially powerful when it is applied to backup, since most backup data sets have
a great deal of redundancy. The amount of redundancy will depend on the type of data being backed up,
the backup methodology and the length of time the data is retained.
Example: backing up a large customer database that gets updated with new orders throughout the day.
With a typical backup application you would normally have to back up, and more importantly store, the
entire database with each backup (even incremental backups will store the full database again). With
block-level deduplication, you can back up the same database to the device on two successive nights and,
due to its ability to identify redundant blocks, only the blocks that have changed will be stored. All the
redundant data will have pointers established.
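
For illustration, the following sketch shows block-level deduplication with a hash index. The fixed 4 KB block size, SHA-256 hashing, and the sample data are assumptions made for the example; they do not describe the HP Dynamic or Accelerated deduplication implementations.

```
# Minimal illustration of block-level deduplication: identical blocks are stored
# once and later occurrences are recorded as pointers to the original block.
import hashlib

BLOCK_SIZE = 4096

def deduplicate(stream: bytes):
    store = {}        # hash -> block data (each unique block stored once)
    recipe = []       # ordered list of block hashes ("pointers") to rebuild the stream
    for offset in range(0, len(stream), BLOCK_SIZE):
        block = stream[offset:offset + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:          # unique block: keep the data
            store[digest] = block
        recipe.append(digest)            # duplicate block: only the pointer is kept
    return store, recipe

def restore(store, recipe) -> bytes:
    return b"".join(store[d] for d in recipe)

data = (b"A" * BLOCK_SIZE + b"B" * BLOCK_SIZE) * 4   # 8 blocks, only 2 unique
store, recipe = deduplicate(data)
assert restore(store, recipe) == data
print(f"{len(data)} bytes in, {sum(len(b) for b in store.values())} bytes stored")
```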
The HP approach to deduplication - D2D and VLS
Recognizing the differing needs of the small and medium businesses versus large and enterprise data
centers, HP has selected two different deduplication technologies to match each requirement.
HP Dynamic deduplication for HP StorageWorks D2D Backup Systems - meeting the needs of smaller IT
environments with requirements for low cost solutions, smaller storage capacities, ease of use and broad
compatibility.
HP Accelerated deduplication for HP StorageWorks Virtual Library Systems (VLS) - delivering maximum
benefit for data center environments by being optimized for performance and scalability. Accelerated
deduplication takes place outside of the backup window, therefore, allowing all system resources to be
focused on completing the backup before starting deduplication. Accelerated deduplication also retains a
full copy of the latest backup so that restore times are exceptionally fast.
HP Dynamic deduplication
The HP patented Dynamic deduplication algorithm has been designed specifically for smaller IT
environments, such as remote and branch offices and small data centers, to provide for low cost solutions
with a small footprint. It uses inline deduplication based on hash algorithms with additional levels of error
prevention and correction to verify the integrity of data backup and restore. Importantly, and unlike some
other forms of data deduplication technology, HP Dynamic deduplication is independent of the recorded
data format and works with most of the leading backup application packages.
What deduplication ratio can I expect?
The actual data deduplication ratio you can expect will depend on a number of factors, including the type
of data, the backup methodology used, and the length of time you retain your data. However, assuming a
standard business data mix and extended on-disk retention (periods of more than 12 weeks), you could
expect to see:
• 20:1 capacity ratio, assuming a weekly full and daily incremental backup model
• 50:1 capacity ratio, assuming daily full backups
For more information on achieving deduplication ratios, go to: http://www.hp.com/go/deduplication
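
As a back-of-the-envelope illustration of what these ratios mean for disk usage, the short calculation below uses hypothetical figures (a 1 TB data set retained for 12 weeks of daily full backups), not measured D2D results.

```
# Hypothetical example only: logical backup data retained versus physical disk
# consumed at the two example deduplication ratios quoted above.
data_set_tb = 1.0
weeks_retained = 12

logical_tb = data_set_tb * 7 * weeks_retained        # daily fulls for 12 weeks
print(f"Logical backup data retained: {logical_tb:.0f} TB")
for ratio in (20, 50):
    print(f"  at {ratio}:1 deduplication -> about {logical_tb / ratio:.1f} TB on disk")
```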

Compatibility
The HP D2D Backup Systems are supported on servers that use Microsoft Windows or Linux operating
systems, including HP ProLiant, HP Integrity Servers, and a variety of third-party servers.

For compatibility details on specific servers, see the following website for the latest hardware compatibility
information: http://www.hp.com/go/connect for HP StorageWorks D2D2500 and D2D4000 or
http://www.hp.com/go/ebs for HP StorageWorks D2D4000 and D2D4112.
The HP StorageWorks D2D Backup Systems support a variety of Fibre channel switches and HBAs. For
more details of SAN compatibility, see the following website for the latest information:
http://www.hp.com/go/ebs
For additional documents and information on the HP D2D Backup Systems, see:
http://www.hp.com/go/d2d

Figure 41 HP StorageWorks D2D2500 Backup System

Figure 42 Back panel of HP StorageWorks D2D2500 Backup System


1 Power socket
2 Network port 1 - always used for data connection
3 Network port 2 - used for data connection only if network is configured for
dual-port IP addresses
4 Management LAN port - do not use for data connection
5 Beacon LED
6 PCI slots

Figure 43 HP StorageWorks D2D4000 Backup System

Figure 44 Back Panel of HP StorageWorks D2D4000 Backup System
1 Power sockets
2 Network port 1 - always used for data connection
3 Network port 2 - used for data connection only if network is configured for
dual-port IP addresses
4 Management LAN port - do NOT use for data connection
5 PCI slots

Tape drives and performance


Tape drives
HP tape drives have varying levels of performance. Factors such as file size (larger is better), directory
depth, and data compressibility all affect system performance. Data interleaving during backup also
affects restore performance. Sending multiple data streams to HP StorageWorks tape libraries is a simple
way to scale backup performance. Table 5 shows performance information for various tape drives.

Table 5 Tape drive throughput speed (native)

Tape drive Throughput MB/s


Ultrium 1840 120

Ultrium 1760 80

Ultrium 960 80

Ultrium 920 60

Ultrium 448 25

Ultrium 232 16

SDLT 600 36

WORM technology
WORM (or Write Once, Read Many) storage is a data storage technology that allows information to be
written to storage media a single time and read many times, preventing the accidental or intentional
altering or erasing of the data. Driven by the recent growth of legislation in many countries, usage of WORM
storage is increasing for archiving corporate data such as financial documents, e-mails, and health
records. To meet the demands of this data growth, HP now supports WORM technology in the Ultrium
1840, Ultrium 1760, Ultrium 960, Ultrium 920, and SDLT600 tape drives. WORM tape drive technology is
supported in a variety of HP StorageWorks Tape Libraries, and is supported by many backup applications.
In addition to WORM media, the Ultrium 1840, Ultrium 1760, Ultrium 960, Ultrium 920, and SDLT600
tape drives are capable of reading and writing to standard Ultrium and SDLT media respectively. WORM
media can be mixed with other traditional media by using mixed media solutions as documented by HP.
For more information about support and compatibility visit http://www.hp.com/go/tape or
http://www.hp.com/go/ebs.

File (data) compression ratio
The more compressible the data, the faster the possible backup rate. The speed of the feed source must
increase to prevent drive idle time. For example, an Ultrium 1840 drive writes data at a maximum native
transfer rate of 120 MB/sec, which translates to 432 GB/hr. With 2:1 compression, the transfer rate
increases to 864 GB/hr.
HP tests show that not all data can be compressed equally. Table 6 shows typical compression ratios of
various applications. The compression ratio affects the amount of data that can be stored on each tape
cartridge, as well as the speed at which the tape drives can read or write the data.

Table 6 Typical file compression ratio

Data type Typical compression


CAD 3.8:1

Spreadsheet/word processing 2.5:1

Typical file/print server 2.0:1

Lotus Notes databases 1.6:1

Microsoft Exchange/SQL Server databases 1.4:1

Oracle/SAP databases 1.2:1
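
The conversion behind the transfer-rate figures quoted above (native MB/sec to GB/hr, and the effect of a compression ratio on effective throughput) is simple arithmetic. The sketch below uses the native speeds from Table 5; it is illustrative only.

```
# Simple conversion: native MB/s to GB/hour, and the effective rate at a given
# compression ratio (the host must feed data this fast to keep the drive streaming).
NATIVE_MB_PER_S = {"Ultrium 1840": 120, "Ultrium 960": 80, "Ultrium 920": 60}

def gb_per_hour(mb_per_s: float, compression: float = 1.0) -> float:
    return mb_per_s * compression * 3600 / 1000   # 1 GB = 1000 MB, as in the text

for drive, speed in NATIVE_MB_PER_S.items():
    print(f"{drive}: {gb_per_hour(speed):.0f} GB/hr native, "
          f"{gb_per_hour(speed, 2.0):.0f} GB/hr at 2:1 compression")
# Ultrium 1840: 432 GB/hr native, 864 GB/hr at 2:1 compression
```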

Ultrium performance
To optimize the performance of Ultrium tape drives in a Storage Area Network (SAN):
• Ensure that the source of the data to be backed up can supply the data at a minimum of 120 MB/sec
native for Ultrium 1840 and 80 MB/sec native for Ultrium 960 drives.
To optimize the performance of Ultrium tape drives in a UNIX environment:
• Do not use the native backup applications. UNIX tar and cpio provide poor performance.
• Use a third-party backup application.
• Structure the file system so it can make use of the concurrency feature offered in almost all UNIX
third-party backup applications.
Concurrency is an alternative way of streaming high-performance tape drives. This means backing up
multiple data sources simultaneously to a single tape drive. The format on tape is then an interleaf of
the data on the disks. Verify that the backup software supports concurrency. HP Data Protector, EMC
NetWorker, and Symantec NetBackup all support concurrency. This technique can also be applied to
network backups where the backup jobs from independent hosts are interleaved as they are passed
over the network and are written to tape.

NOTE: Concurrency will increase backup performance; however, restore performance will be negatively
impacted. The files are not sequential. Rather, they are broken up and distributed across the tape.

HP StorageWorks Interface Manager and Command View TL
The HP StorageWorks Interface Manager provides the first step toward automating EBS. The Interface
Manager card is a management card designed to consolidate and simplify the management of multiple
Fibre Channel interface controllers installed in the library. It also provides SAN-related diagnostics and
management for library components including interface controllers, drives, and robotics. The Interface
Manager card, in conjunction with HP StorageWorks Command View TL software, provides remote
management of the library via a serial, telnet, or web-based GUI interface.

IMPORTANT: Command View TL, the Interface Manager card, and interface controllers have specific
firmware dependencies. See the Interface Manager Firmware Required Minimum Component table in the
HP Enterprise Backup Solutions Compatibility Matrix.

NOTE: Some of the default host map settings that the Interface Manager applies to the library may not be
appropriate for every data center. Due to the automation of the host mappings, certain customer data
centers may need to customize the mapping for their EBS environment.

Library partitioning
Partitioning provides the ability to create separate, logical tape libraries from a single tape library. Logical
libraries (partitions) behave like a physical library to backup and restore applications. The ESL9000, ESL
E-Series, and the EML E-Series libraries all support partitioning. The advanced version of Secure Manager
in conjunction with HP StorageWorks Command View TL is required to enable and configure partitioning
in these libraries.
Mixed media support
Partitioning a library enables the use of mixed media in a library with various backup applications. A
library can consist of multiple drive and media types. Similar drive types and media can be grouped
together as one partition, with a maximum of six partitions.

NOTE: For additional information, see the HP StorageWorks Partitioning in an EBS Environment
Implementation Guide available on the HP website: http://www.hp.com/go/ebs.

Command View TL/Secure Manager mapping algorithms


The Command View TL Secure Manager feature enables users to grant Fibre Channel host bus adapters
access to devices in a tape library. Secure Manager automatically generates the Fibre Channel port LUN
to target device mappings that will be presented to a specific host bus adapter (or host). Without Secure
Manager, this process is manual, technical, and tedious if there are multiple interface controllers.
At a basic level, the process includes the following steps:
1. Determine which devices are connected to an FC interface controller.
2. Logically order the devices using device type and the logical tape drive location (or number).
3. The devices are then load balanced across the FC interface controller’s host side Fibre Channel ports (if
applicable).
4. For each Fibre Channel port, a sequential LUN value is given to each device assigned to that port.
The process is repeated for all FC interface controllers.
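A simplified sketch of this mapping process is shown below. It is illustrative only, not the Interface Manager code: the device names are invented, the robot is assumed to sort ahead of the tape drives, and the two-devices-per-port grouping follows the load-balancing rule described later in this chapter.

```
# Illustrative sketch: order the devices, assign them to host-side FC ports two at
# a time, and give each port sequential LUNs starting at 0.

def build_lun_maps(devices, num_ports=2, group_size=2):
    """devices: list of (device_type, drive_number); the robot sorts ahead of drives."""
    ordered = sorted(devices, key=lambda d: (d[0] != "robot", d[1]))
    maps = {port: {} for port in range(num_ports)}            # port -> {LUN: device}
    for index, device in enumerate(ordered):
        port = (index // group_size) % num_ports               # two devices per port, round robin
        maps[port][len(maps[port])] = device                    # next sequential LUN on that port
    return maps

library = [("robot", 0), ("tape", 1), ("tape", 2), ("tape", 3), ("tape", 4)]
for port, lun_map in build_lun_maps(library).items():
    print(f"FC port {port}: {lun_map}")
```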
Mapping requirements lead to rules
The rules or algorithms that the Interface Manager uses to generate LUN maps are based on a list of
requirements that were developed to ensure that the resulting LUN maps will work well in nearly all SAN
environments.

New requirements:
• Must have Basic Secure Manager functionality: All hosts with access privileges see all library devices.
• Must have Advanced Secure Manager functionality: Each host with access privileges can see any, or
all, library devices.
• Mapping will automatically load balance to prevent overloading a single host-side FC port.
• Mapping will detect common cabling errors and automatically compensate for them.
• Active Fabric LUNs will be disabled where possible.
   • This simplifies the installation by only showing tape or robot devices in most cases.
• Mapping will automatically assign LUN numbers.
• Mapping changes will have as little impact as possible on host applications.
• All maps must be compatible with HP-UX, Windows, and Solaris.
   • This led to the requirement for always having a LUN 0, and for that LUN to be either a tape or robot.
• Map changes will not require a reboot to take effect.

Interface Manager modes: automatic and manual


The Interface Manager card operates in two different modes: automatic and manual. In automatic mode,
the Interface Manager automatically configures and enforces a consistent configuration across all FC
interface controllers including port settings and mapping. By enforcing this configuration, automatic mode
greatly speeds up the mapping and port configuration process as well as eliminating the chance that a FC
interface controller will be improperly set up.
There are some specific configurations that automatic mode will not currently address. For these cases, the
Interface Manager provides a manual mode. However, as the name implies, manual mode fully relies on
the correct set up and maintenance of the fibre port and map settings; therefore, manual mode requires
more handling from the administrator and increases the chances that errors will be made in the FC
interface controller configurations.
For simplicity and reliability, automatic mode is the recommended mode setting for nearly all installations.
However, certain configurations require the library administrator to put the Interface Manager in manual
mode.
The conditions where Manual mode is needed:
• FC interface controllers are connected to mixed topologies (for example, connected to switches and
directly to host HBAs).
• The FC interface controllers are connected to a mix of multiple speed switches.
• A specific operating system or software application is being used that has unique fibre LUN mapping
requirements.
• The configuration requires that 3 or 4 tape drives be assigned to a single Fibre Channel port instead of
1 or 2.
• Dual SAN configurations require manual mode.

NOTE: Any customizations created in Manual mode are lost when changing from Manual to Automatic
mode.

Basic vs. Advanced Secure Manager


Command View TL provides a default implementation of Secure Manager known as Basic Secure
Manager. Basic Secure Manager allows the library administrator to grant or deny a Fibre Channel host
bus adapter access to all devices in a tape library. Also, in Basic Secure Manager, all hosts use the same
device mapping. In Manual mode, changes made to the map from one host are seen by all the hosts that
are assigned access to the library.

With the Secure Manager license installed, Advanced Secure Manager will be enabled. Advanced Secure
Manager allows the library administrator to grant or deny hosts access to any combination of devices in
the library. Unlike Basic Secure Manager, each host can have a unique map. For example, the master
server in a backup solution might be the only server that can control the library robotics. In this scenario, all
backup servers may be granted access to the tape drives in the library, but will be denied access to the
robotics controller. The master server would be granted access to the robotics controller and possibly one
or more tape drives.

Interface Manager discovery


When the Interface Manager is initialized, all FC interface controllers register with the Interface Manager.
The Interface Manager then interrogates each FC interface controller to determine which devices are
available. If a tape drive is discovered, the tape drive is interrogated to determine drive attributes and
device serial number.

NOTE: If the tape library robotics controller is not discovered by the Interface Manager, the discovery
process cannot complete successfully. In this case, Secure Manager will be disabled until robotics
connectivity is established. This is done because the Interface Manager must rely on the library controller to
identify the drives belonging to the library and their logical positions.

After the robotics controller is discovered, the Interface Manager issues a “read element status” command
to determine the drive configuration of the library. The Interface Manager uses the read element status data
to determine the number of drives in the library, the logical position of each drive in the library, and the
serial number for each drive. The Interface Manager can then correlate the serial numbers returned by the
robotics controller with the serial number reported by each tape drive to determine the physical location of
each tape drive in the library.
Example:
The tape drive connected to SCSI Bus 0 on IC 1 reports a serial number of “XYZ.” Also assume the robotics
controller reports that the library has eight tape drives and logical tape drive 4 has a serial number of
“XYZ.” The Interface Manager can then use the serial number “XYZ” to identify the tape drive at SCSI Bus
0, IC 1 as logical tape drive 4. If the physical location for each reported tape drive is correlated to the
logical drive number, then the Interface Manager discovery process completes successfully and Secure
Manager will be available.
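
The following minimal Python sketch illustrates this correlation step. It is an illustration only, not the
Interface Manager's actual code; the data structures and function name are invented for the example.

def correlate_drives(scsi_drives, element_status):
    """Match drives found on the SCSI buses to logical drive numbers.

    scsi_drives:    list of dicts such as {"ic": 1, "bus": 0, "serial": "XYZ"},
                    one entry per tape drive the FC interface controllers found.
    element_status: dict mapping logical drive number -> serial number, as
                    reported by the robotics controller's read element status data.
    Returns a dict mapping (ic, bus) -> logical drive number, or None if any
    drive cannot be correlated (discovery would then fail and Secure Manager
    would remain disabled).
    """
    serial_to_logical = {serial: num for num, serial in element_status.items()}
    mapping = {}
    for drive in scsi_drives:
        logical = serial_to_logical.get(drive["serial"])
        if logical is None:
            return None  # drive serial number unknown to the robotics controller
        mapping[(drive["ic"], drive["bus"])] = logical
    return mapping

# The example from the text: the drive at IC 1 / SCSI Bus 0 reports serial "XYZ",
# and the robotics controller reports that logical drive 4 has serial "XYZ".
print(correlate_drives([{"ic": 1, "bus": 0, "serial": "XYZ"}], {4: "XYZ"}))
# -> {(1, 0): 4}
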

NOTE: Secure Manager is also available for target devices that were previously discovered but are
currently offline. For example, if tape drive 2 was initially discovered and subsequently taken offline for
repair, Secure Manager will operate with the previous device attributes until the drive is brought back
online.

Secure Manager mapping rules


The Fibre Channel port LUN to TARGET device mapping created by Secure Manager is referred to as a
Device Map in Command View TL. Secure Manager uses the following rules when creating and modifying
a Device Map. Each rule is covered in more depth in the following sections.
1. Devices are ordered based on type and logical position (drive number).
2. Each map will start with LUN 0.
3. New maps remove gaps in the LUN map, but modified maps leave them.
4. If there is more than one FC port on an FC interface controller, then load balancing algorithms are
used.
a. The first two tape devices are assigned to the first FC port at the next available LUN. The next two
tape devices are assigned to the second FC port at the next available LUN.
b. If there are remaining tape devices, then the next two tape devices are assigned to the first FC port,
and the next two to the second FC port. This is repeated until all tape devices are assigned.
5. Maintain FC Port/LUN Map after cabling change on same interface controller.

6. Advanced Secure Manager map creation performs Basic Secure Manager but then removes devices
and gaps, if necessary.
7. Maintain current FC Port/LUN assignments when adding devices in Advanced Secure Manager.
8. If devices are removed with Advanced Secure Manager, any gap made is retained.
9. If devices are removed and added with Advanced Secure Manager, attempts are made to not disturb
the other device mappings.
10. Active Fabric is the last LUN on a map of each FC port.
11. If an HBA has access to the robotics for a library partition, the logical robotics device will be added as
the next available LUN on the IC physically connected to the robotics.
12. Maps for partitioned libraries still follow the load balancing rules based on the physical drive location.
13. The order each partition is mapped depends on the order that the HBA was added to partitions.
1. Devices are ordered based on type and logical position (drive number).
• Secure Manager uses the drives’ logical position (drive number) as the basis for all mapping
operations.
• Robotics are always first in the order of devices; therefore, they are always assigned LUN 0 in a
non-partitioned library.
• Robotics are always assigned to FC Port 0.
• Drives have the next priority. They are ordered based on their logical position (the lower the logical
position or drive number, the lower the assigned LUN).
• Active Fabric, if present, will be the last device in a map.
Table 7 Normal device ordering

LUN FC Port 0
0 Robotics

1 Drive 1

2 Drive 2

3 AF

2. Each map will start with LUN 0.


LUN maps always start with LUN 0. Therefore the first device in any map will be assigned
to 0.
3. New maps remove gaps in the LUN map, but modified maps leave them.
New maps are filled without any gaps in the LUN numbering sequence. If a particular host is denied
access to a drive, a new LUN map for that host will NOT have an empty placeholder for that drive. Instead
the next logical drive will receive the LUN number that would have been used by the inaccessible drive.
See Table 8.
Modified maps do leave gaps in the LUN map. If a host previously could see all library devices and then
later access was withdrawn for a device, and doing this created a gap in the LUN map, that gap would
remain in order to maintain the addresses of the remaining devices in the map. See Table 8.

NOTE: Any changes to device access thereafter will follow the rules for modifying an existing map rather
than the rules for creating new maps.

Table 8 New and modified maps on a 1 FC port FC interface controller for a host that cannot access
drive 1

     NEW                      MODIFIED
LUN  FC Port 0           LUN  FC Port 0
0    Robotics            0    Robotics
1    Drive 2             1    —
2    —                   2    Drive 2

The new map did not have access to drive 1 when the map was first created. The modified map had
access to drive 1 when the map was created but was modified to remove access to it.
4. If there is more than one FC port on an FC interface controller, load balancing algorithms are used
Tape devices attached to a particular FC interface controller are sorted in ascending order by logical
position in the library. The first two tape devices are assigned to the first FC port at the next available
LUNs. The next two tape devices are assigned to the second FC port at the next available LUNs. If there
are more than two tape devices per FC port, then the following tape devices are assigned in a similar
fashion starting over at the first FC port.
Secure Manager takes the following steps when load balancing target devices across FC ports on a dual
port FC interface controller (such as an e2400-160):
1. If the robotics controller is attached to the FC interface controller, it is mapped to FC port 0 at LUN 0.
Table 9 Load balancing with robotics and four tape drives on a two FC port FC interface controller

LUN FC Port 0 FC Port 1


0 Robotics Drive 3

1 Drive 1 Drive 4

2 Drive 2

2. The tape devices attached to the FC interface controller are sorted in ascending order by logical
position in the library.
3. The first two tape devices are assigned to the first FC port at the next available LUNs.
Table 10 Load balancing with two tape drives on a two FC port FC interface controller

LUN FC Port 0 FC Port 1


0 Drive 1
1 Drive 2

NOTE: The drives are both assigned to FC Port 0 so that if drives 3 and 4 are added later, the maps will
be contiguous.

4. The next two tape devices are assigned to the second FC port at the next available LUNs.
Table 11 Load balancing with three tape drives on a two FC port FC interface controller

LUN FC Port 0 FC Port 1


0 Drive 1 Drive 3

1 Drive 2

Table 12 Load balancing with four tape drives on a two FC port FC interface controller

LUN FC Port 0 FC Port 1


0 Drive 1 Drive 3

1 Drive 2 Drive 4

5. If there are remaining tape devices, then the next two tape devices are assigned to the first FC port, the
next two to the second FC port. This is repeated until all tape devices are assigned.
Table 13 Load balancing with robotics and eight tape drives on a two FC port FC interface controller

LUN FC Port 0 FC Port 1


0 Robotics Drive 3

1 Drive 1 Drive 4

2 Drive 2 Drive 7

3 Drive 5 Drive 8

4 Drive 6

The above algorithms are applied to all FC interface controllers in the library.
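
To make the load-balancing rules above concrete, the following is a simplified Python sketch (illustrative
only; the function and variable names are invented and the firmware's real implementation is not
published). It ignores Active Fabric LUNs and partitioned libraries, but it reproduces the assignments shown
in Tables 9 through 13.

def basic_map(drive_numbers, has_robotics, num_fc_ports=2):
    """Build per-port LUN maps for one FC interface controller.

    drive_numbers: logical drive numbers attached to this IC, in any order.
    has_robotics:  True if the robotics controller is attached to this IC.
    Returns one dict per FC port, mapping LUN -> device name.
    """
    maps = [dict() for _ in range(num_fc_ports)]
    if has_robotics:
        maps[0][0] = "Robotics"            # robotics always on FC port 0 at LUN 0
    port = 0
    for i, drive in enumerate(sorted(drive_numbers)):
        if i and i % 2 == 0:               # two drives per port, then move to the next port
            port = (port + 1) % num_fc_ports
        maps[port][len(maps[port])] = f"Drive {drive}"   # next available LUN on that port
    return maps

# Robotics plus eight drives on a two-port IC reproduces Table 13:
for port, lun_map in enumerate(basic_map(range(1, 9), has_robotics=True)):
    print(f"FC Port {port}: {lun_map}")
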
5. Maintain FC Port/LUN map after cabling change on same interface controller.
If a device’s cable is moved from one port to another on the same FC interface controller, Secure Manager
attempts to maintain the current FC Port/LUN mapping for the device.
Because devices are mapped by logical position, the Interface Manager can correct for devices that have
been cabled to different ports on the FC interface controller at power up and FC interface controller
reboot. This remapping is not available if the tape device has been moved to a different FC interface
controller. The purpose of this feature is to maintain a consistent view of the devices for the hosts connected
to the library.
6. Advanced Secure Manager map creation is done like Basic Secure Manager but then removes devices and gaps,
if necessary.
Advanced Secure Manager mapping starts the same way as Basic Secure Manager LUN mapping. Then
for each host that cannot see the entire library, devices are removed from the map, and any gaps they
make are removed.
The bottom line is that the same rules for creating maps apply to both Basic and Advanced Secure
Manager.
• LUN numbers are only assigned to the devices the host can access.
• LUN numbers always start at 0 and are consecutive (no gaps).

NOTE: Any changes to device access thereafter follow the rules for modifying an existing map rather than
the rules for creating new maps.

Table 14 Advanced Secure Manager Mapping step 1: Maps are created using the same rules as Basic
Secure Manager

LUN FC Port 0 FC Port 1


0 Robotics Drive 3

1 Drive 1 Drive 4

2 Drive 2

Table 15 Advanced Secure Manager Mapping step 2: Devices the host cannot access are removed

LUN FC Port 0 FC Port 1

0 Robotics (removed) Drive 3

1 Drive 1 Drive 4 (removed)

2 Drive 2

Table 16 Advanced Secure Manager Mapping step 3: Remove gaps in the map

LUN FC Port 0 FC Port 1


0 Drive 1 Drive 3

1 Drive 2

7. Maintain current FC Port/LUN assignments when adding devices in Advanced Secure Manager.
If an existing map is modified to add a device, previous FC Port/LUN assignments are retained in an
attempt to present a consistent device mapping to the host. The device map is not re-ordered when devices
are added.

NOTE: Devices are added to the FC Port that they would have been assigned to using Basic Secure
Manager Rules.

Single Fibre Channel port example


• Assume Advanced Secure Manager is enabled and that a host has initially been given access to the
robotics and drives 2-4:
Table 17 Map for host with access to robotics and drives 2-4

LUN FC Port 0
0 Robotics

1 Drive 2

2 Drive 3

3 Drive 4

• If the library administrator later grants the host access to Drive 1, the current FC Port/LUN mappings are
retained and the new Device Map will look like:
Table 18 Map for host given access to drive 1 after robotics and drives 2-4 were mapped

LUN FC Port 0
0 Robotics

1 Drive 2

2 Drive 3

3 Drive 4

4 Drive 1

8. If modifying an existing map to remove device access with Advanced Secure Manager, any gap made is retained.
If Advanced Secure Manager is used to remove access to a device for a host with a preexisting map, any
gap this change makes is maintained.

Table 19 1 FC port Advanced Secure Manager device access removal: Removing device

LUN FC Port 0
0 Robotics (removed)

1 Drive 1

2 Drive 2

Table 20 1 FC port Advanced Secure Manager device access removal: End result

LUN FC Port 0
0

1 Drive 1

2 Drive 2

Some operating systems have issues with non-contiguous LUN maps. Therefore it is recommended to avoid
gaps in the LUN map if at all possible.
There are three methods of removing the gap(s) created by this process.
1. All hosts using this map need to be removed and re-added, and a new map needs to be created.
2. Changing the Mode from Automatic to Manual and back to Automatic clears out all customizations (not
recommended unless the number of customizations is low).
3. Add access to a device that is connected to that FC interface controller and that would normally be
mapped to that FC port; the device will fill the first gap in the LUN order.

NOTE: Options 1 and 2 require the backup software to reconfigure the library. Option 3 should only
require reconfiguring the software for the new device.

9. If devices are removed and added with Advanced Secure Manager, attempts are made to not disturb the other
device mappings.
If host access is changing to add and remove devices, efforts are made to not disturb the devices. If
possible, the newly added device fills the gap made by the removed device. This is done to retain the LUN
assignments of the other devices. A device is only added back to the FC port that it would have been
assigned to in Basic Secure Manager (that is, Robotics and drives 1 and 2 will always be on FC port 0).

NOTE: When devices need to be removed and added, it is recommended to remove devices first and
then add new devices second, to prevent or lessen the chance of creating gaps in LUN maps, which may
create problems in some operating systems.

Table 21 Advanced Secure Manager device access change step 1 (1 FC Port): Remove devices

LUN FC Port 0
0 Robotics (removed)

1 Drive 1

2 Drive 2

Table 22 Advanced Secure Manager device access change step 2 (1 FC Port): Add devices to fill gaps if
possible

LUN FC Port 0
0 Drive 3

1 Drive 1

2 Drive 2

Example:
A host has access to the robotics and drives 2 and 4.

Table 23 Map from 2 FC port IC with access to the robotics and drives 2 and 4

LUN FC Port 0 FC Port 1


0 Robotics Drive 4

1 Drive 2

Advanced Secure Manager is used to remove access to the robotics.

Table 24 Robotics access is removed

LUN FC Port 0 FC Port 1

0 Robotics (removed) Drive 4

1 Drive 2

Table 25 Access to drive 3 is added



LUN FC Port 0 FC Port 1


0 Drive 4

1 Drive 2 Drive 3
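
The following minimal Python sketch illustrates the modify-map behavior described in rules 7 through 9,
assuming a single FC port map represented as a LUN-to-device dictionary (None marks a gap). It is
illustrative only, and it omits the rule that a device can only be added back to the FC port it would
receive under Basic Secure Manager.

def remove_device(port_map, device):
    """Remove a device from an existing map, keeping the gap (rule 8)."""
    for lun, dev in port_map.items():
        if dev == device:
            port_map[lun] = None             # leave an empty LUN; do not renumber the others
    return port_map

def add_device(port_map, device):
    """Add a device to an existing map, filling the first gap if one exists
    (rule 9); otherwise append it at the next available LUN (rule 7)."""
    for lun in sorted(port_map):
        if port_map[lun] is None:
            port_map[lun] = device
            return port_map
    port_map[max(port_map, default=-1) + 1] = device
    return port_map

# Tables 21 and 22: remove the robotics, then add Drive 3 into the gap at LUN 0.
lun_map = {0: "Robotics", 1: "Drive 1", 2: "Drive 2"}
add_device(remove_device(lun_map, "Robotics"), "Drive 3")
print(lun_map)   # {0: 'Drive 3', 1: 'Drive 1', 2: 'Drive 2'}
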

10. Active Fabric is the last LUN on a map of each FC port.


One Active Fabric (AF) controller LUN is included as the last LUN in each FC port map that has a drive
with a Direct Backup (X-Copy) license. Active Fabric is used by L&TT to communicate with the FC interface
controllers and by some software for X-Copy/Direct Backup (a licensed functionality) implementation.

IMPORTANT: Active Fabric is NOT displayed in Secure Manager. Active Fabric does not conform to the
rules that other devices are governed by. It will always be the last LUN in the map for each FC port. If
another device is added to the end of the list, it will take the LUN currently occupied by AF and AF will take
the LUN after the device. If the last device is removed, then AF will move to fill in the gap.

Table 26 Map of a four-drive library on a two FC port IC with Direct Backup enabled on Drive 3 or 4 and
AF visible

LUN FC Port 0 FC Port 1


0 Robotics Drive 3

1 Drive 1 Drive 4

2 Drive 2 AF

Example:
AF mapping when access to a device is added.
Table 27 LUN map where access to drive 2 is not granted and Direct Backup is enabled for drive 1

LUN FC Port 0
0 Robotics

1 Drive 1

2 AF

Table 28 AF LUN map placement after adding drive 2

LUN FC Port 0
0 Robotics

1 Drive 1

2 Drive 2

3 AF

Example:
AF mapping when access to a device is removed.

Table 29 LUN map where access to drive 2 is being removed and direct backup is enabled for drive 1

LUN FC Port 0
0 Robotics

1 Drive 1

2 Drive 2

3 AF

Table 30 AF map placement after drive 2 is removed

LUN FC Port 0
0 Robotics

1 Drive 1

2 AF

11. If an HBA has access to the robotics for a library partition, the logical robotics device is added as the next
available LUN on the IC physically connected to robotics.
Each partition of a partitioned library has its own logical robotics device. This device is mapped to the FC
port 0 of the IC that is physically connected to the robotics. The logical robotics device is first in order of
devices for that partition but it will not have any priority over devices that have already been mapped to a
particular HBA. In other words, it will have the highest priority in the new devices added to the map but it
will not displace any device previously added to the map.

NOTE: When partitioning is in use, only logical (or virtual) robotics devices are mapped. The physical (or
actual) robotics device will not appear in any LUN map.

Table 31 A map for an HBA with access to the robotics and drives for Partition 1 and Partition 2

LUN FC Port 0
0 Robotics (Partition 1)

1 Physical Drive 1 (Partition 1, Drive 1)

2 Physical Drive 2 (Partition 1, Drive 2)

3 Robotics (Partition 2)

4 Physical Drive 3 (Partition 2, Drive 1)

5 Physical Drive 4 (Partition 2, Drive 2)

Partition 1 has drives 1 and 2. Partition 2 has drives 3 and 4. All drives and robotics are connected to a 1
host port IC, and the HBA was given access to Partition 1 first.

Table 32 A map for an HBA with access to the robotics on two partitions

LUN FC Port 0
0 Robotics (Partition 1)

1 Robotics (Partition 2)

The IC physically connected to the robotics has only one host port and is not connected to any drives.

NOTE: If a host has access to the robotics and drives for many partitions of a partitioned library, then the
map for the IC connected to the robotics could exceed eight LUNs. If this occurs, then ensure that the HBA,
OS, drive, and software have support for more than eight LUNs.

12. Maps for partitioned libraries still follow the load balancing rules based on the physical drive location.
The load balancing algorithms used to distribute traffic between FC Port 0 and FC Port 1 still use the same
rules as non-partitioned libraries and are based on the physical instead of the logical partition drive
numbering.

Table 33 A map for an HBA with access to the drives for Partition 1 and Partition 2

LUN FC Port 0 FC Port 1


0 Physical Drive 1 (Partition 1, Drive 1) Physical Drive 3 (Partition 2, Drive 1)

1 Physical Drive 2 (Partition 1, Drive 2) Physical Drive 4 (Partition 2, Drive 2)

Partition 1 has physical drives 1 and 2. Partition 2 has physical drives 3 and 4. All drives are connected to
a two FC port IC, and the HBA was given access to Partition 1 first.

Table 34 A map for an HBA with only access to Partition 2

LUN FC Port 0 FC Port 1


0 — Physical Drive 3 (Partition 2, Drive 1)

1 — Physical Drive 4 (Partition 2, Drive 2)

Partition 2 has physical drives 3 and 4. Physical drives 3 and 4 are connected to a two host FC port IC.
The physical robotics is connected to a different IC not represented in Table 34.
13. The order each partition is mapped depends on the order that the HBA was added to partitions.
Because the mapping occurs when each HBA is added to a partition, the order that an HBA is added to
partitions governs the order in which each partition’s devices show up in the maps for that HBA. For this
reason, HP recommends that each HBA be added to partitions in order starting with partitions containing
the lowest numbered physical drives and ending with the highest numbered physical drives.
Example:
An HBA is added to Partition 1 and then Partition 2. Partition 1 contains physical drive 1. Partition 2
contains physical drive 2. The robotics, drive 1 and drive 2 are all connected to an IC with one host port.

Table 35 The HBA is granted access to Partition 1

LUN FC Port 0
0 Robotics (Partition 1)

1 Physical Drive 1 (Partition 1, Drive 1)

Table 36 The HBA is then granted access to Partition 2

LUN FC Port 0
0 Robotics (Partition 1)

1 Physical Drive 1 (Partition 1, Drive 1)

2 Robotics (Partition 2)

3 Physical Drive 2 (Partition 2, Drive 1)

Example:
An HBA is added to Partition 2 and then Partition 1. Partition 1 contains physical drive 1. Partition 2
contains physical drive 2. The robotics, drive 1 and drive 2 are all connected to an IC with one host port.

Table 37 The HBA is granted access to Partition 2

LUN FC Port 0
0 Robotics (Partition 2)

1 Physical Drive 2 (Partition 2, Drive 1)

Table 38 The HBA is then granted access to Partition 1

LUN FC Port 0
0 Robotics (Partition 2)

1 Physical Drive 2 (Partition 2, Drive 1)

2 Robotics (Partition 1)

3 Physical Drive 1 (Partition 1, Drive 1)

Basic Secure Manager and manual mapping


If Basic Secure Manager is used, all devices are included in the map. Every host that is granted access to
the library has the same view of the devices. For example, if a device map is modified (manual mode) for
a particular host, all hosts will see the modifications.
See the white paper, Command View TL/Secure Manager Mapping Algorithms, on the HP website at:
http://www.hp.com/go/ebs, under Related Information.

Interface Manager card problems
Table 39 and Table 40 describe the status and network LEDs for the Interface Manager card.

Table 39 Status LED diagnostic codes

Red LED                          Green LED                        Description

On                               Off                              BIOS code failed to run.

Blinks 1x per 5 second interval  Off                              Hardware POST failed. No firmware images are loaded.

Blinks 2x per 5 second interval  Off                              No CompactFlash disk or valid boot sector image found.
                                                                  Transfer the memory module from the old card to the new
                                                                  card if the Interface Manager was replaced.

Blinks 3x per 5 second interval  Off                              Specified firmware image files were not found. Neither the
                                                                  current nor the previous image was found.

Blinks 4x per 5 second interval  Off                              Load or execute command failed (boot code remains at end
                                                                  of process). This indicates that load, decompress, or
                                                                  execution failed on both the current and previous image
                                                                  files.

Off                              Blinks 1x per 5 second interval  Normal state. Load or execute command succeeded. Boot
                                                                  code successfully loaded, decompressed, and initiated
                                                                  execution of one of the image files.

Table 40 Network link activity/speed LEDs

LED                                   Status    Description

Link Activity LED                     Off       Port disconnected / no link
(left side of each Ethernet port)     On        Port connected to another Ethernet device
                                      Flashing  Data is being transmitted / received

Link Speed LED                        On        Port is operating at 100 Mbps
(right side of each Ethernet port)    Off       Port is operating at 10 Mbps, or port is not connected
                                                (see Link Activity LED)

Table 41 describes common symptoms relating to the Interface Manager card and how to resolve them.

Table 41 Common Interface Manager issues

Symptom: Command View TL server does not detect the Interface Manager card

  Possible cause: Bad network connection
  Solution:
  • Verify that the Interface Manager card and the management station are correctly connected to the LAN.
  • Use LEDs to troubleshoot Ethernet cabling.
  • Ping the Interface Manager to verify network health.

  Possible cause: Interface Manager card not powered on or in ready state
  Solution:
  • Power on the library. Observe status and link LEDs.
  • Interface Manager must be at firmware I120 or higher on an ESL E-series library.
  • Interface Manager must be at firmware I130 or higher if connected to an e2400-FC 2G.

  Possible cause: Incorrect IP address
  Solution:
  • Verify that the correct IP address of the Interface Manager card is entered in Command View TL.
  • See the HP StorageWorks ESL E-Series Unpacking and Installation Guide for information on obtaining
    the correct IP address using the OCP.
  • Configure Command View TL with the correct IP address. See the HP StorageWorks Interface Manager
    and Command View TL User Guide for information on adding a library or see
    http://www.hp.com/support/cvesl.

Symptom: Interface Manager card does not detect one or more FC interface controllers

  Possible cause: Bad network connection
  Solution:
  • Verify that the Interface Manager card is properly connected to the FC interface controllers and that
    the cables are good.
  • Use LEDs to troubleshoot Ethernet cabling.
  • See the HP StorageWorks ESL E-Series Unpacking and Installation Guide for more information.

  Possible cause: Incorrect interface controller, or controller has less than minimum required firmware
  Solution: Make sure that the e2400-160 interface controller has lettering to the side of the ports. If
  lettering is above or below the ports, then the wrong controller type was installed. Contact the service
  provider. Update the firmware to the latest version as indicated in the HP Enterprise Backup Solutions
  Compatibility Matrix, and restore the defaults on the interface controller (e2400-160 or e1200-160).

  Possible cause: Defective Interface Manager card or FC interface controller
  Solution: Observe status and link LEDs. Replace the defective card or controller.

Symptom: Interface Manager card does not detect drives or library

  Possible cause: SCSI cables not connected properly
  Solution: Check cabling connections.

  Possible cause: SCSI settings or termination not set properly
  Solution:
  • Check the SCSI settings for the device.
  • Check that the SCSI bus is properly terminated and ensure that the terminator LEDs indicate a normal
    state (green).

  Possible cause: Timing issues
  Solution: Reset the corresponding FC interface controller.

  Possible cause: Drive not powered on or in ready state
  Solution:
  • Make sure the drive is not set to off.
  • Troubleshoot the drive.

Symptom: Command View TL does not run in the browser

  Possible cause: Incompatible browser version or Java support not enabled
  Solution:
  • Make sure to use a minimum of Microsoft Internet Explorer v6.0 SP1 or later, or Netscape Navigator
    v6.2 or later.
  • Make sure that Java support is enabled in the browser.

  Possible cause: Java Runtime Environment (JRE) not installed
  Solution: Download and install the Java 2 Platform, Standard Edition v1.4.2 or later from
  http://wwws.sun.com/software/download/technologies.html.

  Possible cause: Bad network connection or network down
  Solution:
  • Check all physical network connections. If the connections are good, contact the network administrator.
  • Ping the management station. If pinging fails and the IP address is correct, contact the network
    administrator.

  Possible cause: Wrong IP address
  Solution: Check the IP address of the management station. On the management station, open a command
  shell and enter ipconfig. Use this IP address (or the network name of the management station) in the URL
  to access Command View TL.

  Possible cause: Management station not running, or Command View TL service not running on the
  management station
  Solution:
  • Check to see if the management station is operational.
  • Use the Services applet to verify that the Command View TL service is running on the management
    station. Click Start > Settings > Control Panel > Administrative Tools > Services.
Fibre Channel interface controller and Network Storage Router
The HP StorageWorks FC interface controller and HP StorageWorks Network Storage Router (NSR) are
Fibre Channel-to-SCSI routers that enable a differential SCSI tape device to communicate with other devices
over a SAN.
Table 42 outlines the recommended maximum device connections per SCSI bus and per Fibre Channel
port. The purpose of these recommendations is to minimize SCSI issues and maximize utilization of the
Fibre Channel bandwidth.

NOTE: The Interface Manager uses custom algorithms to determine how devices are presented and
mapped. See the white paper Command View TL/Secure Manager Mapping Algorithms for additional
information.

Table 42 Recommended maximum drive connections

Drive Type    Number of drives  Number of drives  Number of drives  Number of drives
              per SCSI bus      per 1 Gb FC       per 2 Gb FC       per 4 Gb FC

Ultrium 232   2                 2                 4                 4

Ultrium 448   1                 1                 2                 2

Ultrium 960   1                 1                 1                 2

SDLT 600      1                 1                 2                 2

Common configuration settings


To provide connectivity between hosts and devices, the router must establish an address on each connected
Fibre Channel network and SCSI bus. The following paragraphs discuss configuration settings that are
commonly modified and are available in the Visual Manager UI and the Serial/Telnet UI. For procedural
information on accessing and changing these settings, refer to the documentation that ships with the router.
SCSI bus configuration
The router provides the capability to reset SCSI buses during the router boot cycle. This allows devices on a
SCSI bus to be in a known state. The reset option can be enabled/disabled during configuration of the
router. The SCSI bus reset feature is enabled in the default configuration, but it should be disabled for
configurations using multiple initiators, for tape changers or other devices that have long reset cycles, or
for environments that are adversely affected by bus resets.
The router negotiates the maximum values for transfer rates and bandwidth on a SCSI bus. If an attached
SCSI device does not allow the full rates, the router will use the best rates it can negotiate for that device.
Because negotiation is on a device-specific basis, the router can support a mix of SCSI device types on the
same SCSI bus.
Fibre Channel port configuration
By default, the configuration of the Fibre Channel ports is set to N_Port, forcing the router to negotiate a
fabric only mode. This minimizes conflicts when both the router and another Fibre Channel device, such as
a switch, are using Auto Sensing for Fibre Channel ports.

NOTE: By default, the Fibre Channel port speed is set to 4 Gb/s. Changes to the Fibre Channel port
speed, for example to 2 Gb/s, must be made manually. If the speed is set incorrectly and the router is
plugged into a loop or fabric, the unit may receive framing errors, which can be found in the trace logs,
and the fiber link light will be off because of the incorrect Fibre Channel link speed.

Fibre Channel switched fabric configuration


When connected to a Fibre Channel switch, the router is identified to the switch as a unique device by the
factory programmed World Wide Name (WWN).

Discovery mode
This feature makes it easy to discover attached Fibre Channel and SCSI target devices and automatically
map them on the host side for the bus/port in question.
There are two discovery methods available:
• Manual discovery
• Auto discovery
Auto Discovery can be set to occur after reboot events (when the router reboots) or link-up events (for
instance, when cables are attached). Auto Discovery can be disabled by setting the router to Manual
Discovery.
Host device configuration
A host system using a Fibre Channel host bus adapter (HBA) will typically map devices into the existing
device-mapping scheme used by that operating system. Refer to the HBA manual for the mapping table.
Refer to the configuration chapter of the HBA manual for any important information regarding host
configuration.
Logical Unit Management
Because SAN resources can be shared, it is possible for multiple hosts to have access to the same devices
on the SAN. To prevent conflicts, the router provides LUN management as a means to restrict device access
to certain hosts. LUN management goes beyond simple LUN masking, to prevent gaps in the list of LUNs
presented to a host.
LUN management maps can be created for different views of the devices attached to the router. Each Fibre
Channel host is assigned a specific map configuration. Not only can the administrator control which
devices a host may access, but also which LUNs are used to access these devices.
For a Fibre Channel host, a map is a table of LUNs, where each entry is either empty or contains device
address information needed for host/device communication.
For a SCSI host, a map contains a list of target IDs, each of which has its own table of LUNs with address
information needed for host/device communication.

NOTE: The router can respond to multiple target IDs (also known as Alternate Initiator ID) on a SCSI bus.
This feature is not currently supported with HP tape libraries.

Both Fibre Channel ports and SCSI buses have pre-defined maps.
There are four pre-defined maps:
• Indexed (default)
• Port 0 device map
• Auto assigned
• SCC

NOTE: If the fabric port ID of a host HBA changes, then the tape library Fibre Channel interface
controller(s) may need to be rebooted to pick up the new port ID and ensure that the proper device map is
given to the host HBA.

Some actions that will cause a fabric port ID change include:


• Moving the host HBA FC cable to a different FC switch port.
• Changing the FC switch fabric ID.
When a host sends a command, the router will select which map to use, based on the port receiving the
command and the ID of the host sending the command. For Fibre Channel ports, the host ID is the World
Wide Name; for SCSI buses, the host ID is the initiator ID (0 - 15). When a host is unknown or is not
assigned a specific map, the router will use the default map.
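
Conceptually, this selection can be pictured as a lookup keyed by port and host ID with a fall-through to
the default map, as in the illustrative Python sketch below. The WWNs and map contents are invented;
real maps are configured through the router's Visual Manager or Serial/Telnet UI.

# Invented example data: one host-specific map and a default (indexed) map per port.
maps = {
    ("FC0", "50:06:0b:00:00:aa:bb:cc"): {0: "Robotics", 1: "Drive 1"},
    ("FC0", "default"): {0: "Drive 1"},
}

def select_map(port, host_id):
    """Return the map assigned to this host on this port, else the port's default map.
    For Fibre Channel ports the host ID is a WWN; for SCSI buses it is the initiator ID (0-15)."""
    return maps.get((port, host_id), maps[(port, "default")])

print(select_map("FC0", "50:06:0b:00:00:aa:bb:cc"))   # host-specific map
print(select_map("FC0", "10:00:00:00:c9:12:34:56"))   # unknown host falls back to the default map
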

Indexed maps
An indexed map is initially empty.
Port 0 device maps
The Port 0 device map is used when editing and assigning oncoming hosts.
Auto assigned maps
An Auto assigned map is built dynamically and contains all of the devices found during discovery. This
map changes automatically any time the discovery process finds a change in the devices attached. This
map cannot be modified.
SCC maps
An SCC map is only available on Fibre Channel ports and contains only a single entry for LUN 0. This
LUN is a router controller LUN. Access to attached devices is managed using SCC logical unit addressing.
Buffered tape writes
This option is designed to enhance system performance by returning status on consecutive write commands
prior to the tape device receiving data. In the event that data does not transfer correctly, the router will
return a check condition on a subsequent command.
Commands other than Write are not issued until status is received for any pending write, and status is not
returned until the device completes the command. This sequence is appropriate for tasks such as file
backup or restore.
Some applications require confirmation of individual blocks being written to the medium, such as for audit
trail tapes or log tapes. In these instances, the Buffer Tape Writes option must be disabled.
Connecting the router
When physically connecting the tape library to the router, HP strongly recommends that the tape devices
be connected in sequential order. For example, the library controller and the first pair of tape drives (drive0
and drive1) should be connected to the first SCSI bus, the second pair of tape drives (drive2 and drive3)
to the second SCSI bus, and so on. Connecting the devices in this manner provides a
consistent view of the devices across platforms and, should problems arise, aids in the troubleshooting
process.

Network Storage Routers have limited initiators for single- and dual-port routers
The maximum number of active initiators for the HP StorageWorks Network Storage Router (NSR) is 250
on 2Gb routers with firmware 5.6.87 or newer, and on the 4Gb NSR/IFC with firmware 5.7.18 or newer.
Prior to these indicated firmware revisions, the maximum number of initiators was 128.
An initiator is any device that has logged into the router; it counts toward the limit even if it is not currently
transmitting data or currently logged in. The initiator count includes hosts, switches, array controllers, and FC router ports. Each instance
of the initiator counts toward this maximum. For example, an initiator visible by two FC router ports
increases the FC active initiator count by two.
When the maximum number of active initiators is exceeded, the router will allow a new FC initiator to log
into the router by accepting PLOGI (Port Login), or PRLI (Process Login) commands. However, if a SCSI
command is sent to the router from that FC initiator, it will be rejected with a Queue Full response. If
commands from an FC initiator are consistently being rejected with a Queue Full response, the router
environment must be examined to see if the number of active FC initiators exceeds the maximum of 250 on
a dual FC port router, or 128 on a single FC port router.
To prevent issues with too many active initiators logging into the NSR, limit the number of initiators by
creating a FC switch zone that has less than 128 initiators for a single FC port router, or less than 250
initiators for a dual FC port router.
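
The counting rule can be illustrated with a short Python sketch (the WWNs are invented; the limit shown is
the 250-initiator figure quoted above for current firmware):

def active_initiator_count(logins):
    """Count active FC initiators as described above: each (initiator, router FC port)
    pair counts once, so one initiator visible on two router ports counts twice."""
    return len(set(logins))

logins = [
    ("10:00:00:00:c9:00:00:01", "FC0"),
    ("10:00:00:00:c9:00:00:01", "FC1"),   # the same HBA seen on a second router port
    ("10:00:00:00:c9:00:00:02", "FC0"),
]
print(active_initiator_count(logins), "of 250 allowed")   # -> 3 of 250 allowed
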

Fibre Channel switches


Only short-wave optical cables are supported for connections between hosts and switches and between
tape libraries (with their FC bridges) and switches; short-wave optical cables may, however, be used for
any connection throughout the topology.

Long-wave optical cables and other protocols such as Fibre Channel over IP (FCIP) can provide long
distance connections between FC switches (ISLs). This allows customers to connect SANs across different
rooms or buildings on a site, or across multiple sites where a long-wave fibre cable is available or where
FCIP bridges are used. EBS supports most ISL types that HP supports in its storage environments. Table 43
lists supported Fibre Channel interconnect types for EBS. Refer to the HP StorageWorks SAN Design Guide
for additional information on supported Fibre Channel and FCIP devices.

Table 43 Storage product interconnect/transport support

Interface/transport (entries marked "ISLs only" are supported for inter-switch links only)

8 Gbps Fibre Channel via 50 micron multi-mode fiber optic cable and short-wave SFPs. Up to 150
meters per cable segment.

8 Gbps Fibre Channel via 62.5 micron multi-mode fiber optic cable and short-wave SFPs. Up to 50
meters per cable segment.

4 Gbps Fibre Channel via 50 micron multi-mode fiber optic cable and short-wave SFPs. Up to 150
meters per cable segment.

4 Gbps Fibre Channel via 62.5 micron multi-mode fiber optic cable and short-wave SFPs. Up to 70
meters per cable segment.

2 Gbps Fibre Channel via 50 micron multi-mode fiber optic cable and short-wave SFPs. Up to 300
meters per cable segment.

2 Gbps Fibre Channel via 62.5 micron multi-mode fiber optic cable and short-wave SFPs. Up to 150
meters per cable segment.

2 Gbps Fibre Channel via 9 micron single-mode fiber optic cable and long-wave SFPs. Up to 10
kilometers per cable segment. (ISLs only)

2 Gbps Fibre Channel via 9 micron single-mode fiber optic cable and extended reach SFPs. Up to
35 kilometers per cable segment. (ISLs only)

1 Gbps Fibre Channel via 50 micron multi-mode fiber optic cable and short-wave GBICs. Up to 500
meters per cable segment.

1 Gbps Fibre Channel via 62.5 micron multi-mode fiber optic cable and short-wave GBICs. Up to
200 meters per cable segment.

1 Gbps Fibre Channel via 9 micron single-mode fiber optic cable and long-wave GBICs. Up to 35
kilometers per cable segment depending on the switch series used. (ISLs only)

1 Gbps Fibre Channel via 9 micron single-mode fiber optic cable and very long distance GBICs. Up
to 100 kilometers per cable segment. (ISLs only)

NOTE: See the HP Enterprise Backup Solutions Compatibility Matrix and the HP StorageWorks SAN
design guide for updates regarding support of additional interconnect types.

NOTE: See the HP StorageWorks SAN design guide for maximum supported distances across the SAN.
Depending on the total length across the SAN, backup and restore speeds may vary. The longer the total
length across the SAN, the more buffering is needed to stream data without performance impacts. For
some adapters, backup and restore speeds will be slow across long connections.

HP StorageWorks 4/8 SAN Switch and HP StorageWorks 4/16 SAN
Switch—file system full resolution
The HP StorageWorks 4/8 SAN Switch and the HP StorageWorks 4/16 SAN Switch have a 118 MB root file
system that is typically over 80 percent utilized. If an issue on the switch results in a core dump, the root file
system can become 100 percent filled, causing erratic switch behavior.
The following command can be run on the switch to clear core files from the switch, thereby freeing space
on the root file system:
supportsave -R

EBS and the multi-protocol router


The HP StorageWorks Enterprise Backup Solution is the foundation for consolidated Fibre Channel tape
storage solutions. These solutions provide tape storage that is easy to manage and grow while reducing a
customer's total cost of ownership. Figure 45 presents a logical view of the Enterprise Backup Solution
using the multi-protocol (MP) router. Note that the tape library on fabric 3 is being shared by hosts from all
three independent fabrics (represented by blue lines and blue halos). See http://www.hp.com/go/ebs for
compatibility, design, and implementation information.

Figure 45 EBS using MP router


1 Disk arrays
2 Servers
3 Fibre Channel switches
4 MP router
5 Tape library with Fibre Channel interconnect

The MP router simplifies SAN design, implementation, and management through centralization and
consolidation, providing a seamless way to connect and scale across multiple SAN fabrics without the
complexity of merging them into a single large fabric. One of the benefits of SAN connectivity using the
MP router is that SAN troubleshooting and fault isolation is simplified within smaller environments,
increasing data availability. Another benefit is that there is no need to resolve zoning or naming conflicts
between SAN islands, which simplifies the consolidation process.
With the MP router it is possible to create logical SANs (LSANs) that enable selective connectivity between
devices residing in different SAN fabrics. Selective sharing of devices using the MP router is useful for SAN
islands that are geographically separated and/or managed by different organizations. Improved asset
utilization can be realized by implementing the multi-protocol router through more efficient storage resource
sharing, such as sharing tape libraries across multiple SAN fabrics or seamlessly moving storage from a
SAN fabric that has a storage surplus to a SAN fabric that has a storage deficit.
Benefits of tape consolidation:
• Centralize backup of multiple SAN fabrics in a single location to maximize value of backup devices
and resources
• Increase utilization of high-end backup resources
• Leverage off-peak network connectivity
• Reduce management overhead requirements
• Increase asset utilization

NOTE: The MPR is only supported in EBS configurations for bridging SAN islands. Connecting a library
or host directly to an MPR is not supported.

Fibre Channel host bus adapters


Fibre Channel host bus adapters (HBAs) provide:
• High speed connections from the server (host) to the SAN.
• Direct interface to fiber optic cables through standard Gigabaud Link Modules (GLMs) or Small
Form-factor Pluggables (SFPs).
• Support for Fibre Channel switched fabric connections or direct-attached fibre (DAF).
The Fibre Channel HBA provides a high-speed connection between a server and the EBS.

NOTE: See the Fibre Channel HBA documentation for installation instructions for option boards.

HBAs and performance


With today's high performance tape drives, bottlenecks can exist in many different components on the
SAN, including HBAs. It is important to understand the performance characteristics of backups and restores
as they come from a Fibre Channel disk array, through the host, and to the tape devices. Oftentimes,
performance can be improved by adding HBAs to the host to allow greater throughput from
the disk array controller to the tape drive through the SAN. It is important to know what the maximum
sustained feed speeds are from disk and tape, so HBAs can be matched to meet the performance needs.
For example, if sharing one 2Gbps FC HBA (max 200MB/s) for both disk and tape, and the disk is
sending 128MB/s sequential data to an LTO3 tape drive, there will only be 72MB/s bandwidth remaining
for the tape drive to utilize. In this example, since LTO3 tape drives have a native speed of 80MB/s
(160MB/s compressed) and the single FC HBA throughput has been reached, backups will run slower than
the full rated speed of the LTO3 drives. As a best practice, use a dedicated FC HBA for each tape drive.
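
The arithmetic in this example can be written out as a quick feasibility check (an illustrative Python
calculation; 200 MB/s is the approximate usable bandwidth of the 2 Gb/s HBA assumed above):

hba_bandwidth = 200        # MB/s, approximate usable bandwidth of one 2 Gb/s FC HBA
disk_read_rate = 128       # MB/s of sequential backup data being read through the same HBA
lto3_native_rate = 80      # MB/s native write speed of an LTO-3 drive

left_for_tape = hba_bandwidth - disk_read_rate
print(f"Bandwidth left for the tape drive: {left_for_tape} MB/s")   # 72 MB/s
if left_for_tape < lto3_native_rate:
    print("The shared HBA throttles the drive; use a dedicated FC HBA for the tape drive.")
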

NOTE: See the Getting the most performance from your HP StorageWorks Ultrium 960 tape drive white
paper located at http://h71028.www7.hp.com/ERC/downloads/5982-9971EN.pdf, under white papers.

Third-party Fibre Channel HBAs
Third-party HBAs, such as the Emulex LPe12002, might be supported in order to allow connectivity to EBS
in SANs that include third-party disk arrays.
For a complete listing of supported servers and hardware, see the HP Enterprise Backup Solutions
Compatibility Matrix at http://www.hp.com/go/ebs.

HP StorageWorks 3Gb SAS BL Switch


The HP StorageWorks 3Gb SAS BL Switch for HP BladeSystem c-Class enclosures allows direct-connect
storage with external shared SAS storage devices. The HP StorageWorks 3Gb SAS BL Switch is a
double-wide interconnect module that enables each server blade with an HP StorageWorks SmartArray
P700m controller card to communicate with HP StorageWorks disk arrays and tape libraries.
The HP StorageWorks MSL2024, MSL4048, and MSL8096 Tape Libraries and the HP StorageWorks 1/8
G2 Tape Autoloaders with SAS tape drives are supported on the HP StorageWorks SAS BL switch.

Important tips
• Unless zoning is configured, all servers containing a P700m controller are connected to all tape drives
automatically.
• SAS tape libraries must be connected to a port on a SAS BL switch (they cannot be connected to a SAS
port on an MSA).
• Each SAS tape drive has a single SAS port. Redundancy is not supported, so the host end of the fanout
cable will only be connected to one port on one SAS BL switch. Either SAS BL switch in the c-Class
enclosure can be used. The corresponding switch port on the other SAS BL switch can be left open, or it
can be connected to other devices.
• To use all four channels of the 3Gb SAS BL switch port, use a SAS fanout cable, which has one
mini-SAS connector on the switch end and four mini-SAS connectors on the tape drive end. The following
SAS fanout cables are approved for use with the tape library or autoloader and the SAS BL switch:
• AN975A - HP StorageWorks 2m External Mini-SAS to 4x1 Mini-SAS Cable Kit
• AN976A - HP StorageWorks 4m External Mini-SAS to 4x1 Mini-SAS Cable Kit
The following illustration is a representation of a fanout cable:

Figure 46 Fanout cable

NOTE: See the HP Direct connect shared storage for HP BladeSystem solution deployment guide for
detailed configuration information.

RAID array storage
The EBS supports several RAID (Redundant Array of Independent Disks) array systems. RAID technology
coordinates multiple disk drives to protect against the loss of data availability if one of them fails. RAID
technology can help a storage system provide on-line data access that does not break down (highly
available system) when it is coupled with other technologies, such as:
• Uninterruptible power control systems
• Redundant power supplies and fans
• Intelligent controllers that can back each other up
• Operating environments that can detect and respond to storage systems recovery actions
For a complete listing of supported RAID arrays, refer to the HP Enterprise Backup Solutions Compatibility
Matrix at http://www.hp.com/go/ebs.
RAID arrays and performance
With today's high performance tape drives, bottlenecks can exist in many different components on the
SAN, including RAID array controllers. It is important to understand the performance characteristics of
backups and restores as they come from a Fibre Channel disk array, through the host, and to the tape
devices. Oftentimes performance can be improved by pulling data from multiple RAID array controllers to
the host and then to the tape devices through the SAN. It is important to know what the maximum sustained
feed speeds are from the HBAs and tape devices, so disk controllers can be matched to meet the
performance needs.
Third-party RAID array storage
To minimize any conflicts between a third-party disk array and the HP tape library, HP recommends the
following configuration guidelines:
• Use a separate HBA for connectivity to the HP tape library. See the HP Enterprise Backup Solutions
Compatibility Matrix for supported HBAs. Zoning is used to separate the third-party disk array from the
HP tape library. See ”Zoning” on page 103 for more details on zoning recommendations.
• If the same HBA must be used, ensure that the driver is supported for both the disk array and the HP
tape library.
• If multi-path software will be implemented, see ”Disk array multi-path (MPIO)” on page 156 for details
on supported configurations.

EBS power on sequence


To ensure proper solution start-up, power on the equipment using the sequence shown in Table 44.

NOTE: This sequence is for initial start up. After the fabric is up and running, the general rule is to boot
the online and nearline devices (disks and tapes and their controllers) before booting servers. It may be
necessary to reboot servers if online or nearline devices are rebooted without the server being rebooted.

Table 44 EBS power on sequence

Sequence  Component                   Instructions

1         SAN switch                  Wait at least 60 seconds for the self-initialization to complete.

2         Online storage              Wait for the self-initialization to be complete.

3         Nearline storage            Wait for the tape library/VLS to fully initialize (this can take as
          (tape library/VLS)          long as 20 minutes).

4         Router                      If using a router that is not embedded in a tape library, wait
                                      approximately 60 seconds for the controller to initialize.

5         Servers                     Power up each of the servers.

6         Data protection software    Launch the data protection software application on the primary
                                      server and then on each secondary server. Wait for each
                                      application to load completely before starting the next application.

Using HP StorageWorks Library and Tape Tools (L&TT) to verify disk system data performance
• Install L&TT on the host and run the System Performance test to measure the data source transfer rate
from the RAID array.
• Compare with expected rate and the transfer rate of the tape devices. Allow for the number of tape
devices to be streamed concurrently and also the expected compression ratio (e.g. 2:1).
Example: Streaming two Ultrium 960 tape drives with a 2:1 compression ratio requires a sustained
transfer rate from the RAID array of 80 MB/s x 2 x 2 = 320 MB/s.
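
The same calculation can be expressed as a small illustrative Python helper (the function name is invented):

def required_feed_rate(num_drives, native_mb_per_s, compression_ratio):
    """Sustained disk read rate (MB/s) needed to stream all drives concurrently."""
    return num_drives * native_mb_per_s * compression_ratio

# Two Ultrium 960 drives at 80 MB/s native with 2:1 compression, as in the example above:
print(required_feed_rate(2, 80, 2))   # -> 320 (MB/s)
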

3 Zoning
Zoning is a fabric management service used to create logical device subsets within a SAN. Zoning enables
resource partitioning for management and access control.
One or more Fibre Channel switches create the Fibre Channel fabric, an intelligent infrastructure that serves
as a backbone for deploying and managing information technology (IT) resources as a network. Use
zoning to arrange fabric-connected devices into logical groups over the physical fabric configuration.
Zoning provides automatic and transparent management for the SAN, and allows the flexibility to allocate
pools of storage in the SAN to meet different closed user group objectives. By creating zones of storage
and computers, barriers can be set up among different operating environments to:
• Deploy a logical fabric subset
• Create, test, and maintain separate areas within the fabric
Zoning also provides the following benefits:
• Increases environmental security
• Optimizes IT resources
• Customizes environments
• Easily manages a SAN
Zoning refers to the ability to partition the switch into multiple logical SANs. This feature is primarily
supported for disk and tape configurations. Shared access to tape drives is handled by the backup
application software running on each host. As such, generally any tape-related zones need to be
configured to allow all hosts to see all tape drives and libraries.
Overlapping zones refer to a configuration where a single switch port or device WWN participates in
more than one zone.

Emulated private loop


Emulated private loop (EPL) is a feature that allows a switch to emulate a hub and provide private
arbitrated loop connectivity for non-public hosts or devices. Brocade refers to this feature in their switches
as Quickloop.
Because of the potential for loop initialization primitives (LIPs) to interrupt an I/O during a backup, this
feature is NOT supported for tape devices.

Increased security
The Fibre Channel fabric provides fast, reliable, and seamless information access within the SAN. Zoning
segments the fabric into zones that are comprised of selected storage devices, servers, and workstations.
Since zone members can only see other members in the same zone, access between computers and
storage can be controlled.

Optimized resources
Zoning helps to optimize IT resources in response to user demand and changing user profiles. It can be
used to logically consolidate equipment for convenience. Zoning fabric characteristics are the same as
other fabric services:
• Administration from any fabric switch
• Automatic, transparent distribution of zone definitions throughout the fabric—A single failure cannot
interrupt zoning enforcement to other SAN connections.
• Automatic service scaling with fabric size—There is no requirement to upgrade systems as switches are
added and connectivity increases.
• Automatic, transparent deployment—There is no requirement for human intervention unless the zoning
specification must change.

Customized environments
Zoning enables customizing of a SAN environment. With zoning, users can:
• Integrate support for heterogeneous environments by isolating systems that have different operating
environments or uses
• Create functional fabric areas by separating test or maintenance areas from production areas
• Designate closed user groups by allocating certain computers and storage, such as RAID disks, arrays,
and tapes, to a zone for exclusive use by computers that are zone members
• Simplify resource utilization by consolidating equipment logically for convenience
• Facilitate time-sensitive functions by creating a temporary zone to back up a set of devices that are
members of other zones
• Secure fabric areas by controlling port-level access
Zoning components
Zoning comprises the three components described in Table 45.
Table 45 Zoning components

Component            Description
Zone configuration   A set of zones. When zoning is enabled, one zone configuration is in effect.
Zone                 A set of devices that access one another. All computers, storage, and other devices connected to a fabric can be configured into one or more zones.
Zone member          A device located within a zone.

EBS zoning recommendations


Due to complexities in multi-hosting tape devices on SANs, it is best to make use of zoning tools to help
keep the backup/restore environment simple and less susceptible to the effects of changing or problematic
SANs. Zoning provides a way for servers, disk arrays, and tape controllers to see only the hosts and
targets they need to see and use. The benefits of zoning in EBS include, but are not limited to:
• The potential to greatly reduce target and LUN shifting
• Limiting unnecessary discoveries on the NSR or FC interface controllers
• Reducing stress on backup devices by polling agents
• Reducing the time it takes to debug and resolve anomalies in the backup/restore environment
• Reducing the potential for conflict with untested third-party products
Zoning may not always be required for configurations that are already small or simple. Typically, the
larger the SAN, the more zoning is needed. HP recommends the following guidelines for determining how
and when to use zoning:
• Small fabric (16 ports or less)—may not need zoning.
If no zoning is used, it is recommended that the tape controllers reside in the lowest ports of the switch.
• Small to medium fabric (16 - 128 ports)—use host-centric zoning.
Host-centric zoning is implemented by creating a specific zone for each server or host, and adding only
those storage elements to be utilized by that host. Host-centric zoning prevents a server from detecting
other devices or servers on the SAN, and it simplifies the device discovery process. (An example set of
switch zoning commands follows the note at the end of this section.)
• Disk and tape on the same pair of HBAs is supported along with the coexistence of array multipath
software (no multipath to tape, but coexistence of the multipath software and tape devices).
• Large fabric (128 ports or more)—use host-centric zoning and split disk and tape targets.
Splitting disk and tape targets from being in the same zone together will help to keep the tape
controllers free from discovering disk controllers which it doesn't need to see, unless extended copy or
3PC data movement is required. For optimal performance, where practical, dedicate HBAs for disk and
tape.

NOTE: Overlapping zones are supported.
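The following is a minimal sketch of host-centric zoning on a B-series (Brocade) Fabric OS switch. The alias names and WWNs are placeholders only and must be replaced with values from the actual environment; other switch families provide equivalent commands.
switch:admin> alicreate "Host_A", "10:00:00:00:c9:aa:bb:01"
switch:admin> alicreate "Tape_Router_1", "50:05:08:b3:00:11:22:33"
switch:admin> zonecreate "Host_A_tape", "Host_A; Tape_Router_1"
switch:admin> cfgcreate "EBS_cfg", "Host_A_tape"
switch:admin> cfgenable "EBS_cfg"
In this example, one zone is created per host; additional host zones can be added to the same configuration with cfgadd before re-running cfgenable.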

4 Configuration and operating system details
Basic storage domain configurations
The basic EBS storage domain can be configured in many different ways. It can be a small configuration
with direct-attached devices (direct-attached SCSI or direct-attached fibre), or it can consist of a
heterogeneous connection of multiple HP PA-RISC servers, HP IA-64 servers, HP AlphaServers, HP ProLiant
servers, HP ProLiant Storage Servers, Sun Solaris servers, IBM AIX servers, and other third-party servers
sharing multiple libraries and RAID array storage systems. Refer to the HP Enterprise Backup Solutions
Compatibility Matrix located at: http://www.hp.com/go/ebs.
NOTE: While some operating systems found in enterprise data centers might not be supported on the
storage network by EBS, it is still possible to back up these servers as clients over the LAN and still be
supported. See the ISV compatibility matrix for more information.


Figure 48 Basic direct attach storage (DAS) configuration


1 LAN clients 2 Ethernet switch
3 Backup server 4 Tape library
5 Ethernet connection 6 SCSI/SAS connection




Figure 49 Basic point-to-point direct-attached fibre (DAF) configuration


1 LAN clients 2 Ethernet switch
3 Backup server 4 Tape library
5 Ethernet connection 6 Fibre Channel connection

NOTE: While adding tape and/or library devices to a server without a system reboot might work
intermittently, it is neither recommended nor supported. A reboot is required to properly create the
device files.



Figure 50 Basic storage domain configuration on switched fabric

1 RAID array storage 2 HP tape library storage


3 FC SAN switch 4 IBM p-series UNIX server
5 Sun Solaris UNIX cluster 6 HP AlphaServer
7 HP ProLiant server 8 HP PA-RISC cluster
9 Sun Solaris UNIX server 10 HP AlphaServer cluster
11 HP NAS server 12 HP ProLiant cluster
13 HP PA-RISC server



Nearline configuration information
The Enterprise Backup Solution (EBS) supports several operating systems (OSs). This section provides an
overview for configuring the EBS using the following OSs:
• HP-UX
• Microsoft® Windows
• NAS or ProLiant Storage Server devices using Microsoft Windows Storage Server 2003
• Tru64 UNIX®
• Linux
• NetWare
• Sun Solaris
• IBM AIX

Setting up routers in the SAN environment


Set up the qualified tape library and router. Refer to the following section to complete the setup.
With the release of the Interface Manager card, many of the steps to configure the Fibre Channel interface
controllers are now automated. See the previous sections regarding the Interface Manager card and Fibre
Channel interface card, as well as the user guides for each.
About Active Fabric and SCC LUNs
The HP-UX operating system can experience issues in configuring tape devices when a controller LUN (or
Active Fabric LUN) is presented at Fibre Channel LUN 0 by the Fibre Channel to SCSI tape router. The
fcparray driver within HP-UX may recognize the NSR or other Fibre Channel controllers as disk controllers
and create 16 ghost devices for each tape device. It is recommended that FC interface controller maps for
HP-UX hosts do not have a controller LUN at Fibre Channel LUN 0.
Configuring the router for systems without the Interface Manager
To configure the router, perform the following steps. Refer to the FC Interface Controller User Guide or
Network Storage Router User Guide for complete installation instructions.
1. Power up the HP qualified tape library and wait for it to initialize.
2. After the tape library has initialized and is online, power on the properly cabled router and wait for it
to initialize.
3. Use Internet Explorer or Netscape to access the router Visual Manager user interface. Enter the IP
address of the router network interface in the address field.

NOTE: See the router user guide for instructions on Ethernet connectivity.

4. Verify that the minimum supported firmware level is installed. The firmware level is listed on the main
router Visual Manager user interface page in the PLATFORM section.
5. Verify that all tape and robotic devices in the tape library are recognized by the router. In the router
Visual Manager user interface, select Discovery from the main menu to display all devices recognized
by each Fibre Channel (FC) module and SCSI module in the router.
6. Verify the router is logged into the Fibre Channel switch. Ensure that the router logs into the switch as an
F-Port. This can be done by running a telnet session to the switch or browsing to the switch with a web
browser (see the example following this procedure).
7. Set up selective storage presentation by using the FC port map settings. These maps allow the ability to
selectively present tape and robotic devices to hosts on the SAN. See chapter 2 for additional
information on mapping. Also refer to the FC interface controller user guide or Network Storage Router
user guide for complete instructions on creating maps for presenting to specific hosts.
At this point in the procedure:
a. The tape library is online and properly configured on the router with all devices showing as
mapped or attached.



b. The router is logged into the Fibre Channel switch as an F-Port.
c. The host is logged into the Fibre Channel switch as an F-Port. In some cases with ProLiant blade
servers, the host may be logged in as a public loop or L-port.
d. For cascaded or meshed switches, verify that all ISL links are logged in as E-Ports.

8. After setting up the router, re-verify connectivity and performance using HP StorageWorks Library and
Tape Tools (L&TT).
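On a B-series (Brocade) switch, for example, the port login types can be checked from a telnet session with a single command (the prompt shown is illustrative only):
switch:admin> switchshow
In the output, the router and host ports should be listed as F-Port, and any inter-switch links as E-Port.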

Configuring the router in a DAF environment


1. Set the Fibre Channel port Performance Mode (1Gb, 2Gb, or 4Gb, depending on the hardware to
which the router is connected). The router is not auto-switching.
2. Configure mapping.
3. Set Active Fabric (AF) as the last LUN used on the map.
4. Set Port Mode to Auto Sense.
5. Set Hard AL_PA to Enable.
6. Click Set AL_PA to select any available AL_PA. The only remaining AL_PA is the host bus adapter
(HBA). Using a high number will avoid potential conflicts.
7. Reboot the router.

Setting up a dual SAN


1. Install HP StorageWorks E2400 controller cards. One controller card will be exclusively for robotic
controller access for dual SANs. The remaining E2400 controller cards will be for drive access.
Because the E2400 controller card sees two SANs, differentiate which Fibre Channel port will be
assigned to SAN 1 or SAN 2.
Each E2400 supports up to four drives per drive cluster; therefore, assign drives to the controller
cards accordingly.
2. Use HP StorageWorks Command View TL to grant robot and drive access to hosts contained in each
SAN.
3. Verify that the host operating system will detect library and tape drives.
4. Install and configure the backup application.

Rogue applications
Rogue applications are a category of software products commonly found in SAN environments that can
interfere with the normal functioning of backup and restore operations. Rogue applications include system
management agents, monitoring software, and a wide range of tape drive and system configuration
utilities. A list of known rogue applications and the operating systems on which they are found is shown
below. This list is not exhaustive.

• Windows (all versions)


• SAN Surfer (HBA configuration utility)
• HBAnywhere/lputilnt (HBA configuration utilities)
• HP System Insight Manager (management agents)
• HP Library & Tape Tools (tape utilities)
• Linux (all versions)
• SAN Surfer
• HP Library & Tape Tools
• mt commands (native to OS)



• Unix
• mt commands (native to OS)
• Solaris
• SUN Explorer (system configuration utility)

These applications, utilities, and commands have been shown to interfere with components in the data
path and, when run concurrently with backup or restore operations, have the potential to cause job failures
or corrupted data. For example, HBA utilities such as SAN Surfer and HBAnywhere provide the ability to
reset the Fibre Channel port(s); utilities such as HP Library and Tape Tools allow for complete device
testing, device resets, and firmware upgrades; and management agents and utilities such as HP Systems
Insight Manager and SUN Explorer poll tape devices and may cause contention for device access.
Some specific recommendations for dealing with rogue applications are listed here:
• SCSI Reserve & Release—If your backup application supports the use of SCSI reserve and release,
enable and use it. Reserve and release can prevent unwanted applications or commands from taking
control of a device.
• SAN Zoning—EBS recommends host-based SAN switch zoning. When zoning is employed, rogue
applications are much less likely to interfere with tape device operation.
• SUN Explorer—This is an optional utility that can be installed as part of the Solaris install. When
installed, Explorer runs from a cron job and queries all attached peripheral devices, including tape
devices. HP recommends that the crontab entry for Explorer be edited to allow the utility to run at times
that do not coincide with system backups. Disable the tape module of Explorer from running by
modifying the file:
/etc/opt/SUNWexplo/default/explorer
Locate the EXP_WHICH variable and modify it as follows:
EXP_WHICH="default,!tape"
The modules that Explorer runs are found in /opt/SUNWexplo/tools. To prevent Explorer from
running a module, add it to EXP_WHICH preceded with an exclamation point (!).
• HP Systems Insight Manager—Make sure that the latest version of the Insight Manager agents is
installed on the system. Tape drive-friendly changes to the manner in which devices are polled have
been implemented (post version 7.1).

HP-UX
The configuration process for HP-UX involves:
• Upgrading essential EBS hardware components to meet the minimum firmware and device driver
requirements.

NOTE: See the HP Enterprise Backup Solutions Compatibility Matrix for all current and required
hardware, software, firmware, and device driver versions, including Hardware Enablement Kits and
Quality Packs on the HP website:
http://www.hp.com/go/ebs

• Installing the minimum patch level support. Go to the following website to obtain the necessary
patches:
http://www.hp.com/support

NOTE: See the installation checklist at the end of this section to ensure all of the hardware and
software is correctly installed and configured in the SAN.



Initial requirement (HP-UX 11.23 on IA-64 and PA-RISC)
HP currently supports HP-UX 11.23 in an EBS environment using an HP AB465A, A9782A, A9784A,
A6795A, A6826A, AB378A/AB378B, AB379A/AB379B, AD193A, AD194A, AD300A, AD299A,
AD355A, or QMH2462 FC HBA. Contact HP or your HP reseller for information on how to acquire these
cards.
The following OS software bundles contain the drivers for the A6795A adapter:
• FibrChanl-00 B.11.23.0803 HP-UX (B11.23 IA PA) and all patches the bundle requires per bundle
installation instructions
The following OS software bundles contain the drivers for the A6826A, A9782A, A9784A, AB378A,
AB379A, AB465A, AD193A, AD194A, AD300A, and QMH2462 adapters:
• FibrChanl-01 B.11.23.08.02 HP-UX (B11.23 IA PA) and all patches the bundle requires per bundle
installation instructions.
The following OS software bundles contain the drivers for the AD299A and AD355A adapters:
• FibrChanl-02 B.11.23.0712 HP-UX (B11.23 IA PA) and all patches the bundle requires per bundle
installation instructions.
Patches and installation instructions are provided at the HP-UX support website:
http://www.hp.com/go/support
After the hardware is installed, proceed with the following steps:
1. Check for support of the HBA in the currently installed FibrChanl bundle (refer to the HP Enterprise
Backup Solutions Compatibility Matrix for supported revisions):
# /usr/sbin/swlist -l bundle | egrep -i "AB465A|A9782A|A9784A|A6795A|A6826A
|AB378|AB379|AD193A|AD194A|AD300A|AD299A|AD355A"

NOTE: QMH2462 adapter support will not be listed using the swlist utility; however, the current
FibrChanl-01 bundle does support the adapter.

2. The drivers stape, sctl, and schgr must all be installed in the kernel. To see if these drivers are installed,
enter the following command:
# /usr/sbin/kcmodule schgr sctl stape
The following example shows output from kcmodule where the stape driver is not installed:
Module State Cause
schgr static explicit
sctl static depend
stape unused
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are
all installed (static state), proceed to the next section, “Configuring the SAN.”
3. Use kcmodule to install modules in the kernel. For example, to install the stape module, enter:
# /usr/sbin/kcmodule stape=static
Enter Yes to back up the current kernel configuration file and initiate the new kernel build.
4. Reboot the server to activate the new kernel.
# cd /
# /usr/sbin/shutdown -r now
The HP-UX 11iv2 Quality Pack (QPK1123) December 2007 (B.11.23.0712.070a) and Hardware Enablement
Pack (HWEable11i) June 2007 (B.11.23.0712.070) contain required software bundles. These patches and
installation instructions are provided at the HP website
http://www.itrc.hp.com



Initial requirement (HP-UX 11.31 on IA-64 and PA-RISC)
HP currently supports HP-UX 11.31 in an EBS environment using an HP AB465A, A9782A, A9784A,
A6795A, A6826A, AB378A/AB378B, AB379A/AB379B, AD193A, AD194A , AD300A, AD299A,
AD355A, AD221A, AD222A, AD393A, QMH2462, or LPe1105 FC HBA. Contact HP or your HP reseller
for information on how to acquire these cards.
The following OS software bundles are required for the A6795A adapter:
• FibrChanl-00 B.11.31.0809 HP-UX (B11.31 IA PA) and all patches the bundle requires per bundle
installation instructions
The following OS software bundles are required for the AB465A, A9782A, A9784A, A6826A,
AB378A/AB378B, AB379A/AB379B, AD193A, AD194A and AD300A adapters:
• FibrChanl-01 B.11.31.0809 HP-UX (B11.31 IA PA) and all patches the bundle requires per bundle
installation instructions.
The following OS software bundles contain the drivers for the AD299A, AD355A, AD221A, AD222A,
and AD393A adapters:
• FibrChanl-02 B.11.31.0809 HP-UX (B11.31 IA PA) and all patches the bundle requires per bundle
installation instructions.
Patches and installation instructions are provided at the HP-UX support website:
http://www.hp.com/go/support
After the hardware is installed, proceed with the following steps:
1. Check for support of the HBA in the currently installed FibrChanl bundle (see the HP Enterprise Backup
Solutions Compatibility Matrix for supported revisions):
# /usr/sbin/swlist -l bundle | egrep -i "AB465A|A9782A|A9784A|A6795A|A6826A
|AB378|AB379|AD193A|AD194A|AD300A|AD299A|AD355A|AD221A|AD222A|AD393A"

NOTE: Support for the QMH2462 and LPe1105 adapters will not be listed using the swlist utility;
however, the current FibrChanl-01 and FibrChanl-02 bundles do support the adapters.

2. The drivers stape, sctl, and schgr must all be installed in the kernel. To see if these drivers are installed,
enter the following command:
# /usr/sbin/kcmodule schgr sctl stape
The following example shows output from kcmodule where the stape driver is not installed:
Module State Cause
schgr static explicit
sctl static depend
stape unused
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are
all installed (static state), proceed to the next section, “Configuring the SAN.”
3. Use kcmodule to install modules in the kernel. For example, to install the stape module, enter:
# /usr/sbin/kcmodule stape=static
Enter Yes to back up the current kernel configuration file and initiate the new kernel build.
4. Reboot the server to activate the new kernel.
# cd /
# /usr/sbin/shutdown -r now
The HP-UX 11iv3 Quality Pack (QPKBASE) September 2007 (B.11.31.0709.312a) and Hardware
Enablement Pack (HWEable11i) September 2007 (B.11.31.0709.312) contain required software bundles.
These patches and installation instructions are provided at the HP website:
http://www.itrc.hp.com



HP-UX 11.31 can experience poor I/O performance on VxFS file systems due to memory blocking
during high system memory usage
The HP-UX 11.31 kernel, subsystems, and file I/O data cache can consume up to 90 percent of system
memory during normal operation. When a heavy file I/O application such as a data protection
application starts, the memory usage can reach close to 100 percent. In such conditions, if VxFS attempts
to allocate additional memory for inode caching, this can result in memory blocking and subsequent poor
file I/O performance. In extreme conditions, this scenario can cause data protection applications to time
out during file system reads, which could result in backup job failures.
Poor I/O performance resolution
To avoid the situation of backup job failures due to memory blocking, modify the kernel tunable parameter
vx_ninode. The vx_ninode parameter determines the number of inodes in the inode table to help VxFS
in caching. By default, the size of the inode cache is decided (auto-tuned) at boot time by VxFS depending
on the amount of physical memory in the machine.
When modifying the value of vx_ninode, HP recommends the following:

Physical memory or kernel available memory    VxFS inode cache (number of inodes)
1 GB                                          16384
2 GB                                          32768
3 GB                                          65536
> 3 GB                                        131072

To determine the current value of vx_ninode, run the following at the shell prompt:
# /usr/sbin/kctune vx_ninode
To set vx_ninode to 32768, run the following command at the shell prompt:
# /usr/sbin/kctune vx_ninode=32768

NOTE: The kernel tunable parameters filecache_min and filecache_max control the amount of
physical memory that can be used for caching file data during system I/O operations. By default, these
parameters are automatically determined by the system to better balance the memory usage among file
system I/O intensive processes and other types of processes. The values of these parameters can be
lowered to allow a larger percentage of memory to be used for purposes other than file system I/O
caching. Determining whether or not to modify these parameters depends on the nature of the applications
running on the system.
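As a minimal illustration, the current settings can be displayed and filecache_max can be lowered with kctune; the 2 GB value shown is an example only and must be sized for the system and its workload.
# /usr/sbin/kctune filecache_min filecache_max
# /usr/sbin/kctune filecache_max=2147483648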

HP-UX 11.23 and HP-UX 11.31 large LUNs


By default, HP-UX 11.23 can see a maximum of eight LUNs (0-7) per Fibre Channel port. Once the
maximum number of LUNs on a device is detected, HP-UX 11.23 stops looking for more LUNs. Physical and
virtual tape libraries should be configured to present no more than eight LUNs to an HP-UX 11.23 host.
By default, HP-UX 11.31 can see a maximum of eight LUNs (0-7) per Fibre Channel port when using legacy
device special files (DSF). Once the maximum number of LUNs on a device is detected, HP-UX 11.31 stops
looking for more LUNs. Physical and virtual tape libraries should be configured to present no more than
eight LUNs to an HP-UX 11.31 host using legacy DSFs.
For an HP-UX 11.31 server using persistent DSFs, the maximum number of LUNs per bus on the server can
be increased by entering:
# scsimgr set_attr -a max_lunid=32
The connected tape or virtual tape devices can be viewed by entering:
# ioscan -m lun



HP-UX 11.23 EMS tape polling
HP-UX 11.23 Online Diagnostics no longer supports tape drives. An HP-UX 11.23 server that has been
upgraded over time from an OS level that fully supported EMS tape polling might still have tape polling
remnants installed and configured. In that case, the archived dm_stape.cfg file can be copied to the
/var/stm/config/tools/monitor folder, and the polling interval can be set to 0 to disable polling.
Otherwise, EMS tape polling is not supported and should not be used.
HP-UX 11.23: Disabling rewind-on-close devices with st_san_safe
Turning on the HP-UX 11.23 kernel tunable parameter st_san_safe disables tape device special files
that are rewind-on-close. This will prevent utilities like mt from rewinding a tape that is in use by another
utility.
Some applications or utilities require rewind-on-close device special files (for example, the frecover
utility that comes with HP-UX). In this case, disabling rewind-on-close devices renders the utility unusable.
Most data protection applications such as HP Data Protector can be configured to use SCSI
reserve/release, which protects them from rogue rewinds by other utilities.
The requirements of the data protection environment should be considered when determining whether or
not to enable st_san_safe.
To determine if rewind-on-close devices are currently disabled, enter:
# /usr/sbin/kctune st_san_safe
If the value of st_san_safe is 1, then rewind-on-close devices are disabled. If the value is 0, then
rewind-on-close devices are enabled. To disable rewind-on-close devices, enter:
# /usr/sbin/kctune st_san_safe=1
Configuring the SAN
Set up the qualified tape library and router. See the documentation provided with each Storage Area
Network (SAN) component for additional component setup and configuration information.
Due to current issues with the fcparray driver within HP-UX, HP recommends that there be no SCC LUN set
to 0 on the router.
Final host configurations
When the preliminary devices and the appropriate drivers listed earlier are installed and the SAN
configuration is complete, the host should see the devices presented to it.
1. Run ioscan to verify that the host detects the tape devices.
# ioscan
2. After verifying that all devices are detected, check for device files assigned to each device. For HP-UX
11.31 legacy device special files (DSFs), run the following commands:
# ioscan -fnkC tape
# ioscan -fnkC autoch
3. For HP-UX 11.31 persistent DSFs, run the following commands:
# ioscan -fnNkC tape
# ioscan -fnNkC autoch

NOTE: Some data protection products might not currently support HP-UX 11.31-persistent DSFs for tape.
See the data protection product documentation for more information.

4. If no device files have been installed, enter the following commands:
# insf -C tape -e
# insf -C autoch -e
5. Repeat step 2 to make sure device files are assigned.
6. Install and configure the backup software.



Installation checklist
With a complete SAN configuration, review the questions below to ensure that all components on the SAN
are logged in and configured properly.
• Are all hardware components at the minimum supported firmware revision, including: Server, HBA,
Fibre Channel Switch, Fibre Channel to SCSI Router, Interface Manager, Command View TL, tape
drives, library robot?
• Are all recommended HP-UX patches, service packs, quality packs or hardware enablement bundles
installed on the host?
• Is the minimum supported HBA driver loaded on the host?
• Are all tape and robotic devices mapped, configured and presented to the host from the Fibre Channel
to SCSI Router, or Interface Manager?
• Is the tape library online?
• Is the Fibre Channel to SCSI router correctly logged into the Fibre Channel Switch?
• Is the host HBA correctly logged into the Fibre Channel Switch?
• If multiple Fibre Channel switches are cascaded or meshed, are all ISL ports correctly logged in?
• If using zoning on the Fibre Channel Switch, is the server HBA and Tape Library's Fibre Channel to
SCSI Router in the same switch zone (either by WWN or by switch Port)?
• If using zoning on the Fibre Channel Switch, has the zone been added to the active switch
configuration?



Windows Server and Windows Storage Server
This section provides instructions for configuring Windows Server 2008, Windows Server 2003, and
Windows Storage Server 2003 in an Enterprise Backup Solution (EBS) environment.
Windows Storage Server is often the operating system used to build network attached storage (NAS)
servers. While this is a modified build of Windows, it operates in the same manner as other Windows
servers in an EBS environment. The storage servers have limitations on how much they can be changed.
Refer to the storage server installation and administration guides for more information:
http://www.hp.com/go/servers
The configuration process involves:
• Upgrading essential EBS hardware components to meet the minimum firmware and device driver
requirements.
• Installing the minimum patch/service pack level support for:
• Windows Server 2008/2003 on 32- and 64-bit platforms
• Data Protection software
See the following websites to obtain the necessary patches:
For HP: http://www.hp.com/support
For Microsoft: http://www.microsoft.com

NOTE: See the HP Enterprise Backup Solutions Compatibility Matrix for all current and required
hardware, software, firmware, and device driver versions at http://www.hp.com/go/ebs.

See the “Installation Checklist” at the end of this section to ensure proper installation and configuration of
the hardware and software in the SAN.

Configuring the SAN


This procedural overview provides the necessary steps to configure a Windows Server 2008/2003 host
into an EBS. See the documentation provided with each storage area network (SAN) component for
additional component setup and configuration information.

NOTE: To complete this installation, log in as a user with administrator privileges.

Installing the HBA device driver (Windows Server 2008/2003)


Obtain the appropriate Smart Component driver install package from http://www.hp.com. Double-click
on the .exe file and the driver will be installed for you. A reboot might be necessary after the driver
installation.
Storport considerations
EBS supports Storport configurations with the Emulex Storport mini-port drivers and QLogic Storport
mini-port drivers. Prior to installing the Storport mini-port HBA driver, the Storport storage driver
(storport.sys) must be updated. Check the HP Enterprise Backup Solutions Compatibility Matrix for the
currently supported version.

CAUTION: Failure to upgrade the Storport storage driver prior to installing the HBA mini-port driver may
result in system instability.



Installation checklist
To ensure that all components on the SAN are logged in and configured properly, review the following
questions:
• Are all hardware components at the minimum supported firmware revision, including: Server, HBA,
Fibre Channel Switch, Fibre Channel to SCSI/FC Router, Interface Manager, Command View TL, tape
drives, library robot?
• Are all recommended VLS, Windows Server 2008/2003 patches and service packs installed on the
host?
• Is the minimum supported HBA driver loaded on the Windows server?
• Are all tape and robotic devices/VLS libraries mapped, configured, and presented to the host from the
Fibre Channel to SCSI Router, or Interface Manager?
• Is the tape library/VLS online?
• Is the Fibre Channel to SCSI/FC router correctly logged into the Fibre Channel Switch?
• Is the host HBA correctly logged into the Fibre Channel Switch?
• If multiple Fibre Channel switches are cascaded or meshed, are all ISL ports correctly logged in?
• If using zoning on the Fibre Channel Switch, is the server HBA and Tape Library's Fibre Channel to
SCSI/FC Router in the same switch zone (either by WWN or by switch Port)?
• If using zoning on the Fibre Channel Switch, has the zone been added to the active switch
configuration?
• Is the Removable Storage Manager (RSM) in Windows Server disabled properly? (See ”Windows
2003 known issues” on page 117 for more information on RSM.)
• Is the Test Unit Ready (TUR) command stopped? (See ”Windows 2003 known issues” on page 117 for
more information on TUR.)
• Has connectivity and performance been tested using HP StorageWorks Library and Tape Tools (L&TT)?

Windows 2003 known issues


Target and LUN shifting
Device binding can be helpful in resolving issues where device targets, and sometimes even LUNs, shift.
For operating systems such as Windows and HP-UX, issues can arise when a given target or LUN changes
in number. This can be caused by something as simple as plugging or unplugging another target (typically
a disk or tape controller) into the SAN. In most cases this can be controlled through the use of good zoning
or persistent binding.
The Windows operating system can still have issues even when zoning and binding are used. Many of
these issues are caused by the way that Windows enumerates devices. Windows enumerates devices as
they are discovered during a scan sequence and assigns device handles such as \\Tape0, \\Tape1, and
so on. The Windows device scan sequence goes in the order of bus, target, and LUN: bus is the HBA PCI
slot, target represents a WWN, and LUN represents a device behind the WWN. Windows scans the
lowest bus first, then a target and its LUNs, then the next target, until no more targets are found on that
HBA; it then moves on to the next HBA and its targets and LUNs. A common cause of device shifting is a
tape device that is busy and cannot respond in time for the OS to enumerate it. Each device after that shifts
up a number. See Table 46 and Table 47.

Table 46 Scenario 1 — All devices accounted for on scan

Bus Target LUN Device name Device in tape library


0 1 0 Changer0 Robot on NSR port 0

0 1 1 Tape0 Drive 1 on NSR port 0

0 1 2 Tape1 Drive 2 on NSR port 0

0 2 1 Tape2 Drive 3 on NSR port 1

0 2 2 Tape3 Drive 4 on NSR port 1



Note that target persistency in the Emulex lputilnt.exe or HBAnyware utilities, or in QLogic's SAN Surfer
utility, helps to ensure that NSR port 0 remains target ID 1 and NSR port 1 remains target ID 2. The same
applies to LUN binding in the Emulex full port driver utility.
Some backup applications call the tape device by using the Windows device name. As noted, the device
name may shift and cause a problem for the backup application. Some applications monitor for this
condition and adjust accordingly. Other applications must wait for a reboot and scan of devices, or the
application must be manually reconfigured to match the current device list.
Neither of the binding utilities affects Windows device numbering. For example, if the server boots and
Drive 2 on NSR port 0 is busy or offline when the server scans for devices, the handles will be as follows:

Table 47 Scenario 2 — A device busy or offline during scan

Bus Target LUN Device name Device in tape library


0 1 0 Changer0 Robot on NSR port 0

0 1 1 Tape0 Drive 1 on NSR port 0

0 1 2 Busy or off-line

0 2 1 Tape1 Drive 3 on NSR port 1

0 2 2 Tape2 Drive 4 on NSR port 1

Note that Drive 3 and Drive 4 have different Windows device names.

NOTE: Some vendor applications use device serialization and are not affected by LUN shifting.

Interop issues with Microsoft Windows persistent binding for tape LUNs
Windows Server 2003 provides the ability to enable persistence of symbolic names assigned to tape LUNs
by manually editing the Windows registry. Symbolic name persistence means that tape devices will be
assigned the same symbolic name across reboot cycles, regardless of the order in which the operating
system actually discovers the device. This feature was originally released by Microsoft as a stand-alone
patch and was later incorporated into SP1 (see http://www.microsoft.com/ and search for KB873337 for
details). The persistence registry key is as follows:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Tape\Persistence
Persistence=1   symbolic tape names are persistent
Persistence=0   symbolic tape names are non-persistent
Persistence is disabled by default. When you enable persistence, symbolic tape names (also referred to as
logical tape handles) change significantly. For example, \\.\Tape0 becomes \\.\Tape2147483646 .
The new symbolic tape name is not configurable. Some applications are unable to correctly recognize and
configure devices that have these longer persistent symbolic names. Applications known to have issues with
this device naming convention are all versions of HP Library and Tape Tools up to and including version
4.2 SR1a and EMC NetWorker v7.3 and later. HP L&TT is expected to release an updated version to
correct this issue later in 2007; EMC's patch schedule for NetWorker is unknown.
As a workaround, persistent binding of Fibre Channel port target IDs, enabled through the Fibre Channel
host bus adapter utilities (such as Emulex lputilnt, HBAnyware, and QLogic SAN Surfer) can provide some
benefit. Target ID binding assures that targets are presented in a consistent manner but cannot guarantee
consistent presentation of symbolic tape names.
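If symbolic name persistence is required despite the application caveats above, the value can be set from a command prompt run with administrative privileges. This is a sketch based on the registry path listed above; set the value back to 0 to revert.
C:\> reg add HKLM\SYSTEM\CurrentControlSet\Control\Tape /v Persistence /t REG_DWORD /d 1 /f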



Tape drive polling
The Windows Removable Storage Manager service (RSM) polls tape drives on a frequent basis. Windows’
built-in backup software (NTBACKUP) relies on the RSM polling to detect media changes in the tape drive.
In SAN configurations, this RSM polling can have a significant negative impact on tape drive performance.

NOTE: For SAN configurations, HP strongly recommends disabling RSM polling.

Disabling RSM polling for LTO tape driver


HP’s Ultrium tape driver, Hplto.sys v1.0.3.0, disabled RSM polling as a consequence of the driver
installation. Due to the NTBACKUP issue mentioned above, the v1.0.3.1 version of the driver has
re-enabled RSM polling.
Customers wishing to disable RSM polling should complete the steps below. Future driver releases,
beginning with the 1.0.4.0 driver kit, will not modify this parameter.
1. Install the 1.0.4.0 or later driver.
2. Disable device polling in the system registry by completing one of the two steps below:
a. The driver package contains a DisableAutoRun.reg file that can be used to modify the system registry
and disable RSM polling. While logged into the system as a user with administrative privileges,
double-click the DisableAutoRun.reg file. The system prompts: “Are you sure you want to add the
information in c:\tmp\DisableAutoRun.reg to the registry?” (Path information will vary depending on
where the .reg file is stored.) Responding “Y” or “Yes” modifies the appropriate registry entry to
disable polling.
b. If the driver package does not include the DisableAutoRun.reg file, manually edit the system registry
using RegEdit. While logged into the system as a user with administrative privileges, run RegEdit and
navigate to the following registry key (see the example following the important note below):
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\hplto
To disable RSM polling, edit the AutoRun value found in this key. A value of 0 (zero) indicates that polling
is disabled; a value of 1 indicates that polling is enabled.
3. After completing steps 1 and 2, reboot the affected system.

IMPORTANT: Adding or removing tape drives from the system may cause an older driver inf file to be
re-read, which in turn can re-enable RSM polling. If tape drives are added or removed, check the registry
for proper configuration and, if necessary, repeat step 2 above.
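As an alternative to editing the registry by hand, the AutoRun value described in step 2b can be set and verified from a command prompt run with administrative privileges (a sketch only):
C:\> reg add HKLM\SYSTEM\CurrentControlSet\Services\hplto /v AutoRun /t REG_DWORD /d 0 /f
C:\> reg query HKLM\SYSTEM\CurrentControlSet\Services\hplto /v AutoRun
A reboot is still required afterward, as described in step 3.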

Disabling RSM polling for SDLT tape driver


HP’s SDLT tape driver, v3.0.2.0 or later, has a properties page which allows the user to disable device
polling. To disable polling in the HP SDLT driver:
1. Open the Device Manager.
2. Double-click an SDLT tape drive.
3. Click the DLT tab.
4. Check the Increase performance by disabling support for Microsoft Backup Utility checkbox.
5. Repeat steps 2 through 4 for each SDLT tape drive.
Microsoft provides an additional method for disabling device polling via Registry edits. For more
information, see the Microsoft Knowledge Base article number 842411. Refer to the Microsoft website to
view this article: http://support.microsoft.com/default.aspx?scid=kb;en-us;842411.



SCSIport driver issue
Windows Server 2003 configured with SCSIport mini-port HBA drivers will stop enumerating devices if
there is no LUN 0 presented from any Fibre Channel target. This is a SCSIport driver issue and will affect
both disk and tape. The current workaround is to ensure that a LUN 0 is presented from the target's device
map for any devices to be seen on a specific target. This issue does not exist in Storport configurations.
Library slot count/Max Scatter Gather List issue
The “Max Scatter Gather List” parameter for the Emulex driver must be changed for hosts with a library
that has a slot count greater than 2000. The recommended fix is to set the Windows registry parameter
“MaximumSGList” to a value of 0x40 (64) or greater, which allows the transfer of larger blocks of data.
This may already be a default parameter in newer driver revisions.
Tape.sys block size issue
At the end of March 2005, Microsoft released Service Pack 1 (SP1) for the Windows Server 2003
platform. With SP1, Microsoft changed the driver tape.sys to allow for NTBackup tapes written on 64-bit
Windows 2003 Server to be read or cataloged on 32-bit Windows 2003 Server systems. The change in
the tape.sys driver imposes a limit on the data transfers to a block size of no greater than 64KB.
Performance issues will be more apparent on all high-performance tape drives such as the HP Ultrium 960
and SDLT 600 using the tape.sys driver.
As a result, backup applications using HP tape drivers (hplto.sys, hpdat.sys, hpdltw32.sys,
hpdltx64.sys, and hpdltw64.sys) and tape.sys driver may experience poor performance and/or
failed backup jobs. If experiencing either of these symptoms, check the backup application with the
software vendor to see if the backup application is using the Microsoft tape driver, tape.sys.
Microsoft released a hotfix that replaces the affected tape.sys file with a version that removes the 64KB
limitation on block sizes; see http://support.microsoft.com/kb/907418/en-us. This hotfix was integrated
into Windows 2003 Service Pack 2 (SP2).
Removable Storage Manager issue
Windows Server 2003 has a potential issue between backup applications and the Windows Removable
Storage Manager service (RSM). If RSM is enabled and allowed to discover tape devices and then later
disabled, the RSM database may conflict with the configuration settings for the backup software. Symptoms
may include shifting of logical device handles (such as \\tape0), loss of device access, and failed jobs.
Follow the steps below to remove the entries from the RSM database and completely disable the service:
1. Disconnect the Windows node from the SAN (unplug all Fibre Channel cables).
2. Delete all files and subfolders under the ..\system32\NtmsData folder (the location of the
system32 folder varies with different Windows versions; see the example following these steps).
3. Enable and start the Removable Storage service in the Microsoft Computer Management applet.
4. Access the Removable Storage (My Computer / Manage / Storage / Removable Storage) in the
Microsoft Computer Management applet.
5. Verify that there are no tape or library devices listed (other than the direct attached devices such as
the CD-ROM drive).
6. Stop and disable the Removable Storage service in the Microsoft Computer Management applet.
7. Reconnect the Windows node to the SAN (plug all Fibre Channel cables back in).
8. Reboot.
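For step 2, one way to clear the RSM database from a command prompt is shown below. This assumes the default %SystemRoot% location and simply removes and recreates the NtmsData folder; verify the path on the system before running it.
C:\> rd /s /q %SystemRoot%\system32\NtmsData
C:\> md %SystemRoot%\system32\NtmsData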
Emulex SCSIport driver issue
A potential tape I/O performance issue has been discovered with Windows Server 2003 32-bit systems
configured with Emulex SCSIPort mini-port HBA drivers. This issue only affects those backup applications
using tape block sizes/transfer lengths exceeding 128KB. Emulex SCSIPort mini-port HBA drivers use a
MaximumSGList entry in the Windows registry that defines the maximum data transfer length supported by
the adapter for SCSI commands. Early Emulex drivers, version 5.5.20.8 and older, set this registry entry to
a value of 33 decimal (21 hexadecimal) limiting SCSI transfers to a maximum of 128KB. Beginning with
version 5.5.20.9, this registry entry was increased to 129 decimal (81 hexadecimal) increasing SCSI
transfers to 512KB. The issue surfaces when upgrading an installed Emulex HBA SCSIPort mini-port driver
from driver version 5.5.20.8 and earlier to driver version 5.5.20.10 or later (driver version 5.5.20.9 is



exempt from this issue). During the upgrade, the existing MaximumSGList registry entry is not modified
from 33 to 129. Since it remains at the lower value (33), the SCSI transfer length remains at 128K, thus
possibly affecting performance when large block sizes/transfer lengths are used.
To resolve this issue, modify the MaximumSGList in the registry as follows:

CAUTION: Using the Registry Editor incorrectly can cause serious, system-wide problems. Microsoft
cannot guarantee that any problems resulting from the use of Registry Editor can be solved. Use this tool at
your own risk. Back up the registry before editing.

1. Click Start > Run to open the Run dialog box.


2. Enter regedit to launch the registry editor.
3. Navigate to the following registry key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\lpxnds\Parameters\Device\MaximumSGList
4. Change the REG_DWORD value from 33 to 129.
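Alternatively, the same change can be made from a command prompt using the registry path from step 3 (129 decimal equals 81 hexadecimal); this is a sketch only:
C:\> reg add "HKLM\SYSTEM\CurrentControlSet\Services\lpxnds\Parameters\Device" /v MaximumSGList /t REG_DWORD /d 129 /f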
FC interface controller device driver issue
In Windows environments, the Hardware Discovery Wizard detects the presence of the controller LUN,
identifies it to the user as HP StorageWorks FC interface controller, and prompts for installation of a device
driver. The FC interface controller does not require a device driver and is fully functional without the device
driver. However, until a device entry is created in the System Registry, the Windows operating system (OS)
will classify it as an unknown device. Each time the server is booted, the Hardware Discovery Wizard will
prompt for installation of a driver file. A device information file (.inf) is available, which installs a null
device driver and creates a device entry in the System Registry. The inf file is located on the HP website for
your product or by searching for “HP_CPQ_router_6.zip” or later version. By using this file, a storage
router can be essentially 'registered' with the device manager once, minimizing user interaction.
Not Enough Server Storage is Available to Process this Command—network issue
A network issue was observed where a backup application was unable to make a network connection to a
remote network client. The error that was reported from Windows 2003, through the backup application,
was “Not enough server storage is available to process this command”. This error can occur with various
versions of Windows Server (NT, 2000, 2003) that also has Norton AntiVirus or IBM AntiVirus software
installed. This is a known network issue documented in the following Microsoft Knowledge Base article at
http://support.microsoft.com/default.aspx?scid=kb;en-us;177078
To resolve the issue, a registry edit is required. Please review the steps detailed in the Microsoft Knowledge
Base Article to resolve the issue.
Updating the HP Insight Management Agents for Microsoft Windows using the ProLiant Support
Pack Version 7.70 (or later)
When updating the HP Insight Management Agents for Microsoft Windows using the ProLiant Support
Pack Version 7.70 (or later) the "Disable Fibre Agent Tape Support" option is inadvertently unchecked by
default. This occurs because the previous data in the registry is not saved during the software update from
the Management Agents Version 7.60 (or earlier).



Figure 51 is an example of the Disable Fibre Agent Tape Support option selected prior to the ProLiant
Support Pack Version 7.70 (or later) update installation.

Figure 51 Fibre Agent Tape Support option selected

Follow the link below to see the full advisory:


http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us&objectID=c01229672



NAS and ProLiant Storage Server devices using Microsoft Windows
Storage Server 2003
Server storage differs from direct-attached storage in several ways.
• Stored data can have multiple sets of file attributes due to heterogeneous access (NFS, SMB, Novell,
Macintosh).
• The storage server is typically not supported by a console.
• Storage server vendors are often forced to use a common backup protocol, NDMP, because of a lack of
backup application support for the underlying or customized storage server OS.
• Storage servers using Data Protection Manager (DPM) provide administrative tools for protecting and
recovering data on the file servers in the network.
HP StorageWorks NAS and HP ProLiant Storage Server devices are built on Windows Storage Server
(WSS) 2003. Backup applications supported by Windows 2003 also run on WSS 2003, and the terminal
services on the Microsoft-based storage server support the backup application's GUI. The major backup
vendors are actively testing their applications on the WSS framework.
All tape devices (both SCSI and FC connected) supported by the Windows systems are automatically
available on Windows Storage Server 2003 storage server solutions. Since most storage servers are built
with a specialized version of the OS, some of the device drivers may be outdated or unavailable. Updates
to the storage server product from HP may have more current drivers. These updates are available for
download from the HP server website. Newer tape device drivers are made available from hardware and
software vendors and are used on these platforms. See the following website for the HP Enterprise Backup
Solutions Compatibility Matrix and certified Windows device drivers: http://www.hp.com/go/ebs
Known issues with NAS and ProLiant Storage Servers
Storage servers are highly dependent on networking resources to serve up data. Backup applications are
also highly dependent on networking resources to establish communications with other backup servers to
coordinate the usage of the tape libraries. At times this dependency on networking services can conflict,
and the backup application may lose contact with the other servers, causing backups to fail. Take note of
any extended networking resources used for storage servers that may be shared with backup, such as NIC
teaming, and make sure that communications are not broken.



Tru64 UNIX
The configuration process for Tru64 UNIX can be fairly seamless. When firmware and driver revisions of
the components are at minimum EBS acceptable levels, the integration process is as simple as installing
the hardware and configuring devices to the SAN fabric. This is possible because Tru64 UNIX maintains
driver and configuration parameters in the OS kernel and device database table.
However, if new console firmware is available, HP recommends applying it to Tru64 UNIX servers in an
EBS as outlined below.

NOTE: Refer to the HP Enterprise Backup Solutions Compatibility Matrix for all current and required
hardware, software, firmware, and device driver versions at:
http://www.hp.com/go/ebs.

To ensure correct installation and configuration of the hardware, see ”Installation checklist” on page 115.
Backup software patch
Refer to your backup software vendor to determine if any updates or patches are required.
Configuring the SAN
This procedural overview provides the necessary steps to configure a Tru64 UNIX host into an EBS. Refer to
the documentation provided with each storage area network (SAN) component for additional component
setup and configuration information.
1. Prepare the required rack mounted hardware and cabling in accordance with the specifications listed
in backup software user guide as well as the installation and support documentation for each
component in the SAN.

NOTE: Loading Console firmware from the Console firmware CD may also update the host bus adapter
(HBA) firmware. This HBA firmware may or may not be the minimum supported by EBS. Refer to the HP
Enterprise Backup Solutions Compatibility Matrix for minimum supported HBA firmware revisions.

2. Upgrade the AlphaServer to the latest released Console firmware revision.


Refer to http://www.hp.com/support to obtain the latest Console firmware revision.
a. Boot the server to the chevron prompt (>>>).
b. Insert the Console firmware CD into CD-ROM drive.
c. To see a list of all accessible devices, at the chevron prompt, type:
>>> show dev
d. Obtain the CD-ROM device filename from the device list. Where DQA0 is an example CD-ROM
device filename for the CD-ROM drive, at the chevron prompt type:
>>> Boot DQA0
e. Complete all of the steps in the readme file, as noted in the message prompt.
f. If the minimum supported HBA firmware revision was installed in this step, go to step 3. If the
minimum supported HBA firmware revision was not installed in this step, upgrade at this time. Refer
to the release notes provided with the HBA firmware for installation instructions. To verify the latest
supported revisions of HBA firmware and driver levels for the 32-bit KGPSA-BC, and 64-bit
KGPSA-CA, FCA2354, FCA2384, FCA2684 and FCA2684DC, refer to the HP Enterprise Backup
Solutions Compatibility Matrix at: http://www.hp.com/go/ebs

NOTE: HBA firmware can be upgraded before or after installing Tru64 UNIX. The driver will be installed
after Tru64 UNIX is installed. Contact Global Services to obtain the most current HBA firmware and drivers.



3. Install the Tru64 patch kit.
a. Refer to the release notes and perform the steps necessary to install the most current Tru64 patch kit.
b. The current patch kit installs the current Tru64 UNIX HBA driver. To verify that the installed HBA
driver meets minimum support requirements, refer to the HP Enterprise Backup Solutions
Compatibility Matrix at: http://www.hp.com/go/ebs
4. Upgrade the HBA driver if the HBA does not contain the most current supported driver.
a. Contact Global Services to obtain the latest HBA driver.
b. Upgrading the HBA driver may require building a new kernel. Create a backup copy of the kernel
file (/vmunix) before building a new kernel.
c. If building a new kernel was necessary, reboot the server. If building a new kernel was not
necessary, at a Tru64 UNIX terminal window type:
# hwmgr -scan scsi
5. Verify that the Tru64 UNIX host is logged in to the Fibre Channel switch.
Make sure that the server logs in to the switch as an F-port.
Confirming mapped components
This section provides the commands needed to confirm that the components have been successfully
installed in the SAN.
Installed and configured host bus adapters
To obtain a list of all host bus adapters (HBAs) that are physically installed and configured in the server,
enter the following command in a terminal window on the Tru64 host:
# emxmgr -d
For Tru64 5.1b the following command is recommended:
# hwmgr -show fibre
Visible target devices
• WWN of the router—To view a list of target devices that are visible to each installed HBA on the
Tru64 UNIX host, enter the following command where emx0 is the name of the HBA.
# emxmgr -t emx0
For Tru64 5.1b the following command is recommended:
# hwmgr -show fibre -adapter -topology
Verify that the WWN of the router is included in the list.
• Tape and Robot Devices—To view a list of all of the tape and robotic devices that are visible and
configured by Tru64 UNIX host, enter the following command:
# hwmgr -view dev
Tru64 UNIX dynamically builds all device files. This process may take several minutes.
Configuring switch zoning
If zoning will be used, either by World Wide Name or by port, perform the setup after the HBA has
logged into the fabric. Refer to the Fibre Channel switch documentation for complete switch zone setup
information. Ensure that the World Wide Name (WWN) or port of the Fibre-Channel-to-SCSI bridge is in
the same zone as the WWN or port of the HBA installed in the server.



Installation checklist
To ensure that all components on the SAN are logged in and configured properly, review the following
questions:
• Are all hardware components at the minimum supported firmware revision, including: Server, HBA,
Fibre Channel switch, Fibre Channel to SCSI router, Interface Manager, Command View TL, tape drives,
library robot?
• Are the current Tru64 operating system patches installed, and is the server running the current console
firmware?
• Is the minimum supported HBA driver loaded on the host?
• Are all tape and robotic devices mapped, configured, and presented to the host from the Fibre Channel
to SCSI Router, or Interface Manager?
• Is the tape library online?
• Is the Fibre Channel to SCSI router correctly logged into the Fibre Channel switch?
• Is the host HBA correctly logged into the Fibre Channel switch?
• If multiple Fibre Channel switches are cascaded or meshed, are all ISL ports correctly logged in?
• If using zoning on the Fibre Channel switch, is the server HBA and Tape Library's Fibre Channel to SCSI
router in the same switch zone (either by WWN or by switch port)?
• If using zoning on the Fibre Channel switch, has the zone been added to the active switch
configuration?



Red Hat and SUSE Linux
This section provides instructions for configuring Linux in an Enterprise Backup Solution (EBS) environment.
The configuration process involves:
• Upgrading and installing the EBS hardware components to meet the minimum firmware and device
driver requirements (this includes all supported server, Fibre Channel Host Bus Adapters (FC HBA),
switches, tape libraries, tape drives, and interconnect components).
• Installing the minimum required patches, both for the operating system and the backup application.

NOTE: See the HP StorageWorks Enterprise Backup Solutions Compatibility Matrix for all current and
required hardware, software, firmware, and device driver versions at: http://www.hp.com/go/ebs.

Operating system notes


• In general, EBS configurations are not dependent on a specific kernel errata level. This is not the case
for support of HP disk storage products. EBS follows the recommended minimum kernel errata versions
as documented for HP disk arrays. This support can be found on the Single Point of Connectivity
Knowledge (SPOCK) web site at: http://www.hp.com/storage/spock
Access to SPOCK requires an HP Passport account.
• HP recommends installing the kernel development option (source code) when installing any Linux
server. Availability of source code ensures the ability to install additional device support software that
will be compiled into the kernel.

Installing HBA drivers and tools


Obtain the latest HP-supported Emulex or Qlogic driver kit from the HP support website:
1. From an Internet browser, go to http://www.hp.com.
2. Click Support and Drivers.
3. In the For Product box, search for the driver kit appropriate for your model HBA (for example,
FCA2214, A6826A or FC2143).
4. Select the operating system version of the system in which the HBA is installed.
5. See the driver kit release notes.
6. Install the driver kit by running the Install script included in the kit. HP recommends using the Install
script instead of running individual RPMs to ensure that drivers are installed with the appropriate
options and that the fibre utilities are installed properly.
7. Beginning with the driver kits that included the 8.01.06 QLogic driver and the 8.0.16.27 Emulex driver
(both kits released October 2006), execute the following script found in the fibreutils directory:
# cd /opt/hp/hp_fibreutils/pbl
# ./pbl_inst.sh -i
8. Reboot the server to complete the installation.

NOTE: Step 7 of the above procedure was introduced to eliminate the need to have hp_rescan -a run
as part of /etc/rc.local (or some other boot script). In previous versions of the driver kit, executing the
hp_rescan utility was necessary to work around an intermittent issue with device discovery of SCSI-2 tape
automation products. Executing the pbl script inserts the probe-luns utility into the boot sequence and
identifies and adds SCSI-2 device strings for legacy tape products into the kernel's blacklist. The result is
that all of the supported tape libraries and drives should be discovered correctly without any additional
steps by the user.

9. Verify that the host has successfully discovered all tape drive and library robotic devices using one of
the following methods:
• Review the device listing in /proc/scsi/scsi
• Review the output from the hp_rescan command



• Review the output from the lssg command
If there are devices that have not been successfully discovered, review the HBA driver installation procedure above, particularly step 5, and then proceed to the "Installation checklist" on page 128.
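For example, the discovery of tape and library devices can be reviewed with the following commands (a minimal sketch; the output varies with the configuration):
# cat /proc/scsi/scsi
# hp_rescan -a
# lssg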

HP's fibre utilities, located in the /opt/hp/hp_fibreutils directory, are installed as part of the
driver kit and include the following:

• hp_rescan: forces a rescan of all SCSI buses
• scsi_info: queries a device
• adapter_info: displays HBA information (for example, World Wide Names)
• lssd: lists disk devices (sd device files)
• lssg: lists online and nearline devices (sg device files)
• hp_system_info: lists system configuration information

Additional SG device files


In most environments, the default number of SG device files is sufficient to support all of the required
devices. If the environment is fairly large and the default number of SG device files is fewer than the
combined total of disk, tape, and controller devices being allocated to the server, then additional device
files need to be created. SG device files are preferable to the standard st (SCSI tape) device files because
the st driver's SCSI timeout values may be too short for some tape operations.
To create additional SG device files, perform the following:
# mknod /dev/sgX c 21 X
where X is the number of a device file that does not already exist. For additional command options, see
the mknod man page.
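For example, to create device files /dev/sg16 through /dev/sg31 in a single step (a sketch that assumes these files do not already exist; the sg driver uses character major number 21, and the minor number matches the device file number):
# for X in $(seq 16 31); do mknod /dev/sg$X c 21 $X; done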

Installation checklist
To ensure that all components on the SAN are configured properly, review the following questions:
• Are all hardware components at the minimum supported firmware revision, including: server, HBA,
Fibre Channel switch, interface controller, Interface Manager, CommandView TL, tape drives, library
robot?
• Are there any required Linux operating system patches missing (required patches are noted on the EBS
Compatibility Matrix)?
• Is the supported HBA driver loaded on the host?
• Are all tape and robotic devices mapped, configured, and presented to the host from the interface
controller or Interface Manager?
• Is the tape library online?
• Is the FC-attached tape drive logged into the Fibre Channel switch (F-port)?
• Is the interface controller logged into the Fibre Channel switch (F-port)?
• Is the host HBA correctly logged into the Fibre Channel switch (F-port)?
• If multiple Fibre Channel switches are cascaded or meshed, are all ISL ports correctly logged in?
• If using zoning on the Fibre Channel switch, is the interface controller, or tape drive, configured into the
same switch zone as the host (either by WWPN or by switch port number)?
• If using zoning on the Fibre Channel switch, has the host's zone been added to the active switch
configuration?



Linux known issues
Rewind commands being issued by rebooted Linux hosts
Device discovery that occurs as part of a normal Linux server boot operation issues a SCSI rewind
command to all attached tape drives. For backup applications that do not employ SCSI Reserve and
Release, if the rewind command is received while the tape drive is busy writing, the result is a corrupted
tape header and an unusable piece of backup media.
This issue could manifest itself as either a failed verify operation, a failed restore operation, or the inability
to mount a tape and read the tape header. If a backup verification is not completed, the normal backup
process might not detect that an issue exists. This issue is present today in SUSE Enterprise Linux 9 and will
become an issue for SUSE Enterprise Linux 8 and Red Hat Enterprise Linux 3 and 4 with the introduction of
the QLogic v8.01.06 and Emulex v8.0.16.27 driver kits.

NOTE: Refer to Customer Advisory c00788781 for additional details on the new driver kits and their
associated installation procedure changes.

The scope of this issue includes any EBS configuration that uses a backup application which does not
implement SCSI Reserve and Release and contains at least one Linux host which has shared access to tape
devices. Backup applications known to be affected are HP Data Protector (all versions) and Legato
NetWorker prior to v7.3.
The only recommended work-around for affected applications is to not reboot Linux servers while other
hosts are running backups.

Tape devices not discovered and configured across server reboots


Tape drives disappear from Linux servers after the host reboots. This issue was identified and
communicated in Customer Advisory OT050715_CW01, dated 26 September 2005. Adding the line
"hp_rescan -a" to /etc/rc.d/rc.local resolved the issue. The hp_rescan utility is an HP Host Bus Adapter
(HBA) tool included and installed with the Fibre Channel HBA driver kit.
This issue, which affects Red Hat installations and, intermittently, some SUSE Linux installations, is now
understood to be an interaction issue between the mid-layer SCSI driver and SCSI-2 tape automation
products. The permanent resolution to this issue is to upgrade to the latest Fibre Channel driver kit (QLogic
8.01.06 or Emulex 8.0.16.27, both released in October 2006). This driver kit introduced a revised
installation procedure, incorporating the probe-luns utility into the boot sequence. The revised installation
procedure was outlined earlier in this section and was also communicated in Customer Advisory
c00788781, dated 11 October 2006.

Sparse files causing long backup times with some backup applications
Some Integrity and X64 64-bit HP Servers running the Red Hat Enterprise Linux 3 operating system (or
later) may have longer than expected system backup times or appear to be stalled when backing up the
following file:
/var/log/lastlog
This file is known as a "sparse file." The sparse file may appear to be over a terabyte in size and the
backup software will take a long time to back up this file. Most backup software applications have the
capability to handle sparse files with special sparse command flags. An example of this is the GNU "tar"
utility, which has the "--sparse" (or "-S") flag that can be used with sparse files.
If your backup application does not include support for backing up sparse files, then /var/log/lastlog
should be excluded from the backup.
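For example, GNU tar could back up the file while honoring its sparseness as follows (a sketch only; the archive path shown is arbitrary):
# tar --sparse -cvf /tmp/lastlog.tar /var/log/lastlog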



Novell NetWare
NetWare environment considerations
Novell NetWare servers using the Compaq Fibre Channel host bus adapters must enable the FC interface
controller's Force FCP Response Code setting. This setting is enabled/disabled on a per FC port basis and
can be set via the FC interface controller's browser interface, FC Module Configuration Settings menu.
Without the Force FCP Response Code bit enabled, the Compaq Fibre Channel host bus adapter will not
properly detect any devices behind the FC interface controller.
For details on enabling the Force FCP Response Code bit, refer to the Fibre Channel Module Configuration
sections in the User Interface chapters of the HP StorageWorks e2400-160 FC Interface Controller User
Guide.

NOTE: This setting only applies to the Compaq Fibre Channel host bus adapter. The FCA-2214 host bus
adapter for NetWare does not require this setting and will not operate correctly if the Force FCP Response
Code bit is enabled.

Heterogeneous Windows and NetWare environment limitations


Heterogeneous operating system environments that include Windows 2000/2003 and NetWare require
that the NetWare servers use the FCA-2214 host bus adapters for access to shared tape configurations. As
discussed in the previous section, the Compaq Fibre Channel host bus adapters require the Force FCP
Response Code bit to be enabled in the FC interface controller. This setting is not compatible with any of
the host bus adapters supported in Windows servers.
If the NetWare configuration includes HSG80-based disk storage systems (MA8000, EMA12000,
EMA16000 arrays) and the NetWare servers are using the Compaq Fibre Channel host bus adapters, then
these HBAs are used strictly for access to the shared disk array and FCA-2214 HBAs are used for access to
the shared tape libraries. This type of configuration should utilize separate Fibre Channel switch zones for
disk and tape. If the Compaq Fibre Channel host bus adapters are replaced with FCA-2214s, then it is
permissible to configure disk and tape on the same host bus adapter. This configuration requires Fibre
Channel driver QL2300.HAM v6.51 or later and Secure Path 3.0C SP2 or later.

NOTE: Not all third-party backup applications are supported on all hardware. Refer to the HP
Enterprise Backup Solutions Compatibility Matrix at http://www.hp.com/go/ebs.

FCA2214 configuration settings

NOTE: The FCA2214 was formerly the FCA2210.

When installing the FCA2214 host bus adapter, the following load line and option settings are included in
the server's STARTUP.NCF file:
LOAD QL2300.HAM SLOT=x /LUNS /PORTNAMES /ALLPATHS [/MAXLUNS=x]
• Where SLOT specifies the PCI slot in which the adapter is installed.
• /LUNS directs NetWare to scan for all LUNs during the load of this driver instance. Without this
parameter, NetWare will only scan for LUN 0 devices. The scanned LUN number range is 0 to (x - 1),
where x is specified by the /MAXLUNS=x option. By default, this value is set to 32.
• /PORTNAMES causes NetWare to internally track devices by Fibre Channel port name rather than
node name. This parameter is required when storage LUNs do not have a 1:1 correspondence across
port names.
• /ALLPATHS disables native failover and reports all devices on all adapter paths back to the operating
system.



• /MAXLUNS=x is an optional setting. If the configuration includes more than 32 LUNs behind a single
adapter, this setting must be used to tell the operating system to scan for LUN values beyond LUN 31
(see the example load line following this list).
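Combining the options described above, a typical load line might look like the following (a sketch only; the slot number and LUN count are example values that must match your configuration):
LOAD QL2300.HAM SLOT=2 /LUNS /PORTNAMES /ALLPATHS /MAXLUNS=64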
If the target configuration includes Secure Path multi-path software, the Secure Path installation will modify
the startup options in the STARTUP.NCF file for the FCA2214 adapter. Specifically, the installation script will
add the /PORTDOWN=2 parameter to the load line for the QL2300.HAM driver. This parameter sets the
timeout period for adapter link down and storage port down; the expiration of this timer, set in seconds,
triggers the failover logic in the Secure Path software.
For additional information, refer to the installation guide that accompanies the FCA2214 adapter.
Configuring switch zoning
If zoning will be used, either by World Wide Name or by port, perform the setup after the HBA has
logged into the fabric. Refer to the Fibre Channel switch documentation for complete switch zone setup
information. Ensure that the World Wide Name (WWN) or port of the Fibre-Channel-to-SCSI bridge is in
the same zone as the WWN or port of the HBA installed in the server.
Installation checklist
To ensure that all components on the SAN are logged in and configured properly, review the following
questions:
• Are all hardware components at the minimum supported firmware revision, including: Server, HBA
(QL2300.HAM), Fibre Channel Switch, Fibre Channel to SCSI router, Interface Manager, Command
View TL, tape drives, library robot?
• Are all recommended NetWare support patches installed on the host?
• Is the minimum supported HBA driver loaded on the host?
• Are all tape and robotic devices mapped, configured and presented to the host from the Fibre Channel
to SCSI router, or Interface Manager?
• Is the tape library online?
• Is the Fibre Channel to SCSI router correctly logged into the Fibre Channel switch?
• Is the host HBA correctly logged into the Fibre Channel switch?
• If multiple Fibre Channel switches are cascaded or meshed, are all ISL ports correctly logged in?
• If using zoning on the Fibre Channel switch, is the server HBA and Tape Library's Fibre Channel to SCSI
router in the same switch zone (either by WWN or by switch port)?
• If using zoning on the Fibre Channel switch, has the zone been added to the active switch
configuration?



Sun Solaris
This section provides instructions for configuring Sun Solaris in an Enterprise Backup Solution (EBS)
environment. The configuration process involves:
• Upgrading essential EBS hardware components to meet the minimum firmware and device driver
requirements.
• Installing the following minimum patches for Sun Solaris:
• Solaris 9 requires 112233-12, 112834-06, and 113277-51
• Solaris 10 SPARC requires 118822-36 and 118833-36
• Solaris 10 x86/64 requires 118855-36
• Installing the minimum patch/service pack level support for the backup software
See the following websites to obtain the necessary patches:
For HP: http://www.hp.com/support
For Sun: http://www.sun.com

NOTE: Refer to the HP Enterprise Backup Solutions Compatibility Matrix for all current and required
hardware, software, firmware, and device driver versions at: http://www.hp.com/go/ebs.

See ”Installation checklist” on page 137 to ensure that the hardware and software in the SAN is correctly
installed and configured.
Configuring the SAN
This procedural overview provides the necessary steps to configure a Sun Solaris host into an EBS. See the
documentation provided with each storage area network (SAN) component for additional component
setup and configuration information.
Currently supported adapters for Sun Solaris include Sun, QLogic, and Emulex-branded HBAs. HP
StorageWorks EBS supports all 4Gb and 8Gb HBAs with the Sun native driver. For some models of 2Gb
HBAs, the QLogic qla and Emulex lpfc drivers are supported.
Device binding can help resolve issues that arise when a given target or LUN changes number (device
targets shift). In most cases, this can be controlled through careful zoning or persistent binding. When
using QLogic or Emulex drivers, configuring persistent binding is recommended. For the
Sun native driver, persistent binding is not necessary unless recommended by the backup application
vendor or for an environment where tape devices will be visible across multiple hosts.
For configuring persistent binding with the Sun native driver, see the Sun document Solaris SAN
Configuration and Multipathing Guide at http://docs.sun.com/app/docs/doc/820-1931.
Sun native driver configuration
1. Prepare the required rack-mounted hardware and cabling in accordance with the specifications listed
in the backup software user guide as well as the installation and support documentation for each
component in the SAN.

NOTE: To complete this installation, a root login is required.

2. For Solaris 9, download the current Sun StorEdge SAN Foundation Software (SFS) from
http://www.sun.com/storage/san. Select the following files for download:
• Install_it Script SAN 4.4.x (SAN_4.4.x_install_it.tar.Z)
• Install_it Script SAN 4.4.x Readme (README_install_it.txt)
The README document explains how to uncompress the downloaded file and execute the Install_it
Script.



NOTE: From Sun's site, the Install_it Script is considered an optional download, but does include
all required SFS packages and patches for Solaris 9. The Install_it Script will identify the type of
HBA and version of Solaris before installing the appropriate SFS packages and patches.

3. SFS functionality is included within the Solaris 10 operating system. The Sun native SUNWqlc driver is
included with Solaris 10. For Solaris 10 01/06 or later release, SUNWemlxs and SUNWemlxu driver
packages are included. To obtain SUNWemlx packages, go to Sun’s Products Download page at
http://www.sun.com. Search for “StorageTek Enterprise Emulex Host Bus Adapter Device Driver.”
Install the appropriate patch:
• SUNWqlc on Solaris 10 SPARC, install patch 119130-33 or later
• SUNWqlc on Solaris 10 x86/64, install 119131-33 or later
• SUNWemlx on Solaris 10 SPARC, install patch 120222-31 or later
• SUNWemlx on Solaris 10 x86/64, install patch 120223-31 or later
4. Update the HBA fcode if needed using the flash-upgrade utility included in the appropriate patch.
• SG-XPCI1FC-QF2 (X6767A) and SG-XPCI2FC-QL2 Patch 114873-05 or later
• SG-XPCI2FC-QF2 (X6768A) and SG-XPCI2FC-QF2-Z Patch 114874-07 or later
• SG-XPCI1FC-EM2 and SG-XPCI2FC-EM2 Patch 121773-04 or later
• SG-XPCI1FC-QF4 (QLA2460) and SG-XPCI2FC-QF4 (QLA2462) Patch 123305-04 or later
5. Reboot the server with -r option:
#reboot -- -r
6. Use the cfgadm utility to show the HBA devices:
#cfgadm -al
7. Use the cfgadm utility to configure the HBA devices. “c2” is the HBA device in this example.
#cfgadm -c configure c2
8. Use devfsadm utility to create device files:
#devfsadm
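After devfsadm completes, the new tape device files should appear under /dev/rmt. For example (a sketch; device names vary by configuration):
#ls -l /dev/rmt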
Troubleshooting with the cfgadm utility
• Getting the status of FC devices using cfgadm:
# cfgadm -al
Example output for above command:
Ap_Id Type Receptacle Occupant Condition
c3 fc-fabric connected configured unknown
c3::100000e002229fa9 med-changer connected configured unknown
c3::100000e002429fa9 tape connected configured unknown
c3::50060e80034fc200 disk connected configured unknown
c4 fc-fabric connected configured unknown
c4::100000e0022286ec tape connected configured unknown
c4::100000e0024286ec tape connected configured unknown
c4::50060e80034fc210 disk connected configured unknown
This output shows a media changer at LUN 0 for the 100000e002229fa9 world wide name, and
tape and disk devices at LUN 0 for other world wide names. The devices are connected and have been
configured and are ready for use. “cfgadm -al -o show_FCP_dev” can be used to show the devices
for all LUNs of each Ap_Id.
• Fixing a device with an “unusable” condition:
If the condition field of a device in the cfgadm output is “unusable,” then the device is in a state such
that the server cannot use the device. This may have been caused by a hardware issue. In this case, do
the following to resolve the issue:
1. Resolve the hardware issue so the device is available to the server.
2. After the hardware issue has been resolved, use the cfgadm utility to verify device status and to
mend the status if necessary:



• Use cfgadm to get device status:
# cfgadm -al
• For a device that is “unusable” use cfgadm to unconfigure the device and then re-configure the
device. For example (this is an example only, your device world wide name will be different):
# cfgadm -c unconfigure c4::100000e0022286ec
# cfgadm -f -c configure c4::100000e0022286ec
• Use cfgadm again to verify that the condition of the device is no longer “unusable”:
# cfgadm -al
QLogic driver configuration for QLA2340 and QLA2342
Substitute your device name as appropriate.
1. Prepare the required rack-mounted hardware and cabling in accordance with the specifications listed
in the backup software user guide as well as the installation and support documentation for each
component in the SAN.
2. After installing the HBA, verify proper hardware installation. At the OpenBoot PROM ok prompt, type:
show-devs
If the HBA is installed correctly, an entry similar to the following is displayed (the path will vary slightly
depending on your configuration):
/pci@1f,4000/QLGC,qla@5
Verify the HBA hardware installation in Solaris at the shell prompt by typing:
prtconf -v | grep QLGC
If the HBA is installed correctly, and the driver has not yet been installed, a device similar to the
following is displayed:
QLGC,qla (driver not attached)

NOTE: To complete this installation, log in as root.

3. After installing the HBA, install the device driver. The driver comes with the HBA or can be obtained
from http://www.qlogic.com.
4. To ensure that no previous device driver was installed, at the prompt, type:
#pkginfo | grep QLA2300
If no driver is installed, a prompt is returned. If there is a driver installed, verify that it is the correct
revision by entering:
#pkginfo -l QLA2300
If the driver needs to be removed, enter:
#pkgrm <package name>
5. Install the new driver. Navigate to the directory where the driver package is located and at the prompt,
type:
#pkgadd -d ./<package name>
6. Make sure that the driver is installed. At the prompt, type:
#pkginfo -l QLA2300
7. Look at /kernel/drv/qla2300.conf (the device configuration file) to make sure the configuration is
appropriate.
Verify that Fibre Channel tape support is enabled. An example follows:
hba0-fc-tape=1;
Persistent binding can be configured by binding SCSI target IDs to the Fibre Channel world wide port
name of the router or tape device. To set up persistent binding, enable the persistent binding only option.
An example follows.
hba0-persistent-binding-configuration=1;



After enabling the persistent binding only option, router or tape drive world wide port names (WWPNs)
are bound to SCSI target IDs. For example, if a router has a WWPN of 1111222233334444 and is visible to hba0,
bind it to SCSI target ID 64 as follows:
hba0-SCSI-target-id-64-fibre-channel-port-name = “1111222233334444”;
Emulex driver configuration for LP10000 and LP10000DC
Substitute your device name as appropriate. The example shown is for a dual-port FC adapter connected
to the fabric.
1. Prepare the required rack-mounted hardware and cabling in accordance with the specifications listed
in the backup software user guide as well as the installation and support documentation for each
component in the SAN.

NOTE: To complete this installation, a root login is required.

2. After installing the HBA, verify proper hardware installation. At the OpenBoot PROM ok prompt, type:
show-devs
If the HBA is installed correctly, devices similar to the following will be displayed (the path will vary
slightly depending on your configuration).
/pci@8,700000/fibre-channel@1,1
/pci@8,700000/fibre-channel@1
Verify the HBA hardware installation in Solaris at the shell prompt by typing:
prtconf -v | grep fibre-channel
If the HBA is installed correctly, devices similar to the following are displayed:
fibre-channel (driver not attached)
fibre-channel (driver not attached)
3. Install the HBA device driver. The driver can be obtained from http://www.emulex.com.
4. To ensure that no previous device driver was installed, at the prompt, type:
#pkginfo -l lpfc
If no driver is loaded, a prompt is returned. If there is a driver installed, verify that it is the correct
revision. If the driver removal is required, enter:
#pkgrm <package name>
5. Install the new driver. Navigate to one directory level above where the driver package directory is
located and at the prompt, type:
#pkgadd -d .
Select the lpfc package.
6. Make sure that the driver is installed. At the prompt, type:
#pkginfo -l lpfc
7. Verify the HBA driver attached by typing:
#prtconf -v | grep fibre-channel
If the driver is attached, devices similar to the following are displayed:
fibre-channel, instance #0
fibre-channel, instance #1
8. Look at /kernel/drv/lpfc.conf (the device configuration file) to make sure the configuration is
appropriate.
For World Wide Port Name binding, add the following line:
fcp-bind-method=2;
For FCP persistent binding, the setting fcp-bind-WWPN binds a specific World Wide Port Name to a
target ID. The following example shows two NSR FC ports zoned in to the second interface on the HBA:
fcp-bind-WWPN="100000e0022286dd:lpfc1t62",
"100000e002225053:lpfc1t63";
In this example, WWPN 100000e0022286dd is bound to SCSI target ID 62 and WWPN
100000e002225053 is bound to SCSI target ID 63 on interface lpfc1.



NOTE: The interface definitions appear in /var/adm/messages. The interfaces lpfc0 and lpfc1
map to the following devices:
lpfc0 is /pci@8,700000/fibre-channel@1
lpfc1 is /pci@8,700000/fibre-channel@1,1

NOTE: Refer to comments within the lpfc.conf for more details on syntax when setting
fcp-bind-WWPN. Add the following to item 2 within section “Configuring Sun Servers for tape
devices on SAN”:
For LP10000 adapter:
name="st" class="scsi" target=62 lun=0;
name="st" class="scsi" target=62 lun=1;
name="st" class="scsi" target=62 lun=2;
name="st" class="scsi" target=62 lun=3;

Configuring Sun Servers for tape devices on SAN

NOTE: The information in the following examples, such as target IDs, paths, and LUNs, are examples
only. The specific data for your configuration may vary.

NOTE: This section applies to Solaris 9 and Solaris 10 prior to Update 5 (05/08). Configuration of the
st.conf file is no longer required with Solaris 10 Update 5 (05/08) or later. Tape devices will be
discovered automatically after a reboot.

1. Edit the st.conf file for the type of devices to be used and also for binding. The st.conf file should
already reside in the /kernel/drv directory. Many of the lines in the st.conf file are commented out.
To turn on the proper tape devices, uncomment or insert the appropriate lines in the file.
tape-config-list=
"COMPAQ  DLT8000", "Compaq DLT8000", "DLT8k-data",
"COMPAQ  SuperDLT1", "Compaq SuperDLT", "SDLT-data",
"COMPAQ  SDLT320", "Compaq SuperDLT 2", "SDLT320-data",
"HP      SDLT600", "HP SDLT600", "SDLT600-data",
"HP      Ultrium 4-SCSI", "HP Ultrium LTO 4", "LTO4-data",
"HP      Ultrium 3-SCSI", "HP Ultrium LTO 3", "LTO3-data",
"HP      Ultrium 2-SCSI", "HP Ultrium LTO 2", "LTO2-data",
"HP      Ultrium 1-SCSI", "HP Ultrium LTO 1", "LTO1-data";

NOTE: The tape-config-list is composed of a group of triplets. A triplet is composed of the Vendor ID +
Product ID, the pretty print name, and the data property name. The syntax is very important. There must be
eight characters for the vendor ID (COMPAQ or HP) before the product ID (DLT8000, SDLT600, Ultrium,
and so on). In the list above, there are exactly two spaces between "COMPAQ" and "DLT8000", and there
are exactly six spaces between "HP" and "Ultrium". The order of the triplets is also important for Ultrium
tape drive discovery. The pretty print value is displayed in the boot log /var/adm/messages for each tape
drive discovered that matches the associated vendor ID + product ID string.



Below the tape-config-list is a list of data property names used to configure specific settings for each
device type.
DLT8k-data = 1,0x38,0,0x39639,4,0x1a,0x1b,0x41,0x41,3;
SDLT-data = 1,0x38,0,0x39639,4,0x90,0x91,0x90,0x91,3;
SDLT320-data = 1,0x36,0,0x39639,4,0x92,0x93,0x92,0x93,3;
SDLT600-data = 1,0x36,0,0x39639,4,0x92,0x93,0x92,0x93,3;
LTO1-data = 1,0x3b,0,0x29639,4,0x00,0x00,0x00,0x40,3;
LTO2-data = 1,0x3b,0,0x29639,4,0x00,0x00,0x00,0x42,3;
LTO3-data = 2,0x3b,0,0x38659,4,0x44,0x44,0x44,0x44,3,60,1200,600,1200,600,600,18000;
LTO4-data = 2,0x3b,0,0x38659,4,0x46,0x46,0x46,0x46,3,60,1200,600,1200,600,600,18000;

Some data protection applications handle the SCSI reservation of the tape drives and others require the
operating system to do so. For a complete description of setting SCSI reservation, see the options bit
flag ST_NO_RESERVE_RELEASE on the man page for “st”.

The ST_NO_RESERVE_RELEASE flag is part of the fourth parameter in the data property name. For
LTO1-data and LTO2-data, a value of 0x9639 means the operating system handles reserve/release and
a value of 0x29639 means the application handles reserve/release. For LTO3-data and LTO4-data, a
value of 0x18659 means the operating system handles reserve/release and a value of 0x38659
means the application handles reserve/release.
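For example, to have the operating system handle reserve/release for LTO3 drives, the fourth value in the LTO3-data entry would change from 0x38659 to 0x18659 (a sketch based on the values listed above; all other values are unchanged):
LTO3-data = 2,0x3b,0,0x18659,4,0x44,0x44,0x44,0x44,3,60,1200,600,1200,600,600,18000;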
2. Define tape devices for other adapters by adding lines similar to the following to the SCSI target
definition section of the st.conf file.
Example for QLogic adapters:
name="st" class="scsi" parent="/pci@1f,4000/QLGC,qla@1" target=64 lun=0;
name="st" class="scsi" parent="/pci@1f,4000/QLGC,qla@1" target=64 lun=1;

NOTE: The parent is the location of the HBA in the /devices directory.

NOTE: The target can be chosen; however, it must not conflict with other target bindings in the st.conf and
sd.conf files.

3. Perform a reconfiguration reboot (reboot -- -r) on the server and verify that the new tape devices
are seen in /dev/rmt.
Configuring switch zoning
If zoning will be used, either by World Wide Name or by port, perform the setup after the HBA has
logged into the fabric. Refer to the Fibre Channel switch documentation for complete switch zone setup
information. Ensure that the World Wide Name (WWN) or port of the Fibre-Channel-to-SCSI bridge is in
the same zone as the WWN or port of the HBA installed in the server.
Installation checklist
To ensure that all components on the SAN are logged in and configured properly, review the following
questions:
• Are all hardware components at the minimum supported firmware revision, including: Server, HBA,
Fibre Channel switch, Fibre Channel to SCSI router, Interface Manager, Command View TL, tape drives,
library robot?
• Are all recommended Solaris patches installed on the host?
• Is the minimum supported HBA driver loaded on the host?
• Are all tape and robotic devices mapped, configured and presented to the host from the Fibre Channel
to SCSI router, or Interface Manager?
• Is the tape library online?
• Is the Fibre Channel to SCSI router correctly logged into the Fibre Channel switch?
• Is the host HBA correctly logged into the Fibre Channel switch?



• If multiple Fibre Channel switches are cascaded or meshed, are all ISL ports correctly logged in?
• If using zoning on the Fibre Channel switch, is the server HBA and Tape Library's Fibre Channel to SCSI
router in the same switch zone (either by WWN or by switch port)?
• If using zoning on the Fibre Channel switch, has the zone been added to the active switch
configuration?

Solaris known issues


ESL E-Series or EML E-Series library power cycle
ESL E-Series or EML E-Series library power cycle causes unexpected conditions on Solaris 9 and 10 servers
using FC host bus adapters and the SUNWemlxs, SUNWemlxu, or SUNWqlc drivers. The FC HBAs place
the tape devices in an “unusable” or “failed” condition upon initial discovery shortly after the FC link is
established. See Customer Advisory c00965686 for additional details and workaround.
HP LTO3 tape drive not recognized when MPxIO is enabled on Solaris 10
See Sun Bug ID 6352224 at http://sunsolve.sun.com for details and workaround. This problem was only
reported on Solaris 10 SPARC and Solaris 10 x86/64. Patches 120011-14 for Solaris 10 SPARC and
120012-14 for Solaris 10 x86/64 fix this bug. These patches are included in the recommended patch
clusters.



IBM AIX
The configuration process for IBM AIX in an EBS environment involves:
• Upgrading essential EBS hardware components to meet the minimum firmware and device driver
requirements
• Installing the minimum patch level support for:
• IBM AIX
• Backup software
Refer to the following websites to obtain the necessary patches:
For HP: http://www.hp.com/support
For IBM: http://www.ibm.com

NOTE: Refer to the HP Enterprise Backup Solutions Compatibility Matrix for all current and required
hardware, software, firmware, and device driver versions at http://www.hp.com/go/ebs.

Refer to the Quick Checklist at the end of this section to ensure proper installation and configuration of all
of the hardware and software in the SAN.
Configuring the SAN
This procedural overview provides the necessary steps to configure an AIX host into an EBS. Refer to the
documentation provided with each storage area network (SAN) component for additional component
setup and configuration information.

NOTE: To complete this installation, log in as root.

Prepare the required hardware and cabling in accordance with the specifications listed in chapter 2 of this
guide as well as the installation and support documentation for each component in the SAN.
IBM 6228, 6239, 5716, or 5759 HBA configuration

NOTE: See the EBS compatibility matrix concerning IBM AIX OS version support for these Host Bus
Adapters.

1. Install the latest maintenance packages for your version of AIX. This ensures that the latest drivers for the
6228/6239/5716/5759/5773/5774 HBA are installed on your system. For AIX 4.3.3, the latest
packages must be installed because the base OS does not contain drivers for the newer HBAs.
2. Install the IBM 6228/6239/5716/5759/5773/5774 HBA, and restart the server.
3. Ensure that the card is recognized. At the prompt, type:
#lsdev -Cc adapter
There is a line in the output similar to the following:
fcs0 Available 1D-08 FC Adapter
If the adapter is not recognized, check that the correct HBA driver is installed:
6228: #lslpp -L|grep devices.pci.df1000f7
6239: #lslpp -L|grep devices.pci.df1080f9
5716: #lslpp -L|grep devices.pci.df1000fa
5759: #lslpp -L|grep devices.pci.df1000fd
5773: #lslpp -L|grep devices.pciex.df1000fe
5774: #lslpp -L|grep devices.pciex.df1000fe
There are lines in the output for lslpp similar to the following for a 6239 HBA:
devices.pci.df1080f9.diag 5.1.0.1 C F PCI-X FC Adapter Device
devices.pci.df1080f9.rte 5.1.0.1 C F PCI-X FC Adapter Device



For AIX 5.1, the device drivers may need to be installed separately from the Maintenance pack. See the
IBM installation guide for the 6239.
4. For information about the HBA, such as the WWN, execute the following command:
#lscfg -vl fcs0

The output will look similar to the following:


DEVICE LOCATION DESCRIPTION

fcs0 1H-08 FC Adapter

Part Number.................00P4295
EC Level....................A
Serial Number...............1E3180B22A
Manufacturer................001E
FRU Number..................00P4297
Device Specific.(ZM)........3
Network Address.............10000000C9345CF9
ROS Level and ID............02E01871
Device Specific.(Z0)........2003806D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FF601231
Device Specific.(Z5)........02E01871
Device Specific.(Z6)........06631871
Device Specific.(Z7)........07631871
Device Specific.(Z8)........20000000C9345CF9
Device Specific.(Z9)........HS1.81X1
Device Specific.(ZA)........H1D1.81X1
Device Specific.(ZB)........H2D1.81X1
Device Specific.(YL)........U0.1-P2-I2/Q1

5. After the HBA has successfully logged into the SAN fabric and the necessary zoning is configured,
configure the HBA and devices within the fabric. At the prompt, type:
#cfgmgr -l <devicename> -v
Within the command, <devicename> is the name from the output of the lsdev command in step 3,
such as fcs0.
6. To ensure all tape device files are available, at the prompt, type:
#lsdev -HCc tape
7. By default, AIX creates tape devices with a fixed block length. To change the devices to have variable
block lengths, at the prompt, type:
#chdev -l <tapedevice> -a block_size=0
Configuration of the tape devices (where tape devices are rmt0, rmt1, and so on) is complete.
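For example, to configure the first two tape devices for variable block lengths (a sketch; the device names on your system may differ):
#chdev -l rmt0 -a block_size=0
#chdev -l rmt1 -a block_size=0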

NOTE: HP tape drives (SDLT and LTO) use the IBM host tape driver. When properly configured, a device
listing will show the tape device as follows:

For IBM native HBAs: Other FC SCSI Tape Drive


For non-IBM native HBAs: Other SCSI Tape Drive



Configuring switch zoning
If zoning will be used, either by World Wide Name or by port, perform the setup after the HBA has
logged into the fabric. Refer to the Fibre Channel switch documentation for complete switch zone setup
information. Ensure that the World Wide Name (WWN) or port of the Fibre-Channel-to-SCSI bridge is in
the same zone as the WWN or port of the HBA installed in the server.
Installation checklist
To ensure that all components on the SAN are logged in and configured properly, review the following
questions:
• Are all hardware components at the minimum supported firmware revision, including: Server, HBA,
Fibre Channel switch, Fibre Channel to SCSI router, Interface Manager, Command View TL, tape drives,
library robot?
• Are all recommended AIX maintenance packages installed on the host?
• Is the minimum supported HBA driver loaded on the host?
• Are all tape and robotic devices mapped, configured and presented to the host from the Fibre Channel
to SCSI router, or Interface Manager?
• Is the tape library online?
• Is the Fibre Channel to SCSI router correctly logged into the Fibre Channel switch?
• Is the host HBA correctly logged into the Fibre Channel switch?
• If multiple Fibre Channel switches are cascaded or meshed, are all ISL ports correctly logged in?
• If using zoning on the Fibre Channel switch, is the server HBA and Tape Library's Fibre Channel to SCSI
router in the same switch zone (either by WWN or by switch port)?
• If using zoning on the Fibre Channel switch, has the zone been added to the active switch
configuration?
Installing backup software and patches
After all components on the SAN are logged in and configured, the system is ready for the installation of
any supported backup software. Refer to the installation guide for your particular software package, or
contact the vendor for detailed installation procedures and requirements.
After installing the backup software, check with the software vendor for the latest updates and patches. If
any updates or patches exist for your backup software, install them now.

5 Backup and recovery of Virtual Machines
Virtual Machine software is used for partitioning, consolidating, and managing computing resources,
allowing multiple, unmodified operating systems and their applications to run in virtual machines that share
physical resources. Each virtual machine represents a complete system with processors, memory,
networking, storage, and BIOS.

Table 48 Virtual machine backup methods

The backup methods compared are: (1) VM image backup from host to local tape, (2) host VM file
backups to local tape, (3) VM image LAN backup to media host, (4) host VM LAN backup to media host,
and (5) off-host proxy backup server.

• Requires file-level recovery: (1) No; (2) Yes; (3) No; (4) Yes; (5) VMware = Yes (VCB), Hyper-V = Yes (VSS), HPVM = Yes (ZDB)
• Application data to back up: (1) Yes (cold); (2) Yes (hot/cold); (3) Yes (cold); (4) Yes (hot/cold); (5) VMware = Yes (VCB), Hyper-V = Yes (VSS), HPVM = Yes (ZDB)
• Backup window required: (1) Yes; (2) Yes; (3) Yes; (4) Yes; (5) No
• Large number of VMs to back up: (1) Not suggested; (2) Not suggested; (3) Not suggested; (4) Not suggested; (5) Yes

ZDB = Snapshot-based backup on an off-host proxy server
VCB = VMware Consolidated Backup

NOTE: Be sure to do the following:


• See the backup software documentation for supported virtual machine backup methods.
• See the virtual machine documentation for supported backup devices.
• See the EBS Compatibility Matrix for backup application VM support and VM tape support.

HP StorageWorks EBS VMware backup and recovery strategy

NOTE: ESX server and VMs do not support an FC or iSCSI connected tape device. A proxy server can be
used to manage SAN or iSCSI devices.

• VMware Consolidated Backup (VCB) offloads backup responsibility from ESX servers to a dedicated
backup proxy (or proxies). This reduces the load on ESX servers. VCB provides full-image backup and
restore capabilities for all virtual machines and file-based backups for virtual machines running the
Microsoft Windows operating systems.



• VMs can also be set up for LAN backup the same as a regular client. See backup software
documentation for details.
• For complete details on Virtual Machine backup and recovery including VCB, LAN-based and local
media server backups, see HP StorageWorks EBS Solutions guide for VMware Consolidated Backup at
www.hp.com/go/ebs under the EBS whitepapers link.

• For complete details on Zero Downtime Backup of an Oracle database running on a VMware virtual
machine, see the HP StorageWorks Oracle on VMware ZDB Solution implementation guides at
www.hp.com/go/ebs, under the EBS whitepapers link.

NOTE: VMware datastores residing on HP StorageWorks EVA storage arrays should use the "Windows
host profile mode" for the VCB proxy server.

HP Integrity Virtual Machines (Integrity VM)


HP supports, certifies, and sells HP Integrity Virtual Machines (HPVM) Virtualization software on HP
Integrity servers.

HPVM is an application installed on an HP-UX server and allows multiple, unmodified operating systems
(HP-UX, Windows and Linux) and their applications to run in virtual machines that share physical
resources.

The HP Virtual Server Environment (VSE) for HP Integrity provides an automated infrastructure that can
adapt in seconds with mission-critical reliability. HP VSE allows you to optimize server utilization in real
time by creating virtual servers that can automatically grow and shrink based on business priorities and
service.

NOTE: The HP Integrity VM host and VMs do support FC SAN connected tape and Virtual Library
Systems (VLS) devices.

• Off-host backups using HP storage array hardware mirroring or snapshots can be used to shorten
backup windows and offload the resources required for backup.

• VMs can also be set up for LAN backup the same as a regular client or media host. See backup
software documentation for details.

• For complete details on Virtual Machine backup and recovery including Off-host, LAN-based and local
media server backups, see HP StorageWorks EBS Solutions Guide for HP Integrity Virtual Machine
Backup at www.hp.com/go/ebs under the EBS whitepapers link.



6 Management tools
HP has developed several important tools to assist with managing different devices within the EBS.

HP Storage Essentials Storage Resource Management Software


The award-winning HP Storage Essentials Storage Resource Management Enterprise Edition Software Suite
simplifies management of heterogeneous enterprise storage (DAS, SAN, and NAS) with powerful SRM
provisioning, storage metering, customized reporting, business application and backup monitoring, and
end-to-end performance management. Storage Essentials SRM Software integrates with HP Systems Insight
Manager to provide unified server and storage management, while integration with HP Service Desk
software delivers integrated end-to-end IT services management. Add HP Data Protector and you have a
powerful solution to monitor the overall status of the entire backup process as well as visualize backup
configuration and recoverability. The architecture is built on industry standards, allowing for quick
deployment, the agility to manage, monitor, and react to change, and investment protection that minimizes
the cost of adopting and upgrading storage resources.
HP SRM Enterprise Edition Software enhancements include: HP SRM Performance Pack Software, which
helps EVA administrators quickly visualize the performance picture of their entire EVA SAN; auto-discovery
and monitoring of host clusters, which increases availability; support for the InterSystems Caché database,
which extends SRM benefits to the healthcare and financial industries; greater scalability for larger SANs,
so your business does not have to wait to grow; HP SRM Backup Manager enhancements that improve
efficiency and reporting; and SAP ACC integration, which helps administrators efficiently and dynamically
manage and visualize SAN resource allocation and movement in mySAP. If you include HP Operations for
Windows/UNIX, you have a single management solution that spans Windows and UNIX applications,
networks, servers, infrastructure, and storage: a single unified interface that puts you in control while
improving the overall quality of storage services offered to end users.

Features and benefits


• Improve administrative efficiency, reduce costs and complexity—HP Storage Essentials
SRM Software Suite integration with HP Systems Insight Manager delivers unified server and storage
management, improving efficiency via shared services such as single sign-on, security administration,
asset management, reporting, discovery, licensing, storage resource management, and event
management.
• Easy software integration increases productivity and manageability—HP Storage
Essentials SRM Software Suite integration with HP software via Smart Plug-In (SPI) delivers integrated IT
services management in areas such as provisioning, operations, reporting, and chargeback, and
enables management of an end-to-end Microsoft Exchange and UNIX environment when included with
HP Operations for Windows or UNIX.
• Automation of network discovery for quick access to the right data—Big picture
visualization automatically discovers and maps the storage network (DAS, SAN, and NAS) and backup
topology by pictorially displaying the objects, paths, and zones between the application and the LUN on
which the corresponding data resides.
• Increases business application availability—Monitor and report on business applications
including Exchange, Oracle, Sybase, MS SQL, and InterSystems Caché, and monitor host cluster
configurations including Microsoft Server Cluster and VERITAS Cluster (Solaris), along with their
infrastructure dependencies, to facilitate faster end-to-end root cause analysis.
• End-to-end file and performance management—Monitor end-to-end performance from
application objects down to storage subsystems, including file identification. Monitors and reports on
the application, host, HBA, switch, disk subsystem, and now HP EVA. Enables quick detection of
performance bottlenecks, so service level agreements (SLAs) are maintained or exceeded.
• Industry-standard-based architecture—Designed on the Storage Management Initiative
Specification (SMI-S), the industry standard for storage network management based on the Common
Information Model (CIM) and Web-Based Enterprise Management (WBEM). Enables support for
multi-vendor storage infrastructures and ensures investment protection.



• Customizable Reports—Extensive set of reports can be customized and scheduled for automatic
e-mail distribution, satisfying requirements for capacity and performance management, executive
dashboards and planning, asset management, and chargeback. Supports a wide variety of reporting
formats, including HTML, XML, Microsoft Excel, and PDF.
• Unified tool to manage XP arrays, Servers and Infrastructure—Helps IT staff manage
heterogeneous SANs, infrastructure and HP XP arrays from a single unified interface. This simplifies
management and reduces TCO since traditional device managers like HP Command View XP are
usually not required. HP SRM XP Provider software is included with SRM Enterprise Edition Software.
• Integrated NAS and infrastructure backup management—Analyze the impact of configuration
changes and file-sharing activities for HP ProLiant Storage Servers, HP Enterprise File System (EFS)
Clustered Gateway, and NetApp systems. Monitor the overall status of the backup process and
recoverability with a single view of backup activities for HP Data Protector and Symantec NetBackup software.
• Efficiently manage, visualize SAN resource allocation and movement for SAP
ACC—HP Storage Essentials SRM Application Integration Software for SAP Adaptive Computing
Controller (ACC) helps customers with SAP ACC seamlessly move and visualize SAN attached storage
and infrastructure resources between physical servers when SAP application services requests more
storage. Ideal for SAP test environments.

HP Systems Insight Manager (HP SIM)


HP SIM is the foundation for HP's unified infrastructure management strategy. It provides hardware level
management for HP ProLiant, Integrity, and HP 9000 servers; HP BladeSystems; and HP StorageWorks
MSA, EVA, and XP storage arrays. HP SIM also provides management of non-HP gear through industry
standards.
HP SIM alone is an effective unified infrastructure management tool. When used in conjunction with
Essentials plug-ins, it becomes a comprehensive, easy-to-use platform that enables organizations to
holistically control their Windows, HP-UX, and Linux environments.

Features:
• Delivers fault monitoring, inventory reporting, and configuration management for ProLiant, Integrity,
and HP 9000 systems as well as HP StorageWorks MSA, EVA, XP arrays and various third-party arrays
via a web-based GUI or command line.
• Provides base-level management of HP clients and printers. Can be extended with HP client
management software and HP Web JetAdmin for more advanced management capabilities.
• Delivers notification of and automates response to pre-failure or failure conditions through automated
event handling.
• Facilitates secure, scheduled execution of OS commands, batch files, and custom or off-the-shelf
applications across groups of Windows, Linux, or HP-UX systems.
• Enables centralized updates of BIOS, drivers, and agents across multiple ProLiant servers with system
software version control.
• Enables secure management through support for SSL, SSH, OS authentication, and role-based security.
• Installs on Windows, HP-UX, and Linux.

Key benefits:
HP Systems Insight Manager, HP's unified server-storage management tool, helps maximize IT staff
efficiency and hardware platform availability for small and large server deployments alike. It is designed
for end-user setup, and its modular architecture enables systems administrators to plug in additional
functionality as needed.
• Unified server and storage management
• Improves efficiency of the IT staff
• Extensibility through plug-in applications
• Integrate new technologies in response to changing conditions



Management agents
Many management agents are able to keep track of nearline storage devices through the use of in-band
polling. In-band polling is done by requesting data from the storage device in-band, that is, over the same
path that the data travels. Typical SCSI commands that are used to gather this data are Inquiry and Log Sense
commands.

Known issues
Inquiry commands report information such as make, model, and serial number, while Log Sense reports
other health statistics. Because backups on SANs are multi-hosted, these polling agents can flood the tape
controller or the tape and robotic devices with commands from every host that can see the devices, causing
them to become unstable. HP Fibre Channel interface controllers have an inquiry caching feature that
minimizes the impact of Inquiry commands in backup/restore environments.
Log Sense commands can still cause issues on SAN backups, as is the case with HP Systems Insight
Manager versions 6.4, 7.0, and 7.1. Insight Manager uses a timeout for Log Sense commands that can
sometimes be exceeded in a normal backup environment. Side effects from this behavior may include an
unresponsive robotic device, poor performance, or a tape controller reboot.
Versions 7.2 and later of the Insight Management agents use Inquiry commands for polling instead of
Log Sense commands. Utilities have also been made available for versions 7.0 and 7.1. The HP
Utility for Disabling Fibre Agent Tape Support can be used to allow these backup jobs to complete without
being overrun with Log Sense commands. This utility disables the Fibre Agent Tape support, which disables
the monitoring of the Fibre Attached Tape Library. Deploying this utility will disable only Fibre Attached
Tape Library monitoring, leaving the monitoring of all other devices and peripherals by the Storage Agents
unaffected. The HP Utility for Disabling Fibre Agent Tape Support is available in SoftPaq SP25792 at the
following URL:
ftp://ftp.compaq.com/pub/softpaq/sp25501-26000/SP25792.EXE

NOTE: In current versions of the HP management agents, there is also an option to disable Fibre Agent
Tape Support:

1. Select Start > Control Panel > HP Management Agents.

2. Click the Storage tab and select the check box to disable Fibre Agent Tape Support.

Recommendations
Be aware of the management applications that run in backup environments as they may issue commands
to the tape and robotic devices for polling or monitoring purposes, and adversely impact backup jobs in
progress. Sometimes these applications or agents can be running on the server as part of the installed
operating system. If these agents are running in the backup environment, and they are not needed, then
disable them. If they must be run, then limit the agent to one or two servers that see the nearline storage device.
Sometimes it is not necessary for the agents to poll nearline devices, as they are already monitored by the
backup application. In most cases, the backup application will remotely monitor the backup/restore
environment. Refer to your management agent software updates for more information on how to manage
nearline device polling.

7 Encryption in an EBS environment
Overview
Data encryption has become extremely important to many businesses, especially those who handle
sensitive information, such as banks, online businesses, payroll departments, and so on. In many instances,
businesses are required by government regulation to use some form of encryption to protect their data.
There are many products available that provide encryption on the local disk, on storage arrays, in the
storage network itself, and at the tape drive during backups. When and how data is encrypted can be
critical to the user's ability to retrieve that data successfully and have it be in a usable state when it is
restored. In addition, the keys used to encrypt the data have to be correctly protected, so that the data can
be decrypted and used when needed.

EBS support and encryption


Generally, encryption implementation falls outside the interoperability support that EBS typically addresses:
whether the data is encrypted or not matters little to the ability of servers to have their data
backed up in an EBS environment. However, key management is more critical, because the ability for
backups and restores to occur successfully depends entirely on how the necessary keys are retrieved and
used in an encrypted-data environment. In addition, encrypted data has an effect on certain supported
features, such as compression and data deduplication, which must be considered.

FIPS compliance
The Federal Information Processing Standard (FIPS) 140-2 standards are the U.S. government standards for
the protection of cryptographic modules. Cryptographic modules are the elements of an encryption product
in which the algorithms that encrypt the data are maintained. There are currently four levels of security
defined in the standard; the higher the level, the more stringent the security.
EBS recommends that encryption products be certified at no lower than FIPS 140-2 Level 2, which requires
tamper evidence for the cryptographic module (physical tampering must leave visible evidence) and role-based authentication of users.
Role-based authentication allows for differing levels of security for user accounts and can also include
quorum-based security, in which a set number of users are required to validate the execution of a particular
task.

Compression and encryption


It is always good practice when backing up data to make sure that any data compression occurs before
the data is encrypted. Properly encrypted data contains no redundant character strings, which are the natural
target of a compression algorithm; therefore, the compression ratio of an encrypted file is (or should be) close
to 1:1, making compression worthless. When writing to an LTO4 tape drive, for example, if both
compression and encryption are enabled, the drive always compresses first, then encrypts. However, if
encryption happens earlier in the data path than the tape drive, compression should be disabled on the
tape drive, whether it is a virtual drive from a VLS or a physical tape drive.
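The effect is easy to demonstrate with a short script. The following minimal sketch uses Python's zlib module and treats random bytes as a stand-in for well-encrypted output; the repetitive sample data and compression level are illustrative assumptions, not values from this guide.

import os
import zlib

# Highly redundant "plaintext", similar to what a backup stream often contains.
plaintext = b"customer_record;" * 65536          # about 1 MB of repetitive data

# Random bytes stand in for properly encrypted data: no redundancy remains.
ciphertext_like = os.urandom(len(plaintext))

for label, data in (("plaintext", plaintext), ("encrypted-like", ciphertext_like)):
    compressed = zlib.compress(data, 6)
    print("%-15s %9d -> %9d bytes (ratio %.2f:1)"
          % (label, len(data), len(compressed), len(data) / len(compressed)))

The repetitive plaintext compresses by orders of magnitude, while the encrypted-like data actually grows slightly, which is why compression must happen before encryption in the data path.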

Deduplication and encryption


Encryption presents a similar issue for deduplication as it does for compression. Deduplication
seeks to identify repeated patterns (whether at the block level or higher) so that only changed data is stored,
and encryption removes those repeated patterns. Therefore, it is recommended that deduplication not be deployed in a
configuration in which encrypted data will be backed up, unless that encrypted data is a small subset of
the total backup data.
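A simple chunk-hashing sketch illustrates the point. It assumes a deduplicating store that identifies duplicate 64 KB chunks by SHA-256 hash; the chunk size and the use of random bytes as a stand-in for data encrypted with fresh keys are illustrative assumptions.

import hashlib
import os

CHUNK = 64 * 1024

def unique_chunks(data):
    # Number of distinct chunks a simple fixed-block deduplicator would store.
    return len({hashlib.sha256(data[i:i + CHUNK]).digest()
                for i in range(0, len(data), CHUNK)})

base = os.urandom(8 * CHUNK)
gen1 = base                                                        # first backup generation
gen2 = base[:3 * CHUNK] + os.urandom(CHUNK) + base[4 * CHUNK:]     # second generation, one changed chunk

print("unique chunks, clear data     :", unique_chunks(gen1 + gen2))
# Random bytes stand in for the same generations encrypted with per-session keys: nothing repeats.
print("unique chunks, encrypted data :", unique_chunks(os.urandom(len(gen1)) + os.urandom(len(gen2))))

In the clear, only the changed chunk adds new data between generations; after encryption every chunk is unique and the deduplication ratio collapses to 1:1.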

8 High availability
Typical high availability and cluster configurations for online storage include multiple paths to the disk
arrays and automatic failover. This provides redundant I/O paths and maintains system uptime if a path
fails (see Figure 52).

Figure 52 Typical multi-path configuration for disk

1 Server    2 SAN switch    3 Disk array

Tape devices can also be made visible to the operating system through multiple paths, but no automated
multi-path failover capability is currently available. Only a single path at a time can be used between any
host and tape device or tape library controller, including LTO4 drives, regardless of the number of FC ports
(see Figure 53).

Figure 53 No automated multi-path failover capability for tape

1 Server    2 SAN switches    3 HP StorageWorks tape library with embedded router or LTO4 tape drives connected directly to the SAN

Multi-path disk array configurations typically use a special driver on the host. This driver recognizes two
views of the same disk array, presents only one device image to the host, and provides an automatic
failover capability between them. This type of functionality does not exist for tape and would need to be
developed either in a device driver or within the backup application for a redundant-path capability to be
present.
Error recovery provides an additional hurdle. When a disk I/O such as a read or write to a logical block
fails, the host can simply retry the I/O. In this case, the device driver switches over to a different path when
the first attempt fails and the whole process is transparent to the application writing or reading the data.

Tape backup is not as simple. Because the tape drive is a stateful device (tracks I/O status), additional
information is needed on how the I/O failed in order to recover properly. For example, if a write command
sent to a tape drive fails, one possible condition might be that the tape device did not receive the
command. If that were the case, the host would simply resend the command. Another possibility might be
that the tape device received the command and executed it, but then the tape device failed to send the
status back to the host. In this case, the tape would need to be rewound to a specific point prior to the
failure, and the sequence of writes restarted from there. Similar issues exist when sending positioning
commands to a tape drive such as forward or rewind.
Multiple paths to a tape or robotic device are only supported when custom device mapping is enabled
within ETLA, requiring manual creation of the LUN map within the library controller. Users must take care to
ensure that OS device drivers and backup applications only use one path at a time to avoid device
ghosting and device access violations. HP cannot guarantee proper backup or restore operations when
multiple hosts can access a tape device or robot simultaneously without the management of a backup
application that controls and manages device access. Consult with your application vendor for support
details regarding multiple instances of tape drives and/or robotics and any specific failover procedures.

NOTE: Detailed instructions on how to manually configure LUN maps within the library controllers are in
the Partitioning in an EBS Environment v.2 implementation guide, located under EBS Whitepapers &
Implementation Guides at http://www.hp.com/go/ebs.

Figure 54 Balancing tape

1 Server    2 SAN switch    3 HP StorageWorks tape library with embedded Fibre Channel interface controller    4 Disk array

It is possible to balance I/O across multiple paths or SANs to your tape library, provided there is only one
path to any single device.

Figure 55 Connecting one library across multiple SANs

1 Server    2 SAN switch    3 HP StorageWorks tape library with embedded Fibre Channel interface controller

NOTE: When connecting one tape library to multiple SANs, use controllers with multiple Fibre Channel
host ports, such as the NSR M2402 or the e2400-160. Connecting a tape drive or robotic device to more
than one controller is not supported.

Disk array multi-path (MPIO)
EBS supports a variety of disk array multi-path products. The table below outlines the currently
supported products, grouped by operating system.

Table 49 MPIO products by operating system

Windows                    Windows MPIO
HP-UX                      Secure Path (11i v1, 11i v2), PV-LINKS (11i v1, 11i v2), Native Multipath (11i v3)
Red Hat and SUSE Linux     QLogic Native Multi-path, Emulex MultiPulse, HP Device Mapper
Solaris                    Sun MPxIO, Veritas DMP
AIX                        IBM MPIO
Tru64                      Tru64 Native Multi-Path
NetWare                    NetWare MPIO

The sharing of disk and tape on the same pair of HBAs in a multi-path environment is fully supported with
the following exceptions/caveats:
• In Windows 2003 MSCS cluster environments, sharing of disk and tape on the same pair of HBAs
requires StorPort miniport drivers.
• For AIX environments running Data Protector, tape devices must be isolated to their own HBA port.
• Regardless of operating system environment, multi-path to tape is not supported (only one of the two
configured HBAs can access the tape devices).
Specific ISV application support is detailed in the HP Enterprise Backup Solutions Compatibility Matrix.
Any special support considerations will be footnoted.

Clustering
A Fibre Channel cluster in the EBS with data protection software consists of two servers and storage that
can be distributed between two or more distant computer rooms interconnected by Fibre Channel links. A
Fibre Channel cluster topology can be considered an extension of a local SCSI cluster in which the
parallel SCSI shared buses are replaced by extended serial SCSI shared buses using Fibre Channel
switches.

Highlights
• Communications between computers and storage units use the new high-speed Fibre Channel standard
to carry SCSI commands and data over fiber optic links.
• Storage cabinets contain Fibre Channel disks and Fibre Channel components to connect to the SAN.

Benefits
• Computers and storage can be located in different rooms, at distances of up to 10 km.
• High-availability — Full hardware redundancy to ensure that there is no single point of failure.
• Electrical insulation — The cluster can be split between two electrically independent areas.
Backup for cluster configurations may be deployed using either separate switches and HBAs or common
switches and HBAs. However, these configurations do not provide a failover path for tape or tape libraries.
To use separate switches, the configuration requires installing an additional HBA in each server, and a
separate switch, as shown in the following diagram. Again, this option provides better performance for
applications with large storage and/or short backup window requirements.

Figure 56 Cluster configuration with separate switches for disk and tape

1 Server    2 SAN switches    3 Disk array    4 HP StorageWorks tape library with embedded router

In addition, configurations may be deployed using a common HBA for disk and tape. In these
configurations, multiple HBAs and switches are used to provide failover and redundancy for the disk
subsystem. One of the HBAs and switches are shared for tape access. The following diagram provides an
example.

Figure 57 Cluster configuration with a common HBA for disk and tape

1 Server    2 SAN switches    3 Disk array    4 HP StorageWorks tape library with embedded router

NOTE: For Microsoft Windows 2000 and Windows 2003 using the SCSIport driver, Microsoft does not
recommend the sharing of disk and tape devices on the same Fibre Channel host bus adapter; however,
HP has tested and certified the sharing of disk and tape, in a Microsoft Cluster Server, with their supported
HBAs. See the HP Enterprise Backup Solutions Compatibility Matrix for a listing of Supported HBAs with
Windows 2000/2003. For Windows 2003 servers using the StorPort driver, the sharing of disk and tape
is supported by Microsoft and HP.

EBS Support of failover versus non-failover applications


EBS can be set up to support clustering with application failover, or no application failover. Failover of the
EBS data protection application can be supported when the application is cluster aware and has been
configured properly to use nearline devices. In the case of application failover, a backup job can be
restarted or will resume using checkpoint restart after the failover occurs. Without application failover,
the job fails, and the backup runs again at the next scheduled backup or through manual intervention.

EBS clustering with failover


Currently there are a limited number of applications that support application clustering with failover. In this
case the cluster alias is used as the backup server, and has some recovery hooks built into the backup
process.

These applications include but are not limited to:
• EBS with Symantec NetBackup on HP Tru64 5.1a Clusters
• EBS with Symantec NetBackup on Microsoft Cluster Server
• EBS with Symantec Backup Exec Windows Server on Microsoft Cluster Server
• EBS with Legato NetWorker on HP Tru64 5.1a Clusters

EBS clustering with no failover support


EBS in non-failover environments is set up to exist as an independent backup server on each node of the
cluster, and the cluster alias is not used in the backup application.
Refer to the HP Enterprise Backup Solutions Compatibility Matrix for a list of supported cluster
environments. More detail will be provided in future EBS implementation guides.
Refer to the cluster application notes, where available, from each of the backup application and backup
software vendors.

HP-UX MC/ServiceGuard
Backup for MC/ServiceGuard configurations may be deployed using standard backup software, such as
HP Data Protector or Symantec NetBackup without installing and configuring Advanced Tape Services
(ATS). In this case, the backup software instead of ATS provides all backup functionality including sharing
and failover. This is the only option for MC/SG configurations participating in a multi-cluster or
heterogeneous SAN environment.

Figure 58 Backup for a 4-node HP-UX MC/ServiceGuard cluster

1 HP-UX host    2 SAN switch    3 HP StorageWorks tape library with embedded FC interface controllers

9 Performance: Finding bottlenecks
Overview
All backup and restore environments have bottlenecks that determine the maximum performance.
The objective of this chapter is to provide processes to help identify those bottlenecks.
Two common backup and restore performance questions are:
• Why are backups so slow?
• Why are restores so slow?
The following sections help to identify and resolve component related performance problems through
processes and examples.

Process for identifying bottlenecks


In order to get the maximum performance of a backup and restore system, it is important to understand that
many elements influence backup and restore throughput.
A process is needed that breaks down a complex SAN infrastructure into simple parts that can then be
analyzed, measured, and compared. The results can then be used to plan a backup strategy that
maximizes performance.

Figure 59 Example topology

The following steps are used to evaluate the throughput of a complex SAN infrastructure:

1. Tape subsystem’s WRITE performance.


2. Tape subsystem’s READ performance.
3. Disk subsystem’s WRITE performance.
4. Disk subsystem’s READ performance.
5. Backup and restore application’s effect on disk and tape performance.
The next section provides details of each of the steps in the example SAN test environment. Analyzing the
results provides information for identifying bottlenecks in the SAN on a component level.
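Conceptually, the achievable backup or restore rate is bounded by the slowest component in the chain. The trivial sketch below shows how the individual results from steps 1 through 5 can be compared; the throughput figures are placeholders, not measurements from this guide.

# Placeholder component results in MB/s; substitute your own measurements from steps 1-5.
tape_write, tape_read = 140.0, 95.0
disk_write, disk_read = 180.0, 120.0
app_effective = 105.0     # throughput observed through the backup application

backup_bound = min(disk_read, tape_write, app_effective)
restore_bound = min(tape_read, disk_write, app_effective)
print("expected backup ceiling : %.0f MB/s" % backup_bound)
print("expected restore ceiling: %.0f MB/s" % restore_bound)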

Test environment
A test environment was created that included the latest versions of Red Hat Linux, Windows Server 2003,
and HP-UX 11.23. Figure 60 shows the topology layout of the test environment. A 4Gb infrastructure was
used when available.

Figure 60 Test environment

Hardware
Table 50 shows the operating systems and the specific HBAs used in each server.
Table 50 Hardware

Model Operating system HBA (speed)


HP ProLiant DL380 G3 Windows Server 2003 (32bit) LP9002/LP1050 (2Gb/s)

HP ProLiant DL380 G4 Windows Server 2003 (32bit) QLE2462 (4Gb/s)

HP ProLiant DL380 G3 Red Hat Enterprise Linux 4.0 (32bit) FCA2214DC (2Gb/s)

HP ProLiant DL380 G4 Red Hat Enterprise Linux 4.0 (64bit) QLE2462 (4Gb/s)

HP Integrity rx2600 HP-UX 11.23 (64bit) AB379A (4Gb/s)

Enterprise storage
Table 51 shows the firmware level tested and the host Fibre Channel ports for each storage device.
Table 51 Enterprise storage

Model                                            Firmware level        FC ports

HP StorageWorks XP12000 Disk array               50-04-31              2 at 4Gb/s, 2 at 2Gb/s
HP StorageWorks Enterprise Virtual Array 5000    3028                  4 at 2Gb/s
HP StorageWorks ESL E-Series Tape Library        Library: 4.10         2 at 4Gb/s
with four HP Ultrium 960 (LTO3) tape drives      LTO3 drives: L26W

Performance tools
HP provides free diagnostic tools for testing system performance. These tools isolate major component
bottlenecks and are available at the following websites:
• http://www.hp.com/support/tapetools
• http://www.hp.com/support/pat

Table 52 Tape and disk performance tools

Tool name: HP Library and Tape Tools (L&TT)
Description: L&TT includes all the individual tools listed below and can be used alone to perform all of
these tests. The device performance function of this tool measures the WRITE and READ performance of a
server to tape drive independent of disk, and to disk independent of tape. Details on L&TT are available in
chapter 10, ”Library and Tape Tools” on page 173.
Supported operating systems: HP-UX, Windows, Linux, Tru64, Novell, and OpenVMS

Tool name: HPTapePerf
Description: This tool measures WRITE and READ performance of a server to a tape drive directly from
memory, independent of disk.
Supported operating systems: HP-UX, Windows, Linux, and Solaris

Tool name: HPCreateData
Description: This tool measures WRITE performance of a server to disk directly from memory, independent
of the tape drive. HPCreateData also creates definable data of known size, structure, and compressibility
to enable backups and restores to be easily benchmarked using a consistent set of data.
Supported operating systems: HP-UX, Windows, Linux, and Solaris

Tool name: HPReadData
Description: This tool measures the READ performance of a server from disk, independent of the tape drive.
Supported operating systems: HP-UX, Windows, Linux, and Solaris

Table 53 Example datasets created with HPCreateData

Dataset Array/Size File type/Compression RAIDset


Dataset 1 EVA / 30GB LUN / 10GB data Mixed file size 4K – 128MB / 1:1 RAID 1

Dataset 2 XP / 30GB LUN / 10GB data Mixed file size 4K – 128MB / 1:1 RAID 5 (3+1)

Dataset 3 EVA / 30GB LUN / 10GB data Mixed file size 4K – 128MB / 1:1 RAID 5

Dataset 4 XP / 30GB LUN / 10GB data Mixed file size 4K – 128MB / 1:1 RAID 5 (7+1)

Dataset 5 XP and EVA / 30GB LUN / 10GB data File size 2GB or larger / 2:1 RAID 5

1. Evaluate the tape subsystem’s WRITE performance
L&TT or HPTapePerf can be used for this test. These tools WRITE data directly from memory to a tape device
and reveal the true performance capability of the tape device independent of a data source such as disk.
L&TT or HPTapePerf can specify parameters such as block size, compression ratio, and how much data to
send to tape. Different block sizes can be used to determine the optimal settings for the tape device and to
determine the maximum attainable tape device WRITE throughput from the server.
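If the HP tools are not at hand, a rough memory-to-tape write test can be scripted directly. The sketch below assumes a Linux host with the drive at /dev/nst0 (a non-rewinding st device in variable block mode); the device path, block sizes, and block count are assumptions, and reusing a single random buffer only approximates incompressible data, so treat the result as indicative rather than exact.

import os
import time

TAPE_DEVICE = "/dev/nst0"                  # assumed non-rewinding tape device path
BLOCK_SIZES = (32768, 65536, 131072, 262144)
BLOCKS = 4096                              # 128 MB to 1 GB of data, depending on block size

for block_size in BLOCK_SIZES:
    buffer = os.urandom(block_size)        # reused buffer; only roughly incompressible
    fd = os.open(TAPE_DEVICE, os.O_WRONLY)
    start = time.time()
    for _ in range(BLOCKS):
        os.write(fd, buffer)               # in variable block mode, each write becomes one tape block
    os.close(fd)                           # closing flushes buffered data to the drive
    elapsed = time.time() - start
    rate = block_size * BLOCKS / elapsed / (1024 * 1024)
    print("block size %6d bytes: %6.1f MB/s" % (block_size, rate))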

Tape WRITE performance observations in the example SAN test environment.

• Linux performed poorest when using a block size of 64Kb. (See Figure 61.)
• Different block sizes had a minimal effect on HP-UX and Windows. (See Figure 61.)
• The Windows Fibre Channel host bus adapter (HBA) configuration can impact performance. (See note
below.)

NOTE: During tests on a Windows server with an Emulex HBA, it was determined that the default
Windows registry value MaximumSGList for Emulex HBAs was set too low. The
MaximumSGList value controls the maximum data transfer length, which was 128 KB.
Increasing the value of MaximumSGList from 21 (hex) to 81 (hex) resulted in a data transfer
length of 512 KB and much better performance.

CAUTION: Editing the Windows Registry incorrectly can cause serious, system-wide problems.
Microsoft cannot guarantee that any problems resulting from editing the Registry can be solved.
Back up the registry before editing.
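If the change needs to be scripted, a minimal sketch using Python's winreg module is shown below. The Emulex miniport service name varies with the driver version, so the name used here is only a placeholder; verify the correct key under HKLM\SYSTEM\CurrentControlSet\Services on the server, and back up the registry first, as the caution above advises.

import winreg

# Placeholder: substitute the service name of the Emulex miniport driver installed on this server.
EMULEX_SERVICE = "lpxnds"
KEY_PATH = ("SYSTEM\\CurrentControlSet\\Services\\" + EMULEX_SERVICE + "\\Parameters\\Device")

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
    # 0x81 scatter/gather entries raise the maximum transfer length to 512 KB (see the note above).
    winreg.SetValueEx(key, "MaximumSGList", 0, winreg.REG_DWORD, 0x81)
print("MaximumSGList set to 0x81; reboot the server for the change to take effect")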

Figure 61 Tape WRITE throughput (MB/s with 2:1 compressible data at block sizes of 32768, 65536, and
131072 bytes, for HP-UX 11.23, Red Hat EL 4, and Windows 2003)

2. Evaluate the tape subsystem’s READ performance


L&TT or HPTapePerf can be used for this test. These tools READ data from a tape device directly to memory
and reveal the true performance capability of the tape device independent of a data target such as disk.

Either L&TT or HPTapePerf can specify a parameter such as block size. Different block sizes can be used to
determine the optimal settings for the tape device and to determine the maximum tape device READ
throughput from the server.
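As with the write test, a rough read check can be scripted if the HP tools are unavailable. This sketch assumes a Linux host, the tape written by the previous test rewound to its starting point, and a read size at least as large as the block size used when writing; the device path is an assumption.

import os
import time

TAPE_DEVICE = "/dev/nst0"        # assumed non-rewinding tape device, positioned at the test data
READ_SIZE = 262144               # must be at least the block size used when writing

fd = os.open(TAPE_DEVICE, os.O_RDONLY)
start = time.time()
total = 0
while True:
    data = os.read(fd, READ_SIZE)
    if not data:                 # zero-length read at a filemark / end of data
        break
    total += len(data)
elapsed = time.time() - start
os.close(fd)
print("read %.0f MB in %.1f s -> %.1f MB/s" % (total / 2**20, elapsed, total / 2**20 / elapsed))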
Tape READ performance observations in the example SAN test environment

• Confirmed poor tape performance when using 64Kb block sizes on Linux. (See Figure 62.)
• Because tape READs are not buffered the way WRITEs are, they performed 30-35% slower than tape WRITEs. (See Figure 63.)
Figure 62 Tape READ throughput

Figure 63 WRITE vs. READ throughput 2:1 Compressed Data

3. Evaluate the disk subsystem’s WRITE performance
L&TT or HPCreateData can be used for this test. These tools WRITE data directly from memory to disk and
reveal the true performance capability of disk independent of a data source such as tape.
L&TT or HPCreateData can specify parameters such as file size, number of files, compression ratio, and
how much data to send to disk. Different file sizes and the number of files can be used to determine the
impact they have on disk performance.

NOTE: The datasets created represent a cross section of user data, with both incompressible (1:1) and
compressible (2:1) data, different RAID levels on the disk arrays, and different file sizes. (See
Table 53 on page 163.)
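Where HPCreateData is not available, a simple stand-in can generate a test dataset and time the disk writes. The directory, file count, and file size below are assumptions to adjust for the LUN and file-size mix under test; using os.urandom gives roughly 1:1 (incompressible) data.

import os
import time

TARGET_DIR = "/mnt/test_lun/dataset1"    # assumed mount point of the LUN under test
FILE_COUNT = 2000
FILE_SIZE = 1024 * 1024                  # 1 MB files; vary to test small versus large files

os.makedirs(TARGET_DIR, exist_ok=True)
payload = os.urandom(FILE_SIZE)          # roughly incompressible (1:1) payload, reused for speed

start = time.time()
for i in range(FILE_COUNT):
    path = os.path.join(TARGET_DIR, "file_%06d.bin" % i)
    with open(path, "wb") as f:
        f.write(payload)
        f.flush()
        os.fsync(f.fileno())             # force data to disk rather than the page cache
elapsed = time.time() - start

total_mb = FILE_COUNT * FILE_SIZE / (1024 * 1024)
print("wrote %d files (%.0f MB) in %.1f s -> %.1f MB/s"
      % (FILE_COUNT, total_mb, elapsed, total_mb / elapsed))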

Disk WRITE performance observations in the example SAN test environment

• File size and file count have a major impact on performance. (See Figure 64.)
Figure 64 shows how file systems with hundreds of thousands of small files spend an extraordinary
amount of time opening and closing the files while writing or reading from disk. The result is very poor
backup and restore performance with file sizes less than 512K. Windows has a much higher file system
overhead compared to HP-UX and Linux.
• Striped (RAID5) versus mirrored (RAID1) data on a disk array has mixed results. (See Figure 65.)
The impact of striped data versus mirrored data varied depending on the type of operation. Striped
data tended to perform better for disk reads, which resulted in better backup performance; however,
mirrored data tended to perform better for disk writes, which resulted in better restore performance.
• XP Striped set RAID5 3+1P versus RAID5 7+1P showed 7+1P performed significantly better. (See
Figure 66.)
Figure 64 EVA RAID5 Write Throughput

Figure 65 HP-UX EVA Disk Throughput

Figure 66 HP-UX XP Disk Throughput

4. Evaluate the disk subsystem’s READ performance
L&TT or HPReadData can be used for this test. These tools READ data from disk directly to memory to get
the actual read transfer rate.
L&TT or HPReadData specifies the directory or file to read. The file size and number of files are determined
by the data that resides on the specified disk directory. Reading datasets with different file sizes and file
counts help to determine the impact that file size and file count have on performance.

NOTE: The datasets used in this test were created in ”3. Evaluate the disk subsystem’s WRITE
performance” on page 166, and are now being read from disk. (See Table 53 on page 163.)
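A matching stand-in for HPReadData walks a directory and times the reads. The directory below assumes the dataset created in the previous step; for a fair disk measurement the dataset should be larger than server RAM, or the file system cache should be flushed first, so that reads actually come from disk.

import os
import time

SOURCE_DIR = "/mnt/test_lun/dataset1"    # assumed: dataset created by the WRITE test above
CHUNK = 256 * 1024

start = time.time()
total = 0
for root, _dirs, files in os.walk(SOURCE_DIR):
    for name in files:
        with open(os.path.join(root, name), "rb") as f:
            while True:
                data = f.read(CHUNK)
                if not data:
                    break
                total += len(data)
elapsed = time.time() - start
print("read %.0f MB in %.1f s -> %.1f MB/s" % (total / 2**20, elapsed, total / 2**20 / elapsed))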

Disk READ performance observations in the example SAN test environment


• File size and file count have a significant impact on disk READ performance. (See Figure 67.) Systems
with hundreds of thousands of small files spend an extraordinary amount of time writing to or reading
from disk.
• As shown in the previous section, striped (RAID5) data tended to perform better for disk reads. (See
Figure 65 on page 167.)
• Also shown in the previous section, RAID5 7+1P performed significantly better than RAID5 3+1P for
disk READS. (See Figure 66 on page 167.)

Figure 67 EVA RAID5 READ Throughput

5. Evaluate the backup and restore application’s effect on disk and tape
performance
When the independent disk and tape tests are complete, use an application such as HP OpenView
Storage Data Protector, to run several backups and restores using the same data that was created by
HPCreateData in ”3. Evaluate the disk subsystem’s WRITE performance” on page 166. This will help
determine the impact of the data protection application.
Most data protection applications include parameters that can be modified to affect throughput
performance. Common parameters include block size, segment size, and number of data buffers. Trying
different combinations of these parameters helps to verify the impact they have on performance of backups
and restores.

Backup application performance observations in the example SAN test environment


• Using the datasets created with L&TT, the overhead introduced by the application caused a 10-20%
reduction in backup throughput. Compare the color-coded bullets to the matching colored bars in
Figure 68 (data is shown for backups only).
• In some cases Data Protector's internal performance measurement was significantly less than actual
throughput to the tape drive. To get accurate throughput measurement to the tape drive it may be
necessary to measure port speeds on the SAN switch.
• The data protection application block size and number of data buffer parameters have varying results.
Due to the wide range of factors that can impact backup and restore performance, finding the optimal
block size and number of data buffers is dependent on the environment. Different values for these
parameters must be tested in each data protection environment to find the optimal settings. A block size
of 256 KB is recommended for LTO3 tape devices.
Figure 68 Data protector throughput (Mixed file sizes)

Summary
Once the bottlenecks of a data protection environment have been located, they need to be addressed to
improve backup and restore performance. The following suggestions are based on the example SAN test
environment:
Suggestions for improving the tape subsystem's performance
• To improve tape performance on a Linux server, avoid block sizes of 64KB when possible.
• To find the optimal block size in a backup environment, perform tape tests using varied block sizes.
• Windows registry MaximumSGList value for an Emulex HBA should be set to 81 (hex).
Suggestions for improving disk performance
• To improve disk performance, avoid large amounts of small files when possible. See the first bullet
under ”Suggestions for improving the data protection application performance” in the next section.
• To improve backup performance, striped data (RAID5) should be used when possible. Large stripe sets
performed better than small ones. Striping data may decrease restore performance because RAID5 parity
must be recalculated in real time as the restored data is written back.

Suggestions for improving the data protection application performance


• To improve backup performance of large amounts of small files, the data protection application may
support RAW device backups. RAW device backups have limitations dependent on the data protection
application. Single file backups and restores, and incremental backups may not be supported.
• To find the optimal block size of the data protection application, perform backup and restore tests using
varied block sizes. A block size of 256K is recommended for LTO3 tape devices.
• To find the optimal buffer size of the data protection application, perform backup and restore tests
using varied buffer sizes.
Other factors to consider
• Server CPU and memory:
• Is the CPU busy during tape writes?
• Is there enough memory to buffer data?
The server may need more memory or CPU power, or perhaps the server is busy with other
applications. Other HP testing revealed the following rules of thumb for server sizing per tape drive.
(See Table 54; a small sizing sketch based on the table follows this list.)

Table 54 Sizing on tape drives

Drive              2:1 speed   Win 32 CPU   Win 64 CPU   PA-RISC CPU   HP-UX Itanium   RAM needed   4 MB/s disks in   6 MB/s disks in
                   (MB/s)      per drive    per drive    per drive     CPU per drive   (MB/drive)   RAID 5 needed     RAID 5 needed

Ultrium 1840       240         3.3 GHz      2.4 GHz      2.4 GHz       1.7 GHz         768          60                40
Ultrium 960/1760   160         2.2 GHz      1.6 GHz      1.4 GHz       1.1 GHz         512          40                27
Ultrium 920        120         1.6 GHz      1.2 GHz      1 GHz         850 MHz         384          30                20
SDLT 600           72          1 GHz        733 MHz      650 MHz       525 MHz         256          18                12
Ultrium 460        60          850 MHz      633 MHz      550 MHz       433 MHz         224          15                10
Ultrium 448        48          700 MHz      500 MHz      433 MHz       350 MHz         192          12                8
SDLT 320           32          475 MHz      350 MHz      300 MHz       250 MHz         128          8                 6
Ultrium 232        32          475 MHz      350 MHz      300 MHz       233 MHz         128          8                 6

• SAN bandwidth:
• Does the SAN have enough bandwidth to move the data?
SAN switches have tools for measuring performance. These tools can be used to ensure that the SAN
has the needed bandwidth.
• Disk subsystem limitations:
Perhaps the disk subsystem is too busy or not configured optimally. See the disk subsystem
documentation for methods of improving performance.
• Multi-streaming:
This chapter did not include tests that send multiple data streams in parallel to single or multiple tape
drives. Multi-streaming can significantly improve backup performance, provided that the server, SAN
infrastructure, tape devices, and disk subsystem can support the streams.
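The rule-of-thumb figures in Table 54 can be turned into a quick sizing check. The sketch below simply scales the per-drive figures by the number of concurrently streaming drives, which assumes roughly linear scaling; treat the output as a starting point, not a guarantee.

# Per-drive figures taken from Table 54 (2:1 compressible data):
# speed MB/s, RAM MB per drive, 4 MB/s disks in RAID 5, 6 MB/s disks in RAID 5
SIZING = {
    "Ultrium 1840":     (240, 768, 60, 40),
    "Ultrium 960/1760": (160, 512, 40, 27),
    "Ultrium 920":      (120, 384, 30, 20),
    "SDLT 600":         (72,  256, 18, 12),
    "Ultrium 460":      (60,  224, 15, 10),
    "Ultrium 448":      (48,  192, 12, 8),
    "SDLT 320":         (32,  128, 8, 6),
    "Ultrium 232":      (32,  128, 8, 6),
}

def size_backup_server(drive, drive_count):
    speed, ram_mb, disks4, disks6 = SIZING[drive]
    print("%d x %s:" % (drive_count, drive))
    print("  target aggregate throughput : %d MB/s" % (speed * drive_count))
    print("  RAM to buffer data          : %d MB" % (ram_mb * drive_count))
    print("  4 MB/s disks in RAID 5      : %d" % (disks4 * drive_count))
    print("  6 MB/s disks in RAID 5      : %d" % (disks6 * drive_count))

size_backup_server("Ultrium 960/1760", 4)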

For more performance information:


See the Getting the most performance from your HP StorageWorks Ultrium 960 tape drive white paper, at:
http://h71028.www7.hp.com/ERC/downloads/5982-9971EN.pdf
See the HP StorageWorks Ultrium 960 tape drive technical white paper, at:
http://h71028.www7.hp.com/ERC/downloads/5983-0148EN.pdf

10 Library and Tape Tools
HP StorageWorks Library and Tape Tools (L&TT) is a free diagnostic tool for HP's tape and magneto-optical
devices.
L&TT provides key features designed for use by both HP storage customers and trained service personnel:
• Intuitive user interface requires no training; easy to install and easy to use.
• Web-based smart firmware downloads, updates and notifications.
• Device assessment tests for all HP supported devices for health assessment including device failure
analysis.
• Performance tests for identifying bottlenecks in the system for both the devices and disk I/O.
• Provides support tickets which are an all-inclusive source of device information for the user and HP
support center.

NOTE: Frequent firmware image updates are released on the Internet. For optimal performance, HP
recommends updating your system periodically with the latest device firmware.

L&TT is available for free download from: http://www.hp.com/support/tapetools


Support for L&TT is available from the website and via the L&TT team at: L&TT_team@hp.com.

Tape drive troubleshooting using L&TT benefits


HP provides free tools for troubleshooting HP StorageWorks tape drives.
These tools help to:
• Determine the condition of the tape drive.
• Identify potential issues.
• Find out how to take corrective action.

NOTE: HP recommends performing diagnostic tests on your tape drive before requesting a replacement.

Tape drive performance questions


This section lists common tape drive performance questions.
• Is the drive connected correctly?
• Is the drive working as expected?
• Is the drive firmware up to date?
• Is the drive performing as expected?
• Is the media in good condition?

NOTE: This chapter does not replace the detailed documentation that is available for L&TT
troubleshooting.

Good maintenance comes first
The health of your tape drive and media is highly influenced by the way they are treated. Good
maintenance helps prevent many typical customer problems.
• Maintenance
• Keep FW updated.
• Use the drive in a clean and controlled environment.
• Clean the drive if prompted.
• Periodic health checks
• Run L&TT drive assessment tests.
• Run L&TT media analysis tests on tapes holding key data.
• Follow good practice
• Properly store media.
• Backup data regularly.
• For the most critical data, create a second backup.
HP storage media is developed and tested with HP drives and libraries to deliver optimum performance in
backup and restore operations and lifelong integrity for archive libraries. HP media and drives are tested
beyond industry standards to achieve optimum performance in busy, real time environments, day in, day
out. For more information, see http://www.hptapemedia.com.

Troubleshooting with L&TT starts here


Is your drive connected correctly?
Your drive is physically connected to the server through the HBA (host bus adapter), cabling (or fabric),
connectors and drive electronics. It also needs software components for the operating system and
application to communicate with the drive. These include the HBA and tape drivers as well as operating
system and application configuration. If any of these parts are not working well, your application may not
be able to see the drive.
If the backup application is not communicating correctly with the drive, then it is important to check all of
the above components.
The L&TT Install Check is available to check the installation of LTO, DLT and SDLT drives (DDS under
development) and all of the above through the following tests:
• HBA type and driver
• Device type and driver
• Correct SCSI configuration
• Data I/O from/to the internal data buffer of the drive
• Write/read of data to/from the tape
• Performance of the drive
• Performance of the host system
• Sufficient cooling
See ”Windows” on page 176 to install L&TT.
See ”Checking installation” on page 176 to run the L&TT install check.
The test will indicate what it has found and may offer appropriate corrective action.
If the test shows that the drive is installed correctly and problems persist, look at the wider system to find the
cause. For instance, the backup application or data source (such as the disk subsystem) may not be
configured correctly. Run the drive assessment test (see ”Checking drive health” on page 177) to determine
how well the drive is working. If the test shows a problem, the tool may resolve it. If not, then contact the
relevant vendor for support such as your backup provider or HP for drive support.

NOTE: It is always a good idea to visually inspect the HBA, cables, and connectors looking for good
seating and compliance with supported equipment.

Is the drive working as expected?


To find out if the drive is working correctly, download L&TT and run the Drive Assessment test.
See ”Windows” on page 176 to install L&TT.
See ”Checking drive health” on page 177 to run the Drive Assessment test.
This test writes to and reads from a tape that is known to be good. It also checks that the firmware is up to
date and examines the logs in the drive, which hold a record of its previous activity and can identify issues
that have occurred previously.
If the test passes OK, then the drive is considered to be working. If there continues to be a problem, then
look at other parts of your system.
If the drive fails the assessment test, it could be due to poor media. Try again with a different tape. If the
tape is good, there is an issue with your hardware. Contact HP support with the results of the tests.

Is the firmware up to date?


It is important to keep the drive’s firmware up to date because latest revisions include improvements to
resolve known field issues.
See ”Checking and updating FW revision” on page 176 to update firmware using L&TT.

Is the drive performing as expected?


If the drive is not performing as stated in the product specs, run these tests:
• Device performance test – measures the transfer rate of the drive, HBA, and cabling.
• Back-up performance test – data source transfer rate from the disks or network for back-up.
• Restore performance test – data restore transfer rate to the disks or across the network for restores.
The tests measure the actual transfer rates and identify drive or system problems. In most cases, it is the
system performance that is the limiting factor. See www.hp.com/support/pat for detailed information on
finding and fixing performance issues.
See ”Windows” on page 176 to install L&TT.
See ”Checking performance” on page 177 to run these tests.

Is the media in good condition?


To determine the health of your media, download L&TT and run the media validation test.
See ”Windows” on page 176 to install L&TT.
See ”Checking media” on page 177 to run the media analysis test.
This test reads all of the data from the tape and measures the error rate. This gives a good indication of
how well written the data is on the tape. The results of the READ are also determined by the condition of
the tape drive. Make sure that you run this test on a properly functioning drive.
Future releases of L&TT will provide age and usage data of the tape, including tape condition warnings.

NOTE: This test does not write to the tape, so the contents of the tape are safe. Setting the tape to
write-protect is an option, but not necessary.

If the test passes, then all of the data is considered to be readable.

When the cartridge is loaded (again, for DDS/DAT), look at the number of RAWs in the tape log section.
This gives an indication of the number of issues a cartridge has. If the test is unable to read the data from
the media and the drive is considered to be in good condition (see ”Is the drive working as expected?”),
then the media is likely the problem.
In this case there are a number of options:
• If you still have the data available on your system, then you will still be able to write that data to
another tape.
• Try it on another drive. It is possible that the drive that wrote the data was also suspect, but it may be
able to read it back itself.
• Contact HP support for more detailed assistance.

Basic L&TT operations


The guides that follow are very brief and will get you started. If you have any problems or questions,
refer to the more detailed information at www.hp.com/support/tapetools.

Windows
Installing L&TT
• Find the latest version of L&TT for your OS at www.hp.com/support/tapetools.
• Run the install package (note that upgrading from version 4.0 or higher automatically uninstalls the
previous version).
• L&TT is ready to run.
Running L&TT
• Run the L&TT executable and follow the device scan messages.
• Select the device and verify that L&TT can locate it.
Checking and updating FW revision
Check if latest firmware is on the drive:
1. Select drive in L&TT.
2. Select the FW button.
3. Select the Local Firmware Files tab.
4. Select Get Files from Web button.
5. Check to ensure that your firmware is up to date (in all local devices indicated).
If not, load the firmware on to your server and upgrade your drive.
6. Use the download button to download the latest firmware to your server.
7. Reselect your drive.
8. Select your drive to be upgraded.
9. Click the Start Update button.
Wait for update to complete. Do not turn off your drive at this point. The drive LEDs will show the
update in progress.
10. When complete, you can reselect your drive and continue.
Checking installation

NOTE: This is only supported on Windows.

1. Run L&TT from the start menu. Start->Programs->HP StorageWorks Library and Tape
Tools->HP L&TT Installation Check. Wait for device selection screen (you will see several
initialization screens as the devices are located).
2. Select the device you want to check and click Start Verification.

3. Select the options for verification (default is a good set). You will need to insert a tape that can be
written to. When the test is complete review the Results and Recommendations windows.
Checking drive health
1. Run L&TT.
2. Select the drive.
3. Select the Test icon.
4. Select the drive.
5. Select the Drive Assessment test from the test group pull-down. This is the default test. Leave the
options as default to run the full test.
6. Click Start Test and follow the instructions. Have a tape ready that you believe to be good and can
write to. A new tape is ideal. When the test has completed the results can be found under the Test
Results tab.
Checking performance
1. Run L&TT.
2. Select the drive.
3. Select the device or system performance icons.
a. Device icon: Select any options and click start. You will need a tape that can be written to for this
test.
b. System icon: Select back-up or restore test, any options, and click start. You do not need a tape for
these tests.

NOTE: The system tests are data safe. The backup test is read-only and the restore test creates new data
files on the system. Data on the system is not overwritten.

NOTE: The disk subsystem performance tests are located under Sys Perf on the main GUI.
The system performance backup pre-test tests disk READs.
The system performance restore pre-test tests disk WRITEs.

Checking media
1. Run L&TT.
2. Select the drive.
3. Select the Test icon.
4. Select the drive.
5. Select Media Analysis test from the test group pull-down. The test defaults to five minutes of data
reading, so use the options if you want to do more than that.
6. Click Start Test and follow the instructions. Have the tape ready that you want to check. Note that this
tape will not be written to – the data is safe (though you can also set it to write protect, if desired).
When the test has completed, the results can be found under the Test Results tab.

Sending a support ticket to HP


Before generating and sending a support ticket, you should already be in contact with HP support.
1. Run L&TT.
2. Select the drive.
3. Select the Support icon.
4. Select the View Support Ticket button. Wait for L&TT to pull the drive logs and generate the support
ticket. The support ticket will be displayed.
5. From the support ticket viewer, select File > Email…

6. Fill out the form provided, using the email address provided by your call agent.
7. Select Send.

HP-UX, Tru64, or Linux


Uninstalling previous versions
Before installing L&TT on Linux, you must first uninstall any previous versions.
To determine if L&TT is already installed, run the following command:
rpm -qa | grep ltt
To remove a previous version of L&TT, run the following command:
rpm -e ltt

Installing the latest version


To install L&TT for HP-UX, Tru64, or Linux:
1. Log in as root.
2. Navigate to the following temporary directory:
cd /tmp
3. Download or copy the L&TT tar file, hp_ltt<xx>.tar (where <xx> is the version number) to
this directory. If you are copying the file from a different location, enter the following (substitute
the directory in which the file currently resides for <directory name>):
cp /<directory name>/hp_ltt<xx>.tar /tmp

NOTE: The Tru64 tar filename uses the letter o in place of zeroes. For L&TT 4.2, the filename is
hp_ltt42.tar.

4. Un-tar the L&TT tar file:


tar -xvf hp_ltt<xx>.tar
5. Run the install script in the /tmp directory:
./install_hpltt

NOTE: For Linux, the L&TT installer verifies that the operating system you are installing on is supported. If
the Linux distribution or release is unsupported, the install script displays a message indicating an
installation failure and lists the supported operating systems.

6. After the software is successfully installed, enter the following commands to remove the /tmp/ltt
directory and its contents:
cd /tmp
rm -rf ltt
rm -rf install_hpltt

NOTE: For more information regarding firmware upgrades, generating a support ticket, checking
performance, and/or using utilities in HP-UX, Tru64, or Linux, see the latest L&TT users guide chapter
concerning Command line functionality at the web page http://www.hp.com/support/, under
documentation.

If you have trouble using L&TT


There are a number of routes to get help:
There is more information available on the L&TT website. See ”More information” below.
HP Support is available. See ”HP technical support” below.

If the above doesn’t work for you, then the L&TT team is able to support you directly via e-mail using the
L&TT_team@hp.com address.

HP technical support
Telephone numbers for worldwide technical support are listed on the HP support website:
http://www.hp.com/support/.
Collect the following information before calling:
• Technical support registration number (if applicable)
• Product serial numbers
• Product model names and numbers
• Applicable error messages
• Operating system type and revision level
• Detailed, specific questions
For continuous quality improvement, calls may be recorded or monitored.
HP strongly recommends that customers sign up online using the Subscriber's choice website:
http://www.hp.com/go/e-updates.
• Subscribing to this service provides you with e-mail updates on the latest product enhancements, newest
versions of drivers, and firmware documentation updates as well as instant access to numerous other
product resources.
After signing up, you can quickly locate your products by selecting Business support and then Storage
under Product Category.

More information
This chapter is very brief and is aimed at providing you with the most useful information about
troubleshooting with L&TT.
There is more detailed information available to you on the hp.com website at the following two areas:
• L&TT specific information. From the L&TT website: www.hp.com/support/tapetools, follow the link to
Technical Support & Documentation. The most comprehensive and easiest to use document is the L&TT
support chapter, which is a Windows help file. You can download this document from that page.
• Product specific troubleshooting. From the hp.com page select Support & Troubleshooting and
then enter your product into the search form.

Index

A cabling and configuration 30


Active Fabric LUNs 108 described 25
adapter, Fibre Channel host bus 98 setting bar code properties 31
audience 9 emulated private loop 103
authorized reseller, HP 11 ESL E-Series tape library
auto assigned maps 95 creating multi-unit libraries with Cross Link Kit 23
auto discovery 94 described 21

B F
backup and recovery of Virtual Machines 143 Fibre Channel
buffered tape writes 95 connecting to switches 96
fabric 103
C HBAs 98
Interface Controller
cables
described 17, 93
described 17
Interface Manager 17, 77
clustering 156
discovery 79
Command View TL 77
troubleshooting 90
components
port configuration 93
listed 17
switched fabric configuration 93
configuration
tape controller, connecting 95
basic storage domain 105
file compression ratio 76
nearline 108
settings, FC controller 93
H
conventions
document 9 HBA
text symbols 10 described 17
installing 98
D performance 98
third-party 99
D2D2T backup 70
help, obtaining 10, 11
data compression ratio 76
host bus adapter
data protection software
PCI Fibre Channel for the Server 98
described 17
host device configuration 94
focus 13
HP
device
authorized reseller 11
connections, recommended 93
storage website 11
Disabling RSM polling
Subscriber’s choice website 10, 179
LTO tape driver 119
technical support 10
SDLT tape driver 119
HP-UX, configuring 110
discovery mode 94
disk-to-disk-to-tape backup 70
I
document
conventions 9 IBM AIX 139
prerequisites 9 indexed maps 95
related documentation 9 installing
drive clusters 21 HBA 98

E K
EBS known issues
described 13 management agents 147
Multi-Protocol Router 97 NAS 123
solution steps 13
support of failover 158 L
EML E-Series tape library LED

link activity 90 setting up NRS 108
link speed 90 topologies 14
link activity LED 90 SCC maps 95
link speed LED 90 SCSI
Linux 127 bus configuration 93
logical unit management 94 SCSI protocol 69
LUN Secure Manager
persistant binding issues 118 behavior 77
LUN maps 94 manual mapping 89
LUNs, Active Fabric 108 mapping rules 79
operation modes 78
M SFPs, described 17
management agents 147 solution
management tools 145 start-up 100
manual discovery 94 Subscriber’s choice, HP 10, 179
MSL5000/6000 Series tape libraries Sun Solaris 132
connecting 32 supported components 13
multi-stack units 35 switch
Multi-Protocol Router 97 described 17
switched fabric 14, 93
N switches 96
NAS devices, configuring 123 Symantec
nearline configuration information 108 Jumbo Patch, installing 141
NetWare 130 NetBackup for DataCenter v3.4, installing 141
NSR symbols in text 10
configuring 108
connecting 95 T
described 93 Tape drive polling 119
limited initiators for single and dual port routers 95 tape drives
setting up in SAN 108 determining backup and restore performance 75
numbering, drive clusters 22 throughput speed 75
Ultrium performance 76
O tape library
operating system described 17
support 14 EML E-Series 25
overview 13 ESL E-Series 21
MSL5000 and MSL6000 32
P Virtual Library System (VLS) 44
technical support, HP 10
PCI Fibre Channel Host Bus Adapter 98
text symbols 10
platform support 14
troubleshooting, Interface Manager 90
point-to-point connections 14
Tru64 UNIX 124
power on sequence 101
prerequisites 9
U
R UNIX, and Ultrium tape drives 76
rack stability, warning 10
V
RAID array storage 100
recovery of Virtual Machines 143 Virtual Library System (VLS)
related documentation 9 cabling, illustrated 48
Retention planning described 44
VLS1000i 71 rack order, illustrated 45, 47
router Virtual Machines
configuring in DAF environment 109 backup 143
recovery 143
S VLS1000i
benefits 69
SAN
D2D2T backup 70
configuring with HP-UX 114
disk-to-disk-to-tape backup 70
management software, described 18

emulation 70 described 103
important concepts 69 optimizing resources 103
iSCSI protocol 69 overlapping 103
RAID 70 security 103
retention planning 71
VLS12000/300
benefits 64
components 65
redundancy 65
system status monitoring 64
VLS6000
features 44
setting bar code 44
VLS6105
cabling 48
rack order 45
VLS6109
cabling 48
rack order 45
VLS6200
disk array rack mounting order 45
VLS6218
cabling 48
VLS6227
cabling 48
VLS6500
disk array rack mounting order 45
VLS6510
cabling 48
VLS6518
cabling 48
VLS6600
cabling 49
disk array rack mounting order 46
VLS6840
cabling 50
rack order 47
VLS6870
cabling 50
rack order 47
VLS9000
benefits 52
components 52
features 52
installing cables 54

W
warning
rack stability 10
websites
Command View TL 91
HP storage 11
HP Subscriber’s choice 10, 179
WORM technology 75

Z
zoning
benefits 103
components 104
